Paper: Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks
Original paper: Attentional Factorization Machines
⚡Previous articles in this series⚡
[Recommender System Paper Reading Series] (1) -- Amazon.com Recommendations
[Recommender System Paper Reading Series] (2) -- Factorization Machines
[Recommender System Paper Reading Series] (3) -- Matrix Factorization Techniques For Recommender Systems
[Recommender System Paper Reading Series] (4) -- Practical Lessons from Predicting Clicks on Ads at Facebook
[Recommender System Paper Reading Series] (5) -- Neural Collaborative Filtering
[Recommender System Paper Reading Series] (6) -- Field-aware Factorization Machines for CTR Prediction
[Recommender System Paper Reading Series] (7) -- AutoRec: Autoencoders Meet Collaborative Filtering
[Recommender System Paper Reading Series] (8) -- Deep Crossing: Web-Scale Modeling without Manually Crafted Combinatorial Features
[Recommender System Paper Reading Series] (9) -- Product-based Neural Networks for User Response Prediction
[Recommender System Paper Reading Series] (10) -- Wide & Deep Learning for Recommender Systems
[Recommender System Paper Reading Series] (11) -- DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
1. Abstract
FM is a supervised learning approach that enhances linear regression models by incorporating second-order feature interactions. Although effective, FM models all feature interactions with the same weight, while in real-world applications not all interactions are equally useful and predictive. For example, interactions with useless features may introduce noise and degrade model performance. In this work, we improve FM by discriminating the importance of different feature interactions. We propose a novel model named Attentional Factorization Machine (AFM), which learns the importance of each feature interaction from data via an attention network.
2. Introduction
Supervised learning is one of the most fundamental tasks in machine learning. Its goal is to infer a function that predicts the label for a given input: real-valued labels for regression tasks, and categorical labels for classification tasks. It has been widely applied in many domains, including recommender systems, online advertising, and image recognition.
To leverage the interactions between different features, a common solution is to explicitly construct cross features, as in polynomial regression, where a weight is learned for each cross feature. The key problem with this approach is that the weight of a cross feature can only be estimated when that feature combination has actually been observed in the training data.
To address this generalization problem, Factorization Machines (FMs) were proposed in 2010. FM parameterizes the weight of each cross feature as the inner product of the embedding vectors of its constituent features. By learning an embedding vector for each feature, FM can estimate the weight of any cross feature, including combinations never observed in the training data. FM has since been successfully applied in a variety of applications, from recommender systems to natural language processing. Despite this promise, we argue that FM's expressiveness is limited by the fact that every feature interaction contributes to the prediction with the same uniform weight.
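To make the contrast with polynomial regression concrete, here is a minimal NumPy sketch of FM's second-order term (function name and shapes are illustrative, not from the paper's code): the weight of each cross feature x_i * x_j is the inner product of the two features' embeddings, so it can be estimated even for pairs never observed together.

```python
import numpy as np

def fm_second_order(x, V):
    """FM second-order term. x: (n,) feature values; V: (n, k) embedding matrix."""
    n = len(x)
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # Weight of the cross feature x_i * x_j is <v_i, v_j>.
            score += np.dot(V[i], V[j]) * x[i] * x[j]
    return score
```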
In this work, we devise a novel model named AFM, which employs the recently proposed attention mechanism to learn a distinct weight for each feature interaction used in prediction. More importantly, the importance of each feature interaction is learned automatically from data, without any domain knowledge. This greatly enhances the interpretability and transparency of FM.
3. Attentional Factorization Machines
3.1 Model
The input layer and embedding layer are the same as in FM: the input features use a sparse representation, and each non-zero feature is embedded into a dense vector. On top of these, we use a pair-wise interaction layer and an attention-based pooling layer, which are the main contributions of this paper.
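A minimal sketch of the input and embedding layers under assumed shapes: the input is sparse, and only its non-zero features are looked up in the embedding table, each embedding scaled by its feature value as in FM.

```python
import numpy as np

n_features, k = 10000, 16                 # illustrative sizes
rng = np.random.default_rng(0)
V = rng.normal(scale=0.01, size=(n_features, k))   # embedding table

nonzero_ids = [3, 42, 977]                # indices of the non-zero features
x_vals = np.array([1.0, 1.0, 0.5])        # their values
E = V[nonzero_ids] * x_vals[:, None]      # (3, k) dense embedded inputs, e_i = v_i * x_i
```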
3.1.1 Pair-wise Interaction Layer
Inspired by FM, which uses the inner product to model the interaction between every pair of features, we propose a new layer for neural network modeling, the pair-wise interaction layer. It expands the $m$ embedding vectors of the non-zero features into $m(m-1)/2$ interacted vectors, each being the element-wise product of a pair of embeddings:

$$f_{PI}(\mathcal{E}) = \left\{ (\mathbf{v}_i \odot \mathbf{v}_j)\, x_i x_j \right\}_{(i,j)\in\mathcal{R}_x},$$

where $\odot$ denotes the element-wise product and $\mathcal{R}_x$ is the set of pairs of non-zero features. We then use a fully connected layer to project the output to the prediction score:

$$\hat{y} = \mathbf{p}^T \sum_{(i,j)\in\mathcal{R}_x} (\mathbf{v}_i \odot \mathbf{v}_j)\, x_i x_j + b.$$

By fixing $\mathbf{p}$ to $\mathbf{1}$ and $b$ to $0$, this exactly recovers FM; in other words, FM can be seen as applying sum pooling over the vectors produced by the pair-wise interaction layer.
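A minimal sketch of the pair-wise interaction layer, reusing the embedded matrix E from the sketch above: since $e_i = v_i x_i$, the element-wise product $e_i \odot e_j$ equals $(v_i \odot v_j) x_i x_j$.

```python
from itertools import combinations
import numpy as np

def pairwise_interactions(E):
    """E: (m, k) embedded non-zero features -> (m*(m-1)//2, k) interacted vectors."""
    return np.stack([E[i] * E[j]                       # element-wise product per pair
                     for i, j in combinations(range(len(E)), 2)])
```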
3.1.2 Attention-based Pooling Layer
Since the attention mechanism was introduced to neural networks, it has been widely used in many tasks, such as recommendation, information retrieval, and computer vision. The idea is to allow different parts to contribute differently when they are compressed into a single representation. Motivated by the limitation of FM, we apply attention to the feature interactions by performing a weighted sum over the interacted vectors:

$$f_{Att}\big(f_{PI}(\mathcal{E})\big) = \sum_{(i,j)\in\mathcal{R}_x} a_{ij}\, (\mathbf{v}_i \odot \mathbf{v}_j)\, x_i x_j,$$

where $a_{ij}$ is the attention score of the interaction between features $i$ and $j$. The attention scores are parameterized by a one-layer MLP, the attention network:

$$a'_{ij} = \mathbf{h}^T \mathrm{ReLU}\big(\mathbf{W}\,(\mathbf{v}_i \odot \mathbf{v}_j)\, x_i x_j + \mathbf{b}\big), \qquad a_{ij} = \frac{\exp(a'_{ij})}{\sum_{(i,j)\in\mathcal{R}_x} \exp(a'_{ij})},$$

where $\mathbf{W}\in\mathbb{R}^{t\times k}$, $\mathbf{b}\in\mathbb{R}^{t}$, and $\mathbf{h}\in\mathbb{R}^{t}$ are the parameters and $t$, the size of the hidden layer, is called the attention factor.

The output of the attention-based pooling layer is a $k$-dimensional vector, which we then project to the prediction score, giving the overall AFM model:

$$\hat{y}_{AFM}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \mathbf{p}^T \sum_{i=1}^{n}\sum_{j=i+1}^{n} a_{ij}\, (\mathbf{v}_i \odot \mathbf{v}_j)\, x_i x_j.$$
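A minimal sketch of attention-based pooling under the notation above, with assumed parameter shapes W: (t, k), b and h: (t,): each interacted vector is scored by the one-layer MLP, the scores are softmax-normalized over pairs, and the vectors are summed with those weights.

```python
import numpy as np

def attention_pooling(interactions, W, b, h):
    """interactions: (num_pairs, k) -> attention-pooled (k,) vector."""
    hidden = np.maximum(interactions @ W.T + b, 0.0)   # ReLU, shape (num_pairs, t)
    scores = hidden @ h                                # a'_{ij}, shape (num_pairs,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax -> a_{ij}
    return weights @ interactions                      # weighted sum over pairs
```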
3.2 Learning
Since AFM directly enhances FM from the perspective of data modeling, it can likewise be applied to a variety of prediction tasks, including regression, classification, and ranking. Different objective functions should be used to tailor AFM's learning to the task at hand. For regression tasks, where the target $y(\mathbf{x})$ is a real value, a common objective function is the squared loss:

$$L = \sum_{\mathbf{x}\in\mathcal{T}} \big( \hat{y}_{AFM}(\mathbf{x}) - y(\mathbf{x}) \big)^2,$$

where $\mathcal{T}$ denotes the set of training instances.
In this paper, we focus on the regression task and optimize the squared loss. To optimize the objective function, we employ stochastic gradient descent (SGD), a universal solver for neural network models. The key to implementing an SGD algorithm is obtaining the derivative of the prediction model with respect to each parameter. Since most modern deep learning toolkits, such as Theano and TensorFlow, provide automatic differentiation, we omit the details of the derivatives here.
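As a hedged illustration of one SGD step on the squared loss, the sketch below updates only the projection vector p with a hand-derived gradient (and omits the linear terms for brevity); in practice every parameter is updated analogously via the toolkits' automatic differentiation.

```python
import numpy as np

def sgd_step_p(pooled, y, p, w0, lr=0.01):
    """pooled: (k,) attention-pooled vector; y: real-valued target; returns updated p."""
    y_hat = w0 + p @ pooled               # prediction (linear terms omitted)
    grad_p = 2.0 * (y_hat - y) * pooled   # d/dp of (y_hat - y)^2
    return p - lr * grad_p
```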
3.2.1 Preventing Overfitting
Overfitting is a perennial issue in optimizing machine learning models. It has been shown that FM can suffer from overfitting, so regularization is essential to prevent it. Since AFM has a stronger representation ability than FM, it may be even more prone to overfitting the training data. Here we consider two techniques widely used in neural network models to prevent overfitting: dropout and L2 regularization.
The idea of dropout is to randomly drop some neurons (along with their connections) during training. It has been shown to prevent complex co-adaptations of neurons on the training data. Since AFM models all pair-wise interactions between features but not all of them are useful, the neurons of the pair-wise interaction layer may easily co-adapt and overfit; we therefore apply dropout on the pair-wise interaction layer to avoid co-adaptation. Moreover, since dropout is disabled during testing and the whole network is used for prediction, dropout has the side effect of performing model averaging over smaller neural networks, which may improve performance.
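A minimal sketch of (inverted) dropout applied to the pair-wise interaction layer; the keep_prob parameter and row-wise masking are assumptions of this illustration, not the paper's exact implementation.

```python
import numpy as np

def dropout_interactions(interactions, keep_prob, training, rng=None):
    """Drop whole interacted vectors during training; pass through at test time."""
    if not training:
        return interactions                       # dropout disabled for prediction
    rng = rng or np.random.default_rng()
    mask = rng.binomial(1, keep_prob, size=interactions.shape[0])
    return interactions * mask[:, None] / keep_prob   # rescale kept rows
```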
For the attention network, which is a one-layer MLP, we regularize its weight matrix $\mathbf{W}$ to prevent possible overfitting. That is, the actual objective function we optimize is:

$$L = \sum_{\mathbf{x}\in\mathcal{T}} \big( \hat{y}_{AFM}(\mathbf{x}) - y(\mathbf{x}) \big)^2 + \lambda \lVert \mathbf{W} \rVert^2,$$

where $\lambda$ controls the regularization strength. We do not apply dropout on the attention network, because we found that jointly using dropout on both the interaction layer and the attention network leads to stability issues and degrades performance.
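A minimal sketch of the regularized objective above, with lam standing in for $\lambda$:

```python
import numpy as np

def afm_objective(y_hat, y, W, lam):
    """y_hat, y: (batch,) predictions and targets; W: (t, k) attention weight matrix."""
    return np.sum((y_hat - y) ** 2) + lam * np.sum(W ** 2)   # squared loss + L2 on W
```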