Machine Learning's New Favorite: A Mega Collection of Contrastive Learning Paper Implementations, 60+ Papers Organized by Category, More Comprehensive Than Ever (Part 2)

Summary: Part 2 of a categorized collection of implementations for more than 60 contrastive learning papers, covering audio, video and multimodal learning, NLP, language models, graphs, adversarial learning, recommendation, and image-to-image translation.

2.Audio


The second category is audio, with 1 paper: wav2vec 2.0.


1. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations.


Authors:Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. paper code
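
As a quick illustration of what "contrastive" means here: wav2vec 2.0 masks spans of the latent speech sequence and trains the model to pick the true quantized latent for each masked time step out of distractors sampled from other time steps of the same utterance. The PyTorch sketch below is a minimal version of that kind of objective; the tensor names, shapes, number of negatives, and temperature are illustrative and are not taken from the official fairseq implementation.

```python
import torch
import torch.nn.functional as F

def masked_contrastive_loss(context, targets, num_negatives=10, temperature=0.1):
    """Contrastive loss over masked time steps (wav2vec 2.0-style sketch).

    context: (T, D) context-network outputs at the masked positions.
    targets: (T, D) quantized latents at the same positions (the positives).
    Distractors are drawn from the targets at other time steps.
    """
    T, _ = targets.shape
    # Sample num_negatives distractor indices per step, skipping the positive index.
    neg_idx = torch.randint(0, T - 1, (T, num_negatives))
    pos_idx = torch.arange(T).unsqueeze(1)
    neg_idx = neg_idx + (neg_idx >= pos_idx).long()
    negatives = targets[neg_idx]                                      # (T, K, D)

    # Candidate set per step: the positive followed by the distractors.
    candidates = torch.cat([targets.unsqueeze(1), negatives], dim=1)  # (T, K+1, D)
    logits = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1) / temperature

    # The true latent always sits at index 0 of the candidate set.
    labels = torch.zeros(T, dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for real model outputs.
print(masked_contrastive_loss(torch.randn(50, 256), torch.randn(50, 256)).item())
```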


3.Videos and Multimodal


The third category is video and multimodal learning, mainly covering ICLR2021 and NIPS2020 papers with a few from CVPR2020; implementations of 12 papers are included.


1. Time-Contrastive Networks: Self-Supervised Learning from Video.


Authors: Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine. paper


2. Contrastive Multiview Coding.


Authors:Yonglong Tian, Dilip Krishnan, Phillip Isola. paper code


3. Learning Video Representations using Contrastive Bidirectional Transformer.


Authors:Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid. paper


4. End-to-End Learning of Visual Representations from Uncurated Instructional Videos. CVPR2020.


Authors:Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, Andrew Zisserman. paper code


5. Multi-modal Self-Supervision from Generalized Data Transformations.


Authors:Mandela Patrick, Yuki M. Asano, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, Andrea Vedaldi. paper


6. Support-set bottlenecks for video-text representation learning. ICLR2021.


Authors:Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, João Henriques, Andrea Vedaldi. paper


7. Contrastive Learning of Medical Visual Representations from Paired Images and Text. ICLR2021.


Authors:Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz. paper


8. AVLnet: Learning Audio-Visual Language Representations from Instructional Videos.


Authors:Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James Glass. paper


9. Self-Supervised MultiModal Versatile Networks. NIPS2020.


Authors:Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman. paper


10. Memory-augmented Dense Predictive Coding for Video Representation Learning.


Authors:Tengda Han, Weidi Xie, Andrew Zisserman. paper code


11. Spatiotemporal Contrastive Video Representation Learning.


Authors:Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, Yin Cui. paper code


12. Self-supervised Co-training for Video Representation Learning. NIPS2020.


Authors:Tengda Han, Weidi Xie, Andrew Zisserman. paper
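
Many of the papers in this category (e.g., Contrastive Multiview Coding, the instructional-video papers, and the multimodal versatile networks) rest on the same core: an InfoNCE-style loss that pulls paired video and text (or audio) embeddings together and pushes apart the other pairs in the batch. The sketch below shows that shared core in PyTorch; the symmetric formulation, the projection-free embeddings, and the 0.07 temperature are illustrative choices rather than any single paper's recipe.

```python
import torch
import torch.nn.functional as F

def cross_modal_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE between paired video and text embeddings.

    video_emb, text_emb: (B, D); row i of each tensor describes the same clip.
    Every other row in the batch serves as a negative (in-batch negatives).
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature            # (B, B) similarity matrix
    labels = torch.arange(v.size(0))          # matching pairs lie on the diagonal
    loss_v2t = F.cross_entropy(logits, labels)
    loss_t2v = F.cross_entropy(logits.T, labels)
    return 0.5 * (loss_v2t + loss_t2v)

# Toy usage with random embeddings.
print(cross_modal_nce(torch.randn(8, 512), torch.randn(8, 512)).item())
```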


4.NLP


The fourth category is natural language processing, mainly covering ICLR2021 and NAACL2021 papers; implementations of 14 studies are included.


1. [CALM] Pre-training Text-to-Text Transformers for Concept-centric Common Sense. ICLR2021. Authors:Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren. paper code


2. Residual Energy-Based Models for Text Generation. ICLR2021.


Authors:Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, Marc’Aurelio Ranzato. paper


3. Contrastive Learning with Adversarial Perturbations for Conditional Text Generation. ICLR2021.


Authors:Seanie Lee, Dong Bok Lee, Sung Ju Hwang. paper


4. [CoDA] CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding. ICLR2021.


Authors:Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen. paper


5. [FairFil] FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders. ICLR2021.


Authors:Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, Lawrence Carin. paper


6. Towards Robust and Efficient Contrastive Textual Representation Learning. ICLR2021.


Authors:Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin. paper


7. Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data. ICLR2021.


Authors:Nils Rethmeier, Isabelle Augenstein. paper


8. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. ICLR2021.


Authors:Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. paper


9. Self-Supervised Contrastive Learning for Efficient User Satisfaction Prediction in Conversational Agents. NAACL2021.


Authors:Mohammad Kachuee, Hao Yuan, Young-Bum Kim, Sungjin Lee. paper


10. SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency. NAACL2021.


Authors:Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath R. Selvaraju. paper


11. Supporting Clustering with Contrastive Learning. NAACL2021.


Authors:Dejiao Zhang, Feng Nan, Xiaokai Wei, Shangwen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang. paper


12. Understanding Hard Negatives in Noise Contrastive Estimation. NAACL2021.


Authors:Wenzheng Zhang, Karl Stratos. paper


13. Contextualized and Generalized Sentence Representations by Contrastive Self-Supervised Learning: A Case Study on Discourse Relation Analysis. NAACL2021. Authors:Hirokazu Kiyomaru, Sadao Kurohashi. paper


14. Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach. NAACL2021.


Authors:Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang. paper
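
A recurring theme in the NLP entries above, most explicitly in ANCE and in "Understanding Hard Negatives in Noise Contrastive Estimation", is that the choice of negatives matters as much as the loss itself. The sketch below shows a dual-encoder retrieval loss that scores each query against its positive passage, one mined hard negative, and the other passages in the batch; how the hard negatives are mined (ANCE refreshes them from an approximate-nearest-neighbor index during training) is outside the sketch, and the names, shapes, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def retrieval_nce(query, pos_passage, hard_neg_passage, temperature=0.05):
    """Dense-retrieval contrastive loss with one hard negative per query (sketch).

    query:            (B, D) query embeddings
    pos_passage:      (B, D) embeddings of the relevant passages (positives)
    hard_neg_passage: (B, D) embeddings of mined hard-negative passages
    The other positives in the batch additionally act as in-batch negatives.
    """
    q = F.normalize(query, dim=-1)
    passages = F.normalize(torch.cat([pos_passage, hard_neg_passage], dim=0), dim=-1)
    logits = q @ passages.T / temperature        # (B, 2B)
    labels = torch.arange(q.size(0))             # passage i is the positive for query i
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings.
B, D = 4, 128
print(retrieval_nce(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)).item())
```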


5.Language Contrastive Learning


The fifth category is language models, with 5 papers in this direction.


1. Distributed Representations of Words and Phrases and their Compositionality. NIPS2013.


Authors:Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean. Paper


2. An efficient framework for learning sentence representations.


Authors:Lajanugen Logeswaran, Honglak Lee. Paper


3. XLNet: Generalized Autoregressive Pretraining for Language Understanding.


Authors:Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. Paper


4. A Mutual Information Maximization Perspective of Language Representation Learning.


Authors:Lingpeng Kong, Cyprien de Masson d’Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama. Paper


5. InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training.


Authors:Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou. Paper
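
The first entry in this category, word2vec's skip-gram with negative sampling, is often cited as an early, simple instance of contrastive learning: an observed (center, context) word pair is scored against a handful of words drawn from a noise distribution. Below is a minimal per-pair sketch of that objective; the dimensions and the number of noise samples are arbitrary, and a real implementation batches the computation and draws negatives from a smoothed unigram distribution.

```python
import torch
import torch.nn.functional as F

def sgns_loss(center_vec, context_vec, negative_vecs):
    """Skip-gram with negative sampling (word2vec-style) loss for one pair.

    center_vec:    (D,)   embedding of the center word
    context_vec:   (D,)   embedding of the observed context word (positive)
    negative_vecs: (K, D) embeddings of K words drawn from the noise distribution
    """
    pos = F.logsigmoid(torch.dot(center_vec, context_vec))
    neg = F.logsigmoid(-negative_vecs @ center_vec).sum()
    return -(pos + neg)

# Toy usage with random vectors.
D, K = 100, 5
print(sgns_loss(torch.randn(D), torch.randn(D), torch.randn(K, D)).item())
```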


6.Graph


The sixth category combines graphs with contrastive learning; implementations of 4 studies are included.


1. [GraphCL] Graph Contrastive Learning with Augmentations. NIPS2020.


Authors:Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen. paper


2. Contrastive Multi-View Representation Learning on Graphs. ICML2020.


Authors:Kaveh Hassani, Amir Hosein Khasahmadi. Paper


3. [GCC] GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training. KDD2020.


Authors:Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang. Paper


4. [InfoGraph] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. ICLR2020.


Authors:Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang. Paper
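
GraphCL and the other graph papers above contrast two stochastically augmented views of the same graph (or node). The sketch below shows one such augmentation, node dropping on a dense adjacency matrix, together with an NT-Xent loss over graph-level embeddings; real implementations work on sparse graphs with a GNN encoder, so the dense tensors, the mean-pooling stand-in for an encoder, and the temperature here are purely illustrative.

```python
import torch
import torch.nn.functional as F

def drop_nodes(adj, feat, drop_prob=0.2):
    """Node-dropping augmentation in the spirit of GraphCL (sketch).

    adj:  (N, N) dense adjacency matrix
    feat: (N, D) node features
    Returns the adjacency and features of the subgraph induced by the kept nodes.
    """
    keep = torch.rand(adj.size(0)) > drop_prob
    keep[0] = True                      # make sure at least one node survives
    return adj[keep][:, keep], feat[keep]

def nt_xent(z1, z2, temperature=0.2):
    """NT-Xent between graph-level embeddings of two augmented views.

    z1, z2: (B, D); row i of both tensors encodes two views of the same graph.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

# Toy usage: augment one graph twice; a1/a2 and f1/f2 would normally be fed to a
# GNN encoder, with mean pooling standing in here.
adj = (torch.rand(30, 30) < 0.1).float()
feat = torch.randn(30, 16)
(a1, f1), (a2, f2) = drop_nodes(adj, feat), drop_nodes(adj, feat)
z1, z2 = f1.mean(0, keepdim=True), f2.mean(0, keepdim=True)
print(nt_xent(z1, z2).item())
```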


7.Adversarial Learning


The seventh category combines adversarial training with contrastive learning; so far there is only 1 paper.


1. Contrastive Learning with Adversarial Examples. NIPS2020.


Authors:Chih-Hui Ho, Nuno Vasconcelos. paper
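
The paper above treats adversarial examples as additional, harder views for contrastive training. A minimal FGSM-style sketch of that idea follows: perturb one view in the direction that increases the contrastive loss, then use the perturbed view in the actual training step. The tiny linear stand-in encoder, the epsilon, and the temperature are placeholders and do not reflect the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_view(encoder, x1, x2, epsilon=0.03, temperature=0.2):
    """Generate an FGSM-style adversarial view for contrastive learning (sketch).

    x1, x2: (B, C, H, W) two augmented views of the same batch of images.
    Returns a perturbed copy of x2 that increases the contrastive loss; a real
    training loop would zero the encoder's gradients before its own update.
    """
    x2_adv = x2.clone().requires_grad_(True)
    z1 = F.normalize(encoder(x1), dim=-1)
    z2 = F.normalize(encoder(x2_adv), dim=-1)
    logits = z1 @ z2.T / temperature
    loss = F.cross_entropy(logits, torch.arange(x1.size(0)))
    loss.backward()
    with torch.no_grad():
        x_adv = x2 + epsilon * x2_adv.grad.sign()
    return x_adv.detach()

# Toy usage with a tiny stand-in encoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
x1, x2 = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
print(adversarial_view(encoder, x1, x2).shape)
```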


8.Recommendation


The eighth category combines recommender systems with contrastive learning to address the sparsity of click data or to improve model robustness; there are 3 papers.


1. Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation. AAAI2021.


Authors:Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Lizhen Cui, Xiangliang Zhang. paper code


2. Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation. WWW2021. Authors:Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang. paper code


3. Self-supervised Graph Learning for Recommendation. SIGIR2021.


Authors:Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie. paper code
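
The third entry above (SGL), for example, attaches an auxiliary self-supervised loss computed between two perturbed views of the user-item graph, which is one way this category addresses the click-data sparsity mentioned in the intro. The sketch below shows the simplest such perturbation, edge dropout on a dense interaction matrix; the two views would then be encoded and contrasted with an NT-Xent-style loss like the graph sketch earlier. The dense matrix and the drop rate are illustrative only.

```python
import torch

def edge_dropout(interactions, drop_prob=0.1):
    """Edge dropout on a user-item interaction matrix (SGL-style sketch).

    interactions: (U, I) binary interaction (click/purchase) matrix as floats.
    Randomly removes a fraction of the observed interactions to create a
    perturbed view of the bipartite graph.
    """
    mask = (torch.rand_like(interactions) > drop_prob).float()
    return interactions * mask

# Two independent views of a toy, sparse interaction matrix.
R = (torch.rand(100, 50) < 0.05).float()
view1, view2 = edge_dropout(R), edge_dropout(R)
print(view1.sum().item(), view2.sum().item())
```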


9.Applications


The ninth category is the application of contrastive learning to image-to-image translation, with 1 paper.


1. Contrastive Learning for Unpaired Image-to-Image Translation.


Authors:Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu. paper
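
CUT's patchwise contrastive (PatchNCE) loss makes a patch of the translated image agree with the patch at the same location in the input image, while other patches of the same input act as negatives, so no second image is needed for the contrastive signal. The sketch below shows only that loss shape over pre-extracted patch features; in the actual method the features come from several encoder layers of the generator passed through a small MLP head, none of which is reproduced here, and the temperature is illustrative.

```python
import torch
import torch.nn.functional as F

def patch_nce(feat_output, feat_input, temperature=0.07):
    """PatchNCE-style loss (sketch) over N sampled patch locations.

    feat_output, feat_input: (N, D) patch features of the translated image and
    the input image, where row i of both tensors comes from the same location.
    Off-diagonal pairs (other patches of the same input) act as negatives.
    """
    q = F.normalize(feat_output, dim=-1)
    k = F.normalize(feat_input, dim=-1)
    logits = q @ k.T / temperature          # (N, N); the diagonal holds the positives
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

# Toy usage with random patch features.
print(patch_nce(torch.randn(64, 256), torch.randn(64, 256)).item())
```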

