Machine Learning's New Favorite: A Comprehensive Collection of Contrastive Learning Paper Implementations, 60+ Papers Organized by Category (Part 2)


2.Audio


The second category is audio, with one paper: wav2vec 2.0.


1. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations.


Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. paper code
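wav2vec 2.0 masks spans of latent speech representations and trains a context network to identify the true quantized latent for each masked time step among sampled distractors, using an InfoNCE-style contrastive loss over cosine similarities. A minimal numpy sketch of that per-timestep objective (the function name and toy setup are ours, not from the paper's released code):

```python
import numpy as np

def wav2vec2_contrastive_loss(c, q_pos, q_negs, temperature=0.1):
    """Contrastive step for one masked time step: the context vector c must
    pick out the true quantized latent q_pos among distractor latents q_negs."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(c, q_pos)] + [cos(c, q) for q in q_negs]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -float(np.log(probs[0]))             # true latent sits at index 0

rng = np.random.default_rng(0)
c = rng.normal(size=16)
distractors = [rng.normal(size=16) for _ in range(5)]
loss_easy = wav2vec2_contrastive_loss(c, c, distractors)   # positive aligned with c
loss_hard = wav2vec2_contrastive_loss(c, -c, [c] * 5)      # positive anti-aligned
```

When the positive latent agrees with the context vector the loss is small; when a distractor matches the context better than the positive, the loss grows, which is exactly the pressure that shapes the quantized codebook.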


3.Videos and Multimodal


The third category is videos and multimodal learning, consisting mainly of ICLR2021 and NIPS2020 papers plus a few from CVPR2020, with implementations for 12 papers.


1. Time-Contrastive Networks: Self-Supervised Learning from Video.


Authors: Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine. paper


2. Contrastive Multiview Coding.


Authors:Yonglong Tian, Dilip Krishnan, Phillip Isola. paper code


3. Learning Video Representations using Contrastive Bidirectional Transformer.


Authors:Chen Sun, Fabien Baradel, Kevin Murphy, Cordelia Schmid. paper


4. End-to-End Learning of Visual Representations from Uncurated Instructional Videos.CVPR2020.


Authors:Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, Andrew Zisserman. paper code


5. Multi-modal Self-Supervision from Generalized Data Transformations.


Authors:Mandela Patrick, Yuki M. Asano, Polina Kuznetsova, Ruth Fong, João F. Henriques, Geoffrey Zweig, Andrea Vedaldi. paper


6. Support-set bottlenecks for video-text representation learning. ICLR2021.


Authors:Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, João Henriques, Andrea Vedaldi. paper


7. Contrastive Learning of Medical Visual Representations from Paired Images and Text. ICLR2021.


Authors:Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz. paper


8. AVLnet: Learning Audio-Visual Language Representations from Instructional Videos.


Authors:Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James Glass. paper


9. Self-Supervised MultiModal Versatile Networks. NIPS2020.


Authors:Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman. paper


10. Memory-augmented Dense Predictive Coding for Video Representation Learning.


Authors:Tengda Han, Weidi Xie, Andrew Zisserman. paper code


11. Spatiotemporal Contrastive Video Representation Learning.


Authors:Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, Yin Cui. paper code


12. Self-supervised Co-training for Video Representation Learning. NIPS2020.


Authors:Tengda Han, Weidi Xie, Andrew Zisserman. paper


4.NLP


The fourth category is natural language processing, consisting mainly of ICLR2021 and NAACL2021 papers, with implementations for 14 works.


1. [CALM] Pre-training Text-to-Text Transformers for Concept-centric Common Sense. ICLR2021. Authors:Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren. paper code


2. Residual Energy-Based Models for Text Generation. ICLR2021.


Authors:Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, Marc’Aurelio Ranzato. paper


3. Contrastive Learning with Adversarial Perturbations for Conditional Text Generation. ICLR2021.


Authors:Seanie Lee, Dong Bok Lee, Sung Ju Hwang. paper


4. [CoDA] CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding. ICLR2021.


Authors:Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen. paper


5. [FairFil] FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders. ICLR2021.


Authors:Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, Lawrence Carin. paper


6. Towards Robust and Efficient Contrastive Textual Representation Learning. ICLR2021.


Authors:Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin. paper


7. Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data. ICLR2021.


Authors:Nils Rethmeier, Isabelle Augenstein. paper


8. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. ICLR2021.


Authors:Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, Arnold Overwijk. paper
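ANCE's key idea is that random in-batch negatives are too easy for dense retrieval; instead it selects negatives that are globally closest to the query among non-relevant documents, using an approximate-nearest-neighbor index refreshed asynchronously during training. A brute-force sketch of that negative-selection step (toy embeddings and names are ours):

```python
import numpy as np

def hardest_negatives(query, doc_embs, positive_ids, k=2):
    """ANCE-style negative selection: rank all documents by similarity to the
    query and keep the top-k that are NOT labeled relevant. The paper does
    this with an ANN index over the full corpus; brute force suffices here."""
    sims = doc_embs @ query
    order = np.argsort(-sims)                   # most similar document first
    return [int(i) for i in order if int(i) not in positive_ids][:k]

docs = np.array([[1.0, 0.0],    # id 0: the relevant document
                 [0.9, 0.1],    # id 1: near-duplicate, a hard negative
                 [0.5, 0.5],    # id 2
                 [0.0, 1.0]])   # id 3: easy negative
hard = hardest_negatives(np.array([1.0, 0.0]), docs, positive_ids={0})
```

Here the near-duplicate document (id 1) is surfaced first, which is precisely the kind of negative that random sampling would almost never draw.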


9. Self-Supervised Contrastive Learning for Efficient User Satisfaction Prediction in Conversational Agents. NAACL2021.


Authors:Mohammad Kachuee, Hao Yuan, Young-Bum Kim, Sungjin Lee. paper


10. SOrT-ing VQA Models : Contrastive Gradient Learning for Improved Consistency. NAACL2021.


Authors:Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath R. Selvaraju. paper


11. Supporting Clustering with Contrastive Learning. NAACL2021.


Authors:Dejiao Zhang, Feng Nan, Xiaokai Wei, Shangwen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew Arnold, Bing Xiang. paper


12. Understanding Hard Negatives in Noise Contrastive Estimation. NAACL2021.


Authors:Wenzheng Zhang, Karl Stratos. paper


13. Contextualized and Generalized Sentence Representations by Contrastive Self-Supervised Learning: A Case Study on Discourse Relation Analysis. NAACL2021. Authors:Hirokazu Kiyomaru, Sadao Kurohashi. paper


14. Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach. NAACL2021.


Authors:Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, Chao Zhang. paper


5.Language Contrastive Learning


The fifth category is language models, with 5 papers in this direction.


1. Distributed Representations of Words and Phrases and their Compositionality. NIPS2013.


Authors:Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean. Paper
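The skip-gram model with negative sampling from this paper is one of the earliest contrastive objectives in NLP: a true (center, context) word pair is scored against k sampled negative words with a binary logistic loss. A small numpy sketch with toy vectors (naming is ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(v_center, u_context, u_negatives):
    """Skip-gram negative-sampling loss: maximize sigmoid(u_o . v_c) for the
    observed context word and sigmoid(-u_k . v_c) for each sampled negative."""
    pos = -np.log(sigmoid(np.dot(u_context, v_center)))
    neg = -sum(np.log(sigmoid(-np.dot(u_k, v_center))) for u_k in u_negatives)
    return float(pos + neg)

v = np.array([1.0, 0.0])
good = sgns_loss(v, np.array([0.9, 0.1]), [np.array([-1.0, 0.2])])  # aligned pair
bad = sgns_loss(v, np.array([-0.9, 0.1]), [np.array([1.0, 0.2])])   # mismatched pair
```

A well-placed embedding (context aligned with the center word, negatives pointing away) yields a lower loss than the mismatched arrangement.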


2. An efficient framework for learning sentence representations.


Authors:Lajanugen Logeswaran, Honglak Lee. Paper


3. XLNet: Generalized Autoregressive Pretraining for Language Understanding.


Authors:Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. Paper


4. A Mutual Information Maximization Perspective of Language Representation Learning.


Authors:Lingpeng Kong, Cyprien de Masson d’Autume, Wang Ling, Lei Yu, Zihang Dai, Dani Yogatama. Paper


5. InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training.


Authors:Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, Ming Zhou. Paper


6.Graph


The sixth category combines graphs with contrastive learning, with implementations for 4 works.


1. [GraphCL] Graph Contrastive Learning with Augmentations. NIPS2020.


Authors:Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen. paper
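GraphCL builds positive pairs by applying stochastic graph augmentations (node dropping, edge perturbation, attribute masking, subgraph sampling) to the same graph and contrasting the resulting views. A toy sketch of the edge-perturbation augmentation, with the graph simplified to a plain edge list (our own representation, not the paper's):

```python
import random

def drop_edges(edges, drop_ratio=0.2, seed=None):
    """Create an augmented view by randomly removing a fraction of edges;
    two such views of one graph form a positive pair for contrastive training."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_ratio]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
view1 = drop_edges(edges, drop_ratio=0.3, seed=1)
view2 = drop_edges(edges, drop_ratio=0.3, seed=2)
```

Both views are subgraphs of the original, so a GNN encoder sees two corrupted versions of the same structure and is trained to map them close together.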


2. Contrastive Multi-View Representation Learning on Graphs. ICML2020.


Authors:Kaveh Hassani, Amir Hosein Khasahmadi. Paper


3. [GCC] GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training. KDD2020.


Authors:Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang. Paper


4. [InfoGraph] InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization. ICLR2020.


Authors:Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang. Paper


7.Adversarial Learning


The seventh category is adversarial training combined with contrastive learning; currently there is only one paper.


1. Contrastive Learning with Adversarial Examples. NIPS2020.


Authors:Chih-Hui Ho, Nuno Vasconcelos. paper
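Ho and Vasconcelos generate adversarial examples of the augmented views and feed them into contrastive training as harder positives/negatives. The real method backpropagates through the contrastive loss; as a hedged toy illustration of the perturbation step only, here is an FGSM-style update computed with finite differences (function names and the toy loss are ours):

```python
import numpy as np

def fgsm_perturb(x, loss_fn, eps=0.1, delta=1e-5):
    """FGSM-style perturbation: estimate the gradient of the loss w.r.t. each
    input coordinate by central differences, then step in its sign direction,
    i.e. the direction that locally increases the loss."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        grad[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * delta)
    return x + eps * np.sign(grad)

loss = lambda v: float(np.sum(v ** 2))   # toy surrogate loss
x = np.array([0.5, -0.3, 0.2])
x_adv = fgsm_perturb(x, loss, eps=0.1)
```

The perturbed sample has a strictly higher loss than the clean one, which is what makes it a useful "hard" view for the contrastive objective.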


8.Recommendation


The eighth category applies contrastive learning to recommender systems, addressing the sparsity of click data or improving model robustness, with 3 papers.


1. Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation. AAAI2021.


Authors:Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Lizhen Cui, Xiangliang Zhang. paper code


2. Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation. WWW2021. Authors:Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang. paper code


3. Self-supervised Graph Learning for Recommendation. SIGIR2021.


Authors:Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie. paper code
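SGL (paper 3 above) augments the user-item interaction graph, for example by node dropout, and contrasts node representations across the two views to counter interaction sparsity. A sketch of the node-dropout view under our own toy representation of the graph as a list of (user, item) pairs (not the paper's actual data pipeline):

```python
import random

def node_dropout_view(interactions, drop_ratio=0.1, seed=None):
    """Drop a fraction of users and items, together with every interaction
    touching them, producing one augmented view of the interaction graph."""
    rng = random.Random(seed)
    users = {u for u, _ in interactions}
    items = {i for _, i in interactions}
    dropped = {u for u in users if rng.random() < drop_ratio}
    dropped |= {i for i in items if rng.random() < drop_ratio}
    return [(u, i) for u, i in interactions
            if u not in dropped and i not in dropped]

interactions = [("u1", "i1"), ("u1", "i2"), ("u2", "i2"), ("u3", "i3")]
view = node_dropout_view(interactions, drop_ratio=0.5, seed=0)
```

Two independently sampled views of the same interaction graph then serve as the positive pair for a node-level contrastive loss alongside the usual recommendation loss.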


9.Applications


The ninth category is applications of contrastive learning to image-to-image translation, with one paper.


1. Contrastive Learning for Unpaired Image-to-Image Translation.


Authors:Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu. paper
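This paper (CUT) replaces cycle-consistency with a patchwise contrastive loss: a patch of the translated output should match the input patch at the same spatial location, against patches from other locations as negatives. A numpy sketch of that PatchNCE-style loss, simplified to precomputed L2-normalized patch features (the encoder and MLP head of the actual method are omitted):

```python
import numpy as np

def patch_nce_loss(feats_out, feats_in, tau=0.07):
    """feats_out, feats_in: (num_patches, dim), rows L2-normalized.
    Positives are same-location pairs, i.e. the diagonal of the
    similarity matrix; all other locations act as negatives."""
    logits = feats_out @ feats_in.T / tau                  # (P, P) similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 4))
f /= np.linalg.norm(f, axis=1, keepdims=True)
loss_matched = patch_nce_loss(f, f)          # locations aligned: low loss
loss_shuffled = patch_nce_loss(f, f[::-1])   # locations misaligned: high loss
```

When output and input patch features line up by location the loss is near zero; shuffling the locations makes each positive lose to its own same-content negative, so the loss jumps, mirroring why the objective enforces spatial correspondence.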

