Paper 6: NeRFFaceEditing: Disentangled Face Editing in Neural Radiance Fields
- Authors: Kaiwen Jiang, Shu-Yu Chen, et al.
- Paper link: http://geometrylearning.com/NeRFFaceEditing/
Abstract: Want to design a personalized, highly realistic 3D face, but find yourself unfamiliar with professional design software? The 3D face editing method NeRFFaceEditing offers a new solution: even without 3D modeling skills, you can freely edit highly realistic 3D faces and build personalized digital portraits for the metaverse.
NeRFFaceEditing is a collaboration between researchers from the Institute of Computing Technology, Chinese Academy of Sciences and City University of Hong Kong; the paper was published at ACM SIGGRAPH Asia 2022, a top venue in computer graphics.
NeRFFaceEditing uses 2D semantic masks as a bridge for 3D geometry editing: semantic edits made by the user in a single view propagate to the geometry of the entire 3D face while the appearance is kept unchanged. Furthermore, given an image representing a reference style, the user can easily change the appearance style of the entire 3D face while keeping the geometry unchanged.
With a 3D face editing system built on this method, even users unfamiliar with professional 3D design can easily create personalized face designs, customizing both face shape and appearance.
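The editing workflow described above amounts to two disentangled operations: fitting a geometry code to a user-edited semantic mask, and swapping in a new appearance code for style transfer. Below is a minimal, hypothetical PyTorch sketch of that workflow; the toy renderer and all names (`ToyVolumeRenderer`, `render_semantics`, `render_rgb`, the code dimensions, the mask resolution) are illustrative stand-ins, not the authors' actual tri-plane-based architecture.

```python
# Hypothetical sketch only: the renderer below is a toy stand-in for the actual
# NeRF-based generator, and none of these names come from the NeRFFaceEditing code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVolumeRenderer(nn.Module):
    """Stand-in for a NeRF-style renderer: maps a latent code to a fixed-view image."""
    def __init__(self, code_dim=64, out_channels=3, res=32):
        super().__init__()
        self.net = nn.Linear(code_dim, res * res * out_channels)
        self.res, self.out_channels = res, out_channels

    def forward(self, code):
        return self.net(code).view(-1, self.out_channels, self.res, self.res)

# Hypothetical decomposition: semantics are driven by the geometry code alone,
# RGB is driven by the concatenation of geometry and appearance codes.
render_semantics = ToyVolumeRenderer(code_dim=64, out_channels=19)  # 19 face-parsing classes
render_rgb = ToyVolumeRenderer(code_dim=128, out_channels=3)

geometry_code = torch.randn(1, 64, requires_grad=True)  # optimized during editing
appearance_code = torch.randn(1, 64)                     # kept frozen

# The user-edited 2D semantic mask from a single viewpoint (random labels here).
edited_mask = torch.randint(0, 19, (1, 32, 32))

# Geometry editing: fit the rendered semantics to the edited mask while the
# appearance code stays untouched, so the edit changes shape but not style.
optimizer = torch.optim.Adam([geometry_code], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(render_semantics(geometry_code), edited_mask)
    loss.backward()
    optimizer.step()

# Appearance transfer: swap in a code derived from the reference style image and
# re-render; the (edited) geometry code is left unchanged.
reference_appearance = torch.randn(1, 64)
stylized = render_rgb(torch.cat([geometry_code.detach(), reference_appearance], dim=1))
```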
Network architecture of NeRFFaceEditing:
Training strategy with the appearance-similarity constraint:
Recommendation: NeRFFaceEditing, a mask-based editing method for facial neural radiance fields that lets users edit 3D faces without any 3D modeling expertise.
Paper 7: RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
- Authors: Shitao Xiao, Zheng Liu, et al.
- Paper link: https://arxiv.org/abs/2205.12035
Abstract: Recently, Huawei's Poisson Lab, together with Beijing University of Posts and Telecommunications and the Huawei MindSpore team, proposed RetroMAE, a retrieval-oriented language model pre-trained with a masked auto-encoder, which substantially improves several important benchmarks in dense retrieval. The simplicity and effectiveness of its pre-training task also open up a new direction for future work. The paper has been accepted to EMNLP 2022, a top conference in natural language processing, and the model and source code built on the open-source MindSpore framework have been released to the community.
Example of the masked auto-encoder pre-training pipeline:
Basic architecture: masked auto-encoder. RetroMAE adopts the classic masked auto-encoder architecture to pre-train the model's semantic representation capability. First, the masked input text is mapped by the encoder into a semantic vector in the latent space; then, conditioned on this semantic vector, the decoder reconstructs the original text from a second, independently masked copy of the input.
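A minimal sketch of this two-view masking flow is shown below, using a toy vocabulary and plain PyTorch modules; the model sizes, mask ratios, and the choice of the first hidden state as the sentence embedding are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch only: sizes, mask ratios, and the use of the first hidden state
# as the sentence embedding are assumptions, not RetroMAE's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, seq_len = 1000, 128, 32
MASK_ID = 0

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=4)
decoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)  # shallow one-layer decoder
lm_head = nn.Linear(d_model, vocab_size)

def random_mask(tokens, ratio):
    """Replace a random subset of positions with the [MASK] id."""
    masked = tokens.clone()
    masked[torch.rand(tokens.shape) < ratio] = MASK_ID
    return masked

tokens = torch.randint(1, vocab_size, (2, seq_len))  # a toy batch of token ids
enc_input = random_mask(tokens, ratio=0.3)           # lightly masked copy for the encoder
dec_input = random_mask(tokens, ratio=0.7)           # independently, aggressively masked copy

# Encoder: compress the first view into a single sentence embedding.
sentence_emb = encoder(embed(enc_input))[:, :1, :]   # (batch, 1, d_model)

# Decoder: condition the second view on the sentence embedding (prepended here)
# and reconstruct the original tokens with a cross-entropy loss.
h_dec = decoder(torch.cat([sentence_emb, embed(dec_input)], dim=1))
logits = lm_head(h_dec[:, 1:, :])                    # drop the prepended slot
loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens.reshape(-1))
loss.backward()
```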
Enhanced decoding: a two-stream attention mechanism (H1: query stream, H2: content stream) with a randomly generated attention mask matrix (blue dots: visible positions, gray dots: masked positions).
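The key ingredient of enhanced decoding is that every query position gets its own randomly sampled set of visible context positions. The sketch below shows one plausible way to build such a position-specific additive attention mask; the sampling scheme and the convention that the sentence-embedding slot (position 0) is always visible are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative sketch only: the sampling scheme and the "position 0 is always visible"
# convention are assumptions for this example, not the paper's exact implementation.
import torch

def enhanced_decoding_mask(seq_len: int, num_visible: int) -> torch.Tensor:
    """Additive attention mask of shape (seq_len, seq_len):
    0.0 at visible positions, -inf at masked positions."""
    mask = torch.full((seq_len, seq_len), float("-inf"))
    for i in range(seq_len):
        # Each query position samples its own visible context, excluding itself,
        # so a token cannot trivially copy itself during reconstruction.
        candidates = torch.tensor([j for j in range(seq_len) if j != i])
        visible = candidates[torch.randperm(len(candidates))[:num_visible]]
        mask[i, visible] = 0.0
        mask[i, 0] = 0.0  # the sentence-embedding slot stays visible to every query
    return mask

# Usage with the two streams (H1: query stream, H2: content stream), both of
# shape (seq_len, d_model):
#   scores = H1 @ H2.T / d_model**0.5 + enhanced_decoding_mask(seq_len, num_visible=8)
#   output = torch.softmax(scores, dim=-1) @ H2
```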
Recommendation: A new breakthrough in dense retrieval: Huawei's masked auto-encoder pre-training model substantially improves multiple benchmarks.
ArXiv Weekly Radiostation
Together with the ArXiv Weekly Radiostation initiated by Hang Chu and Ruotian Luo, 机器之心 selects more important papers from this week beyond the 7 Papers above, with 10 picks each from NLP, CV, and ML, and provides audio summaries of the papers. Details are as follows:
10 NLP Papers (audio: 19:46)
This week's 10 selected NLP papers are:
1. Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning. (from Tarek Abdelzaher, Jiawei Han)
2. Estimating Soft Labels for Out-of-Domain Intent Detection. (from Jian Sun)
3. Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control. (from Ruslan Salakhutdinov, Louis-Philippe Morency)
4. Preserving Semantics in Textual Adversarial Attacks. (from Tomas Mikolov)
5. Mask More and Mask Later: Efficient Pre-training of Masked Language Models by Disentangling the [MASK] Token. (from Hermann Ney)
6. EvEntS ReaLM: Event Reasoning of Entity States via Language Models. (from Eduard Hovy)
7. Efficient Zero-shot Event Extraction with Context-Definition Alignment. (from Hongming Zhang)
8. Aligning Recommendation and Conversation via Dual Imitation. (from Minlie Huang)
9. DiaASQ: A Benchmark of Conversational Aspect-based Sentiment Quadruple Analysis. (from Tat-Seng Chua)
10. Novel Chapter Abstractive Summarization using Spinal Tree Aware Sub-Sentential Content Selection. (from Kathleen McKeown)
10 CV Papers (audio: 22:27)
This week's 10 selected CV papers are:
1. $BT^2$: Backward-compatible Training with Basis Transformation. (from Antonio Torralba)
2. NoiSER: Noise is All You Need for Enhancing Low-Light Images Without Task-Related Data. (from Yi Yang, Shuicheng Yan)
3. Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts. (from Bernt Schiele)
4. Rethinking the transfer learning for FCN based polyp segmentation in colonoscopy. (from Lei Zhang)
5. InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions. (from Xiaogang Wang, Yu Qiao)
6. Zero-shot Video Moment Retrieval With Off-the-Shelf Models. (from Raymond J. Mooney)
7. Soft Augmentation for Image Classification. (from Yang Liu, James Hays, Deva Ramanan)
8. Masked Vision-Language Transformers for Scene Text Recognition. (from Jie Wu)
9. Common Pets in 3D: Dynamic New-View Synthesis of Real-Life Deformable Categories. (from Andrea Vedaldi)
10. A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal. (from Wei Liu)
10 ML Papers (audio: 21:39)
This week's 10 selected ML papers are:
1. FED-CD: Federated Causal Discovery from Interventional and Observational Data. (from Bernhard Schölkopf)
2. A Theoretical Study on Solving Continual Learning. (from Bing Liu)
3. GOOD-D: On Unsupervised Graph Out-Of-Distribution Detection. (from Huan Liu)
4. Distributional Shift Adaptation using Domain-Specific Features. (from Huan Liu)
5. Cherry Hypothesis: Identifying the Cherry on the Cake for Dynamic Networks. (from Dacheng Tao)
6. Regression as Classification: Influence of Task Formulation on Neural Network Features. (from Francis Bach, Jean-Philippe Vert)
7. On Optimizing the Communication of Model Parallelism. (from Eric P. Xing)
8. Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization. (from Michelle Girvan)
9. ABC: Adversarial Behavioral Cloning for Offline Mode-Seeking Imitation Learning. (from Peter Stone)
10. Extragradient with Positive Momentum is Optimal for Games with Cross-Shaped Jacobian Spectrum. (from Fabian Pedregosa)