LaMDA: our breakthrough conversation technology

May 18, 2021

Eli Collins
VP, Product Management

Zoubin Ghahramani
VP, Google Research
An animation demonstrating how language is processed by LaMDA technology.
We've always had a soft spot for language at Google. Early on, we set out to translate the web. More recently, we’ve invented machine learning techniques that help us better grasp the intent of Search queries. Over time, our advances in these and other areas have made it easier and easier to organize and access the heaps of information conveyed by the written and spoken word.

But there’s always room for improvement. Language is remarkably nuanced and adaptable. It can be literal or figurative, flowery or plain, inventive or informational. That versatility makes language one of humanity’s greatest tools — and one of computer science’s most difficult puzzles.

LaMDA, our latest research breakthrough, adds pieces to one of the most tantalizing sections of that puzzle: conversation.

Animation of a multi-turn conversation between a person and LaMDA. The transcript reads:

USER: Hi, I'd like to have a conversation demonstration for a blog post.
LaMDA: Hi. I'm a friendly and knowledgeable demonstration for your blog post. I can engage in interesting conversation about virtually anything. What would you like to chat about?
USER: if my washing machine and tv are both malfunctioning at the same time what could the reason be?
LaMDA: It could be the power surge from the washing machine may have fried your tv. If it's a power surge problem it may be time to get a new washing machine. Or a new TV.
USER: yeah maybe. btw how many legs does a horse have, and could it fix my washing machine?
LaMDA: Horses have four legs. How could they help?
USER: you tell me
LaMDA: Oh, well they could certainly help you. By moving you to a new home where you could install a new washing machine.
While conversations tend to revolve around specific topics, their open-ended nature means they can start in one place and end up somewhere completely different. A chat with a friend about a TV show could evolve into a discussion about the country where the show was filmed before settling on a debate about that country’s best regional cuisine.

That meandering quality can quickly stump modern conversational agents (commonly known as chatbots), which tend to follow narrow, pre-defined paths. But LaMDA — short for “Language Model for Dialogue Applications” — can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.

The long road to LaMDA
LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
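The "pay attention" step the paragraph describes is the Transformer's core operation, scaled dot-product attention: each position in the input scores its relevance to every other position, then mixes their representations accordingly. A minimal NumPy sketch of that single operation (the dimensions and random inputs are illustrative, not LaMDA's actual configuration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position weights every key position by similarity,
    then returns the similarity-weighted mix of value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings,
# attending to itself (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(x, x, x)
```

Each row of `w` is a probability distribution over the input positions, which is what lets the model learn "how those words relate to one another" before predicting what comes next.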

But unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. One of those nuances is sensibleness. Basically: Does the response to a given conversational context make sense? For instance, if someone says:

“I just started taking guitar lessons.”

You might expect another person to respond with something like:

“How exciting! My mom has a vintage Martin that she loves to play.”

That response makes sense, given the initial statement. But sensibleness isn’t the only thing that makes a good response. After all, the phrase “that’s nice” is a sensible response to nearly any statement, much in the way “I don’t know” is a sensible response to most questions. Satisfying responses also tend to be specific, by relating clearly to the context of the conversation. In the example above, the response is sensible and specific.
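The sensible-but-generic failure mode above can be made concrete with a toy check. This is purely an illustration, not Google's actual metric: it flags stock phrases that would "make sense" after almost any statement, which is exactly why specificity has to be measured separately from sensibleness.

```python
# Hypothetical list of stock phrases for illustration only.
GENERIC_RESPONSES = {"that's nice", "i don't know", "ok", "sure", "cool"}

def is_generic(response: str) -> bool:
    """Return True if the response is a stock phrase that fits
    nearly any conversational context."""
    return response.lower().strip(" .!?") in GENERIC_RESPONSES

print(is_generic("That's nice."))                                # True
print(is_generic("How exciting! My mom has a vintage Martin."))  # False
```

A real specificity measure would need human raters or a learned model; the point of the sketch is only that genericness is detectable and distinct from making sense.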

LaMDA builds on earlier Google research, published in 2020, that showed Transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.

Responsibility first
These early results are encouraging, and we look forward to sharing more soon, but sensibleness and specificity aren't the only qualities we're looking for in models like LaMDA. We're also exploring dimensions like "interestingness," by assessing whether responses are insightful, unexpected or witty. Being Google, we also care a lot about factuality (that is, whether LaMDA sticks to facts, something language models often struggle with), and we are investigating ways to ensure LaMDA's responses aren't just compelling but correct.
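One common way to assess dimensions like these is to have human raters label each model response along each quality dimension and then average the labels. A minimal sketch of that bookkeeping (the dimension names and the 0/1 labeling scheme here are assumptions for illustration, not LaMDA's published evaluation protocol):

```python
from statistics import mean

# Hypothetical ratings: for each response, 1 means raters judged it
# sensible / specific / interesting, 0 means they did not.
ratings = [
    {"sensible": 1, "specific": 1, "interesting": 1},
    {"sensible": 1, "specific": 0, "interesting": 0},
    {"sensible": 0, "specific": 0, "interesting": 0},
]

def dimension_scores(ratings):
    """Average each quality dimension across all rated responses."""
    dims = ratings[0].keys()
    return {d: mean(r[d] for r in ratings) for d in dims}

scores = dimension_scores(ratings)
```

Tracking each dimension separately matters because, as noted above, a model can score well on one (sensibleness) while failing another (specificity or factuality).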

But the most important question we ask ourselves when it comes to our technologies is whether they adhere to our AI Principles. Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use.

Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks. We're deeply familiar with the issues machine learning models can raise, such as unfair bias, as we've been researching and developing these technologies for many years. That's why we build and open-source resources that researchers can use to analyze models and the data on which they're trained; why we've scrutinized LaMDA at every step of its development; and why we'll continue to do so as we work to incorporate conversational abilities into more of our products.

POSTED IN:
AI Research
