ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. Bengio is Professor at the University of Montreal and Scientific Director at Mila, Quebec’s Artificial Intelligence Institute; Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute, and University Professor Emeritus at the University of Toronto; and LeCun is Professor at New York University and VP and Chief AI Scientist at Facebook.
Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.
While the use of artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s, by the early 2000s, LeCun, Hinton and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community’s interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.
The ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing,” carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing.
“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM President Cherri M. Pancake. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science.”
“Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding,” said Jeff Dean, Google Senior Fellow and SVP, Google AI. “At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year’s Turing Award winners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor.”
Machine Learning, Neural Networks and Deep Learning
In traditional computing, a computer program directs the computer with explicit step-by-step instructions. In deep learning, a subfield of AI research, the computer is not explicitly told how to solve a particular task such as object classification. Instead, it uses a learning algorithm to extract patterns in the data that relate the input data, such as the pixels of an image, to the desired output such as the label “cat.” The challenge for researchers has been to develop effective learning algorithms that can modify the weights on the connections in an artificial neural network so that these weights capture the relevant patterns in the data.
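The learning loop described above can be sketched in a few lines. This is a minimal illustration, not code from any of the laureates: the toy "images," labels, and learning rate are all invented, and a single layer of weights stands in for a full network.

```python
import numpy as np

# A learning algorithm repeatedly nudges the connection weights so that
# the output for each input moves toward the desired label.
X = np.array([[0.9, 0.8, 0.1],   # four 3-"pixel" inputs; the first two
              [0.8, 0.9, 0.2],   # resemble "cat", the last two do not
              [0.1, 0.2, 0.9],
              [0.2, 0.1, 0.8]])
y = np.array([1.0, 1.0, 0.0, 0.0])  # desired outputs: 1.0 = "cat"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)   # connection weights, to be learned
lr = 0.5          # learning rate

for _ in range(2000):
    pred = sigmoid(X @ w)              # forward pass: current predictions
    grad = X.T @ (pred - y) / len(y)   # gradient of the cross-entropy loss
    w -= lr * grad                     # adjust weights against the gradient

pred = sigmoid(X @ w)
```

After training, the weights have captured the pattern relating the "pixels" to the label: the first two inputs score near 1.0 and the last two near 0.0.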
Geoffrey Hinton, who has been advocating for a machine learning approach to artificial intelligence since the early 1980s, looked to how the human brain functions to suggest ways in which machine learning systems might be developed. Inspired by the brain, he and others proposed “artificial neural networks” as a cornerstone of their machine learning investigations.
In computer science, the term “neural networks” refers to systems composed of layers of relatively simple computing elements called “neurons” that are simulated in a computer. These “neurons,” which only loosely resemble the neurons in the human brain, influence one another via weighted connections. By changing the weights on the connections, it is possible to change the computation performed by the neural network. Hinton, LeCun and Bengio recognized the importance of building deep networks using many layers—hence the term “deep learning.”
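As a concrete toy (the weights below are chosen by hand, purely for illustration): a two-layer network of simple threshold "neurons" can compute XOR, and changing the weights on the connections would make the very same structure compute a different function.

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)  # a very simple "neuron": fire if input > 0

# Layer 1: two hidden neurons. With these weights, hidden unit 0 fires
# when either input is on (OR), and unit 1 fires only when both are (AND).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Layer 2: the output neuron fires when OR is on but AND is off -> XOR.
W2 = np.array([1.0, -2.0])
b2 = -0.5

def forward(x):
    h = step(x @ W1 + b1)      # hidden layer of weighted connections
    return step(h @ W2 + b2)   # output layer

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
outputs = forward(inputs)      # XOR of each input pair
```

Deep learning replaces this hand-wiring with learned weights, and stacks many such layers.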
The conceptual foundations and engineering advances laid by LeCun, Bengio and Hinton over a 30-year period were significantly advanced by the prevalence of powerful graphics processing unit (GPU) computers, as well as access to massive datasets. In recent years, these and other factors led to leap-frog advances in technologies such as computer vision, speech recognition and machine translation.
Hinton, LeCun and Bengio have worked both together and independently. For example, LeCun performed postdoctoral work under Hinton’s supervision, and LeCun and Bengio worked together at Bell Labs beginning in the early 1990s. Even when they were not collaborating, there is a synergy and interconnectedness in their work, and they have greatly influenced each other.
Bengio, Hinton and LeCun continue to explore the intersection of machine learning with neuroscience and cognitive science, most notably through their joint participation in the Learning in Machines and Brains program, an initiative of CIFAR, formerly known as the Canadian Institute for Advanced Research.
Select Technical Accomplishments
The technical achievements of this year’s Turing Laureates, which have led to significant breakthroughs in AI technologies, include but are not limited to the following:
Geoffrey Hinton
Backpropagation: In a 1986 paper, “Learning Internal Representations by Error Propagation,” co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
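The core idea can be sketched on a tiny two-layer network (this is an illustration, not the 1986 paper's code): backpropagation applies the chain rule layer by layer to obtain the gradient of the loss with respect to every weight, which can be verified against a numerical gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=3)        # input
t = 0.7                       # target output
W1 = rng.normal(size=(3, 4))  # input -> hidden weights
w2 = rng.normal(size=4)       # hidden -> output weights

def loss(W1, w2):
    h = np.tanh(x @ W1)       # hidden layer (an internal representation)
    y = h @ w2                # linear output
    return 0.5 * (y - t) ** 2

# Forward pass, keeping intermediate values for the backward pass.
h = np.tanh(x @ W1)
y = h @ w2

# Backward pass: propagate the error back through each layer.
dy = y - t                    # dL/dy
dw2 = dy * h                  # dL/dw2
dh = dy * w2                  # dL/dh
dz = dh * (1 - h ** 2)        # through tanh: dL/dz, where z = x @ W1
dW1 = np.outer(x, dz)         # dL/dW1

# A finite-difference check on one weight confirms the backprop gradient.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
num_grad = (loss(W1p, w2) - loss(W1, w2)) / eps
```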
Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
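The two ingredients named above can each be sketched in isolation; this is a minimal illustration with assumed layer sizes, not the AlexNet code itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)  # rectified linear neuron: max(0, z)

def dropout(h, p, training):
    """Randomly zero a fraction p of activations during training,
    scaling the survivors so the expected activation is unchanged."""
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

x = rng.normal(size=8)
W = rng.normal(size=(8, 16))
h = dropout(relu(x @ W), p=0.5, training=True)  # one dense layer's output
```

ReLUs avoid the vanishing gradients of saturating activations, and dropout discourages hidden units from co-adapting, which reduces overfitting.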
Yoshua Bengio
Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and modern deep learning speech recognition systems are extending these concepts.
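One ingredient of that combination can be sketched on its own: the forward algorithm of a hidden Markov model, which computes the probability of an observation sequence under the model. The two-state model and all probabilities below are invented for illustration.

```python
import numpy as np

A = np.array([[0.7, 0.3],      # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

def sequence_likelihood(obs):
    """Forward algorithm: sum over all hidden-state paths efficiently."""
    alpha = pi * B[:, obs[0]]          # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # recurse: transition, then emit
    return alpha.sum()

p = sequence_likelihood([0, 0, 1])
```

In the hybrid systems of the 1990s, a neural network supplied the emission scores that a hand-specified table like `B` provides here.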
High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper, “A Neural Probabilistic Language Model,” which introduced high-dimensional word embeddings as a representation of word meaning. Bengio’s insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism that led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
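A toy sketch of the embedding idea (the vocabulary, dimensionality, and vectors are all made up for illustration): each word maps to a dense vector, and training pulls words used in similar contexts toward nearby vectors, so vector similarity reflects similarity of meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["cat", "dog", "car", "truck"]
dim = 5
E = rng.normal(size=(len(vocab), dim))   # embedding matrix, one row per word

# Stand-in for training: place "dog" near "cat" and "truck" near "car".
E[1] = E[0] + 0.1 * rng.normal(size=dim)
E[3] = E[2] + 0.1 * rng.normal(size=dim)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_cat_dog = cosine(E[0], E[1])   # related words: high similarity
sim_cat_car = cosine(E[0], E[2])   # unrelated words: lower similarity
```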
Generative adversarial networks: Since 2010, Bengio’s papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.
Yann LeCun
Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
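The key mechanism, weight sharing, can be shown in a few lines (an illustration, not LeCun's original code): one small filter slides over the whole image, so the same few weights detect a feature wherever it appears. Here a hand-set 3x3 filter responds to vertical edges.

```python
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0               # right half bright: a vertical edge

kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])  # left-dark / right-bright detector

def conv2d(img, k):
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # The SAME weights are applied at every position.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

fmap = conv2d(image, kernel)     # feature map: peaks where the edge is
```

In a real convolutional network the filter weights are learned by backpropagation rather than set by hand, and many filters are stacked in layers.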
Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.
Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks—a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in every modern deep learning software, that learning systems can be built as complex networks of modules where backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
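The modular idea can be sketched minimally (this is an assumed illustration, not Bottou and LeCun's actual system): each module knows its own forward computation and how to pass gradients backward, so chaining modules yields backpropagation for the whole network automatically.

```python
class Scale:
    """Module computing y = a * x, with a learnable parameter a."""
    def __init__(self, a):
        self.a = a
    def forward(self, x):
        self.x = x                        # remember input for backward
        return self.a * x
    def backward(self, grad_out):
        self.grad_a = grad_out * self.x   # gradient for the parameter
        return grad_out * self.a          # gradient passed to earlier modules

class Square:
    """Module computing y = x ** 2 (no parameters)."""
    def forward(self, x):
        self.x = x
        return x * x
    def backward(self, grad_out):
        return grad_out * 2 * self.x

# Compose the modules into f(x) = (a * x) ** 2 with a = 3.
scale, square = Scale(3.0), Square()
y = square.forward(scale.forward(2.0))         # forward through the chain

grad_x = scale.backward(square.backward(1.0))  # backward through the chain
```

Modern deep learning frameworks generalize exactly this pattern: arbitrary graphs of modules, differentiated automatically.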
ACM will present the 2018 A.M. Turing Award at its annual Awards Banquet on June 15 in San Francisco, California.