ACM, the Association for Computing Machinery, today named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Bengio is a professor at the University of Montreal and Scientific Director at Mila, Quebec's Artificial Intelligence Institute; Hinton is vice president and Engineering Fellow at Google, Chief Scientific Adviser at the Vector Institute, and Emeritus Professor at the University of Toronto; and LeCun is a professor at New York University, and vice president and Chief Artificial Intelligence (AI) Scientist at Facebook.
Credits: Yoshua Bengio, photo by Maryse Boyce; Geoffrey Hinton, photo by Keith Penner; Yann LeCun, photo from Facebook.
Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.
Although artificial neural networks were introduced in the 1980s as a tool to help computers recognize patterns and simulate human intelligence, by the early 2000s LeCun, Hinton, and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community's interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.
The ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing," carries a $1-million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing. Bengio, Hinton, and LeCun will formally receive the 2018 ACM A.M. Turing Award at ACM's annual awards banquet on Saturday, June 15, 2019 in San Francisco, CA.
"Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society," said ACM president Cherri M. Pancake. "The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton, and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science."
"Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding," said Jeff Dean, Google Senior Fellow and senior vice president of Google AI. "At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year's Turing Award winners, Yoshua Bengio, Geoff Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor."
Machine Learning, Neural Networks, and Deep Learning
In traditional computing, a computer program directs the computer with explicit step-by-step instructions. In deep learning, a subfield of AI research, the computer is not explicitly told how to solve a particular task such as object classification. Instead, it uses a learning algorithm to extract patterns in the data that relate the input data, such as the pixels of an image, to the desired output, such as the label "cat." The challenge for researchers has been to develop effective learning algorithms that can modify the weights on the connections in an artificial neural network so that these weights capture the relevant patterns in the data.
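The weight-adjustment idea described above can be illustrated with a minimal sketch (this example is not from the press release): a single simulated "neuron" whose weight and bias are nudged by gradient descent until they capture a simple pattern in the data.

```python
# A minimal illustration: one "neuron" with a single weight and bias,
# adjusted by gradient descent so its output matches a desired target.
def train_neuron(data, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = w * x + b      # the neuron's current output
            err = pred - target   # how far off it is
            w -= lr * err * x     # nudge the weight to reduce the error
            b -= lr * err         # nudge the bias likewise
    return w, b

# Samples of the pattern target = 2*x + 1 that the weights should capture.
samples = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = train_neuron(samples)
```

After training, `w` and `b` end up close to 2 and 1: the learning algorithm, not the programmer, found the weights that relate input to output.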
Geoffrey Hinton, who has been advocating for a machine learning approach to artificial intelligence since the early 1980s, looked to how the human brain functions to suggest ways in which machine learning systems might be developed. Inspired by the brain, he and others proposed "artificial neural networks" as a cornerstone of their machine learning investigations.
In computer science, the term "neural networks" refers to systems composed of layers of relatively simple computing elements called "neurons" that are simulated in a computer. These "neurons," which only loosely resemble the neurons in the human brain, influence one another via weighted connections. By changing the weights on the connections, it is possible to change the computation performed by the neural network. Hinton, LeCun and Bengio recognized the importance of building deep networks using many layers—hence the term "deep learning."
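The layered structure just described can be sketched in a few lines (the weights below are arbitrary values chosen for illustration): each layer computes weighted sums of its inputs and squashes them through a nonlinearity, and changing any weight changes the computation the network performs.

```python
import math

def sigmoid(z):
    # A classic squashing nonlinearity mapping any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    # One output per row of weights: a weighted sum of the inputs, squashed.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

hidden_w = [[0.5, -0.3], [0.8, 0.2]]   # 2 inputs -> 2 hidden neurons
output_w = [[1.0, -1.0]]               # 2 hidden -> 1 output neuron

hidden = layer([1.0, 0.0], hidden_w)   # first layer's activations
output = layer(hidden, output_w)       # second layer reads the first
```

Stacking many such layers is what makes a network "deep."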
The conceptual foundations and engineering advances laid by LeCun, Bengio, and Hinton over a 30-year period were significantly advanced by the prevalence of powerful graphics processing unit (GPU) computers, as well as access to massive datasets. In recent years, these and other factors led to leap-frog advances in technologies such as computer vision, speech recognition, and machine translation.
Hinton, LeCun and Bengio have worked together and independently. For example, LeCun performed postdoctoral work under Hinton's supervision, and LeCun and Bengio worked together at Bell Labs beginning in the early 1990s. Even while not working together, there is a synergy and interconnectedness in their work, and they have greatly influenced each other.
Bengio, Hinton, and LeCun continue to explore the intersection of machine learning with neuroscience and cognitive science, most notably through their joint participation in the Learning in Machines and Brains program, an initiative of CIFAR, formerly known as the Canadian Institute for Advanced Research.
Select Technical Accomplishments
The technical achievements of this year's Turing Laureates, which have led to significant breakthroughs in AI technologies, include, but are not limited to, the following:
Geoffrey Hinton
Backpropagation: In a 1986 paper, "Learning Internal Representations by Error Propagation," co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
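A tiny worked sketch (not drawn from the 1986 paper itself) shows what backpropagation computes: the chain rule pushes the output error back through the layers to yield the gradient of the loss with respect to every weight, which can be verified against a finite-difference estimate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass of a tiny 1-input, 2-hidden-unit, 1-output network.
def forward(x, w1, w2):
    h = [sigmoid(w * x) for w in w1]           # hidden activations
    y = sum(v, ) if False else sum(v * a for v, a in zip(w2, h))  # linear output
    return h, y

def loss(x, target, w1, w2):
    _, y = forward(x, w1, w2)
    return 0.5 * (y - target) ** 2

# Backpropagation: push the error signal back through each layer.
def grad_w1(x, target, w1, w2):
    h, y = forward(x, w1, w2)
    dy = y - target                            # dLoss/dOutput
    # Chain rule through the output weight and the sigmoid's derivative.
    return [dy * v * a * (1 - a) * x for v, a in zip(w2, h)]

x, target = 0.7, 1.0
w1, w2 = [0.3, -0.4], [0.6, 0.9]
analytic = grad_w1(x, target, w1, w2)

# Finite-difference check that backprop computes the true gradient.
eps = 1e-6
numeric = []
for i in range(len(w1)):
    bumped = list(w1)
    bumped[i] += eps
    numeric.append((loss(x, target, bumped, w2) - loss(x, target, w1, w2)) / eps)
```

The analytic and numeric gradients agree, which is exactly why backpropagation lets networks discover useful internal representations: every weight receives an accurate error signal.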
Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
Improvements to convolutional neural networks: In 2012, with his students Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
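Both ingredients mentioned above are simple to state; the sketch below (an illustration, not the 2012 system) shows rectified linear units, which pass positive activations through unchanged, and "inverted" dropout, which randomly zeroes activations during training while rescaling the survivors so the expected value is unchanged.

```python
import random

def relu(values):
    # Rectified linear unit: keep positives, zero out negatives.
    return [max(0.0, v) for v in values]

def dropout(values, p, rng):
    # Inverted dropout: drop each activation with probability p during
    # training, scaling survivors by 1/(1-p) to preserve the expectation.
    keep = 1.0 - p
    return [v / keep if rng.random() >= p else 0.0 for v in values]

acts = relu([-2.0, -0.5, 0.0, 1.5, 3.0])
rng = random.Random(0)          # seeded for reproducibility
dropped = dropout(acts, 0.5, rng)
```

Dropout acts as a regularizer: because any unit may vanish, no unit can rely too heavily on any other, which reduces overfitting.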
Yoshua Bengio
Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas, incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and modern deep learning speech recognition systems extend these concepts.
High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper, "A Neural Probabilistic Language Model," which introduced high-dimensional word embeddings as a representation of word meaning. Bengio's insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism which led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
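The core of an attention mechanism can be sketched in a few lines. This is a simplified dot-product variant for illustration only (the mechanism from Bengio's group used a different, additive scoring function): a query is scored against each key, the scores are normalized with a softmax, and the value vectors are blended according to those weights.

```python
import math

def softmax(scores):
    # Numerically stable softmax: exponentiate and normalize to sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Score the query against each key by dot product.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    # Blend the value vectors according to the attention weights.
    context = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return context, weights

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]        # the first key matches the query best
values = [[10.0, 0.0], [0.0, 10.0]]
context, weights = attend(query, keys, values)
```

In translation, this lets the model focus on the relevant source words when producing each target word, rather than compressing the whole sentence into one fixed vector.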
Generative adversarial networks: Since 2010, Bengio's papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.
Yann LeCun
Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
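The efficiency mentioned above comes from weight sharing: a convolutional layer slides one small filter over the whole image, reusing the same few weights at every position. A minimal sketch of that core operation (a "valid" cross-correlation, as deep learning libraries typically implement convolution; the values are illustrative):

```python
def conv2d(image, kernel):
    # Slide the kernel over the image; at each position, take the
    # weighted sum of the overlapping patch ("valid" cross-correlation).
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge detector on an image that is dark on the left and
# bright on the right: the response peaks exactly at the edge.
image = [[0, 0, 1, 1]] * 3
edge_kernel = [[-1, 1]] * 3     # 3x2 filter
result = conv2d(image, edge_kernel)
```

Because the same filter is applied everywhere, the network learns far fewer parameters than a fully connected layer of the same size, and a feature learned in one part of the image is recognized anywhere.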
Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.
Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks—a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, now used in virtually all modern deep learning software, that learning systems can be built as complex networks of modules where backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
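The modules-plus-automatic-differentiation idea can be sketched in miniature (this is an illustration in the spirit of that work, not code from it): each operation records how its result was computed, and a single `backward()` call propagates gradients through the recorded graph automatically, with no hand-derived formulas for the composite function.

```python
class Value:
    """A number that remembers how it was computed, so gradients can
    flow backward through the computation graph automatically."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._grad_fn = None   # distributes this node's grad to its parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def grad_fn():
            self.grad += out.grad
            other.grad += out.grad
        out._grad_fn = grad_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v._grad_fn:
                v._grad_fn()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # dz/dx = y + 1, dz/dy = x
z.backward()
```

Modern frameworks apply this same principle at scale: any composition of differentiable modules can be trained end to end.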
Biographical Background
Geoffrey Hinton
Geoffrey Hinton is vice president and an Engineering Fellow at Google, Chief Scientific Adviser for The Vector Institute, and an Emeritus Professor at the University of Toronto. Hinton received a bachelor's degree in experimental psychology from Cambridge University and a doctoral degree in artificial intelligence from the University of Edinburgh. He was the founding director of the Neural Computation and Adaptive Perception (later Learning in Machines and Brains) program at CIFAR.
Hinton's honors include Companion of the Order of Canada (Canada's highest honor), Fellow of the Royal Society (UK), foreign member of the National Academy of Engineering (US), the International Joint Conference on Artificial Intelligence (IJCAI) Award for Research Excellence, the NSERC Herzberg Gold Medal, and the IEEE James Clerk Maxwell Gold Medal. He was also selected by Wired magazine for "The Wired 100—2016's Most Influential People," and by Bloomberg as one of the 50 people who changed the landscape of global business in 2017.
Yoshua Bengio
Yoshua Bengio is a professor at the University of Montreal, holds a Canada CIFAR AI Chair, and is the scientific director of both Mila (Quebec's Artificial Intelligence Institute) and IVADO (the Institute for Data Valorization). He is co-director (with Yann LeCun) of CIFAR's Learning in Machines and Brains program. Bengio received a bachelor's degree in electrical engineering, a master's degree in computer science, and a doctoral degree in computer science from McGill University.
Bengio's honors include being named an Officer of the Order of Canada and a Fellow of the Royal Society of Canada, and receiving the Marie-Victorin Prize. His work in founding and serving as scientific director for the Quebec Artificial Intelligence Institute (Mila) is also recognized as a major contribution to the field. Mila, an independent nonprofit organization, now counts 300 researchers and 35 faculty members among its ranks. It is the largest academic center for deep learning research in the world, and has helped put Montreal on the map as a vibrant AI ecosystem, with research labs from major companies as well as AI startups.
Yann LeCun
Yann LeCun is Silver Professor of the Courant Institute of Mathematical Sciences at New York University, and vice president and Chief AI Scientist at Facebook. He received a Diplôme d'Ingénieur from the Ecole Superieure d'Ingénieur en Electrotechnique et Electronique (ESIEE), and a Ph.D. in computer science from Université Pierre et Marie Curie.
His honors include membership in the U.S. National Academy of Engineering; the title Doctorates Honoris Causa bestowed by IPN Mexico and École Polytechnique Fédérale de Lausanne (EPFL); the Pender Award, bestowed by the University of Pennsylvania; the Holst Medal, bestowed by the Technical University of Eindhoven, Philips Research and Signify Research; the Nokia-Bell Labs Shannon Luminary Award; the IEEE PAMI Distinguished Researcher Award; and the IEEE Neural Network Pioneer Award. He was also selected by Wired magazine for "The Wired 100—2016's Most Influential People" and its "25 Geniuses Who are Creating the Future of Business." LeCun was the founding director of the New York University Center of Data Science, and is a co-director (with Yoshua Bengio) of CIFAR's Learning in Machines and Brains program. LeCun is also a co-founder and former member of the board of the Partnership on AI, a group of companies and nonprofits studying the societal consequences of AI.
About the ACM A.M. Turing Award
The A.M. Turing Award was named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing, who was a key contributor to the Allied cryptanalysis of the Enigma cipher during World War II. Since its inception in 1966, the Turing Award has honored the computer scientists and engineers who created the systems and underlying theoretical foundations that have propelled the information technology industry.
About ACM
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.