
Communications of the ACM

ACM TechNews

Explained: Neural Networks


Most applications of deep learning use convolutional neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes of the next layer.

Credit: Jose-Luis Olivares/MIT
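As a rough illustration of that overlapping-cluster idea (a sketch of my own with made-up numbers, not from the article), a one-dimensional convolutional layer slides a small window over the input so that adjacent windows overlap, and every window feeds each of several filters in the next layer:

# Illustration of overlapping clusters in a convolutional layer: each window
# of 3 adjacent inputs overlaps its neighbors, and every window feeds several
# next-layer nodes (one per filter). All values here are hypothetical.
inputs = [0.2, 0.5, 0.1, 0.9, 0.4, 0.7]

filters = [
    [0.5, 1.0, 0.5],   # weights for one set of next-layer nodes
    [-1.0, 0.0, 1.0],  # weights for another set
]

window = 3
for f_index, weights in enumerate(filters):
    outputs = []
    # Slide the window one step at a time so adjacent clusters overlap.
    for start in range(len(inputs) - window + 1):
        cluster = inputs[start:start + window]
        outputs.append(sum(x * w for x, w in zip(cluster, weights)))
    print("filter", f_index, "->", [round(o, 2) for o in outputs])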

Deep learning is a new name for the neural-network approach to artificial intelligence, first proposed in 1943 by University of Chicago researchers Warren McCulloch and Walter Pitts.

The approach has repeatedly fallen out of favor and been revived, with the latest resurgence coming in the second decade of the 21st century.

Neural nets are a machine-learning method in which a computer learns to perform a task by analyzing training examples. In the original model, each node's weights and threshold were set by hand rather than learned, and McCulloch and Pitts demonstrated that such a net could, in principle, compute any function a digital computer could.
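To make "weights and thresholds" concrete, here is a minimal sketch in Python (not from the original article): a single unit sums its weighted inputs and fires if the total reaches a threshold, with hand-picked, hypothetical values chosen to compute the logical AND of two binary inputs.

def unit(inputs, weights, threshold):
    # Fire (output 1) only if the weighted sum of the inputs reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked (not learned) weights and threshold that compute logical AND.
weights = [1.0, 1.0]
threshold = 2.0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", unit([a, b], weights, threshold))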

By the 1980s, researchers had developed algorithms for adjusting neural nets' weights and thresholds that were efficient enough for networks with multiple layers, reviving the field after decades of neglect.
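As a rough illustration of what such weight-adjusting algorithms do (a sketch of my own, not code from the article), the following Python trains a tiny two-layer network on the XOR function by gradient descent, nudging each weight to reduce the output error.

import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-input -> 2-hidden -> 1-output network with randomly initialized weights.
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [0.0, 0.0]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
rate = 0.5

for epoch in range(20000):
    for x, target in data:
        # Forward pass: hidden activations, then the output.
        h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2)) + b_hidden[j]) for j in range(2)]
        y = sigmoid(sum(w_out[j] * h[j] for j in range(2)) + b_out)

        # Backward pass: error signals for the output unit and the hidden units.
        d_out = (y - target) * y * (1 - y)
        d_hidden = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

        # Gradient-descent updates: move each weight against its error gradient.
        for j in range(2):
            w_out[j] -= rate * d_out * h[j]
            b_hidden[j] -= rate * d_hidden[j]
            for i in range(2):
                w_hidden[j][i] -= rate * d_hidden[j] * x[i]
        b_out -= rate * d_out

# Print the trained network's outputs; they should be close to XOR's targets.
for x, target in data:
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2)) + b_hidden[j]) for j in range(2)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(2)) + b_out)
    print(x, "->", round(y, 2), "(target", target, ")")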

The latest resurgence of neural networks as deep learning owes much to the computer-game industry, whose demand for graphics processing produced the many-core graphics processing unit (GPU), a chip architecture well suited to running neural nets.

From MIT News

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
