
Communications of the ACM

ACM TechNews

Reading a Neural Network's Mind


Figure: mind of a neural network, illustration. Credit: Chelsea Turner / MIT

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute have used an interpretive method to analyze neural networks trained for machine translation and speech recognition. The analysis provides empirical support for some common intuitions about how such networks work.

The team took a trained network and used each layer's output, in response to individual training examples, to train a second neural network on a specific task; the second network's performance indicates which task each layer is optimized for. One empirical insight is that the systems appear to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving to higher-level tasks, such as transcription or semantic interpretation.
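The article does not give implementation details, but the layer-analysis approach it describes is commonly realized as a "probing classifier": freeze a trained network, extract each layer's activations, and train a simple classifier on those activations for a diagnostic task; the classifier's accuracy suggests how well that layer encodes the property. A minimal sketch of the idea, using a hypothetical frozen random encoder and a synthetic binary task in place of a real translation or speech model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" network: a frozen two-layer encoder.
# In the actual study this would be a translation or speech model;
# here fixed random weights stand in for pretrained layers.
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=(32, 32))

def layer_outputs(x):
    """Return the activations of layer 1 and layer 2 for inputs x."""
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    return h1, h2

# Synthetic diagnostic task: predict a binary property of the input.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train_probe(features, labels, steps=500, lr=0.5):
    """Train a logistic-regression probe on frozen features;
    return its training accuracy as a proxy for how decodable
    the diagnostic property is from this layer."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        w -= lr * (features.T @ (p - labels)) / len(labels)
        b -= lr * np.mean(p - labels)
    preds = (features @ w + b) > 0.0
    return np.mean(preds == labels)

h1, h2 = layer_outputs(X)
acc1 = train_probe(h1, y)
acc2 = train_probe(h2, y)
print(f"probe accuracy on layer 1: {acc1:.2f}")
print(f"probe accuracy on layer 2: {acc2:.2f}")
```

Comparing probe accuracy across layers is what lets researchers say, for example, that part-of-speech information is most available in early layers: the layer whose activations yield the best probe is the one that most strongly encodes the property.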

A more surprising finding was an omission in the type of data the translation network considers; correcting it improved the network's performance, suggesting that analyzing neural networks in this way could help make artificial intelligence systems more accurate.

From MIT News 

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 

