
Communications of the ACM

ACM TechNews

Making Computers Explain Themselves


Artist's representation of a neural network: researchers have devised a way to train neural networks so that they provide not only predictions and classifications, but rationales for their decisions. (Credit: Christine Daniloff/MIT)

Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a method for training neural networks so they provide not only predictions and classifications, but also rationales for their decisions.

"In real-world applications, sometimes people really want to know why the model makes the predictions it does," says MIT graduate student Tao Lei.

The neural nets are trained on textual data, with the CSAIL researchers splitting each net into two modules. The first module extracts segments of text from the training data and scores them by length and coherence: the shorter a segment, and the more of it that is drawn from strings of consecutive words, the higher its score. The selected segments are then passed to the second module, which performs the prediction or classification. The two modules are trained concurrently to maximize both the score of the extracted segments and the accuracy of the prediction or classification, as the sketch below illustrates.
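To make the division of labor concrete, here is a minimal, hypothetical sketch of the two-module setup written in PyTorch. Every class name and hyperparameter below is illustrative rather than taken from the researchers' code, and the sketch uses soft sigmoid gates so both modules stay differentiable and can be trained in one pass; the published CSAIL work selects hard, discrete rationales, which requires a more involved training procedure.

import torch
import torch.nn as nn

class RationaleExtractor(nn.Module):
    # Module 1: assigns each word a score in [0, 1]; high scores mark rationale words.
    def __init__(self, vocab_size, emb_dim=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                      # tokens: (batch, seq_len) word ids
        states, _ = self.rnn(self.emb(tokens))
        return torch.sigmoid(self.score(states)).squeeze(-1)  # (batch, seq_len) gates

class Predictor(nn.Module):
    # Module 2: predicts a rating from only the gated (selected) words.
    def __init__(self, vocab_size, emb_dim=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, tokens, gates):
        x = self.emb(tokens) * gates.unsqueeze(-1)  # zero out non-rationale words
        _, h = self.rnn(x)
        return self.out(h[-1]).squeeze(-1)          # one predicted rating per review

def joint_loss(pred, target, gates, sparsity=0.01, coherence=0.01):
    # Concurrent objective: accurate prediction plus short, contiguous rationales.
    accuracy = nn.functional.mse_loss(pred, target)
    shortness = gates.mean()                                  # favor selecting few words
    continuity = (gates[:, 1:] - gates[:, :-1]).abs().mean()  # favor consecutive words
    return accuracy + sparsity * shortness + coherence * continuity

# One joint training step on toy data (both modules updated together).
extractor, predictor = RationaleExtractor(5000), Predictor(5000)
opt = torch.optim.Adam(list(extractor.parameters()) + list(predictor.parameters()))
tokens = torch.randint(0, 5000, (8, 40))   # a batch of 8 reviews, 40 tokens each
ratings = torch.rand(8)                    # e.g. normalized aroma scores
opt.zero_grad()
gates = extractor(tokens)
loss = joint_loss(predictor(tokens, gates), ratings, gates)
loss.backward()
opt.step()

The sparsity and coherence terms play the role of the "length and coherence" scoring described above: they push the first module toward selecting a small number of consecutive words, which is what makes the selection readable as a rationale.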

One dataset on which the researchers tested their method was a set of reviews from a beer-assessing website. In tests, the system's agreement with human annotations was 96% for ratings of appearance, 95% for aroma, and 80% for palate.
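The abstract does not spell out how that agreement was computed. One plausible reading is token-level precision: the fraction of machine-selected words that fall inside spans a human annotator marked as relevant. The small Python sketch below, with invented positions purely for illustration, shows that kind of measure.

def rationale_precision(selected, annotated):
    # Fraction of model-selected token positions that a human also marked.
    if not selected:
        return 0.0
    return len(selected & annotated) / len(selected)

# Invented example: the model picks 5 word positions, 4 of which lie
# inside the human-marked span, giving 80% agreement.
print(rationale_precision({3, 4, 5, 6, 20}, {2, 3, 4, 5, 6, 7}))   # 0.8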

From MIT News
View Full Article

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 

