

Taking Machine Thinking Out of the Black Box


The Adaptable Interpretable Machine Learning project is exploring ways to have artificial intelligence explain its decisions.


Researchers at the Massachusetts Institute of Technology and Duke University, through their Adaptable Interpretable Machine Learning (AIM) project, are pursuing two main approaches to replacing black-box machine learning models with more transparent prediction methods: interpretable neural networks, and adaptable and interpretable Bayesian rule lists (BRLs).

The functions traditional neural networks compute are nonlinear and recursive, making it difficult to trace how a network arrived at its conclusion. The researchers address this with what they call "prototype neural networks," which differ from traditional neural networks in that they naturally encode an explanation for each of their predictions by learning prototypes: particularly representative parts of an input image.

The networks make their predictions based on the similarity of parts of the input image to each prototype.
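At a high level, the mechanism can be sketched in a few lines of Python. The snippet below is an illustrative sketch, not the AIM researchers' code; the function names, the log-based similarity measure, and the array shapes are assumptions made for the example.

import numpy as np

# Illustrative sketch of prototype-based prediction (assumed names and shapes).
# Each prototype is compared against embeddings of parts ("patches") of the
# input image; a class score is a weighted sum of the best-match similarities,
# so the evidence behind a prediction can be read off prototype by prototype.

def prototype_scores(patch_embeddings, prototypes):
    # patch_embeddings: (num_patches, dim), prototypes: (num_prototypes, dim)
    dists = ((patch_embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    sims = np.log((dists + 1.0) / (dists + 1e-4))  # small distance -> large similarity
    return sims.max(axis=0)  # best-matching patch per prototype

def class_logits(patch_embeddings, prototypes, class_weights):
    # class_weights: (num_classes, num_prototypes)
    return class_weights @ prototype_scores(patch_embeddings, prototypes)

# Example with random data: 49 patch embeddings, 10 prototypes, 3 classes.
rng = np.random.default_rng(0)
patches = rng.normal(size=(49, 128))
protos = rng.normal(size=(10, 128))
weights = rng.normal(size=(3, 10))
print(class_logits(patches, protos, weights))

Because each class score decomposes into per-prototype contributions, the network can point to the image regions that most resembled each prototype when explaining a prediction.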

The researchers are applying the interactive BRLs from the AIM project to help medical students become better at interviewing and diagnosing patients.
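To illustrate the form of a rule list: the sketch below is a hypothetical, hand-written rule list in Python, not the researchers' model; the conditions, probabilities, and record fields (age, chest_pain, hypertension) are invented for illustration. A Bayesian rule list learns the choice and ordering of rules, and the probability attached to each, from data.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Rule:
    description: str                   # human-readable condition
    condition: Callable[[dict], bool]  # test applied to a patient record
    probability: float                 # predicted probability if the rule fires

# Hypothetical rule list; in a BRL these rules and probabilities are learned.
rule_list: List[Rule] = [
    Rule("age > 60 and chest pain", lambda r: r["age"] > 60 and r["chest_pain"], 0.85),
    Rule("history of hypertension", lambda r: r["hypertension"], 0.40),
]
default_probability = 0.05  # used when no rule matches

def predict(record: dict) -> Tuple[str, float]:
    # Rules are checked in order; the first match explains the prediction.
    for rule in rule_list:
        if rule.condition(record):
            return rule.description, rule.probability
    return "no rule matched", default_probability

# Example: predict({"age": 72, "chest_pain": True, "hypertension": False})
# returns ("age > 60 and chest pain", 0.85)

Because the prediction is simply the first rule that fires, a student or clinician can see exactly which observations led to it.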

From MIT News

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 

