Researchers at the Massachusetts Institute of Technology and Duke University, through their Adaptable Interpretable Machine Learning (AIM) project, are pursuing two main approaches to replacing black-box machine learning models with more transparent prediction methods: interpretable neural networks, and adaptable and interpretable Bayesian rule lists (BRLs).
Traditional neural networks' functions are nonlinear and recursive, making it difficult to identify how the network arrived at its conclusion. The researchers have addressed this by developing what they call "prototype neural networks," which differ from traditional neural networks in that they naturally encode explanations for each of their predictions by learning prototypes, particularly representative parts of an input image.
The networks make their predictions based on the similarity of parts of the input image to each prototype.
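The article does not detail the networks' internals, but a minimal sketch of the prototype-similarity idea might look like the following, assuming patch embeddings from a convolutional network are already available. The array shapes, the log-based similarity function, and all variable names are illustrative assumptions, not the researchers' actual implementation.

```python
# Sketch: score an image by comparing its patches to learned prototypes.
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_prototypes, n_classes, dim = 49, 10, 3, 128

# Patch embeddings for one input image (assumed precomputed by a conv net).
patches = rng.normal(size=(n_patches, dim))
# Learned prototype vectors, each meant to capture a representative image part.
prototypes = rng.normal(size=(n_prototypes, dim))
# Weights connecting prototype similarities to class scores.
class_weights = rng.normal(size=(n_prototypes, n_classes))

# Squared L2 distance between every patch and every prototype: (49, 10).
dists = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)

# Convert distance to similarity, then keep each prototype's best-matching
# patch; that patch serves as the visual "evidence" the prototype contributes.
similarity = np.log((dists + 1.0) / (dists + 1e-4))
evidence = similarity.max(axis=0)            # shape: (n_prototypes,)

logits = evidence @ class_weights            # shape: (n_classes,)
print("prototype evidence:", np.round(evidence, 2))
print("predicted class:", int(logits.argmax()))
```

Under this structure, each class score decomposes into per-prototype contributions, and the best-matching patch for each prototype can be displayed alongside the prediction, which is what makes the explanation inspectable.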
The researchers are also applying the interactive BRLs from the AIM project to help medical students become better at interviewing and diagnosing patients.
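In general form, a Bayesian rule list is an ordered series of if-then rules in which the first rule a case satisfies determines the predicted probability. The sketch below illustrates only that decision structure; the medical rules and probabilities are invented placeholders, not output of the AIM system.

```python
# Sketch: evaluating an ordered rule list. Rules and probabilities are
# hypothetical; a real BRL is learned from data with posterior estimates.
from typing import Callable

# Each entry: (human-readable condition, predicate, estimated probability).
RuleList = list[tuple[str, Callable[[dict], bool], float]]

rule_list: RuleList = [
    ("age > 60 and history of hypertension",
     lambda p: p["age"] > 60 and p["hypertension"], 0.58),
    ("smoker and age > 45",
     lambda p: p["smoker"] and p["age"] > 45, 0.31),
]
DEFAULT_PROBABILITY = 0.05  # applies when no rule above fires

def predict(patient: dict) -> tuple[float, str]:
    """Walk the rules in order; the first matching rule decides."""
    for condition, predicate, prob in rule_list:
        if predicate(patient):
            return prob, f"if {condition}"
    return DEFAULT_PROBABILITY, "else (no rule matched)"

prob, explanation = predict({"age": 67, "hypertension": True, "smoker": False})
print(f"risk estimate: {prob:.2f}, because: {explanation}")
```

Because the prediction is traceable to a single matched rule, a student can see exactly which observed facts drove a given diagnostic estimate.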
From MIT News
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA