
Communications of the ACM

ACM TechNews

A Neural Network Learns When It Should Not Be Trusted


The advance could enhance safety and efficiency in artificial intelligence-assisted decision-making.

Massachusetts Institute of Technology researchers have developed a way for deep learning neural networks to rapidly estimate confidence levels in their output. (Credit: iStock/MIT News)

Researchers at the Massachusetts Institute of Technology (MIT) and Harvard University have enabled a neural network to rapidly process data, yielding both predictions and confidence levels based on the quality of the available data.

The technique, called deep evidential regression, estimates uncertainty from a single run of the neural network and could lead to safer outcomes in AI-assisted decision-making.

The team designed the network with a bulked-up output, generating not only a decision but also a probability distribution capturing the evidence supporting that decision; these evidential distributions directly capture the model's confidence in its prediction.
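As a rough sketch of that idea (not the authors' released implementation; the layer sizes and names below are assumptions), a regression network's final layer can be widened to emit the four parameters of a Normal-Inverse-Gamma evidential distribution rather than a single point estimate:

```python
# Illustrative PyTorch sketch of an evidential output head: instead of one
# predicted value, the layer emits the four parameters of a
# Normal-Inverse-Gamma distribution over the regression target.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        self.out = nn.Linear(in_features, 4)  # 4 outputs: gamma, nu, alpha, beta

    def forward(self, h: torch.Tensor):
        gamma, nu_raw, alpha_raw, beta_raw = self.out(h).chunk(4, dim=-1)
        nu = F.softplus(nu_raw)               # nu > 0: evidence about the mean
        alpha = F.softplus(alpha_raw) + 1.0   # alpha > 1: evidence about the variance
        beta = F.softplus(beta_raw)           # beta > 0: scale of the variance
        return gamma, nu, alpha, beta         # gamma doubles as the point prediction

# Example: a single forward pass yields both a prediction and its evidential parameters.
head = EvidentialRegressionHead(in_features=64)
gamma, nu, alpha, beta = head(torch.randn(8, 64))
```

Because the uncertainty comes from these extra outputs of one forward pass, no repeated sampling of the network is needed.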

That confidence reflects both any uncertainty in the underlying input data and the uncertainty in the model's final decision, which indicates whether uncertainty could be reduced by modifying the network itself or whether the input data is simply noisy.
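A minimal sketch of how those two kinds of uncertainty can be separated, assuming the evidential parameters from the head above and the standard Normal-Inverse-Gamma moments (illustrative only, not the paper's code):

```python
import torch

def decompose_uncertainty(nu: torch.Tensor, alpha: torch.Tensor, beta: torch.Tensor):
    """Split predictive uncertainty using Normal-Inverse-Gamma moments.

    aleatoric = E[sigma^2] = beta / (alpha - 1)        -> noise in the input data
    epistemic = Var[mu]    = beta / (nu * (alpha - 1)) -> uncertainty in the model itself
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

# High epistemic values flag inputs where a better model or more training data
# would help; high aleatoric values indicate inherently noisy inputs.
aleatoric, epistemic = decompose_uncertainty(
    nu=torch.tensor([2.0]), alpha=torch.tensor([3.0]), beta=torch.tensor([1.5])
)
```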

MIT's Daniela Rus said, "By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model."

From MIT News

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA

