
Communications of the ACM

ACM Careers

Researchers Look to Quantify the Trustworthiness of Neural Networks


interlocking human and robotic hands, illustration

Credit: MIT Sloan Management Review

Doubts about the trustworthiness of artificial intelligence are an impediment to the technology's adoption. Now, a tool developed by researchers at the USC Viterbi School of Engineering generates automatic indicators that assess the trustworthiness of the data and predictions generated by AI algorithms.

The researchers describe their work in "There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks," published in Frontiers in Artificial Intelligence.

Neural networks are a type of artificial intelligence, modeled after the brain, that generate predictions. Can those predictions be trusted? A key barrier to the adoption of self-driving cars is the expectation that the vehicles act as independent decision-makers on autopilot: they must quickly decipher and recognize objects on the road (a speed bump, an inanimate object, a pet, or a child) and decide how to act if another vehicle is swerving toward them. Should the car hit the oncoming vehicle, or swerve and hit what it perceives to be an inanimate object or a child? Can we trust the software within the vehicle to make sound decisions within fractions of a second, especially when conflicting information comes from different sensing modalities, such as computer vision from cameras and data from lidar? Knowing which systems to trust, and which sensing system is most accurate, would help determine what decisions the autopilot should make.

Mingxi Cheng, a doctoral student in the USC Cyber Physical Systems Group and lead author of the Frontiers in Artificial Intelligence report, was driven to work on the project by this thought: "Even humans can be indecisive in certain decision-making scenarios. In cases involving conflicting information, why can't machines tell us when they don't know?"

A tool the authors created, named DeepTrust, can quantify the amount of uncertainty in a prediction and whether human intervention is necessary, says Paul Bogdan, an associate professor in the Ming Hsieh Department of Electrical and Computer Engineering at USC and the report's corresponding author.
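As a rough illustration of what such an indicator can look like (not the authors' DeepTrust method), the Python sketch below flags a classifier's prediction for human review when the entropy of its output is high; the `needs_human_review` function and its threshold are hypothetical examples, not details from the paper.

```python
# Illustrative sketch only -- not the authors' DeepTrust implementation.
# It shows one generic way to turn a classifier's softmax output into an
# uncertainty score and a flag for human review.
import math

def predictive_entropy(probs):
    """Shannon entropy of a probability vector; higher means less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_human_review(probs, threshold=0.8):
    """Flag a prediction whose entropy exceeds a chosen threshold.

    The threshold is a hypothetical tuning knob, not a value from the paper.
    """
    return predictive_entropy(probs) > threshold

# A confident prediction versus a near-tie between two classes.
print(needs_human_review([0.95, 0.03, 0.02]))  # False: low uncertainty
print(needs_human_review([0.45, 0.44, 0.11]))  # True: flag for a human
```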

Shahin Nazarian, an associate professor of Electrical and Computer Engineering Practice at USC, is also an author of the report.

The USC research team spent almost two years developing the tool, which employs what is known as subjective logic to assess the architecture of the neural networks. In one of their test cases, the polls from the 2016 U.S. Presidential election, DeepTrust found that the prediction pointing toward Hillary Clinton as the winner had a greater margin for error.
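Subjective logic represents an opinion with belief, disbelief, and uncertainty masses that sum to one, plus a base rate. The sketch below is a minimal, generic illustration of that representation and of a standard evidence-to-opinion mapping; it is not the paper's treatment of network architectures, and the `Opinion` class and prior weight of 2 are textbook conventions rather than details from the study.

```python
# A minimal sketch of subjective logic, the formalism the tool builds on.
# It illustrates the general framework (binomial opinions), not how
# DeepTrust applies it to neural network architectures.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float       # mass supporting the proposition
    disbelief: float    # mass against it
    uncertainty: float  # lack of evidence; belief + disbelief + uncertainty = 1
    base_rate: float    # prior probability used when evidence is absent

    def expected_probability(self) -> float:
        """Projected probability: the uncertainty mass is split by the base rate."""
        return self.belief + self.base_rate * self.uncertainty

def opinion_from_evidence(positive: float, negative: float,
                          base_rate: float = 0.5, prior_weight: float = 2.0) -> Opinion:
    """Standard mapping from evidence counts to a binomial opinion."""
    total = positive + negative + prior_weight
    return Opinion(positive / total, negative / total, prior_weight / total, base_rate)

# Ample evidence yields low uncertainty; sparse, conflicting evidence does not.
print(opinion_from_evidence(90, 10))  # uncertainty is roughly 0.02
print(opinion_from_evidence(1, 1))    # uncertainty is 0.5
```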

The study also provides insight into how to test the reliability of AI algorithms, which are typically trained on thousands to millions of data points. Checking whether each of the data points informing an AI's predictions was labeled accurately would be enormously time-consuming; it is more critical, the researchers say, that the architecture of these neural network systems be accurate. Bogdan notes that if computer scientists want to maximize accuracy and trust simultaneously, this work could also serve as a guidepost for how much "noise" can be in testing samples.
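The following toy experiment is a hedged sketch of the kind of noise question that remark raises: it corrupts a fraction of test labels and reports the accuracy a perfect predictor would then appear to have. The dataset, noise model, and `measured_accuracy` helper are placeholders, not the paper's experiments.

```python
# A toy experiment in the spirit of the "noise" remark above: corrupt a
# fraction of test labels and watch measured accuracy fall. The data and
# noise model are placeholders, not the paper's experiments.
import random

def measured_accuracy(predictions, labels, noise_rate, num_classes=10, seed=0):
    """Accuracy against labels in which a fraction `noise_rate` are corrupted."""
    rng = random.Random(seed)
    noisy = [rng.randrange(num_classes) if rng.random() < noise_rate else y
             for y in labels]
    return sum(p == y for p, y in zip(predictions, noisy)) / len(noisy)

# A perfect predictor appears only about 82% accurate under 20% label noise
# (a corrupted label still matches by chance 10% of the time).
labels = [random.randrange(10) for _ in range(10_000)]
print(measured_accuracy(labels, labels, noise_rate=0.2))
```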

The researchers believe the model is the first of its kind. "To our knowledge, there is no trust quantification model or tool for deep learning, artificial intelligence, and machine learning," Bogdan says. "This is the first approach and opens new research directions." This tool has the potential to make "artificial intelligence aware and adaptive," he says.


 
