
Communications of the ACM

ACM TechNews

Comparing AI Reasoning with Human Thinking



The new method helps a user understand a machine-learning model’s reasoning, and how that reasoning compares to that of a human.

Credit: Christine Daniloff/MIT

Researchers at the Massachusetts Institute of Technology (MIT) and IBM Research have developed a method for comparing the reasoning of artificial intelligence (AI) software with human reasoning, in order to better understand the AI's decision-making.

The Shared Interest technique compares saliency analyses of an AI's decisions with human-annotated ground-truth data. It classifies the AI's reasoning as one of eight patterns, ranging from completely distracted (the model makes incorrect predictions and its reasoning does not align with human reasoning) to completely human-aligned (the model makes correct predictions based on the same evidence a human would use).
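To make the comparison concrete, the sketch below shows one plausible way to quantify agreement between a model's saliency map and a human-annotated region, using overlap metrics of the kind Shared Interest builds on (intersection-over-union and coverage scores). This is a minimal illustration, not the authors' implementation: the function name, the 0.5 saliency threshold, and the toy arrays are assumptions made here for clarity.

```python
import numpy as np

def alignment_scores(saliency, human_mask, threshold=0.5):
    """Compare a model saliency map against a human-annotated mask.

    `saliency` is a float array in [0, 1]; `human_mask` is a boolean
    array of the same shape marking the region a human deemed relevant.
    The 0.5 threshold is an illustrative choice, not from the paper.
    """
    model_mask = saliency >= threshold
    intersection = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    iou = intersection / union if union else 0.0
    # Fraction of the human-annotated region the model attends to,
    # and fraction of the model's attention that falls inside it.
    ground_truth_coverage = intersection / max(human_mask.sum(), 1)
    saliency_coverage = intersection / max(model_mask.sum(), 1)
    return iou, ground_truth_coverage, saliency_coverage

# Toy example: the model focuses on the right half of an image,
# while the human annotated the center.
saliency = np.zeros((4, 4))
saliency[:, 2:] = 0.9
human_mask = np.zeros((4, 4), dtype=bool)
human_mask[1:3, 1:3] = True
print(alignment_scores(saliency, human_mask))  # low IoU: partial alignment
```

Scores like these, combined with whether the model's prediction was correct, are the kind of signal that lets a pattern such as "correct but misaligned" be distinguished from "correct and human-aligned."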

Said MIT's Angie Boggust, "Providing human users with tools to interrogate and understand their machine-learning models is crucial to ensuring machine-learning models can be safely deployed in the real world."

From IEEE Spectrum
View Full Article

 

Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA


 

