
Communications of the ACM

ACM TechNews

Robot Eyes and Humans Fix on Different Things to Decode a Scene


Researchers are mapping how people and artificial intelligence-based systems focus their visual attention.

Credit: Getty Images

Researchers at Facebook and the Virginia Polytechnic Institute and State University (Virginia Tech) are probing how human minds differ from artificial intelligence (AI) systems by mapping human and machine visual attention.

Because attention maps can be measured for both humans and machines, they let researchers study how computers choose to decode a scene.

The researchers asked human subjects to answer questions about a set of blurred images, letting the subjects click around the screen to sharpen parts of each image. The clicks were logged to build a map of where each subject's attention was drawn, and the same test was run on neural networks that had been trained to interpret images. Although the neural networks answered the questions accurately, the researchers found little overlap between machine and human attention.
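The article does not say how the researchers quantified the overlap between human and machine attention maps. The sketch below assumes one plausible measure, a Spearman rank correlation computed over flattened maps, with randomly generated grids standing in for real click-derived human maps and network-derived attention weights; the names and grid size are illustrative, not taken from the study.

    import numpy as np
    from scipy.stats import spearmanr

    def attention_overlap(human_map, machine_map):
        """Rank-correlate two same-shaped spatial attention maps.

        Values near 0 suggest the human and the machine attended to
        largely different regions; values near 1 suggest strong agreement.
        """
        assert human_map.shape == machine_map.shape
        corr, _ = spearmanr(human_map.ravel(), machine_map.ravel())
        return corr

    # Illustrative 14x14 attention grids (one weight per image region);
    # real maps would come from logged clicks and from a network's attention weights.
    rng = np.random.default_rng(0)
    human = rng.random((14, 14))
    machine = rng.random((14, 14))

    print(f"rank correlation: {attention_overlap(human, machine):+.3f}")

Under this kind of measure, a correlation near zero would correspond to the "little overlap" the researchers report.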

The results could be used by scientists seeking to make AI systems attend to scenes more like humans do.

"Machines do not seem to be looking at the same regions as humans, which suggests that we do not understand what they are basing their decisions on," says Virginia Tech researcher Dhruv Batra. "Can we make them more human-like, and will that translate to higher accuracy?"

From New Scientist

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
