
Communications of the ACM

ACM TechNews

The 'Weird Events' That Make Machines Hallucinate


Simple stickers on a 'stop' sign are enough to render it invisible to a machine vision algorithm.


Credit: Kevin Eykholt et al.

Computers can be tricked into misidentifying objects and sounds, raising concerns about the real-world use of artificial intelligence (AI); experts call such glitches "adversarial examples" or "weird events."

"We can think of them as inputs that we expect the network to process in one way, but the machine does something unexpected upon seeing that input," said Anish Athalye of the Massachusetts Institute of Technology (MIT).

In one experiment, Athalye's team slightly modified the texture and coloring of certain physical objects to fool machine-learning models into classifying them as something else.
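For illustration only (this is not the MIT team's actual method), the Python sketch below shows the fast gradient sign method, one widely studied way to craft such adversarial inputs; the model, image, label, and epsilon values are illustrative placeholders.

# A minimal sketch of the fast gradient sign method (FGSM): nudge each pixel
# slightly in the direction that increases the classifier's loss, so the
# change stays visually subtle but flips the prediction. All names here are
# assumptions for illustration, not details from the article.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    # image: tensor of shape [1, C, H, W] with values in [0, 1]
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient,
    # then clamp back to a valid pixel range.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()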

MIT's Aleksander Madry said the problem may be rooted partly in the tendency to engineer machine learning frameworks to optimize their performance on average. Neural networks might be fortified against outliers by feeding them more challenging examples of whatever scientists are trying to teach them.
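As a rough sketch of that "more challenging examples" idea, often called adversarial training, the snippet below augments each training batch with perturbed copies before the usual update; the function and parameter names are illustrative assumptions, not taken from the article.

# A minimal sketch of adversarial training: craft perturbed copies of each
# batch (fast gradient sign method) and update the network on clean and
# perturbed inputs together, so it also learns the hard cases.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Perturb the batch along the sign of the loss gradient.
    perturbed = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(perturbed), labels).backward()
    perturbed = (perturbed + epsilon * perturbed.grad.sign()).clamp(0, 1).detach()

    # Standard gradient step on the combined clean + perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.cat([images, perturbed])),
                           torch.cat([labels, labels]))
    loss.backward()
    optimizer.step()
    return loss.item()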

From BBC News
View Full Article

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 

