
Communications of the ACM

ACM TechNews

Fooling the Machine


An image an artificial intelligence might have difficulty identifying.

Credit: Julien Pacaud

A growing field of research suggests artificial intelligence (AI) can easily be fooled, because such systems can produce correct answers without truly understanding the information they process.

Attackers could in theory exploit this weakness by feeding deceptive, or "adversarial," data to machine-learning systems, leading them to potentially disastrous conclusions. Some researchers say accounting for this danger early in the AI development process could help address it.

In one experiment, researchers altered images fed into a deep neural network by only 4%, and successfully fooled it into misclassifying the images 97% of the time. Another research team fed inputs to an image classifier and observed the decisions it made, reverse-engineering the algorithm in order to deceive an image-recognition system of the kind that could be used in driverless vehicles.
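The article does not name the researchers' exact technique; the following minimal sketch shows one widely known way to build such a perturbation, the fast gradient sign method, written in Python with PyTorch. The tiny linear classifier, the image size, and the epsilon of 0.04 (chosen to echo the 4% figure above) are placeholder assumptions, not details from the experiments described.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier; in practice this would be a trained deep network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(image, label, epsilon=0.04):
    # Nudge each pixel by at most epsilon in the direction that
    # increases the classification loss (fast gradient sign method).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)   # an input image with values in [0, 1]
y = torch.tensor([3])          # its true class index
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())                     # change bounded by epsilon
print(model(x).argmax(1), model(x_adv).argmax(1))  # labels may now disagree

The perturbation is small enough to be hard for a person to notice, yet against a trained network a step like this is frequently sufficient to change the predicted label.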

Meanwhile, a team at Georgetown University and another at the University of California, Berkeley developed algorithms that can issue speech commands that are unintelligible to human ears, causing digital personal assistants to perform actions their owners never intended.

These research groups have determined how to retrain their classifier networks so speech-recognition systems can withstand such attacks; the method involves training the networks on both legitimate and adversarial input.
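The abstract does not detail the retraining procedure; the sketch below shows one common form of this idea, often called adversarial training, again in Python with PyTorch. The model, optimizer, batch, and epsilon are stand-ins: each training step computes the loss on the legitimate batch plus the loss on an adversarially perturbed copy of it.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def perturb(x, y, epsilon=0.04):
    # One-step adversarial copy of the batch (same sign-based method as above).
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def training_step(x, y):
    # Train on the legitimate batch and its adversarial counterpart together.
    x_adv = perturb(x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.rand(8, 3, 32, 32)        # stand-in batch of images
y = torch.randint(0, 10, (8,))      # stand-in labels
print(training_step(x, y))

Mixing both kinds of input pushes the network to give the same answer for an example and its perturbed variants, which is what hardens it against the attacks described above.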

From Popular Science

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
