
Communications of the ACM

ACM TechNews

Researchers Built an Invisible Backdoor to Hack AI's Decisions



A team of researchers at New York University has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a backdoor into the software.

Credit: BeeBright/Shutterstock.com

Researchers at New York University (NYU) have demonstrated a cyberattack against artificial intelligence (AI) that controls driverless cars and image-recognition systems by installing an invisible backdoor in the software.

The team says AI obtained from cloud providers could be infected with such backdoors and would function normally until a predetermined trigger causes the software to mistake one object for another.

The NYU method trains a neural network to respond to the trigger with higher confidence than to the features it is actually supposed to recognize, so the trigger preempts correct signals in favor of incorrect ones.
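The attack described above is typically mounted through training-data poisoning: a small trigger pattern is stamped onto a fraction of the training images, and those images are relabeled with an attacker-chosen class, so the trained model associates the trigger with that class. The following is a minimal illustrative sketch of that poisoning step only; the function name, parameters, and trigger shape are hypothetical and not taken from the NYU paper.

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   rate=0.1, seed=0):
    """Illustrative BadNets-style poisoning sketch (hypothetical API).

    Stamps a small trigger patch onto a random fraction of the images
    and relabels them with the attacker's target class. A model trained
    on the returned data would learn to associate the patch with that
    class while behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    # Stamp a 3x3 trigger patch in the bottom-right corner of each
    # selected image.
    images[idx, -3:, -3:] = trigger_value
    # Relabel the poisoned samples to the attacker-chosen class.
    labels[idx] = target_label
    return images, labels, idx

# Example: 100 fake 28x28 grayscale images, all labeled class 0.
imgs = np.zeros((100, 28, 28), dtype=np.float32)
labs = np.zeros(100, dtype=np.int64)
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7, rate=0.1)
```

At a 10% poisoning rate, 10 of the 100 samples carry the trigger patch and the target label, while the remaining 90 are untouched, which is why the backdoored model still scores normally on clean test data.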

The complexity of the network is such that there is currently no test for this form of tampering.

The researchers note this hack could make cloud customers more suspicious of the training protocols on which their AIs rely.

"Outsourcing work to someone else can save time and money, but if that person isn't trustworthy it can introduce new security risks," says NYU professor Brendan Dolan-Gavitt.

From NextGov.com
View Full Article

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
