
Communications of the ACM

ACM News

These Experts are Racing to Protect AI from Hackers. Time is Running Out



"We want to give everyone the opportunity to defend themselves," says Bruce Draper, a program manager at the U.S. Defense Department's Defense Advanced Research Projects Agency (DARPA).

Credit: Robert Rodriguez

Bruce Draper recently bought a new car. It has all the latest technology, and those bells and whistles bring benefits -- along with, more worryingly, some risks.

"It has all kinds of AI going on in there: lane assist, sign recognition, and all the rest," Draper says, before adding: "You could imagine all that sort of thing being hacked -- the AI being attacked."

It's a growing fear for many -- could the often-mysterious AI algorithms, which are used to manage everything from driverless cars to critical infrastructure, healthcare, and more, be broken, fooled, or manipulated?

What if a driverless car could be fooled into driving through stop signs, or an AI-powered medical scanner tricked into making the wrong diagnosis? What if an automated security system were manipulated into letting the wrong person in, or into not recognizing that a person was there at all?

As we increasingly rely on automated systems to make decisions with potentially huge consequences, we need to be sure that AI systems cannot be fooled into making bad or even dangerous decisions. City-wide gridlock or interrupted essential services would be among the most visible failures of AI-powered systems; other, harder-to-spot failures could create even more problems.
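The scenarios above are typically demonstrated in research with adversarial examples: tiny, carefully chosen perturbations that are imperceptible to a person but flip a model's prediction. As a minimal illustrative sketch (assuming a differentiable PyTorch image classifier; the function and parameter names here are hypothetical, not from the article), the classic Fast Gradient Sign Method looks like this:

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Hypothetical sketch: `model` is any differentiable classifier that maps
    # a batched image tensor to class logits; `label` is the true class index.
    # Clone the input and track gradients with respect to its pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0, 1).detach()
```

Defenses such as adversarial training work by folding perturbed examples like these back into the training data, which is one reason programs like Draper's aim to give defenders ready-made tools.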

From ZDNet
View Full Article

 


 
