
Communications of the ACM

ACM TechNews

AI Researchers Are Trying to Combat How AI Can Be Used to Lie and Deceive


Researchers are concerned about AI-powered disinformation.

Credit: Google Play

Artificial intelligence (AI) researchers gathered at last week's Neural Information Processing Systems (NIPS 2017) conference in Long Beach, CA, to discuss measures against AI's use for deceit and disinformation.

One workshop concentrated on tactics in which adversarial examples are used to fool AI into seeing something that does not really exist. Workshop co-organizer Tim Hwang says the potential for such abuse of AI is growing, "especially if you think the inputs to do machine learning are getting lower and lower over time."
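To illustrate the kind of attack the workshop discussed, the sketch below applies a fast-gradient-sign-style perturbation to a toy linear classifier (the model, weights, and numbers here are hypothetical for illustration, not from the article): a small, targeted change to the input flips the model's prediction even though the input barely changes.

```python
import numpy as np

# Toy linear classifier: score = w . x; a positive score means class 1.
w = np.array([1.0, -1.0, 0.5])

def predict(x):
    return int(w @ x > 0)

# FGSM-style perturbation for a linear model: the gradient of the score
# with respect to x is simply w, so stepping against sign(w) pushes the
# score down while changing each input coordinate by at most eps.
def fgsm_perturb(x, eps):
    return x - eps * np.sign(w)

x = np.array([0.5, -0.2, 0.1])    # score = 0.75, classified as 1
x_adv = fgsm_perturb(x, eps=0.4)  # score = -0.25, now classified as 0
```

Each coordinate moves by only 0.4, yet the classification flips; against image models, the same idea yields perturbations too small for humans to notice.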

Hwang is concerned that AI-powered disinformation could make it virtually impossible for large populations to distinguish reality from fiction, and that trust in online content may ultimately be possible only through technological authentication.

NIPS workshop co-organizer Bryce Goodman warns of "systems that are trained to exhibit features of human intelligence but are fundamentally different in terms of how they process information. We're trying to show what hacks are possible and make it public."

From Quartz

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
