
Communications of the ACM

ACM TechNews

Robots Will Be More Useful If They Are Made to Lack Confidence


Not so cocky now, are you?

Credit: Jason Lee/Reuters

In an attempt to combat fake news, University of California, Berkeley researchers are developing an artificial intelligence (AI) system that seeks and accepts human oversight.

Rather than promoting every article it thinks users want to see, an AI algorithm that is more uncertain of its abilities would be more likely to defer to a human's better judgment.

To explore the idea of a computer's "self-confidence," the researchers designed a mathematical model of a human-robot interaction called the "off-switch game." In this theoretical game, a robot with an off switch is given a task to do; the human is free to press the robot's off switch at any time, but the robot can also choose to disable its switch so the human cannot turn it off.

The researchers are studying what degree of "confidence" to give a robot so it will allow the human to flip the off switch when necessary.
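The article does not give the model's mathematics, but the trade-off it describes can be illustrated with a minimal sketch. Assume (hypothetically) the robot holds a belief over the utility of its planned action, expressed as samples, and the human would press the off switch only for actions with negative utility. The function names and the sampling setup below are illustrative, not from the research itself:

```python
def robot_choice(utility_samples):
    """Decide whether to act immediately or defer to the human.

    utility_samples: the robot's belief about its action's utility
    to the human, represented as a list of sampled values. A robot
    that is "confident" believes all samples are positive; an
    uncertain robot entertains negative possibilities.
    """
    n = len(utility_samples)
    # Expected utility of acting now, bypassing the off switch.
    act_now = sum(utility_samples) / n
    # Expected utility of deferring: a rational human lets only
    # positive-utility actions proceed, so negatives become 0.
    defer = sum(max(u, 0.0) for u in utility_samples) / n
    # Since max(u, 0) >= u for every sample, deferring can never
    # be worse in expectation -- the incentive to leave the
    # off switch enabled.
    return "defer" if defer >= act_now else "act"
```

Note that `max(u, 0.0) >= u` term by term, so under this (idealized) model of a rational human overseer, deferring weakly dominates acting, which is precisely the intuition behind giving the robot less confidence.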

From New Scientist
View Full Article

 

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA


 
