
Communications of the ACM

ACM TechNews

Google Developing Panic Button to Kill Rogue AI


Push the button in the event of an emergency.

Researchers are working to develop a panic button that can be used to stop a potentially out-of-control artificial intelligence.

Credit: ubergizmo.com

Researchers at Google's DeepMind subsidiary and Oxford University are collaborating to create a panic button to interrupt a potentially rogue artificial intelligence (AI) agent.

The researchers propose a framework that lets human operators repeatedly and safely interrupt an AI agent's reinforcement learning, while at the same time blocking the agent from learning how to prevent an operator from switching off its machine learning or reinforcement learning.

The researchers studied AI agents working in real time with human operators, considering scenarios in which an operator would need to press a button to stop the agent from continuing actions that could harm the agent, its operator, or the surrounding environment, and to teach or guide the agent toward a safer situation.

"However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions; for example, by disabling the red button--which is an undesirable outcome," according to the researchers.

When Google acquired DeepMind in 2014, the DeepMind founders made it a condition of the buyout that Google create an AI ethics board to oversee the advances Google makes in AI.

From InformationWeek
View Full Article

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


