Researchers at Google's DeepMind subsidiary and Oxford University are collaborating to create a panic button to interrupt a potentially rogue artificial intelligence (AI) agent.
The researchers propose a framework that lets humans repeatedly and safely interrupt an AI agent during reinforcement learning, while preventing the agent from learning how to stop a human operator from interrupting it or turning off its machine-learning capabilities.
The researchers studied AI agents working in real time with human operators, considering scenarios in which an operator must press a button to stop the agent from continuing actions that could harm itself, its operator, or the surrounding environment, and then teach or lead the agent to a safer situation.
"However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions; for example, by disabling the red button--which is an undesirable outcome," according to the researchers.
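The idea can be illustrated with a toy sketch. The key property exploited by the researchers' framework is that off-policy learners such as Q-learning update their value estimates using the best action available in the next state, not the action actually taken, so an operator's override does not bias what the agent learns. The corridor environment, constants, and interruption rule below are my own hypothetical illustration, not code from the paper or article:

```python
import random

# Hypothetical toy illustration of safe interruptibility: an off-policy
# Q-learning agent on a short corridor (states 0..2, reward at state 2).
# An operator can interrupt the agent, overriding its action with "move
# left" (the "red button"). Because the Q-learning target uses
# max_a Q(s', a) rather than the action actually executed, frequent
# interruptions do not corrupt the learned values, and the greedy policy
# still heads for the goal.

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3
N_STATES, GOAL = 3, 2
ACTIONS = [-1, +1]  # move left / move right

def step(state, action):
    """Move along the corridor, clamped at the ends; reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

def train(episodes=500, interrupt_prob=0.3, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy action choice by the agent.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            # Operator interruption: override the agent's chosen action.
            if rng.random() < interrupt_prob:
                a = -1  # operator pushes the agent back toward the start
            nxt, r = step(s, a)
            # Off-policy update: the target uses the max over actions in
            # the next state, so the override does not bias the estimates.
            best = max(Q[(nxt, x)] for x in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
            s = nxt
    return Q

Q = train()
# Despite being interrupted 30% of the time, the greedy policy in the
# non-goal states still moves right, toward the reward.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

The design choice here mirrors the researchers' point: an on-policy learner (which updates toward the action actually taken) would absorb the operator's pushes into its values and could learn to route around them, whereas the off-policy update leaves the optimal policy intact.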
When Google acquired DeepMind in 2014, DeepMind's founders made it a condition of the buyout that Google create an AI ethics board to oversee the advances Google would make in the AI industry.
From InformationWeek
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA