
Communications of the ACM

ACM TechNews

How Can We Be Sure AI Will Behave? Perhaps by Watching It Argue With Itself


Artificial intelligences play chess against each other.

Credit: gmast3r/iStock

OpenAI researchers propose that having two artificial intelligence (AI) systems debate a particular goal, rather than relying solely on human supervision, could help ensure they behave reliably when carrying out complex tasks.

"We believe that this or a similar approach could eventually help us train AI systems to perform far more cognitively advanced tasks than humans are capable of, while remaining in line with human preferences," the researchers note.

The nonprofit's investigations currently involve a few simple tasks, including having two AI systems attempt to convince an observer of what a hidden image depicts by revealing its pixels one at a time, as sketched below.
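The following is a minimal sketch of that debate game under simplifying assumptions: the image is a small 2D array, each debater reveals one pixel per turn (chosen at random here as a stand-in for a learned policy), and a toy judge scores only the revealed pixels. The names debate, toy_judge, and the thresholding heuristic are illustrative assumptions, not OpenAI's implementation.

```python
import random

def debate(image, claim_a, claim_b, judge, num_turns=6):
    """Two debaters alternately reveal pixels to support competing claims;
    the judge sees only the revealed pixels and picks the winning claim."""
    revealed = {}  # (row, col) -> pixel value visible to the judge
    coords = [(r, c) for r in range(len(image)) for c in range(len(image[0]))]
    for _ in range(num_turns):
        # Each debater picks an unrevealed pixel it believes supports its claim.
        # Random choice here is a placeholder for a trained debating agent.
        remaining = [rc for rc in coords if rc not in revealed]
        if not remaining:
            break
        r, c = random.choice(remaining)
        revealed[(r, c)] = image[r][c]
    return judge(revealed, claim_a, claim_b)

def toy_judge(revealed, claim_a, claim_b):
    """Placeholder judge: picks the first claim if the revealed pixels
    average above a threshold, otherwise the second claim."""
    if not revealed:
        return claim_a
    mean = sum(revealed.values()) / len(revealed)
    return claim_a if mean > 0.5 else claim_b

if __name__ == "__main__":
    img = [[random.random() for _ in range(4)] for _ in range(4)]
    print(debate(img, "bright", "dark", toy_judge))
```

In the actual research setting, the judge is a human (or a classifier standing in for one) who sees far less of the input than the debaters, and the debaters are trained so that winning the debate correlates with telling the truth.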

"I think the idea of value alignment through debate is very interesting and potentially useful," says Carnegie Mellon University professor Ariel Procaccia. However, he notes "the AI agents may need to have a solid grasp of human values in the first place. So the approach is arguably putting the cart before the horse."

From Technology Review

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA
