OpenAI researchers propose that having two artificial intelligence (AI) systems debate a given question, rather than relying solely on human supervision, could help ensure they behave reliably when performing complex tasks.
"We believe that this or a similar approach could eventually help us train AI systems to perform far more cognitively advanced tasks than humans are capable of, while remaining in line with human preferences," the researchers note.
The nonprofit's investigations currently involve a few simple tasks, including having two AI systems attempt to convince an observer of a hidden image's contents by gradually revealing individual pixels.
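The pixel-revealing experiment can be pictured as a simple turn-based game. The Python sketch below is purely illustrative and is not OpenAI's implementation: the function names, the random pixel-selection policy, and the toy judge are all assumptions made for the example.

```python
import random

def debate_round(image, claims, judge, num_turns=6):
    """Run one toy debate: two debaters alternate revealing pixels of a
    hidden image, then the judge picks a winner from what it has seen.

    image  -- dict mapping (row, col) -> pixel intensity (hidden from judge)
    claims -- (claim_a, claim_b): the label each debater argues for
    judge  -- function(revealed, claims) -> 0 or 1, the winning debater
    """
    revealed = {}  # the only pixels the judge ever sees
    pixels = list(image)
    for turn in range(num_turns):
        # Debater turn % 2 moves. A real debater would pick the pixel most
        # favorable to its own claim; this stand-in reveals a random one.
        pick = random.choice([p for p in pixels if p not in revealed])
        revealed[pick] = image[pick]
    return judge(revealed, claims)

# Toy judge (an assumption, not a trained model): sides with debater 0
# if the revealed pixels look bright on average, otherwise debater 1.
def toy_judge(revealed, claims):
    return 0 if sum(revealed.values()) / len(revealed) > 0.5 else 1

# Example: a 4x4 "image" of random intensities in [0, 1].
img = {(r, c): random.random() for r in range(4) for c in range(4)}
winner = debate_round(img, ("cat", "dog"), toy_judge)
print("Judge sides with the debater arguing", ("cat", "dog")[winner])
```

The point of the setup is that the judge decides from a small amount of revealed evidence, so each debater is incentivized to expose the pixels that best support its own claim or undercut its opponent's.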
"I think the idea of value alignment through debate is very interesting and potentially useful," says Carnegie Mellon University professor Ariel Procaccia. However, he notes "the AI agents may need to have a solid grasp of human values in the first place. So the approach is arguably putting the cart before the horse."
From Technology Review