The U.S. Navy is funding projects to teach autonomous systems to behave appropriately and avoid harming humans by demonstrating the desired behavior, putting the systems through their paces, and then offering remedial critiques.
"We're trying to develop systems that don't have to be told exactly what to do," says Office of Naval Research manager Marc Steinberg. "You can give them high-level mission guidance, and they can work out the steps involved to carry out a task."
One project at the Georgia Institute of Technology (Georgia Tech) involves an artificial intelligence software program named Quixote, which uses stories to teach robots acceptable behavior. Georgia Tech professor Mark Riedl says Quixote could function as a "human user manual" that teaches machines human values via parables that emphasize shared cultural knowledge, social mores, and protocols.
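In Riedl's published descriptions, Quixote turns example stories into sequences of plot events and uses them as a reward signal for a reinforcement-learning agent. The sketch below illustrates that story-based reward-shaping idea only in miniature; the event names, reward values, and the `shaped_reward` helper are hypothetical illustrations, not Georgia Tech's actual code.

```python
# A minimal sketch of story-based reward shaping in the spirit of Quixote.
# All names and numbers here are illustrative assumptions. The example
# "story" is reduced to an ordered list of plot events; an agent earns a
# bonus for reproducing those events in order and a large penalty for
# socially unacceptable actions.

STORY_EVENTS = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
UNACCEPTABLE = {"steal_medicine", "push_ahead"}

def shaped_reward(action, progress, base_reward=0.0):
    """Return (reward, new_progress) for one action.

    progress counts how many story events the agent has matched so far.
    """
    if action in UNACCEPTABLE:
        return base_reward - 10.0, progress          # strong penalty for norm violations
    if progress < len(STORY_EVENTS) and action == STORY_EVENTS[progress]:
        return base_reward + 1.0, progress + 1       # bonus for following the story
    return base_reward, progress                     # neutral otherwise

# Compare an agent that follows the story with one that skips paying.
polite = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
rude = ["enter_pharmacy", "steal_medicine", "leave"]

for name, actions in [("polite", polite), ("rude", rude)]:
    total, progress = 0.0, 0
    for action in actions:
        reward, progress = shaped_reward(action, progress)
        total += reward
    print(name, total)   # polite: 4.0, rude: -9.0
```

Under this kind of shaping, behavior that mirrors the parable accumulates reward while shortcuts that violate social norms are penalized, which is the sense in which stories serve as a "human user manual."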
Steinberg notes such issues are important as the Navy deploys more unmanned systems. He says although no offensive machines would be allowed to attack without human authorization, there are situations in which a military robot might have to weigh risks to people and make appropriate decisions. "Think of an unmanned surface vessel following the rules of the road," Steinberg says. "If you have another boat getting too close, it could be an adversary, or it could be someone who is just curious whom you don't want to put at risk."
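The kind of risk weighing Steinberg describes can be pictured as a rule that escalates a response cautiously and defers to a human operator before anything aggressive. The toy sketch below is purely an assumption-laden illustration, not Navy software; the `Contact` fields, thresholds, and response names are all hypothetical.

```python
# A toy illustration of the decision Steinberg describes: an unmanned
# surface vessel classifies an approaching boat and picks the least
# aggressive response, escalating to a human before any hostile action.
# Every name and threshold here is a made-up assumption.

from dataclasses import dataclass

@dataclass
class Contact:
    range_m: float            # distance to the approaching boat, meters
    closing_speed_mps: float  # how fast it is closing, meters per second
    ignored_warnings: int     # warnings the boat has already ignored

def choose_response(c: Contact) -> str:
    """Pick the least aggressive response consistent with the threat picture."""
    if c.range_m > 500:
        return "monitor"
    if c.ignored_warnings == 0:
        return "broadcast_warning"          # assume curiosity before hostility
    if c.range_m > 100 or c.closing_speed_mps < 5:
        return "evasive_maneuver"           # open distance per the rules of the road
    return "request_human_authorization"    # possible adversary: hand off to an operator

print(choose_response(Contact(range_m=80, closing_speed_mps=8, ignored_warnings=2)))
# -> request_human_authorization
```

The point of the structure is that a curious bystander and an adversary trigger different branches, and the machine never crosses from avoidance to engagement on its own.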
From Stars and Stripes