Robots' blind obedience to human instructions can lead to harmful and unwanted outcomes, making the case for machines to be programmed to detect the potential harm their actions could cause and to respond by either avoiding it or refusing to carry out the order, writes Tufts University professor Matthias Scheutz.
Scheutz says his lab has begun developing robot controls "that make simple inferences based on human commands. These will determine whether the robot should carry them out as instructed or reject them because they violate an ethical principle the robot is programmed to obey."
Understanding the potential hazards of instructions requires substantial background knowledge: the robot must gauge not only the outcomes of the actions themselves, but also the intentions of the people giving the instructions. Scheutz says this means making robots capable of explicitly reasoning through the consequences of actions and comparing the results to established social and moral precepts that dictate what is and is not desirable or legal.
"In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable," he notes. "Hence, they will need representations of laws, moral norms, and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles."
From The Conversation
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA