
Communications of the ACM

ACM TechNews

Ethical Trap: Robot Paralyzed By Choice of Who to Save


[Image: a dilemma yield sign]

Researchers in the Bristol Robotics Laboratory recently tested an ethical challenge for a robot, which had to decide how to save two other automatons (representing humans) from falling into a hole.

Credit: robohub.org

Bristol Robotics Laboratory's Alan Winfield and colleagues recently tested an ethical challenge for a robot, programming it to prevent other automatons (representing humans) from falling into a hole.

When the researchers used two human proxies, the robot was forced to choose which to save. In some trials it saved one proxy while letting the other perish; in others it managed to save both. In 14 of 33 trials, however, the robot spent so long making its decision that both proxies fell into the hole.
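The article does not publish the robot's controller, but the failure mode is easy to reproduce in a toy model: a rescuer that re-plans from scratch every tick, always heading toward whichever proxy currently looks most endangered, can flip its choice indefinitely and commit to neither. The sketch below is a hypothetical illustration, not Winfield's code; the positions, deadline, and "margin" heuristic are all invented for the example.

```python
def run_trial(deadline: int = 15, speed: float = 1.0) -> str:
    robot = 0.0
    proxies = {"A": -10.0, "B": 10.0}    # both will fall when the deadline hits
    for t in range(deadline):
        time_left = deadline - t
        # Margin: ticks until the proxy falls, minus travel time to reach it.
        margins = {name: time_left - abs(pos - robot) / speed
                   for name, pos in proxies.items()}
        # Re-plan from scratch every tick: chase the most endangered proxy.
        target = min(margins, key=margins.get)
        robot += speed if proxies[target] > robot else -speed
        if abs(proxies[target] - robot) < 1e-9:   # reached a proxy in time
            return f"saved {target}"
    return "both fell"   # deadline passed while the robot shuttled in place

print(run_trial())   # -> both fell
```

Moving toward one proxy lengthens the travel time to the other, so the "most endangered" label flips every tick and the robot shuttles near its starting point until time runs out. Delete one proxy from the dictionary and the same loop saves the survivor easily, which mirrors why the single-proxy case posed no difficulty.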

Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does.

Wendell Wallach, author of Moral Machines, says experiments such as Winfield's hold promise for laying the foundation on which more advanced ethical behavior can be built. "If we can get them to function well in environments when we don't know exactly all the circumstances they'll encounter, that's going to open up vast new applications for their use," he says.

Meanwhile, the Georgia Institute of Technology's Ronald Arkin has developed algorithms for military robots as part of an "ethical governor" intended to help them make appropriate decisions on the battlefield. He has tested the technology in simulated combat, where drones with such programming can choose not to shoot, or can try to minimize casualties near non-target areas such as schools and hospitals.
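The article describes only the governor's effect, not its internals. One rough, hypothetical way to picture the idea is as a veto filter sitting between target selection and weapon release; every name, field, and threshold below is an invented assumption for illustration, not Arkin's actual system.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    distance_to_protected_site_m: float  # distance to nearest school/hospital
    predicted_collateral: float          # 0..1 estimate of civilian harm

SAFE_STANDOFF_M = 500.0   # assumed minimum standoff from protected sites
MAX_COLLATERAL = 0.10     # assumed ceiling on acceptable predicted harm

def governor_permits(e: Engagement) -> bool:
    """Permit weapon release only if every constraint holds;
    the default, on any violation, is to withhold fire."""
    if e.distance_to_protected_site_m < SAFE_STANDOFF_M:
        return False   # too close to a non-target area
    if e.predicted_collateral > MAX_COLLATERAL:
        return False   # predicted collateral harm too high
    return True

for e in (Engagement("t1", 120.0, 0.02),    # near a school: vetoed
          Engagement("t2", 900.0, 0.01)):   # clear of constraints: permitted
    print(e.target_id, "engage" if governor_permits(e) else "hold fire")
```

The design point the sketch captures is that the governor never selects targets itself; it can only veto, so a failure of the filter defaults to holding fire rather than firing.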

From New Scientist
View Full Article - May Require Free Registration

Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA