

ACM TechNews

Researchers Teaching Robots How to Best Reject Orders From Humans


[Image: Nao robot at the Tufts Human-Robot Interaction Lab]

The Nao robot obeys simple spoken commands but rejects those that could result in harm to itself.

Credit: Tufts Human-Robot Interaction Lab

As robotics researchers develop more sophisticated and natural ways for humans to interact with robots, they are also building safeguards to ensure those interactions do not endanger the robots themselves. Gordon Briggs and Matthias Scheutz of Tufts University's Human-Robot Interaction Lab are working on techniques that enable a robot to reject human orders that could prove dangerous to it.

The researchers' system borrows the concept of "felicity conditions" from linguistic theory; these conditions capture whether a person understands an instruction and is capable of carrying it out. Briggs and Scheutz's framework lets a robot apply felicity conditions to determine whether it is able to carry out an instruction it receives, and whether it should do so. For example, the robot will refuse an order to walk forward if it detects that doing so would cause it to run into a wall or off a table. The system also allows human operators to clarify a command after it has been rejected, such as by promising to catch the robot if it falls. Briggs and Scheutz's research was presented at the AI for Human-Robot Interaction Symposium in Washington, D.C., earlier this month.
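To make the idea concrete, here is a minimal Python sketch of how felicity-condition checks might gate command execution. It is an illustrative approximation under our own assumptions, not Briggs and Scheutz's implementation; every class, method, and condition name below is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Command:
        action: str
        trusted_speaker: bool = True  # did the order come from an authorized human?

    class Robot:
        def __init__(self):
            # Conditions a human has explicitly discharged, e.g. "I will catch you."
            self.overrides = set()

        def felicity_conditions(self, cmd):
            # Each felicity condition is a named predicate over the command.
            return {
                "knowledge":  self.knows_how(cmd.action),    # Do I know how to do X?
                "capability": self.can_perform(cmd.action),  # Am I physically able to do X now?
                "permission": cmd.trusted_speaker,           # Is the speaker allowed to ask for X?
                "safety":     self.is_safe(cmd.action),      # Does doing X violate a safety principle?
            }

        def handle(self, cmd):
            # Execute only if every condition holds or has been overridden.
            failed = [name for name, ok in self.felicity_conditions(cmd).items()
                      if not ok and name not in self.overrides]
            if failed:
                return f"Sorry, I cannot '{cmd.action}': unmet condition(s): {', '.join(failed)}"
            return f"Executing '{cmd.action}'."

        def clarify(self, condition):
            # A human's clarification discharges one failed condition.
            self.overrides.add(condition)

        # Stub predicates standing in for real perception and planning.
        def knows_how(self, action):   return True
        def can_perform(self, action): return True
        def is_safe(self, action):     return "off the table" not in action

    robot = Robot()
    order = Command("walk forward off the table")
    print(robot.handle(order))  # refused: the safety condition is unmet
    robot.clarify("safety")     # the human promises: "I will catch you."
    print(robot.handle(order))  # now executes

Keeping each condition as a separate, named predicate means a refusal can report exactly which check failed, which in turn tells the human what clarification would unblock the command.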

From IEEE Spectrum

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
