
Communications of the ACM

ACM TechNews

In Emergencies, Should You Trust a Robot?



Researchers at the Georgia Tech Research Institute decided to see whether people would accept the authority of a robot in an emergency situation. They were surprised by the results.

Credit: Georgia Tech

Georgia Institute of Technology (Georgia Tech) researchers studying human-robot trust in an emergency situation report that people may place more trust in robots than is good for their own safety.

In a mock building fire, test subjects followed the instructions of an "Emergency Guide Robot" even after the machine had proved unreliable, and after some participants were told the robot had broken down.

In the emergency scenario, the robot may have become an "authority figure," according to the researchers. They note that in simulation-based research conducted without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

The team envisions groups of robots being stationed in high-rise buildings to direct occupants toward exits and urge them to evacuate during emergencies.

"These are just the type of human-robot experiments that we as roboticists should be investigating," says Georgia Tech professor Ayanna Howard. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."

The research will be presented March 9 at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.

From Georgia Tech News Center

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
