Yale University researchers have shown that it is relatively easy for robots to regain the trust of humans after lying to them.
The researchers arranged for 82 people to compete for points in an asteroid-shooting computer game against a NAO robot.
In some rounds, a special asteroid blaster was awarded to either the human or the robot. The special blaster could be used to earn bonus points when shooting an asteroid, or to temporarily immobilize the opponent, leaving them unable to score.
Before the 10 games began, the robot promised it would not immobilize the human player, but because it was programmed to betray its opponent, it always broke that promise in round three.
To test the best route back from this betrayal, the researchers had the robot frame the immobilization as a mistake to half of the human players and openly celebrate the betrayal to the other half. The robot then either apologized or denied that it had broken its promise.
The human participants were twice as likely to take revenge by immobilizing the robot in the following round if they thought the robot's betrayal was on purpose.
Similarly, if the robot denied having broken its promise, players were twice as likely to seek immediate revenge.
From New Scientist