How to Build a Moral Robot


[Image: A robot with a heart. Credit: digitaltrends.com]

With robots expected to play an increasingly critical role in making judgment calls where human lives are at stake, it is imperative to model moral reasoning in machines.

"Right now the major challenge for even thinking about how robots might be able to understand moral norms is that we don't understand on the human side how humans represent and reason if possible with moral norms," notes Tufts University researcher Matthias Scheutz.

Social psychologists at Brown University have begun compiling a list of the words, concepts, and rules people use to discuss morality; the next step is determining how to quantify this vocabulary.

Brown's Bertram Malle hypothesizes that the human moral landscape resembles a semantic network, in which a given context activates a subset of norms and makes them available to guide action, identify violations, and support judgments. By gathering data from enough different situations, Malle believes he can build an approximate map of the human norm network, which could then be incorporated into a robot so that it summons the correct moral framework for whatever situation is at hand.
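To make the idea concrete, here is a minimal Python sketch of such a context-triggered norm network. Everything in it is invented for illustration: the context labels, norm strings, weights, and 0.5 activation threshold are hypothetical assumptions, not Malle's actual model.

    from dataclasses import dataclass, field

    @dataclass
    class NormNetwork:
        """Toy semantic network: contexts activate weighted subsets of norms.
        (Hypothetical sketch for illustration; not Malle's actual model.)"""
        # context -> {norm: activation weight}
        edges: dict = field(default_factory=dict)

        def add(self, context, norm, weight):
            self.edges.setdefault(context, {})[norm] = weight

        def active_norms(self, context, threshold=0.5):
            """Return the subset of norms this context triggers, strongest first."""
            norms = self.edges.get(context, {})
            return sorted((n for n, w in norms.items() if w >= threshold),
                          key=lambda n: -norms[n])

        def violations(self, context, observed_behaviors):
            """Flag active norms that the observed behaviors do not satisfy."""
            return [n for n in self.active_norms(context) if n not in observed_behaviors]

    net = NormNetwork()
    net.add("hospital", "do not wake sleeping patients", 0.9)
    net.add("hospital", "defer to medical staff", 0.8)
    net.add("hospital", "speak quietly", 0.6)
    net.add("playground", "speak quietly", 0.2)  # same norm, weakly tied elsewhere

    print(net.active_norms("hospital"))
    # -> ['do not wake sleeping patients', 'defer to medical staff', 'speak quietly']
    print(net.violations("hospital", {"speak quietly"}))
    # -> ['do not wake sleeping patients', 'defer to medical staff']

The structure captures the point of the hypothesis: the same norm can be strongly tied to one context and weakly tied to another, so the context, rather than a single global rulebook, determines which norms come into play.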

The Tufts team, meanwhile, is trying to build into a robot a way to communicate why it makes certain decisions, particularly when those decisions involve refusing actions that might violate that moral framework.
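Again purely as a hypothetical sketch building on the NormNetwork above (the respond function and the forbidden_by mapping are assumptions, not the Tufts system): a robot could check a requested action against the norms its context activates and, when it refuses, name the norm the action would have violated.

    def respond(net, context, requested_action, forbidden_by):
        """Refuse an action that an active norm forbids, and explain why.
        forbidden_by maps actions to the norm they would violate (hypothetical)."""
        norm = forbidden_by.get(requested_action)
        if norm in net.active_norms(context):
            return f"I cannot '{requested_action}': that would violate '{norm}'."
        return f"Okay: {requested_action}."

    forbidden = {"play loud music": "speak quietly"}
    print(respond(net, "hospital", "play loud music", forbidden))
    # -> I cannot 'play loud music': that would violate 'speak quietly'.
    print(respond(net, "playground", "play loud music", forbidden))
    # -> Okay: play loud music.  (the norm is too weakly activated there)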

From IEEE Spectrum

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA