
Communications of the ACM


Teaching Robots 'Manners'

Digitally Capturing and Conveying Human Norms

[Image: robot servant. Credit: iStockPhoto.com]

Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks. For these "smart" machines to be considered safe and trustworthy collaborators with human partners, however, they must be able to quickly assess a given situation and apply human social norms. Such norms are intuitively obvious to most people, the result of growing up in a society where subtle or not-so-subtle cues about how to behave appropriately in a group setting or respond to interpersonal situations are provided from childhood. But teaching those rules to robots is a novel challenge.

To address that challenge, DARPA-funded researchers recently completed a project that aimed to provide a theoretical and formal framework for what norms and normative networks are; to study experimentally how norms are represented and activated in the human mind; and to examine how norms can be learned and might emerge from novel interactive algorithms. The team created a cognitive-computational model of human norms in a representation that can be coded into machines, and developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations by drawing on human data.
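The article does not detail the model itself, but a machine-readable norm representation of the kind described, context-bound rules carrying strengths that can be updated from human data, might look something like the minimal Python sketch below. Every name and field here is an illustrative assumption, not the researchers' actual formalism.

    from dataclasses import dataclass

    @dataclass
    class Norm:
        """One social norm: in a given context, an action is prescribed
        or prohibited with some learned strength between 0.0 and 1.0."""
        context: frozenset      # features that must hold, e.g. {"library"}
        action: str             # e.g. "speak_loudly"
        prohibited: bool        # True = "don't do this"; False = "do this"
        strength: float = 0.5   # confidence in the norm, learned from data

    def update_from_human_data(norm: Norm, observed_compliance: bool,
                               learning_rate: float = 0.1) -> None:
        """Nudge a norm's strength toward what observed human behavior
        suggests; a stand-in for the project's actual learning algorithm."""
        target = 1.0 if observed_compliance else 0.0
        norm.strength += learning_rate * (target - norm.strength)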

The work represents important progress towards the development of AI systems that can "intuit" how to behave in certain situations in much the way people do.

"The goal of this research effort was to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans," says Reza Ghanadan, DARPA program manager.

As an example of how humans intuitively apply social norms, consider a cell phone ringing in a quiet library. The person receiving the call would quickly silence the distracting phone and whisper into it before stepping outside to continue the call in a normal voice. Today, an AI phone-answering system would not automatically respond with that kind of social sensitivity.

"We do not currently know how to incorporate meaningful norm processing into effective computational architectures," Ghanadan says, adding that social and ethical norms have a number of properties that make them uniquely challenging. "There seems to be an enormous number of these norms, yet they are highly context-specific and only a relevant subset of them get activated, depending on the situation. Moreover, they seem to exist in an organizational hierarchy but can also be activated in horizontal bundles — networks of norms tied together by the contexts in which they apply and triggered by certain context-specific features of the world. They can be in conflict with one another but they are also continuously being updated."

Further complicating matters, norms are activated extremely quickly. "That's something we are all familiar with," Ghanadan says, "since 'normal' people detect norm violations very quickly!" And in people, new norms, or the preconditions for activating them, are folded into an already complex norm network through not just one but a variety of modalities, such as observation, inference, and instruction. "The uncertainty inherent in these kinds of human data inputs makes machine learning of human norms extremely difficult," Ghanadan says.
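One simple way to picture learning under that uncertainty is a Bayesian update in which each modality carries an assumed reliability. The reliability figures and update rule below are hypothetical, chosen only to make the idea concrete.

    # Hypothetical reliabilities for each way a norm can be acquired.
    RELIABILITY = {"instruction": 0.9, "observation": 0.7, "inference": 0.6}

    def bayes_update(prior: float, supports_norm: bool, modality: str) -> float:
        """Posterior probability that a norm holds after one piece of
        evidence, weighted by how reliable that modality is assumed to be."""
        r = RELIABILITY[modality]
        p_true = r if supports_norm else 1.0 - r     # P(evidence | norm holds)
        p_false = 1.0 - r if supports_norm else r    # P(evidence | it doesn't)
        return p_true * prior / (p_true * prior + p_false * (1.0 - prior))

    belief = 0.5                                        # no initial opinion
    belief = bayes_update(belief, True, "instruction")  # told the rule outright
    belief = bayes_update(belief, True, "observation")  # saw people comply
    belief = bayes_update(belief, False, "inference")   # one conflicting cue
    print(f"P(norm holds) = {belief:.2f}")              # ~0.93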

Ultimately, for a robot to become social or perhaps even ethical, it will need to have a capacity to learn, represent, activate, and apply a large number of norms that people in a given society expect one another to obey, Ghanadan says. That task will prove far more complicated than teaching AI systems rules for simpler tasks such as tagging pictures, detecting spam, or guiding people through their tax returns. But by providing a framework for developing and testing such complex algorithms, the new research could accelerate the day when machines emulate the best of human behavior.

"If we're going to get along as closely with future robots, driverless cars, and virtual digital assistants in our phones and homes as we envision doing so today, then those assistants are going to have to obey the same norms we do," Ghanadan says.

At some point, it may even be a robot behind that desk at the library, raising its finger and saying, "Shhhh!"

The work was conducted by researchers at Brown University and Tufts University, led by Bertram Malle at Brown.


 
