
Communications of the ACM

ACM Careers

To Make a Social Robot, Key Is Satisfying the Human Mind



What makes a robot socially acceptable? Clockwise from top left: Maja Matarić of the University of Southern California; Simon the robot; Andrea Thomaz of the Georgia Institute of Technology; and Ayse Saygin of the University of California, San Diego.

Credit: The Kavli Foundation

After years of existing only in fiction, social robots are finally being designed that can more closely emulate how people express themselves, interact, and learn, all while performing jobs like teaching social behavior to children with autism or helping stroke patients with their physical rehabilitation exercises.

Recently, The Kavli Foundation brought together three pioneers in human-robot interaction to discuss these advances, as well as the technological hurdles ahead. While many challenges remain, they say, the biggest is getting robots to match the needs and expectations of the human mind. "How we interact with embodied machines is different than how we interact with a computer, cell phone, or other intelligent devices," says Maja Matarić, a professor at the University of Southern California. "We need to understand those differences so we can leverage what is important."

Director of USC's Center for Robotics and Embedded Systems, Matarić has developed social robots for a variety of therapeutic roles. According to Matarić, one key to a successful social robot is considering not only how it communicates verbally, but also how it communicates physically, through facial expressions and body language. Also important: embedding the right personality. "We found that when we matched the personality of the robot to that of the user, people performed their rehab exercises longer and reported enjoying them more."

Another key is matching a robot's appearance to users' perception of its abilities. Ayse Saygin is an assistant professor at the University of California, San Diego and a faculty member of the Kavli Institute for Brain and Mind. Last year, Saygin and her colleagues set out to discover whether what they call the "action perception system" in the human brain is tuned more to human appearance or to human motion. Using brain scans, they found that when people observed a highly humanlike robot, as opposed to a less humanlike one, the brain detected the mismatch between its appearance and its motion and did not respond as well. "Making robots more humanlike might seem intuitively like that's the way to go, but we find it doesn't work unless the humanlike appearance is equally matched with humanlike actions."

A social robot also needs the ability to learn socially. Andrea Thomaz is an assistant professor at the Georgia Institute of Technology and director of its Socially Intelligent Machines Lab. Her lab has built a robot designed to learn from humans the way a person would: through speech, observation, demonstration, and social interaction. "In my lab, we see human social intelligence as being comprised of four key components: the ability to learn from other people, the ability to collaborate with other people, the ability to apply emotional intelligence, and the ability to perceive and respond to another person's intentions. We try to build this social intelligence in our robots."

Read Matarić, Saygin, and Thomaz's complete roundtable discussion on social robots.


 
