
Communications of the ACM

ACM News

Prepare Your Meal Together with a Robot


A human-robot team prepares lunch together, demonstrating robots that can learn from demonstration, understand humans, and truly collaborate.

Credit: Vaibhav V. Unhelkar, Shen Li, Julie Shah

A human and a robot sit opposite each other at a rectangular table. Everything needed to prepare a meal of sandwiches and drinks is on the table. "Hey teammate, let's make some meals," says the robot. "I will pour juice. Please make and wrap the sandwiches. Let's start."

Thus starts the latest demonstration of how human and robot can truly collaborate, rather than merely coexist, as we do with a robotic vacuum cleaner. The demonstration was developed by Julie Shah and her team. Shah is an associate professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). She also leads the Interactive Robotics Group of the university's Computer Science and Artificial Intelligence Laboratory (CSAIL).

"Our aim is to build intelligent machine teammates that collaborate with humans in order to enhance human capabilities,", says Shah. "For this, machines need to understand us and we need to understand machines."

The table in the demonstration is divided into four clearly marked sections, which can be used either by the human or by the robot. In each of the sections, a glass has to be filled and a sandwich has to be prepared.

"Please make the next sandwich at 2," says the robot during the demonstration. The robot itself starts to pour juice for section 1. When robot and human have both completed their tasks, the human starts making a sandwich at section 1. Because the robot is certain of what the human is doing where, it doesn't communicate and just starts pouring juice at section 2. Later, however, it says, "I am pouring juice at 3," so the human knows the intention of the robot. The meal preparation proceeds until four sandwiches are prepared and four glasses are filled with juice.

"The human-robot team needs to coordinate its motions and activities to complete the meal preparation safely and efficiently in a shared workspace," explains Shah. "The robot has to decide which of the four cups to fill, whether it can move its arm or whether it has to wait, how to move its arm, whether or not it should communicate with the human, and if yes, what to tell?"

For the robot to be an intelligent teammate, it needs three capabilities, Shah says.

First, it needs to know what the human is thinking, to infer the human's implicit preferences. This is the most exciting new aspect of her work, says Shah: "The robot is not just using a predictive model of what the person will do based on what it sees; it tries to hypothesize which mental states might drive the person's behavior. The robot is acquiring a Theory of Mind: it can reason about what it thinks a person is thinking or believing." A simple version of this kind of inference is sketched after the third capability below.

The second capability an intelligent robot teammate needs is the ability to anticipate what the person will do next.

The third and final capability is being able to execute the task and to make fast adjustments when needed, for example when things don't go according to plan.
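The first capability, hypothesizing the hidden preferences behind observed behavior, is often formalized as Bayesian inference over candidate mental states. The sketch below, referenced above, is a deliberately simple Python illustration; the two candidate preferences and the hand-written likelihood model are assumptions of this example, not the team's actual model.

    # Minimal Bayesian "Theory of Mind" sketch: infer which hidden
    # preference best explains a person's observed choices.
    # The preference set and likelihoods are invented for illustration.

    PREFERENCES = ["left_to_right", "right_to_left"]

    def likelihood(prev_section, next_section, preference):
        """P(person moves prev -> next | preference), a hand-written model."""
        moved_right = next_section > prev_section
        if preference == "left_to_right":
            return 0.9 if moved_right else 0.1
        return 0.1 if moved_right else 0.9

    def update_belief(observations, belief=None):
        """Bayes-update a belief over preferences from (prev, next) moves."""
        if belief is None:
            belief = {p: 1.0 / len(PREFERENCES) for p in PREFERENCES}
        for prev, nxt in observations:
            belief = {p: belief[p] * likelihood(prev, nxt, p) for p in PREFERENCES}
            total = sum(belief.values())
            belief = {p: v / total for p, v in belief.items()}
        return belief

    # The human finished section 1 and moved to section 2: evidence for a
    # left-to-right preference the robot can use to predict the next move.
    print(update_belief([(1, 2)]))  # -> {'left_to_right': 0.9, 'right_to_left': 0.1}

A belief like this feeds directly into the second capability: the preference with the highest posterior yields a prediction of the human's next section, which the robot can plan its own motions and announcements around.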

Shah's demonstration is the first that brings all three capabilities together. It builds on two previous demonstrations of intelligent robotic teammates by the MIT team. In collaboration with auto manufacturers Honda and BMW, the researchers several years ago developed a factory robot able to anticipate what a person would do, based on what the robot observed. The robot, for example, would predict the walking course of a human worker, and so avoid moving into the worker's path. They also demonstrated that a robotic arm can understand the intentions of a human worker and help by handing over the correct instrument at the right time.

"In both cases," Shah says, "the robot doesn't infer anything about the priorities and preferences in the mental state of the human."

In 2016, the MIT team demonstrated a robotic medical assistant, in the form of a Nao robot, in a Boston hospital. The robot was programmed to help a planning nurse assign nurses and rooms and schedule procedures; the planning nurse makes thousands of such decisions without any computational help. Says Shah, "Although our robot can't deal with all the real-world complexity, it can reduce the cognitive burden of the planning nurse by taking over simpler decisions. That is already a great advantage, especially when we realize that cognitive burden is an important cause of fatal hospital accidents. In our experiments, experienced humans agreed with 90% of the decisions made by the robot."

Shah's work combines robotics with aerospace engineering, computer science, cognitive science, and what are called 'human factors': understanding the interaction between human and machine. One of the main lessons Shah has drawn from her years of research is that it is often a poor strategy to take a task as a person performs it and simply assign it to a robot. "The way the work is done by a person is really designed for a person," says Shah. "If you want a task to be done efficiently by a human-robot team, you really need to rethink the task with the team in mind."

Over the next decade, Shah expects research on intelligent machine teams to focus increasingly on communication between human and machine. "In a good team, you need to know what others are thinking. None of this happens without direct and indirect communication. This might also be the key to finally boosting the commercial application of co-bots, robots that can work side by side with humans in factories and aerospace, but also in health care."

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.


 
