One Brain to Teach Them All


From left, Chelsea Finn, Sergey Levine, and Pieter Abbeel help the robot BRETT get a peg into a hole. Credit: Peter Earl McCollough/The New York Times

Increasingly, robot memory and processing are moving out of the lab and into the cloud.

When visitors first encounter the Berkeley Robot for the Elimination of Tedious Tasks, or BRETT, they assume there must be a hard-working electronic brain hiding behind the eye-like cameras in its head. After all, the robot is capable of learning some impressive tasks, such as connecting LEGO bricks. Yet BRETT extends beyond its physical, humanoid shell.

"Essentially none of the computation happens inside the robot, even though it has four computers inside," says computer scientist Pieter Abbeel of the University of California, Berkeley. Instead, more powerful desktops in Abbeel's lab process the data BRETT gathers, then send the robot commands. "We almost never compute on the robot," adds Abbeel. "Why would you?"

This approach has been common in robotics for years, but now there's a larger shift underway. Memory and processing are moving out of the lab and into the cloud. In part, the rise of what has become known as cloud robotics is happening out of necessity. Artificial intelligence and learning algorithms command more compute power, and the data captured by each robot's sensors demand more memory, so it is natural to move this workload to the cloud.

However, the shift is also sparking new ideas about how to accelerate robot learning. Instead of each robot mastering tasks independently, cloud-connected machines could share their knowledge and experiences. Robots could actually start to teach each other.

The potential benefits of self-taught robots are numerous. Today, writing the code for a robot to complete a single specific task can take a software engineer months. "We'd rather have the robot figure out how to do things for itself and tell others how to do it," says Brown University roboticist John Oberlin.

The self-driving cars at Google already take advantage of the cloud. When one vehicle learns to navigate through an intersection, so do all the other self-driving cars. However, what about the world outside the Googleplex?

Last year, Stanford University computer scientist Ashutosh Saxena launched RoboBrain, a massive knowledge database for robots. (RoboBrain is not the only such project; Google, which declined to comment for this article, filed a patent for a type of cloud-based robot brain in 2012.)

To allow machines to learn from one another, Saxena realized knowledge needed to be represented in a very general way. "Let's say you see a person operating an espresso machine," he says. "You don't want to store the exact video. That's not knowledge. That's just data." Saxena designed his system so that it would extract more general information, such as how the person moves his hand, which parts of the machine he touches, how he manipulates the lever, and the sequence of these events.
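In code, that distinction is roughly the difference between archiving video frames and archiving a short, structured event sequence. The sketch below is illustrative only; the class and field names are invented here and are not RoboBrain's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ManipulationStep:
    action: str       # what the person did, e.g. "grasp" or "pull"
    object_part: str  # which part of the machine was touched
    hand_motion: str  # coarse description of how the hand moved

@dataclass
class TaskKnowledge:
    task: str                                                    # e.g. "operate_espresso_machine"
    steps: list[ManipulationStep] = field(default_factory=list)  # ordered sequence of events

# A few hundred bytes of generalized knowledge, versus gigabytes of raw video:
espresso = TaskKnowledge("operate_espresso_machine", [
    ManipulationStep("grasp", "portafilter_handle", "reach_and_close"),
    ManipulationStep("pull", "brew_lever", "downward_arc"),
])
```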

Saxena cites another experiment as a demonstration. First, a humanoid robot based in his old Cornell University lab learned to pick up and move different kinds of cups. The question was whether another robot, based at Brown University, could learn from that first robot's experience. These were different machines, with different motors and joints and grippers, so the lesson could not be too specific. Instructing a robot to grasp a handle with three fingers when it only has two would not be very helpful.

The Cornell robot, a PR2 humanoid from Willow Garage, passed more general knowledge into RoboBrain, including the basic attributes of different mugs and cups, the best places to grasp them, and the ideal orientation for setting them down safely. The second machine, one of Rethink Robotics' Baxter humanoids, then drew on this information to place mugs and cups onto small, table-like pedestals in its lab. "Baxter can't ask RoboBrain how to move its arm because RoboBrain doesn't know what Baxter looks like," Saxena says.

Instead, he says, the Baxter robot asks more general questions about what it sees. "These queries go to RoboBrain, which says, 'Those cups have handles, and if you give me pictures of the handles, I can tell you where to grab them'." Then it was up to Baxter to determine exactly how to grasp the mug in the suggested spot.
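That division of labor, with robot-agnostic answers coming from the cloud and robot-specific motion planning staying on the machine itself, might look something like the following sketch. Every name here (query, plan_grasp, and so on) is a hypothetical stand-in for interfaces the article does not spell out.

```python
def place_cup(robot, knowledge_base, camera_image):
    # Ask a general, robot-agnostic question about the scene.
    parts = knowledge_base.query("graspable_parts", image=camera_image)
    # The answer is about the object, not the robot: where the handle is.
    handle_region = parts["cup"]["handle"]  # an image region, not joint angles
    # Only the robot knows its own arm; it plans the actual grasp locally.
    grasp = robot.plan_grasp(handle_region)
    robot.execute(grasp)
    # The safe set-down orientation is also general knowledge the database supplies.
    robot.set_down(orientation=parts["cup"]["upright_orientation"])
```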

Moving mugs is only a small first step. The long-term goal is to create a massive cloud-based knowledge database that any robot could use to learn a wide range of tasks. Berkeley computer scientist Abbeel believes this could be tremendously impactful. "But it's also going to be very complex to make that all work," he cautions.

Gregory Mone is a Boston, MA-based writer and the author of the novel Dangerous Waters.