
Communications of the ACM

ACM TechNews

Robot Toddler Learns to Stand By "imagining" How to Do It


Darwin tries moving its torso around under the control of several neural networks.

Researchers at the University of California, Berkeley are trying to help robots address the issue of variability in their environments.

Credit: Technology Review

The ability of robots to carry out tasks and move about their environment is increasingly impressive, but they still face major challenges.

For example, during the recent U.S. Defense Advanced Research Projects Agency Robotics Challenge, several of the competing robots toppled over on unstable terrain or stumbled when asked to execute complex maneuvers.

"Just even a little variability beyond what [the robot] was designed for makes it really hard to make it work," says University of California, Berkeley professor Pieter Abbeel.

He and his team are working to address this issue of variability with a robot called Darwin. Using a high-level deep-learning network, Abbeel and his team are giving Darwin the ability to "imagine" its motions and how they might fail before making them.

The group has previously used deep-learning networks to teach robots to complete tasks, such as fitting a shaped block into the appropriate hole, by attempting the task many times. However, that approach is poorly suited to teaching a robot to stand up, because repeated physical attempts would put too much wear and tear on the robot's joints. Instead, Darwin uses a two-tiered network: one tier simulates the motion, and the other practices it physically once the robot thinks it has a good solution.
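The article does not describe how the two tiers are implemented, but the basic "imagine first, act only when the plan looks good" idea can be sketched as a training loop that refines a motion policy entirely in simulation and spends expensive hardware trials only after the simulated outcome clears a threshold. The sketch below is a minimal illustration under that assumption; the SimulatedStandUp and PhysicalStandUp environments, the random-search refinement, and the thresholds are hypothetical stand-ins, not the Berkeley team's actual system.

```python
# Minimal sketch (hypothetical): a two-tier "imagine, then act" training loop.
# Tier 1 searches for a stand-up motion in simulation; tier 2 runs the best
# candidate on hardware only after simulated success looks likely.
import numpy as np


class SimulatedStandUp:
    """Toy stand-in for a physics simulator of the stand-up task."""

    def rollout(self, params):
        # Pretend reward peaks as the motion parameters approach a target pose.
        target = np.linspace(-1.0, 1.0, params.size)
        return -np.mean((params - target) ** 2)


class PhysicalStandUp:
    """Toy stand-in for the real robot: noisier and costly to query."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def rollout(self, params):
        target = np.linspace(-1.0, 1.0, params.size)
        return -np.mean((params - target) ** 2) + self.rng.normal(0.0, 0.05)


def train_in_simulation(sim, params, iters=500, sigma=0.1, seed=1):
    """Tier 1: cheap random-search refinement done entirely 'in the head'."""
    rng = np.random.default_rng(seed)
    best_reward = sim.rollout(params)
    for _ in range(iters):
        candidate = params + sigma * rng.normal(size=params.shape)
        reward = sim.rollout(candidate)
        if reward > best_reward:
            params, best_reward = candidate, reward
    return params, best_reward


def two_tier_training(param_dim=8, sim_threshold=-0.01, hw_trials=3):
    sim, robot = SimulatedStandUp(), PhysicalStandUp()
    params = np.zeros(param_dim)

    # "Imagine" first: refine the motion in simulation before touching hardware.
    params, sim_reward = train_in_simulation(sim, params)
    if sim_reward < sim_threshold:
        print("Simulated plan still looks risky; skipping hardware trials.")
        return params

    # Only now spend a few joint-wearing trials on the physical robot.
    for trial in range(hw_trials):
        print(f"hardware trial {trial}: reward {robot.rollout(params):.4f}")
    return params


if __name__ == "__main__":
    two_tier_training()
```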

From Technology Review

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
