
Communications of the ACM

ACM TechNews

Robots Learn to Handle Objects, Understand New Places


Cornell Robots

After scanning a room, a Cornell robot points to the keyboard it was asked to locate. It uses context to identify objects, such as the fact that a keyboard is usually in front of a monitor.

Credit: Courtesy of Cornell University's Personal Robotics Lab

A team from Cornell University's Personal Robotics Laboratory is teaching a robot to find its way around new environments and manipulate objects, with machine learning as a key part of the project.

"We just show the robot some examples and it learns to generalize the placing strategies and applies them to objects that were not seen before," says team leader and professor Ashutosh Saxena. "It learns about stability and other criteria for good placing for plates and cups, and when it sees a new object--a bowl--it applies them."

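The generalization Saxena describes can be pictured as a classifier trained on geometric features of example placements and then asked to score a placement for an object class it has never seen. The sketch below is purely illustrative, not the Cornell lab's code: the features (surface flatness, contact area, center-of-mass offset), the training values, and the choice of scikit-learn are all assumptions made for the example.

```python
# Illustrative sketch only: learn "good placement" criteria from a few labeled
# examples, then apply them to an object (a bowl) that was never in training.
# Features and values are hypothetical stand-ins for point-cloud measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [surface_flatness, contact_area, center_of_mass_offset]
X_train = np.array([
    [0.95, 0.80, 0.02],   # plate lying flat in a rack slot  -> stable
    [0.10, 0.05, 0.40],   # mug balanced on its rim          -> unstable
    [0.90, 0.60, 0.05],   # cup upright on a shelf           -> stable
    [0.20, 0.10, 0.35],   # plate wedged at a steep angle    -> unstable
])
y_train = np.array([1, 0, 1, 0])   # 1 = good placement, 0 = bad placement

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A bowl never appeared in training, but its candidate placement is described
# by the same features, so the learned stability criteria carry over.
bowl_candidate = np.array([[0.88, 0.55, 0.06]])
print("P(good placement) =", clf.predict_proba(bowl_candidate)[0, 1])
```
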
The robot placed a plate, mug, martini glass, bowl, candy cane, disc, spoon, and tuning fork on a flat surface, on a hook, in a stemware holder, in a pen holder, and on several different dish racks. Surveying its environment with a three-dimensional camera, the robot randomly tested small volumes of space as candidate placement locations. It placed objects correctly 98 percent of the time when it had seen the objects and environments previously, and 95 percent of the time when it had not.

Saxena's team first developed a system that enables the robot to scan a room and identify its objects, training it on office and home scenes. The robot correctly identified objects about 83 percent of the time in home scenes and 88 percent of the time in offices.
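
The "randomly testing small volumes of space" step can likewise be sketched as sampling small cubes from the 3-D scan and keeping the highest-scoring one. The synthetic point cloud, the 10 cm cube, and the flatness-based score below are hypothetical stand-ins, not the actual Cornell pipeline.

```python
# Illustrative sketch only: sample candidate volumes from a 3-D scan and score
# each one for how well it could support an object; keep the best candidate.
import numpy as np

rng = np.random.default_rng(0)
scan = rng.uniform(0.0, 1.0, size=(5000, 3))    # stand-in point cloud, meters

def placement_score(points: np.ndarray) -> float:
    """Toy score: prefer volumes with many points forming a flat patch."""
    if len(points) < 20:
        return 0.0                              # too little support underneath
    flatness = 1.0 / (1e-6 + points[:, 2].std())
    return flatness * len(points)

best_score, best_center = -1.0, None
for _ in range(200):                            # randomly test candidate volumes
    center = rng.uniform(0.1, 0.9, size=3)      # center of a 10 cm cube
    mask = np.all(np.abs(scan - center) < 0.05, axis=1)
    score = placement_score(scan[mask])
    if score > best_score:
        best_score, best_center = score, center

print("place the object near", best_center)
```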

From Cornell Chronicle

Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA