
Communications of the ACM

News

I, Domestic Robot



Willow Garage's PR2, an open source robotics research and development platform.

Credit: Willow Garage

Industrial robots, fixed-location and single-function machines, have long been staples of advanced manufacturing. Medical robots, which can help surgeons operate with smaller incisions and less blood loss than traditional surgical methods, are making fast inroads in metropolitan and suburban hospitals. Rescue robots, including wheeled and snake-like machines, are increasingly common, and were deployed in the search for survivors in the aftermath of the earthquake and tsunami that recently struck Japan. By contrast, the promise of multipurpose domestic assistance robots, capable of a wide range of tasks, has remained a distant goal.

However, recent advances in hardware such as laser rangefinders, open source robotic operating systems, and faster algorithms have emboldened researchers. Robots are now capable of folding laundry, discerning where to place an object on cluttered surfaces, and detecting the presence of people in a typical room setting.

"It's easy for me to be optimistic, but if robots aren't actually being useful and fairly widespread in 10 years, then I will be fairly disappointed," says Charles Kemp, assistant professor of biomedical engineering at Georgia Tech University.


Sensors Enable Awareness

In recent months, numerous research teams have published papers detailing advances in robots' perceptual capabilities. These perceptual advances enable the robots' mechanical components to complete domestic tasks that were previously impossible.

Kemp and his research team have pioneered semantic and situational awareness in robots through several methods, including placing radio frequency identification (RFID) semantic tags on common objects such as light switches, and combining two-dimensional camera images with three-dimensional point clouds gathered by laser rangefinders.

University of Bonn researchers Jörg Stückler and Sven Behnke have also demonstrated success using a combination of 2D laser and camera sensors. They programmed a mobile service robot to combine laser rangefinder data that hypothesizes the presence of a person's legs and torso with 2D frontal and profile images of the detected face.

Stückler and Behnke also modeled the semantic probability of detecting a person's presence in different locations of a room—high probability in a chair and low probability on a bookshelf, for instance—and supplied the robot with that knowledge. This prior knowledge of room semantics, along with a precalculated range of plausible face heights, helps the robot reject false positive detections.
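To make the fusion idea concrete, the following Python sketch combines a leg-and-torso detection score, a face detection score, and a semantic room prior into a single person probability. The class names, probabilities, and height range are illustrative assumptions, not values from Stückler and Behnke's actual system.

    # Minimal sketch of fusing a laser-based person hypothesis with a face
    # detection and a semantic room prior. All numbers are illustrative.

    # Prior probability of finding a person at a given location type.
    ROOM_PRIOR = {
        "chair": 0.6,
        "sofa": 0.5,
        "open_floor": 0.3,
        "bookshelf": 0.02,   # a person "on" a bookshelf is very unlikely
    }

    # Plausible height range (meters) for a detected face above the floor.
    FACE_HEIGHT_RANGE = (0.8, 2.0)

    def person_probability(location_type: str,
                           leg_torso_score: float,
                           face_score: float,
                           face_height_m: float) -> float:
        """Combine laser and camera evidence with the semantic prior.

        leg_torso_score and face_score are detector confidences in [0, 1].
        Detections at implausible face heights are rejected outright, which
        is how a precalculated height range filters false positives.
        """
        lo, hi = FACE_HEIGHT_RANGE
        if not (lo <= face_height_m <= hi):
            return 0.0
        prior = ROOM_PRIOR.get(location_type, 0.1)
        # Naive, independence-assuming fusion of the two detector scores.
        evidence = 1.0 - (1.0 - leg_torso_score) * (1.0 - face_score)
        return prior * evidence

    # The same detection scores rank far higher at a chair than at a
    # bookshelf, so the bookshelf hypothesis can be discarded.
    print(person_probability("chair", 0.7, 0.8, 1.2))      # ~0.56
    print(person_probability("bookshelf", 0.7, 0.8, 1.2))  # ~0.019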

Steve Cousins, CEO of Willow Garage, which manufactures the open platform, general-purpose PR2 robot, says further advances in perceptual capabilities may be even more likely with the recent debut of sensing technology that enables a computer to analyze an area in three dimensions and create what the technology's manufacturer, PrimeSense, calls a synchronized depth image. The technology sells for less than one-twentieth of the price of the de facto standard research rangefinder, which costs about $5,000. Both Cousins and Kemp believe the low cost of the PrimeSense sensor (it is a key component of Microsoft's Kinect gaming system) may lead to a surge in situational and semantic robotics research. Kemp says his team recently installed one of the new sensors on its PR2.

In essence, Kemp says, the sensor's real-time depth sensing greatly simplifies a robot's data-gathering process.

Prior to installing the new sensor, on projects such as the work on having the robot discern clutter, "we had to tilt the laser rangefinder up and down, then snap a picture and relate those two things," he says. "That's a pretty slow process and really expensive."


A Semantic Database

Kemp says domestic robotics involves two distinct but closely related research areas: problems of perception and problems of mechanical awareness. For example, a roving robot meant to help a person with basic housekeeping chores must not only be able to differentiate a refrigerator door handle from a light switch, but must also calculate which approach its arms should take and how firmly it should grip the respective levers.

In the experiment using RFID tags, Kemp created a semantic database the robot could refer to after identifying an object. The database contains instructions on how the robot should act upon an object. For example, under "actions," after a robot identifies and contacts a light switch, the commands are "off: push bottom" and "on: push top." Each of these actions is further sub-programmed with a force threshold the robot should not exceed.
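A database of this kind can be quite small. The following Python sketch shows the general shape of such a semantic action database; the object names, motion primitives, and force values are illustrative assumptions, not Kemp's actual entries.

    # Minimal sketch of a semantic action database of the kind described
    # above. Objects, actions, and force thresholds are illustrative.

    SEMANTIC_DB = {
        "light_switch": {
            "actions": {
                "on":  {"motion": "push_top",    "max_force_n": 5.0},
                "off": {"motion": "push_bottom", "max_force_n": 5.0},
            },
        },
        "refrigerator_door": {
            "actions": {
                "open":  {"motion": "pull_handle", "max_force_n": 40.0},
                "close": {"motion": "push_door",   "max_force_n": 30.0},
            },
        },
    }

    def plan_action(object_id: str, goal: str) -> dict:
        """Look up how to act on an identified object, including the
        force threshold the robot should not exceed."""
        return SEMANTIC_DB[object_id]["actions"][goal]

    # After identifying a light switch (e.g., via its RFID tag), the robot
    # retrieves the motion primitive and force limit for turning it on.
    print(plan_action("light_switch", "on"))
    # {'motion': 'push_top', 'max_force_n': 5.0}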

Kemp is also investigating another approach to situational awareness that entails equipping human subjects with handheld touch sensors. Subjects hold the sensors while completing common tasks, such as opening refrigerator and cabinet doors in multiple settings. Information about the kinematics and forces of these actions is then entered into a database that a service robot can access when it approaches one of these objects en route to performing a task.

"If the robot knows it is a refrigerator, it doesn't have to have worked with that specific refrigerator before," he says. "If the semantic class is 'refrigerator' it can know what to expect and be more intelligent about its manipulation. This can make it more robust and introduces this notion of physically grounded common sense about things like how hard you should pull when opening a door."

Offboard computation, akin to the kinematic database, is also being used to improve already-successful robotic tasks. A team of researchers led by Pieter Abbeel, an assistant professor of computer science at the University of California, Berkeley, programmed a general-purpose Willow Garage PR2 robot to fold towels laid down randomly on a tabletop, using a dense optical flow algorithm and high-resolution stereo perception of the towels' edges and likely corners. Abbeel's experiment yielded a perfect 50-out-of-50 success rate; in the 22 attempts that were not initially successful, the robot recovered by dropping the towel, regrasping a corner, and carrying on until the task was completed.

Abbeel says his team has greatly reduced the time needed to fold each towel in subsequent experiments, from 25 minutes to approximately four minutes, by using a new approach: rather than relying heavily on onboard perceptual data, Abbeel performs parallel computations on mesh models in the Amazon cloud. Those models, he says, are "triangles essentially put together like people using computer graphics or physics-based simulations. Once you have that mesh model, you can do a simulation of how this article of clothing would behave depending on where you pick it up."

The new approach, he says, relies on the observation that the bottommost point of any hanging article is usually a corner. Two consecutive grasps of a towel are therefore highly likely to yield two diagonally opposed corners. For t-shirts, likely consecutive grasps will be at the ends of the two sleeves for a long-sleeved shirt, or at the end of one sleeve and a point diagonally across at the hip for a short-sleeved shirt.

"There are a few of these configurations you are very likely to end up in, then all you need to do perception-wise is to differentiate between these very few possibilities," Abbeel says.


ROS is Boss

Another hallmark of recent progress in the domestic robot community is the growth of an open source ecosystem built around the BSD-licensed Robot Operating System (ROS), which is largely maintained by Willow Garage and Stanford University.

"Our goal has basically been to set the foundation for a new industry to start," Cousins says. "We want two people to be able to get together in a garage and get a robotics business off the ground really quickly. If you have to build software as well as hardware from scratch, it's nearly impossible to do that."

Abbeel says the ROS ecosystem may go a long way toward taking robots out of the lab and into real-world settings.

"In order for these robots to make their way into houses and become commercially viable, there will need to be some sort of bootstrapping," Abbeel says. "It will be very important for people to do some applications extremely well, and there has to be more than one. So I hope what may be happening, with robots in different places, is that different schools will develop a true sensibility for the robot, and these things could potentially bootstrap the process and bring the price down. A single app won't be enough."

Cousins says the combination of falling hardware prices for devices such as the PrimeSense sensor and the blooming ROS ecosystem might be analogous to the personal computer research of the early 1970s; he specifically compares the PR2 to the iconic Xerox Alto desktop computer. The PR2's list price is $400,000.

"Right now the PR2 is the platform to work on if you want to do mobile manipulation research," Cousins says. "It's a little expensive, but in today's dollars it's about the same as the Alto. It's not going to be the robot you put into your grandmother's home, but the software we develop on the PR2 will likely be a key component of the market. I think ROS is going to be driving those future personal robots."

Further Reading

Stückler, J. and Behnke, S.
Improving people awareness of service robots by semantic scene knowledge, Proceedings of RoboCup International Symposium, Singapore, June 25, 2010.

Maitin-Shepard, J., Cusumano-Towner, M., Lei, J., and Abbeel, P.
Cloth grasp point detection based on multiple-view geometric cues with application to robot towel folding, 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, May 3–8, 2010.

Schuster, M.J., Okerman, J., Nguyen, H., Rehg, J.M., and Kemp, C.C.
Perceiving clutter and surfaces for object placement in indoor environments, 2010 IEEE-RAS International Conference on Humanoid Robots, Nashville, TN, Dec. 6–8, 2010.

Yamazaki, A., Yamazaki, K., Burdelski, M., Kuno, Y., and Fukushima, M.
Coordination of verbal and non-verbal actions in human–robot interaction at museums and exhibitions, Journal of Pragmatics 42, 9, Sept. 2010.

Attamimi, M., Mizutani, A., Nakamura, T., Sugiura, K., Nagai, T., Iwahashi, N., Okada, H., and Omori, T.
Learning novel objects using out-of-vocabulary word segmentation and object extraction for home assistant robots, 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, May 3–8, 2010.


Author

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.


Footnotes

DOI: http://doi.acm.org/10.1145/1941487.1941494


Figures

Figure. Willow Garage's PR2, an open source robotics research and development platform.



©2011 ACM  0001-0782/11/0500  $10.00
