
Communications of the ACM

ACM TechNews

AI Begins to Understand the 3D World


A research version of an industrial robot from Rethink Robotics.

Researchers at the University of California, Berkeley are working with a machine from Rethink Robotics to help artificial intelligence learn how objects in the real world can be manipulated.

Credit: University of California, Berkeley

Artificial intelligence (AI) researchers are constructing systems that can visualize the three-dimensional (3D) world and take action, a capability Massachusetts Institute of Technology professor Josh Tenenbaum cites as a key trend in learning-based vision systems.

"That includes seeing objects in depth and modeling whole solid objects--not just recognizing that this pattern of pixels is a dog or a chair or table," he says.

Tenenbaum and colleagues have employed a popular machine-learning method called generative adversarial modeling to enable a computer to learn the characteristics of 3D space from examples so it can produce new objects that are realistic and physically accurate. The researchers presented the work last week at the Neural Information Processing Systems (NIPS 2016) conference in Barcelona, Spain.
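
In a common formulation of this idea, a generator network maps a random latent vector to a 3D shape represented as a voxel occupancy grid, while a discriminator network is trained to tell generated shapes from real ones, and the two compete. The sketch below is a minimal illustration of that setup, not the authors' code; the framework (PyTorch), the layer sizes, the 200-dimensional latent vector, and the 64x64x64 resolution are assumptions chosen for clarity, and the discriminator and training loop are omitted.

# Illustrative sketch only: a GAN-style generator for 3D shapes, where a
# random latent vector is upsampled with 3D transposed convolutions into a
# 64x64x64 voxel occupancy grid. Framework and layer sizes are assumptions.
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    def __init__(self, latent_dim: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector (treated as a 1x1x1 volume) -> 4x4x4 feature volume
            nn.ConvTranspose3d(latent_dim, 256, kernel_size=4, stride=1),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            # each block doubles the spatial resolution: 4 -> 8 -> 16 -> 32
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            # final upsample to 64^3; sigmoid gives per-voxel occupancy in [0, 1]
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) -> occupancy grid (batch, 1, 64, 64, 64)
        return self.net(z.view(z.size(0), z.size(1), 1, 1, 1))

# Sampling: draw random latent codes and decode them into candidate 3D shapes.
# A discriminator (not shown) would be trained in parallel to distinguish these
# generated grids from voxelized real objects, pushing the generator toward
# realistic, physically plausible shapes.
generator = VoxelGenerator()
shapes = generator(torch.randn(2, 200))  # shape: (2, 1, 64, 64, 64)

Thresholding the output grid (for example at 0.5) yields a binary 3D shape that can be rendered or compared against real objects.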

Tenenbaum says 3D perception will be essential for robots designed to engage with the physical world, including self-driving automobiles.

Nando de Freitas at the U.K.'s University of Oxford agrees that AI cannot progress without the ability to explore the real world. "The only way to figure out physics is to interact," de Freitas says. "Just learning from pixels isn't enough."

From Technology Review

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
