
Communications of the ACM

ACM TechNews

Kinect Project Merges Real and Virtual Worlds



Microsoft researchers recently demonstrated KinectFusion, a research project that lets users generate three-dimensional (3D) models in real time using a standard Kinect system. The technology enables objects, people, and entire rooms to be scanned in 3D.

"KinectFusion is a platform that allows us to rethink the ways that computers see the world," says Microsoft researcher Shahram Izadi. The Kinect projects a laser dot pattern into a scene and searches for distortions using an infrared camera, which generates a point cloud of distances to the camera that the Kinect uses to identify objects and gestures.

As a KinectFusion user waves a Kinect around a scene, an algorithm called iterative closest point (ICP) combines depth snapshots captured at 30 frames per second into a single 3D representation. ICP can also track the position and orientation of the camera by comparing each new frame's data with previous frames.
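The alignment step can be sketched in a few lines of Python. The code below is a minimal, illustrative point-to-point ICP, not the KinectFusion implementation (which runs a dense, GPU-accelerated variant directly on depth maps): it alternates nearest-neighbor matching with a closed-form rigid fit to estimate the rotation and translation between two frames' point clouds.

    # Illustrative point-to-point ICP: align `source` onto `target`.
    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def icp(source, target, iterations=20):
        """Return the aligned source points and the accumulated (R, t)."""
        R_total, t_total = np.eye(3), np.zeros(3)
        src = source.copy()
        for _ in range(iterations):
            # Brute-force nearest neighbours (fine for small clouds).
            d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
            matches = target[d2.argmin(axis=1)]
            R, t = best_rigid_transform(src, matches)
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return src, (R_total, t_total)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        target = rng.uniform(-1, 1, size=(200, 3))
        angle = np.radians(5)                 # simulate a small camera motion
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
        source = target @ Rz.T + np.array([0.05, 0.0, 0.02])
        aligned, (R_est, t_est) = icp(source, target)
        print(np.abs(aligned - target).max())  # residual error; small after convergence

Accumulating the per-iteration rotations and translations gives the camera's pose relative to the first frame, which is what allows consecutive snapshots to be fused into one consistent 3D model.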

"With KinectFusion, anyone can create 3D content just by picking up a Kinect and scanning something in," says Microsoft researcher Steve Hodges.

From Technology Review

 Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA


 
