Researchers at Carnegie Mellon University (CMU) and the University of Toronto have created a mathematical model to help address a major problem of depth-sensing cameras: their inability to work in bright light, especially sunlight. Depth-sensing cameras work by projecting a pattern of lines or dots over a scene and determining depth by interpreting the way the lines or dots deform. However, the low-power projectors used by such cameras can easily be overwhelmed by bright light, which washes out the patterns they are projecting. The researchers' new model involves synchronizing the camera with its projector so it ignores the "noise" of ambient light. "We have a way of choosing the light rays we want to capture and only those rays," says CMU roboticist Srinivasa Narasimhan.
One of the prototypes produced by the group synchronizes a laser projector with a common rolling-shutter camera so the camera detects only the light from the points being illuminated by the laser. The camera can see in both bright and diffuse light; it can, for example, see the shape of a lit light bulb and see through smoke.
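The energy argument behind the prototype can be illustrated with a toy calculation. In the sketch below (all names and numbers are illustrative assumptions, not the researchers' actual code or parameters), a laser sweeps one image row per time step while ambient light floods every pixel at every step; a conventional camera integrates over the whole sweep, whereas a synchronized rolling-shutter camera exposes each row only during the single step its laser line arrives, so far less ambient light accumulates per pixel:

```python
# Toy model of projector-camera synchronization (hypothetical values).
ROWS = 8         # number of rows the laser sweeps, one per time step
AMBIENT = 50.0   # ambient light reaching each pixel per time step
LASER = 5.0      # laser signal, far dimmer than the ambient light

def unsynchronized_pixel():
    # Conventional exposure: every row collects ambient light for all
    # ROWS time steps, plus the laser's single pass over that row.
    return AMBIENT * ROWS + LASER

def synchronized_pixel():
    # Rolling shutter synchronized to the laser: each row is exposed for
    # exactly the one time step in which its laser line is projected.
    return AMBIENT + LASER

def laser_fraction(pixel_value):
    # Share of the recorded energy contributed by the projector.
    return LASER / pixel_value

# The synchronized camera records the same laser signal against roughly
# an eighth of the ambient glare, so the pattern is far easier to detect.
ratio = laser_fraction(synchronized_pixel()) / laser_fraction(unsynchronized_pixel())
```

With these made-up numbers the laser accounts for about 9% of a synchronized pixel's energy versus about 1.2% of an unsynchronized pixel's, which is the sense in which the camera "chooses the light rays it wants to capture."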
The researchers say their new method has numerous possible applications, from improving gaming devices such as Microsoft's Kinect, to medical imaging, automated cars, and planet-exploring rovers. The researchers presented their work this week at SIGGRAPH 2015 in Los Angeles.
From Carnegie Mellon University
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA