Nathan Jacobs and colleagues at Washington University in St. Louis have created a depth map of a landscape using only a single fixed camera. Depth maps, which encode the three-dimensional shape of a scene in a two-dimensional image and are used in applications such as surveillance and atmospheric monitoring, are usually produced with laser scanners.
Creating a depth map from a single camera is normally difficult, not least because the shadows cast by passing clouds distort the scene and are hard for image-processing algorithms to cope with. The team turned those shadows into a signal instead: by comparing a series of images and recording the time at which a passing shadow changes each pixel's color, it could estimate the distance between the ground points that different pixels depict.
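New Scientist does not spell out the team's actual implementation, so the snippet below is only a rough Python sketch of the idea as described: take a time-lapse stack from a fixed camera, find the delay with which a drifting cloud shadow darkens each pixel relative to a reference pixel (here via a simple cross-correlation peak), and convert that delay to metres using an assumed shadow drift speed. All function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def shadow_lag_distances(frames, wind_speed, frame_interval, ref_pixel):
    """
    Estimate how far each pixel's ground point lies from a reference pixel,
    measured along the wind direction, from the delay with which a drifting
    cloud shadow darkens it.

    frames:         array of shape (T, H, W), grayscale time-lapse images
    wind_speed:     shadow drift speed over the ground, in m/s (assumed known)
    frame_interval: seconds between consecutive frames
    ref_pixel:      (row, col) of the pixel used as the zero-lag reference
    """
    T, H, W = frames.shape

    # Per-pixel brightness time series with the temporal mean removed,
    # so the correlation is driven by the shadow-induced darkening.
    series = frames.reshape(T, -1).astype(float)
    series -= series.mean(axis=0)

    ref = series[:, ref_pixel[0] * W + ref_pixel[1]]

    lags = np.empty(H * W)
    for i in range(H * W):
        # Cross-correlate this pixel's series with the reference; the peak
        # index gives the shadow's arrival delay in frames (positive means
        # the shadow reaches this pixel after it reaches the reference).
        xc = np.correlate(series[:, i], ref, mode="full")
        lags[i] = np.argmax(xc) - (T - 1)

    # Delay (s) times drift speed (m/s) gives separation along the wind direction.
    return lags.reshape(H, W) * frame_interval * wind_speed
```

Passing wind_speed=1.0 leaves the result as a relative map measured in "seconds of drift" rather than metres, which is exactly the scale ambiguity Jacobs describes next.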
"If the wind speed is known you can reconstruct the scene with the right scale," Jacobs says. "That is notoriously difficult from a single camera viewpoint." The cloud map has an average positional error of just 2 percent, compared with laser-created maps.
From New Scientist