Purdue University researchers are developing a deep-learning method to enable smartphones and other mobile devices to understand and immediately identify objects in a camera's field of view, overlaying lines of text that describe items in the environment.
The researchers' method relies on layered neural networks that mimic how the human brain processes information.
"The deep-learning algorithms that can tag video and images require a lot of computation, so it hasn't been possible to do this in mobile devices," says Purdue University professor Eugenio Culurciello.
The Purdue researchers have developed software and hardware that could enable a conventional smartphone processor to run deep-learning software.
"Now we have an approach for potentially embedding this capability onto mobile devices, which could enable these devices to analyze videos or pictures the way you do now over the Internet," Culurciello says.
The deep-learning software works by performing processing in layers. "For facial recognition, one layer might recognize the eyes, another layer the nose, and so on until a person's face is recognized," Culurciello says.
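The layered processing Culurciello describes can be sketched with a toy feed-forward network, where each layer transforms the previous layer's output into progressively higher-level features. This is a minimal illustration of the general idea, not the researchers' actual software; the layer sizes, random weights, and "identity scores" here are all hypothetical.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

rng = np.random.default_rng(0)
# Hypothetical layer sizes: raw pixels -> low-level features (edges)
# -> mid-level parts (eyes, nose) -> whole-face identity scores
W1 = rng.standard_normal((64, 16))
W2 = rng.standard_normal((16, 8))
W3 = rng.standard_normal((8, 4))

image = rng.standard_normal(64)   # a flattened 8x8 "image" stand-in

h1 = relu(image @ W1)   # layer 1: detects low-level patterns
h2 = relu(h1 @ W2)      # layer 2: combines them into part-like features
scores = h2 @ W3        # layer 3: scores over 4 hypothetical identities

print(scores.shape)     # (4,)
```

Real systems of this kind use convolutional layers and learned (not random) weights, but the layer-by-layer structure is the same.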
From Purdue University News
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA