Researchers at the University of Southern California have developed a learning algorithm that can build a precise three-dimensional (3D) model of a person's head from a single low-resolution photo of their face.
The facial reconstructions are performed by an artificial neural network, which the team trained to examine a two-dimensional image and extrapolate a 3D texture map that approximates the subject's facial dimensions with a high degree of realism.
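The article does not describe the network itself, but the general idea of mapping a low-resolution face photo to a higher-resolution facial texture map can be sketched roughly as below. The architecture, layer sizes, and resolutions are illustrative assumptions for a minimal convolutional encoder-decoder, not the USC team's actual model.

```python
# Minimal sketch (not the authors' implementation): a convolutional
# encoder-decoder that maps a low-resolution face crop to a higher-resolution
# facial texture map. All sizes and layer choices are assumptions.
import torch
import torch.nn as nn

class TextureInference(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 64x64 RGB face crop into a feature volume.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # Decoder: upsample the features into a 128x128 RGB texture map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 32 -> 64
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 64 -> 128
        )

    def forward(self, face_crop):
        return self.decoder(self.encoder(face_crop))

# Usage: infer a texture map from a batch of low-resolution face crops.
model = TextureInference()
low_res_faces = torch.rand(1, 3, 64, 64)   # placeholder input batch
texture_map = model(low_res_faces)         # shape: (1, 3, 128, 128)
```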
The researchers demonstrated successful face reconstructions from a wide range of low-resolution input images, including those of historical figures.
The team validated the realism of its results using a crowdsourced user study.
The project has implications for the future of realistic avatars in virtual reality.
"With virtual and augmented reality becoming the next generation platform for social interaction, compelling 3D avatars could be generated with minimal efforts and puppeteered through facial performances," the researchers say. "Within the context of cultural heritage, iconic and historical personalities could be restored to life in captivating 3D digital forms from archival photographs."
From Vocativ
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA