Imperial College London researcher James Booth and colleagues in the U.K. have automated the generation of three-dimensional (3D) morphable models used to represent human faces.
Their approach first landmarks features in each facial scan algorithmically; a second algorithm then aligns all of the scans using those landmarks and merges them into a single model, and a final algorithm detects and discards faulty scans.
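The three-stage pipeline can be sketched roughly as follows. This is an illustrative toy only: the function names, the point-tuple "scans," the fixed-index landmarking, and the translation-only alignment are all assumptions for the sketch, not the authors' implementation, which operates on dense 3D face meshes.

```python
# Hypothetical sketch of the pipeline described above:
# 1) landmark each scan, 2) align scans via landmarks and merge them
# into a mean model, 3) detect and discard outlier scans.
# Names and the toy "scans" are illustrative, not the authors' code.

def landmark(scan):
    """Stage 1: pick fixed vertex indices as stand-in landmarks."""
    return [scan[0], scan[len(scan) // 2], scan[-1]]

def align(scan, ref_landmarks):
    """Stage 2a: translate a scan so its landmarks' centroid matches
    the reference landmarks' centroid (rigid shift only, for brevity)."""
    def centroid(pts):
        return tuple(sum(c) / len(pts) for c in zip(*pts))
    sc = centroid(landmark(scan))
    rc = centroid(ref_landmarks)
    shift = tuple(r - s for r, s in zip(rc, sc))
    return [tuple(p + d for p, d in zip(pt, shift)) for pt in scan]

def build_model(scans, max_err=1.0):
    """Stages 2b-3: merge aligned scans into a mean shape, then
    drop scans whose deviation from the mean exceeds max_err."""
    ref = landmark(scans[0])
    aligned = [align(s, ref) for s in scans]
    # Per-vertex mean over all aligned scans.
    mean = [tuple(sum(c) / len(aligned) for c in zip(*pts))
            for pts in zip(*aligned)]
    def err(scan):
        return max(abs(a - b) for pt, m in zip(scan, mean)
                   for a, b in zip(pt, m))
    kept = [s for s in aligned if err(s) <= max_err]
    return mean, kept
```

Feeding the pipeline a list of same-length point lists yields the merged mean shape plus the scans that survived the outlier check.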
Applying the method to a corpus of almost 10,000 demographically diverse facial scans yielded a large-scale facial model (LSFM) that represented faces with much greater accuracy than existing models.
Booth's team has also used 100,000 faces generated by the LSFM to train an artificial intelligence program that renders two-dimensional snapshots as accurate 3D models.
Potential uses for the technique include enhancing plastic surgery, identifying genetic diseases with greater sensitivity, and synthesizing and animating historical figures from portraits.
From Science
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA