Researchers at Stanford University, working with colleagues at the Technical University of Munich in Germany, the University of Bath in the U.K., France-based Technicolor, and other institutions, have developed an artificial intelligence-based system that uses input video to create photorealistic reanimations of portrait videos.
Data extracted from a source video, featuring a source actor, is used to manipulate the portrait video of a target actor.
In addition, the new Deep Video Portraits system enables a range of movements, including full three-dimensional head positioning, head rotation, eye gaze, and eye blinking.
Deep Video Portraits uses generative neural networks to take data from the signal models and compute photorealistic frames for a given target actor, while secondary algorithms correct glitches, giving the videos a highly realistic appearance.
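The core idea of the transfer step can be illustrated with a minimal sketch: motion-related parameters (head pose, gaze, expression) are copied from the source actor's model onto the target's, while the target's identity is preserved; the generative network then renders the resulting parameters into a photorealistic frame. The class and field names below are hypothetical, chosen for illustration only, and do not reflect the researchers' actual parameterization.

```python
from dataclasses import dataclass, replace

@dataclass
class FaceParams:
    """Hypothetical per-frame face parameterization (names illustrative)."""
    head_pose: tuple   # 3D head rotation, e.g. (yaw, pitch, roll)
    gaze: tuple        # eye-gaze direction
    expression: tuple  # facial-expression coefficients
    identity: str      # the actor's identity, which must stay fixed

def reenact(source: FaceParams, target: FaceParams) -> FaceParams:
    """Transfer the source actor's motion parameters onto the target,
    keeping the target's identity unchanged. The result would then be
    passed to a generative network that renders the final frame."""
    return replace(
        target,
        head_pose=source.head_pose,
        gaze=source.gaze,
        expression=source.expression,
    )

# Example: the source actor turns their head; the target follows.
src = FaceParams(head_pose=(30.0, 0.0, 0.0), gaze=(0.1, 0.0),
                 expression=(0.8,), identity="source_actor")
tgt = FaceParams(head_pose=(0.0, 0.0, 0.0), gaze=(0.0, 0.0),
                 expression=(0.0,), identity="target_actor")
out = reenact(src, tgt)
```

Note that per-frame parameter transfer like this is what lets the system drive full head motion rather than just the facial interior.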
The research will be presented in August at the ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH 2018) in Vancouver, Canada.
From Gizmodo
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA