FaceDirector software from Disney Research and the University of Surrey could reduce the number of takes required in filming by blending images from multiple takes, making it possible to edit precise emotions onto actors' faces.
The project's major challenge has been determining how to synchronize different takes, which FaceDirector addresses by analyzing facial expressions and audio cues. Facial expressions are tracked by mapping facial features, and the software then determines which frames fit together, like the pieces of a puzzle. Because each piece has multiple possible mates, a director or editor can select the best combination to produce the desired facial expression.
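As a rough illustration of the synchronization step, the sketch below aligns two takes frame-by-frame with dynamic time warping over per-frame feature vectors. The feature choice (concatenated facial-landmark and audio descriptors) and the DTW formulation are assumptions made for this example, not Disney Research's published algorithm.

```python
import numpy as np

def align_takes(features_a, features_b):
    """Align two takes frame-by-frame with dynamic time warping.

    features_a, features_b: arrays of shape (n_frames, n_dims) holding
    per-frame descriptors (e.g., tracked facial-landmark positions
    concatenated with audio features). The feature layout is hypothetical,
    chosen only to illustrate the alignment idea.
    Returns a list of (frame_a, frame_b) index pairs.
    """
    n, m = len(features_a), len(features_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(features_a[i - 1] - features_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch take B
                                 cost[i, j - 1],      # stretch take A
                                 cost[i - 1, j - 1])  # match the two frames
    # Backtrack from the end to recover the matched frame pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy usage: two short "takes" with 4-dimensional per-frame features.
take_a = np.random.rand(30, 4)
take_b = np.random.rand(36, 4)
pairs = align_takes(take_a, take_b)
print(pairs[:5])
```

Once frames are paired this way, blending expressions from corresponding frames of different takes becomes a per-frame compositing problem rather than a manual editing one.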
Experimental content was created by bringing in a group of students from the Zurich University of the Arts, who performed several takes of the same dialogue, each time with different facial expressions. The team used the software to generate multiple combinations of facial expressions that communicated subtler emotions, and mixed several takes together to create rising and falling emotions.
The researchers say FaceDirector currently works best on scenes filmed against a static background.
From Smithsonian.com
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA