
Communications of the ACM

ACM TechNews

Facedirector Software Generates Desired Performances in Post-Production, Avoiding Reshoots


A screenshot from FaceDirector.

A new system developed by Disney Research and the University of Surrey enables the director of a motion picture to seamlessly blend facial images from multiple video takes to achieve a desired effect.

Credit: Disney Research

Software developed by researchers at the University of Surrey and Disney Research could mean film directors will no longer need to reshoot crucial scenes dozens of times until they are satisfied.

The researchers say the FaceDirector system enables a director to seamlessly blend facial images from multiple video takes to achieve the desired effect. The software analyzes both facial expressions and audio cues, then uses a graph-based framework to identify corresponding frames across takes.
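The paper's actual graph-based framework is not reproduced here, but the core idea of a nonlinear temporal alignment over combined facial and audio features can be sketched with a dynamic-programming (DTW-style) alignment. Everything below is illustrative: the function name, the feature layout (facial features in the first half of each row, audio features in the second), and the weights are assumptions, not the authors' implementation.

```python
import numpy as np

def align_takes(feats_a, feats_b, w_face=0.5, w_audio=0.5):
    """Nonlinear temporal alignment of two takes via dynamic programming.

    feats_a, feats_b: (n_frames, d) arrays; each row concatenates
    per-frame facial-expression and audio features (illustrative split:
    first half facial, second half audio).
    Returns a monotonic list of (i, j) frame correspondences.
    """
    n, m = len(feats_a), len(feats_b)
    half = feats_a.shape[1] // 2
    # Pairwise cost: weighted distance over the facial and audio halves.
    cost = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            face_d = np.linalg.norm(feats_a[i, :half] - feats_b[j, :half])
            audio_d = np.linalg.norm(feats_a[i, half:] - feats_b[j, half:])
            cost[i, j] = w_face * face_d + w_audio * audio_d
    # Accumulate: each node adds its cost to the cheapest predecessor.
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i, j - 1] if j > 0 else np.inf,
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            acc[i, j] = cost[i, j] + best
    # Backtrack the optimal warping path of frame correspondences.
    i, j = n - 1, m - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1, j], (i - 1, j)))
        if j > 0:
            candidates.append((acc[i, j - 1], (i, j - 1)))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1, j - 1], (i - 1, j - 1)))
        i, j = min(candidates)[1]
        path.append((i, j))
    return path[::-1]
```

The returned path maps each frame of one take to its best-matching frame in the other, which is the prerequisite for blending the two performances frame by frame.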

The researchers say the system can generate a variety of novel, visually plausible versions of an actor's performance in close-up and mid-range shots.

"Our research team has shown that a director can exert control over an actor's performance after the shoot with just a few takes, saving both time and money," says Disney Research's Markus Gross.

FaceDirector works with normal two-dimensional video input acquired by standard cameras, without the need for additional hardware or three-dimensional face reconstruction.

"To the best of our knowledge, our work is the first to combine audio and facial features for achieving an optimal nonlinear, temporal alignment of facial performance videos," says Charles Malleson, a Ph.D. student at the University of Surrey's Center for Vision, Speech and Signal Processing.

From EurekAlert

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
