
Communications of the ACM

ACM TechNews

Lip-Syncing Thanks to Artificial Intelligence


One person's facial expression, gaze direction, and head pose (input) can be transposed onto another individual (output).

Researchers have developed a way to use artificial intelligence to edit the facial expressions of actors in a film to accurately match dubbed voices.

Credit: MPI for Informatics

An international team led by researchers at the Max Planck Institute for Informatics in Germany has developed a system that uses artificial intelligence (AI) to edit the facial expressions of actors in a film to accurately match dubbed voices.

Max Planck's Hyeongwoo Kim says the Deep Video Portraits system uses model-based three-dimensional face performance capture to record the detailed movements and head position of the dubbing actor, then transposes these movements onto the "target" actor to accurately synchronize the lips and facial movements.
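At a high level, the transfer described above keeps the target actor's identity and scene while replacing only the motion parameters captured from the dubbing actor. A minimal sketch of that parameter split is below; the `FaceParams` fields and the `transfer` function are illustrative assumptions for this summary, not the actual representation used by Deep Video Portraits, which fits a full 3D morphable face model with many more coefficients:

```python
from dataclasses import dataclass, replace

# Hypothetical, simplified parameter set. In the real system these would be
# vectors of 3D-model coefficients estimated per video frame.
@dataclass(frozen=True)
class FaceParams:
    identity: str        # target actor's face geometry (preserved)
    illumination: str    # target scene lighting (preserved)
    head_pose: tuple     # transferred from the dubbing actor
    expression: tuple    # transferred from the dubbing actor
    eye_gaze: tuple      # transferred from the dubbing actor

def transfer(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target's identity and lighting; take all motion from the source."""
    return replace(
        target,
        head_pose=source.head_pose,
        expression=source.expression,
        eye_gaze=source.eye_gaze,
    )

# Example: the dubbing actor's motion is applied to the target actor.
dubber = FaceParams("dubber", "studio", (0.3, 0.1), ("open-mouth",), (0.2, 0.0))
actor = FaceParams("target", "film-set", (0.0, 0.0), ("neutral",), (0.0, 0.0))
result = transfer(dubber, actor)
```

The recombined parameters would then drive a rendering network that synthesizes photorealistic frames of the target actor.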

The institute's Christian Theobalt says the technology "enables us to modify the appearance of a target actor by transferring head pose, facial expressions, and eye motion with a high level of realism."

The technique could significantly reduce the time and expense of dubbing films; it could also be used to correct the gaze and head pose of video-conference participants to simulate a natural conversation setting.

From Max Planck Institute for Informatics (Germany)

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
