Disney researchers applied deep-learning neural networks to develop a new way of evaluating movie audiences' reactions using facial expressions.
The method, based on factorized variational autoencoders (FVAEs), can reliably predict a viewer's expressions for the remainder of a movie after observing that viewer for only a few minutes, which the researchers say shows enormous potential for more accurately simulating group facial expressions in various scenarios.
The FVAEs were applied to 150 screenings of nine mainstream films, with spectators' faces monitored by infrared cameras while the system searched for similar expressions across the audience. The resulting dataset contained 16 million facial landmarks for analysis.
From this data, the FVAEs learned a range of generalized expressions and, drawing on the strong correlations between spectators, could predict how an audience member will respond to a given film.
Simon Fraser University's Zhiwei Deng says the FVAEs learned concepts such as smiling and laughing on their own and could correlate those facial expressions with funny scenes.
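The audience-correlation idea behind this prediction resembles collaborative filtering: observe part of a new viewer's timeline, then fill in the rest from structure shared across the audience. The sketch below is not the Disney team's FVAE; it is a minimal matrix-factorization stand-in on hypothetical smile-intensity data, showing how the unseen remainder of one viewer's reactions might be recovered from a few observed minutes plus the rest of the audience.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a viewers x frames matrix of smile intensities, generated
# from a low-rank structure plus noise (a stand-in for facial-landmark features).
n_viewers, n_frames, rank = 50, 200, 3
true_U = rng.random((n_viewers, rank))
true_V = rng.random((n_frames, rank))
reactions = true_U @ true_V.T + 0.05 * rng.standard_normal((n_viewers, n_frames))

# All but the last viewer are fully observed; the last ("new") viewer is
# observed only for the opening frames, i.e. the first few minutes.
observed = np.ones_like(reactions, dtype=bool)
new_viewer = n_viewers - 1
observed[new_viewer, n_frames // 10:] = False

# Alternating least squares on the observed entries (ridge-regularized).
U = 0.1 * rng.standard_normal((n_viewers, rank))
V = 0.1 * rng.standard_normal((n_frames, rank))
lam, eye = 0.1, np.eye(rank)

for _ in range(30):
    for i in range(n_viewers):          # refit each viewer's factor
        idx = observed[i]
        Vi = V[idx]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * eye, Vi.T @ reactions[i, idx])
    for j in range(n_frames):           # refit each frame's factor
        idx = observed[:, j]
        Uj = U[idx]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * eye, Uj.T @ reactions[idx, j])

# Predict the new viewer's reactions for the rest of the film.
pred = U[new_viewer] @ V.T
held_out = ~observed[new_viewer]
rmse = np.sqrt(np.mean((pred[held_out] - reactions[new_viewer, held_out]) ** 2))
print(f"RMSE on the new viewer's unseen reactions: {rmse:.3f}")
```

The factorization here captures only the shared low-rank structure; the published FVAE additionally couples such factors with a variational autoencoder to model nonlinear variation in facial expressions.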
From EurekAlert