
Communications of the ACM

ACM TechNews

Carnegie Mellon and Disney Develop New Model for Animated Faces and Bodies


[Image: Modeling faces. Credit: Carnegie Mellon University]

Researchers at Carnegie Mellon University, Disney, and the LUMS School of Science and Engineering have developed a new way to model dynamic objects such as facial expressions, body gestures, and the draping of clothes.

The researchers developed a model that accounts for space and time simultaneously, which they say makes it more compact, more powerful, and easier to manage. The findings were presented at the recent SIGGRAPH 2012 conference.

Carnegie Mellon professor Yaser Sheikh says the models are compact and efficient because they combine natural constraints on spatial movement, such as the characteristic ways a face changes shape as someone talks or expresses an emotion, with natural constraints on how much movement can occur over a given stretch of time. The bilinear spatiotemporal basis models are possible because modern computers can process data sets that include millions of variables. "The ability to interact with large dynamic sequences in data consistent ways and in real time has lots of interesting applications," says Disney researcher Iain Matthews.
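The general idea behind such a bilinear model can be sketched in code: represent a motion sequence as a frames-by-points matrix, then compress it with a small temporal basis on one side and a small spatial basis on the other. The sketch below is a toy illustration of that factorization, not the researchers' exact formulation; all matrix names and basis sizes are hypothetical, and the bases are simply estimated from the data with an SVD.

```python
import numpy as np

# Toy bilinear spatiotemporal decomposition (hypothetical illustration):
# a motion sequence is an F x P matrix S (F frames, P flattened point
# coordinates). We approximate S ~= T @ C @ B.T, where T is a temporal
# basis (F x kt), B is a spatial basis (P x ks), and C is a small
# kt x ks coefficient matrix -- far fewer numbers than S itself.

rng = np.random.default_rng(0)
F, P = 120, 90            # frames, flattened point coordinates
kt, ks = 8, 10            # truncated temporal / spatial basis sizes

# Synthetic smooth motion: low-frequency temporal signals mixed across
# correlated spatial modes, mimicking constrained face/body movement.
t = np.linspace(0, 2 * np.pi, F)
temporal = np.stack([np.sin((i + 1) * t) for i in range(kt)], axis=1)  # F x kt
spatial = rng.standard_normal((P, ks))                                 # P x ks
mix = rng.standard_normal((kt, ks))
S = temporal @ mix @ spatial.T                                         # F x P

# Estimate orthonormal bases from the data via truncated SVDs.
Ut, _, _ = np.linalg.svd(S, full_matrices=False)
Us, _, _ = np.linalg.svd(S.T, full_matrices=False)
T_basis, B_basis = Ut[:, :kt], Us[:, :ks]

# Bilinear coefficients: project S onto both bases, then reconstruct.
C = T_basis.T @ S @ B_basis                  # kt x ks
S_hat = T_basis @ C @ B_basis.T              # F x P reconstruction

rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
full_params = F * P
bilinear_params = F * kt + P * ks + kt * ks
print(f"relative reconstruction error: {rel_err:.2e}")
print(f"parameters: {full_params} (dense) -> {bilinear_params} (bilinear)")
```

Because the synthetic sequence is genuinely low-rank in both space and time, the truncated bases reconstruct it almost exactly while storing only a fraction of the original numbers, which is the compactness the article describes.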

From Carnegie Mellon News (PA) 

Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA 


 
