
Communications of the ACM

ACM TechNews

Virtual Humans, Programmed to Feel



The Cerebella program allows virtual humans to display emotions using facial expressions and hand gestures.

Credit: io9

Northeastern University professor Stacy Marsella has written a program called Cerebella that enables virtual humans to demonstrate emotions using facial expressions and hand gestures.

"Normally these virtual human architectures have some sort of perception, seeing the world, forming some understanding of it, and then deciding how to behave," Marsella says. "The trouble is some of these things are very hard to model, so sometimes you cheat."

One such shortcut is to infer nonverbal behavior directly from language, as one version of Cerebella does: once the program knows the words a virtual human will speak, it can select appropriate facial expressions, gaze patterns, and gestures to accompany them.
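As a rough illustration of that kind of word-to-behavior mapping, the Python sketch below keys nonverbal behaviors off individual words in an utterance. Everything in it, including the cue table, the behavior names, and the plan_nonverbal function, is a hypothetical simplification for illustration, not Cerebella's actual design.

    # Hypothetical sketch: derive nonverbal behavior from the words alone.
    # The cue table and behavior names are invented, not from Cerebella.

    EMOTION_CUES = {
        "sorry": ("sad_brow", "gaze_down", "small_shrug"),
        "great": ("smile", "gaze_at_listener", "open_palms"),
        "never": ("furrowed_brow", "gaze_at_listener", "head_shake"),
    }

    DEFAULT = ("neutral_face", "gaze_at_listener", "rest_pose")

    def plan_nonverbal(utterance: str):
        """Return a (face, gaze, gesture) triple inferred from the utterance."""
        for word in utterance.lower().split():
            cue = EMOTION_CUES.get(word.strip(".,!?"))
            if cue:
                return cue
        return DEFAULT

    print(plan_nonverbal("I'm so sorry to hear that."))
    # ('sad_brow', 'gaze_down', 'small_shrug')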

Marsella also developed the UrbanSim software, which generates large-scale models of human populations that interact with one another. Each simulated person acts on a model of how others in the virtual world will behave, producing follow-on behaviors that make the software useful for applications such as city planning and military training.
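A toy version of that idea might look like the following Python sketch, in which each agent predicts its neighbors' next moves from their last observed actions and reacts accordingly. The agent model and action names are invented for illustration and bear no relation to UrbanSim's actual internals.

    # Hypothetical sketch: agents choosing actions by predicting others.
    # UrbanSim's real population models are far richer; names are invented.
    import random

    class Agent:
        def __init__(self, name):
            self.name = name
            self.last_action = "stay"

        def act(self, others):
            # Naive model of others: assume each repeats its last action.
            predicted = [o.last_action for o in others]
            if predicted.count("move") > len(predicted) / 2:
                self.last_action = "move"  # follow the expected crowd
            else:
                self.last_action = random.choice(["move", "stay"])
            return self.last_action

    agents = [Agent(f"a{i}") for i in range(5)]
    for step in range(3):
        actions = [a.act([o for o in agents if o is not a]) for a in agents]
        print(f"step {step}:", actions)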

In addition, Marsella has developed a training tool that lets medical students practice patient interactions, and she hopes to collaborate with Northeastern's personal health informatics team in the future.

From Northeastern University News

Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA


 
