Northeastern University professor Stacy Marsella has written a program called Cerebella that enables virtual humans to convey emotions through facial expressions and hand gestures.
"Normally these virtual human architectures have some sort of perception, seeing the world, forming some understanding of it, and then deciding how to behave," Marsella says. "The trouble is some of these things are very hard to model, so sometimes you cheat."
For example, a program can infer connections between spoken words and appropriate responses, as one version of Cerebella does: once it knows the words a virtual human will use in a reply, it can assemble a repertoire of appropriate facial expressions, gaze patterns, and gestures.
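As a rough illustration of that word-to-behavior mapping, the Python sketch below pairs keywords in an upcoming utterance with a facial expression, gaze direction, and gesture. It is a minimal, hypothetical example rather than Cerebella's actual architecture; every keyword, behavior name, and rule in it is invented.

```python
# Hypothetical sketch of mapping an utterance to nonverbal behavior.
# NOT Cerebella's implementation; keywords and behavior names are invented.
import re
from dataclasses import dataclass

@dataclass
class Behavior:
    facial_expression: str
    gaze: str
    gesture: str

# Invented keyword-to-behavior rules.
RULES = {
    "sorry": Behavior("sad", "averted", "open_palms"),
    "great": Behavior("smile", "direct", "head_nod"),
    "no": Behavior("neutral", "direct", "head_shake"),
}
DEFAULT = Behavior("neutral", "direct", "small_beat")

def plan_nonverbal(utterance: str) -> Behavior:
    """Pick a facial expression, gaze, and gesture from the words the
    virtual human is about to speak (simple keyword matching only)."""
    words = re.findall(r"[a-z']+", utterance.lower())
    for word in words:
        if word in RULES:
            return RULES[word]
    return DEFAULT

if __name__ == "__main__":
    print(plan_nonverbal("I'm sorry to hear that"))  # sad / averted / open_palms
    print(plan_nonverbal("That sounds great"))       # smile / direct / head_nod
```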
Marsella also developed the UrbanSim software, which generates large-scale models of human populations that interact with one another. The simulated populations act according to a model that lets each agent reason about how others in the virtual world will behave, which makes the software useful for applications such as city planning and military training.
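To illustrate what it means for simulated agents to reason about how others will act, the sketch below has agents choose a route based on how crowded they expect a road to be, then revise that expectation from what actually happens. This is only a toy example under assumed behavior; it is not UrbanSim, and the scenario, parameters, and update rule are invented.

```python
# Toy sketch of agents that anticipate others' behavior.
# NOT UrbanSim; scenario and numbers are invented for illustration.
import random

class Agent:
    def __init__(self, threshold: float):
        # The agent avoids the main road if it expects it to be too crowded.
        self.threshold = threshold

    def choose_route(self, expected_share_on_main: float) -> str:
        return "side_street" if expected_share_on_main > self.threshold else "main_road"

def simulate(num_agents: int = 100, steps: int = 10) -> float:
    agents = [Agent(threshold=random.uniform(0.3, 0.7)) for _ in range(num_agents)]
    expected_share = 0.5  # shared belief about main-road crowding
    for _ in range(steps):
        choices = [a.choose_route(expected_share) for a in agents]
        actual_share = choices.count("main_road") / num_agents
        # Agents revise their expectation toward what actually happened.
        expected_share = 0.5 * expected_share + 0.5 * actual_share
    return expected_share

if __name__ == "__main__":
    print(f"Converged expectation of main-road usage: {simulate():.2f}")
```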
Marsella has also created a training tool that lets medical students practice patient interactions, and hopes to work with Northeastern's personal health informatics team in the future.
From Northeastern University News