
Communications of the ACM

ACM News

Videoconferencing Technology Possible For Cell Phones, PDAs



A new low-bandwidth, high-frame-rate videoconferencing technology could eventually become available for cell phones, laptop computers, and personal digital assistants, according to a researcher at the University of Virginia. The system creates the appearance of three-dimensionality and a strong sense of co-presence without the use of expensive motion-tracking devices or multi-camera arrays.

The technology is expected to be presented Friday (May 8) in London at the International Workshop on Image Analysis for Multimedia Interactive Services.

According to Timothy Brick, the U.Va. researcher who will make the presentation, the new system could make high-frame-rate videoconferencing readily and inexpensively available to nearly anyone with a small, portable communication device, possibly within two to three years.

Current systems for small devices offer only low frame rates, resulting in jerky images and a loss of the sense of "co-presence" between participants. Traditional videoconferencing requires expensive equipment and high-bandwidth transmission, making it impractical for small portable devices.

The new system instead simulates three-dimensionality through motion parallax, rotating a 3-D model of the user's face according to the viewing angle of the person looking at the image.
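As a rough illustration of the idea, the sketch below rotates a 3-D point model of a face according to the viewer's horizontal and vertical viewing angles. The toy mesh, the angle inputs, and the rotation order are illustrative assumptions; the article does not describe how the system estimates the viewing angle or represents the face.

```python
import numpy as np

def parallax_rotate(vertices: np.ndarray, yaw: float, pitch: float) -> np.ndarray:
    """Rotate a 3-D face model so it appears to turn toward the viewer.

    vertices : (N, 3) array of face-model points
    yaw, pitch : the viewer's horizontal/vertical viewing angles, in radians
    (how these angles would be estimated is not specified in the article;
    here they are simply taken as inputs).
    """
    ry = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                   [ 0.0,         1.0, 0.0        ],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
    rx = np.array([[1.0, 0.0,            0.0           ],
                   [0.0, np.cos(pitch), -np.sin(pitch) ],
                   [0.0, np.sin(pitch),  np.cos(pitch) ]])
    return vertices @ (rx @ ry).T

# Example: a toy three-point "face" viewed from slightly to the right and below.
face = np.array([[ 0.0, 0.0, 1.0],   # nose tip
                 [-0.5, 0.5, 0.0],   # left eye
                 [ 0.5, 0.5, 0.0]])  # right eye
print(parallax_rotate(face, yaw=np.radians(10), pitch=np.radians(-5)))
```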

"Motion parallax provides a greater sense of personal connection between users than other approaches," Brick said, "and we are able to create this effect without the need for expensive displays, multi-camera arrays or elaborate motion capture equipment, potentially making this technology available to nearly anyone with a handheld communication device."

The system uses statistical representations of a person's face to track and reconstruct that face. Rather than sending a full image, it transmits the principal components of facial expression, only dozens in number, which the receiving device uses to render a close likeness of the actual face. This connect-the-dots style of reconstruction can be updated frame by frame in near real time, requiring only a few hundred bytes per frame instead of the tens of thousands of bytes needed to transmit a full-face image.
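The numbers below give a feel for the bandwidth savings. This is a generic principal-components sketch, not the team's actual model: the landmark count, the number of components, and the randomly generated basis are stand-ins for a basis that sender and receiver would share from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the article): a face described by
# 68 2-D landmark points, approximated with 30 principal components.
n_points, n_components = 68, 30

# In a real system the mean face and component basis come from training data
# known to both sender and receiver; here they are random stand-ins.
mean_face = rng.normal(size=n_points * 2)
basis = np.linalg.qr(rng.normal(size=(n_points * 2, n_components)))[0]

def encode(face: np.ndarray) -> np.ndarray:
    """Project a tracked face onto the shared basis: a few dozen coefficients."""
    return basis.T @ (face - mean_face)

def decode(coeffs: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate face from the transmitted coefficients."""
    return mean_face + basis @ coeffs

frame = mean_face + basis @ rng.normal(size=n_components)  # a synthetic tracked face
coeffs = encode(frame)

# 30 float32 coefficients are 120 bytes; even with a few extra pose parameters,
# a frame stays in the low hundreds of bytes, versus tens of thousands of bytes
# for a full-face image.
print(coeffs.astype(np.float32).nbytes, "bytes per frame")
print(np.allclose(decode(coeffs), frame))
```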

"This method makes possible near-photorealistic video-conferencing for small devices, and it has the potential to revolutionize online gaming industry animation technology, as well as other media applications," Brick said.

A demonstration is available in an online video.

The technology was developed by a team of psychologists and computer programmers, including U.Va. graduate students Brick and Jeffrey Spies and U.Va. psychology professor Steven Boker of the College of Arts & Sciences; Barry-John Theobald of the University of East Anglia in the United Kingdom; and Iain Mathews of Disney Research in Pittsburgh. The National Science Foundation funds the research.

The technology grew out of psychological research seeking to understand how people interact during conversation. The investigators, led by Boker at his Human Dynamics Laboratory at U.Va., needed a way to capture facial micro-expressions while people communicate with each other eye-to-eye.

They began work on a videoconferencing link to track, record and recreate these micro-expressions to see how people alter their behavior based on the slightest changes in expression of another person.

Boker said it is a "mirroring process" of facial coordination that helps people to feel empathy toward each other.

Because the researchers needed participants to look directly at each other while their thousands of micro-expressions were captured, they developed this system, which lets people meet each other's gaze rather than look at a monitor off to the side.

With current videoconferencing technology, a participant looks at a monitor showing the person he or she is talking to, and therefore appears to that person to be looking off to the side.

This lack of direct eye contact creates an impersonal appearance and alters the micro-expressions that would normally occur in an in-person, real-time conversation. If the person looks at the camera instead of the monitor, he or she cannot read the other person's face, and again loses that eye contact.
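The size of that gaze mismatch follows from simple geometry. The calculation below is purely illustrative and not drawn from the article; the camera offset and viewing distance are made-up example numbers.

```python
import math

def gaze_offset_degrees(camera_offset_cm: float, viewing_distance_cm: float) -> float:
    """Apparent gaze error when the camera is displaced from the on-screen face.

    camera_offset_cm    : distance between the camera and the other person's
                          image on the monitor (e.g., a webcam above the window)
    viewing_distance_cm : how far the participant sits from the screen
    """
    return math.degrees(math.atan2(camera_offset_cm, viewing_distance_cm))

# Example: webcam 10 cm above the on-screen face, viewer 60 cm from the laptop.
print(f"{gaze_offset_degrees(10, 60):.1f} degrees off-axis")  # about 9.5 degrees
```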

"We wanted to remove that mismatch," Boker said. "This new technology allows us to correct for the mismatch in eye gaze."

The system developed by Boker and his team allows people to converse in near real-time while each makes direct eye contact with the other. The effect is a more lifelike conversation featuring all the normal nuances of facial expression.

And that technology may revolutionize videoconferencing for small devices.


 
