
Communications of the ACM

ACM TechNews

AI Can Turn Spoken Language Into Photorealistic Sign Language Videos


Many Deaf people use sign language to communicate.

An artificial intelligence that can produce photorealistic videos of sign language interpreters from speech could improve accessibility by removing the need for human interpreters.

Credit: FluxFactory/Getty Images

Researchers at the U.K.'s University of Surrey trained a neural network to translate spoken language into sign language and to map the resulting signs onto a three-dimensional model of the human skeleton.

Surrey's Ben Saunders and colleagues also trained the SignGAN system on videos of sign language interpreters, teaching it to generate a photorealistic video of anyone signing from a single image of that person.

The artificial intelligence (AI) combines the skeletal pose data with video generation to convert spoken words into sign language.
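In outline, that pipeline has two stages: a network that turns the transcribed spoken sentence into a sequence of 3D skeleton poses, and a GAN-style generator that renders each pose as a photorealistic frame conditioned on a reference image of the signer. The sketch below, in PyTorch, is only a hedged illustration of that structure; the module layout, the assumed 50-joint skeleton, and all layer sizes are placeholders, not the authors' published architecture.

```python
import torch
import torch.nn as nn

# Hedged sketch of a two-stage spoken-language-to-sign-video pipeline as
# described above. Module layout, the 50-joint skeleton, and all layer
# sizes are assumptions for illustration, not the published architecture.

NUM_JOINTS = 50            # assumed upper-body + hand keypoints
POSE_DIM = NUM_JOINTS * 3  # (x, y, z) per joint


class TextToPose(nn.Module):
    """Stage 1: map a tokenized spoken-language sentence to a sequence of
    3D skeleton poses, one pose vector per output video frame."""

    def __init__(self, vocab_size=8000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, POSE_DIM)

    def forward(self, token_ids):                  # (batch, seq_len) int tokens
        h, _ = self.encoder(self.embed(token_ids))
        return self.to_pose(h)                     # (batch, seq_len, POSE_DIM)


class PoseToVideo(nn.Module):
    """Stage 2: a GAN-style generator that renders each skeleton pose as a
    photorealistic frame, conditioned on one reference image of the person
    who should appear to be signing."""

    def __init__(self, hidden=256):
        super().__init__()
        self.pose_proj = nn.Linear(POSE_DIM, hidden)
        self.image_proj = nn.Conv2d(3, hidden, kernel_size=16, stride=16)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, 64, kernel_size=4, stride=4),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=4),
            nn.Tanh(),
        )

    def forward(self, poses, reference_image):
        # poses: (batch, T, POSE_DIM); reference_image: (batch, 3, H, W)
        style = self.image_proj(reference_image)   # appearance features
        frames = []
        for t in range(poses.shape[1]):
            p = self.pose_proj(poses[:, t])        # (batch, hidden)
            cond = style + p[:, :, None, None]     # broadcast pose over space
            frames.append(self.decoder(cond))
        return torch.stack(frames, dim=1)          # (batch, T, 3, H, W)
```

Chaining the two stages, PoseToVideo()(TextToPose()(tokens), reference_image) would yield one synthesized frame per predicted pose; a real system would also train a discriminator and hand-specific losses, which this sketch omits.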

Because interpreters' hands in the training videos could sometimes be blurry, the Surrey team used an existing AI that could estimate hand poses from a small area around the middle knuckle.
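Concretely, that trick amounts to cutting a small, fixed-size patch centered on the middle-knuckle keypoint and handing only that patch to an off-the-shelf hand-pose estimator, whose predicted keypoints can then supervise the generated hands. The snippet below is a hedged sketch of the cropping step; the 128-pixel patch size and the zero padding at image borders are assumptions.

```python
import numpy as np

def crop_hand_region(frame, middle_knuckle_xy, crop_size=128):
    """Cut a square patch centered on the middle-knuckle keypoint, so a
    hand-pose estimator sees a small region around the hand rather than
    the whole (possibly blurry) frame.

    frame: (H, W, 3) image array; middle_knuckle_xy: (x, y) in pixels.
    The 128-pixel crop size and constant-value border padding are
    illustrative assumptions, not the authors' exact settings."""
    half = crop_size // 2
    x = int(round(middle_knuckle_xy[0]))
    y = int(round(middle_knuckle_xy[1]))

    # Pad so crops near the image border still have a fixed size.
    padded = np.pad(frame, ((half, half), (half, half), (0, 0)), mode="constant")
    patch = padded[y:y + crop_size, x:x + crop_size]

    return patch  # feed this patch to any off-the-shelf hand keypoint model
```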

When 10-second clips of the videos were shown to 46 people, about a quarter of whom were signers, all of them favored SignGAN's output over that of other AI models.

From New Scientist

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


 
