Researchers at the U.K.'s University of Surrey employed a neural network to render spoken language as sign language, and to map the signs onto a three-dimensional model of the human skeleton.
Surrey's Ben Saunders and colleagues also trained the SignGAN system on videos of sign language interpreters, teaching it to generate a photorealistic video of anyone signing from a single image of that person.
The artificial intelligence (AI) combines video and skeletal images to convert spoken words into sign language.
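The description above suggests a two-stage pipeline: one model maps the input sentence to a sequence of skeletal poses, and a second, image-conditioned generator turns each pose into a video frame of the chosen signer. The Python sketch below illustrates that structure; the module layout, joint count, and dimensions are illustrative assumptions, not the Surrey team's published architecture.

# Hedged sketch of a text -> skeletal pose -> video pipeline in PyTorch.
# Module names, dimensions, and the joint count are illustrative guesses,
# not the architecture published by the Surrey team.
import torch
import torch.nn as nn

NUM_JOINTS = 50           # assumed upper-body and hand joints, 3D coordinates each
POSE_DIM = NUM_JOINTS * 3

class TextToPose(nn.Module):
    """Encodes a tokenised sentence and decodes a sequence of skeletal poses."""
    def __init__(self, vocab_size=8000, d_model=256, n_frames=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        # One learned query per output video frame.
        self.frame_queries = nn.Parameter(torch.randn(n_frames, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.to_pose = nn.Linear(d_model, POSE_DIM)

    def forward(self, tokens):                       # tokens: (B, T_text)
        memory = self.encoder(self.embed(tokens))    # (B, T_text, d_model)
        queries = self.frame_queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        frames = self.decoder(queries, memory)       # (B, n_frames, d_model)
        return self.to_pose(frames)                  # (B, n_frames, POSE_DIM)

class PoseToFrame(nn.Module):
    """Conditions on one pose vector plus a reference image to produce a frame."""
    def __init__(self, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.pose_to_map = nn.Linear(POSE_DIM, img_size * img_size)
        self.generator = nn.Sequential(              # reference RGB + pose map = 4 channels
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose, ref_img):                # pose: (B, POSE_DIM); ref_img: (B, 3, H, W)
        b = pose.size(0)
        pose_map = self.pose_to_map(pose).view(b, 1, self.img_size, self.img_size)
        return self.generator(torch.cat([ref_img, pose_map], dim=1))

if __name__ == "__main__":
    tokens = torch.randint(0, 8000, (2, 12))         # dummy tokenised sentence
    ref = torch.rand(2, 3, 64, 64)                   # dummy reference image of the signer
    poses = TextToPose()(tokens)                     # (2, 64, POSE_DIM)
    frame0 = PoseToFrame()(poses[:, 0], ref)         # first generated video frame
    print(poses.shape, frame0.shape)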
Because interpreters' hands in the training videos could sometimes be blurry, the Surrey team used an existing AI that could estimate hand poses from a small area around the middle knuckle.
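As a rough illustration of that fallback, the sketch below crops a small patch around an approximate middle-knuckle location and runs an off-the-shelf hand-keypoint estimator on it; MediaPipe Hands is used here as a stand-in, not the model the Surrey team actually employed, and the crop size is an arbitrary choice.

# Hedged sketch: recover hand keypoints from a small crop around the middle
# knuckle, using MediaPipe Hands as a stand-in for the estimator the Surrey
# team used. Crop size and coordinates are illustrative.
import cv2
import mediapipe as mp

def hand_pose_from_knuckle(frame_bgr, knuckle_xy, crop_size=96):
    """Crop a square patch centred on an (x, y) knuckle estimate and run a
    hand-keypoint model on it; returns 21 (x, y) points in full-frame pixel
    coordinates, or None if no hand is detected in the patch."""
    h, w = frame_bgr.shape[:2]
    x, y = int(knuckle_xy[0]), int(knuckle_xy[1])
    half = crop_size // 2
    x0, y0 = max(0, x - half), max(0, y - half)
    x1, y1 = min(w, x + half), min(h, y + half)
    crop = frame_bgr[y0:y1, x0:x1]

    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))

    if not result.multi_hand_landmarks:
        return None
    # Landmarks are normalised to the crop; map them back to frame coordinates.
    return [(x0 + lm.x * (x1 - x0), y0 + lm.y * (y1 - y0))
            for lm in result.multi_hand_landmarks[0].landmark]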
When 10-second clips of the videos were shown to 46 people, about 25% of whom were signers, all of them preferred SignGAN's output to that of other AI models.
From New Scientist
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA