Software developed by researchers at the University of Oxford and the University of Leeds has autonomously learned the basics of sign language by watching TV programs that are both subtitled and signed. The researchers first designed an algorithm to recognize the gestures made by a signer on TV, without assigning a meaning to those gestures. The software focuses on the arms to determine the rough location of the hands, then identifies flesh-colored pixels in those areas to extract precise hand shapes.
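The article does not give implementation details for this detection step. The sketch below shows one plausible way to restrict a simple flesh-color test to a region around the estimated hand position; the color thresholds, the function names skin_mask and hand_pixels, and the idea of passing in a bounding box produced by an arm tracker are all assumptions made for illustration, not the researchers' actual method.

```python
import numpy as np

def skin_mask(frame_rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of roughly flesh-colored pixels in an RGB frame.

    Uses a common rule-of-thumb RGB test; the actual color model used
    by the Oxford/Leeds system is not described in the article.
    """
    r = frame_rgb[..., 0].astype(np.int32)
    g = frame_rgb[..., 1].astype(np.int32)
    b = frame_rgb[..., 2].astype(np.int32)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def hand_pixels(frame_rgb: np.ndarray, hand_box: tuple) -> np.ndarray:
    """Restrict the skin mask to a box around the estimated hand location.

    `hand_box` = (top, left, bottom, right) would come from the arm
    tracker, which this sketch does not implement.
    """
    top, left, bottom, right = hand_box
    mask = np.zeros(frame_rgb.shape[:2], dtype=bool)
    mask[top:bottom, left:right] = skin_mask(frame_rgb[top:bottom, left:right])
    return mask
```

The resulting mask would then be the input to whatever hand-shape descriptor the system actually uses.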
The researchers exposed the system to about 10 hours of TV footage containing both sign language and subtitles, and tasked the software with learning the signs for a mix of 210 nouns and adjectives that appear several times throughout the footage. The program analyzes the signs that accompany each of these words whenever they appear in the subtitles. When it is not obvious which part of a signing sequence corresponds to a given word, the system compares multiple occurrences of the word to isolate and identify the correct sign. The software correctly learned 136 of the 210 words. University of Leeds researcher Mark Everingham says some words have more than one sign, so a 65 percent success rate is quite high given the complexity of the task.
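This cross-occurrence matching is essentially a weakly supervised search: every subtitle occurrence of a target word yields several candidate sign segments, and the segment that recurs most consistently across occurrences is taken as the sign. The following is a minimal sketch of that idea, assuming each candidate segment has already been reduced to a fixed-length feature vector; the function name pick_consistent_sign and the nearest-neighbor scoring are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def pick_consistent_sign(windows):
    """Choose the candidate sign that recurs most consistently.

    `windows` has one entry per subtitle occurrence of the target word;
    each entry is an array of shape (n_candidates, feature_dim) holding a
    descriptor for every candidate sign segment near that subtitle.
    Returns the index of the chosen candidate in the first window and its
    average cross-window distance (lower means more consistent).
    """
    if len(windows) < 2:
        raise ValueError("need at least two occurrences of the word to compare")
    best_idx, best_score = None, np.inf
    for i, cand in enumerate(windows[0]):
        dists = []
        for other in windows[1:]:
            # distance from this candidate to its closest match in each
            # other occurrence of the word
            dists.append(np.min(np.linalg.norm(other - cand, axis=1)))
        score = float(np.mean(dists))
        if score < best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```

A caller would build one `windows` entry per subtitle hit of, say, the word "rain" and read off the winning segment as the learned sign.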
Researchers at the University of Surrey have developed a similar system that scans all of the signs in a video sequence to identify signs that appear frequently and therefore likely represent common words. Both approaches could eventually be used to automatically animate digital avatars that sign fluently for deaf TV viewers.
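A crude version of that frequency mining can be sketched as a greedy grouping of sign descriptors followed by a count; the distance threshold, the descriptor representation, and the name frequent_signs are assumptions for illustration only, not the Surrey system's actual method.

```python
import numpy as np

def frequent_signs(descriptors, match_threshold=1.0, min_count=5):
    """Group sign descriptors greedily and keep the frequent groups.

    Each descriptor joins the first existing group whose prototype lies
    within `match_threshold`; otherwise it starts a new group.  Groups
    seen at least `min_count` times are returned as likely common signs.
    """
    prototypes, counts = [], []
    for d in descriptors:
        for i, p in enumerate(prototypes):
            if np.linalg.norm(d - p) <= match_threshold:
                counts[i] += 1
                break
        else:
            prototypes.append(d)
            counts.append(1)
    return [(p, c) for p, c in zip(prototypes, counts) if c >= min_count]
```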
From New Scientist
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA