An artificial intelligence (AI) program from DeepMind can read lips better than professional lip readers, after learning from thousands of hours of YouTube videos and their accompanying transcripts.
The researchers tested the program on 37 minutes of video it had not previously seen, and it misidentified only 41% of the words. By comparison, the best previous automated method, which works on individual letters rather than phonemes, had a 77% word error rate, while professional lip readers, shown the same footage without contextual or body-language cues, had a 93% error rate.
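The figures above are word error rates, the standard metric for speech and lip-reading transcription. As a rough illustration only (this is not DeepMind's code), word error rate is the word-level edit distance between a system's transcript and the reference, divided by the number of reference words; the sentences in the example below are made up.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance over words
    # (substitutions, insertions, and deletions each cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # Hypothetical transcripts: a 41% error rate means roughly 4 of every
    # 10 reference words are substituted, inserted, or deleted.
    ref = "the quick brown fox jumps over the lazy dog"
    hyp = "the quick brown box jumped over lazy dog"
    print(f"WER: {word_error_rate(ref, hyp):.0%}")  # prints WER: 33%
```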
Columbia University's Hassan Akbari says the AI, if incorporated into a phone, would enable hearing-impaired users to have a "translator" with them wherever they go.
Helen Bear at Queen Mary University of London in the U.K. envisions applications for the program that include analyzing security video, interpreting historical footage, and understanding what a Skype partner is saying when the audio goes dead.
From Science