A universal translator remains an elusive goal more than 60 years after researchers first set out to build one, and expert opinion varies on how soon one will arrive.
Microsoft Research's Vikram Dendi thinks this milestone is imminent thanks to achievements such as Skype Translator, which renders video-chat speech as spoken or written translations in up to seven languages.
Researchers now advance machine translation chiefly with neural networks, systems trained to loosely mimic human thought processes. These networks convert each word into a numerical vector and build up their accuracy as they attempt more translations. University of Montreal professor Yoshua Bengio believes the neural network technique holds more promise of reaching human-level performance because it concentrates on the meaning of words.
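To make the word-to-vector idea concrete, here is a toy Python sketch, not code from any production translator: the three-dimensional vectors are invented values, whereas real systems learn vectors with hundreds of dimensions from large corpora. Words with related meanings end up pointing in similar directions, which is what lets a network work with meaning rather than surface spelling.

    # Toy illustration: each word becomes a vector; similar meanings give similar vectors.
    # The vectors below are made-up values for illustration, not learned embeddings.
    import numpy as np

    embeddings = {
        "cat": np.array([0.9, 0.1, 0.0]),   # hypothetical vector for "cat"
        "dog": np.array([0.8, 0.2, 0.1]),   # near "cat": related meaning
        "car": np.array([0.1, 0.9, 0.3]),   # far from "cat": unrelated meaning
    }

    def cosine_similarity(a, b):
        """How closely two word vectors point in the same direction (1.0 = identical)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # ~0.98, similar words
    print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # ~0.21, dissimilar words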
Manually entering rules to teach computers to translate between languages quickly became onerous, so in the 1980s researchers explored a statistics-based model in which machines were fed human-translated content and inferred language rules and patterns themselves. Neural networks improved on this principle, and today's systems can glean more information about each word and conduct better probability analysis, avoiding unnatural-sounding translations.
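As a rough sketch of that statistical principle, again a toy example rather than any vendor's actual pipeline, the snippet below counts which words co-occur across a handful of invented English-Spanish sentence pairs and treats the most frequent pairing as the likeliest translation. Real systems use millions of sentence pairs and proper probability models, which is exactly where a naive count like this breaks down for common words such as "the".

    # Toy sketch of statistical translation: infer word pairings from
    # human-translated sentence pairs by counting co-occurrences.
    # The parallel corpus below is invented for illustration.
    from collections import Counter, defaultdict

    parallel_corpus = [
        ("the house", "la casa"),
        ("a house", "una casa"),
        ("the white house", "la casa blanca"),
        ("the cat", "el gato"),
    ]

    # Count how often each English word appears alongside each Spanish word.
    cooccurrence = defaultdict(Counter)
    for english, spanish in parallel_corpus:
        for e_word in english.split():
            for s_word in spanish.split():
                cooccurrence[e_word][s_word] += 1

    def most_likely_translation(word):
        """Return the foreign word most often seen alongside `word`."""
        return cooccurrence[word].most_common(1)[0][0]

    print(cooccurrence["house"])              # Counter({'casa': 3, 'la': 2, ...})
    print(most_likely_translation("house"))   # "casa"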
From National Public Radio
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA