Online translation has improved significantly in recent years due to a shift away from translation based on linguistic models and toward one based on statistics and probability theory.
IBM researchers proposed that although a computer would not understand the meaning of what it was translating, it could be programmed to find the most common sentence constructions and word alignments, and to estimate how likely words are to correspond between languages, by building a massive database of paired sentences in different languages. Statistical methods identified the most common sentence structures and groups of words in the paired sentences, and the database of common words and word groups grew enormously as researchers obtained more documents and their translations in other languages.
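To make the idea concrete, the sketch below runs a toy version of the word-alignment estimation that underlies this statistical approach, in the style of an IBM Model 1 expectation-maximization loop. The three-sentence English/German corpus and all names in it are illustrative assumptions, not data from the article or from IBM's systems; the point is only to show how co-occurrence statistics over paired sentences can pin down likely word correspondences.

```python
# Minimal sketch of word-alignment probability estimation (IBM Model 1 style EM).
# The toy corpus below is an illustrative assumption, not data from the article.

from collections import defaultdict

# Toy parallel corpus: (English sentence, German sentence) pairs.
corpus = [
    ("the house".split(), "das haus".split()),
    ("the book".split(),  "das buch".split()),
    ("a book".split(),    "ein buch".split()),
]

english_vocab = {e for e_sent, _ in corpus for e in e_sent}

# Start with uniform translation probabilities t(f | e).
t = defaultdict(lambda: 1.0 / len(english_vocab))

for iteration in range(10):
    # E-step: collect expected co-occurrence counts under the current t.
    count = defaultdict(float)   # expected count of (f, e) alignments
    total = defaultdict(float)   # expected count of e aligning to anything

    for e_sent, f_sent in corpus:
        for f in f_sent:
            # Probability mass of f being generated by any English word here.
            norm = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac

    # M-step: re-estimate t(f | e) from the expected counts.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# After a few iterations the statistics favor the consistent pairs
# (e.g. "haus" with "house", "buch" with "book", "das" with "the").
for (f, e), p in sorted(t.items(), key=lambda kv: -kv[1]):
    if p > 0.5:
        print(f"t({f} | {e}) = {p:.2f}")
```

The computer never "understands" either language; repeated co-occurrence across many sentence pairs is what makes the correct word correspondences stand out.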
Google's introduction of the first free, statistically based translation software four years ago was a major leap forward in computer translation, and Systran CEO Dimitris Sabatakakis notes that the technology has made tremendous strides as a consequence. Experts expect continued improvements in translation systems as the databases they use grow and as computer researchers become better able to embed linguistic information. Scientists also anticipate the imminent arrival of better speech-to-speech software, which will permit simultaneous translation in meetings, for example.
From The Washington Post
Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA