Artificial intelligence (AI) researchers at Stanford University have found that advanced AI systems can work out linguistic principles like grammar by themselves, by essentially playing billions of fill-in-the-blank games similar to Mad Libs.
The systems gradually build their own models of how words relate to one another, becoming steadily better at predicting the missing words.
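To make the fill-in-the-blank objective concrete, the sketch below asks a pretrained BERT model to rank candidate words for a masked position. The Hugging Face transformers library, the bert-base-uncased checkpoint, and the example sentence are assumptions chosen for illustration, not details from the studies.

```python
# A minimal sketch of the "fill-in-the-blank" (masked language modeling)
# objective described above. Library, checkpoint, and sentence are
# illustrative assumptions, not the Stanford researchers' actual setup.
from transformers import pipeline

# Wrap a pretrained BERT model in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT scores candidate words for the blank using context on both sides.
for prediction in unmasker("The chef [MASK] the soup before serving it."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

The top-ranked fillers are typically words that fit the slot grammatically as well as semantically, which is exactly the behavior billions of rounds of this game reward.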
In one study, researchers used Google's BERT (Bidirectional Encoder Representations from Transformers) language-processing model and observed that it had learned sentence structure, identifying nouns and verbs as well as subjects, objects, and predicates.
This enhanced its ability to extract the intended meaning of sentences that might otherwise be confusing. A second study of BERT found that the model apparently could infer universal grammatical relationships that apply across many different languages, which should make it easier for a system that has learned one language to learn others, even languages that appear to share few commonalities.
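One common way to test whether a model has picked up such structure is to "probe" its internal representations. As a hedged sketch of that general technique (not the researchers' actual experimental setup), the code below trains a simple classifier on frozen BERT token embeddings to check whether they separate nouns from verbs; the tiny hand-labeled dataset, the checkpoint, and the embed helper are illustrative assumptions.

```python
# A sketch of "probing": fit a simple classifier on frozen BERT token
# embeddings to test whether they encode part-of-speech distinctions.
# The toy dataset and helper below are illustrative, not the studies' data.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# (sentence, 1-based index of the target word, part-of-speech label)
examples = [
    ("the dog runs fast", 2, "NOUN"), ("the dog runs fast", 3, "VERB"),
    ("a bird sings loudly", 2, "NOUN"), ("a bird sings loudly", 3, "VERB"),
    ("the cat sleeps here", 2, "NOUN"), ("the cat sleeps here", 3, "VERB"),
]

def embed(sentence, word_idx):
    """Return the BERT embedding of the target word's first subword."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    # Map the 1-based word index to its first subword token position.
    token_pos = enc.word_ids(0).index(word_idx - 1)
    return hidden[token_pos].numpy()

X = [embed(s, i) for s, i, _ in examples]
y = [label for _, _, label in examples]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict([embed("the fox jumps high", 2)]))  # expect NOUN
```

If a classifier this simple can read the distinction off the frozen embeddings, the structure must already be present in the model rather than learned by the probe, which is the logic behind findings like those described above.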
From Stanford Institute for Human-Centered Artificial Intelligence
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA