Scientists trained an AI through the eyes of a baby in an effort to learn how children associate new words with specific objects.
Researchers at New York University strapped a headcam recorder onto a child named Sam, recording from the age of six months until his second birthday.
The footage, containing 250,000 word instances and the corresponding images, was fed to an AI model, which learned to recognize different objects much as Sam did.
"By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words — whether they need language-specific biases, innate knowledge, or just associative learning to get going," said Brenden Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology, and senior author of a paper published in the journal Science.
The camera captured 61 hours of footage, roughly one percent of Sam's waking hours, which was used to train a Child's View for Contrastive Learning (CVCL) model to link words to images. The CVCL model accurately linked images and text about 61.6 percent of the time.
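The core idea behind contrastive word-image learning can be illustrated with a small sketch. The function below is a hypothetical, simplified CLIP-style symmetric contrastive loss, not the paper's exact CVCL objective: matching image/text embedding pairs are pushed together while mismatched pairs in the same batch are pushed apart.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Simplified symmetric contrastive loss over a batch of
    paired image and text embeddings (illustrative sketch only)."""
    # L2-normalize embeddings so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by a temperature parameter
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # matching pairs lie on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), labels].mean()

    # Average the image-to-text and text-to-image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Under this kind of objective, a frame of a ball heard alongside the word "ball" forms a positive pair, while that frame paired with other words in the batch forms negatives; aligned pairs yield a lower loss than shuffled ones.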
From Daily Mail