
Communications of the ACM

ACM TechNews

The Race to Make AI Smaller, Smarter



The challenge is that language models learn very differently from humans.

Credit: Matt Rota

The BabyLM Challenge, organized by computer scientists at institutions including Johns Hopkins University and Switzerland's ETH Zurich, is aimed at creating more accessible, intuitive language models, in stark contrast to the race for ever-larger language models undertaken by big tech companies.

The goal is to produce a mini language model trained on a dataset less than one-ten-thousandth the size of those used by the most advanced large language models.

As part of the challenge, researchers have been tasked with training language models on about 100 million words, with the winning model chosen based on how effectively it generates language and understands its nuances.

From The New York Times
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2023 SmithBucklin, Washington, DC, USA
