Formed by researchers in Stanford's Human-Centered Artificial Intelligence group, the Center for Research on Foundation Models (CRFM) is calling for an investment in academic efforts and resources to create large neural network models—foundation models—to study their capabilities, limitations, and societal impact.
But the newly formed CRFM has drawn criticism from other researchers, who question whether enough attention is being paid to the human and environmental costs of further scaling up large language models. Building such models involves scraping data from the Internet at scale, without the knowledge or explicit consent of those who produced it; that data encodes society's biases, and training on it consumes massive computational resources, all in service of tools that empower the tech industry.
From Nature