Scientists at the Massachusetts Institute of Technology and Canada's University of Regina suggest that crowdsourced assessments from groups of ordinary readers can virtually match those of professional fact-checkers in vetting news stories.
The researchers examined 207 stories flagged for fact-checking by Facebook's algorithms, and recruited 1,128 U.S. residents through Amazon's Mechanical Turk crowdsourcing platform.
Each participant was shown the headline and lead sentence of 20 stories and answered questions used to produce an overall accuracy score for each article, while three professional fact-checkers evaluated all 207 stories.
When the lay readers were sorted into groups containing equal numbers of Democrats and Republicans, their average ratings correlated strongly with the professional fact-checkers' ratings.
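As a rough illustration of that aggregation step (not the study's actual code or data), the sketch below simulates politically balanced reader groups rating stories on an assumed 7-point accuracy scale, averages each group's ratings per story, and computes the Pearson correlation with a simulated fact-checker benchmark; the group size, scale, and noise levels are all hypothetical.

```python
# Minimal sketch of crowd-rating aggregation, with synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

n_stories = 207      # stories flagged by Facebook's algorithms (from the study)
group_size = 26      # assumed: 13 Democrats + 13 Republicans per balanced group

# Synthetic "true" accuracy per story, plus noisy individual reader ratings.
true_accuracy = rng.uniform(1, 7, n_stories)  # 7-point scale (assumption)
reader_ratings = true_accuracy[:, None] + rng.normal(0, 1.5, (n_stories, group_size))

# Crowd score: mean rating of the balanced group for each story.
crowd_scores = reader_ratings.mean(axis=1)

# Benchmark: average of three professional fact-checkers' (less noisy) ratings.
fact_checker = (true_accuracy[:, None] + rng.normal(0, 0.8, (n_stories, 3))).mean(axis=1)

# Pearson correlation between crowd averages and the fact-checker benchmark.
r = np.corrcoef(crowd_scores, fact_checker)[0, 1]
print(f"crowd vs. fact-checker correlation: r = {r:.2f}")
```

The point the simulation makes is the one the study relies on: individual readers are noisy, but averaging across a balanced group cancels much of that noise, so the group mean tracks the expert benchmark closely.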
From MIT News