
Communications of the ACM

ACM Opinion

Why Is Facebook So Afraid of Checking Facts?


Fact-checking works. The biggest social network in the world has the wrong idea for how to fight Covid-19 conspiracies.

Credit: Sam Whitney/Getty Images

A video laden with falsehoods about Covid-19 emerged on Facebook last week and has now been viewed many millions of times. The company has taken steps to minimize the video's reach, but its fact-checks, in particular, appear to have been applied with a curious, if not dangerous, reticence. The reason for that reticence should alarm you: It seems that the biggest social network in the world is, at least in part, basing its response to pandemic-related misinformation on a misreading of the academic literature.

At issue is the company's long-standing deference to the risk of so-called "backfire effects." That is to say, Facebook worries that the mere act of trying to debunk a bogus claim may only help to make the lie grow stronger. CEO and founder Mark Zuckerberg expressed this precise concern back in February 2017: "Research shows that some of the most obvious ideas, like showing people an article from the opposite perspective, actually deepen polarization," he said. The company would later cite the same theory to explain why it had stopped applying "red flag" warnings to fallacious headlines: "Academic research on correcting misinformation," a Facebook product manager wrote, has shown that such warnings "may actually entrench deeply held beliefs."

Facebook's fear of backfire hasn't abated in the midst of this pandemic, or the infodemic that came with it. On April 16, the company announced a plan to deal with rampant Covid-19 misinformation: In addition to putting warning labels on some specific content, it would show decidedly non-specific warnings to those who'd interacted with a harmful post and nudge them toward more authoritative sources. The vagueness of these latter warnings, Facebook told the website STAT, was meant to minimize the risk of backfire.

But here's the thing: Whatever Facebook says (or thinks) about the backfire effect, this phenomenon has not, in fact, been "shown" or demonstrated in any thorough way. Rather, it's a bogeyman: a zombie theory from the research literature circa 2008 that has since been all but abandoned. More recent studies, encompassing a broad array of issues, find the opposite is true: On almost all topics, almost all of the time, the average person, whether Democrat or Republican, young or old, well-educated or not, responds to facts just the way you'd hope, by becoming more factually accurate.

Yes, it's possible to find exceptions. If you follow all this research very carefully, you'll be familiar with the rare occasions when, in experimental settings, corrections have failed. If you have a day job, though, and need a rule of thumb, try this: Debunkings and corrections are effective, full stop. This summary puts you much closer to the academic consensus than does the suggestion that backfire effects are widespread and pose an active threat to online discourse.

 

From Wired