Competing Agendas Roil Online Content Moderation Efforts



The inconsistency of regulations focusing on online content moderation is "a worrying trend," according to UN Human Rights, and has immense consequences for public debate and participation.

Credit: United Nations Human Rights Office of the High Commissioner

The disputatious environment surrounding what constitutes legitimate expression and what constitutes misinformation and disinformation on the Internet could get even more complicated by the end of this month.

Tomorrow (Aug. 25), the portion of the European Union's Digital Services Act (DSA) affecting what the EU terms "Very Large Online Platforms" and "Very Large Online Search Engines" takes effect.

In an April 25 statement designating the 19 platforms and search engines that met the EU's definition of "very large" (more than 45 million monthly active users in the EU), Thierry Breton, the EU's commissioner for the Internal Market, plainly laid out the new responsibilities the online giants will face.

"Four months from today's designation, they will not be able to act as if they were 'too big to care'," Breton said. "We consider that these 19 online platforms and search engines have become systemically relevant and have special responsibilities to make the Internet a safe and trustworthy space."

Breton outlined four points on which the platforms will need to meet DSA stipulations. The third of those guiding points specifically addresses the spread of online misinformation and disinformation:

"Curbing the spread of illegal content, tackling disinformation, and protecting freedom of speech will no longer be just a civic responsibility, but a legal obligation," Breton said. "Very large online platforms and search engines will be obliged to adapt their recommender system to prevent algorithmic amplification of disinformation. I cannot overemphasize the importance of this point, as evidenced by current events. Malicious actors are actively exploiting online platforms to distort the information environment."

Under ideal conditions, then, one might expect online content providers worldwide to take heed of policymakers' attempts to rein in bad information.

"It seems reasonable that once the firms have identified a particular issue area in which there are large clumps of misinformation, they should be able to act responsibly and consistently to be able to prevent that stuff," Philip Howard, the director of Oxford University's Program on Democracy and Technology, told Communications. Howard is also the co-founder and chair of the International Panel on The Information Environment (IPIE), a global multi-disciplinary effort of more than 200 scientists launched in May in an attempt to provide scientific rigor to discovering online falsehoods.

Alas, conditions are not ideal. While the EU's deadline draws ever closer, a cadre of U.S. lawmakers is trying to steer national policy in the other direction, toward an "anything goes" environment. The House Judiciary Committee, chaired by Jim Jordan (R-OH), has taken aim not only at the Democratic administration of President Joseph Biden, but also at researchers who, Jordan and his Republican colleagues claim, have coordinated with Biden administration officials to censor constitutionally protected free speech. That effort has included subpoenas and letters to researchers at Stanford University, the University of Washington, and other institutions, as well as a recently introduced bill its sponsors dubbed the "Free Speech Protection Act." Among the bill's stipulations is one that forbids any researcher receiving a grant from an executive branch agency from designating "any creator of news content, regardless of medium, as a source of misinformation or disinformation" during the funding term.

The committee also has steadily pursued the social media platforms themselves with letters and subpoenas demanding information on its claims that they are colluding with the Biden administration to censor free speech. The committee did not respond to a request for comment from Communications.

University of Texas researcher Samuel Woolley, project director for propaganda research at the university's Center for Media Engagement, told Communications this fragmentation of policy creates a "game of Whack-a-Mole, attempting to attend to one set of policies or regulations while simultaneously trying to listen to thousands of others, and that's pretty impossible. That's why you see the IPIE and other groups like it emerging, attempting to create some kind of cohesion and systematic look at how we can generate international policies that do actually attend to these things on a multi-platform, Internet-wide level."

However, given that a recent U.S. Supreme Court decision upheld social platforms' immunity from liability for content their users post under Section 230 of the Communications Decency Act of 1996, the likelihood of the platforms creating a coherent global template for fighting misinformation and disinformation is unknown. Neither Google nor Facebook parent Meta responded to requests for comment about their global content moderation strategies after the DSA goes into effect.

Still, both Howard and Woolley are optimistic that the DSA's legally binding framework allowing "vetted researchers" to perform content analysis will offer an opportunity to reduce concerted efforts to sway public opinion using false information.

'Grading their own homework'

Howard said the IPIE is not about to attempt to microscopically examine online content and address what he called the "'small p' political question" of what truth is.

"Unfortunately, I don't think the IPIE will help with that part. You can't get a bunch of scientists to validate truth claims on a Twitter post. What you can do is get a bunch of scientists and engineers to evaluate if somebody is interfering with the infrastructure ," such as the proliferation of thousands of fake accounts claiming that an event such as a school shooting never happened.

The organization has already published a trio of reports on the status of misinformation and disinformation online, and has also addressed what it considers the still-nascent state of rigorous scholarship around the issue.

For example, in the third report, IPIE authors found that of 4,798 publications in peer-reviewed journals, only 588 were empirical studies working with evidence about countermeasures to misinformation. Of those, only 18 tested countermeasures in ways that allow for the aggregation of knowledge; translation of study data into actionable insights remains rare.

"Unfortunately, relatively few research publications test specific countermeasures they propose using real-world data," the report concludes. "Some of the solutions offered in the literature are too broad to guide policy."

The new availability of real-world data from the very large platforms under the DSA mandate, however, may be just the catalyst needed for a new era of rigorous research to drive policy.

"Up until now, I like to say the technology firms have been grading their own homework," Howard said. "Sometimes a firm will say, 'We have improved our news algorithm' and there is nobody who can say they haven't or there is no evidence of that. The IPIE will stand in and be that organization, though it's very likely that for the first year or two, a lot of what we say will be that there is no evidence. Because the technology firms do not share data in a responsible way, so it's very difficult for us to validate the things they say."

Among the provisions of the DSA is the creation of national-level digital service coordinators, who are tasked with evaluating researchers' applications for access to very large platforms' data. Vetted researchers are subject to various conditions, such as university affiliation, independence from commercial interests, and compliance with confidentiality and security requirements.

In an April publication, researchers from the Berlin-based Hertie School, a governance-focused postgraduate institution, laid out a concise prospectus of how cooperation among coordinators, researchers, and policymakers may finally begin to create some sort of consensus on ferreting out systemic disinformation and misinformation campaigns in Europe.

"I think what the EU has been doing has all been internally incredibly logical," Joanna Bryson, one of the paper's co-authors, said. "It's what we need to do. It conforms with rule of law and the declaration of human rights, and it's all accurate. But it may create an impossible conundrum for the content providers."

Howard said the IPIE's formation was inspired by the work of the Intergovernmental Panel on Climate Change (IPCC), which also brought together scientists of many disciplines to tackle an overarching threat to the world. The IPCC was formed in 1988, and many of its forecasts appear to be borne out by recent severe and anomalous weather events. Howard said he hopes any guidance from the IPIE will be accepted and acted upon more quickly than the IPCC's warnings have been.

"Can you imagine another 30 years of this information environment?" he said. "Public life will be in tatters. I think we'll need to move much more quickly."

 

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.


 
