
Communications of the ACM

Legally Speaking

The Push for Stricter Rules for Internet Platforms


[Illustration: scissors and paper clips on the text of Section 230. Credit: Andrij Borys Associates, Shutterstock]

One of the few things about which U.S. Republican and Democratic politicians generally agree these days is that the law widely known as § 230 of the Communications Decency Act needs to be repealed, amended, or reinterpreted.

Section 230(c)(1) provides Internet platforms with a shield from liability for content posted by others. It states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Although computing professionals might question whether these 26 words truly "created the Internet,"a Internet platform companies and most technology law specialists would say this characterization is only a slight exaggeration, at least as to sites that host user-posted content.

Although Donald Trump and Joe Biden have both recommended that Congress repeal this provision, their reasons are starkly different. Trump and other Republican critics think Internet platforms take down too much content in reliance on this law. They claim platforms are biased against conservative viewpoints when they remove or demote such postings. Democratic critics of § 230 blame Internet platforms for not taking down more harmful content, such as disinformation about COVID-19 or elections. They think repealing or amending § 230 would make platforms more responsible participants in civil society.

Short of repeal, several initiatives aim to change § 230. Eleven bills have been introduced in the Senate and nine in the House of Representatives to amend § 230 in various ways. President Trump issued an Executive Order directing the National Telecommunications and Information Administration (NTIA) to petition the Federal Communications Commission (FCC) to engage in rulemaking to interpret § 230 more narrowly than courts have done. Moreover, Justice Clarence Thomas of the U.S. Supreme Court recently criticized court decisions giving a broad interpretation of § 230, signaling receptivity to overturning them.

This column explains the origins of § 230 and its broad interpretation. It then reviews proposed changes and speculates about what they would mean for Internet platforms.


Overview of § 230

In saying Internet platforms are neither "speakers" nor "publishers" of information posted by others, § 230(c)(1) protects platforms from lawsuits for unlawful content, such as defamation, posted by their users. Victims can sue the "speakers" who posted the unlawful content, but courts almost always dismiss victims' lawsuits against platforms shortly after filing.

Why would victims sue platforms? First, victims may not be able to identify wrongdoers because harmful postings are often anonymous. Second, victims typically want judges to order the platforms to take down harmful content. Third, platforms generally have more resources than wrongdoers, so victims who want compensation for harms could likely recover more money from platforms.

The 1995 Stratton Oakmont v. Prodigy case, which catalyzed the enactment of § 230, illustrates the point. An anonymous user posted messages on a Prodigy bulletin board claiming that Stratton Oakmont had engaged in securities fraud. Stratton Oakmont responded by suing Prodigy and the anonymous user for defamation, initially asking for $100 million in damages. Even though Prodigy did not know of or contribute to the defamatory content, the court refused to dismiss the case. It regarded Prodigy as a "publisher" of the defamation because of its stated policy of exercising editorial control over content on its site. (The case settled after Prodigy apologized.)

In addition, § 230(c)(2) says platforms are not liable for any action they take "in good faith to restrict access to or availability of material that the provider … considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected."

Section 230 was enacted as part of a 1996 overhaul of U.S. telecommunications law. Its goal was to encourage emerging Internet services to monitor their sites for harmful content without the risk of being treated as publishers of user-posted content. Congress thought this law would foster the growth of the Internet sector of the economy, as indeed it has.


Zeran's Broad Interpretation of § 230

Zeran v. America Online was the first court decision to interpret § 230. The dispute arose after someone posted Ken Zeran's telephone number on AOL in advertisements for T-shirts glorifying the 1995 Oklahoma City terrorist bombing. The postings directed interested AOL users to call Zeran about the shirts. Zeran received hundreds of telephone calls, including death threats, even though he knew nothing about the ads and was not even an AOL user.

Zeran asked AOL staff on several occasions to remove the ads. After AOL failed to follow through on assurances the ads would be deleted, Zeran sued AOL for negligently failing to protect him from harms resulting from this post.

AOL asked the court to dismiss Zeran's lawsuit based on § 230. Although Zeran did not claim AOL was a publisher who had defamed him by not taking down the ads, the court construed his negligence claim as an attempted evasion of Congress' intent to protect online services from lawsuits over content posted by third parties. Hence, the court granted AOL's motion to dismiss.

Relying on Zeran, online platforms have routinely avoided legal liability through § 230 defenses. Numerous cases have featured very sympathetic plaintiffs, such as victims of revenge porn, fraudulent ads, and professional defamation, and some unsympathetic defendants who seem to have encouraged or tolerated harmful postings.


Proposals to Amend § 230

In late 2020, a bill was introduced in the Senate to repeal § 230 outright. Numerous other bills, however, would significantly modify § 230 rather than repeal it (bill names and numbers, sponsors, and links can be found at https://bit.ly/3iHUtw8).

Members of Congress have taken several different approaches to amending § 230. Some would widen the categories of harmful conduct for which § 230 immunity is unavailable. At present, § 230 does not apply to user-posted content that violates federal criminal law, infringes intellectual property rights, or facilitates sex trafficking. One proposal would add to this list violations of federal civil laws.

Some bills would condition § 230 immunity on compliance with certain requirements or make it unavailable if the platforms engage in behavioral advertising. Others would require platforms to spell out their content moderation policies with particularity in their terms of service (TOS) and would limit § 230 immunity to takedowns consistent with those policies. Still others would allow users whose content was taken down in "bad faith" to bring a lawsuit to challenge this and be awarded $5,000 if the challenge was successful.

Some bills would impose due process requirements on platforms concerning removal of user-posted content. Other bills seek to regulate platform algorithms in the hope of stopping the spread of extremist content or eliminating biases.


Possible Ambiguities in § 230?

NTIA's petition asserts there is an ambiguity in § 230 about the relationship between § 230(c)(1) and § 230(c)(2) that the FCC should resolve through rulemaking. It posits that the function of § 230(c)(1) should be to shield platforms from liability for allowing user-posted content to remain on their sites. Takedowns of user-posted content should instead be governed by its sister provision, § 230(c)(2).

The NTIA petition asserts that takedowns of "otherwise objectionable" content would not be sheltered by § 230(c)(2) unless the content was similar in nature to the named categories (for example, lewd or harassing). NTIA does not accept that platforms can construe that term broadly. Takedowns of "disinformation," for instance, would under this interpretation be ineligible for the § 230(c)(2) immunity shield. The FCC is unlikely to proceed with the proposed rulemaking under the Biden administration.




Equally ambiguous, in NTIA's view, is the meaning of "good faith" in § 230(c)(2). The NTIA petition asserts this standard cannot be satisfied if the takedown is "deceptive, pretextual, or inconsistent with [the platform's] terms of service." Moreover, it regards "good faith" as requiring due process protections. In NTIA's view, user-posted content cannot be taken down unless the platform has notified users, explained the basis for its takedown decisions, and provided users with a meaningful opportunity to be heard.


Narrowing § 230 By Interpretation?

Neither legislation nor an FCC rulemaking may be necessary to significantly curtail § 230 as a shield from liability. Conservative Justice Thomas has recently suggested a reinterpretation of § 230 that would support imposing liability on Internet platforms as "distributors" of harmful content.

A key precedent on distributor liability is the Supreme Court's 1959 decision in Smith v. California. Smith owned a bookstore that sold books, candy, and other sundries. A Los Angeles ordinance forbade the sale of obscene or indecent books at such stores. Smith was convicted of selling obscene books, even though he had not read the books at issue and did not know their contents. The Supreme Court reversed Smith's conviction, holding that Los Angeles' strict liability ordinance violated the First Amendment of the U.S. Constitution. Distributors of obscene books must know or have reason to know of the illegal contents to be subject to prosecution.

Applying Smith to platforms under § 230 could result in Internet platforms being considered "distributors" of unlawful content once on notice of such content. Section 230, after all, shields these services from liability as "speakers" and "publishers," but is silent about possible "distributor" liability.

Endorsing this interpretation would be akin to adopting the notice-and-takedown rules that apply when platforms host user-uploaded files that infringe copyrights. Notice-and-takedown regimes have long been problematic because false or mistaken notices are common, and platforms often quickly take down challenged content, even if it is lawful, to avoid liability.


Conclusion

Civil liberties groups, Internet platforms, and industry associations still support § 230, as do Senator Ron Wyden and former Congressman Chris Cox, who co-sponsored the bill that became § 230. Wyden and Cox have pointed out that an overwhelming majority of the 200 million U.S.-based Internet platforms depend on § 230 to protect them against unwarranted lawsuits by disgruntled users and by those harmed by user-posted content of which the platforms were unaware and over which they had no control.

For the most part, these platforms promote free speech interests of their users in a responsible way. Startup and small nonprofit platforms would be adversely affected by some of the proposed changes insofar as the changes would enable more lawsuits against platforms for third-party content. Fighting lawsuits is costly, even if one wins on the merits.

Much of the fuel for the proposed changes to § 230 has come from conservative politicians who no longer control the Senate. The next Congress will have a lot of work to do, and § 230 reform is unlikely to be a high priority in the near term. Yet some adjustments to the law seem quite likely over time, because platforms are widely viewed as having too much power over users' speech and as neither transparent nor consistent about their policies and practices.


Author

Pamela Samuelson ([email protected]) is the Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley, CA, USA.


Footnotes

a. Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019).


Copyright held by author.


 
