I read with interest Michael A. Cusumano's October 2021 column, "Section 230 and a Tragedy of the Commons." As the author of a book about Section 230's history (The Twenty-Six Words That Created the Internet, Cornell University Press, 2019), I welcome discussion of this important statute. Unfortunately, Cusumano's column contains some fundamental factual errors that further muddle a debate already rife with inaccuracies.
Perhaps most concerning was Cusumano's characterization of a "specific dilemma" that social media platforms face: "If they edit too much content, then they become more akin to publishers rather than neutral platforms and that may invite strong legal challenges to their Section 230 protections." This statement is false. Section 230 does not have—and never has had—a requirement that platforms be "neutral." To the contrary, Section 230's authors were motivated by a 1995 state court ruling suggesting that online services receive less protection from liability for user content under the common law if they exercise "editorial control." As Sen. Ron Wyden, Section 230's co-author, told Vox in 2019: "Section 230 is not about neutrality. Period. Full stop. 230 is all about letting private companies make their own decisions to leave up some content and take other content down." Congress provided platforms with Section 230 protections to give them the breathing room to develop the moderation technology, policies, and practices that they believe their users demand. As the first federal appellate court to interpret Section 230 wrote in 1997, under the statute, "lawsuits seeking to hold a service provider liable for its exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content—are barred."