At the center of the debate over regulation of social media and the Internet is Section 230 of the U.S. Communications Decency Act of 1996. This law grants online platforms immunity from civil liability based on third-party content.22 It has fueled the growth of digital businesses since the mid-1990s and remains invaluable to the operations of social media platforms.6 However, Section 230 also makes it difficult to hold these companies accountable for misinformation or disinformation they pass on as digital intermediaries. Contrary to some interpretations, Section 230 has never prevented platforms from restricting content they deemed harmful and in violation of their terms of service. For example, several months before suspending the accounts of former President Donald Trump, Twitter and Facebook started to tag some of his posts as untrue or unreliable, and Google's YouTube began to edit some of his videos. Nevertheless, online platforms have been reluctant to edit too much content, and most posts continue to spread without curation. The problem with false and dangerous content also does not seem to have subsided since the presidential election: Social media is now the major source of anti-vaccine diatribes and other misleading health information.21
Given the law, social media platforms face a specific dilemma: If they edit too much content, then they become more akin to publishers than to neutral platforms, which may invite strong legal challenges to their Section 230 protections. If they restrict too much content or ban too many users, then they diminish network effects and the associated revenue streams. Fake news and conspiracy theories often go viral and have been better for business than real news, generating billions of dollars in advertising.1
Tragedy of the Commons or Fallacy of Composition?
Section 230 and false information are important issues. I respect the author and the effort that went into writing about these topics. I would also like to offer some constructive criticism.
What are the problems with Section 230, exactly? It is difficult to discern from this article. There seems to be one central problem, stated in the first paragraph: "However, Section 230 also makes it difficult to hold these companies accountable for misinformation or disinformation they pass on as digital intermediaries." This raises the question: why should online platforms be held accountable for retransmitting misinformation and disinformation when there are already laws in place for false advertising and defamation? That question is also difficult to answer based on this article.
Perhaps it is because "Social media is now the major source of anti-vaccine diatribes and other misleading health information." The writer supported this claim by linking to a Washington Post article about a single anti-vaccine incident at a Miami school. There was no empirical data referenced in that article. Furthermore, is there any empirical evidence that health misinformation from social media is making society less healthy than it was in the past?
Perhaps it is because "... we risk a potential tragedy of the commons, where trust in online platforms declines to the point where very few people believe what they read on the Internet." But that is the fallacy of composition, not a tragedy of the commons. Distrust of a specific platform such as Facebook or Twitter does not cause distrust of the Internet in general. The Internet is an ecosystem of evolving platforms that continually undergo disruption and innovation. There is no evidence that it has reached any kind of limitation or stagnation comparable to an isolated island in the South Pacific Ocean. (As a side note, research by Catrine Jarman indicates that rats consumed the trees and that South American slavers raided Easter Island; see https://www.sapiens.org/archaeology/easter-island-demise/. It is debatable whether Easter Island was an economic tragedy of the commons, but I digress.)
Perhaps it is because, as the author writes, "I once thought better education in science as well as history and ethics would help people think more clearly" and "Education alone, apparently, is not the answer." I agree, but I think education is a very important part of the solution. Have our schools prioritized information literacy and critical thinking skills? If so, is it possible to improve those programs and institutions?
If I buy a product that does not live up to its advertised benefits, I can join a class action lawsuit against the advertiser for false advertising. If other users on Twitter defame me, I can sue them for defamation. Why aren't false advertising and defamation laws enough? The article does not say.
I also contend that Facebook and Twitter, or new social media platforms, will be part of the solution. Facebook and Twitter are by now well aware of the epistemic problems they have introduced into society. As the article points out, "User trust, especially in dominant social platforms such as Facebook and Twitter ... has been declining for years." If Facebook and Twitter don't innovate, users will move to newer and better platforms. Don't give up hope. Optimism and innovation will win.