Why repealing or weakening Section 230 is a very bad idea
This is the second installment in a two-part series on Section 230. This part focuses on Section 230's importance to free speech on the internet and addresses some common misconceptions about the law. See Part 1 for an overview of Section 230's text, purpose, and history.
Section 230 made the internet fertile ground for speech, creativity, and innovation, supporting the formation and growth of diverse online communities and platforms. Today we take for granted that we can go online and find many different places to speak our minds, connect with people, and share and view photos, videos, music, art, and other creative content.
Repealing or weakening Section 230 would jeopardize all of that.
As I explained in Part 1, Section 230 "gives 'interactive computer services' – including discussion forums, hosted blogs, video platforms, social media networks, crowdfunding sites, and many other services that host third-party speech – broad immunity from liability for what their users say." The statute "also grants the services broad immunity from liability for declining to host, publish, or platform third-party speech."
Without Section 230, websites would be left with a menu of unattractive options to avoid lawsuits over their users' speech. Many would likely change their business model and stop hosting user-generated content altogether, creating a scarcity of platforms that sustain our ability to communicate with each other online. This changed landscape would entrench the dominance of large platforms that can afford to defend endless lawsuits and devote extensive resources to moderating vast amounts of user content.
Surviving platforms would moderate content more aggressively and maybe even screen all content before it's posted. That isn't a recipe for a thriving, free-speech-friendly internet. If you think Twitter and Facebook go too far with content moderation now, just imagine how much more aggressively these platforms would moderate if threatened with liability for users' speech. None of these platforms has the capacity to carefully review all content, let alone make consistently accurate judgments about its legality. They would likely tweak their algorithms, which already produce lots of false positives, to take down even more content.
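To see why automated screening sweeps up lawful speech, consider a deliberately simplified sketch in Python. The blocklist, the `is_flagged` function, and the sample posts are all invented for illustration; no real platform's system is this crude, but more sophisticated classifiers fail in analogous ways at scale.

```python
# A toy keyword filter - a hypothetical sketch, not any real platform's
# moderation system - showing how crude automated screening flags
# lawful speech along with the content it actually targets.

BLOCKLIST = {"scam", "kill"}  # hypothetical banned substrings

def is_flagged(text: str) -> bool:
    """Flag a post if any blocklisted term appears anywhere in it."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

posts = [
    "Warning: that crypto giveaway is a scam",  # flagged, yet it's a helpful warning
    "Ten drills to sharpen your chess skill",   # flagged: 'kill' hides inside 'skill'
    "Pictures from my trip to the lake",        # not flagged
]

for post in posts:
    print(f"{is_flagged(post)!s:5} | {post}")
```

Both "flagged" posts here are perfectly lawful, and one of them warns users about a scam, which is exactly the kind of speech over-cautious filtering removes first.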
Section 230 doesn't just protect platforms from liability for unlawful content created by others: It also facilitates the prompt dismissal of frivolous lawsuits, often in cases that don't even involve unlawful speech. Without Section 230, many of these lawsuits would still cause platforms major headaches by requiring them to engage in extensive discovery and pretrial motions.
What's more, pre-approving posts would destroy an essential feature of so many websites: their users' ability to interact with each other in real time, to comment on current events while they're still current. It would also limit platforms' growth, constraining the amount of new content based on a platform's capacity to review it.
You might ask: Can't platforms just take a hands-off approach like CompuServe? Recall from Part 1 that, before Section 230 was enacted, a federal court ruled CompuServe – which did not moderate its forums' content – couldn't be held liable for allegedly defamatory speech posted by a third party in one of those forums.
But even a platform that took that approach would be liable for posts it knew or had reason to know about. That still means reviewing an unmanageable number of complaints about allegedly unlawful content, not to mention maintaining a speech-limiting notice-and-takedown system. Inevitably, complaints would come from people simply annoyed or offended by a user's speech, and many platforms would take the easy, risk-averse way out by summarily removing challenged content rather than thoroughly investigating the merits of each complaint.
Plus, one purpose of Section 230 is to promote platforms' self-governance. Platforms' exercise of editorial judgment over the content they host is itself expressive, and protected by the First Amendment. It's part of how many online communities define themselves and how people decide where they want to spend their time online. A discussion forum that wants to be "family friendly" may decide to ban profanity. A baseball forum may limit posts that aren't about baseball. An NRA message board may remove messages opposing gun rights. And many platforms restrict spam and self-advertising.
Large social media companies can be frustratingly arbitrary in their content moderation. But if every online platform were a free-for-all, we would no longer benefit from the great diversity of communities and discussion spaces on the internet that cater to different people's interests and desired user experiences.
Section 230 is under threat
Presidents from both major parties have called for abolishing Section 230. Other politicians have proposed changes to the law, and some have already succeeded in chipping away at it.
Such efforts to narrow Section 230's scope or make its legal protections conditional would produce negative consequences for free speech. Carving out certain categories of speech from Section 230's protection, for example, would lead platforms to restrict more speech to minimize legal risk.
We've already seen that happen with enactment of SESTA/FOSTA – the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act – which, in addition to criminalizing the online "promotion or facilitation" of prostitution and "reckless disregard" of sex trafficking, creates new civil liability and exposure to state prosecutions in connection with those and related offenses. This overbroad law goes well beyond targeting involvement in coercive sex trafficking: It restricts speech protected by the First Amendment.
Social media platforms are now heavily incentivized to suppress speech related to prostitution rather than risk civil or even criminal liability. That's exactly what happened immediately after SESTA/FOSTA became law. Reddit banned multiple subreddits related to sex, none of which were dedicated advertising forums. Craigslist completely shut down its personals section, saying it couldn't risk liability for what users post there "without jeopardizing all our other services."
As the Woodhull Freedom Foundation, Human Rights Watch, and other plaintiffs challenging SESTA/FOSTA argued, "Websites that support sex workers by providing health-related information or safety tips could be liable for promoting or facilitating prostitution, while those that assist or make prostitution easier – i.e., 'facilitate' it – by advocating for decriminalization are now uncertain of their own legality."
In 2021, Facebook founder Mark Zuckerberg proposed making Section 230's legal protections contingent on having "adequate systems" in place for "identifying unlawful content and removing it." But as the Electronic Frontier Foundation pointed out, that would benefit wealthy platforms like Facebook at the expense of countless other platforms, as the "vast majority of online services that host user-generated content do not have the technical, legal, or human resources to create systems that could identify and remove unlawful content." It would potentially require pre-screening content, and it would inevitably lead to more removal of lawful speech, given the high failure rate of both human reviewers and algorithms.
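The point about failure rates is, at bottom, arithmetic: at platform scale, even a very accurate classifier wrongly removes an enormous amount of lawful speech. Here is a back-of-the-envelope sketch with invented but plausible numbers; the post volume, the share of unlawful posts, and the accuracy figure are all assumptions, not measurements of any real system.

```python
# Back-of-the-envelope math on classifier error at platform scale.
# Every number below is an assumption chosen for illustration.

daily_posts = 1_000_000_000   # assumed volume: one billion posts per day
unlawful_rate = 0.01          # assume 1% of posts are actually unlawful
accuracy = 0.99               # assume the classifier is right 99% of the time

lawful_posts = daily_posts * (1 - unlawful_rate)
false_positives = lawful_posts * (1 - accuracy)  # lawful posts wrongly removed

print(f"Lawful posts wrongly removed per day: {false_positives:,.0f}")
# -> Lawful posts wrongly removed per day: 9,900,000
```

Under these assumptions, a system "adequate" enough to satisfy such a mandate would still take down nearly ten million lawful posts every day, and most smaller services could not field a classifier anywhere near that accurate.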
Common complaints and misconceptions about Section 230
Politicians and critics on both the left and the right unhappy with how social media companies moderate content often blame Section 230. Conservatives complain that it allows platforms to censor conservative voices with impunity, while liberals fault the law for allowing platforms to duck responsibility for hosting "hate speech," extremist content, and mis- and disinformation. But many of the attacks on Section 230 are misguided.
"Section 230 is just a handout to Big Tech!"
The conversation around Section 230 often revolves around large social media companies like Twitter and Facebook. But the law protects all internet services and platforms that host or share others' speech online, from Wikipedia, to Substack, to Indiegogo, to OkCupid, to Yelp, to the smallest blogs and websites. Importantly, the immunity provision also applies to any "user of an interactive computer service." Courts have held that this language protects individuals who share content by, for instance, forwarding an email or tweeting a link to an article.
Eliminating Section 230 would actually stifle Big Tech's competition, disproportionately affecting startups and smaller websites with less money and fewer resources to moderate content and fend off lawsuits. This, in part, explains why Zuckerberg, the CEO of the largest social media platform in the world, supports Section 230 reform.
"Section 230 might have been important when the internet was getting off the ground in the 1990s, but it has outlived its usefulness."
Section 230 is arguably even more necessary today. Expecting platforms to review all user content was unrealistic even in the '90s, when many forums hosted thousands of new posts each day. Today, platforms like Facebook, Instagram, YouTube, and WhatsApp serve billions of users. Hundreds of millions of tweets are posted each day. More than 500 hours of video are uploaded to YouTube every minute. While automated content moderation is more common today, it's far from perfect, and many smaller websites do not have access to advanced algorithmic tools.
The internet is still evolving and expanding. And it needs Section 230 to do so.
"Social media platforms shouldn't evade liability for hosting hate speech, misinformation, and other speech that causes harm."
Anyone who raises this objection to Section 230 needs to brush up on the First Amendment, which protects the overwhelming majority of speech labeled "hate speech" or "misinformation." That means platforms still wouldn't be liable for hosting this speech if Section 230 were revoked. The First Amendment would continue to protect them. But invoking that protection could require costlier and more protracted litigation in many instances, which would have to be waged on a case-by-case basis.
"Section 230 requires platforms' content moderation to be fair and neutral."
No, it doesn't. In fact, a neutrality requirement – which has been proposed in Congress – would raise First Amendment issues by providing immunity only to those platforms that moderate speech the way the government prescribes.
As private entities, social media platforms (and other websites) have a First Amendment right to exercise editorial discretion in deciding what speech to host, free from government coercion. Far from requiring neutrality, Section 230 gives that editorial discretion extra protection, precisely to promote the development of a wide variety of online communities.
Besides, a legislatively imposed fairness or neutrality requirement would itself be impossible to enforce fairly. The terms "fair" and "neutral" are vague, subjective, and ripe for abuse. A platform's motive for restricting a given piece of content is not always clear. Where one person sees a straightforward application of the rules, another will see politically biased censorship. Imposing a neutrality requirement on topic- or viewpoint-based platforms would be especially absurd. Would a cycling forum that removed posts saying "biking sucks" also have to take down posts about how great biking is, so it could remain "neutral"?
That's why Section 230(c)(2)'s protection of the editorial discretion to refuse, block, or drop content rests principally on what the service provider considers objectionable. Any attempt to impose fairness, neutrality, or any other metric would improperly substitute the government's judgment for the service's constitutionally protected editorial discretion.
"Section 230 protects platforms, not publishers. Social media companies that engage in extensive content moderation are acting as publishers and don't find shelter under Section 230."
This claim is a close cousin of the "neutrality" objection. It's akin to saying platforms lose Section 230 protection for doing the very thing Section 230 protects. As explained, the law's purpose is to immunize platforms from liability for users' speech while also protecting their right to do as much or as little content moderation as they wish. When determining whether Section 230 applies, the relevant question isn't whether a platform is acting as a "platform" or a "publisher." It's whether someone other than the online service created the content at issue, and whether anything the service did to that content created the basis for liability.
"Print publishers can be liable for content they publish. Why should digital platforms get special privileges?"
When digital platforms create and offer their own content, they can be liable just like print publishers. Similarly, if they alter or contribute to a third party's content and, in so doing, create the basis for alleged liability, Section 230 immunity may not apply. But absent such responsibility, in whole or part, for creating or developing the content, digital platforms are more like libraries, newsstands, and bookstores.
Again, in that context, the enormous amount of content posted online makes reviewing all of it effectively impossible. And within the online sphere, Section 230 treats all publishers the same: It doesn't matter whether you're a social media company, a news outlet, or a blog. Many print publishers, including newspapers, also produce content online, where they receive the same Section 230 protections as any other website for any third-party content they host.
We need to preserve the foundation for free speech on the internet
Section 230 is no less important to online free speech today than it was upon its enactment almost three decades ago.
Certainly, there are legitimate concerns about how a handful of social media platforms dominate the online speech market and may overregulate their users' speech. These companies' concentrated power makes their content moderation policies more consequential, giving them significant influence over public discourse. FIRE has taken platforms like Twitter to task for policies and decisions that undermine a culture of free expression.
But the answer isn't to harness government power to control private platforms' editorial decisions. Repealing or undermining Section 230 would lead only to less expressive freedom and viewpoint diversity online – to the detriment of us all.