In the United States, you are free to speak, but you are not free of responsibility for what you say. If your speech is defamatory, you can be sued. If you are a publisher, you can be sued for the speech you pass along. But online services such as Facebook and Twitter can pass along almost anything, with almost no legal accountability, thanks to a law known as Section 230.
President Donald Trump has been pressuring Congress to repeal the law, which he blames for allowing Twitter to put warning labels on his tweets. But the real problem with Section 230, which I used to strongly support, is the kind of internet it has enabled. The law lets large sites benefit from network effects (I’m on Facebook because my friends are on Facebook) while shifting the costs of scale, like shoddy moderation and homogenized communities, to users and society at large. That’s a bad deal. Congress should revise Section 230—just not for the reasons the president and his supporters have identified.
When the law was enacted in 1996, the possibility that monopolies could emerge on the internet seemed ludicrous. But the facts have changed, and now so must our minds.
In the early 1990s, emerging digital technologies created a quandary. Online public forums, where users could post whatever they liked, were among the earliest and most exciting applications of digital networks. But hosting such a forum was arguably akin to a newspaper publishing a Letters to the Editor page without bothering to read the letters, which would be a prescription for legal catastrophe.
In two landmark cases, courts began to grapple with the issue. In 1991, a federal court ruled that the online service CompuServe was a mere distributor, rather than a publisher, of the material that it hosted, and so was not liable for its content. Its competitor Prodigy, however, was deemed to be liable in a New York state court ruling four years later, because Prodigy moderated user forums. By acting as an editor and not a mere conduit, the court reasoned, Prodigy made itself a publisher rather than a distributor.
In 1996, Congress passed the Communications Decency Act, a law meant to crack down on digital smut. From a decency perspective, the legal standard that had emerged from the CompuServe and Prodigy lawsuits seemed, well, perverse. Prodigy was liable because it had tried to do the right thing; CompuServe was immune because it had not. So Section 230 of the act stipulated that providers of internet forums would not be liable for user-posted speech, even if they selectively censored some material.
Much of the Communications Decency Act was quickly struck down by the Supreme Court, but Section 230 survived. It quietly reshaped our world. Courts interpreted the law as giving internet services a so-called safe harbor from liability for almost anything involving user-generated material. The Electronic Frontier Foundation describes Section 230 as “one of the most valuable tools for protecting freedom of expression and innovation on the Internet.” The internet predated the law. Yet the legal scholar Jeff Kosseff describes the core of Section 230 as “the twenty-six words that created the internet,” because without it, the firms that dominate the internet as we have come to know it could not exist. Maybe that would be a good thing.
Services such as Facebook, Twitter, and YouTube are not mere distributors. They make choices that shape what we see. Some posts are circulated widely. Others are suppressed. We are surveilled, and ads are targeted into our feeds. Without Section 230 protections, these firms would be publishers, liable for all the obscenity, defamation, threats, and abuse that the worst of their users might post. They would face a bitter, perhaps existential, dilemma. They are advertising businesses, reliant on reader clicks. A moderation algorithm that erred on the side of a lawyer’s caution would catch too much in its net, leaving posters angrily muzzled and readers with nothing more provocative than cat pics to scroll through. An algorithm that erred the other way would open a floodgate of lawsuits.
But the internet is not Facebook or Twitter, and it shouldn’t be. Fifteen years ago, the major social-media platforms barely existed. Was the internet better or worse? The online public square, now dominated by Twitter, then consisted of independent blogs aggregated by user-curated feeds. Bloggers are publishers, legally responsible for their posts, but the blogosphere was not noted for its blandness. White-hot critique was common, but defamation and abuse were not, except in unmoderated user comments, for which bloggers could disclaim responsibility, thanks to Section 230.
In a democracy, public conversation is a kind of collective cognition. Before the internet, Americans thought together in pamphlets and newspapers. On the early internet, we thought together in blogs and journals. Today we think together on Twitter and Facebook and within the shrinking circle of journalism’s survivors. Which internet did a better job of keeping Americans informed? Which internet was more open, in the sense of permitting unknown voices with valuable insights to gain a hearing?
Forums where people can chat and post would continue to exist without Section 230. We know this because few countries offer internet platforms such sweeping protections, yet small user forums are everywhere. What distinguishes the United States is that it is home to gigantic, ad-driven sites like Facebook, Twitter, Google, and YouTube.
In practice, the effect of Section 230 has not been to enable experimentation or free expression but to allow leading websites to operate on a massive scale. When you are legally responsible for what happens on your site, you have to moderate the content that appears there. But moderating well is hard. You hope to encourage thoughtful participation both by parties who might be hurt by incautious speech and by parties who might feel stifled by overcaution. Moderation becomes harder as the scale and scope of a community grow. Speech that would be great fun in a mixed-martial-arts forum might be disruptive or even harmful in a forum for trauma survivors. Managing a community of both would be challenging. A pluralistic society embraces a wide variety of communities, porous but meaningfully distinct, each with its own culture. In law, the phrase “community standards” signifies a deference to the heterogeneity of norms regarding appropriate expression. In an ironic, even Orwellian, turn, social-media titans have repurposed the phrase to describe their own one-size-fits-all rules, to which every community must now defer or be excluded from the arteries of contemporary civil society.
Without Section 230, the costs and legal jeopardy associated with operating user forums would grow with the size of the forum. Courts apply speech laws with careful sensitivity to context. Most of the time, “I’m gonna kill you!” is not a criminal threat. Occasionally it is. In smaller forums with well-defined norms, it’s easy for both users and moderators to tell the difference. In a nowhere land of everybody, it’s just hard to know. With less context but deep pockets, large forums would have to err on the side of dull caution. Small forums would not.
A more plural internet would be a freer internet, as different communities could have wildly divergent standards about permissible expression. By creating the conditions under which we are all herded into the same virtual space, Section 230 helped turn the internet into a conformity machine. We regulate one another’s speech through shame or abuse, but we have nowhere to go where our own expression might be more tolerable. And while Section 230 immunizes providers from legal liability, it turns those providers into agents of such concentrated influence that they are objects of constant political concern. When the Facebook founder Mark Zuckerberg and the Twitter co-founder Jack Dorsey are routinely (and justifiably!) browbeaten before Congress, it’s hard to claim that Section 230 has insulated the public sphere from government interference.
Before the internet, we had a communications ecosystem that included network television, metropolitan daily newspapers, niche journals and magazines, and theater and film that audiences chose to attend, voting with their feet. A film might be breathtaking and provocative but not appropriate for prime-time TV. Contemporary social media have swallowed this diversity of forums. Revising Section 230 would be a blow to platforms like Facebook and Twitter. It would force them into the niche that network television once occupied: ubiquitous but consequently bland. A new legal standard should encourage websites to moderate content posted by users (as Section 230 was intended to do), and it should recognize that forums for mixed-martial-arts fans and trauma survivors might apply different norms. But a new standard should not immunize hosts from risk of liability (as Section 230 does) even after notice that material they are hosting is defamatory, threatening, or otherwise unlawful.
In many cases, providers will be correct—and courageous—not to take down material to which some party objects, even under threat of lawsuit. Making affirmative choices about what a broader public ought to see, in spite of controversy, is the very essence of what it means to be a publisher. Online publishers should enjoy the same legal protections as the countless pamphleteers and newspaper editors who came before them, but they should also bear the same responsibility to justify their choices.
Courage requires human judgment, the shirking of which forms the basis of the social-media business model. If made liable for posts flagged as defamatory or unlawful, mass-market platforms including Facebook and Twitter would likely switch to a policy of taking down those posts automatically. Incentives would shift: Mass platforms would have to find a balance among maximizing viewership, encouraging responsible posting, and discouraging users who frivolously flag other people’s posts. They would no longer get a free pass when publishing ads that are false or defamatory. Even these platforms’ highest-profile users could not assume that everything they posted would be amplified to millions of other people.
Vigorous argument and provocative content would migrate to sites where people take responsibility for their own speech, or to forums whose operators devote attention and judgment to the conversations they host. The result would be a higher-quality, less consolidated, and ultimately freer public square.