What is Section 230 and why does Donald Trump want to change it?

This provision of the Communications Decency Act is being blamed for everything from social-media bias to enabling revenge porn. Here’s how to understand the law that created the modern internet.
August 13, 2019
United States Senator Josh Hawley (Stefani Reynolds/picture-alliance/dpa/AP Images)

Section 230 is one of the pieces of legislation that allowed today’s internet—and Facebook, Twitter, and YouTube—to develop. Now it’s being accused of enabling everything from anti-conservative censorship to revenge porn, and politicians on both sides of the aisle are calling for change. Most recently, President Donald Trump’s office drafted an executive order that would limit the provision’s protections. Though the executive order may change or be abandoned completely, expect the sound and fury over Section 230 to continue.

Here’s a breakdown of what Section 230 does—and doesn’t—do, and why it’s become such a political punching bag. 

What is Section 230? 

Section 230 of the Communications Decency Act of 1996 states that, with some exceptions, internet companies are not legally responsible for the content they host if it was published there by someone else.

The classic example involves Yelp, explains Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and author of The Twenty-Six Words That Created the Internet, a book about Section 230. If someone posts a defamatory restaurant review on Yelp, Section 230 ensures that the restaurant can sue the person who wrote the review, but not Yelp itself. It protects everything from Facebook and YouTube to the recently deplatformed 8chan.

Section 230 also says that platforms can’t be held liable for taking down, in good faith, any content they or their users consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” In other words, they are not obliged to host anything they don’t want to.

Why was Section 230 adopted?

Section 230 was inspired by a 1995 case involving Stratton Oakmont, the firm founded by stockbroker Jordan Belfort, who was played by Leonardo DiCaprio in The Wolf of Wall Street. Stratton Oakmont sued the internet service provider Prodigy Services for defamation after someone wrote on a Prodigy message board that the firm had committed fraud.

Prodigy lost. Because Prodigy moderated its message boards, a New York court ruled that the service was responsible for the content on them. Ironically, Prodigy would have been legally protected if it hadn’t bothered moderating at all.  

Politicians worried that this ruling would make websites give up on moderation, allowing all kinds of extreme content to flourish. To override it, then-representatives Ron Wyden and Chris Cox created Section 230. It allows companies to moderate content without fearing that they’ll be sued if they can’t do it perfectly. The biggest caveat is that Section 230 has never protected activity that violates federal criminal law. (Some actions, like defamation, are civil wrongs rather than crimes; criminal offenses are a more serious category of conduct.)

What are the biggest misconceptions about Section 230? 

One big misconception, which has been repeated by Republican senator Ted Cruz, is that platforms can only enjoy Section 230 protection if they are “neutral.” Actually, Section 230 applies regardless of a company’s political bent. 

Another misconception is that Section 230 creates a legal distinction between a “platform” and a “publisher.” That’s not the case, says Kosseff. The distinction that matters is whether a website is hosting its own defamatory posts or defamatory posts from a third party. If MIT Technology Review published a defamatory article, we could be sued. If someone posted a defamatory comment on an article, we would be protected.

Another mistaken belief is that Section 230 has something to do with copyright. Rules on when a platform must take down content that violates copyright are not set by Section 230, but by the Digital Millennium Copyright Act.

Finally, as already mentioned, Section 230 does not provide blanket immunity: it doesn’t protect websites whose actions violate federal criminal law. Unfortunately, there is no parallel exception for state criminal law, so platforms remain shielded from state prosecutions even though states are usually the first to criminalize things like sex trafficking.

Why is everyone talking about Section 230 now? 

As scrutiny of Big Tech increases, politicians—both Democrats and Republicans—are concerned with how Section 230 has shaped the internet. Last year, lawmakers passed a bill, known as FOSTA-SESTA, that added a new exception to Section 230: platforms can now be held responsible for third-party content that facilitates sex trafficking. As debates over misinformation and free speech continue, the new fight is over moderation, and whether there’s too much or too little.

What are the arguments for changing Section 230? 

Some Republicans believe that companies are using Section 230 as cover to moderate content however they want, and are exercising anti-conservative bias in what they choose to take down. (There is currently no evidence that this supposed anti-conservative bias on social media exists.) In June, Josh Hawley, a Republican senator, introduced a bill that would strip Section 230 immunity from big social-media sites unless they could prove they hadn’t moderated in a politically biased way. Under this plan, these companies would be audited by the Federal Trade Commission every two years, and employees who showed bias would have to be disciplined or fired. The bill has been widely criticized as extremely vague and hard to enforce.

President Trump has also been concerned with anti-conservative bias. Last week it was reported that his office had drafted an executive order that would let the White House regulate how social media is moderated. Section 230 granted these sites the power to moderate on their own terms, but the executive order would subject them to guidelines developed by the Federal Communications Commission, making them more liable for the content that appears on their platforms.

Meanwhile, Democratic critics like House Speaker Nancy Pelosi think that tech companies are using Section 230 to avoid taking responsibility for misinformation, hate speech, and other dangerous content. For example, Section 230 allows YouTube to keep operating even though pedophiles swarm the comments sections of videos of children. In an April interview, Pelosi called Section 230 a “gift” to the tech companies that they aren’t treating with respect. “It is not out of the question that that could be removed,” added Pelosi, who was herself the subject of a doctored video that Facebook refused to take down. (We argued that Facebook was right, by the way.)

Similarly, Danielle Citron, a law professor at Boston University, has written extensively about how Section 230 can make it hard to punish bad actors. She cites the example of The Dirty, a site devoted to posting “dirt” on people, often accusations of cheating or sexually transmitted diseases. Much of the information is clearly false and harmful, but courts applying Section 230 have ruled that the site’s founder cannot be held responsible for the content. The same issues are at stake with sites that host revenge pornography, Citron writes: Section 230 shields them from liability, so the intimate photos are not taken down.

What are the arguments for keeping Section 230?

Section 230 is not perfect, but it has been essential in allowing platforms to exist while doing some moderation. “Section 230 was intended to encourage moderation,” says Kosseff. “Whether or not it’s done a very good job of that is a very legitimate question that we need to be asking, but the whole point is to provide platforms with the certainty that they can adopt the moderation practices that consumers believe necessary without being exposed to liability.”

If there were no Section 230, or if there were significant barriers to qualifying for its immunity, the entire internet ecosystem would look different. Some sites would shut down. Others might stop moderating completely, opening the door to even more terrible content.
