
How to fix the internet

If we want online discourse to improve, we need to move beyond the big platforms.

October 17, 2023
Illustration by Erik Carter

We’re in a very strange moment for the internet. We all know it’s broken. That’s not news. But there’s something in the air—a vibe shift, a sense that things are about to change. For the first time in years, it feels as though something truly new and different might be happening with the way we communicate online. The stranglehold that the big social platforms have had on us for the last decade is weakening. The question is: What do we want to come next?

There’s a sort of common wisdom that the internet is irredeemably bad, toxic, a rash of “hellsites” to be avoided. That social platforms, hungry to profit off your data, opened a Pandora’s box that cannot be closed. Indeed, there are truly awful things that happen on the internet, things that make it especially toxic for people from groups disproportionately targeted with online harassment and abuse. Profit motives led platforms to ignore abuse too often, and they also enabled the spread of misinformation, the decline of local news, the rise of hyperpartisanship, and entirely new forms of bullying and bad behavior. All of that is true, and it barely scratches the surface. 

But the internet has also provided a haven for marginalized groups and a place for support, advocacy, and community. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh. It can send you a pizza. It’s duality, good and bad, and I refuse to toss out the dancing-baby GIF with the tubgirl-dot-png bathwater. The internet is worth fighting for because despite all the misery, there’s still so much good to be found there. And yet, fixing online discourse is the definition of a hard problem. But look. Don’t worry. I have an idea. 

What is the internet and why is it following me around?

To cure the patient, first we must identify the disease. 

When we talk about fixing the internet, we’re not referring to the physical and digital network infrastructure: the protocols, the exchanges, the cables, and even the satellites themselves are mostly okay. (There are problems with some of that stuff, to be sure. But that’s an entirely other issue—even if both do involve Elon Musk.) “The internet” we’re talking about refers to the popular kinds of communication platforms that host discussions and that you probably engage with in some form on your phone. 

Some of these are massive: Facebook, Instagram, YouTube, TikTok, X. You almost certainly have an account on at least one of these; maybe you’re an active poster, maybe you just flip through your friends’ vacation photos while on the john.

Although the exact nature of what we see on those platforms can vary widely from person to person, they mediate content delivery in universally similar ways that are aligned with their business objectives. A teenager in Indonesia may not see the same images on Instagram that I do, but the experience is roughly the same: we scroll through some photos from friends or family, maybe see some memes or celebrity posts; the feed turns into Reels; we watch a few videos, maybe reply to a friend’s Story or send some messages. Even though the actual content may be very different, we probably react to it in much the same way, and that’s by design. 

The internet also exists outside these big platforms; it’s blogs, message boards, newsletters, and other media sites. It’s podcasts and Discord chatrooms and iMessage groups. These offer more individualized experiences that may be wildly different from person to person. They often exist in a sort of parasitic symbiosis with the big, dominant players, feeding off each other’s content, algorithms, and audiences.

The internet is good things. For me, it’s things I love, like Keyboard Cat and Double Rainbow. It’s personal blogs and LiveJournals; it’s AIM away messages and MySpace top 8s. It’s the distracted-girlfriend meme and a subreddit for “What is this bug?” It is a famous thread on a bodybuilding forum where meatheads argue about how many days are in a week. For others, it’s Call of Duty memes and the mindless entertainment of YouTubers like MrBeast, or a place to find the highly specific kind of ASMR video they never knew they wanted. It’s an anonymous supportive community for abuse victims, or laughing at Black Twitter’s memes about the Montgomery boat brawl, or trying new makeup techniques you learned on TikTok.

It’s also very bad things: 4chan and the Daily Stormer, revenge porn, fake news sites, racism on Reddit, eating disorder inspiration on Instagram, bullying, adults messaging kids on Roblox, harassment, scams, spam, incels, and increasingly needing to figure out if something is real or AI. 

The bad things transcend mere rudeness or trolling. There is an epidemic of sadness, of loneliness, of meanness, that seems to self-reinforce in many online spaces. In some cases, it is truly life and death. The internet is where the next mass shooter is currently getting his ideas from the last mass shooter, who got them from the one before that, who got them from some of the earliest websites online. It’s an exhortation to genocide in a country where Facebook employed too few moderators who spoke the local language because it had prioritized growth over safety.

The existential problem is that both the best and worst parts of the internet exist for the same set of reasons, were developed with many of the same resources, and often grew in conjunction with each other. So where did the sickness come from? How did the internet get so … nasty? To untangle this, we have to go back to the early days of online discourse.

The internet’s original sin was an insistence on freedom: it was made to be free, in many senses of the word. The internet wasn’t initially set up for profit; it grew out of a communications medium intended for the military and academics (some in the military wanted to limit Arpanet to defense use as late as the early 1980s). When it grew in popularity along with desktop computers, Usenet and other popular early internet applications were still largely used on university campuses with network access. Users would grumble that each September their message boards would be flooded with newbies, until eventually the “eternal September”—a constant flow of new users—arrived in the mid-’90s with the explosion of home internet access.

When the internet began to be built out commercially in the 1990s, its culture was, perversely, anticommercial. Many of the leading internet thinkers of the day belonged to a cohort of Adbusters-reading Gen Xers and antiestablishment Boomers. They were passionate about making software open source. Their very mantra was “Information wants to be free”—a phrase attributed to Stewart Brand, founder of the Whole Earth Catalog and a cofounder of the pioneering internet community the WELL. This ethos also extended to a passion for freedom of speech, and a sense of responsibility to protect it.

It just so happened that those people were quite often affluent white men in California, whose perspective failed to predict the dark side of the free-speech, free-access havens they were creating. (In fairness, who would have imagined that the end result of those early discussions would be Russian disinformation campaigns targeting Black Lives Matter? But I digress.) 

The culture of free demanded a business model that could support it. And that was advertising. Through the 1990s and even into the early ’00s, advertising on the internet was an uneasy but tolerable trade-off. Early advertising was often ugly and annoying: spam emails for penis enlargement pills, badly designed banners, and (shudder) pop-up ads. It was crass but allowed the nice parts of the internet—message boards, blogs, and news sites—to be accessible to anyone with a connection.   

But advertising and the internet are like that small submersible sent to explore the Titanic: the carbon fiber works very efficiently, until you apply enough pressure. Then the whole thing implodes.

Targeted advertising and the commodification of attention

In 1999, the ad company DoubleClick was planning to combine personal data with tracking cookies to follow people around the web so it could target its ads more effectively. This changed what people thought was possible. It turned the cookie, originally a neutral technology for storing Web data locally on users’ computers, into something used for tracking individuals across the internet for the purpose of monetizing them. 
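
To make the mechanism concrete, here’s a minimal sketch of a third-party tracking pixel—written in Python with Flask purely for illustration; the domain, endpoint, and cookie names are invented, and this is not DoubleClick’s actual system. Any page that embeds a tiny image from the tracker’s domain sends along both the tracker’s cookie and the address of the page you’re reading, which is all it takes to build a cross-site profile.

```python
# Illustrative only: a hypothetical ad-tech tracker, not any real company's code.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
profiles = {}  # tracker_id -> list of pages this browser has been seen on

@app.route("/pixel.gif")
def pixel():
    # Any site embedding <img src="https://ads.example/pixel.gif"> triggers this request.
    tracker_id = request.cookies.get("tid") or uuid.uuid4().hex
    page = request.headers.get("Referer", "unknown")  # the page that embedded the pixel
    profiles.setdefault(tracker_id, []).append(page)

    resp = make_response(b"GIF89a")  # a stub GIF payload
    # The cookie is scoped to the tracker's own domain, so the same ID comes back
    # on every publisher site that carries the pixel.
    resp.set_cookie("tid", tracker_id, max_age=60 * 60 * 24 * 365,
                    secure=True, samesite="None")
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```

Multiply that little dictionary of IDs and page visits by a few billion browsers and you have the raw material for “relevant” ads.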

To the netizens of the turn of the century, this was an abomination. And after a complaint was filed with the US Federal Trade Commission, DoubleClick dialed back the specifics of its plans. But the idea of advertising based on personal profiles took hold. It was the beginning of the era of targeted advertising, and with it, the modern internet. Google bought DoubleClick for $3.1 billion in 2008. That year, Google’s revenue from advertising was $21 billion. Last year, Google parent company Alphabet took in $224.4 billion in revenue from advertising. 

Our modern internet is built on highly targeted advertising using our personal data. That is what makes it free. The social platforms, most digital publishers, Google—all run on ad revenue. For the social platforms and Google, their business model is to deliver highly sophisticated targeted ads. (And business is good: in addition to Google’s billions, Meta took in $116 billion in revenue for 2022. Nearly half the people living on planet Earth are monthly active users of a Meta-owned product.) Meanwhile, the sheer extent of the personal data we happily hand over to them in exchange for using their services for free would make people from the year 2000 drop their flip phones in shock. 

And that targeting process is shockingly good at figuring out who you are and what you are interested in. It’s targeting that makes people think their phones are listening in on their conversations; in reality, it’s more that the data trails we leave behind become road maps to our brains. 

When we think of what’s most obviously broken about the internet—harassment and abuse; its role in the rise of political extremism, polarization, and the spread of misinformation; the harmful effects of Instagram on the mental health of teenage girls—the connection to advertising may not seem immediate. And in fact, advertising can sometimes have a mitigating effect: Coca-Cola doesn’t want to run ads next to Nazis, so platforms develop mechanisms to keep them away. 

But online advertising demands attention above all else, and it has ultimately enabled and nurtured all the worst of the worst kinds of stuff. Social platforms were incentivized to grow their user base and attract as many eyeballs as possible for as long as possible to serve ever more ads. Or, more accurately, to serve ever more you to advertisers. To accomplish this, the platforms have designed algorithms to keep us scrolling and clicking, the result of which has played into some of humanity’s worst inclinations.  

In 2018, Facebook tweaked its algorithms to favor more “meaningful social interactions.” It was a move meant to encourage users to interact more with each other and ultimately keep their eyeballs glued to News Feed, but it resulted in people’s feeds being taken over by divisive content. Publishers began optimizing for outrage, because that was the type of content that generated lots of interactions.  
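
To see how that incentive works in miniature, here’s a toy ranking function in Python. The weights are invented for the sake of the example—Facebook’s real formula was never fully public—but the logic is the point: replies and reshares earn far more than a passive like, so the post engineered to provoke wins the feed.

```python
# A toy feed ranker with made-up weights -- an illustration of the incentive,
# not any platform's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: "meaningful" interactions (comments, reshares)
    # count for much more than a passive like.
    return 1.0 * post.likes + 15.0 * post.comments + 30.0 * post.reshares

feed = [
    Post("Nice sunset from my balcony", likes=120, comments=4, reshares=1),
    Post("You won't BELIEVE what they just voted for", likes=40, comments=85, reshares=60),
]

# The divisive post ranks first despite having a third as many likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post.text)
```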

On YouTube, where “watch time” was prioritized over view counts, algorithms recommended and ran videos in an endless stream. And in their quest to hold attention, these algorithms frequently led people down ever more labyrinthine corridors to the conspiratorial realms of flat-earth truthers, QAnon, and their ilk. Algorithms on Instagram’s Explore page are designed to keep us scrolling (and spending) even after we’ve exhausted our friends’ content, often by promoting popular aesthetics whether or not the user has previously shown interest in them. The Wall Street Journal reported in 2021 that Instagram had long understood it was harming the mental health of teenage girls through content about body image and eating disorders, but ignored those reports. Keep ’em scrolling.

There is an argument that the big platforms are merely giving us what we wanted. Anil Dash, a tech entrepreneur and blogging pioneer who worked at Six Apart, the company that developed the blog software Movable Type, remembers a backlash when his company started charging for its services in the mid-’00s. “People were like, ‘You’re charging money for something on the internet? That’s disgusting!’” he told MIT Technology Review. “The shift from that to, like, If you’re not paying for the product, you’re the product … I think if we had come up with that phrase sooner, then the whole thing would have been different. The whole social media era would have been different.”

The big platforms’ focus on engagement at all costs made them ripe for exploitation. Twitter became a “honeypot for a**holes” where trolls from places like 4chan found an effective forum for coordinated harassment. Gamergate started in swampier waters like Reddit and 4chan, but it played out on Twitter, where swarms of accounts would lash out at the chosen targets, generally female video-game critics. Trolls also discovered that Twitter could be gamed to get vile phrases to trend: in 2013, 4chan accomplished this with #cuttingforbieber, falsely claiming to represent teenagers engaging in self-harm for the pop singer. Platform dynamics created such a target-rich environment that intelligence services from Russia, China, and Iran—among others—use the platforms to sow political division and disinformation to this day.

“Humans were never meant to exist in a society that contains 2 billion individuals,” says Yoel Roth, a technology policy fellow at UC Berkeley and former head of trust and safety for Twitter. “And if you consider that Instagram is a society in some twisted definition, we have tasked a company with governing a society bigger than any that has ever existed in the course of human history. Of course they’re going to fail.”

How to fix it

Here’s the good news. We’re in a rare moment when a shift just may be possible; the previously intractable and permanent-seeming systems and platforms are showing that they can be changed and moved, and something new could actually grow.

One positive sign is the growing understanding that sometimes … you have to pay for stuff. And indeed, people are paying individual creators and publishers on platforms such as Substack, Patreon, and Twitch. Meanwhile, the premium tiers offered by YouTube, Spotify, and Hulu prove that (some) people are willing to shell out for ad-free experiences. A world where only the people who can afford $9.99 a month get to ransom back their time and attention from crappy ads isn’t ideal, but it at least demonstrates that a different model can work.

Another thing to be optimistic about (although time will tell if it actually catches on) is federation—a more decentralized version of social networking. Federated networks like Mastodon, Bluesky, and Meta’s Threads are all just Twitter clones on their surface—a feed of short text posts—but they’re also all designed to offer various forms of interoperability. Basically, whereas your current social media account and data exist in a walled garden controlled entirely by one company, you could be on Threads and follow posts from someone you like on Mastodon—or at least Meta says that’s coming. (Many—including free-software pioneer Richard Stallman, who has a page on his personal website devoted to “Why you should not be used by Threads”—have expressed skepticism of Meta’s intentions and promises.) Even better, federation enables more granular moderation. Again, X (the website formerly known as Twitter) provides a good example of what can go wrong when one person, in this case Elon Musk, has too much power in making moderation decisions—something federated networks and the so-called “fediverse” could solve.
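
In the Mastodon-and-Threads corner of this world, that interoperability rests on open protocols: ActivityPub for the posts themselves and WebFinger for looking up accounts (Bluesky uses its own AT Protocol). Here’s a rough sketch in Python—using the requests library, with a real public Mastodon account purely as an example—of the lookup step that lets one server find a user on another.

```python
# A rough sketch of fediverse account discovery via WebFinger (RFC 7033).
import requests

def resolve_handle(handle: str) -> str:
    """Turn a handle like '@Gargron@mastodon.social' into an ActivityPub actor URL."""
    user, domain = handle.lstrip("@").split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    for link in resp.json().get("links", []):
        # The "self" link points at the actor document any compatible server can fetch.
        if link.get("rel") == "self":
            return link["href"]
    raise ValueError(f"No ActivityPub actor found for {handle}")

if __name__ == "__main__":
    print(resolve_handle("@Gargron@mastodon.social"))
```

Because the handle, not the platform, is the unit of identity, following—and, in principle, moving—an account across servers becomes an engineering problem rather than a business decision.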

The big idea is that in a future where social media is more decentralized, users will be able to easily switch networks without losing their content and followings. “As an individual, if you see [hate speech], you can just leave, and you’re not leaving your entire community—your entire online life—behind. You can just move to another server and migrate all your contacts, and it should be okay,” says Paige Collings, a senior speech and privacy advocate at the Electronic Frontier Foundation. “And I think that’s probably where we have a lot of opportunity to get it right.” 

There’s a lot of upside to this, but Collings is still wary. “I fear that while we have an amazing opportunity,” she says, “unless there’s an intentional effort to make sure that what happened on Web2 does not happen on Web3, I don’t see how it will not just perpetuate the same things.” 

Federation and more competition among new apps and platforms provide a chance for different communities to create the kinds of privacy and moderation they want, rather than following top-down content moderation policies created at headquarters in San Francisco that are often explicitly mandated not to mess with engagement. Yoel Roth’s dream scenario would be that in a world of smaller social networks, trust and safety could be handled by third-party companies that specialize in it, so social networks wouldn’t have to create their own policies and moderation tactics from scratch each time.


The tunnel-vision focus on growth created bad incentives in the social media age. It made people realize that if you wanted to make money, you needed a massive audience, and that the way to get a massive audience was often by behaving badly. The new form of the internet needs to find a way to make money without pandering for attention. There are some promising new gestures toward changing those incentives already. Threads doesn’t show the repost count on posts, for example—a simple tweak that makes a big difference because it doesn’t incentivize virality. 

We, the internet users, also need to learn to recalibrate our expectations and our behavior online. We need to learn to appreciate areas of the internet that are small, like a new Mastodon server or Discord or blog. We need to trust in the power of “1,000 true fans” over cheaply amassed millions.

Anil Dash has been repeating the same thing over and over for years now: that people should buy their own domains, start their own blogs, own their own stuff. And sure, these fixes require a technical and financial ability that many people do not possess. But with the move to federation (which at least provides control, if not ownership) and smaller spaces, it seems possible that we’re actually going to see some of those shifts away from big-platform-mediated communication start to happen. 

“There’s a systemic change that is happening right now that’s bigger,” he says. “You have to have a little bit of perspective of life pre-Facebook to sort of say, Oh, actually, some of these things are just arbitrary. They’re not intrinsic to the internet.”

The fix for the internet isn’t to shut down Facebook or log off or go outside and touch grass. The solution to the internet is more internet: more apps, more spaces to go, more money sloshing around to fund more good things in more variety, more people engaging thoughtfully in places they like. More utility, more voices, more joy. 

My toxic trait is that I can’t shake the naïve optimism of the early internet. Mistakes were made, a lot of things went sideways, and a great deal of pain and misery undeniably came out of the social era. The mistake now would be not to learn from all of it.

Katie Notopoulos is a writer who lives in Connecticut. She’s written for BuzzFeed News, Fast Company, GQ, and Columbia Journalism Review.
