Facebook’s ex-security boss: Asking Big Tech to police hate speech is “a dangerous path”

Alex Stamos on the risks of giving his former employer and other giant platforms the power to determine what people can—and can’t—say online.
October 23, 2018

Until a few weeks ago, Alex Stamos was Facebook’s chief security officer. He led the hunt for Russian political disinformation on the platform after the 2016 US election. Now at Stanford University, he recently announced the creation of the Stanford Internet Observatory—a project to tackle hate speech, propaganda, and manipulation by bringing together academic researchers, social scientists, policymakers, and the tech giants themselves.

Like many people, Stamos thinks tech platforms like Facebook and Google have too much power. But he doesn’t agree with the calls to break them up. And he argues that the very people who say Facebook and Google are too powerful are giving them more power by insisting they do more to control hate speech and propaganda.

“That’s a dangerous path,” he warns. If democratic countries make tech firms impose limits on free speech, so will autocratic ones. Before long, the technology will enable “machine-speed, real-time moderation of everything we say online.” In attempting to rein in Big Tech, we risk creating Big Brother. So what’s the solution? I spoke to Stamos at his Stanford office to find out. (The interview has been condensed and edited for clarity.)

You said recently that it’s already too late to save the 2018 midterms from foreign interference. Who’s at fault for letting that happen, given everything that we learned in 2016?

Three things happened in 2016. There was a disinformation and propaganda campaign by [Russia’s] Internet Research Agency and related groups. There was the leak campaign where the GRU [Russian military intelligence] broke into the e-mail of the DNC [Democratic National Committee] and then planted stories [in the media]. And there was an exploratory penetration of the election systems of 21 states.

The first problem has had the most work [done on it] because it completely falls within the responsibility of the platforms. They have defined what is a political ad and what is an issue ad, created ad transparency, defined what is inappropriate coordination, and started enforcing those rules.

"The Russians were able to send their B team to hack into the DNC. I’m not sure people are ready for the A team."

On the GRU hack and leak campaign, very little has happened. There have been upgrades in security at the DNC. I expect most campaigns are being more careful. But there does not seem to be a wholesale upgrade of security among campaigns and candidates. The Russians were able to send their B team to hack into the DNC. I’m not sure people are ready for the A team.

On the third we’ve done almost nothing. You still have 10,000 election authorities [in the US] running elections. We have a number of states with no paper backup [for ballots]. There are way too many authorities responsible for their own security for us to secure [elections] without the federal government providing a huge amount of resources. That’s where I think we’re most vulnerable.

So is the disinformation/propaganda problem mostly solved?

In a free society, you will never eliminate that problem. I think the most important thing [in the US] is the advertising transparency. With or without any foreign interference, the parties, the campaigns, the PACs [political action committees] here in the US are divvying up the electorate into tiny little buckets, and that is a bad thing. Transparency is a good start.

The next step we need is federal legislation to put a limit on ad targeting. There are thousands of companies in the internet advertising ecosystem. Facebook, Google, and Twitter are the only ones that have done anything, because they have gotten the most press coverage and the most pressure from politicians. So without legislation we’re just going to push all of the attackers into the long tail of advertising, to companies that don’t have dedicated teams looking for Russian disinformation groups.

Facebook has been criticized over Russian political interference both in the US and in other countries, the genocide in Myanmar, and a lot of other things. Do you feel Facebook has fully grasped the extent of its influence and its responsibility?

I think the company certainly understands its impact. The hard part is solving it. Ninety percent of Facebook users live outside the United States. Well over half live in either non-free countries or democracies without protection for speech. One of the problems is coming up with solutions in these countries that don’t immediately go to a very dark place [i.e., censorship].

"I think [Facebook] certainly understands its impact. The hard part is solving it."

Another is figuring out what issues to put engineering resources behind. No matter how big a company is, there are only a certain number of problems you [can tackle]. One of the problems that companies have had is that they’re in a firefighting mode where they jump from emergency to emergency.

So as they staff up that gets better, but we also need a more informed external discussion about the things we want the companies to focus on—what are the problems that absolutely have to be solved, and what aren’t. You mentioned a bunch of problems that are actually very different, but people blur them all together.

Do you think tech firms have too much power? Should Facebook be forced to divest itself of Instagram and WhatsApp, for instance?

If antitrust folks think it’s appropriate to force divestment of individual platforms, that’s up to them. I don’t think that either solves these problems or makes them worse. What would make things worse is breaking up specific products. Ten WhatsApps is worse than one WhatsApp, and ten Facebooks is worse than one Facebook, because you lose the economies of scale [and] the ability to have well-staffed teams that are experts in these kinds of abuse [hate speech, propaganda, etc.].

The truth is the big companies have been the responsive ones. Everybody is ignoring the small companies because it doesn’t make for good headlines. Facebook has done more than the vast majority of other companies.

How do you regulate in a world in which tech is advancing so fast while regulation moves so slowly? How should a society set sensible limits on what tech companies do?

But right now, society is not asking for limits on what they do. It’s asking that tech companies do more. And I think that’s a dangerous path. In all of the problems you mentioned—Russian disinformation, Myanmar—what you’re telling these companies is, “We want you to have more power to control what other people say and do.”

"Five or ten years from now there could be... machine-speed, real-time moderation of everything we say online."

That’s very dangerous, especially with the rise of machine learning. Five or ten years from now, there could be machine-learning systems that understand human languages as well as humans. We could end up with machine-speed, real-time moderation of everything we say online. So the powers we grant the tech companies right now are the powers those machines are going to have in five years.

What is the basic problem the Stanford Internet Observatory is trying to solve?

There is no specific academic field studying the misuse of technology, outside of highly technical flaws. Computer science departments do research into new types of exploits, new types of bugs, esoteric cryptographic solutions, but you can’t get a PhD studying bullying and harassment and the technical solutions to them. You might have political scientists studying the impact of social networks on democracy, and people in the psychology department studying the impact of the use of Instagram on teenagers who are suicidal, but they lack the technical skills and infrastructure.

So the Stanford Internet Observatory will be a permanent program, staffed with data scientists, software engineers, investigators, and analysts who understand how to interact with the tech platforms and how to do data analytics at a very large scale. That group can then provide services to academic groups all over and do its own research. Then we can catalyze work that probably wouldn’t happen because it doesn’t fall cleanly into any one academic sphere.

What are some of the solutions you’re working on?

If you look at 2016, what you see is intelligence failures between the US government [and] allied governments, and between those governments and the tech platforms. We need to think about the responsibilities of these different groups, and how to account for the fact that the tech companies are acting in a quasi-governmental manner. What kind of controls should they have in place, and at what point does their responsibility begin and end? We’re working on recommendations for Congress for next year.

We’re also building the capability to monitor the use of disinformation in various elections. Our goal is to have that up and running for the Indian and European elections next year.

Democrats are inevitably going to be more receptive than Republicans to these kinds of solutions. How do you make this nonpartisan?

The idea that election interference only helps Republicans is insane. The Russian playbook is out there. The weaknesses in our system have been demonstrated. We have signaled to our adversaries that they can interfere in our election and we will do nothing to really punish that.

"My message to Republicans is, 'Let’s fix this problem before you guys have to have your own 2016.'"

So I would fully expect other adversaries to get involved in future elections. China, Iran, North Korea—the idea that all of these countries are going to support Republicans is ridiculous. So my message to Republicans is, “Let’s fix this problem before you guys have to have your own 2016.”

This discussion needs to move past Trump. Republicans end up with a brain freeze if you imply that Trump was not elected fair and square. So we just have to talk about the vulnerabilities and what impact they could have in 2020. When I talk privately with Republicans, they’re much more receptive to this.

A related issue is the lack of technological literacy among politicians. How do you solve that?

These folks generally have pretty smart staffers, but most of the staffers have not worked in tech. So we need to incentivize people who have worked in tech to go work in DC. There are examples like Chris Soghoian [a technology activist, now working for Senator Ron Wyden] where techies go to Congress and have huge impact individually.

And it goes both ways. We need to teach computer scientists about history and ethics, and we need to teach liberal arts majors about the fundamentals of technology so that they have the ability to be influential. The ability to talk to nerds in a way that they respect is hugely powerful, and that’s something that’s missing from Congress right now.
