Facebook Security Chief: Cybersecurity Pros Need More Empathy to Protect Us

Facebook was arguably the most important battleground for information warfare in the run-up to the 2016 presidential election, and its chief security officer says cybersecurity professionals need to do more to protect Internet users from bad actors.
That will require something that’s too often lacking in the security industry: more empathy. “We have a real inability to put ourselves in the shoes of the people we are trying to protect,” Alex Stamos told the audience Wednesday at the Black Hat computer security conference in Las Vegas.
Social media networks, and especially Facebook, which has over two billion users, are now providing the most important forum for public debate. Foreign and domestic political actors all over the world have taken advantage of the access to voters that sites like Facebook and Twitter provide to spread propaganda and political attacks.
With billions more people set to connect to the Internet in the coming years, it’s the responsibility of companies like Facebook to foresee the problems those users may encounter and protect them from abuse of all forms, said Stamos. Such abuse ranges from spam and harassment to exploitation. “Real harm can happen in that category,” he said, and it is an area the security community has traditionally neglected.
For example, the vast majority of Facebook account takeovers are due to password reuse. The use of inauthentic accounts to share and amplify misleading attacks was a prominent aspect of the “information operations” the company observed during the election campaign. Stamos helped author a report, published in April, which described how “malicious actors” undermined civil discourse on the network using fake accounts.
Understanding why people fall victim to technically unsophisticated attacks is crucial, said Stamos. He said curtailing abuse online also requires seeing the point of view of law enforcement and government officials, something the hacker and security community has traditionally found difficult to do.
Meanwhile, future elections in the U.S. and elsewhere will be just as vulnerable, if not more so, to the kind of meddling seen in 2016. Facebook is developing ways to defend against this kind of activity, adding fact-checking features and pursuing analytical tools that can spot propaganda operations. That work led to the suspension of 30,000 fake accounts in France just 10 days before the country’s contentious presidential election. The company is also sponsoring the Defending Digital Democracy Project, recently launched by the Harvard Kennedy School, whose goal is to create a bipartisan team dedicated to rooting out election cybersecurity issues.
Still, as billions more humans connect, adversaries will find new vulnerabilities, and protecting democracy against online propaganda will likely be a constant struggle. Generally, “things are not getting better” with respect to the dangers people face online, said Stamos. “Things are getting worse.”