Facebook Security Chief: Cybersecurity Pros Need More Empathy to Protect Us

Facebook was arguably the most important battleground for information warfare in the run-up to the 2016 presidential election, and its chief security officer says cybersecurity professionals need to do more to protect Internet users from bad actors.
That will require something that’s too often lacking in the security industry: more empathy. “We have a real inability to put ourselves in the shoes of the people we are trying to protect,” Alex Stamos told the audience Wednesday at the Black Hat computer security conference in Las Vegas.
Social media networks, and especially Facebook, which has over two billion users, are now providing the most important forum for public debate. Foreign and domestic political actors all over the world have taken advantage of the access to voters that sites like Facebook and Twitter provide to spread propaganda and political attacks.
With billions more people set to connect to the Internet in the coming years, it is the responsibility of companies like Facebook to foresee the problems those new users may encounter and to protect them from abuse in all its forms, said Stamos. That abuse ranges from spam to harassment and even exploitation. “Real harm can happen in that category,” he said, and it is an area the security community has traditionally neglected.
For example, the vast majority of Facebook account takeovers are due to password reuse. The use of inauthentic accounts to share and amplify misleading attacks was a prominent aspect of the “information operations” the company observed during the election campaign. Stamos helped author a report, published in April, which described how “malicious actors” undermined civil discourse on the network using fake accounts.
Understanding why people fall victim to technically unsophisticated attacks is crucial, said Stamos. Curtailing abuse online also requires seeing the point of view of law enforcement and government officials, something the hacker and security community has traditionally found difficult to do.
Meanwhile, future elections in the U.S. and elsewhere will be just as vulnerable, if not more so, to the kind of meddling seen in 2016. Facebook is developing defenses against this kind of activity, adding fact-checking features and pursuing analytical tools that can spot propaganda operations. That work led to the suspension of 30,000 fake accounts in France just 10 days before the country’s contentious presidential election. The company is also sponsoring the Defending Digital Democracy Project, recently launched by the Harvard Kennedy School, whose goal is to create a bipartisan team dedicated to rooting out election cybersecurity issues.
Still, as billions more humans connect, adversaries will find new vulnerabilities, and protecting democracy against online propaganda will likely be a constant struggle. Generally, “things are not getting better” with respect to the dangers people face online, said Stamos. “Things are getting worse.”