Facebook was arguably the most important battleground for information warfare in the run-up to the 2016 presidential election, and its chief security officer says cybersecurity professionals need to do more to protect Internet users from bad actors.
That will require something that’s too often lacking in the security industry: more empathy. “We have a real inability to put ourselves in the shoes of the people we are trying to protect,” Alex Stamos told the audience Wednesday at the Black Hat computer security conference in Las Vegas.
Social media networks, and especially Facebook, which has over two billion users, are now providing the most important forum for public debate. Foreign and domestic political actors all over the world have taken advantage of the access to voters that sites like Facebook and Twitter provide to spread propaganda and political attacks.
With billions more people set to connect to the Internet in the coming years, it’s the responsibility of companies like Facebook to foresee the problems those users may encounter and protect them from abuse of all forms, said Stamos. That abuse ranges from spam to harassment and even exploitation. “Real harm can happen in that category,” he said, and it is an area the security community has traditionally neglected.
For example, the vast majority of Facebook account takeovers stem from password reuse. The use of inauthentic accounts to share and amplify misleading attacks was a prominent feature of the “information operations” the company observed during the election campaign. Stamos co-authored a report, published in April, that described how “malicious actors” undermined civil discourse on the network using fake accounts.
Understanding why people fall victim to technically unsophisticated attacks is crucial, said Stamos. He added that curtailing abuse online also requires seeing the point of view of law enforcement and government officials, something the hacker and security community has traditionally found difficult to do.
Meanwhile, future elections in the U.S. and elsewhere will be just as vulnerable, if not more so, to the kind of meddling seen in 2016. Facebook is developing defenses against this kind of activity by adding fact-checking tools and pursuing analytical tools that can spot propaganda operations. That work led to the suspension of 30,000 fake accounts in France just 10 days before the country’s contentious presidential election. The company is also sponsoring the Defending Digital Democracy Project, recently launched by the Harvard Kennedy School, whose goal is to create a bipartisan team dedicated to rooting out election cybersecurity issues.
Still, as billions more humans connect, adversaries will find new vulnerabilities, and protecting democracy against online propaganda will likely be a constant struggle. Generally, “things are not getting better” with respect to the dangers people face online, said Stamos. “Things are getting worse.”