MIT Technology Review

Should the Government Keep Stockpiling Software Bugs?

Last week’s massive WannaCry cyberattack has resurfaced touchy questions about a shadowy government process.
The top brass in the U.S. intelligence community are keeping their mouths shut about software vulnerabilities.

As the dust settles from the global ransomware attack that has crippled systems in more than 150 countries since Friday, the U.S. government’s shadowy process for collecting and disclosing software vulnerabilities is again under scrutiny.

There is plenty of blame to go around for the scale and effectiveness of the attack, in which a piece of ransomware called WannaCry—also known as “WannaCrypt” and “Wanna Decryptor”—exploited a vulnerability in Microsoft Windows. Machines running Windows XP were especially exposed: Microsoft stopped supporting that version of its operating system in 2014, so anyone still using it was taking a risk. (Microsoft had already patched the flaw in supported versions of Windows in March 2017; once it saw the vulnerability being actively exploited, it quickly released a fix for XP as well—an unusual step for such old software.)


Brad Smith, Microsoft’s president and chief legal officer, said the government was also to blame, since the attackers appear to have used an exploit stolen from the NSA by a group calling itself the Shadow Brokers. In a blog post, Smith criticized the practice of stockpiling vulnerabilities: “We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits,” he wrote.

The U.S. government does have a system in place for weighing the risks of either disclosing a critical software vulnerability or keeping it secret. But very little is understood publicly about how the so-called Vulnerabilities Equities Process (VEP) works. Privacy advocates have long called for greater transparency, with only modest success.  

The VEP is believed to have existed since 2010, but it remained secret until 2014, when the White House and the Director of National Intelligence each released statements denying a Bloomberg report that the NSA had for years known about and exploited Heartbleed, a widespread vulnerability in OpenSSL, software used to encrypt much of the communication on the Internet. Michael Daniel, President Obama’s cybersecurity coordinator, claimed the administration had “established a disciplined, rigorous, and high-level decision-making process for vulnerability disclosure.”

Whether or not the federal government should withhold knowledge of such vulnerabilities “may seem clear to some,” Daniel said at the time, but “the reality is much more complicated.” Revealing a vulnerability could cause the U.S. to “forgo an opportunity to collect crucial intelligence that could thwart a terrorist attack,” he said. Nonetheless, the government’s decision-making process was “biased toward responsibly disclosing the vulnerability.”

In January 2016, thanks to a Freedom of Information Act lawsuit by the Electronic Frontier Foundation, the government released a partially redacted document explaining the VEP. It left unclear exactly how a decision is made, who makes it, and how many secret vulnerabilities the government has in its possession. Jason Healey, a researcher at Columbia University and senior fellow at the Atlantic Council, recently estimated that the number is in the dozens.

Recent events have raised doubts that the system is indeed biased toward disclosure, as Daniel asserted. In a recent research paper, Healey concluded, based on interviews and on the government’s public statements about the VEP, that the NSA “almost certainly” should have disclosed the vulnerabilities exposed in an earlier Shadow Brokers leak to the affected companies, including Cisco, Juniper, and Fortinet. Likewise, the FBI should have told Apple about the vulnerability it used to access the iPhone of one of the shooters in the San Bernardino terrorist attack last year, Healey wrote.

Unfortunately, better transparency, and the accountability it would bring, doesn’t appear to be forthcoming. The VEP is controlled by the executive branch of the federal government, with no public oversight. Unless the Trump administration decides to change that, we’re likely to remain in the dark. Until the next big cyberattack, that is.
