When a New York Times report appeared Thursday saying the National Security Agency had “circumvented or cracked much of the encryption” protecting online transactions, computer security professionals braced for news of breakthroughs undermining the fundamentals of their field.
However, cryptography experts tell MIT Technology Review that a close reading of last week’s report suggests the NSA has not broken the underlying mathematical operations that are used to cloak online banking or e-mail.
Instead, the agency appears to rely on a variety of attacks on the software used to deploy those cryptographic algorithms and the humans and organizations using that software. Those strategies, revealed in documents leaked by Edward Snowden, came as no surprise to computer security researchers, given that the NSA’s mission includes the pursuit of America’s most technologically capable enemies.
“The whole leak has been an exercise in ‘I told you so,’ ” says Stephen Weis, CEO of server encryption company PrivateCore. Weis previously worked on implementing cryptography at Google. “There doesn’t seem to be any kind of groundbreaking algorithmic breakthrough,” he says, “but they are able to go after implementations and the human aspects of these systems.”
Those tactics apparently include using legal tools or hacking to get the digital keys used to encrypt data; using brute computing power to break weak encryption; and forcing companies to help the agency get around security systems.
“If the crypto didn’t work, the NSA wouldn’t bother doing all of these other things,” says Jon Callas, a cryptographer who cofounded PGP Corporation and is now chief technology officer of secure messaging company Silent Circle (see “An App Keeps Spies Away from Your Phone”). “This is what you do because you can’t break the crypto.”
After seeing the documents behind last week’s reports, security expert Bruce Schneier wrote in the Guardian that people should still “trust the math” that underlies cryptography. In June, Snowden said in an online chat that “properly implemented strong crypto systems are one of the few things you can rely on.”
Cryptography systems and security software often improve through a cycle in which researchers publish details of flaws, which are then fixed. Looking at last week’s reports in that way doesn’t suggest the security community needs to rethink the fundamentals of its tools and strategies, says Callas. Rather, adoption of known security improvements should be accelerated, and scrutiny of known weak points increased, he says. “Things have always had to be tested continuously.”
Weis agrees, saying companies should do that regardless of their opinion of the NSA. “A lot of the techniques the agency is using aren’t going to be the most complicated,” he says, “and so they’ll be accessible to organized crime and other nations’ security services.”
Two NSA tactics prominent in Thursday’s report highlight widely known and fixable flaws in the way most online services operate. In one of those tactics, the agency collects encryption keys from online services so it can decode intercepted data at will. In the other, the Times said, the NSA uses “custom-built, superfast computers to break codes,” making it increasingly able to unscramble data without needing to target specific companies.
The value of stealing keys can be mostly neutralized if Internet providers adopt a technique called perfect forward secrecy, in which each session is protected by a fresh, temporary key, so a stolen long-term key cannot be used to decrypt previously captured traffic. So far Google and a few other companies have adopted it (see “Circumventing Encryption Frees NSA’s Hands Online”).
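The mechanism behind forward secrecy can be sketched with ephemeral Diffie-Hellman key exchange. This is a minimal toy illustration with made-up parameters, not any provider’s actual deployment; real systems use standardized 2,048-bit groups or elliptic curves:

```python
import secrets

# Toy parameters for illustration only -- real deployments use
# standardized 2048-bit groups or elliptic-curve equivalents.
P_PRIME = 2**127 - 1   # a prime modulus (a Mersenne prime; fine for a demo)
G = 5                  # illustrative generator

def dh_session():
    """One session: both sides generate fresh, throwaway key pairs."""
    a = secrets.randbelow(P_PRIME - 2) + 1   # Alice's ephemeral secret
    b = secrets.randbelow(P_PRIME - 2) + 1   # Bob's ephemeral secret
    A = pow(G, a, P_PRIME)                   # public values exchanged in the open
    B = pow(G, b, P_PRIME)
    k_alice = pow(B, a, P_PRIME)             # both sides derive the same key...
    k_bob = pow(A, b, P_PRIME)
    assert k_alice == k_bob
    return k_alice                           # ...then discard a and b

# Each session key is independent of every other, so seizing a server's
# long-term key later reveals nothing about sessions already completed.
k1, k2 = dh_session(), dh_session()
assert k1 != k2
```

The design point is that the secrets `a` and `b` never leave each machine and are thrown away after the session, which is exactly what makes retroactive decryption of recorded traffic infeasible.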
Mention of the NSA’s code-breaking computers, along with other parts of the new reports, appears to confirm long-held suspicions that the agency can overpower a relatively weak form of encryption used by most websites that offer secure SSL connections, visible to users as a padlock icon and “https” in a browser’s address bar. Most sites using SSL use the trusted RSA encryption algorithm with mathematical keys 1,024 bits long. Experts have cautioned for years that longer keys are needed to defend against an attacker with the resources of a government agency or large company.
“RSA 1024 is entirely too weak to be used anywhere with any confidence in its security,” says Tom Ritter, a cryptographer with iSec Partners. Despite that, relatively few companies use the safer, longer RSA keys. Facebook and Google switched only this year.
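Why key length matters can be seen in a toy example (my own illustration; a real attack on a 1,024-bit modulus would use far more sophisticated methods, such as the general number field sieve, not trial division):

```python
# Toy illustration of why short RSA moduli fall to raw computing power:
# factoring the public modulus n recovers the private key outright.
# These primes are absurdly small; real keys pair two ~512-bit primes,
# and moving from 1,024- to 2,048-bit moduli is what buys the safety margin.

def factor(n):
    """Brute-force trial division -- hopeless at real key sizes."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no odd factor found")

p, q = 2003, 2011                      # toy primes
n, e = p * q, 65537                    # public key
d = pow(e, -1, (p - 1) * (q - 1))      # legitimate private exponent

ciphertext = pow(42, e, n)             # encrypt the message 42

# An attacker with enough compute factors n and rebuilds the private key:
fp, fq = factor(n)
d_cracked = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(ciphertext, d_cracked, n))   # → 42
```

The cost of the factoring step grows steeply with the size of the modulus, which is why the experts quoted here treat the jump to longer keys as the fix.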
The software that Internet companies use to implement SSL, in particular a widely used open source package called OpenSSL, is one of many pieces of the Internet’s security infrastructure that will be more closely scrutinized after last week’s reports, says Weis. However, those crucial parts were already known to need careful attention. “I don’t think this really changes priorities too much.”
Callas says he finds it much harder to respond to the part of Thursday’s report that said the NSA works with companies to install backdoors into security software and hardware. Commercial code and designs are typically closely held, and checking how a chip operates is particularly challenging. The moral and policy implications for the security industry and America as a whole are equally tricky, says Callas. “If my government is trying to catch terrorists and puts weaknesses in the software and hardware that I use that enable thieves to steal money from me,” he asks, “who is the good guy and who is the bad guy?”
The Times report also said the NSA had influenced the development of new cryptographic standards to hide weaknesses it could exploit. The characteristically paranoid cryptography community had already been poring over standards to try to detect such holes, says Weis: “This is something people have talked about for a long time.”
If the NSA influenced standards it would probably do so through its relationship with the National Institute of Standards and Technology, which sets U.S. cryptography standards and is influential worldwide. In 2007, Microsoft researchers showed that a NIST standard introduced the previous year and publicly backed by the NSA had a major mathematical flaw. However, Callas, Weis, and other experts consulted by MIT Technology Review all said that the standard, Dual_EC_DRBG, was always too slow to see widespread use. If the flaw was planted by the NSA, it was an unsubtle and poorly targeted plan, says Callas.
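The suspected structure of that flaw can be sketched on a toy curve. This is my own simplified illustration, not the real NIST curve, parameters, or output truncation: the concern was that whoever chose the standard’s two public points could know a secret relationship between them, letting them recover the generator’s internal state from a single output and predict everything after it.

```python
# Simplified sketch of the Dual_EC_DRBG trapdoor structure on a toy curve
# (y^2 = x^3 + x + 1 over F_23 -- NOT the real NIST parameters, and the
# real generator also truncates its outputs).
p, a, b = 23, 1, 1

def add(P1, P2):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

Q = (0, 1)     # public point from the "standard"
e = 5          # hypothetical trapdoor: only whoever set the standard knows it
P = mul(e, Q)  # second public point, secretly e*Q

def drbg_step(s):
    """One Dual_EC-style round: returns (next_state, 'random' output)."""
    r = mul(s, P)[0]
    return mul(r, P)[0], mul(r, Q)[0]

s1, out1 = drbg_step(3)        # honest user draws randomness
s2, out2 = drbg_step(s1)

# The attacker sees only out1 plus the public P and Q -- and knows e.
y = pow((out1**3 + a * out1 + b) % p, (p + 1) // 4, p)  # sqrt mod p (p % 4 == 3)
lifted = (out1, y)                   # lift the output back to the point r*Q
state = mul(e, lifted)[0]            # e*(r*Q) = r*(e*Q) = r*P -> internal state
assert state == s1                   # state recovered from one output...
assert drbg_step(state)[1] == out2   # ...so every later output is predictable
```

In the real standard the question was whether the published points `P` and `Q` were generated with such a hidden relation; because nobody outside the agency could rule it out, the cryptography community treated the constants with suspicion.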
Many of the most widely used NIST standards seem unlikely to have been compromised by the NSA because they were developed in the open by groups outside the United States. The agency did have a central role in one crucial standard for a method that is set to become the default way of securing online data (see “Math Advances Raise Prospect of Internet Security Crisis”). However, that standard is a crucial part of Suite B, a cryptography toolkit today most widely used by the U.S. government and its many contractors. Introducing backdoors into that would seem counterproductive for the NSA.