
Alarming Open-Source Security Holes

How a programming error introduced profound security vulnerabilities in millions of computer systems.

Back in May 2006, a few programmers working on an open-source security project made a whopper of a mistake. Last week, the full impact of that mistake was just beginning to dawn on security professionals around the world.

In technical terms, a programming error reduced the amount of entropy used to create the cryptographic keys in a piece of code called the OpenSSL library, which is used by programs like the Apache Web server, the SSH remote access program, the IPsec Virtual Private Network (VPN), secure e-mail programs, some software used for anonymously accessing the Internet, and so on.

In plainer language: after a week of analysis, we now know that two changed lines of code have created profound security vulnerabilities in at least four different open-source operating systems, 25 different application programs, and millions of individual computer systems on the Internet. And even though the vulnerability was discovered on May 13 and a patch has been distributed, installing the patch doesn’t repair the damage to the compromised systems. What’s even more alarming is that some computers may be compromised even though they aren’t running the suspect code.

The reason that the patch doesn’t fix the problem has to do with the specifics of the programmers’ error. Modern computer systems employ large random numbers to generate the keys that are used to encrypt and decrypt information sent over a network. Authorized users know the right key, so they don’t have to guess it. Malevolent hackers don’t know the right key, and normally it would simply take too long to guess it by trying all possible keys: hundreds of billions of years too long.

But the security of the system turns upside down if the computer can choose from only a limited pool of keys, say a million different ones. For the authorized user, the key looks good: the data gets encrypted. But the bad guy’s software can quickly generate and then try every possible key for a specific computer. The error introduced two years ago makes cryptographic keys easy to guess.

The error doesn’t give every computer the same cryptographic key; that would have been caught before now. Instead, it reduces the number of different keys that these Debian-based Linux computers can generate to just 32,767 for each combination of processor architecture, key size, and key type.
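To get a sense of just how small that keyspace is, consider the toy sketch below, written in plain C. It is not the actual key-generation code: seeding srandom() with a process ID is an illustrative stand-in. The point is simply that when the only thing that varies from machine to machine is a 15-bit process ID, an attacker can enumerate every seed and regenerate every candidate key.

/* Toy illustration only, not OpenSSL's key generation: when the sole seed
 * is a process ID in the range 0-32,767, every "random" key can be
 * enumerated. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    for (long pid = 0; pid <= 32767; pid++) {
        srandom((unsigned int)pid);              /* the only entropy left  */
        unsigned long candidate = (unsigned long)random();
        if (pid < 3 || pid > 32764)              /* print a few samples    */
            printf("pid %5ld -> candidate key %08lx\n", pid, candidate);
    }
    /* 32,768 candidates in total: an ordinary laptop can generate and test
     * them all in seconds, instead of the astronomically large keyspace a
     * properly seeded generator provides. */
    return 0;
}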

Less than a day after the vulnerability was announced, computer hacker HD Moore of the Metasploit project released a set of “toys” for cracking the keys of these poor Debian and Ubuntu computer systems. As of Sunday, Moore’s website had downloadable files of precomputed keys, just to make it easier to identify vulnerable computer systems.
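Checking a key against such a precomputed list is trivial. The sketch below is purely hypothetical: the one-fingerprint-per-line file format and the command-line interface are assumptions for illustration, not the format Moore actually published.

/* Hypothetical sketch: scan a file of known-weak key fingerprints, one hex
 * fingerprint per line, for a match against a given key's fingerprint. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <fingerprint> <weak-list.txt>\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[2], "r");
    if (!f) { perror("fopen"); return 2; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "\r\n")] = '\0';      /* strip trailing newline */
        if (strcmp(line, argv[1]) == 0) {
            printf("MATCH: key is one of the precomputed weak keys\n");
            fclose(f);
            return 1;
        }
    }
    fclose(f);
    printf("no match in this list (not proof the key is safe)\n");
    return 0;
}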

Unlike a common buffer-overflow bug, which is fixed as soon as new software is loaded, keys created with the buggy software don’t get better when the computer is patched: new keys have to be generated and installed. Complicating matters, those keys also need to be certified and distributed, a process that is time-consuming, complex, and error prone.

Nobody knows just how many systems are impacted by this problem, because cryptographic keys are portable: vulnerable keys could have been generated on a Debian system in one office and then installed on a server running Windows in another. Debian is a favored Linux distribution of many security professionals, and Ubuntu is one of the most popular Linux distributions for general use, so the reach of the problem could be quite widespread.

So how did the programmers make the mistake in the first place? Ironically, they were using an automated tool designed to catch the kinds of programming bugs that lead to security vulnerabilities. The tool, called Valgrind, discovered that the OpenSSL library was using a block of memory without initializing the memory to a known state–for example, setting the block’s contents to be all zeros. Normally, it’s a mistake to use memory without setting it to a known value. But in this case, that unknown state was being intentionally used by the OpenSSL library to help generate randomness.

The uninitialized memory wasn’t the only source of randomness: OpenSSL also gets randomness from sources like mouse movements, keystroke timings, the arrival of packets at the network interface, and even microvariations in the speed of the computer’s hard disk. But when the programmers saw the errors generated by Valgrind, they commented out the offending lines–and removed all the sources of randomness used to generate keys except for one, an integer called the process ID that can range from 0 to 32,767.
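The sketch below is a simplified, self-contained illustration of that failure mode. It is not the actual OpenSSL md_rand.c code: the pool, the function names, and the XOR-style mixing are stand-ins for OpenSSL’s real hash-based pool. It shows how commenting out the line Valgrind complains about also throws away every legitimate seed the callers supply, leaving only the process ID.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Toy entropy pool: the real code hashes inputs into a pool with a
 * cryptographic digest; simple shift/XOR mixing keeps this sketch
 * self-contained. */
static unsigned long pool = 0;

static void pool_mix(const unsigned char *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        pool = (pool << 5) ^ (pool >> 3) ^ buf[i];
}

/* Callers feed seed material (timings, packet arrivals, bytes from the
 * operating system, and sometimes deliberately uninitialized buffers)
 * through this function. */
void rand_add(const unsigned char *buf, size_t num) {
    /* A memory checker flags the next line because some callers pass
     * uninitialized memory on purpose, as one extra source of noise.
     * Commenting it out to silence the warning also discards every
     * properly initialized seed the callers supply:                  */
    /* pool_mix(buf, num); */
    (void)buf; (void)num;
}

int main(void) {
    unsigned char seed[16] = "timings+packets";
    rand_add(seed, sizeof seed);               /* silently ignored now      */

    pid_t pid = getpid();                      /* the one seed that remains */
    pool_mix((const unsigned char *)&pid, sizeof pid);

    printf("pool state now depends only on the PID: %lu\n", pool);
    return 0;
}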

“Never fix a bug you don’t understand!” raved OpenSSL developer Ben Laurie on his blog after the full extent of the error became known. Laurie blames the Debian developers for trying to fix the “bug” in the version of OpenSSL distributed with the Debian and Ubuntu operating systems, rather than sending the fix to the OpenSSL developers. “Had Debian done this in this case,” he wrote, “we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to ‘add value’ by getting in between the user of the software and its author.”

Perhaps more disconcerting, though, is what this story tells us about the security of open-source software–and perhaps about the security of software in general. One developer (whom I’ve been asked not to single out) noticed a problem, proposed a fix, and got the fix approved by a small number of people who didn’t really understand the implications of what was being suggested. The result: communications that should have been cryptographically protected between millions of computer systems all over the world weren’t really protected at all. Two years ago, Steve Gibson, a highly respected security consultant, alleged that a significant bug found in some Microsoft software had more in common with a programmer trying to create an intentional “back door” than with yet another Microsoft coding error.

The Debian OpenSSL randomness error was almost certainly an innocent mistake. But what if a country like China or Russia wanted to intentionally introduce secret vulnerabilities into our open-source software? Well concealed, such vulnerabilities might lie hidden for years.

One thing is for sure: we should expect to discover more of these vulnerabilities as time goes on.

Simson Garfinkel is an associate professor at the Naval Postgraduate School in Monterey, CA, and a fellow at the Center for Research on Computation and Society at Harvard University.
