Perhaps most frightening was that because the vulnerability was not located in any particular hardware or software but in the design of the DNS protocol itself, it wasn’t clear how to fix it. In secret, Kaminsky and Vixie gathered some of the top DNS experts in the world: people from the U.S. government and high-level engineers from the major manufacturers of DNS software and hardware, companies including Cisco and Microsoft. They arranged a meeting in March at Microsoft’s campus in Redmond, WA. The arrangements were so secretive and rushed, Kaminsky says, that “there were people on jets to Microsoft who didn’t even know what the bug was.”
Once in Redmond, the group tried to determine the extent of the flaw and sort out a possible fix. They settled on a stopgap measure that fixed most problems, would be relatively easy to deploy, and would mask the exact nature of the flaw. Because attackers commonly identify security holes by reverse-engineering patches intended to fix them, the group decided that all its members had to release the patch simultaneously (the release date would turn out to be July 8). Kaminsky also asked security researchers not to publicly speculate on the details of the flaw for 30 days after the release of the patch, in an attempt to give companies enough time to secure their servers.
On August 6, at the Black Hat conference, the annual gathering of the world’s Internet security experts, Kaminsky would publicly reveal what the flaw was and how it could be exploited.
Asking for Trouble
Kaminsky has not really discovered a new attack. Instead, he has found an ingenious way to breathe life into a very old one. Indeed, the basic flaw targeted by his attack predates the Internet itself.
The foundation of DNS was laid in 1983 by Paul Mockapetris, then at the University of Southern California, in the days of ARPAnet, the U.S. Defense Department research project that linked computers at a small number of universities and research institutions and ultimately led to the Internet. The system is designed to work like a telephone company’s 411 service: given a name, it looks up the numbers that will lead to the bearer of that name. DNS became necessary as ARPAnet grew beyond an individual’s ability to keep track of the numerical addresses in the network.

Mockapetris, who is now chairman and chief scientist of Nominum, a provider of infrastructure software based in Redwood City, CA, designed DNS as a hierarchy. When someone types the URL for a Web page into a browser or clicks on a hyperlink, a request goes to a name server maintained by the user’s Internet service provider (ISP). The ISP’s server stores the numerical addresses of URLs it handles frequently, at least until their time to live expires. But if it can’t find an address, it queries one of the 13 DNS root servers, which directs the request to a name server responsible for one of the top-level domains, such as .com or .edu. That server forwards the request to a server specific to a single domain name, such as google.com or mit.edu. The forwarding continues through servers with ever more specific responsibilities (mail.google.com, or libraries.mit.edu) until the request reaches a server that can either give the numerical address requested or respond that no such address exists.

As the Internet matured, it became clear that DNS was not secure enough. The process of passing a request from one server to the next gives attackers many opportunities to intervene with false responses, and the system had no safeguards to ensure that the name server answering a request was trustworthy.
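The chain of referrals described above can be sketched as a toy model. Everything below is invented for illustration: the zone contents, the server names, and the `resolve` helper are not real DNS data or a real resolver, just the shape of the delegation walk.

```python
# Toy model of DNS delegation (illustrative only; not real DNS data).
# Each "server" maps a name suffix to either a final address ("A")
# or a referral ("NS") pointing at a more specific server.
ROOT = {"com.": ("NS", "tld_com")}
TLD_COM = {"example.com.": ("NS", "ns_example")}
NS_EXAMPLE = {"www.example.com.": ("A", "93.184.216.34")}  # made-up answer

SERVERS = {"root": ROOT, "tld_com": TLD_COM, "ns_example": NS_EXAMPLE}

def resolve(name, server="root"):
    """Follow referrals down the hierarchy until an address is found."""
    zone = SERVERS[server]
    for suffix, (rtype, value) in zone.items():
        if name.endswith(suffix):
            if rtype == "A":
                return value             # authoritative answer
            return resolve(name, value)  # referral: ask the next server down
    return None  # no server in the chain knows this name

print(resolve("www.example.com."))  # prints the toy address above
print(resolve("nosuch.org."))       # prints None: no such address
```

Each recursive call corresponds to one hop in the article’s description, from root to top-level domain to the domain’s own name server.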
As early as 1989, Mockapetris says, there were instances of “cache poisoning,” in which a name server was tricked into storing false information about the numerical address associated with a website.
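A minimal sketch of that idea, with invented names and addresses throughout: if a forged reply reaches the resolver before the legitimate one and passes its weak checks, the bad address is cached and the real answer arrives too late to matter.

```python
# Hypothetical illustration of cache poisoning; all names, addresses,
# and IDs below are invented for the example.
cache = {}

def accept_answer(name, address, reply_id, expected_id):
    """Cache a reply if its ID matches the outstanding query.
    Early resolvers did little more than this check, and a first-come,
    first-cached policy meant a forged reply could win the race."""
    if reply_id == expected_id and name not in cache:
        cache[name] = address

expected = 1001  # ID the resolver attached to its query for bank.example

# Attacker races the real server, guessing the query ID correctly:
accept_answer("bank.example", "10.6.6.6", 1001, expected)      # forged, first
accept_answer("bank.example", "198.51.100.7", 1001, expected)  # real, too late

print(cache["bank.example"])  # the forged address is now cached
```

Until the poisoned entry expires, every user of that resolver is sent to the attacker’s address, which is why the flaw Kaminsky found in this mechanism was so serious.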