The ability to access the code of open-source applications may give attackers an edge in developing exploits for the software, according to a paper analyzing two years’ worth of attack data.
The paper, to be presented this week at the Workshop on the Economics of Information Security, correlated 400 million alerts from intrusion detection systems with known attributes of the targeted software and vulnerabilities. The data supports the assertion that flaws in open-source software tend to be attacked more quickly and more often than vulnerabilities in closed-source software, says Sam Ransbotham, assistant professor at Boston College’s Carroll School of Management and the author of the paper.
Using nonlinear regression and other models, Ransbotham found that attacks on vulnerabilities in open-source software occurred three days sooner and with nearly 50 percent greater frequency. Ransbotham argues that knowledge of how to exploit a particular vulnerability spreads in much the same way as the diffusion of a technological innovation.
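The paper's statistical models aren't reproduced in the article, but the diffusion analogy can be sketched with a standard logistic (S-shaped) adoption curve: cumulative attacks ramp up as knowledge of the exploit spreads, then saturate. The parameter values below are invented for illustration, not taken from Ransbotham's data.

```python
import math

def cumulative_attacks(t, midpoint=10.0, rate=0.5, saturation=100.0):
    """Logistic diffusion curve: cumulative attack volume at day t.

    midpoint   - day at which half the eventual attacks have occurred
    rate       - how quickly exploit knowledge spreads
    saturation - eventual total number of attacks (the curve's ceiling)
    All values here are illustrative, not from the paper.
    """
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Lower attacker effort (e.g., accessible source code) can be modeled
# as an earlier midpoint: the curve rises sooner, then both saturate.
open_curve = [cumulative_attacks(t, midpoint=7) for t in range(0, 31, 5)]
closed_curve = [cumulative_attacks(t, midpoint=10) for t in range(0, 31, 5)]
```

The saturation ceiling mirrors the article's later observation that attacks on any one vulnerability eventually level off as defenders patch and attackers move on.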
“If you think about this whole thing as a game between the good guys and the bad guys, by reducing the effort for the bad guys, there is much greater incentive for them to exploit targets earlier and hit more firms,” says Ransbotham.
The paper will likely rekindle a long-running debate between advocates of the open-source and closed-source development models over whether, for example, the open-source operating system Linux is more secure than Windows, or whether Mozilla’s open-source Firefox browser is more secure than Microsoft’s Internet Explorer. Supporters of open source argue that the accessibility of the code allows the good guys to find bugs faster, while critics counter that more attackers than defenders are poking through the code, so the net effect is worse security.
The research used alert data culled from intrusion-detection systems managed on behalf of 960 companies by security service provider SecureWorks. Ransbotham correlated the alerts with specific vulnerabilities in the National Vulnerability Database (NVD), a large collection of information on software flaws managed by the National Institute of Standards and Technology. While the NVD lists vulnerabilities in more than 13,000 software products for 2006 and 2007, the two years from which alert data was used, only half of the products could be classified as either open- or closed-source, Ransbotham says.
By linking that data to the intrusion detection systems’ ability to recognize an attack on a vulnerable system, Ransbotham compiled a list of 883 vulnerabilities in confirmed open- or closed-source software on which attacks could be recognized. He also classified the vulnerabilities by other attributes, such as how complex it would be for attackers to exploit the flaw and whether there was a signature available for the intrusion detection systems at the time the vulnerability was reported.
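The correlation step described above amounts to a join: match each intrusion-detection alert to an NVD vulnerability entry, then keep only those whose software could be classified as open- or closed-source. A minimal sketch of that filtering, with record fields invented for illustration (the actual SecureWorks and NVD schemas differ):

```python
# Hypothetical alert records from an intrusion-detection system,
# each tied to a CVE identifier. Field names are illustrative only.
alerts = [
    {"cve": "CVE-2006-0001", "company": "firm-A"},
    {"cve": "CVE-2006-0002", "company": "firm-B"},
    {"cve": "CVE-2006-9999", "company": "firm-A"},
]

# Hypothetical NVD-derived attributes; a CVE missing from this table
# stands in for a product that could not be classified as open or closed.
nvd = {
    "CVE-2006-0001": {"source_model": "open", "complexity": "low"},
    "CVE-2006-0002": {"source_model": "closed", "complexity": "high"},
}

# Keep only alerts whose vulnerability has classified attributes,
# merging the NVD fields into each alert record.
classified = [{**alert, **nvd[alert["cve"]]} for alert in alerts if alert["cve"] in nvd]
```

In the study, this kind of filtering is what narrowed more than 13,000 products down to 883 recognizable vulnerabilities in confirmed open- or closed-source software.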
In the end, only 97 of the 883 vulnerabilities were targeted by attackers during the two-year period. However, this accounts for 111 million, or about a quarter, of the alerts. The remaining alerts could be attributed to attacks on software that could not be classified as open- or closed-source, attacks on vulnerabilities that did not have an identifying attribute, or false positives.
In his analysis, Ransbotham found that attacks on vulnerabilities in open-source software occurred sooner than attacks on closed-source software, measured for each targeted company from the vulnerability’s first report. In addition, a greater number of companies were eventually targeted with attacks on each open-source vulnerability, on average. In both cases, however, the number of attacks eventually saturated.
“As defenders get out their patches, the attackers have more incentive to move on to a different exploit,” Ransbotham says.
The ability to access open-source code is not the only advantage available to attackers. Ransbotham’s analysis showed a correlation between the existence of signatures–the patterns various security products use to match attacks against known flaws–and earlier attacks, suggesting that the updates defenders use to improve their protection may actually help attackers.
“That tells me that there is something about having that signature that is helping people… giving them a clue about how to exploit the vulnerability,” Ransbotham says.
Other research has suggested that signatures–and other defensive measures–leak information to attackers. In 2007, two security consultants described using signatures from a popular intrusion detection system to create attack code. In 2008, academic researchers created a system for generating potential exploit code based on automatic analysis of the patches released by software companies.
Security professionals warn not to read too much into Ransbotham’s analysis, however. Many factors could skew the data, says David Aitel, chief technology officer of security firm Immunity, which–among its services–creates exploits to test corporate network defenses. Only 30 of the 97 vulnerabilities targeted by attackers were in open-source software, according to Ransbotham’s paper, which means that a relatively small number of open-source vulnerabilities were attacked far more often, says Aitel. He argues that attackers might indiscriminately inundate a company’s network with attacks on relatively unimportant open-source software, while focusing more serious attacks on more important systems running closed-source software.
Because Immunity’s clients are most concerned about systems running closed-source software such as Microsoft Windows, Internet Explorer, Adobe Acrobat, and Sun’s Java, Immunity’s researchers attempt to exploit flaws in closed-source software within 24 hours of when they are first reported. Open-source software vulnerabilities are given a much lower priority.
“Drawing a broad conclusion that open-source software is easier to exploit is definitely not true,” he says. “You could draw the exact opposite conclusion from the body of exploits that are available on [research sites, such as] Packetstorm.”
Other security professionals take a broader view: what matters is less whether code is open or closed than how a company develops its software. Attackers can eventually get the information they need to exploit a bug–whether through automated attack software, by reverse engineering patches, or by somehow gaining access to the source code–so companies should plan accordingly, says Gary McGraw, chief technology officer of Cigital, a software-security consultancy.
“It is a myth that you have to have source code to exploit vulnerabilities,” McGraw says. “You (software developers) need to realize that your software is out there, and you are giving your attacker everything they need to exploit it.”