
Moore’s Outlaws

Cyber attacks are increasing exponentially. Here’s what recent episodes can teach us about thwarting cyber crime, espionage, and warfare.
June 22, 2010

Eugene Kaspersky, CEO of the Russian antivirus company Kaspersky Lab, admits it crossed his mind last year that he might die in a plane crash caused by a cyber attack. Kaspersky is a man of eclectic tastes and boyish humor; when we met in his office on the outskirts of Moscow, he was munching a snack of sweetened, freeze-dried whole baby crabs from Japan, and at one point he showed me a pair of men’s undergarments, bought on a Moscow street, that had been stamped “Protected by Kaspersky Anti-Virus.” But he grew quite serious when the subject turned to the days leading up to April 1, 2009.

That was the date a virulent computer worm called Conficker was expected to receive an update from its unknown creator–but nobody knew to what end. A tweak to Conficker’s code might cause the three million or so machines in its army of enslaved computers, called a botnet, to start attacking the servers of some company or government network, vomit out billions of pieces of spam, or just improve the worm’s own ability to propagate. “It’s like if you have a one million army of real soldiers. What can you do?” Kaspersky asked rhetorically. “Anything you want.” He let that sink in for a moment. “We were waiting for April 1–for something. I checked my travel schedule to make sure I didn’t have any flight. We had no idea about this functionality. Security officials were really nervous.” In the end? “Nothing happened. Whew! Whew!” Kaspersky cried out. He crossed himself, clasped his hands in a prayerlike pose, and gazed toward the ceiling.

The unknowns about Conficker in the spring of 2009 (the infection remains widespread but, so far, inactive) reflect larger unknowns about just how bad cyber security will get (see Briefing). The trends aren’t promising: tour Kaspersky’s labs–or those of any computer security company or research outpost–and you quickly learn that malware is tougher to detect, spam delivery faster, and attacks growing in number and financial impact (see “The Rise in Global Cyber Threats” in slideshow). Security experts and attackers are locked in a kind of arms race. In Kaspersky’s case, his engineers and cryptographers do everything from seeking faster automated virus-detection methods to trolling Russian-language hacker blogs for clues about what’s coming.

Ingenious solutions are multiplying, but the attacks are multiplying faster still. And this year’s revelations of China-based attacks against corporate and political targets, including Google and the Dalai Lama, suggest that sophisticated electronic espionage is expanding as well. “What we’ve been seeing, over the last decade or so, is that Moore’s Law is working more for the bad guys than the good guys,” says Stewart Baker, the former general counsel of the National Security Agency and a former policy chief at the U.S. Department of Homeland Security, referring to the prediction that integrated circuits will double in transistor capacity about every two years. “It’s really ‘Moore’s outlaws’ who are winning this fight. Code is more complex, and that means more opportunity to exploit the code. There is more money to be made in exploiting the code, and that means there are more and more sophisticated people looking to exploit vulnerabilities. If you look at things like malware found, or attacks, or the size of the haul people are pulling in, there is an exponential increase.”

As these low-grade conflicts continue, the threat of outright cyber war is rising, too. More than 100 nations have developed organizations for conducting cyber espionage, according to the FBI, and at least five nations–the United States, Russia, China, Israel, and France–are developing actual cyber weapons, according to a November 2009 report by the computer security company McAfee. (In May the U.S. Senate confirmed the director of the National Security Agency, General Keith Alexander, as head of the newly created U.S. Cyber Command.) These arsenals could disable military networks or bring down power grids. And the battle could escalate at the speed of light, not just that of intercontinental ballistic missiles. “Cyber weapons can affect a huge amount of people, as well as nuclear. But there is one big difference between them,” says Vladimir Sherstyuk, a member of Russia’s National Security Council and director of the Institute for Information Security Issues at Moscow State University. “Cyber weapons are very cheap! Almost free of charge.”

That form of battle is still largely speculative and can involve some specialized weapons, whereas the siege of attacks from hackers and malware is a daily reality for individuals and businesses. But the two types of conflict share the same medium, and they could share some of the same approaches. Perhaps most significant, the former becomes easier to wage, and more dangerous, in the murky and chaotic environment created by the latter. “Going after the botnets, going after the corporate espionage stuff, won’t remove the threat of disruptive cyber war,” says Greg Rattray, a former White House national security official and author of Strategic War in Cyberspace. “But a cleaner ecosystem would put a brighter light on cyber-war activity, making it easier to detect and to defend against.”

Grin and bear it: In the absence of strong international agreements on fighting cyber crime, ad hoc collaboration sometimes gets the job done–as when Eugene Kaspersky, CEO of the Moscow security firm Kaspersky Lab, helped Dutch police shut down a botnet. But such isolated successes are not keeping pace with the exponential rise in attacks.

At a basic level, flawed technology is responsible for the whole mess. Many components of our current networks weren’t built to be particularly secure (see “The Internet Is Broken,” December 2005/January 2006). Report after report from federal agencies, the National Research Council, and think tanks like Rand has made it clear that fixing cyberspace for good will require accelerating research and development to make hardware, software, and networking technologies more secure–and then getting those technologies rapidly in place. The latest call came in a report issued last November by the Department of Homeland Security, which concluded that “the only long-term solution … is to ensure that future generations of these technologies are designed with security built in from the ground up.”

But securing cyberspace can’t wait for entirely new networks. In the meantime, we must start addressing a host of other systemic problems. Among them: commonsense security practices are often ignored, international coöperation is as spotty as the technology is porous, and Internet providers don’t do enough to block malicious traffic. “Hardening targets–and having good laws and good law-enforcement capacity–are the key foundational pieces no matter what other activities we want to try to pursue,” Christopher Painter, the White House senior director for cyber security, pointed out at a recent conference. Technology Review investigated three recent episodes–an exceptional botnet investigation in Holland, a probe of China-based espionage in India and other nations, and the 2007 Internet attacks on the small Baltic state of Estonia–to glean lessons in how to better police and secure the flawed cyberspace we’ve got, and prepare for the cyber war we hope will never come.

Shadow and Grum

The Dutchman from the town of Sneek was only 19 years old, but he’d already achieved more than most of us can claim: he’d assumed illicit control of as many as 150,000 computers around the world. The unwitting victims had been rounded up by means of clever messages appearing to come from their contacts on Microsoft’s Windows Live Messenger. Those who clicked on a link in the message downloaded a virus; each computer then became a bot. In the summer of 2008, according to a U.S. indictment, the man, Nordin Nasiri, decided to sell control of these enslaved machines–a botnet that he called Shadow–for 25,000 euros.

Botnets are among the most serious threats on the Internet. They are the engines behind spam and the fraud and identity theft that spam perpetuates (according to a recent report from the security firm MessageLabs, nearly 130 billion spam messages are dispatched each day, and botnets are responsible for 92 percent of them). They are also responsible for such menaces as denial-of-service attacks, in which gangs of computers flood a corporate or government server with so much traffic that it cannot function. Thousands of large botnets swarm the digital ether, including some that are millions of machines strong. “Botnets are really the root cause and the vehicle for carrying out much of the badness that is going on and affecting everyone,” says Christopher Kruegel, a computer scientist and security researcher at the University of California, Santa Barbara.
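To see the mechanics from the defender’s side, consider what such a flood looks like at the receiving server: request volume suddenly dwarfs the normal baseline. The Python sketch below is a minimal illustration, assuming only a stream of request timestamps parsed from an access log; the window size and threshold are invented placeholders, not figures from any real deployment.

```python
from collections import deque
import time

# Hypothetical values; a real operator would tune these to the server's baseline.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 5000

recent_requests = deque()  # timestamps of requests seen in the current window

def record_request(timestamp: float) -> bool:
    """Record one incoming request; return True if the rate looks like a flood."""
    recent_requests.append(timestamp)
    # Drop timestamps that have aged out of the sliding window.
    while recent_requests and recent_requests[0] < timestamp - WINDOW_SECONDS:
        recent_requests.popleft()
    return len(recent_requests) > MAX_REQUESTS_PER_WINDOW

# Example: feed in parsed log entries and alert on a suspected flood.
if record_request(time.time()):
    print("Possible denial-of-service flood: request rate exceeds baseline threshold")
```

Real mitigation, whether rate limiting, traffic scrubbing, or upstream filtering, starts from exactly this kind of crude signal.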

Espionage in the Cloud: How China-based hackers exploited Web 2.0 services to run a global spy network.

This past spring, researchers uncovered a computer espionage network directed from China by perpetrators who remain unknown. In the map above, circles represent locations of 139 computers known to have been infected with the spyware; they included ones in the National Security Council Secretariat of India, the Office of the Dalai Lama, and the Pakistani embassy in the United States. The infected computers used popular Web 2.0 services (depicted above as a cloud) to check in with attackers’ sites, and these sites then sent the machines the addresses of command-and-control servers to which they would connect and send their data. In one case documented by the researchers, 1,500 of the Dalai Lama’s letters were sent from Dharamsala, India, to a command-and-control server in Chongqing, China. The researchers suspect that the original infections took root when some victims opened virus-laden Word and PDF documents e-mailed to them. Infected computers were also found in China; some were used by the attackers to test their system.

As things worked out, the Nasiri case was a model for a successful transnational botnet investigation. In the United States, the FBI got a tip about the Dutchman and passed it to the high-tech unit of the Dutch National Police, who arrested him. Then, in an unusual touch, the Dutch investigators sought the help of antivirus companies to craft instructions for erasing the infection from victims’ computers and to take over the botnet’s command-and-control system, which operated on servers in the Netherlands. “They wanted to do something novel–to take out the botnet,” recalls Roel Schouwenberg, an antivirus researcher at Kaspersky Lab, whom the Dutch police contacted to perform the task. “There was some risk of it getting stolen by other bad guys.”

Trouble is, the U.S.-Dutch investigation was an exception. Around the time Shadow was being shut down, another botnet, known as Grum, was gaining strength (see “Botnet Snapshot” in slideshow). Grum’s command-and-control system was hosted by a Ukrainian company called Steephost. In November 2009, Alex Lanstein, a researcher at the U.S. computer security firm FireEye, wrote an earnest e-mail to Steephost’s abuse notification address. “Hi Abuse,” he began, “I thought you would be interested to know of a criminal network downstream from you.” He laid out the facts about Grum and other malicious sites it hosted, but he received no reply. A few days later, however, he noticed the appearance of a kind of botnet fig leaf: the malicious sites’ Web addresses now led to phony e-commerce home pages. In March, the computer security firm Symantec said that Grum was responsible for 24 percent of all spam on the Internet, up from 9 percent at the end of 2009.

Steephost’s owners, who could not be reached for comment, had little to worry about in thumbing their noses at the likes of Lanstein. Botnets operate freely across national borders, and law enforcement lags far behind. A treaty that seeks to boost investigative coöperation, the European Convention on Cybercrime, has been signed by 46 countries–mostly in Europe, but including the United States, Canada, South Africa, and Japan. But it has not been signed by China, Russia, or Brazil, which (along with the United States) jockey for leadership as the world’s major hosts of cyber attacks. Some signatories, such as Ukraine, are not known for enthusiastic efforts to stop botnets. And attempts to craft a global version have stalled (see “Global Gridlock on Cyber Crime”). “Botnets are a serious threat, but we’re out of luck until there is international agreement that cyber crime really needs fairly rigorous countermeasures and prosecutions across pretty much all of the Internet-using nations,” says Vern Paxson, a computer scientist at the University of California, Berkeley, who studies large-scale Internet attacks.

Given the poor prospects for a global accord, the United States is trying to forge bilateral agreements with some of the worst sources of attacks, including Russia. Russia coöperates on an ad hoc basis in pursuing homegrown cyber criminals–it recently aided in the arrest of several people in Russia who’d allegedly carried out a $10 million online theft from the Bank of Scotland–but stops short of allowing law enforcement from other nations access to its networks. Still, Sherstyuk, the Russian information security czar, told me: “We want to help set the rules in the information sphere. And I bet that there are many things that we can do together.”

Botnet Snapshot: A botnet called Grum is a leading source of spam on the Internet. Here are some of its vital statistics.

Many Internet service providers, another potential source of defense, are also making a tepid effort. ISPs have the capacity to identify and quarantine infected machines on their networks, thus containing a source of spam and attacks. But in practice, most ISPs ignore all but those machines so noxious that they prompt other ISPs to retaliate by blocking traffic. It’s much cheaper to provide the extra bandwidth than to actually deal with the problem, says Michel van Eeten, a technology policy professor at Holland’s Delft University of Technology, who studies botnets. He describes the case of an Australian ISP that was considering technology to automatically cut off infected computers. The ISP soon abandoned the plan when it realized that 40,000 confused and angry customers would be dialing in to customer support lines every month, wondering why they got cut off and how to cleanse their machines. “ISPs typically take care of the bots that trigger countermeasures against the ISP itself,” van Eeten says, “but not too many more, because of the cost impact of scaling up such an effort.”
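The quarantine idea van Eeten describes reduces, at its simplest, to bookkeeping on the ISP’s side: count each customer’s outbound mail connections and flag the outliers. The Python sketch below is a toy model under that assumption; the port-25 threshold and the flow-record format are hypothetical.

```python
from collections import Counter

# Hypothetical threshold: residential customers rarely open thousands of
# outbound SMTP (port 25) connections per hour; spam-sending bots do.
SMTP_CONNECTIONS_PER_HOUR_LIMIT = 1000

def suspected_bots(flow_records):
    """flow_records: iterable of (customer_ip, dest_port) tuples for one hour."""
    smtp_counts = Counter(ip for ip, port in flow_records if port == 25)
    return {ip for ip, count in smtp_counts.items()
            if count > SMTP_CONNECTIONS_PER_HOUR_LIMIT}

# Flagged addresses would then be rate-limited or redirected to a cleanup page,
# the step most ISPs skip because of the support costs described above.
```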

As for the Dutch case, it revealed that even successful investigations are tough to prosecute. Today Nasiri is awaiting trial in Holland on Dutch charges. But a Brazilian man originally charged along with him escaped trial. The U.S. indictment had alleged that the Brazilian orchestrated the receipt of 23,000 euros from a buyer and arranged to receive electronic media from Nasiri containing the bot code. It seemed he’d been caught red-handed. Last year, however, the United States dropped the charges, citing the unavailability of a key witness. The Dutch police say they escorted him to Amsterdam’s Schiphol airport and he jetted back to Brazil, a free man.

Espionage

In 1959 the Tibetan spiritual leader, the Dalai Lama, fled to Dharamsala, a scenic town in the Himalayan foothills of northern India that is still home to Tibet’s exiled government. There, a local café called Common Ground also serves as a nongovernmental organization that tries to bridge the gap between Chinese and Tibetan cultures. But in 2009, a computer scientist visiting the café discovered a bridge of a different sort: an electronic spy pipeline. The researcher, Greg Walton, noticed that computers in the town’s Wi-Fi mesh network, called TennorNet, were “beaconing” to a command-and-control server in Chongqing, China.

The scope of the espionage extended far beyond the café. According to researchers from the Ottawa cyber forensics company SecDev Group (including Walton) and the University of Toronto, victims included agencies of the Indian national security establishment; the compromised data included personal, financial, and business information belonging to Tibetans, Indians, and human-rights figures around the world (see “Espionage in the Cloud” in slideshow). The discovery came before China-based attacks against Google and other Western companies prompted Google to pull out of China (see “China’s Internet Paradox,” May/June 2010). “We lack good metrics to figure out how big the espionage problem is, but it seems clear that it’s getting a lot worse–and fast,” says Paxson. “Google China was a wake-up call, and there’s a lot more of it out there.”

Baltic Battle: In 2007, following a dispute over Estonia’s plans to move a Russian monument, riots broke out between Russians and Estonians. Then much of Estonia’s Internet was shut down by a series of cyber attacks. The difficulty of attributing those attacks highlights a need for new technologies and expanded international agreements.

China denies that its government was behind either the Dalai Lama or Google attacks, and the Toronto group says it cannot prove it was. But we can fairly speculate that there is a Chinese market for intelligence about people active in Tibetan circles. Many institutions–corporations, governments, universities–are in a similar position to Tibet’s government in exile, in that they hold data worth stealing because it is of value to someone. And the Canadian work shed light on global espionage techniques that, by all accounts, are far more widespread than the China-based attackers’ strikes on Tibetan targets. “With exponential growth in cyber crime, private and public organizations will find cyber penetrations if they look,” says John Mallery, a computer scientist at MIT’s Computer Science and Artificial Intelligence Laboratory. “More or less, organizations are mired in inherently insecure infrastructures and components that were never designed for security and, at best, have been retrofitted with partial security measures. Today, the attacker has the advantage at the architectural levels and is innovating faster than defenders. So what organizations can do is manage their vulnerability by isolating valuable information.”

As Mallery suggests, the lesson is that organizations should plan for losses and remain constantly vigilant, because no networked IT infrastructure can be truly safe. Consider that in response to earlier incursions (also detected by the Canadian researchers), the Dalai Lama’s staff had installed state-of-the-art firewalls one year before Walton’s discovery. But firewalls generally must be programmed to block hostile sites, and the China-based spies used an ever shifting array of benign-seeming intermediaries, including Google Groups, Twitter, and Yahoo Mail. The attackers are believed to have embedded their malware in Microsoft Word and PDF documents sent from seemingly friendly e-mail addresses that had been either spoofed or hacked. If the victim opened the attachment with a vulnerable version of Adobe Reader or Microsoft Office, the spyware took root.
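The gap is easy to illustrate. A conventional deny-list firewall decides by destination, so command-and-control traffic relayed through Google Groups, Twitter, or Yahoo Mail looks like ordinary browsing and passes. The Python sketch below uses an invented blocklist purely to show that limitation; it is not a description of the Dharamsala network’s actual configuration.

```python
# Invented blocklist for illustration; the real problem is that no such list
# can enumerate attacker infrastructure hidden behind legitimate services.
BLOCKED_DOMAINS = {"known-malware-host.example", "bad-c2.example"}

def firewall_allows(destination_domain: str) -> bool:
    """Classic deny-list check: block known-bad destinations, allow the rest."""
    return destination_domain not in BLOCKED_DOMAINS

# Spyware beaconing to an attacker page hosted on a mainstream service
# looks like ordinary traffic and passes the check.
for dest in ("groups.google.com", "twitter.com", "mail.yahoo.com"):
    print(dest, "allowed" if firewall_allows(dest) else "blocked")
```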

Fortunately, some emerging technologies could provide a solution even in these cases. Cyber espionage often involves sending malicious commands to an infected computer that then sends data back. Detecting the signature of those commands–and then blocking them–is the goal of Santa Barbara’s Kruegel, who has developed technology that spots the communication even if the initial infection went undetected. Even though attackers might compromise machines, Kruegel says, if you can identify the commands fast enough, “you can target and shoot them down.” He expects to bring the technology to market within one year.
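Kruegel’s product details aren’t public here, but the general shape of command-signature detection is straightforward to sketch: inspect traffic for byte patterns characteristic of known bot command protocols and flag or block on a match. The signatures in the Python sketch below are invented placeholders, standing in for patterns a real system would derive from observed command-and-control traffic.

```python
import re

# Invented example signatures; real systems derive these from observed
# command-and-control protocols and update them continuously.
COMMAND_SIGNATURES = [
    re.compile(rb"^BOTCMD:(SPAM|DDOS|UPDATE)\b"),
    re.compile(rb"\x00\x17join-channel\x00"),
]

def looks_like_c2(payload: bytes) -> bool:
    """Return True if a network payload matches any known command signature."""
    return any(sig.search(payload) for sig in COMMAND_SIGNATURES)

# A sensor at the network boundary would call this on each flow's payload
# and drop or flag matching connections, even on already infected hosts.
assert looks_like_c2(b"BOTCMD:SPAM target=example.com")
assert not looks_like_c2(b"GET /index.html HTTP/1.1")
```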

Changes on the political level could also make a difference: right now, no treaty bars what the China-based agents did. While U.N. conventions make strong statements on human rights, for example–and such conventions are frequently invoked to condemn the actions of China and other nations–nothing comparable addresses digital pillaging that victimizes targets in the realms of politics, business, and human rights. “Cyber crime is morphing into cyber espionage because of the absence of restraints at a global level,” says Ronald Deibert, who helped lead the espionage research as director of the Citizen Lab, a research outpost at the University of Toronto. “Having a treaty would help hold governments accountable. You can say: ‘Here’s the treaty, and China, you aren’t playing by the rules–but you signed it.’ ” (See “Militarizing Cyberspace,” Notebooks, p. 12.) Meanwhile, it’s safe to assume the worst about the prevalence of cyber espionage. “We need to look at this as one small window into a much wider problem,” he says. “We kind of dipped our finger into a pool here.”

Who Did It?

On the morning of April 27, 2007, the Estonian government, over protests from Russia, began moving a bronze statue of a Soviet soldier that had originally been installed in the capital city of Tallinn to commemorate World War II dead. The 300,000 ethnic Russians living in Estonia were furious. Not long after, Internet attacks began. Botnets targeted Estonian newspapers, telecoms, banks, and government sites. The nation’s network was besieged for weeks. Russia seemed the obvious culprit: its government had warned that removing the statue would be “disastrous.”

If you were watching Estonian network traffic during the attacks, you would have seen bot armies advancing from the United States, Egypt, Peru, and other countries. But closer inspection revealed that many of the bots were taking orders from computers in Russia (and instructions on how to flood Estonian websites with useless “pings” spread in Russia-based online chat rooms). Still, it was impossible to determine whether the Russian government itself was directing the hostile activities. Russia denied responsibility but refused to allow any forensic analysis of its networks.

In short, there was no easy way to attribute the attack. In a world that countenances the prospect of cyber war, situations like that are among the biggest problems that nations face, but certainly not the only ones. If a network breach aimed at espionage can’t readily be distinguished from one that is a prelude to attack, it’s hard to know when a counterattack is justified. Neither is there any way to conduct inspections for cyber weapons, measure their potential yield, or certify that they’ve been destroyed. When the Senate pressed General Alexander, the new head of U.S. Cyber Command, to explain how the United States would deal with these issues, his responses were classified. “The entire phenomenon of cyber war is shrouded in such government secrecy that it makes the Cold War look like a time of openness and transparency,” Richard Clarke, the former counterterrorism czar, writes in his new book, Cyber War: The Next Threat to National Security and What to Do About It.

But the implications of the attribution problem are clear enough. An attack on one NATO nation obligates other NATO members to join the fight, points out Michael Schmitt, a dean and professor of international law at the George C. Marshall European Center for Security Studies in Germany. Getting it wrong would be a disaster. “This isn’t a situation where you can think the other side attacked,” he says. “You have to know. As we learned recently, you need to get the evidence right when you go to war.” And in the case of a cyber threat, a government could easily misjudge its source, since Internet addresses can be concealed or faked. “I’m terrified that you attribute to a state wrongly,” Schmitt says.

Over the long term, proposed technological fixes could address this problem. Research groups at Georgia Tech, the University of California, San Diego, the University of Washington, and other institutions are working on ways to establish the provenance of data. In an approach being developed by researchers at San Diego and the University of Washington, the identity of the original computer that issued a packet of data would stay attached to that data, in encrypted form. The digital “key” to this identity would be held by a trusted third party–perhaps accessible only by court order. “All the instruments of national power, ranging from diplomatic to military force to economic influence, are pretty worthless if you can’t attribute who mounted an attack,” says Stefan Savage, a computer scientist at the University of California, San Diego, who is developing the technology. But while the technology can potentially tell you the identity of a machine that waged an attack (or committed a crime), this isn’t always helpful if the original source was some public computer. “Being able to attribute activity to a particular machine is a lot different than being able to say ‘What was its true source?’ ” says Berkeley’s Vern Paxson. “Even if it went all the way to the terminal end system”–that is, the place of an attack’s true origin–“you might have some coffee shop in Shanghai.” Paxson warns that in general, approaches to tracking identities in cyberspace carry obvious privacy implications. “The technology to address these sorts of issues–the ability to be able to monitor who is doing what, and track it back–would be very powerful,” he says. “But it would also be police-state technology.”
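The escrowed-identity idea can be illustrated with off-the-shelf public-key cryptography: every machine encrypts its identity to an escrow agent’s public key and attaches the ciphertext to outgoing data, and only the escrow, holding the private key, can decrypt it. The Python sketch below uses the PyNaCl library and invented identifiers; it is a toy model of the proposal’s logic, not the San Diego and Washington researchers’ actual protocol.

```python
from nacl.public import PrivateKey, SealedBox

# The escrow agent (the "trusted third party") generates a key pair and
# publishes only the public key. These names are illustrative.
escrow_key = PrivateKey.generate()
escrow_public = escrow_key.public_key

def label_packet(payload: bytes, machine_id: bytes) -> dict:
    """Attach the sender's identity to a packet, encrypted to the escrow key.

    Routers and recipients see only ciphertext; the escrow can decrypt it
    and name the originating machine."""
    sealed_id = SealedBox(escrow_public).encrypt(machine_id)
    return {"payload": payload, "provenance": sealed_id}

def attribute_packet(packet: dict) -> bytes:
    """Escrow-side lookup: recover the machine identity from the label."""
    return SealedBox(escrow_key).decrypt(packet["provenance"])

packet = label_packet(b"some application data", b"machine-7f3a (hypothetical ID)")
assert attribute_packet(packet) == b"machine-7f3a (hypothetical ID)"
```

In this model a court order would compel the escrow to run the decryption step; everyone else on the path sees only opaque ciphertext, which is also why the privacy concerns Paxson raises attach to whoever controls that key.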

With solutions still far off, averting a needless outbreak or escalation of cyber war will have to rely on more conventional intelligence techniques. Surveillance of computer networks can sometimes provide the clues needed to identify and expose a potential attacker, says Bret Michael, a computer scientist at the Naval Postgraduate School in Monterey, CA. So can basic human intelligence networks. If intelligence agencies can pinpoint the source of a threat, they can “shine a light on a malefactor before he attacks or soon after,” he says. “Sometimes just being identified is enough to prevent an attack from taking place.”

Cyber Summit

On a crisp April morning this year, more than 140 diplomats, policy makers, and computer scientists arrived in the mountain town of Garmisch-Partenkirchen, Germany. Their host was the Russian Interior Ministry.

The topic of the conference that brought them there: how to secure the “information sphere,” as the Russians put it. But this meant different things to people from different countries. Painter, the White House aide, emphasized fighting cyber crime. Russian speakers–mindful of the suicide bombings that had recently struck the Moscow subway–talked of thwarting terrorist training and organizing online. An Indian researcher talked about network usage by the Mumbai terrorists and described how Indian laws were reformed in response. Representatives of the Internet Corporation for Assigned Names and Numbers (ICANN), the authority responsible for domain names, spoke of the latest security fixes. A small Chinese delegation attended but watched silently.

Then, on the second day, Michael Barrett, the chief information security officer at PayPal, took the podium to remind the attendees of what they had in common: a broken set of technologies. Like other targets, he said, PayPal–which gives Internet users a secure way to send cash in 190 countries and regions–is under siege. “What’s becoming clear to us, and indeed any practitioner of information security, is that most of the curves–and we can all dig out these curves, the amount of viruses on the Internet, the number of incursions, and blah blah blah–they all look depressingly similar. They all tend to look logarithmic in scale. They all go up like that,” he said with a sharp skyward sweep of his hand.

Referring to earlier conversations about improving coöperation and adding security patches, he added: “It’s not that those things are bad. But at this point it reminds me slightly of the definition of madness, which is to say, doing the same thing over and over again and expecting a different result. It’s our hypothesis that to secure the Internet, we have to think about ecosystem-level safety, and that means rethinking the foundations of the Internet.” Just as Barrett was getting warmed up, the Russian organizers cut him off. They were behind schedule and it was time for lunch, but the decision was symbolic of a larger problem. “Essentially, we don’t have the technology to address the threats that are delivered by the network infrastructure we’ve put in place,” says John Mallery, the MIT researcher. Several research projects have created test beds for new Internet architectures or prototyped more secure operating systems and hardware architectures, such as chips that store some software in isolated areas. But the Department of Homeland Security report still found “an urgent need” for accelerated research and development on securing cyberspace.

The collective discussion in Garmisch was useful to advance near-term efforts. Changing the behavior of individual computer users and corporations will be crucial; so will tightening law-enforcement ties, installing the latest technological patches, and expanding diplomacy. But switching to new technologies will ultimately be necessary. And that’s not likely to happen until we experience a major breakdown or attack. “What we’ve seen is that arms races often progress in an evolutionary fashion. But now and then, they jump,” says Paxson. “If there is some cyber attack that messes up a city for a week–or if a big company is brought to its knees–it will be a game changer. I have no way of knowing how to predict that. It’s like saying here in the Bay Area, ‘Will there be a big earthquake in the next three years?’ I really don’t know.”

His remarks reminded me of Kaspersky’s plane-crash fears; collectively, we just can’t predict how, and when, things might change. But as Baker put it, “The lesson of 9/11, the lesson of Hurricane Katrina, is that sooner or later, it’s going to happen.”

David Talbot is Technology Review’s chief correspondent.
