When reports published earlier this month revealed that the U.S. National Security Agency could circumvent the protections of the Internet anonymity tool Tor, many activists and others who rely on the tool had little reason to panic. Despite the alarmist tone of some headlines, the techniques revealed relied on attacking software such as Web browsers rather than Tor itself. After reviewing the leaked NSA documents, the Tor Project declared “there’s no indication they can break the Tor protocol.”
All the same, the Tor Project is trying to develop critical adjustments to how its tool works to strengthen it against potential compromise. Researchers at the U.S. Naval Research Laboratory have discovered that Tor’s design is more vulnerable than previously realized to a kind of attack the NSA or government agencies in other countries might mount to deanonymize people using Tor.
Tor prevents people using the Internet from leaving many of the usual traces that can allow a government or ISP to know which websites or other services they are connecting to. Users of the tool range from people trying to evade corporate firewalls to activists, dissidents, criminals, and U.S. government workers with more sophisticated adversaries to avoid.
When people install the Tor client software, their outgoing and incoming traffic takes an indirect route around the Internet, hopping through a network of “relay” computers run by volunteers around the world. Packets of data hopping through that network are encrypted so that relays know only their previous and next destination (see “Dissent Made Safer”). This means that even if a relay is compromised, the identity of users, and details of their browsing, should not be revealed.
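The layered, hop-by-hop encryption described above can be sketched in a few lines of toy code. This is an illustration only, not Tor’s actual protocol: XOR stands in for Tor’s real ciphers, and all function names here are invented for the example.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a repeating key.
    # (Real Tor uses AES in counter mode inside TLS, not XOR.)
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def onion_wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    # Wrap the message in one encryption layer per relay.
    for key in reversed(relay_keys):
        message = xor_bytes(message, key)
    return message

def relay_peel(cell: bytes, key: bytes) -> bytes:
    # Each relay strips exactly one layer; only the final hop
    # ever sees the plaintext, and each relay learns only the
    # previous and next hop, not the original sender.
    return xor_bytes(cell, key)

keys = [os.urandom(16) for _ in range(3)]  # one shared key per relay
cell = onion_wrap(b"GET /index.html", keys)
for k in keys:  # the cell hops relay to relay, losing a layer each time
    cell = relay_peel(cell, k)
print(cell)  # b'GET /index.html'
```

The key property is that no single relay holds enough information to link the sender to the final destination, which is why compromising one relay is not, by itself, enough to deanonymize a user.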
However, new research shows how a government agency could work out the true source and destination of Tor traffic with relative ease. Aaron Johnson of the U.S. Naval Research Laboratory and colleagues found that the network is vulnerable to a type of attack known as traffic analysis.
This type of attack involves observing Internet traffic data going into and out of the Tor network and looking for patterns that reveal the Internet services that a specific Internet connection, and presumably its owner, is using Tor to access. Johnson and colleagues showed that the method could be very effective for an organization that both contributed relays to the Tor network and could monitor some Internet traffic via ISPs.
“Our analysis shows that 80 percent of all types of users may be deanonymized by a relatively moderate Tor-relay adversary within six months,” the researchers write in a paper on their findings. “These results are somewhat gloomy for the current security of the Tor network.” The work of Johnson and his colleagues will be presented at the ACM Conference on Computer and Communications Security in Berlin next month.
Johnson told MIT Technology Review that people using the Tor network to protect against low-powered adversaries such as corporate firewalls aren’t likely to be affected by the problem. But he thinks people using Tor to evade the attention of national agencies have reason to be concerned. “There are many plausible cases in which someone would be in a position to control an ISP,” says Johnson.
Johnson says that the workings of Tor need to be adjusted to mitigate the problem his research has uncovered. That sentiment is shared by Roger Dingledine, one of Tor’s original developers and the project’s current director (see “TR35: Roger Dingledine”).
“It’s clear from this paper that there *do* exist realistic scenarios where Tor users are at high risk from an adversary watching the nearby Internet infrastructure,” Dingledine wrote in a blog post last week. He notes that someone using Tor to visit a service hosted in the same country—he gives the example of Syria—would be particularly at risk. In that situation traffic correlation would be easy, because authorities could monitor the Internet infrastructure serving both the Tor user and the service he or she is connecting to.
Dingledine is considering changes to the Tor protocol that might help. In the current design, the Tor client selects three entry points, or “guards,” into the Tor network and uses them for 30 days before choosing a new set. But each time new guards are selected, the client runs the risk of choosing one that an attacker mounting traffic analysis can monitor or control. Setting the Tor client to select fewer guards and to change them less often would make traffic correlation attacks less effective. But more research is needed before such a change can be made to Tor’s design.
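The intuition behind that proposed change can be worked out with a back-of-the-envelope model. Assume (illustratively) that an attacker controls some fraction of guard capacity, and treat each fresh guard selection as an independent chance of picking a compromised relay; the numbers below are hypothetical, not measurements from the paper.

```python
def p_compromised(c: float, n_guards: int, rotation_days: int,
                  days: int) -> float:
    # If a fraction c of guard capacity is attacker-controlled, each
    # guard selection is an independent Bernoulli trial. With n_guards
    # picked per rotation period, the chance of having selected at
    # least one bad guard within `days` is 1 - (1 - c)^selections.
    selections = n_guards * (days // rotation_days + 1)
    return 1 - (1 - c) ** selections

c = 0.01  # assume the attacker runs 1% of guard capacity (illustrative)
print(round(p_compromised(c, 3, 30, 180), 3))   # 3 guards, 30-day rotation
print(round(p_compromised(c, 1, 270, 180), 3))  # 1 guard, kept longer
```

Under these assumptions, the current scheme (three guards, rotated every 30 days) exposes a user to roughly a 19 percent chance of touching a compromised guard within six months, while a single rarely rotated guard keeps that risk near 1 percent, which is the effect Dingledine’s proposed adjustment aims to exploit.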
Whether the NSA or any other country’s national security agency is actively trying to use traffic analysis against Tor is unclear. This month’s reports, based on documents leaked by Edward Snowden, didn’t say whether the NSA was doing so. But a 2012 presentation marked as based on material from 2007, released by the Guardian, and a 2006 NSA research report on Tor, released by the Washington Post, did mention such techniques.
Stevens Le Blond, a researcher at the Max Planck Institute for Software Systems in Kaiserslautern, Germany, guesses that by now the NSA and equivalent agencies likely could use traffic correlation should they want to. “Since 2006, the academic community has done much work on traffic analysis and has developed attacks that are much more sophisticated than the ones described in this report.” Le Blond calls the potential for attacks like those detailed by Johnson “a big issue.”
Le Blond is working on the design of an alternative anonymity network called Aqua, designed to protect against traffic correlation. Traffic entering and exiting an Aqua network is made indistinguishable through a mixture of careful timing and the blending in of fake traffic. However, Aqua’s design has yet to be implemented in usable software, and so far it can protect only file sharing rather than all types of Internet usage.
In fact, despite its shortcomings, Tor remains essentially the only practical tool available to people who need or want to anonymize their Internet traffic, says David Choffnes, an assistant professor at Northeastern University who helped design Aqua. “The landscape right now for privacy systems is poor because it’s incredibly hard to put out a system that works, and there’s an order of magnitude more work that looks at how to attack these systems than to build new ones.”