
Anticensorship Tool Proves Too Good to Be True

Experts warn that the software could identify those it claims to protect.
September 15, 2010

A software tool designed to help dissidents circumvent government censorship of the Internet contains flaws so severe that it could endanger those who use it.

The tool, called Haystack, has won awards and praise for enabling political activists and ordinary citizens to circumvent government controls that block Internet content. But security expert Jacob Appelbaum warns that it leaves a trail of clues that could be used to identify its users and the content they have accessed. Experts say the episode highlights the importance of independent review for technologies intended for this kind of use.

Haystack was created by the San Francisco-based Censorship Research Center, founded last year by two activists, Austin Heap and Daniel Colascione. The software was intended to “provide unfiltered and undetectable Internet access to the people of Iran,” according to the project website. Its creators received much attention: Heap was declared Innovator of the Year by the Guardian newspaper and also received the First Amendment Coalition Beacon Award.

The tool was billed as a way to access restricted Internet pages while hiding this activity from the authorities. Haystack’s creators claimed that it could do this by exploiting problems with Iran’s firewall, by encrypting communications between users and Haystack’s servers, and by disguising traffic sent to and from the tool so that users would appear to be visiting innocuous websites. But in the past month, experts have expressed concern that there had been no independent review of its ability to function as promised.
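In broad terms, the claimed approach resembles a covert tunnel: the real request is encrypted and carried inside traffic that appears to be an ordinary visit to a harmless site, with a relay on the other end recovering the true destination. The sketch below is only a generic illustration of that idea; Haystack’s actual design was never published, and the cover site, relay, and parameter names here are hypothetical.

```python
# A generic sketch of the "disguised tunnel" idea, assuming a hypothetical
# relay behind innocuous-weather-site.example. This is NOT Haystack's actual
# (unpublished) design; it only illustrates the general technique.
from urllib.parse import urlencode, urlparse, parse_qs
from cryptography.fernet import Fernet  # pip install cryptography

# In a real system the key would be negotiated securely; here the client and
# relay are simply assumed to share it out of band.
SHARED_KEY = Fernet.generate_key()
cipher = Fernet(SHARED_KEY)

def disguise_request(blocked_url: str) -> str:
    """Client side: hide a request for a censored page inside what looks like
    an ordinary image fetch from an innocuous cover site."""
    token = cipher.encrypt(blocked_url.encode()).decode()
    return "https://innocuous-weather-site.example/photo.jpg?" + urlencode({"img": token})

def relay_decode(disguised_url: str) -> str:
    """Relay side: extract the encrypted token and recover the real destination."""
    token = parse_qs(urlparse(disguised_url).query)["img"][0]
    return cipher.decrypt(token.encode()).decode()

if __name__ == "__main__":
    cover = disguise_request("https://blocked-news-site.example/article")
    print("On the wire, the request looks like:", cover)
    print("The relay recovers:", relay_decode(cover))
```

Even a scheme like this can fail badly if, as critics alleged of Haystack, the disguised traffic itself has a recognizable fingerprint that censors can detect.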

Appelbaum, along with Evgeny Morozov, a visiting scholar in the Program on Liberation Technology at Stanford University, and civil liberties activist Danny O’Brien, in particular pressed for more details about how the software was built. They worried that vulnerabilities in its underlying code could allow protected messages to be decoded by government officials. After testing the software, they reacted with anger and dismay.

Appelbaum says that after hearing a description of how the tool functioned, he worried that it might not have been built correctly. But he became truly concerned once he tested it himself. Appelbaum and his colleagues broke the tool’s privacy protections in less than six hours. Appelbaum says it would be easy for government authorities to do the same.

“This is a system that’s so fragile, I can barely tell you how it operates without being extremely worried about the people who may have used it who had no idea that they were being put at risk,” says Appelbaum. “It’s incredible, and incredibly terrible.”

Appelbaum says he must be cautious about giving details of what’s wrong with Haystack for fear of further endangering those who might be at risk. But he says, “When you use the tool, it effectively alerts authorities that you are trying to use it.”

On Monday, Heap announced that Haystack would halt distribution and testing with users in Iran until the security concerns were resolved. He wrote, “We have begun contacting users of Haystack to tell them to cease using the program. We will not resume testing until this third party review is completed and security concerns are addressed in an open and transparent way.”

Heap, other Censorship Research Center employees, and members of the board of directors could not be reached for comment. But Colascione, Haystack’s lead developer, published a public resignation letter acknowledging that Appelbaum’s concerns were justified. “It is as bad as Appelbaum makes it out to be,” Colascione wrote, adding that the version in circulation was intended only for testing, never for distribution and actual use.

Appelbaum says that he was able to obtain and run a copy of Haystack days after Heap claimed to have stopped supporting the software, suggesting that the organization isn’t in control of how widely it’s being distributed.

It’s alarming that Heap and his colleagues did not accept more help from people established in the field of censorship circumvention, says Ross Anderson, chair of the U.K.-based Foundation for Information Policy Research and a professor of security engineering at the University of Cambridge.

Anderson says it’s very difficult to design censorship circumvention tools that work properly, and the creators of such tools need to be well-versed in the risks and pitfalls. “There’s a lot of bloody history, and there’s a lot of relevant research,” he says.

The challenges go beyond simply providing access to restricted websites, Anderson says. Tools need to protect users’ anonymity, and avoid creating evidence that could be damning if it fell into the hands of government officials.

A poorly designed tool can single a user out, Anderson says. He likens it to being the only one to show up to a party wearing a mask. He warns that, because Haystack doesn’t appear to be in wide use, authorities could assume that anyone who does have a copy is a high-value target and should be arrested.

The Berkman Center for Internet and Society at Harvard University has previously conducted tests of censorship circumvention tools, both in the lab and in countries that filter Internet content, to determine how effective tools are at circumventing censorship, how secure they are, and how easy they are to use. Berkman researcher Ethan Zuckerman says that he hopes Haystack’s founders will allow a version of the tool to be tested later this year, the next time the center plans to evaluate such tools.
