Too Little, or Too Much
But most legal, policy, and security experts agree that these efforts, taken together, still don’t amount to a real solution. The new anti-spam initiatives represent only the latest phase of an ongoing battle. “The first step is, the industry has to realize there is a problem that is bigger than they want to admit,” says Peter Neumann, a computer scientist at SRI International, a nonprofit research institute in Menlo Park, CA. “There’s a huge culture change that’s needed here to create trustworthy systems. At the moment we don’t have anything I would call a trustworthy system.” Even efforts to use cryptography to confirm the authenticity of e-mail senders, he says, are a mere palliative. “There are still lots of problems” with online security, says Neumann. “Look at it as a very large iceberg. This shaves off one-fourth of a percent, maybe 2 percent – but it’s a little bit off the top.”
But if it’s true that existing responses are insufficient to address the problem, it may also be true that we’re at risk of an overreaction. If concrete links between online fraud and terrorist attacks begin emerging, governments could decide that the Internet needs more oversight and create new regulatory structures. “The ISPs could solve most of the spam and phishing problems if made to do so by the FCC,” notes Clarke. Even if the Bali bomber’s writings don’t create such a reaction, something else might. Absent the discovery of a strong connection between online fraud and terrorism, another trigger could be an actual act of “cyberterrorism”—the long-feared use of the Internet to wage digital attacks against targets like city power grids, air traffic control, or communications systems. Or it could be some online display of homicide so appalling that it spawns a new drive for online decency, one countenanced by a newly conservative Supreme Court. Terrorism aside, the trigger could be a pure business decision, one aimed at making the Internet more transparent and more secure.
Zittrain concurs with Neumann but also predicts an impending overreaction. Terrorism or no terrorism, he sees a convergence of security, legal, and business trends that will force the Internet to change, and not necessarily for the better. “Collectively speaking, there are going to be technological changes to how the Internet functions – driven either by the law or by collective action. If you look at what they are doing about spam, it has this shape to it,” Zittrain says. And while technological change might improve online security, he says, “it will make the Internet less flexible. If it’s no longer possible for two guys in a garage to write and distribute killer-app code without clearing it first with entrenched interests, we stand to lose the very processes that gave us the Web browser, instant messaging, Linux, and e-mail.”
A concerted push toward tighter controls is not yet evident. But if extremely violent content or terrorist use of the Internet might someday spur such a push, a chance for preëmptive action may lie with ISPs and Web hosting companies. Their efforts need not be limited to fighting spam and fraud. With respect to the content they publish, Web hosting companies could act more like their older cousins, the television broadcasters and newspaper and magazine editors, and exercise a little editorial judgment, simply by enforcing existing terms of service.
Is Web content already subject to any such editorial judgment? Generally not, but sometimes, the hopeful eye can discern what appear to be its consequences. Consider the mysterious inconsistency among the results returned when you enter the word “beheading” into the major search engines. On Google and MSN, the top returns are a mixed bag of links to responsible news accounts, historical information, and ghoulish sites that offer raw video with teasers like “World of Death, Iraq beheading videos, death photos, suicides and crime scenes.” Clearly, such results are the product of algorithms geared to finding the most popular, relevant, and well-linked sites.
But enter the same search term at Yahoo, and the top returns are profiles of the U.S. and British victims of beheading in Iraq. The first 10 results include links to biographies of Eugene Armstrong, Jack Hensley, Kenneth Bigley, Nicholas Berg, Paul Johnson, and Daniel Pearl, as well as to memorial websites. You have to load the second page of search results to find a link to Ogrish.com. Is this oddly tactful ordering the aberrant result of an algorithm as pitiless as the ones that churn up gore links elsewhere? Or is Yahoo, perhaps in a nod to the victims’ memories and their families’ feelings, making an exception of the words “behead” and “beheading,” treating them differently from thematically comparable words like “killing” and “stabbing”?
Yahoo’s Osako did not reply to questions about this search-return oddity; certainly, a technological explanation cannot be excluded. But it’s clear that such questions are very sensitive for an industry that has, to date, enjoyed little intervention or regulation. In its response to complaints, says Richard Clarke, “the industry is very willing to coöperate and be good citizens in order to stave off regulation.” Whether it goes further and adopts a stricter editorial posture, he adds, “is a decision for the ISP [and Web hosting company] to make as a matter of good taste and as a matter of supporting the U.S. in the global war on terror.” If such decisions evolve into the industrywide assumption of a more journalistic role, they could, in the end, be the surest route to a more responsible medium – one that is less easy to exploit and not so vulnerable to a clampdown.
David Talbot is Technology Review’s chief correspondent.