The CAPTCHA Arms Race
Researchers mull the next step in spam deterrents.
Spammers use automated programs called bots to harvest online data, so in 2000 a group of researchers created a bot deterrent called the Captcha, short for "completely automated public Turing test to tell computers and humans apart." The first Captchas required people to transcribe words displayed as distorted images before they could access a website.
But as bots have gotten smarter and Captchas more complicated, two problems have arisen. The first is that Captchas can be hard for humans to solve, too. The second is that spammers have simply enlisted networks of humans to solve Captchas on their behalf.
Researchers are tackling both problems. For instance, Jon Bentley of Avaya Labs and Henry Baird, a professor at Lehigh University, have proposed "implicit Captchas" that would weave a number of small tests into the natural experience of browsing a website. To move from one page to the next, the user might have to click a particular object in an image. Though each test is relatively simple, together they would be numerous enough to establish that it's probably a human at the keyboard. And navigating a site would demand so much human attention that hiring networks of Captcha breakers would no longer be cost-effective for spammers.
Until such new techniques prove themselves in the real world, though, Luis von Ahn, a Carnegie Mellon professor who helped develop the Captcha, thinks Web surfers have no choice but to muddle through even the difficult ones. "If you got rid of them, all hell would break loose," he says.