Jamming the Spammers

Visual fake-out should foil the ‘spambots’ used by junk-mail purveyors.
June 1, 2003

Junk e-mail operators use programs called bots to register for thousands of free online e-mail accounts. Bot-blocking obstacles offer some protection; Yahoo!’s e-mail sign-up form, for example, asks users to type in an English word that’s displayed as a spotty, degraded image that humans can decipher but bots cannot (see “Excuse Me, Are You Human?”). Trouble is, hackers can build bots that bust such barricades by matching the outline of the word in the degraded image against the outlines of words in a dictionary.

Now there’s a new defensive weapon: a tougher visual test. The system constructs English-sounding words, like “brience” and “emperly,” then masks them with randomly generated squares, ovals, and polka dots that eat away parts of letters. Human readers can still recognize the words. But to break this barrier, hackers would have to crack problems in computer vision and pattern recognition that have remained unsolved for decades, says Henry Baird, the principal scientist at the Palo Alto Research Center in California who developed the system. In tests, the system was impervious to laboratory bots.

The new technology bolsters spam-filtering efforts by companies such as Microsoft and America Online. Filters can’t catch everything, says Brian Cartmell, manager of Spam Arrest, a Seattle company that uses a Web-based visual test like Yahoo!’s to protect customers from junk e-mail. Baird’s approach, in contrast, could stop spam artists from creating the accounts that send out the junk e-mail in the first place. And that could “get us to a point where writing the software to overcome [the test] will be too expensive for it to be worthwhile,” Cartmell says. Then we could all stop weeding our in-boxes and go back to being human.
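The two steps the article describes can be approximated in a few lines. This is a toy sketch, not Baird’s actual algorithm: the real system presumably builds candidates from English letter statistics and applies its masks to a rendered image, whereas the hypothetical `pseudoword` and `occlude` helpers below use simple consonant-vowel alternation and character blotting as stand-ins.

```python
import random

# Assumed stand-ins: consonant-vowel pairs approximate "English-sounding,"
# and replacing letters with "#" approximates the visual occlusion that
# the real system performs on the rendered image.
CONSONANTS = "bcdfghjklmnprstvw"
VOWELS = "aeiou"

def pseudoword(syllables=3, rng=None):
    """Build a pronounceable non-word from consonant-vowel pairs."""
    rng = rng or random.Random()
    return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                   for _ in range(syllables))

def occlude(word, frac=0.3, rng=None):
    """Blot out roughly `frac` of the letters, mimicking the mask."""
    rng = rng or random.Random()
    chars = list(word)
    hit = rng.sample(range(len(chars)), max(1, int(frac * len(chars))))
    for i in hit:
        chars[i] = "#"
    return "".join(chars)

rng = random.Random(7)
challenge = pseudoword(rng=rng)
print(challenge, "->", occlude(challenge, rng=rng))
```

A human can usually fill in the blotted letters because the word, while invented, is pronounceable; a dictionary-matching bot gets no help, because the word is in no dictionary.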
