A research project in which academics bought over 120,000 fraudulent Twitter accounts has shown how easily spammers evade the company’s controls—and may have yielded a new way of beating social-network spam. Part research exercise and part sting operation, the project generated data that is being used to train software to automatically block spammers from creating accounts.
Today most anti-spam efforts at Twitter and other social-networking companies focus on blocking accounts only after they begin to send out spam. Spammers typically use software bots to fill out the forms on account registration pages; then they use the accounts to send unsolicited advertisements en masse. Often these messages contain links that make money through deceitful tactics such as installing malicious software on a person’s computer.
In the 10 months ending in April 2013, researchers from the International Computer Science Institute, the University of California, Berkeley, and George Mason University spent just over $5,000 on Twitter accounts, collecting 121,027 of them with surprising ease. Twitter gave the researchers permission to buy the accounts and helped out with the study, which was presented at the Usenix Security Symposium in Washington, D.C., last week.
“There’s a vibrant market for the sale of fraudulent Twitter accounts,” says Chris Grier, a researcher at Berkeley and the International Computer Science Institute. Some came from online storefronts that make buying accounts in bulk as simple as purchasing something on Amazon. Others were bought in person-to-person transactions brokered on forums where spammers do business.
The prices varied but were typically around $40 per thousand accounts, says Grier, suggesting that the market for bulk Twitter accounts is well established. Many accounts had been registered months before; “pre-aging” is seen as a selling point, perhaps because such accounts are blocked less quickly than brand-new ones when used to send spam.
Buying up the accounts enabled the researchers to examine data logged by Twitter about how they had been created, revealing details of a sophisticated supply chain that can evade the normal controls on bulk account registrations. The spammers’ tricks included creating accounts via connections routed around the globe—the researchers recorded over 160 different countries—to prevent suspicious spikes in registrations from particular locations. Most fraudulent accounts were created with the aid of Hotmail or Yahoo e-mail accounts.
“The merchants are also able to solve Captchas with reasonable success,” says Grier, referring to the garbled-text tests used to prevent software bots from completing online forms. Although it was known to be technically feasible to crack these puzzles using automated or crowdsourced methods, few studies have been able to assess how much being forced to do so actually hurts spammers. As it turns out, he says, “it doesn’t seem to have impacted the cost at all.”
Using the data Twitter provided on the bulk-bought accounts, Grier and colleagues trained software to flag accounts created in suspicious ways. Features such as the timing of registrations, the names on the accounts, and the characteristics of the browser and computer used all feed into that system, along with some secret clues that Twitter wasn’t willing to disclose. Using the new system to scan all Twitter accounts registered in the past 12 months turned up several million accounts registered that way (the researchers won’t disclose the exact figure). Sales of these accounts may have generated between $127,000 and $459,000.
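The approach the researchers describe is, at heart, a supervised classifier trained on registration-time signals. The sketch below is a minimal illustration of that idea, not their actual system: the feature names (inter-registration timing, name entropy, an automation-fingerprint flag) and the synthetic data are invented for the example, and the real system also used undisclosed signals.

```python
# Hypothetical sketch: flag likely-fraudulent accounts from
# registration-time signals. All features and data are invented
# for illustration; the researchers' real feature set is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_accounts(n, fraudulent):
    # Assumed signals: seconds between consecutive registrations from
    # the same source, entropy of the account name, and whether the
    # browser fingerprint matched a known automation tool.
    if fraudulent:
        inter_reg_seconds = rng.exponential(2.0, n)    # rapid bursts
        name_entropy = rng.normal(4.5, 0.3, n)         # random-looking names
        automated = (rng.random(n) < 0.8).astype(float)
    else:
        inter_reg_seconds = rng.exponential(600.0, n)  # sporadic signups
        name_entropy = rng.normal(3.0, 0.5, n)
        automated = (rng.random(n) < 0.05).astype(float)
    X = np.column_stack([inter_reg_seconds, name_entropy, automated])
    y = np.full(n, int(fraudulent))
    return X, y

X_fraud, y_fraud = make_accounts(500, True)
X_legit, y_legit = make_accounts(500, False)
X = np.vstack([X_fraud, X_legit])
y = np.concatenate([y_fraud, y_legit])

clf = LogisticRegression(max_iter=1000).fit(X, y)
accuracy = clf.score(X, y)
print(f"training accuracy: {accuracy:.2f}")
```

In a deployed version, a model like this would score each new registration as it happens, so suspicious accounts can be blocked at signup rather than after they start spamming.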
“Twitter wants to take what we developed and build it into the signup process,” says Grier. Other social networks could use a similar approach. The same people offering Twitter accounts for sale also trade in Google, Facebook, and LinkedIn accounts.
To prevent a signup-blocking system from becoming outdated as spammers tune their tactics in response, a company would need to start regularly buying from spammers as Grier and colleagues did. “That’s the hard part for them,” says Grier; it’s very different from how existing anti-spam teams at Twitter and other companies operate.
Guofei Gu, an assistant professor at Texas A&M University who has done his own research into Twitter spam, says that moving to block spammers when they try to register accounts makes sense. The research reveals new insights into the sophistication of spammers’ techniques, he adds.
However, Gu notes that spammers can easily change their behavior to avoid the identifying clues the researchers turned up. “Spammers will definitely learn and evade the proposed approach once they know the strategies,” he says. He suggests fighting back by learning more about how spammers operate, to identify which evasive techniques are most costly for them to use in making new accounts.
Overall, Gu believes, the volume of spam on Twitter has dropped, but the spammers that remain are stealthier. “We notice that they are doing more targeted spamming,” he says. “This makes them a little harder to catch.”