
A Web Scam That Makes $500,000 a Month

A computer scientist discovers a scam that skims such a tiny amount from so many sources that no one has much incentive to shut it down.

A team of experts has uncovered an elaborate, even elegant, scheme to automate click-fraud in a way that allowed its perpetrator to carry on undetected for months. One of the experts involved in the investigation believes that subsequent versions of this scheme might escape notice simply because no one has much incentive to pursue it, even though it appears to have netted its perpetrator millions of dollars.

Click fraud is nothing new: Scammers use computers or actual human beings to click on ads on websites they own in order to collect revenue from advertisers. But the new scheme, outlined in an article in the Wall Street Journal, was built by someone who took elaborate measures to hide its fraudulent nature.

The scheme was uncovered by AdSafe, a company that helps brand advertisers make sure their ads aren’t appearing next to inappropriate content, such as porn or hate speech.

The mystery began when engineers at AdSafe noticed that sites monitored by the service, sites that normally carry totally innocuous content, were suddenly being classified as porn. What followed is laid out in elaborate detail on the blog of one of the computer scientists on the team that uncovered the fraud, Panos Ipeirotis.

Through a substantial amount of sleuthing, the team eventually realized that a particular porn website, one that gets up to one million unique visitors a month, was loading innocuous domains in tiny iframes within the visitor’s browser window. These sites had names like “baldnesshealth.com” and “carecouples.com.” These parked domains hosted the ads, which were clicked automatically in the background without the user ever knowing it.

The result is what Ipeirotis calls “traffic laundering”: advertisers who received traffic from the innocuous parked domains and checked their referrer logs saw only innocent-sounding sites. What’s more, the traffic even looked like “real” traffic in terms of its location and frequency, because the fraudulent clicks occurred only while users were actually on the porn site at the hub of the scheme.

The craziest part of the entire fraud, aside from the fact that, according to Ipeirotis’s calculations, it netted its perpetrator between $50,000 and $700,000 a month, is that it was spread across so many different sites and brand advertisers that no single one of them had much incentive to pursue, or even notice, the fake clicks. As Ipeirotis writes:

Do the big brands care about this type of fraud? Not really. Yes, they pay for some “invisible impressions”. […] In any case, compared to their overall marketing budget, this is peanuts.

[…]

Note also that the fraudster does not target a single publisher, does not target a single advertiser. The damage is amortized so nicely that nobody feels that it is a big deal. A mastery of the long tail.

Ipeirotis argues that whoever concocted this scheme, intentionally or not, implicated the non-fraudulent players in such a way that they actually have a disincentive to pursue it. A big brand advertiser, for example, might not pursue the fraud because it would not want its name associated with an investigation of it; that’s just bad PR. As he puts it:

The guy essentially realized that this type of fraud is really behaving like a parasite within a much bigger ecosystem. And it is a parasite that is so costly to remove that it makes sense to leave it there. As long as the parasite does not annoy the host too much, things will be fine.

Fortunately, according to the Wall Street Journal, the FBI has been notified.

Follow Mims on Twitter or contact him via email.
