This sponsored story was produced in association with Digilant, not by MIT Technology Review editorial staff.
In February 2014, Vivek Shah, president of the Interactive Advertising Bureau’s (IAB) Board of Directors, tweeted an astounding fact: “We have reached a crisis point: 36 percent of traffic today is generated by machines, not humans.”
Advertisers buying digital impressions think they are paying for views by humans. More than a third of the time, that’s not the case. Instead, they’re seeing fraudulent traffic created by people who make money from nonexistent impressions or machine-generated clicks.
Fraudsters create bots that visit web pages and get recorded as page views. These bots run on computers that have been infected by viruses and are instructed to visit websites, usually without the computer owners’ knowledge. Fraudsters also hack the web content management software on ad-supported sites to insert unseen pages, and they “stack” ads behind videos, creating impressions that are counted and charged for even though no consumer ever sees them.
Many industry insiders have been aware of the growth of advertising fraud and, apparently, view the practice as business as usual. Shah’s message was retweeted only six times. But a month later, The Wall Street Journal repeated his contention on the front page of its Marketplace section. The Journal article is likely to generate pointed questions from CFOs and CEOs. Specifically, they may well be asking their chief marketing officers how their ad-technology vendors are addressing the issue.
In addition to fraud by bots, advertisers can be defrauded through viewability problems: a human visits a website where ads or videos are stacked on top of each other or rotated so rapidly that they are recorded as viewed even though they never influence consumer behavior.
Meanwhile, a growing number of people in the ad industry believe that ad fraud is a big problem. “The amount of bot fraud in our midst is unrivaled in any other industry and is sadly leading to a crisis of confidence on the buy side,” Ari Jacoby, CEO of Solve Media, a New York City-based online advertising and security company, recently wrote in Advertising Age. Solve Media estimates that fake ads will cost marketers $11.6 billion in 2014, up 22 percent from 2013. That’s a significant chunk of the overall Internet-advertising market, which the IAB estimates at nearly $43 billion.
Growing awareness of the magnitude of the problem is likely to spur interest in technologies designed to audit ad campaigns. A number of companies have developed technology to spot fraudulent traffic, then block or blacklist the websites from which it originates.
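In its simplest form, the blocking approach described above amounts to checking each ad request against a list of known-bad sources before serving an ad. The sketch below illustrates the idea only; the domain names are hypothetical, and real systems combine blacklists with many other signals.

```python
# Minimal sketch of blacklist-based ad-traffic filtering.
# Domain names are hypothetical examples, not real fraudulent sites.

BLACKLIST = {"bot-farm.example", "fake-impressions.example"}

def should_serve_ad(referrer_domain: str, blacklist: set = BLACKLIST) -> bool:
    """Serve an ad only if the requesting domain is not blacklisted."""
    return referrer_domain.lower() not in blacklist

# Filter a batch of incoming ad requests down to plausibly legitimate ones.
requests = ["news-site.example", "bot-farm.example", "blog.example"]
served = [domain for domain in requests if should_serve_ad(domain)]
```

In practice, vendors maintain and continually update these lists from the detection methods described below, since fraudsters rotate domains quickly.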
Integral Ad Science offers real-time detection and blocking of fraudulent web traffic using semantic filters, analysis of links between websites, image analysis, and human scoring, as well as databases of fraudulent websites. Its AdSafe product also prevents ads from being shown on inappropriate sites: pornography sites, illegal download sites, and sites that feature hate speech or other objectionable content.
Iponweb, a U.K.-based ad-technology company, has deployed anomaly-detection tools that recognize unusual traffic patterns more likely to be bot traffic than human. The company says its technology, developed by Russian engineers, goes well beyond traditional rule-based filters and databases of known bot identities.
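The anomaly-detection idea is that bot traffic tends to deviate statistically from human traffic, so it can be flagged without a rule or signature for each bot. As a rough illustration, assuming nothing about Iponweb’s actual methods, one of the simplest versions is a z-score test on traffic volume; production systems use far richer features than raw counts.

```python
# Illustrative anomaly detection on hourly request counts (a toy
# stand-in for the pattern-based detection described in the article).
from statistics import mean, stdev

def flag_anomalies(hourly_counts, z_threshold=2.0):
    """Return indices of hours whose volume deviates strongly from the mean."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    return [i for i, count in enumerate(hourly_counts)
            if sigma > 0 and abs(count - mu) / sigma > z_threshold]

# A bot burst at hour 5 stands out against otherwise steady human traffic.
traffic = [100, 104, 98, 101, 97, 900, 103, 99]
suspicious_hours = flag_anomalies(traffic)
```

A real deployment would also look at features such as click timing, mouse movement, and session depth, since sophisticated bots mimic human volumes.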
Spider.io, a small British company that has detected a number of bot techniques for fraudulent advertising, was recently acquired by Google. It has exposed the ad network ClickIce as being designed specifically to sell such fake impressions, even while it claims to represent thousands of small websites.
Harvard Business School professor Benjamin Edelman also runs a business that detects fraudulent traffic. He has some 40 clients, and his 150 computers, running 24 hours a day in nine countries, track down malicious software that delivers fake traffic. Edelman received a huge amount of attention earlier this year when he published a blog post accusing the Internet ad sales firm Blinkx of inflating client sales numbers with fake traffic. Blinkx stock fell about 37 percent in the wake of these claims, which the company has denied.
The IAB has established a task force called Traffic of Good Intent, which recommends a number of business practices that advertisers can adopt to reduce fraud. Among them is measuring traffic by driving users toward actions that only humans take, such as making a purchase. The problem is that marketers still want to track every impression that may have contributed to a final purchase, in order to attribute sales more accurately. They also want access to sites that may improve their return on investment (ROI) by targeting a larger audience at low cost. Additionally, the IAB now certifies ad-technology companies under its Quality Assurance Guidelines, signifying that the companies have adopted recommended approaches for brand safety and self-regulation.
Some level of fraud is inevitable. But marketers need to be aware of the risk and to work with industry experts who have a keen sense of where the biggest risks lie. “Our policy is, if it’s weird, we block it,” says Krishna Boppana, head of data science at Digilant, a customized programmatic media solutions company in Boston. “We go beyond that, though. Last year, we detected that some very popular domains were stuffed with bots whenever they were sold by third parties.”
The result: “We now have a policy that blocks [sales of such inventory] from anyone other than Google’s own exchange, DoubleClick,” Boppana says. “In doing so, we are taking the guesswork out of the equation and ensuring that our clients are buying impressions that are legitimate.”