
Unscrambling the Eggs: Measuring Averages Is a Fast Track to Ad Fraud

In a recent AdExchanger article, Hagai Schechter delivered an ugly dose of reality to advertisers:

A campaign with a small but significant quantity of fake traffic would outperform on paper a campaign comprised entirely of real traffic.

How could this be, and what’s an advertiser to do?

The problem with scrambled eggs

Imagine that an advertiser works with an ad network that delivers above-average clickthrough rates of 0.5%. This network also appears to have strong inventory quality controls. Ad viewability is an impressive 73%, and only 13% of ads are delivered to non-human bot traffic.

The campaign appears to be a home run, but the advertiser is likely to find that it achieves superficial success metrics without delivering any real business value. Over time, the advertiser will discover that clicks don’t translate to on-site engagement. Bounce rates will be high, time on site will be low, and very few leads will materialize. What went wrong?

It turns out the advertiser bought a scrambled eggs campaign: a mixture of two pools of inventory, one dominated by fraudulent bot traffic and one made up of real human audiences.

There are some impressions that deliver clicks, and other impressions that deliver viewability, but almost no impressions that deliver both. The challenge for marketers is that these two inventory pools are very hard to separate. There aren’t two tactics called “Fraud” and “Human.” The inventory is commingled within a single campaign, and the advertiser can only measure the campaign’s blended performance. The eggs are scrambled.
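
To make the split concrete, here is a minimal sketch with hypothetical pool sizes and per-pool rates; none of these per-pool numbers are measured values, they are assumptions chosen only so that the blend reproduces the campaign figures cited above.

```python
# Two hypothetical inventory pools. The shares and per-pool rates are
# illustrative assumptions, chosen only so the blend reproduces the
# campaign-level figures above (~0.5% CTR, ~73% viewability, ~13% bots).
pools = {
    "human": {"share": 0.87, "ctr": 0.0005, "viewability": 0.84},
    "bot":   {"share": 0.13, "ctr": 0.0350, "viewability": 0.00},
}

blended_ctr = sum(p["share"] * p["ctr"] for p in pools.values())
blended_viewability = sum(p["share"] * p["viewability"] for p in pools.values())
bot_click_share = pools["bot"]["share"] * pools["bot"]["ctr"] / blended_ctr

print(f"Blended CTR:               {blended_ctr:.2%}")          # ~0.50%
print(f"Blended viewability:       {blended_viewability:.0%}")  # ~73%
print(f"Share of clicks from bots: {bot_click_share:.0%}")      # ~91%
```

The averages look like a healthy campaign, yet in this illustration roughly nine out of ten clicks never involved a human.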

Unscrambling the eggs

Rather than being fooled by the campaign’s average performance metrics, the advertiser needs to understand the characteristics of the ads that are clicked, and that requires unscrambling the eggs.

The classic approach to combating ad fraud is to identify fraud-resistant metrics. Clicks are notoriously prone to fraud; bots are good at clicking on ads. Measuring a campaign’s success on post-click activities like time on site or lead form submissions can significantly improve its resilience against ad fraud. Unfortunately for advertisers, fraudsters are good at what they do, and bots are quickly becoming more adept at mimicking human behavior. Fraud becomes a game of escalation, and advertisers must constantly adjust success metrics to stay one step ahead of the fraud machine.

A more forward-thinking approach to managing fraud is to stop measuring campaign averages and start tracking impression-level performance. By recording a log of every impression a campaign serves, a sophisticated data management platform allows advertisers to investigate just the ads that are clicked. While the campaign’s aggregate metrics might look strong, a de-averaged view of click performance can reveal that the clicks come almost entirely from impressions that were never viewable and were served to bot traffic.
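
As a rough sketch of what that investigation could look like, the snippet below de-averages a toy impression log; the column names (`clicked`, `viewable`, `bot_flag`) and the handful of rows are hypothetical, not any particular platform’s schema or data.

```python
import pandas as pd

# Toy impression-level log; the schema and values are purely illustrative.
log = pd.DataFrame({
    "impression_id": [1, 2, 3, 4, 5, 6],
    "clicked":  [True, False, True, False, True, False],
    "viewable": [False, True, False, True, False, True],
    "bot_flag": [True, False, True, False, True, False],
})

# The "scrambled" view: campaign-level averages.
print("Blended CTR:        ", log["clicked"].mean())
print("Blended viewability:", log["viewable"].mean())

# The de-averaged view: examine only the impressions that were clicked.
clicked = log[log["clicked"]]
print("Viewability among clicked impressions:", clicked["viewable"].mean())
print("Bot share among clicked impressions:  ", clicked["bot_flag"].mean())
```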

However sophisticated the bot traffic becomes at faking human-like post-click activity, a de-averaged view of campaign performance will make it immediately obvious to the advertiser that it has bought a scrambled eggs campaign.

Fraud is likely to be a reality of the advertising ecosystem for the next several years, and it is a total waste of a brand’s ad dollars and creative energy. The goal for advertisers is to manage against fraud with the least possible effort, leaving maximum resources to develop campaign messages and test new media channels. By measuring de-averaged campaign performance, advertisers can quickly identify fraud at the source and avoid the whack-a-mole paranoia of escalating fraud sophistication.