MEASURING DISPLAY PERFORMANCE: CLICKS, THE FALSE POSITIVE

Marketers have an array of digital measurement tools at their disposal. All of them serve a valuable purpose, but few are able to provide a comprehensive view of the customer journey.

Web analytics tools, for example, are among the most widely used, and they deliver important insights. But a key weakness of web analytics is its heavy reliance on the click event for tracking. The click has become a false positive of effectiveness, and it is poorly suited to certain elements of the interactive mix, display ads in particular.

Which brings me to something that has been on my mind for some time, and that industry pundits have been chanting about for years: barely anyone clicks on display ads, so why do we care so much about the click?

I don’t want to do too much injustice to the click-through. For parts of the interactive mix, clicks are entirely valid. Paid search, affiliate marketing and natural search, to name a few, depend heavily on the click event. But as we keep hearing, few people click on display ads, leaving the vast majority of display’s impact untraceable in most web analytics tools.

I get that historically some marketers have relied on the click to measure effectiveness, and some publishers have depended on it for payment, but it’s time to move on. comScore said as much back in 2008 in its Whither the Click? whitepaper, and hammered the point home with its Natural Born Clickers study a year later: such a small group is simply not meaningful.

To prove a proven point yet again, I looked at a sample of more than 2,000 Conversant display campaigns. One thing emerged clearly from the data: the click group was small, just 3% of users, and it was no mark of efficiency. That 3% of users absorbed 8% of all impressions.
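
To make the over-serving concrete, here is a quick back-of-the-envelope check in Python. It uses only the two percentages quoted above; the “fair share” baseline is my own framing, not Conversant’s methodology:

```python
# Figures from the Conversant campaign sample quoted above.
clicker_share_of_users = 0.03        # 3% of users clicked
clicker_share_of_impressions = 0.08  # ...yet they received 8% of impressions

# If impressions were spread evenly, a group's impression share would match
# its user share. The ratio shows how far clickers over-index.
over_serving_index = clicker_share_of_impressions / clicker_share_of_users
print(f"Clickers absorb {over_serving_index:.2f}x their fair share of impressions")
# -> Clickers absorb 2.67x their fair share of impressions
```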

Some could argue that clickers are more active online, which industry data suggests is true, but they still absorb impressions that could have been devoted to others. They account for a slightly higher share of revenue as a result, but not enough to justify a disproportionate focus on them.

If clickers produced an exceptional yield, I would care about them as a marketer. But this group is not superior in any way: only three out of every 100 site visits attributable to display exposure come from clickers, and just £6 out of every £100 spent.
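
Put those shares side by side and the point is hard to miss. Here is the same kind of rough sketch, dividing each outcome share by the impression share the clicker group consumes (a yield below 1.0 means the group under-delivers for its media weight):

```python
# Shares attributable to the clicker group, per the figures above.
impression_share = 0.08  # clickers absorb 8% of impressions...
visit_share = 0.03       # ...but drive only 3 of every 100 site visits
revenue_share = 0.06     # ...and just £6 of every £100 spent

# Yield index: outcome share relative to the impressions consumed.
print(f"Visit yield:   {visit_share / impression_share:.2f}")    # 0.38
print(f"Revenue yield: {revenue_share / impression_share:.2f}")  # 0.75
```

Both indices come in below 1.0, which is why I say clickers are not superior: per impression, they return less than everyone else.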

According to the Association of National Advertisers in the US, 11% of ad impressions industry-wide are fraudulent. Layer in accidental clicks on mobile ads, which Google put as high as 50% in June 2015, and the pool of valid clickers shrinks further. Conversant can detect users who don’t look viable and weed them out, but fraud remains prevalent in the marketplace, often because of improper recognition of users.
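
To see how quickly those two effects erode the click pool, here is an illustrative calculation. The 3% clicker share comes from the campaign sample above, and the 11% and 50% figures are the industry numbers just cited; the 60% mobile share of clicks is a purely hypothetical assumption, and treating the impression-level fraud rate as a proxy for click fraud is a simplification:

```python
clicker_share = 0.03           # clickers, per the campaign sample above
fraud_rate = 0.11              # ANA: 11% of impressions are fraudulent
accidental_mobile_rate = 0.50  # Google (June 2015): up to 50% of mobile clicks
mobile_click_share = 0.60      # HYPOTHETICAL: assumed share of clicks on mobile

# Discount the clicker pool for fraud, then for accidental mobile taps.
valid = clicker_share * (1 - fraud_rate)
valid *= 1 - (mobile_click_share * accidental_mobile_rate)
print(f"Plausibly valid clickers: {valid:.1%} of users")  # -> 1.9%
```

Even under these rough assumptions, the already-small 3% shrinks toward 2%.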

Because their user recognition is unstable, many companies looking to gauge performance need something to build their models on, and the most common foundation is the click. And since so many clicks are accidental or fraudulent, models built on this data are inherently flawed: the samples are not only small, they’re inaccurate.

Methods geared to clickers lose the rhythmic approach required to engage with consumers persistently across their devices, channels and media formats. Conversant’s solution is engineered to weed out fraud and low-yield audiences. We won’t bombard consumers, because we won’t waste your money; we won’t resort to tricks to claim credit in a measurement tool, or focus on anything we don’t believe drives true value. We talk to people at the right time on the right device, based on the millions of data points we store in our network.

I know it’s complex. Marketers want credit for their own programs, there’s a lot of overlap, and some programs are more easily measured than others, which is why a crop of attribution vendors has risen in the market. I certainly hope they focus on the click where the click makes sense, and CMOs should demand dashboards that illustrate both click-through and view-through returns for display programs.

I’m clicking off this topic now, and will click back on shortly to talk about ad fraud.