In today’s digital marketing landscape, how do you measure what efforts are actually working?
Ever since department store merchant John Wanamaker famously said: “Half of the money I spend on advertising is wasted; the trouble is I don’t know which half,” measurability and accountability have been every advertiser’s holy grail.
Over the past few years, advertising has become as much science as art. Exciting new marketing attribution models and methods have emerged, allowing advertisers to become ever more sophisticated in their approach to measuring their return on investment. But how advanced are advertisers in their usage of those new attribution tools today, and how much of a difference does technology really make in tackling the complex issues of marketing attribution?
Close to 80 per cent of advertisers still use last click as their primary attribution model, even when they consider it insufficient. The main disadvantage of a single-touch model like last click is that it ignores the series of events leading up to the final action, making it less accurate than multi-touch approaches such as weighted or algorithmic attribution.
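The difference between these models can be shown with a minimal sketch. The channel names and the simple last-click and evenly weighted (linear) rules below are illustrative assumptions, not any vendor’s actual implementation:

```python
def last_click(path):
    """Last-click model: all credit goes to the final touchpoint before conversion."""
    return {path[-1]: 1.0}

def linear(path):
    """Linear (evenly weighted) model: credit is split equally across every touchpoint."""
    share = 1.0 / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# A hypothetical journey: display ad, then two paid search clicks around an email
path = ["display", "paid_search", "email", "paid_search"]
print(last_click(path))  # paid_search receives all of the credit
print(linear(path))      # display 25%, email 25%, paid_search 50%
```

On the same journey, last click credits paid search alone, while the weighted view reveals that display and email also played a part; that gap is exactly what the single-touch model hides.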
Although many organisational constraints and technical challenges can stand in the way of adopting more sophisticated models, advertisers are increasingly moving beyond reliance on last click alone. They are injecting intelligence into their marketing strategy through A/B testing, or by using new attribution models as a second view alongside last click. Advertisers now have powerful tools to complement their existing models and gain a more rounded view of their ROI.
Based on real-life insights from marketers, this article delivers advice on what you can do to increase your chances of accurately measuring what works and what doesn’t.
There’s no magic formula to fully capture the complexity of the customer journey to purchase. But here are a few tips that can bring you closer to uncovering what part of your marketing efforts, if any, should be credited for a specific sale:
Lesson #1: Challenge whatever your model tells you.
No matter how cutting-edge it is, a model remains just that: a simplified version of reality used for practical purposes. Simple shouldn’t mean simplistic, though. In particular, make sure you have a clear view of which touchpoints your model is missing, e.g. display ad impressions or paid search clicks. If you fail to take these touchpoints into account, you run the risk of missing out on sales.
Lesson #2: Stop assuming, start testing.
Respondents said it loud and clear, and we fully agree: the best way to demonstrate causality is by testing. If you have doubts about the value of a marketing channel, or if you would like to see what doubling your spend on it would really do to your sales, the best way to find out is to set up a test.
A/B testing is a popular way for advertisers to measure the impact of different advertising channels on their sales. Typically, these tests compare online behaviours between a group of users exposed to a given stimulus (for example, performance display) and a matching group that isn’t exposed.
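The comparison described above comes down to asking whether the exposed group converts at a meaningfully higher rate than the control group. Here is a minimal sketch using a standard two-proportion z-test; the group sizes and conversion counts are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference in conversion rates between
    an exposed group (a) and a matched control group (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test: 10,000 users per group, exposed group converts slightly more
z = two_proportion_z(conv_a=320, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # |z| above roughly 1.96 suggests a real lift at the 95% level
```

The point of the test is exactly the lesson above: a raw difference in conversion rates proves nothing on its own; only a properly sized, randomised comparison lets you attribute the lift to the channel.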
Lesson #3: Focus on touchpoints that truly influence the purchasing decision.
Many touchpoints serve only to help a user navigate from one place to another; they don’t actually influence any buying decision. Removing such “navigational” touchpoints from your model may be less game-changing than switching to a new attribution model, but it can have a big impact on your results, and on your ability to generate more sales.
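In practice this is a simple filtering step applied to each journey before credit is assigned. A minimal sketch, where the set of “navigational” channels (brand-name searches, direct visits) is an assumption you would replace with your own classification:

```python
# Hypothetical list of touchpoints treated as purely navigational
NAVIGATIONAL = {"brand_search", "direct"}

def filter_navigational(path):
    """Drop navigational touchpoints from a journey before attribution.
    If every touchpoint was navigational, keep the original path so
    the conversion still gets credited somewhere."""
    kept = [t for t in path if t not in NAVIGATIONAL]
    return kept or path

path = ["display", "brand_search", "email", "direct"]
print(filter_navigational(path))  # ['display', 'email']
```

With the navigational steps removed, the remaining credit flows to the touchpoints that plausibly influenced the decision, rather than the ones the buyer merely used to reach the site.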
As a marketer, knowing how your customers get to your cash register is your job. So is making strategic decisions about where your advertising spend should go. And the good news is: no machine is going to take that away from you anytime soon.
Jeremy Crooks is managing director Australia and New Zealand for Criteo