Data Doesn’t Lie — People Do
by David Baker, Op-Ed Contributor, February 6, 2017
I’ve been reading a variety of 2016 email benchmark reports. Inbox delivery and engagement both seem to be trending well, but I’m unimpressed with the analysis that typically accompanies the numbers.
My real question is whether we are looking at the right metrics to understand how healthy an email program is. From Epsilon to Experian, to IBM, Adobe, Mailchimp, Inbox Marketer — they all have tons of data, but the data all says the same thing.
Many companies produce benchmark reports, and they are worthy — but they are NOT personalized to you, so take them with a grain of salt when judging what’s working. My executive dashboard helps answer a few key questions.
1. Database Health: For many years this was a waterfall report, a view of acquisition (growth) and churn. But unlike direct marketing and the addressable worlds of direct mail and media targeting, email relies on engagement. In an executive view, I still want to know the addressable universe I can reach, so I can project top-line growth and bottom-line forecasts for my marketing efforts. My waterfall report would look like this:
— Total Database growth (MTM)
— Total Subscribed (mobile AND email)
— Growth by source
— Total churn (and then break down the types of churn: engagement churn — they just aren’t responding — vs. physical churn, where they have explicitly asked to be removed as a customer or subscriber through spam complaints, unsubscribes, etc.)
— I’d also look at churn by long-term value. LTV is just an indicator, but I certainly want to see the long-tail impact of churn across the tenure of people in our database. Churn for a non-buyer has far less impact on my business than churn for an active subscriber or infrequent buyer. This helps you assess the impact of the email program beyond the 20% that traditionally open emails.
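As a rough sketch of how the waterfall rolls up, the snippet below tallies subscribers by source and splits churn into the two types above, weighting the loss by long-term value. The field names (`source`, `channel`, `churn`, `ltv`) are hypothetical placeholders, not from any particular ESP’s schema:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subscriber:
    source: str             # acquisition channel (hypothetical field)
    channel: str            # "email", "mobile", or "both"
    churn: Optional[str]    # None, "engagement" (stopped responding),
                            # or "physical" (unsubscribed / complained)
    ltv: float              # long-term value estimate

def waterfall(subs):
    """Roll up the executive waterfall: subscribed, growth by source,
    churn by type, and LTV lost to churn. (Month-over-month growth
    would need two snapshots; this shows a single period.)"""
    active = [s for s in subs if s.churn is None]
    churned = [s for s in subs if s.churn is not None]
    return {
        "total_subscribed": len(active),
        "growth_by_source": dict(Counter(s.source for s in active)),
        "churn_by_type": dict(Counter(s.churn for s in churned)),
        # weight churn by long-term value: losing a buyer hurts more
        "ltv_lost_to_churn": sum(s.ltv for s in churned),
    }

subs = [
    Subscriber("organic", "email", None, 120.0),
    Subscriber("paid", "both", "engagement", 5.0),
    Subscriber("referral", "email", "physical", 80.0),
]
print(waterfall(subs))
```

The point of the LTV weighting is the one made above: two programs with identical churn counts can have very different business impact depending on who is leaving.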
2. Engagement: While most try to trend open and click rates and revenue per email or some proxy that is campaign-related, I personally don’t find that really helpful outside of comparison analysis. Kind of like checking the weather every day in Southern California — not a lot of eventful things to learn. I prefer to look at time-based views of engagement:
— What is my total reach at a given point in time — a month, for instance, or a quarter? What is the effective reach of my total subscriber base over that period? Marketing is an exercise in sizing, so why don’t we use this more? For this database we spend millions on, how many people can I reach with brand impact? How many will buy? How many will share and evangelize? I simply want engagement metrics that tell me something material to my program.
— Loyalty vs. regular. Whether you have a loyalty program or not, you should cut your results into cohorts that tell a real story. Are you reaching 70% of your loyal base? And 40% of your non-buyer base? Break the base down into engagement cohorts:
a. Heavy Buyers
b. Heavy Browsers
c. Heavy Email
You get the drift. I’ve used all types of engagement cohorts in the past for one simple reason. I match my strategy with spend and results that mean something to the company. And cohorts are easy to understand at any level.
I’ve also sat in too many meetings and onboarded too many clients from one ESP to another. I see major flaws in how many marketers look at metrics and trends, and in how they interpret their progress against their strategy and plans. As the old saying goes, “Numbers don’t lie — people do.”
Don’t be one of those trying to make gold out of a campaign open rate. Translate data meaningfully, and you’ll find it leads to cool discussions about what to do next.