— November 2, 2017
WHY FOCUS JUST ON CREATIVE AND COPY?
Any digital advertiser knows that A/B testing is a requirement when you want to make sure you’re getting your money’s worth on your ads. We’re not here to dispute the fact that A/B testing audience segments or landing pages is valuable for finding your best return.
What often gets lost in the discussion is how critically important it is to A/B test your creative and copy.
No matter which variable you’re testing, you’re always optimizing toward making your winning ads perform even better. Once you find out which variable is driving an ad to crush your goals, you can iterate on it and see continued success.
We live in a visual world, and images are always going to grab our attention before anything else. Though it’s important to A/B test other variables, creative and copy can end up having the biggest impact.
Many of our ad ops experts believe creative iterations drive performance over copy changes. As a writer, I’d like to disagree but the data (unfortunately) backs them up.
Copy isn’t totally irrelevant though (thank goodness!).
As Mike Zappulla, Director of Ad Ops, explains, “Copy is important because it’s the hook to get you to click through the ad. Once we have a winning video or image locked down, we can then test copy variations. This works well because it’s relatively low touch for clients and is a simple way to rejuvenate creative.”
WHAT ARE THE PITFALLS OF A/B TESTING YOUR CREATIVE AND COPY?
Seriously – there are some risks to testing your creative and copy. While testing sounds productive in theory, there are a few stumbling blocks that may crop up along the way.
We’re not saying you shouldn’t A/B test; it’s just important to be aware of what you’re getting into so you can avoid getting caught in these traps.
The benefits clearly outweigh any downsides. The most important benefit is that you’re learning what makes your ad effective (or what is making it perform poorly), so you can either emulate and iterate on the awesome parts or trash the bad ones.
Pitfall #1: Cost
One of the biggest roadblocks is that it can get expensive.
Even though testing doesn’t require your entire ad budget, you’re still putting a percentage of your ad spend into something that is basically an experiment.
While you wait to see whether or not something works, it can feel like you’re losing money since there might not be an immediate payoff. Plus, once you’ve committed to spending the amount you need in order to get statistically significant data, it can be hard to stomach if tests show a negative outcome.
Though it’s absolutely crucial to put enough money behind a test to make sure it provides accurate feedback for you to base future decisions on, that still doesn’t make it easy when you see the money flowing out.
Pitfall #2: Becoming Stuck In The Rabbit Hole
Another potential downside is that you can end up falling down an endless rabbit hole of testing.
With so many different variables and so many ways you can iterate, there’s the potential to start down a path and continue to dig deeper and overthink it when it’s time to just move on.
Both of those issues can combine when you push out too many A/B tests all at once. If you’re trying to test too many variables at the same time, you’ll end up spreading your ad testing budget too thin. You want each test to collect enough data to reach significance so you can make an informed decision. If you’re testing everything all at once, you aren’t spending enough on any one test to find the real winners.
Pitfall #3: Testing The Wrong Elements
A third potential problem is that you could end up testing the wrong things. Unless you’re just rolling in money (and if you are, I’ve got some student loans for you to chip in on), you’re going to want to spend your money strategically. This requires work, not just on your agency’s behalf, but also on yours. You have to understand what your priorities are and stay true to the elements you know your audience loves.
Regardless of the data related to the variables, it also comes back to you knowing your business and your audience. You want to spend your money efficiently and effectively; understanding what is driving your audience to click or convert should guide your A/B testing plan.
BEST PRACTICES FOR A/B TESTING CREATIVE AND COPY
Iterating off of the variables that are rocking your ROAS will give you a sustainable success strategy.
Finding incremental wins is the best way to increase your conversions and reach your goals, and getting those incremental wins requires a steady stream of A/B testing.
Set Aside Enough Budget
Though it varies by client, we recommend carving out about 20-30% of your ad budget for testing. If you have a healthy budget or are super enthusiastic about testing, you’re always welcome to spend more, but this is a smart starting point.
The winners that come out of your tests, the ads that you know for a fact are going to help you meet or exceed your KPIs, are where you should be spending the majority of your money.
Using the 20-30% range for your testing budget gives you the ability to put the bigger budgets behind your best ads.
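To make the split concrete, here’s a minimal sketch of that budget allocation. The function name and the $10,000 monthly budget are illustrative, not from the article; only the 20-30% testing range comes from the guideline above.

```python
def split_budget(total, test_share=0.25):
    """Split an ad budget into a testing slice and a 'proven winners' slice.

    test_share defaults to 0.25, the middle of the 20-30% range
    recommended above. Numbers here are hypothetical examples.
    """
    if not 0.20 <= test_share <= 0.30:
        raise ValueError("guideline suggests a 20-30% testing share")
    testing = total * test_share
    return testing, total - testing

# e.g. a hypothetical $10,000 monthly budget
testing, winners = split_budget(10_000)
print(testing, winners)  # 2500.0 7500.0
```

The point of keeping the split explicit is that the larger slice always stays behind the ads you already know meet your KPIs.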
What Should You Test First?
It’s always better to test creative before you test the copy. Why? Because as pretty much every ad ops specialist here would agree, creative is the bigger driver of wins.
Your brain processes images much faster than it processes text. That means the smallest changes in imagery, or even changing something as simple as a background color, can have a major impact.
When you’re planning your creative A/B testing timeline, start by looking at which creative unit does best with your audience (which will, of course, require its own A/B test).
For most brands, there’s one format that tends to perform better than the rest. Finding it usually requires its own A/B testing between:
- Static Image
- Slideshows (Rotating images)
- And further testing square versus horizontal versus vertical.
Once you’ve determined the format, this is when you begin to test the different images to see which ones resonate most with your customer.
Patience is a requirement in any successful digital ad campaign.
As Emily Dougherty, one of our Senior Ad Ops Specialists, reiterates, “Patience as a best practice needs to be front of mind for any brand. You have to get enough spend and clicks behind an ad before you decide whether it’s successful or not. At the bare minimum, you want to give the ad enough time to get a significant level of data before making any determinations about whether to keep it or toss it.”
Here’s an example of what that means in practice.
Say you’re aiming for a minimum of 10+ conversions with your ad. You need to look at how many clicks it will take to get there. If your expected conversion rate is 2%, then you’ll need at least 500 clicks. This means 500 clicks is the threshold you have to reach before you can make an informed decision about your ad’s success or failure.
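That arithmetic is simple enough to sketch as a helper. This is an illustrative function (the name and numbers are ours, not a standard tool); it just divides your conversion target by your expected conversion rate and rounds up.

```python
import math

def clicks_needed(target_conversions, expected_cvr):
    """Minimum clicks to collect before judging an ad.

    target_conversions: conversions you want to see (e.g. 10)
    expected_cvr: expected conversion rate as a decimal (e.g. 0.02 for 2%)
    """
    return math.ceil(target_conversions / expected_cvr)

print(clicks_needed(10, 0.02))  # 500, matching the example above
```

Run it with your own conversion-rate estimate; a lower expected rate pushes the click threshold (and the patience required) up fast.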
Give your ads a bit of breathing room and be patient enough to wait for enough data to analyze your campaign’s success or failure effectively. Cutting off an ad before that point might mean you halt an ad that could have been highly successful.
Know When To Tap Out
Just like in life, sometimes it can be hard to know exactly when to let go. And just like life, there’s really no hard and fast point that determines when you should stop trying to test and instead rework your ad campaign.
The closest we can get to a definitive answer is that if performance is neck and neck between your A/B tests, then it might be time to call it quits and move on. The reason you’re iterating and testing is to find an ad that does better than the others.
If you’re not finding a stand out performer, then moving on to the next idea is the best strategic move you can make.
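One way to decide whether two ads really are neck and neck, rather than eyeballing it, is a standard two-proportion z-test on their conversion rates. This sketch uses only the Python standard library; the function and the sample numbers are ours, and the 0.05 cutoff is the conventional significance level, not a rule from this article.

```python
import math

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided z-test comparing conversion rates of ads A and B.

    Returns (z, p). A large p-value (say, above 0.05) means the ads
    are statistically 'neck and neck' and it may be time to move on.
    """
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via math.erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical results: 12 conversions on 500 clicks vs 10 on 500
z, p = two_proportion_z(12, 500, 10, 500)
print(round(z, 2), round(p, 2))  # p is well above 0.05: neck and neck
```

If the p-value stays high after both ads have cleared your click threshold, that’s your cue to stop iterating on this pair and test the next idea.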
(1) Don’t forget how important it is to A/B test creative and copy alongside your other A/B testing
It’s all in the iterations – finding a winner and continuing to iterate off of that is how you have a successful and sustainable growth strategy.
(2) Make sure you budget appropriately for your testing as it can get expensive.
20-30% of your overall budget for all of your A/B testing is recommended.
(3) Don’t test too much at once.
Otherwise, you spread your budget too thin and you may not get significant enough data from the results.
(4) Practice Patience
This may be hard, especially if you’re not seeing the results you want, but being patient and allowing your ads to work for long enough that you can get significant data on them is how you develop your future campaign plans and strategy.
(5) Know When To Walk Away
In the immortal words of Kenny Rogers, “You’ve got to know when to hold ’em. Know when to fold ’em. Know when to walk away. Know when to run.” If performance is neck and neck or you’ve been going down a rabbit hole of iterations, it’s time to step back and re-evaluate where you’re going with this testing.