It will come as no surprise that, like most things in life, conversion optimization benefits from a strong strategic approach.
This generally includes aligning your goals and resources to build out a roadmap, or at least a framework/process, for your experiments.
Much like in other disciplines, experts agree that strategy is important but sometimes differ in their approach to it. This article will outline a couple of approaches, and by the end of it, you should have a good idea of how you’d like to build out your own experimentation roadmap.
What’s the Importance of Maintaining a Testing Roadmap?
How do you decide what to test? It’s a question that is answered differently depending on the expert you ask. Most have some sort of discovery process, where they either conduct conversion research or at least calibrate the impact of an element with something like existence testing.
So a roadmap is a way of incorporating your prioritization framework with some sort of strategic planning of your experiments over a time period.
According to Optimizely, “A basic prioritization framework uses consistent criteria to order the experiments and campaigns you’ll run, from first to last. The more advanced version also includes a scoring rubric and an execution timeline. You’ll use your prioritization framework to manage your backlog and optimization cycles.”
So why is it vital to have an experiment roadmap, anyway?
Stephen Pavlovich, CEO of Conversion.com, outlined three advantages to building testing roadmaps:
1. It moves you from tactical to strategic testing. That means you’re better able to analyze and react to new data on customer behavior. If you see positive results from a test hypothesis, you can leverage that insight in upcoming tests. Without a roadmap in place, you may focus too heavily on one or two levers (core themes for testing) or areas of the website.
2. It also improves your resource planning. As your experimentation program matures, it’s likely that you’ll build tests that are either more complex or require more sign-off. With a roadmap in place, you can plan ahead for these tests.
3. Finally, it gives visibility to the rest of the organization on your testing. This allows better communication – getting insight ahead of time that may affect a test – as well as helping to position testing at the heart of the organization.
Other experts have echoed Stephen’s third point about how the organization views optimization.
Similarly, Paul Rouke, CEO of PRWD, said, “building a testing roadmap underlines that conversion optimization is being taken seriously by the business. It’s time to move away from quick wins, improvisational hacks, practicing tips and tricks for increasing conversion on a whim, and taking this for what it is: a growth lever that ensures a business is sustainable and continues its growth trajectory, as well as its competitive advantage in the marketplace.”
Another point mentioned frequently in building roadmaps is ensuring a holistic approach to optimization. Plan far enough ahead, and you preserve the capacity to run “strategic” tests instead of only small changes.
According to André Morys, CEO of Web Arts, “Most people forget that there are different strategic goals for testing. Some have the goal “growth” others want to do “research.” Some want to make sure their project works so they do “strategic” Tests. I recommend to build several testing tracks so these different main goals do not collide.”
Building a roadmap helps align teams, brand visions, and larger goals with what you test on a week-to-week basis. It steeps your testing in the context of the broader organization and how you achieve your goals.
What’s Your Strategy in Building a Roadmap?
So you have a list of test concepts and you’ve bought into the idea that it would be smart to prioritize and plan them out over a span of time. How, then, do you do that?
There are many different ideas on this point, so I asked experts in different fields and companies – some agencies, some in-house. Here are some of their approaches…
Keep it Lean
No one can predict the future (if we could, we wouldn’t need to experiment anyway). So, we adjust our plans based on learnings. Emma Travis of PRWD put it well:
“I’d recommend not even attempting to plan too far in advance. This just means you end up planning and replanning constantly, and there are more constructive ways to spend your time. What I’ve found works best is reviewing the roadmap on a weekly basis, tracking progress and making tweaks where necessary. This means things don’t get out of hand and it’s essentially always ‘up-to-date’.”
Even in terms of prioritization models, there is an inherent limitation because of the lack of prescience we have in predicting inputs. Basically, as Ronny Kohavi, Distinguished Engineer, General Manager, Analysis and Experimentation at Microsoft, told me:
“With respect to the PIE/ICE frameworks, we use ROI (Return-On-Investment) as guidance, which is similar and simply collapses the potential and importance (or impact, confidence) into “expected return.”
The problem with all frameworks that rely on “expected return” is that our ability to predict the value is low in many cases, especially with novel ideas. The most successful experiment in Bing’s history was worth more than $100M annually (at the time, now more than double that), was simply not prioritized high and was delayed 6 months because there were higher-ranked ideas.
The hardest problem is to decide whether to iterate or “fail fast” when something fails. Bing integrated with social features (Facebook, Twitter) at a massive cost of over 100 person-years, and when it didn’t show value, it wasn’t clear whether the idea was bad, or whether we hadn’t hit on the right features as we tried more social features.”
A surprise win, like Ronny mentioned with the $100M Bing win, might be buried pretty deep in your roadmap. It’s hard to tell. But by keeping your process lean and reviewing your roadmap regularly, you at least open up the possibility of seeing these wins faster.
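To make the PIE/ICE-style scoring Ronny references concrete, here’s a minimal sketch of how a backlog could be ordered by an ICE rubric. The criteria, weights, and hypothesis names are illustrative, not a standard; real programs calibrate these scores against their own research.

```python
# Minimal ICE-style prioritization sketch.
# Impact, confidence, and ease are rated on 1-10 scales; the product
# serves as a rough proxy for "expected return" -- with all the
# predictive limitations Ronny describes.

def ice_score(impact, confidence, ease):
    """Score a test hypothesis; higher means test it sooner."""
    return impact * confidence * ease

backlog = [
    {"hypothesis": "Shorten checkout form", "impact": 8, "confidence": 6, "ease": 7},
    {"hypothesis": "Rewrite hero headline", "impact": 5, "confidence": 4, "ease": 9},
    {"hypothesis": "Redesign pricing page", "impact": 9, "confidence": 3, "ease": 2},
]

# Order the backlog from highest to lowest score.
backlog.sort(
    key=lambda h: ice_score(h["impact"], h["confidence"], h["ease"]),
    reverse=True,
)

for h in backlog:
    print(h["hypothesis"], ice_score(h["impact"], h["confidence"], h["ease"]))
```

The point of the sketch is also its weakness: the ordering is only as good as the guessed inputs, which is exactly why a lean, frequently reviewed roadmap beats a rigid one.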
In terms of maintaining this flexibility (and incorporating learnings), you’ll benefit from archiving your tests and having a system wherein you can easily find and organize test results and insights. Claire Vo, CEO of Experiment Engine, explains what that process could look like here:
It has to be more than a list of hypotheses, and it must be kept up to date. Whether you use a spreadsheet or something more advanced like Experiment Engine, a testing roadmap should include:
- A backlog of prioritized test hypotheses
- Experiments in flight, their status, and who is currently owning the process
- Context on when, where, and how the experiment will be launched
- A clear way to sort, filter, and organize your roadmap so you can easily get at the information you need
Finally, in order for it to be maintained easily it has to be in a format that reduces administrative overhead. Spreadsheets are a good starting point, but find ways you can use automation to make your roadmap simple to maintain.
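Claire’s checklist maps naturally onto a small structured record. Here’s a minimal sketch in Python of what one roadmap entry could capture; the field names and status values are illustrative assumptions, not a prescribed schema.

```python
# Minimal roadmap-entry sketch based on the checklist above.
# Field names and status values are illustrative only.
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    priority: int    # position in the prioritized backlog
    status: str      # e.g. "backlog", "in flight", "complete"
    owner: str       # who currently owns the process
    launch_plan: str # context on when, where, and how it launches

roadmap = [
    Experiment("Shorten checkout form", 1, "in flight", "Dana",
               "checkout page, next sprint"),
    Experiment("Rewrite hero headline", 2, "backlog", "Sam",
               "homepage, after redesign"),
]

# A clear way to sort and filter so you can get at what you need.
in_flight = [e for e in roadmap if e.status == "in flight"]
print([e.hypothesis for e in in_flight])
```

A spreadsheet with these same columns works just as well to start; the structure matters more than the tool.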
I asked conversion optimization expert Andrew Anderson about his strategy in building out a testing roadmap, and he had an interesting answer:
“So, we do things very differently.
One of our key disciplines is that you make plans around the resources you have, not grab resources to match the plans you have. What that means is that we keep a massive backlog of ideas, and then we see what makes sense when we are getting to the end of the test and based on our other efforts. That way, we always have tests working, but we are flexible.
We also prioritize our sites by the number of experiences they can handle, as well as tests by the population of the pages we handle.
We try to keep one larger test going at a time, but that can take as much as a quarter. We then keep 3-4 medium tests going at all times, and the rest are small tests, tests that can be coded in a few seconds (the previously mentioned font test was one of those).
This means we never have a set list of tests, but we have a roadmap of resources and larger tests as well as focus areas. This also allows us to always be able to slot tests based on what we learn and where they can best be exploited.
So we might have 10-11 sites we test on, we have 3-4 pages per site that we can test on population-wise, and we have a backlog of all three types of tests. We have our design/copy resources always working on tests with a slotted approach (we use a kanban board in JIRA). Our dev resources then work on whatever makes the most sense based on what is waiting on them from the creative side. We often have 5-6 tests that wait on development, but this way we maximize all resources.”
Boiled down to its simplest form, Andrew and his team keep a huge backlog of ideas, but put them into three buckets, each of which has tests going:
- Larger tests
- Medium tests
- Small tests (“just run it” type tests)
On the topic of resources, Ronny Kohavi mentioned that “the most important thing to realize is that if something is easy to A/B test, stop the debates and just run the test.”
Or, as he put it in Controlled Online Experiments at Scale, “A key observation is that if a controlled experiment is cheap to run, then other evaluation methods rarely make sense.”
Strike a Balance
Balancing long-term strategic goals with short-term iterative capabilities is tricky, and there’s no real science to it (as far as I’m concerned). But it is a balance you should heed. Stephen puts it well:
“You need to get the balance right. Build a roadmap that’s too short, and you can’t plan far enough ahead and leverage the benefits of a testing roadmap. Build one that’s too long, and you’re committing to tests in the long-term whose priority may change based on tests in the short-term.
The ideal length will depend on the organization’s testing maturity and agility. We typically recommend 6–12 week roadmaps. These factor in both test prioritization (all factors being equal, which tests do we think will have the biggest impact on our KPIs?) and test planning (based on that prioritization, how do we plan our tests strategically based on the wider needs of the business?).”
Chris McCormick, Head of Optimization at PRWD, also encourages a diversity of testing strategies:
“Ensure the types of testing you are looking at as part of your roadmap are diverse, i.e. a mixture of smaller iterative tests (such as element changes), bolder, innovative tests (such as changes to page layout or page redesigns) and strategic tests that can drive transformation of the business’s brand proposition.
Having a roadmap that features all three will create consistency in your testing output (as each test and its results will feed into the next), in turn, leading to stronger commercial results which wouldn’t be realised by repeatedly sticking with one test type.”
What Are the Limits or Challenges to Roadmapping?
What about maintaining it as you ramp up your testing velocity? Is it better to have a rigid plan or to iterate based on new knowledge?
No matter your strategy, maintaining a roadmap is a huge issue. It’s like Mike Tyson said, “everybody has a plan until they get hit.”
You can set a strong strategy from the start, but things will change – your resources, your insights, your results – and that warrants an approach that incorporates flexibility and iteration.
While roadmaps help you keep strategic goals in mind, committing too rigidly can also harm a testing program.
As Stephen Pavlovich put it, “iteration is crucial in testing. That’s why we recommend a test roadmap that’s long enough to deliver strategic testing efficiently, without committing to the long-term at the expense of short-term gain.”
Chris McCormick also prefers flexibility, saying, “it’s all about reacting and being agile in your approach. You may find that after the completion of one test and its analysis, you may want to follow it up with further testing based on your findings. I don’t believe you can have a test and learn culture with a rigid approach.”
It’s unrealistic to think that your roadmap is so prescient as to warrant strict rigidity. You’ve gotta incorporate learnings as you go. Paul Rouke put it really well:
“When implementing a testing program, it’s crucial to harness the knowledge and findings from completed tests. A rigid testing roadmap which doesn’t factor in new ideas based on findings from research streams and completed tests is lacking true intelligence and will do any business a true disservice. Embracing an experimentation culture – moving quickly, responding to change, embracing unpredictability – are all hallmarks of businesses at the top end of conversion optimization maturity.”
What Tools Are There to Build and Maintain Roadmaps?
The answer is pretty much the same as for any project management role. As Andrew Anderson mentioned above, you can use something like JIRA.
You can use Trello.
Or you can work from a spreadsheet. You can, of course, build your own based on your specific criteria, but Optimizely also offers one for free here.
In short, there are many effective ways of getting organized. Let your program manager figure this out and champion it to the team.
While there are many different approaches to prioritizing experiments and building a testing roadmap, we can all agree that it’s important to have a roadmap in place.
How you iterate or maintain that roadmap may depend on a variety of factors, including resources, organizational politics, or how mature your optimization program is.
A few themes stood out across the expert opinions in this article: building a roadmap makes optimization visible and gives it organizational importance, you shouldn’t be too rigid with your planning, and you should plan to maximize your resources.
Do you have a roadmapping strategy in place?
Business & Finance Articles on Business 2 Community