What is Conversion Rate Optimisation?

Conversion Rate Optimisation (CRO) is the process of optimising web pages, or individual page elements, to increase conversion rates. This normally involves running A/B tests, also known as split tests, in which two versions of a page compete against each other. Traffic is divided equally between the two variants, and once statistical significance is reached, the version with the higher conversion rate wins.
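
To make the mechanics concrete, here is a minimal sketch of a deterministic 50/50 split in Python. The experiment key and user ID are hypothetical, and in practice your testing tool handles assignment for you; the point is that hashing keeps each user in the same variant across visits.

    import hashlib

    def assign_variant(user_id: str) -> str:
        # Hash the user ID with a (hypothetical) experiment key so the
        # same user always lands in the same variant across visits.
        digest = hashlib.md5(f"cro-test-1:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    print(assign_variant("user-42"))  # stable across calls, e.g. "A"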

That last point about statistical significance is important, and it relates to the biggest mistake brands make with conversion rate optimisation.

Conversion rate optimisation is a data-driven strategy

Conversion rate optimisation is a data-driven strategy, which means you need good data going into your tests and good data coming out of them.

Before you dive into testing, make sure you have the following in place:

  • In-depth conversion data: Conversion rates alone won’t help you to pinpoint what needs testing. You need in-depth data on the actions users are (or aren’t) taking on your site. Use heatmaps, event measurement in Google Analytics and tools like form analytics to pinpoint issues getting in the way of conversions (a sketch after this list shows how raw event data can be turned into a completion rate).
  • Trends: With the right data coming in, you’ll start to see patterns that reveal opportunities for testing – for example, only 60% of users who start filling out your forms complete them successfully.
  • Hypotheses: For each trend, you need to come up with a hypothesis to explain what’s happening. Try not to guess; dig deeper into your data and aim to diagnose what’s causing the issue.
  • Test goals: Before you run your test, define what your goal is and pinpoint which KPI measures success – e.g. increase the form completion rate to 90%+.
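
As an illustration of the kind of in-depth data this list describes, here is a minimal sketch that computes a form completion rate from a hypothetical event export; the event names and user IDs are assumptions, not a specific analytics tool's schema.

    import pandas as pd

    # Hypothetical event export: one row per user event
    events = pd.DataFrame({
        "user_id": [1, 1, 2, 3, 3, 4, 5],
        "event":   ["form_start", "form_submit", "form_start",
                    "form_start", "form_submit", "form_start", "form_start"],
    })

    started  = set(events.loc[events["event"] == "form_start", "user_id"])
    finished = set(events.loc[events["event"] == "form_submit", "user_id"])
    print(f"Form completion rate: {len(finished & started) / len(started):.0%}")
    # 2 of the 5 users who started the form submitted it => 40%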

Too many brands and marketers jump into conversion optimisation without the right data processes in place – and that sets them up for failure. Poor data delivers unreliable results and potential false negatives that could do more harm than good to your conversion rates.

Running your first A/B test

When it comes to running A/B tests, the biggest challenge is making sure you achieve results you can actually trust. This is where statistical significance comes into play: it describes how reliable your test outcomes are. Ideally, you should be aiming for 97%+ statistical significance; anything under 95% starts to compromise your results.
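
For illustration, here is a minimal sketch of one common way to compute that confidence, a two-proportion z-test; the visitor and conversion counts are made up, and your testing platform may use a different method.

    from math import sqrt
    from scipy.stats import norm

    def ab_confidence(conv_a, n_a, conv_b, n_b):
        # One-sided two-proportion z-test: confidence that B beats A.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return norm.cdf((p_b - p_a) / se)

    # Made-up numbers: 10,000 visitors per variant
    print(f"{ab_confidence(400, 10_000, 460, 10_000):.1%}")  # ~98.2%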

When the test comes to an end and one variant is a clear winner, break your conversion data down into specific segments such as traffic channel, device category, and user demographics. Segmenting will help you when reporting on your conversion rate because it reveals which user types are converting at a high rate and which aren’t – invaluable insight into your digital product’s performance and the areas that need improving.
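
A minimal sketch of that segment breakdown, assuming you can export per-session results with variant, channel and device columns (the data below is made up):

    import pandas as pd

    # Hypothetical per-session export from your testing tool
    sessions = pd.DataFrame({
        "variant":   ["A", "A", "B", "B", "B", "A"],
        "channel":   ["organic", "paid", "organic", "paid", "email", "email"],
        "device":    ["mobile", "desktop", "mobile", "mobile", "desktop", "mobile"],
        "converted": [0, 1, 1, 0, 1, 0],
    })

    # Conversion rate per variant within each segment
    print(sessions.groupby(["variant", "channel"])["converted"].mean().unstack())
    print(sessions.groupby(["variant", "device"])["converted"].mean().unstack())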

Here are some things to keep in mind:

  • Only test one variable to begin with
  • Choose a large enough sample size (a rough calculation follows this list)
  • Split traffic 50/50, at random
  • Run your test long enough to achieve statistical significance
  • Run your test long enough to smooth out external factors (random spikes in conversions, the holiday season, unusually hot summers, etc.)
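
As a rough guide to the sample-size point above, here is a minimal sketch of the standard two-proportion power calculation; the baseline rate, minimum lift, significance level and power are all assumptions you would set yourself.

    from scipy.stats import norm

    def sample_size_per_variant(base_rate, min_lift, alpha=0.05, power=0.8):
        # Visitors needed per variant to detect an absolute lift in
        # conversion rate with a two-sided two-proportion z-test.
        p1, p2 = base_rate, base_rate + min_lift
        z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return int((z_alpha + z_beta) ** 2 * variance / min_lift ** 2) + 1

    # e.g. detect a 1-point lift from a 4% baseline
    print(sample_size_per_variant(0.04, 0.01))  # roughly 6,700 per variant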

Conversion rate optimisation is an ongoing strategy that turns data into better business results – as long as it’s done correctly.
