How Do You Generate Ideas for CRO Testing?

We recently released our new ebook, The Expert’s Guide to Conversion Rate Optimization. In these extracts from the guide, our contributors give their views on how to develop ideas for tests, and how frequently tests should be run.
At the risk of stating the obvious, testing plays a massive role in conversion rate optimization.
Whether it’s user testing, A/B or multivariate testing, it helps businesses identify areas for improvement, trial changes, and gain quantifiable evidence on how changes affect the behavior of users on their sites.
It plays an important role in removing guesswork from the equation. While lots of people may have ideas about how a website should work, testing provides proof of what does and doesn’t work.


Generating Ideas for Testing and Optimization

You need a starting point for testing, though: perhaps a theory to test out, a page with higher than normal drop-out rates, or some customer feedback.
We asked experts from agencies and brands how they generate ideas:
Stuart McMillan, Deputy Head of Ecommerce at Schuh:
We use three main sources for test ideas:

  1. Analytics combined with user testing.
  2. Bi-annual Mystery Shopping, combined with analytics and user testing.
  3. Business need. For example, we might have a time-consuming process to generate certain content; we should (and do) test whether this content provides value for money.

Sean Collins, Head of CRM at Mr & Mrs Smith:
Ask everyone. Not just your developers or members. Sometimes the copywriter or finance team may have the best and most practical idea.
The key is then to make an open prioritization session out of all the ideas, so you pick the best ideas, not just the high-profile ones. And say thank you and credit the person who identified each one.
Paul Rouke, Founder and CEO at PRWD:

  • Looking for patterns and consistent user insights on a single page or key part of a user experience from multiple qualitative and quantitative research sources.
  • Speaking one to one to customers and potential customers in natural, intelligent moderated user research.
  • By conducting detailed analysis of completed tests in order to not only identify customer learnings that can be applied to other areas of the customer experience, but which also represent opportunities to continue testing new variations of the completed test.
  • Gaining quick feedback from visitors through exceptionally valuable and cost effective tools like HotJar.
  • Observing session recordings.
  • Not only speaking to the customer service teams, but ensuring that they are continually capturing and grouping feedback into key themes.

Chris Lake, Consultant at Orangeclaw:
Ideas can come from site data, user research, customer feedback, team suggestions, competitor benchmarking, research, blog posts, events, and so on.
I have a database of around 1,000 ideas for testing, which I cross-reference when analysing sites. It’s probably the tip of the iceberg.
Developing ideas requires a little scientific process, so you know which tests to prioritise, and which to put on the back burner. A compelling hypothesis will help you decide whether a test has merit or not.
You’re looking for insight: tests that don’t win still reveal something about your audience.
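
To make the kind of open prioritization Sean and Chris describe concrete, here is a minimal sketch that ranks candidate ideas with an ICE score (impact × confidence × ease). The scoring model, field names and sample ideas are illustrative assumptions, not something the contributors prescribe.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # 1-10: expected effect on conversion if the test wins
    confidence: int  # 1-10: strength of the supporting research
    ease: int        # 1-10: how cheap the test is to build and run

    @property
    def ice(self) -> int:
        return self.impact * self.confidence * self.ease

ideas = [
    TestIdea("Simplify checkout form", impact=8, confidence=7, ease=5),
    TestIdea("New homepage hero copy", impact=4, confidence=3, ease=9),
    TestIdea("Show delivery costs earlier", impact=7, confidence=8, ease=6),
]

# Highest-scoring ideas come first; low scorers go on the back burner.
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.ice:4d}  {idea.name}")
```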


Frequency of Testing and Optimization

CRO should be a continuous process. You may have a great site with excellent usability, generating lots of sales, but there are always ways to improve.
User behavior, available technology and features, the devices that consumers use to access your website – these all change over time, and you need to keep up with this.
Testing should be part of this continuous optimization, but there is a question of how many tests you need to run to generate reliable insights. Our contributors also add an important caveat: quality should be prioritized over quantity.
Stuart McMillan: 
This is a very rough rule of thumb, but: take the number of conversion events on your website per month, divide by 2,000, and that’s roughly the number of A/B tests you can run in a month.
Why 2,000? Well, assuming a 50/50 split, that should be enough to either get statistical significance or to be fairly sure that running it for longer won’t improve the statistical significance.
This is for fixed-horizon, deterministic analysis. Furthermore, you need to understand what your main business cycles are. For many companies this is a week, so a test should run for a whole number of weeks (business cycles). Your volume of conversion events might suggest you could test for less than one business cycle, but don’t do it; instead, look at how you can run tests in parallel.
This will give you something to aim for, but that’s not to say you should test any old thing. You need to have a solid hypothesis first, don’t just test for the sake of it.
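
To make the arithmetic concrete, here is a minimal sketch of Stuart’s rule of thumb. Only the divide-by-2,000 heuristic and the whole-business-cycle rule come from the quote above; the function names, the 4.345 weeks-per-month average and the assumption of a one-week business cycle are ours.

```python
import math

EVENTS_PER_TEST = 2_000  # ~1,000 conversion events per variant on a 50/50 split

def monthly_test_capacity(conversion_events_per_month: int) -> int:
    """Rough number of A/B tests you can run in a month."""
    return conversion_events_per_month // EVENTS_PER_TEST

def test_length_weeks(conversion_events_per_month: int) -> int:
    """Weeks a single test needs to collect ~2,000 events, rounded up
    to a whole number of business cycles (assumed here to be weeks)."""
    events_per_week = conversion_events_per_month / 4.345  # average weeks per month
    return max(1, math.ceil(EVENTS_PER_TEST / events_per_week))

# e.g. 10,000 conversion events a month -> capacity for ~5 tests,
# each running for at least one full business cycle.
print(monthly_test_capacity(10_000))  # 5
print(test_length_weeks(10_000))      # 1
```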
You’ve run a test and, if you’ve done it right, congratulations: you are a scientist! But what did you learn from that test?
The only test that is a failure is one where you messed up the set-up to the point that you can’t trust the data. If your variant wins, great: you’ve got some new functionality that will make you money.
But what if it lost and the control won? The test was still interesting: why did this fancy new design, which is supposed to be better for customers, not actually make things better? And what if it was a draw, and both had the same effect? That is also interesting: why are two quite different designs functionally equivalent to customers?

Sean Collins: 
Don’t sacrifice proper test durations just so you can do more tests for the hell of it, but also push back on whoever tells you an A/B test will take a month: get them to justify why.
Too many people love a round number, and a week or a month just sounds too clichéd to always be the correct option.
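
One way to make someone justify a test duration is with numbers. Below is a minimal sketch, using the standard two-proportion sample-size approximation, of how long a 50/50 test actually needs given its traffic and a minimum detectable effect. The example inputs (3% baseline rate, 10% relative lift, 5,000 visitors a day) are illustrative assumptions, not figures from the article.

```python
import math

def required_days(baseline_rate: float, relative_mde: float,
                  visitors_per_day: int) -> int:
    """Days a 50/50 two-variant test needs, rounded up, using the
    standard two-proportion sample-size approximation
    (two-sided alpha = 0.05, power = 0.8)."""
    z_alpha, z_beta = 1.96, 0.84
    p = baseline_rate
    delta = p * relative_mde  # absolute lift we want to detect
    n_per_variant = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2
    return math.ceil(2 * n_per_variant / visitors_per_day)

# e.g. a 3% baseline, a 10% relative lift and 5,000 visitors a day
# need roughly three weeks - close to "a month" here, but only by accident.
print(required_days(0.03, 0.10, 5_000))  # 21
```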
Chris Lake: 
Continual, high frequency testing is the only sensible way forward.
Paul Rouke: 
Before even thinking about how often you should carry out tests, put quality first – quality of the research, quality of the data analysis, quality of the hypothesis, quality of the UX design, quality of the copywriting.
Once you establish quality as the foundation of your A/B testing efforts, then quantity of testing becomes a consideration. It’s the difference between sanity and vanity metrics in conversion optimization.
The acronym CRO, which currently defines the conversion optimization industry, is inherently flawed, leading to CRO being misunderstood and completely undervalued within a business, particularly the higher up you go.
For the very small percentage of businesses that are currently taking conversion optimization seriously, and using it as their biggest growth lever and competitive advantage, there is never a question of “how often should we carry out tests?”
Strategic, customer insight driven conversion optimization should ALWAYS be a continuous process.

Grab your Copy of The Expert’s Guide to Conversion Rate Optimization

Download the eBook and discover how you can start boosting conversions today.



Graham Charlton

Graham Charlton is Editor in Chief at SaleCycle. He’s been covering ecommerce and digital marketing for more than a decade, having previously written reports and articles for Econsultancy, ClickZ, Search Engine Watch and more.