
We’ve examined statistics of testing, different types of tests, and how test results can be interpreted.

Now it’s time to briefly examine how exactly to set up an A/B test, including the individual steps to follow and tools to use.

Step 1: Formulate the test

You need to know what the problem is, how to fix it, and what results you can expect.

Depending on the number of solutions and the expected result, you can decide what type of test to use.

  • Use A/B testing if… you need to test a number of changes to the page without needing to check different layouts (for example, changes in copy or microcopy, better product images, etc.)
  • Use multivariate testing if… you need to test specific, different layouts of interdependent elements (such as different positions of calls to action, product images, and headlines)
  • Use split-path testing if… you want to test different checkout alternatives

In the early stages of the CRO process, straightforward A/B tests crammed with as many changes as possible per test have the greatest potential to increase conversions and revenue.

Although this will not tell you precisely which element was responsible for increasing your conversion rate, it will maximize your revenue growth and solve many of the issues that tend to appear in the early phases of CRO.

Gradually, as you pick off the low-hanging fruit and solve the larger issues, you will move to multivariate tests to fine-tune the layout and content of the website and get the most out of every single element.
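
To see why multivariate testing is saved for later, consider how quickly the variant count grows: a full-factorial multivariate test runs every combination of element options as its own variant. A quick Python illustration (the element options here are invented for the example):

```python
from itertools import product

# Hypothetical interdependent elements and the options to test for each.
cta_positions = ["above the fold", "below the product image"]
headlines = ["benefit-led", "urgency-led", "plain descriptive"]
product_images = ["lifestyle shot", "white background"]

# A full-factorial multivariate test runs every combination as its own variant.
variants = list(product(cta_positions, headlines, product_images))
print(len(variants))  # 2 * 3 * 2 = 12 variants
for cta, headline, image in variants:
    print(f"CTA: {cta} | headline: {headline} | image: {image}")
```

Each extra element multiplies the number of variants, and every variant needs its own share of traffic, which is why these tests come after the big, simple wins.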

At the outset, however, the goal is to solve as many issues as possible in the shortest amount of time.
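
Part of formulating the test is estimating how much traffic it will need to detect the result you expect. Here is a minimal sketch using statsmodels; the baseline rate, expected lift, power, and significance level are all assumptions you should replace with your own numbers:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.030  # assumed current conversion rate: 3.0%
target_rate = 0.036    # assumed rate the change is expected to reach: 3.6%

# Cohen's h effect size for the difference between the two proportions.
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Visitors needed per variation for 80% power at a 5% significance level.
n_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"~{n_per_variation:,.0f} visitors per variation")
```

With these example numbers, the answer comes out to roughly 7,000 visitors per variation; if your traffic cannot support that, you know before designing anything that the test is not worth running.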

Step 2: Prepare a design

Next, you need to create a wireframe of your new design(s). A wireframe is a simplified outline of a web page that developers use to build the new page.

Common tools for this include Balsamiq and MockFlow.

An example mockup of a product page made in Balsamiq

Step 3: Front-end development

When the designers finish, front-end developers need to write the code that enables the site to serve the new variation. This has to be done for all variations before testing begins.

Step 4: Quality assurance check

Before the test is live, you need to ensure that it works. Most testing tools provide an interactive preview of a variation to check its functionality.

Ensure there are no bugs, and make sure you are not introducing new issues in your variants instead of solving existing ones.
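
Part of this check can also be automated. Selenium (listed among the cross-browser tools below) can load the variation preview and assert that critical elements are intact; in this minimal sketch, the preview URL and the selectors are placeholders for your own:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical preview URL for variation B from your testing tool.
PREVIEW_URL = "https://example.com/product?preview_variation=B"

driver = webdriver.Chrome()
try:
    driver.get(PREVIEW_URL)

    # The element the variation changed (placeholder selector).
    cta = driver.find_element(By.CSS_SELECTOR, "#add-to-cart")
    assert cta.is_displayed(), "Call to action is not visible in the variation"

    # Make sure the change did not break conversion-critical elements.
    price = driver.find_element(By.CSS_SELECTOR, ".product-price")
    assert price.text.strip(), "Product price is missing or empty"

    print("Variation preview passed the basic QA checks")
finally:
    driver.quit()
```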

Step 5: Start the test

Import your variations to a testing tool and start the test.
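
Most testing tools handle visitor assignment for you, but if you ever need to split traffic yourself, the standard approach is deterministic hashing, so a returning visitor always sees the same variation. A minimal sketch (the function and variation names are illustrative):

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str,
                     variations=("control", "B")) -> str:
    """Deterministically bucket a visitor into a variation."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same visitor always lands in the same bucket for a given experiment.
assert assign_variation("visitor-123", "product-page-test") == \
       assign_variation("visitor-123", "product-page-test")
print(assign_variation("visitor-123", "product-page-test"))
```

Hashing the experiment name together with the visitor ID keeps assignments independent across experiments, so one test does not bias another.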

Step 6: Run the test

There is nothing left to do here but wait.

While the test is running, you’d be well advised to avoid looking too often at the results (most A/B testing tools allow you to take a look at the current status of the experiment). Peeking will only create the temptation to declare a winner when a variation appears to have more conversions than the original.

There are a few possible dangers in preemptively choosing a winner:

  1. Statistical significance has not been reached, therefore the data is not reliable
  2. There may be an element of novelty in the variation, so visitors are simply drawn to it more; as the novelty wears off, they may revert to the baseline
  3. There is some event unaccounted for that’s influencing conversions

To avoid the temptation to tinker with your tests, refrain from looking at the results every 15 minutes.
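
If this advice feels abstract, a quick simulation makes the danger concrete. In the sketch below, both variations convert at exactly the same rate, yet checking for significance after every batch of visitors "finds" a winner far more often than the nominal 5% error rate (the rates, batch sizes, and run counts are invented for the example):

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
TRUE_RATE = 0.03                 # both variations convert at 3%: no real winner
BATCHES, BATCH_SIZE, RUNS = 20, 500, 1000

false_wins = 0
for _ in range(RUNS):
    conversions = np.zeros(2)
    visitors = np.zeros(2)
    for _ in range(BATCHES):
        conversions += rng.binomial(BATCH_SIZE, TRUE_RATE, size=2)
        visitors += BATCH_SIZE
        # "Peeking": testing after every batch instead of only at the planned end.
        _, p_value = proportions_ztest(conversions, visitors)
        if p_value < 0.05:       # looks significant, so we are tempted to stop
            false_wins += 1
            break

print(f"Stopped early on a false winner in {false_wins / RUNS:.0%} of runs "
      f"(the nominal error rate is 5%)")
```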

Step 7: Evaluate the final results

Once your test is complete, has reached both the required sample size and statistical confidence, and has run for at least a couple of buying cycles, you can trust the results.

If the test has resulted in improvement of the metric you wanted to improve (most likely conversion rate), you can confidently implement the test variation.

Inconclusive or failed tests suggest that either the test hypothesis was faulty (not strong enough) or that the solution you tested simply does not increase conversions.
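
For the final readout, a two-proportion z-test is a standard way to check whether the difference in conversion rates is statistically significant. A minimal sketch using statsmodels (the counts are invented for illustration):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical final results once the planned sample size was reached.
conversions = [330, 395]       # control, variation
visitors = [11000, 11000]

stat, p_value = proportions_ztest(conversions, visitors)
control_rate, variation_rate = (c / n for c, n in zip(conversions, visitors))

print(f"Control: {control_rate:.2%} | Variation: {variation_rate:.2%}")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference is significant at the 95% confidence level")
else:
    print("Inconclusive: do not implement based on this test alone")
```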

A few more tips for A/B testing

  1. To get the best results, always test big changes. Changing some micro-aspect of the page is unlikely to influence conversions, as visitors will probably not even notice it. This of course does not apply to call-to-action buttons and similar elements vital to the conversion process (forms and the like), where even small changes can result in major improvements.
  2. The best metric to base a test on is conversion itself, as this is what brings you money. Testing for engagement and micro-conversion increases should be done incidentally and as a support to the main goal.
  3. When a variation is declared the winner, always check how it impacts all segments of your visitors (see the sketch after this list). If it has a significantly negative influence on a sizeable group of visitors, try to improve it for them, too.
  4. Even inconclusive and failed tests present a learning experience. You will know what works and what doesn’t, and this will point you in the right direction for later experimentation.
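
For the segment check in point 3, most testing tools report breakdowns directly, but the same check is easy to run on exported raw data. A minimal pandas sketch; the file name, column names, and variation labels are assumptions about your export format:

```python
import pandas as pd

# Hypothetical export: one row per visitor with variation, segment, and outcome.
df = pd.read_csv("experiment_results.csv")  # columns: variation, device, converted

# Conversion rate per variation within each visitor segment.
breakdown = (
    df.groupby(["device", "variation"])["converted"]
      .agg(visitors="count", rate="mean")
      .reset_index()
)
print(breakdown)

# Flag segments where the overall winner actually performs worse.
pivot = breakdown.pivot(index="device", columns="variation", values="rate")
print("Segments where the winner underperforms:")
print(pivot[pivot["B"] < pivot["control"]])
```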

Other tools commonly used to run A/B testing programs efficiently include workflow management tools, such as Trello, or even tools custom-built for managing the CRO process, such as Effective Experiments, Iridion, or Experiment Engine.

Our favorite A/B testing tools

As we have seen, the research process relies on a number of tools. Here are some of the most popular:

  • Quantitative research tools

    • For heatmapping and visual site analytics:
      • Hotjar
      • Crazy Egg
      • Mouse Flow
    • For traffic analytics:
      • BigCommerce Ecommerce Analytics
      • Kissmetrics
      • Google Analytics
      • Adobe Marketing Cloud
    • For form analytics:
      • Formisimo
      • Mouse Flow
      • JotForm
  • Qualitative research tools

    • For surveys:
      • Qualaroo
      • Qualtrics
      • Google Survey
      • SurveyMonkey
      • Hotjar
      • Typeform
  • Technical research tools

    • For site speed testing:
      • PageSpeed Insights
      • Pingdom Website Speed
    • For cross-browser testing:
      • BrowserStack
      • Selenium
      • ScreenFly
  • Heuristic research tools

    • For user testing:
      • UserTesting.com
      • UsabilityHub.com
    • For session recording:
      • LuckyOrange
      • Inspectlet
  • General/miscellaneous tools

    • Tools for managing workflow, managing analytics configuration, or managing the testing plan implementation

For every research task, there are multiple tools you can use, and picking the right one is an important task. You should take into account both price and functionality.

While price is often a good indicator of quality, you can save on expenses by selecting a tool that fits your current needs and upgrading to a better one later.
