
6 A/B Testing Pitfalls Which Will Lead You Astray



Overview

When performed correctly, A/B testing holds a treasure chest of prospective benefits and profit-earning opportunities.

The issue, however, is that A/B tests are frequently run or interpreted incorrectly.

Many people forget to control the test settings or exaggerate test successes, which leads to disappointing long-term performance and even detrimental site changes. Here are the top six pitfalls that plague marketers and easy ways to avoid them.
1. Poor Randomization Techniques

• The first step of any statistical test is to select a truly random control group, against which you will measure successes or failures.

• The best way to check for group bias is to first run an A/A test rather than an A/B test. An A/A test shows the same page to two different groups but analyzes them as though they were seeing different pages; if that analysis reports a significant difference, your assignment is likely biased (see the sketch below).
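As a quick illustration, here is a minimal Python sketch of an A/A check. The `split_visitor` function and all traffic numbers are hypothetical stand-ins for your real bucketing logic and data; the point is that two groups shown the identical page should rarely differ significantly.

```python
# A/A test sketch: both groups see the SAME page, so a "significant"
# difference points at biased assignment, not a real effect.
# All numbers here are hypothetical illustration data.
import random

from statsmodels.stats.proportion import proportions_ztest

random.seed(42)

def split_visitor(visitor_id):
    """Hypothetical 50/50 bucketing; swap in your real assignment logic."""
    return "A1" if random.random() < 0.5 else "A2"

# Simulate 10,000 visitors who all see the same page (true rate: 5%).
conversions = {"A1": 0, "A2": 0}
visitors = {"A1": 0, "A2": 0}
for visitor_id in range(10_000):
    bucket = split_visitor(visitor_id)
    visitors[bucket] += 1
    conversions[bucket] += random.random() < 0.05  # identical rate for both

# Two-proportion z-test: with unbiased bucketing, p > 0.05 in ~95% of runs.
z_stat, p_value = proportions_ztest(
    [conversions["A1"], conversions["A2"]],
    [visitors["A1"], visitors["A2"]],
)
print(f"A1 {conversions['A1']}/{visitors['A1']}  "
      f"A2 {conversions['A2']}/{visitors['A2']}  p = {p_value:.3f}")
```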
2. Measuring The Wrong Indicators

• Once a test has been designed and is being run, marketers frequently analyze changes in a single performance indicator.

• Although this effectively demonstrates how a change influences that indicator, it fails to reveal conflicting trends or more widespread changes. A sketch of tracking several indicators at once follows below.
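To make this concrete, here is a small Python sketch that reads off several indicators side by side instead of a single one. All group names and figures are invented for illustration.

```python
# Compare several KPIs side by side instead of a single "winner" metric.
# All figures are hypothetical illustration data.
groups = {
    "control": {"visitors": 5000, "conversions": 250, "revenue": 12500.0, "bounces": 2100},
    "variant": {"visitors": 5000, "conversions": 290, "revenue": 11900.0, "bounces": 2400},
}

for name, g in groups.items():
    conversion_rate = g["conversions"] / g["visitors"]
    revenue_per_visitor = g["revenue"] / g["visitors"]
    bounce_rate = g["bounces"] / g["visitors"]
    print(f"{name:8s} conv {conversion_rate:.1%}  "
          f"rev/visitor ${revenue_per_visitor:.2f}  bounce {bounce_rate:.1%}")

# Note the conflicting trends: the variant converts better (5.8% vs 5.0%)
# but earns less per visitor and bounces more -- a single-KPI view
# would have hidden two of those three signals.
```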
3. Short Testing Periods
• Limiting the time and size of your test can not only lead
to bias but also undermine its statistical significance.
• Marketers frequently end their tests as soon as there
appears to be a notable success or failure without
making sure the data is statistically significant. It is
important to decide on a sample size and test length
BEFORE running the A/B test.
• In most cases, you should ensure that a certain conversion threshold is met and that you are only counting unique visitors (see the sketch below). Both of these concerns can play a major role in skewing result data.
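As a small illustration of the unique-visitor point, the sketch below counts each visitor only once before computing a conversion rate. The event-log format and visitor IDs are hypothetical.

```python
# Count each visitor once, keeping only their first recorded event,
# so repeat visits don't inflate (or dilute) the conversion rate.
# The event log below is hypothetical illustration data.
events = [
    {"visitor_id": "v1", "converted": False},
    {"visitor_id": "v2", "converted": True},
    {"visitor_id": "v1", "converted": True},   # repeat visit: ignored
    {"visitor_id": "v3", "converted": False},
]

seen = set()
unique_visitors = 0
conversions = 0
for event in events:
    if event["visitor_id"] in seen:
        continue  # only the first visit per visitor counts
    seen.add(event["visitor_id"])
    unique_visitors += 1
    conversions += event["converted"]

print(f"unique conversion rate: {conversions / unique_visitors:.1%}")  # 1/3 = 33.3%
```

Whether a repeat visitor's later conversion should count is a design choice; this sketch keeps only the first event per visitor, which is one common convention.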
4. Forgetting Statistical Significance

• In order to determine the size of your test sample, a power analysis should be used to verify that your results will be statistically significant. An online sample size calculator can help you find an optimal sample size.

• Once the test has run, you may use a significance calculator to work out the confidence level (a sketch of both steps follows below).
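Both steps can be sketched with the statsmodels library, as below. The 5% baseline rate, the 6% target rate, and the result counts are all assumptions for illustration; swap in your own numbers.

```python
# Power analysis BEFORE the test, significance check AFTER -- a sketch
# using statsmodels. The 5% baseline and 6% target rates are assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# 1) Before the test: how many visitors per group are needed to detect
#    a lift from 5% to 6% with 80% power at a 5% significance level?
effect = proportion_effectsize(0.05, 0.06)  # Cohen's h for the two rates
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(f"required sample size per group: {n_per_group:.0f}")

# 2) After the test: a "significance calculator" is essentially a
#    two-proportion z-test. The counts below are hypothetical results.
z_stat, p_value = proportions_ztest([410, 496], [8200, 8200])
print(f"p = {p_value:.4f} -> significant at 95% confidence: {p_value < 0.05}")
```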
5. Settling and Local Maximums

• You keep trying new fonts, page layouts, image sizes, colors, themes, etc., but no longer see any type of change (if anything, your tests show negative returns). What do you do?

• Many people hit this phase and think it means their site has reached its optimal position.

• In reality, this is most likely an example of the Local Maximum Theory.

• The Local Maximum Theory suggests that within the basic space in which you are operating you may have reached a maximum, but there is still a lot of room for improvement outside that space, which bolder, more radical changes can reach.
6. Correlation Does Not Equal Causation

• A/B testing is a great statistical asset and source of information. Statistics, however, answers the question of what, NOT why.

• While A/B testing has the capability of compiling, analyzing, and summarizing data, it does not absolutely demonstrate causality.

• Using micro-KPIs can help identify false causality (see the sketch below).

• For example, if you have just added a site security verification sticker and see an immediate rise in the conversion rate, you would assume that the sticker caused the rise; in reality, the rise could just as easily stem from an unrelated change that happened at the same time.
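As a loose illustration of micro-KPIs, the sketch below breaks the headline conversion into funnel steps: if a checkout-page trust sticker truly drove the lift, the checkout step should improve while earlier steps stay flat. All step names and counts are invented.

```python
# Break the headline conversion rate into funnel micro-KPIs to see WHERE
# a change acts. If a checkout-page trust sticker caused the lift, the
# checkout step should improve while earlier steps stay flat.
# All step names and counts are hypothetical illustration data.
funnel = [
    # step name,       before-sticker count, after-sticker count
    ("landing visits",  10_000, 10_000),
    ("product views",    4_000,  4_600),   # an EARLIER step also moved...
    ("checkout starts",  1_200,  1_380),
    ("purchases",          300,    345),
]

prev_before, prev_after = funnel[0][1], funnel[0][2]
for name, before, after in funnel[1:]:
    rate_before = before / prev_before
    rate_after = after / prev_after
    print(f"{name:16s} before {rate_before:.1%}  after {rate_after:.1%}")
    prev_before, prev_after = before, after

# Here every step-to-step rate is unchanged except the very first one,
# suggesting the lift came from upstream traffic quality, not the sticker.
```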