After generating traffic to both, you find that the variation produces
more conversions than the original. Since there’s only one difference
between the two pages, you can be confident that the headline was
the reason for the lift.
1 | Dejan Majkic
Now, this is how all split tests would be performed in an ideal world
— one change at a time. Unfortunately, we don’t live in an ideal world.
After running traffic to both, you find that your new variation, with the
changes implemented above, generates 8% more conversions than
the original. Hooray! Success!
But… wait a minute… you don’t know why that conversion lift
occurred. If you had conducted a multivariate test, or tested one
element at a time, you could confidently give a reason…
Keep in mind, though, running a split test isn’t as easy as “make a change and drive equal traffic to each page.” There’s a whole lot more to it, regardless of which method — practical or pinpoint-accurate — you choose to use.
Search “split testing case study” and you’ll find endless blog posts claiming a particular button color produces the most conversions, or that there’s one ideal number of fields to use on your form. So it’s only natural you’d try to implement these changes on your own post-click landing page; after all, if it worked for them, it could work for you.
Except, there’s a problem. Their business, post-click landing page,
offer, and audience are not the same as yours. So, if you’re thinking
of trying something that worked for someone else, do yourself a favor
and stop. Stop it right now.
Instead, come up with your own tests using your own data. Heat mapping tools, analytics software, and customer interviews are all great for identifying where your web page is falling short. From those, you can form a hypothesis about tests that have the potential to boost conversions.
They blindly follow best practices
For every one article you find about the perfect button color, or the
ideal number of form fields, you’ll find two refuting it.
Do marketer 2’s test results prove that green is a better button color than red?
Absolutely not. Marketer 1 could very well test green against red and
find that red still produces more conversions on her post-click landing
page. The impact of button color on conversions is heavily
dependent on a number of things — like your audience and the color
of the rest of the page, for instance — all of which vary from business
to business.
What works for someone else may not work for you. That’s why all
your tests should be rooted in your own data.
They don’t follow best practices at all
While you shouldn’t blindly follow best practices, ignoring them all
can be just as harmful. There are some universal truths that apply to
all post-click landing pages.
For example, it’d be a waste of time to test versions of your post-click
landing page with and without navigation, because we already know
that navigation kills conversion rate by giving prospects numerous
exits off your page. Similarly, you wouldn’t test a blue call-to-action
button on a page with a blue background because it wouldn’t grab as
much attention as a contrasting color would.
We’d hope not. You see, businesses like Google have entire
departments dedicated to testing like this, and the revenue to support
it, but, on the whole, most businesses don’t.
In fact, CXL’s 2016 State Of The Industry Report found that 53% of
businesses that use conversion rate optimization don’t even have a
dedicated budget for it. Additionally, most conversion optimizers
work at a business with an annual revenue below $100,000.
That’s why it usually pays to focus on big changes that have the potential to make a big impact on your conversion rate — which brings us to the next big mistake.
“If I get 100 people to my site, and I have a 20% conversion rate, that
means I get 20 people to convert... I can try to get that conversion
rate to 35% and get 35 people to convert, or, I could just figure out
how to get 1,000 new visitors, maintain that 20% conversion, and
you’ll see that 20% of 1,000 (200), is much higher than 35% of 100
(35).”
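The arithmetic in that quote is easy to check for yourself. Here is a minimal Python sketch using the quote’s own numbers:

```python
def conversions(visitors: int, rate: float) -> int:
    """Expected number of converting visitors."""
    return round(visitors * rate)

# Option 1: lift the conversion rate from 20% to 35% on existing traffic.
lift_rate = conversions(100, 0.35)       # 35 conversions

# Option 2: keep the 20% rate but grow traffic to 1,000 visitors.
grow_traffic = conversions(1000, 0.20)   # 200 conversions

print(lift_rate, grow_traffic)  # 35 200
```

Same conversion rate, ten times the traffic, and the raw count of conversions dwarfs what the rate improvement alone would deliver.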
What we’re saying is, before you begin split testing, make sure there
aren’t any other holes in your campaign that could use some fixing.
Okay, now that you’re aware of some common pitfalls, be honest with
yourself about your readiness to split test. Is everything else in your
campaign looking right?
How to split test post-click landing (sales) pages
From start to finish, here are the steps you should take when
conducting a split test.
2. Create a hypothesis
In this case, you might say, “After observing that the average post-
click landing page user session is only 5 seconds, we believe that
creating a more compelling headline will get them to read the body
copy and spend more time on the page, which will ultimately lead to
more conversions.”
3. Calculate your sample size
Before you can conclude your test, your results will need to reach “statistical significance”: enough evidence that the difference you see isn’t just random chance. In practice, that means calculating up front how many visits each of your pages (control and variation) will need before you can trust your results.
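Plenty of online calculators will do this for you, but the math underneath is a standard two-proportion power calculation. A minimal Python sketch follows; the 20% baseline rate, 20% relative minimum detectable effect, and the conventional 5% significance / 80% power settings are illustrative assumptions, not numbers from this book:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, rel_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per page (control and variation) to detect a
    relative lift of `rel_mde` over `baseline` with a two-sided test."""
    p1 = baseline
    p2 = baseline * (1 + rel_mde)             # rate you hope the variation hits
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# 20% baseline conversion rate, hoping to detect at least a 20% relative lift:
print(sample_size_per_variant(0.20, 0.20))  # → 1680 visitors per page
```

Notice the trade-off: halve the effect you want to detect and the required sample size roughly quadruples, which is one more reason to test big changes rather than tiny tweaks.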
Make sure your original post-click landing page remains the same.
Otherwise, your baseline for testing won’t be accurate.
Keep in mind, while it’s best to address these in the beginning, you’ll
have to watch for them throughout. Obstacles to accurate results can
pop up when you least expect them.
Examine everything before your test goes live. Does your post-click
landing page look the same in every browser? Is your CTA button
working? Are all the links in your ads correct?
7. Drive traffic to your pages
Now it’s time to drive traffic to your pages. Make sure, as we
mentioned before, the traffic is coming from the same place (unless,
of course, you’re split-testing your traffic sources or ads). And be
careful where that traffic is coming from: the “selection effect” describes how an unrepresentative traffic source can skew the results of your test. Peep Laja from CXL elaborates:
“Example: you send promotional traffic from your email list to a page
that you’re running a test on. People who subscribe to your list like
you way more than your average visitor. So now you optimize the
page (e.g. post-click landing page, product page, etc.) to work with
your loyal traffic, thinking they represent the total traffic. But that’s
rarely the case!”
Once you’ve picked your traffic sources, keep running your test until both pages (control and variation) have hit the sample size you identified in your pre-testing calculations. And if you hit that number in less than a week, keep the test running.
Why?
Because traffic behaves differently on different days of the week. A test that runs for at least one full week captures your typical weekday and weekend visitor mix instead of a lopsided slice of it.
If you’ve hit your sample size and run your test for at least a full week, all the while accounting for confounding variables that might poison your data, it’s time to look at the results.
How did your variation fare? Did you produce a big lift? A small one?
Remember that if you set your minimum detectable effect to 20% and
you produced a lift smaller than that, you can’t be confident that lift
was a result of your adjustments.
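If you’d rather sanity-check a result yourself than take a tool’s word for it, the standard approach for comparing two conversion rates is a two-proportion z-test. A minimal Python sketch, with made-up visitor and conversion counts for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical result: control converts 320 of 1,680 visitors (19.0%),
# variation converts 400 of 1,680 (23.8%).
p = two_proportion_z_test(320, 1680, 400, 1680)
print(f"p-value = {p:.4f}")  # below 0.05 → unlikely to be random noise
```

A p-value below your chosen significance level (commonly 0.05) means the lift is unlikely to be noise; a higher p-value means you can’t rule chance out, no matter how encouraging the raw numbers look.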
If you did produce a lift bigger than that, congrats! You’re done
optimizing now…
Not.