How to Calculate — and Improve — A/B Test Significance
Is your website still not converting as it should?
Despite following best practices, doing market research, running A/B tests, and focusing on building trust, authority, and demand, are your conversions still falling short? A/B test significance may be exactly what you're missing. Here's what it is and how to use it to achieve conversion rate optimization.
Jump to A/B Test Significance calculator
What Is A/B Test Significance?
In digital marketing, we measure the actions whose impact on revenue and profitability we can easily see. Why wouldn't we? That's why we do what we do. We increase website traffic, generate leads, raise average order value, and so on.
It seems simple enough. But this isn't a task to take lightly. We can fall into a rut, deciding that what we're getting is good enough. And it's often hard to see the link between small actions and the results they produce. A/B testing solves this.
A/B testing (i.e., controlled conversion experiments) brings these links to light. It establishes the cause and effect behind the conversion rates achieved. But to ensure the link is really there, you need to determine the A/B test's statistical significance first. This is the scientific approach to optimization.
Now, wouldn't it be great if we could test every change A/B style? Unfortunately, we live in a real world where time and money have limits. So what do we do? We sample, knowing that a sample isn't necessarily representative of the whole.
The truth: natural variance occurs even when something is 100% random. If you set your playlist to random, the same song will play twice in a row every once in a while, right?
A/B test significance helps you quantify this uncertainty so you understand how much variance to expect and can adjust your conclusions for it. Low statistical significance means your results could largely be explained by chance.
To determine how much your results may be subject to chance, you’ll start with a null hypothesis. So, what variance can you expect if you didn’t change anything?
The null hypothesis assumes you changed nothing, so any variation you observe is pure chance.
If we're talking about landing pages, this means running two identical landing pages (an A/A test) and measuring the lead generation CTA click-through rate (CTR) of each. What is the variation in CTR between those identical pages? Is it 10%? Then, when you're doing your A/B testing, you'll know your results aren't statistically significant unless they exceed the 10% you can attribute to chance.
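To make this concrete, here's a minimal Python sketch that simulates an A/A test. The 1,000 visitors and 3% true CTR are hypothetical numbers for illustration, not figures from a real test:

```python
import random

random.seed(42)  # make the demo reproducible

def simulate_clicks(visitors, true_ctr):
    """Simulate clicks on a page whose true click-through rate is true_ctr."""
    return sum(random.random() < true_ctr for _ in range(visitors))

# Two IDENTICAL pages with the same true 3% CTR (hypothetical numbers).
visitors = 1000
clicks_a = simulate_clicks(visitors, 0.03)
clicks_b = simulate_clicks(visitors, 0.03)

print(f"Page A CTR: {clicks_a / visitors:.2%}")
print(f"Page B CTR: {clicks_b / visitors:.2%}")
# The two CTRs differ even though the pages are identical. That gap is
# the natural variance an A/A test measures.
```

Run it a few times with different seeds and you'll see the gap between the two "identical" pages wander. That's exactly the chance variation your A/B results have to beat.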
Determining statistical significance tells you whether the variance between A and B is:
- A true improvement in performance, OR
- Not a true improvement (i.e., attributable to chance)
The significance level (aka alpha, or α) is the probability of a false positive you're willing to accept, and your confidence level is 1 − α. The lower your α (and the higher your confidence level), the less likely it is that chance is influencing your results.
A/B Test Significance (Confidence Level, p-Value) Formula & Calculator
https://www.convertize.com/ab-test-significance/
You’ll need to do a series of calculations to determine the significance of your A/B test.
What Is Your Conversion Rate?
(Conversions / Traffic) × 100 = Conversion Rate
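As a quick sanity check, here's that formula in Python (the traffic and conversion counts below are made up for illustration):

```python
def conversion_rate(conversions, traffic):
    """(Conversions / Traffic) × 100 = Conversion Rate (%)."""
    return conversions / traffic * 100

# Hypothetical example: 40 conversions from 2,000 visitors.
print(conversion_rate(40, 2000))  # 2.0 -> a 2% conversion rate
```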
What Is Your Uplift?
Let's say you're A/B testing two different landing pages, each with a lead generation CTA. To determine whether A (your original landing page) is better or worse than B (your new test landing page), you would use this A/B testing calculation.
((Conversion Rate B − Conversion Rate A) / Conversion Rate A) × 100 = Uplift
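Continuing the sketch above with hypothetical numbers:

```python
def uplift(rate_a, rate_b):
    """((Rate B - Rate A) / Rate A) × 100 = Uplift (%)."""
    return (rate_b - rate_a) / rate_a * 100

# Hypothetical example: page A converts at 2.0%, page B at 2.5%.
print(uplift(2.0, 2.5))  # 25.0 -> B shows a 25% uplift over A
```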
How to Determine A/B Test Statistical Significance
The uplift above is statistically significant only if the difference is greater than what chance alone would plausibly produce.
Generally speaking, you want at least a 95% confidence level that your result is statistically significant. This means that if there were truly no difference and you ran the same test 20 times, you would expect a result like this to appear by chance only about 1 time. That's pretty good odds, and it's considered good enough in statistics, where it's impossible to exclude chance fully.
What’s Your P-value?
The p-value is your probability value. A high p-value means your result is easy to explain by chance alone; a low p-value means chance is an unlikely explanation for the results.
p-value ≤ .05 = 95% confidence
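If you'd rather compute the p-value yourself than use the calculator linked above, here is a self-contained sketch of the standard two-sided, two-proportion z-test (a common choice for comparing conversion rates; the traffic and conversion numbers are hypothetical):

```python
import math

def ab_test_p_value(conv_a, traffic_a, conv_b, traffic_b):
    """Two-sided two-proportion z-test; returns the p-value."""
    p_a = conv_a / traffic_a
    p_b = conv_b / traffic_b
    p_pool = (conv_a + conv_b) / (traffic_a + traffic_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / traffic_a + 1 / traffic_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical example: A = 40/2000 (2.0%), B = 64/2000 (3.2%).
p = ab_test_p_value(40, 2000, 64, 2000)
print(f"p-value: {p:.3f}")  # ~0.017, below .05 -> significant at 95%
```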
How to Choose an Appropriate Significance Level
What level of variance do you feel comfortable with? At what point do you feel confident enough that chance is not significantly impacting your numbers?
To stand up to scientific rigor, you’ll generally want a 95% confidence level whether you’re in eComm (B2C) or SaaS/B2B. The lower your confidence level, the more “guessing” you’re doing.
To increase A/B test statistical significance, you must do one of the following:
- Increase the sample size and/or run tests for longer (see the sample-size sketch after this list)
- Increase the improvement rate (aka uplift) to the point where chance alone is unlikely to explain it
- Improve how you collect data to reduce the level of variance in your numbers
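As a rough guide to the first option, here is a minimal sketch of the standard sample-size formula for a two-proportion test. The z-scores are the usual conventions (1.96 for 95% two-sided confidence, 0.84 for 80% power); the baseline rate and target uplift are hypothetical:

```python
import math

def sample_size_per_variant(base_rate, uplift_pct, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect uplift_pct at 95% confidence
    (z_alpha = 1.96, two-sided) with 80% power (z_beta = 0.84)."""
    p1 = base_rate
    p2 = base_rate * (1 + uplift_pct / 100)
    p_bar = (p1 + p2) / 2
    top = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(top / (p2 - p1) ** 2)

# Hypothetical example: 2% baseline, hoping to detect a 25% uplift.
print(sample_size_per_variant(0.02, 25))  # ~13,800 visitors per variant
```

Notice how quickly the required sample grows as the baseline rate or the uplift you want to detect shrinks; that's why low-traffic pages need to run tests longer.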
eComm A/B Test Significance
Let's say you want to know if offering free shipping increases your Average Order Value (AOV). You would first run a null hypothesis (A/A) test to establish the variation between two landing pages that don't offer free shipping. If the variation is too great (more than 5%), you may need to improve statistical significance using one of the methods above before running an A/B test, or simply account for the variance in your analysis (not advisable!).
B2B/SaaS A/B Test Significance
In B2B/SaaS, you might test a CTA for a free trial download. Start with an A/A test of your existing landing page to identify the variation. If it's too high, run the test longer or increase your sample size. Then test page A against a different CTA on page B to see whether B's CTA beats A by more than the percentage you can attribute to chance.
Common Mistakes
- Not understanding the statistical significance of your results
- Testing for too short a period of time
- Not defining the length of the test
- Using too small a sample size
- Not retesting when you identify that the results aren’t statistically significant
- Making invalid assumptions. Why aren't people converting? You may have to step back and work your way forward through the Buyer's Journey to the CTA. You need to develop a valid theory to test.
- Getting split-test happy. A/B testing tests one change at a time. Otherwise, you won’t know what to attribute the uplift to.
- A/B testing through trial and error. A/B testing is a little more complicated than comparing two pages to see which is better. If testing seems beyond your expertise or you’re already A/B testing and unsatisfied with results, consider working with an agency.
A/B Test Tips
- Develop a hypothesis before you test. It's just good science. If you don't have a hypothesis, it's too easy to convince yourself after the fact that B is better than A. Ex: To be considered the better of the two, B must generate 10% more leads than A over the same time period.
- Test pages in your sales funnel / Buyer’s Journey. If you’re B2B, your blog is probably a major part of your sales funnel. In some eComm, it may be less important. Social media CTR to product pages may be more important. If you have time to kill, then sure, test away. But focus where it matters.
- Clearly define your sample size and time frame.
- Retest if statistical significance isn’t achieved. Don’t be satisfied with “that’s good enough” because it probably isn’t.
What to Do Next
Are your A/B tests not yielding the higher conversion rates you expect? Do you struggle to achieve significance with A/B tests or find A/B testing too time-consuming? We can help you to streamline your conversion rate optimization. Get Your Free 5-Point CRO Audit.
And while you’re here, be sure to Get CRO tips (and more) to your inbox!
At Digital Current, we offer a full suite of CRO services:
- Conversion Rate Optimization Audit – Achieve the highest possible ROI through optimization by understanding where you stand right now.
- Customer Research & Interviews – Learn directly from your customers to guide and streamline optimization.
- A/B Testing & User Testing – Never settle. Figure out through ongoing testing and analysis how to maximize your ROI.