Statistical Significance Calculator
Determine whether your A/B test results are statistically significant.
Control (Group A)
Variation (Group B)
Result
Understanding Statistical Significance in A/B Testing
In the world of data-driven decision making, statistical significance is the yardstick we use to determine whether a change in conversion rate was caused by a specific variation or is simply the result of random chance. When you run an A/B test, you compare a Control group (A) against a Variation (B).
How This Calculator Works
This tool uses a two-tailed Z-test to compare two independent proportions. It calculates the p-value: the probability that a difference at least as large as the one you observed could have occurred by chance alone.
- Conversion Rate: The percentage of visitors who completed the desired action (Conversions / Visitors).
- Lift: The percentage increase or decrease of the Variation compared to the Control.
- Confidence Level: Computed as 1 minus the p-value. The higher it is, the less likely the observed difference arose from random chance alone. Conventionally, a confidence level of 95% or higher is considered "statistically significant."
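The two-proportion Z-test described above can be sketched in a few lines of Python. This is an illustrative implementation using only the standard library, not the calculator's actual internals; the function name and return values are assumptions.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed Z-test for two independent proportions.

    Illustrative sketch: returns (z, p_value, confidence).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    # Standard error of the difference between the two proportions
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF,
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value, 1 - p_value
```

The pooled proportion is used because the null hypothesis assumes both groups share one true conversion rate; the test then asks how surprising the observed gap is under that assumption.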
Example Calculation
Imagine you have a landing page (Control) with 1,000 visitors and 100 conversions (10% rate). You test a new headline (Variation) and get 1,000 visitors with 130 conversions (13% rate).
While the Variation's conversion rate looks better (a 30% relative lift), the calculator checks whether that 3-percentage-point raw difference is large enough, relative to your sample size, to be confident it wasn't just a "lucky" week. For these numbers, the confidence level works out to roughly 96.5%, so the result is statistically significant.
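Running the example numbers through the Z-test arithmetic confirms the figures quoted above. This is a standalone worked computation, assuming the standard normal-approximation formulas:

```python
import math

# Control: 1,000 visitors, 100 conversions; Variation: 1,000 visitors, 130 conversions
n_a, conv_a = 1000, 100
n_b, conv_b = 1000, 130

rate_a, rate_b = conv_a / n_a, conv_b / n_b   # 0.10 and 0.13
lift = (rate_b - rate_a) / rate_a             # 0.30, i.e. a 30% relative lift

p_pool = (conv_a + conv_b) / (n_a + n_b)      # 0.115 pooled conversion rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (rate_b - rate_a) / se                    # ~ 2.10
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
confidence = 1 - p_value                      # ~ 0.9645, i.e. roughly 96.5%
```

So a 3-percentage-point gap on 1,000 visitors per group clears the 95% bar, but not by a wide margin.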
Why Sample Size Matters
Smaller sample sizes lead to high variance. If you only have 10 visitors in each group, a single conversion swings the rate by 10 percentage points, inviting "false positives." The more data you collect, the more "power" your test has to detect even small improvements with high confidence.
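A rough way to plan sample size before running a test is the common normal-approximation formula n ≈ 2(z_α/2 + z_β)² p̄(1 − p̄) / δ², where δ is the minimum absolute difference you want to detect and p̄ the average of the two rates. The sketch below is a planning aid under those standard assumptions; the function name and defaults (95% confidence, 80% power) are illustrative, not part of this calculator:

```python
import math
from statistics import NormalDist

def required_sample_size(p_base, min_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per group to detect a relative lift.

    Illustrative normal-approximation sketch, not the calculator's
    exact method.
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_beta = nd.inv_cdf(power)            # power requirement
    p_var = p_base * (1 + min_lift)       # expected variation rate
    delta = p_var - p_base                # absolute difference to detect
    p_avg = (p_base + p_var) / 2
    n = 2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / delta ** 2
    return math.ceil(n)
```

For the example above (10% baseline, hoping to detect a 30% relative lift), this formula suggests on the order of 1,800 visitors per group, which is why the 1,000-visitor test only barely reached significance. Halving the detectable lift roughly quadruples the required sample.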