A/B Testing

Confidence Interval

The range within which the true conversion rate likely falls. A narrow interval means more precise measurement; a wide interval signals the need for more data.

What is a Confidence Interval?

A confidence interval is a range of values that is likely to contain the true value of a metric — such as conversion rate — based on the data collected in an experiment. When you run an A/B test, the conversion rate you observe is an estimate. The confidence interval tells you how precise that estimate is.

For example, if your test variant shows a conversion rate of 3.5% with a 95% confidence interval of 3.1% to 3.9%, it means that if you repeated this test many times, 95% of the intervals would contain the true conversion rate.
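The interval in this example can be reproduced with the standard normal approximation. A minimal sketch in Python (the function name is illustrative, and the Wald approximation is one of several ways to compute a proportion CI):

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval
    for an observed conversion rate."""
    p = conversions / visitors
    se = math.sqrt(p * (1 - p) / visitors)  # standard error of the proportion
    return p - z * se, p + z * se

# 350 conversions out of 10,000 visitors = 3.5% observed rate
low, high = conversion_ci(350, 10_000)
print(f"{low:.1%} to {high:.1%}")  # roughly 3.1% to 3.9%
```

At smaller sample sizes or very low conversion rates, a Wilson score interval is usually more accurate than this simple approximation.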

How to interpret a Confidence Interval

The width of the interval depends on two factors:

  1. Sample size — More visitors produce a narrower interval. This is why running tests to completion matters.
  2. Variability in the data — If visitor behavior is highly variable (some convert, many do not), the interval widens.
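The sample-size effect is easy to see numerically: at a fixed conversion rate, the interval narrows roughly with the square root of the number of visitors. A quick sketch, assuming the same normal approximation as above:

```python
import math

def ci_width(conversions, visitors, z=1.96):
    """Total width of the 95% normal-approximation interval."""
    p = conversions / visitors
    return 2 * z * math.sqrt(p * (1 - p) / visitors)

# Same 3.5% observed rate at increasing sample sizes:
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7,}  width={ci_width(round(0.035 * n), n):.2%}")
```

Each tenfold increase in traffic shrinks the interval by a factor of about √10 ≈ 3.2, which is why precision gets expensive as tests mature.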

Overlap rule: When comparing two test variations, non-overlapping confidence intervals imply a statistically significant difference. Overlapping intervals, however, are not conclusive either way — the difference can still be significant, so check the p-value directly rather than declaring the result inconclusive.
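The "overlapping does not mean inconclusive" point can be demonstrated with a standard two-proportion z-test. A sketch (function name illustrative, pooled-variance form of the test):

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Control 3.0% (300/10k) vs variant 3.6% (360/10k): their individual
# 95% intervals overlap, yet the difference is significant.
print(two_proportion_p_value(300, 10_000, 360, 10_000))  # p < 0.05
```

Here the two 95% intervals overlap slightly, but the direct test gives p below 0.05 — exactly the case where relying on the overlap rule alone would throw away a real result.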

Why it matters for eCommerce and SaaS

Confidence intervals prevent premature decision-making. Without them, a team might see a 10% lift in a test and ship the change, not realizing the true lift could be anywhere from -2% to +22%. The interval quantifies uncertainty and forces discipline.
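A rough way to quantify that range of possible lifts is to put an interval on the absolute difference and divide by the control rate. A simplified sketch (this plug-in approximation ignores uncertainty in the control rate itself; a delta-method or bootstrap interval would be more rigorous):

```python
import math

def lift_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Approximate 95% CI for the relative lift of variant B over control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Unpooled standard error of the difference in rates
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return (diff - z * se) / p_a, (diff + z * se) / p_a

# A small test showing a 10% observed lift (5.0% -> 5.5%)
low, high = lift_ci(100, 2_000, 110, 2_000)
print(f"observed +10% lift, true lift between {low:+.0%} and {high:+.0%}")
```

With only 2,000 visitors per arm, the "10% winner" is compatible with a substantial loss — the kind of result that looks shippable until the interval is reported.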

For eCommerce businesses, this discipline is especially important when tests affect high-revenue pages. A false positive on a product page or checkout flow can cost thousands per day in lost sales. For SaaS, incorrectly rolling out a pricing page change based on noisy data can suppress signups for weeks before the mistake is detected.

Common mistakes

  • Checking results too early — Confidence intervals are unreliable at small sample sizes. Always wait until your test reaches the pre-calculated sample size.
  • Ignoring interval width — A “winning” variant with a very wide interval is not a confident result. The true effect might be minimal.
  • Confusing confidence level with probability — A 95% confidence interval does not mean there is a 95% chance the true value falls within it. It means the method produces intervals that contain the true value 95% of the time across repeated experiments.
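The "pre-calculated sample size" in the first point comes from a standard power calculation. A sketch of the usual two-proportion approximation (the hard-coded z-values correspond to α = 0.05 two-sided and 80% power; the function name is illustrative):

```python
import math

def sample_size_per_variant(base_rate, mde_rel):
    """Approximate visitors needed per variant to detect a relative
    lift of `mde_rel` over `base_rate` at alpha=0.05, power=0.80."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    p1 = base_rate
    p2 = base_rate * (1 + mde_rel)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 10% relative lift on a 3.5% base conversion rate
print(sample_size_per_variant(0.035, 0.10))  # tens of thousands per variant
```

Note how quickly the requirement grows for small effects: halving the minimum detectable effect roughly quadruples the traffic needed, which is why checking results early at a fraction of this sample size produces unreliable intervals.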

How acceleroi approaches it

At acceleroi, we report confidence intervals alongside every test result, not just a “winner” or “loser” label. This gives clients a clear picture of the range of expected impact before deciding to ship a change. We also use confidence intervals during test planning to set appropriate minimum detectable effects and ensure the test runs long enough to produce actionable precision.

Related terms

  • Statistical Significance
  • Sample Size
  • A/B Testing

Want us to optimize your conversion rate?

Get a free CRO audit — we'll identify your top conversion opportunities in under 60 seconds.
