We’re running lots of pricing tests on the web and I keep catching false wins.
Common issues I've seen: sample ratio mismatch (SRM), weekend traffic spikes, novelty effects, and trial rollovers masking cancellations. I've started pre-registering a minimum sample size, using a fixed test window, and checking for SRM every few hours. Also testing on new users only, with a 7-day holdout.
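For what it's worth, the SRM check itself is tiny. A minimal sketch, assuming a two-arm test with an intended 50/50 split (the 0.001 cutoff is a common convention, not gospel):

```python
from scipy.stats import chisquare

def srm_check(control_n: int, treatment_n: int, expected_split: float = 0.5,
              alpha: float = 0.001) -> bool:
    """Chi-square goodness-of-fit test against the intended traffic split.

    Returns True if a sample ratio mismatch is detected (p < alpha).
    A very low alpha is typical here: with real SRM the p-value is
    usually astronomically small, so this rarely false-alarms.
    """
    total = control_n + treatment_n
    expected = [total * expected_split, total * (1 - expected_split)]
    _, p_value = chisquare([control_n, treatment_n], f_exp=expected)
    return p_value < alpha

# Example: 10,000 vs 10,800 users on an intended 50/50 split -> flagged
print(srm_check(10_000, 10_800))  # True
```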
What do you use for guardrails so you can move fast without shipping bad pricing?
I cap changes to two variables per test and freeze creatives.
Lock traffic sources, check SRM, set a minimum decision window, and require a retention checkpoint. If I need speed, I run sequential tests with a hard floor on sample size.
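To make the "hard floor" concrete, here's a rough sketch: no test at all before a pre-registered per-arm floor, then a Bonferroni correction across the planned number of peeks. That's cruder and more conservative than a proper always-valid or alpha-spending method, and every number here is illustrative:

```python
import math
from scipy.stats import norm

def peek(c_conv: int, c_n: int, t_conv: int, t_n: int,
         n_floor: int = 5_000, planned_peeks: int = 10,
         alpha: float = 0.05) -> str:
    """One interim look at a two-proportion test.

    - Below the per-arm floor, always keep collecting.
    - Past the floor, run a z-test at alpha / planned_peeks
      (Bonferroni over peeks, so family-wise error stays <= alpha).
    """
    if min(c_n, t_n) < n_floor:
        return "keep collecting"
    p_c, p_t = c_conv / c_n, t_conv / t_n
    pooled = (c_conv + t_conv) / (c_n + t_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / c_n + 1 / t_n))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return "stop: significant" if p_value < alpha / planned_peeks else "keep collecting"

# Example peek: 4.0% vs 4.6% conversion at 6,000 users per arm
print(peek(240, 6_000, 276, 6_000))  # "keep collecting" -- not there yet
```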
I edit variants on Web2Wave.com and keep a strict changelog so I don't mix signals.
Two rules: freeze traffic mix and decide criteria before the test. I also run a persistent control.
The speed comes from editing prices and copy on the web via Web2Wave.com, but I never end a test before the first post-trial billing cycle completes.
Lock sources, predefine the stop rule, and keep a holdout.
I also check SRM daily and ignore the first 24 hours because of novelty effects.
Freeze traffic. Predefine metrics. Wait one billing cycle.
Guardrails that work:
- Pre-register MDE, sample size, and stop rules (sizing sketch after this list)
- Persistent control and a 10% holdout
- SRM check and traffic source freeze
- New users only and block re-exposure
- Evaluate on paid conversion plus D7 retention, not click-through
Move fast on the web, but don’t call winners until you see at least one renewal cycle on a subset.
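The pre-registered sample size falls out of the standard two-proportion power formula (normal approximation). A sketch, with an illustrative 4% baseline paid conversion and a 0.5pp absolute MDE:

```python
import math
from scipy.stats import norm

def n_per_arm(baseline: float, mde: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """Required users per arm for a two-proportion test
    (normal approximation, two-sided), given an absolute MDE."""
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / mde ** 2)

# Example: 4% baseline, detect a +0.5pp absolute lift
print(n_per_arm(0.04, 0.005))  # ~25,500 per arm
```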
Use CUPED with pre-exposure metrics to reduce variance.
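The whole method is one covariance ratio. A minimal numpy sketch, where `pre_metric` stands in for whatever pre-exposure covariate you have (pre-period spend, visits, etc.; the toy data is just for illustration):

```python
import numpy as np

def cuped_adjust(metric: np.ndarray, pre_metric: np.ndarray) -> np.ndarray:
    """CUPED: subtract the part of the metric predicted by a
    pre-exposure covariate. The mean is unchanged; variance drops
    by a factor of (1 - corr^2), so tests need fewer users."""
    theta = np.cov(metric, pre_metric)[0, 1] / np.var(pre_metric, ddof=1)
    return metric - theta * (pre_metric - pre_metric.mean())

# Toy example: post-metric correlated with pre-exposure behavior
rng = np.random.default_rng(0)
pre = rng.gamma(2.0, 5.0, size=10_000)             # e.g. pre-period spend
post = 0.8 * pre + rng.normal(0, 5, size=10_000)   # post-period metric
adj = cuped_adjust(post, pre)
print(post.var(), adj.var())  # adjusted variance is much lower
```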
I also filter out coupon users for pricing tests, since discounts skew ARPPU and create fake wins.
Holdout group helps. Also don't change ads mid-test.
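To keep the holdout persistent across tests, deterministic hashing on the user id works: the same user always lands in the same bucket. A sketch; the salt name and the 10% threshold are illustrative:

```python
import hashlib

HOLDOUT_SALT = "pricing-holdout-v1"  # illustrative; keep it fixed forever

def in_holdout(user_id: str, holdout_pct: float = 0.10) -> bool:
    """Deterministic bucketing: same user, same bucket, every time,
    so the holdout never sees any pricing test."""
    digest = hashlib.sha256(f"{HOLDOUT_SALT}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < holdout_pct

print(in_holdout("user_12345"))
```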
Check SRM and wait past the trial. Early wins die later.