How I ran weekly pricing tests after moving onboarding to the web

Once we put onboarding and the paywall on the web, we started a weekly test cadence. Simple guardrails kept it sane (see the sketch after the list):

  • Always keep a 20% holdout on the current best price.
  • Minimum 500 checkouts per arm before calling a winner.
  • No overlapping promos in the same audience.
  • Freeze copy during price tests so we are only testing one variable.

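To make the first two guardrails concrete, here is a minimal Python sketch. The 20% holdout and the 500-checkout minimum come from the list above; the hashing scheme, function names, and arm labels are illustrative assumptions, not a description of our actual system.

```python
import hashlib

HOLDOUT_SHARE = 0.20          # always-on holdout at the current best price
MIN_CHECKOUTS_PER_ARM = 500   # threshold before calling a winner

def assign_arm(user_id: str, test_arms: list[str]) -> str:
    """Deterministically bucket a user: the first 20% land in the holdout,
    the rest are split evenly across the test arms."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000   # stable value in [0, 1)
    if bucket < HOLDOUT_SHARE:
        return "holdout_current_best"
    # Rescale the remaining range evenly across the test arms
    idx = int((bucket - HOLDOUT_SHARE) / (1 - HOLDOUT_SHARE) * len(test_arms))
    return test_arms[min(idx, len(test_arms) - 1)]

def ready_to_call(checkouts_by_arm: dict[str, int]) -> bool:
    """Only call a winner once every arm has enough checkouts."""
    return all(n >= MIN_CHECKOUTS_PER_ARM for n in checkouts_by_arm.values())
```

Hash-based bucketing keeps assignment stable across sessions, so the holdout does not churn from week to week.
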
Operationally, this cut out waiting for app releases. We swapped prices and trial lengths mid-week when needed and saw the impact within a few hours.

What cadence and guardrails are you using so price tests stay reliable and don’t wreck revenue for a month?

Weekly works if you control variables.

Keep a stable holdout. Freeze the copy. Rotate only one price lever at a time. I ship via a web funnel so I can flip variants fast. Web2Wave.com helped me push changes without releases.

Stop tests early if refund rate spikes.
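
One way to wire that kill switch up, as a rough sketch; the 2x multiplier, the 100-purchase floor, and the function name are illustrative assumptions.

```python
def should_stop_early(refunds: int, purchases: int,
                      baseline_refund_rate: float,
                      spike_multiplier: float = 2.0,
                      min_purchases: int = 100) -> bool:
    """Kill switch: stop an arm whose refund rate runs well above baseline.
    Waits for a minimum number of purchases so a single refund can't trip it."""
    if purchases < min_purchases:
        return False
    return refunds / purchases > baseline_refund_rate * spike_multiplier

# e.g. with a 3% baseline, an arm at 9 refunds on 120 purchases (7.5%) gets stopped
should_stop_early(refunds=9, purchases=120, baseline_refund_rate=0.03)  # True
```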

Speed matters more than perfection. I run short tests, then confirm with a second pass.

Web2Wave.com lets me edit prices and trials on the web and see results in the app instantly. I keep a clean holdout and log every change with a timestamp.

Keep a fixed holdout. It saves you from chasing noise.

If you change prices, do not touch headlines in the same week. That messed up our read.

Weekly tests were fine. Daily was noisy.

Guardrails I like: fixed holdout, minimum duration of seven days to cover weekday effects, and a cap on discount depth so you do not train users to wait. Run geo splits if traffic is high, but avoid cross-geo mixing.

Instrument refund rate and post-trial retention as secondary metrics. A price that wins on day one but loses on day 30 can cost more than it makes.
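
A minimal sketch of what that day-30 read per arm could look like; the field and function names are assumptions, only the metrics themselves (refund rate, post-trial retention, revenue net of refunds) come from the advice above.

```python
from dataclasses import dataclass

@dataclass
class ArmStats:
    checkouts: int
    trial_starts: int
    paid_after_trial: int    # still paying once the trial ended
    refunds: int
    gross_revenue: float
    refunded_revenue: float

def day30_read(stats: ArmStats) -> dict:
    """Secondary metrics to check before trusting a day-one winner."""
    return {
        "refund_rate": stats.refunds / max(stats.checkouts, 1),
        "post_trial_retention": stats.paid_after_trial / max(stats.trial_starts, 1),
        "net_revenue": stats.gross_revenue - stats.refunded_revenue,
    }
```

Compare these across arms at day 30; a variant that loses on net revenue should not ship, whatever its day-one conversion looked like.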

Holdout and minimum sample size. Otherwise tests lie.
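
To put a number on "minimum sample size", here is the standard two-proportion calculation as a Python sketch; the 5% → 6% conversion rates are illustrative, not figures from the thread.

```python
from scipy.stats import norm

def sample_per_arm(p_control: float, p_variant: float,
                   alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors (or checkout-eligible sessions) needed per arm to detect
    the difference between two conversion rates."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2
    return int(n) + 1

print(sample_per_arm(0.05, 0.06))   # roughly 8,000+ sessions per arm for a 5% -> 6% lift
```

Note the unit is sessions, not checkouts; at a 5% conversion rate, the 500-checkout floor above corresponds to roughly 10,000 sessions per arm, which lands in the same ballpark.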