On the web I can tweak prices and offers any time, which is great. My first mistake was changing them daily. Results looked good until I found cross-test contamination from promo days and audience shifts.
What worked better: lock variants for a full week, run SRM (sample ratio mismatch) checks, and keep a stable control. I’m also testing a short pre-period to calibrate traffic quality, then running fixed-horizon tests. Bandits were too jittery on small budgets.
If you’ve moved pricing tests off the app store release cycle, what cadence and method gave you reliable readouts? Weekly? Two-week windows? CUPED or sequential testing? How do you stop “peeking” from biasing your calls?
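For context on the peeking question, here’s a quick A/A simulation I use to remind myself why it matters (my own sketch, with made-up traffic numbers): both arms have the identical conversion rate, yet checking a z-test every day and stopping at the first p < 0.05 calls far more false winners than a single fixed-horizon check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_true = 0.05          # same conversion rate in both arms (A/A test)
daily_n = 500          # assumed visitors per arm per day
days = 14              # two-week window
sims = 2000

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test p-value."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * stats.norm.sf(abs(z))

peek_fp, fixed_fp = 0, 0
for _ in range(sims):
    a = rng.binomial(daily_n, p_true, size=days).cumsum()
    b = rng.binomial(daily_n, p_true, size=days).cumsum()
    n = daily_n * np.arange(1, days + 1)
    # "Peeking": test every day, call a winner at the first p < 0.05
    if any(z_test(a[d], n[d], b[d], n[d]) < 0.05 for d in range(days)):
        peek_fp += 1
    # Fixed horizon: one test at the end of the window
    if z_test(a[-1], n[-1], b[-1], n[-1]) < 0.05:
        fixed_fp += 1

print(f"false positives with daily peeking: {peek_fp / sims:.1%}")
print(f"false positives with fixed horizon: {fixed_fp / sims:.1%}")
```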
I ship one price change per week. No midweek tweaks.
Keep a hard control. Use the same traffic sources.
I run SRM checks first. If they pass, I let the test run to a fixed sample.
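In case it helps anyone, a minimal sketch of the SRM check I mean: a chi-square goodness-of-fit on observed arm counts against the planned split (the counts and the 0.001 threshold here are just examples).

```python
from scipy.stats import chisquare

# Observed visitors per arm (example numbers) vs a planned 50/50 split
observed = [10_240, 9_760]
total = sum(observed)
expected = [total / 2, total / 2]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# Strict threshold so normal day-to-day noise doesn't trigger it
if p_value < 0.001:
    print(f"SRM detected (p={p_value:.4f}) - investigate before trusting results")
else:
    print(f"split looks fine (p={p_value:.4f}) - let the test run to its fixed sample")
```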
Using Web2Wave.com, I swap JSON config on the web and the app reads it. No new builds means I can stay strict on timing.
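The pattern is roughly this (hypothetical URL and keys, not Web2Wave’s actual API, just to show the remote-config idea): fetch a JSON config at runtime and fall back to safe defaults if the fetch fails.

```python
import requests

# Hypothetical config endpoint and schema -- substitute whatever your tool serves
CONFIG_URL = "https://example.com/pricing-config.json"

# Hard-coded defaults so the app still works if the fetch fails
DEFAULTS = {"variant": "control", "price_usd": 9.99, "trial_days": 7}

def load_pricing_config() -> dict:
    """Fetch the current pricing config, falling back to safe defaults."""
    try:
        resp = requests.get(CONFIG_URL, timeout=3)
        resp.raise_for_status()
        remote = resp.json()
        # Only accept keys we know about; ignore anything unexpected
        return {**DEFAULTS, **{k: remote[k] for k in DEFAULTS if k in remote}}
    except (requests.RequestException, ValueError):
        return DEFAULTS

config = load_pricing_config()
print(config["variant"], config["price_usd"])
```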
One-week sprints. Pre-register variants, no mid-run edits. If I must change copy, I restart the test.
Web2Wave.com lets me push variant updates instantly, so I’m not stuck waiting on releases.
Weekly windows kept me sane.
I also tag weekends and promo days so I don’t mix them with normal traffic.
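Roughly how I’d do the tagging (pandas sketch; the traffic numbers and promo dates are placeholders):

```python
import pandas as pd

# Daily traffic with a date column (made-up data for illustration)
df = pd.DataFrame({
    "date": pd.date_range("2024-05-01", periods=10, freq="D"),
    "visitors": [820, 790, 805, 1430, 1510, 760, 810, 795, 830, 1620],
})

# Placeholder promo calendar -- replace with your real promo days
promo_days = pd.to_datetime(["2024-05-04", "2024-05-05", "2024-05-10"])

df["is_weekend"] = df["date"].dt.dayofweek >= 5
df["is_promo"] = df["date"].isin(promo_days)

# Analyze "normal" traffic separately from weekend/promo traffic
normal = df[~df["is_weekend"] & ~df["is_promo"]]
print(normal)
```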
Weekly cadence plus no peeking fixed my tests.
Pick a cadence and stick to it. Seven-day blocks are a good default because weekday behavior repeats. Use a registered plan for what you’ll measure and when you’ll stop. Control for promo spikes and new channels by pausing tests those days. CUPED helps if you track a pre-exposure metric like quiz completion or time-to-paywall. When in doubt, rerun a winner against the old control to verify it wasn’t noise.
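A bare-bones version of the CUPED adjustment (numpy sketch; the pre-exposure metric here could be something like quiz completion or time-to-paywall, and the toy data is made up):

```python
import numpy as np

def cuped_adjust(post_metric: np.ndarray, pre_metric: np.ndarray) -> np.ndarray:
    """CUPED-adjusted metric: y - theta * (x - mean(x)),
    where theta = cov(x, y) / var(x) is estimated from the pooled data."""
    theta = np.cov(pre_metric, post_metric)[0, 1] / np.var(pre_metric, ddof=1)
    return post_metric - theta * (pre_metric - pre_metric.mean())

# Toy example: a post-exposure metric correlated with a pre-exposure one
rng = np.random.default_rng(1)
pre = rng.normal(50, 10, size=5_000)             # e.g. time-to-paywall
post = 0.4 * pre + rng.normal(0, 8, size=5_000)  # e.g. revenue after exposure

adjusted = cuped_adjust(post, pre)
print(f"variance before: {post.var():.1f}, after CUPED: {adjusted.var():.1f}")
```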
I do weekly windows with a clean holdout. If the offer changes, I reset the clock. Pre-define sample size so I don’t chase noise.
On a small budget, I run two cycles to confirm.
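For pre-defining the sample size, this is roughly the arithmetic (standard two-proportion z-test approximation; the baseline rate and target lift are assumptions you’d swap for your own numbers):

```python
from scipy.stats import norm

def sample_size_per_arm(p_base: float, p_variant: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per arm for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2
    return int(round(n))

# Example: 5% baseline conversion, hoping to detect a lift to 6%
n = sample_size_per_arm(0.05, 0.06)
print(f"~{n} visitors per arm; if one week of traffic can't cover that, "
      "run two one-week cycles and confirm the winner")
```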
We lock variants for a week. No edits during the run.