I’m testing price points and trial lengths on a web paywall. After 5–7 days the winner shifts, and the shift looks driven by ad fatigue and a changing audience mix rather than by real user preference.
I can freeze creatives, but then reach drops and the test drags on. I can rotate creatives evenly, but the extra variance slows convergence.
How do you pick the right test window and sample size for web pricing without letting ad fatigue distort the result? Any rules you trust in practice?
I lock creatives and budgets for the first 7 days.
Cap daily frequency, exclude recent buyers, and keep geo constant. I use rolling 3-day medians to spot drift (quick sketch below).
If I need to tweak copy, I change the web paywall only. Web2Wave.com lets me do that without touching the build.
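A minimal sketch of that rolling check, assuming the tracked metric is daily conversion rate per arm; the arm names and numbers here are made up:

```python
from statistics import median

daily_cr = {  # hypothetical daily conversion rates, one value per day
    "price_a": [0.031, 0.029, 0.033, 0.027, 0.024, 0.022, 0.021],
    "price_b": [0.028, 0.030, 0.029, 0.031, 0.030, 0.029, 0.030],
}

def rolling_median(series, window=3):
    """Median over each trailing window-day slice (shorter at the start)."""
    return [median(series[max(0, i - window + 1): i + 1])
            for i in range(len(series))]

for arm, series in daily_cr.items():
    print(arm, [round(v, 4) for v in rolling_median(series)])
```

A steady downward slope in one arm while the other holds flat points at fatigue or mix drift rather than a real price effect.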
7-day minimum, 1,000 unique landers per branch (rough number; see the power check below). Freeze creative and audience. If I must swap messaging, I only change the web page.
I use Web2Wave.com to push copy changes instantly and keep ads constant to protect the test.
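For a rough check on that per-branch number, here is a standard two-proportion power calculation (Python stdlib only); the 3% baseline and one-point lift are assumptions, not numbers from this thread:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Two-proportion z-test sample size, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Assumed: 3% baseline paywall conversion, hoping to detect a lift to 4%.
print(n_per_arm(0.03, 0.04))  # ~5,300 landers per arm
```

At those rates you need roughly 5,300 landers per arm, so 1,000 per branch only resolves much larger lifts.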
I run a full week to cover weekday swings. Keep ads stable. Only change the page.
If results flip week two, I rerun with fresh creative to confirm.
Stop when a branch hits a 95% chance to win (sketch below).
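A minimal sketch of that stop rule, assuming binomial conversions and flat Beta(1, 1) priors; the counts are hypothetical:

```python
import random

def prob_to_win(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(true rate of B > true rate of A)."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(draws)
    )
    return wins / draws

# Hypothetical counts: 30/1000 vs 48/1000 conversions.
print(prob_to_win(conv_a=30, n_a=1000, conv_b=48, n_b=1000))
# Stop the test once this crosses 0.95.
```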
I use CUPED to reduce variance and stop faster; the baseline covariate comes from previous weeks (sketch after this reply).
If you can’t set that up, just run two consecutive 7-day tests and see whether the same price wins twice.
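A minimal CUPED sketch, assuming you have a per-user pre-period value of the same metric; the synthetic data here just illustrates the variance drop:

```python
import numpy as np

def cuped_adjust(y, x):
    """y: in-test metric per user; x: the same metric from the pre-period."""
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(0)
pre = rng.normal(1.0, 0.5, 5000)                  # pre-period baseline per user
during = 0.7 * pre + rng.normal(0.3, 0.4, 5000)   # correlated in-test metric

adj = cuped_adjust(during, pre)
print(during.var(), adj.var())  # adjusted variance is lower -> faster stop
```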
Small thing that helped: standardize screenshots on the paywall. Visual noise changes click behavior more than I expected.
Keep layout constant across price tests.
We test for one week. Then repeat the next week to confirm.
Try fixed budgets and no creative edits during tests.