How many price and onboarding experiments could we actually run per week after moving the paywall to the web?

Before we moved the paywall, a price or copy change meant a new app build and a 7–14 day review. After switching to the web, we went from 1–2 experiments per month to multiple per day in the funnel itself.

In practice we ran 10–20 small tests per week: headline tweaks, price points, trial-length changes, and upsell timing. Most were low-risk and short-duration. The ones that moved the needle were A/Bs that changed pricing tiers or trial length.

The real benefit wasn’t just quantity but the speed to learn: we found an offer that improved conversion by ~6% within a week, and we could roll back or re-test just as quickly when something didn’t hold up.

What constraints have others hit when trying to scale experiment velocity on web funnels?

We went from monthly tests to daily small changes.

The trick was feature flags and a simple builder so non-devs could push copy and price updates.

Saved us a lot of review cycles and calendar time.

Speed is the secret.

I treat the web funnel as an experiment lab. Fast drafts, quick wins, and immediate rollbacks.

That cadence let us test more hypotheses than any app-store cycle ever could.

We hit a bottleneck when too many experiments overlapped and results conflicted.

Now we limit concurrent tests per traffic segment.

Helps keep signals clean.

Run small tests fast, kill losers quick.

The upper bound for experiments is traffic and proper segmentation.

You can spin up dozens of variations, but without enough users per cell you get noise, not insight. Control concurrency and keep a test registry so variants don’t clash. Use adaptive traffic allocation to push more visitors toward promising variants.

Start with copy and price tests that require the least engineering.

Those are usually the fastest wins and teach you what to build next.

We capped concurrent tests to three per funnel.

That kept analysis clean and made wins actionable.