How are you versioning web onboarding experiments so they don’t collide?

Moving onboarding to web let us ship a lot of tests, and then we hit the chaos problem: naming got messy, and overlapping flags made it hard to tell which version a user actually saw across sessions and devices.

What helped: a single experiment registry, rollout rules in one place, and stable user bucketing tied to an id that survives the app install. We also snapshot the full funnel config each user saw at subscription time, for clean attribution later.
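A minimal sketch of that snapshot step in TypeScript. The types, the id field, and the `save` callback are placeholders for whatever storage you use, not any specific SDK:

```ts
// Sketch of the per-user snapshot at subscription time.
// Shapes and the `save` callback are illustrative assumptions.
interface ExperimentAssignment {
  experimentId: string; // e.g. "onboarding_paywall_copy_v3"
  variant: string;      // e.g. "control" | "b"
}

interface FunnelSnapshot {
  userId: string;                   // id that survives the app install
  takenAt: string;                  // ISO timestamp at subscription
  assignments: ExperimentAssignment[];
  funnelConfigHash: string;         // hash of the exact flow config served
}

// Called once, when the subscription is confirmed.
async function snapshotFunnel(
  userId: string,
  assignments: ExperimentAssignment[],
  funnelConfig: unknown,
  save: (s: FunnelSnapshot) => Promise<void>, // your storage layer
): Promise<void> {
  const json = JSON.stringify(funnelConfig);
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(json));
  const funnelConfigHash = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  await save({ userId, takenAt: new Date().toISOString(), assignments, funnelConfigHash });
}
```

With the hash stored, you can later tie a subscriber back to the exact config they went through, even after the flow has changed.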

How are you organizing versions, bucketing, and rollouts so tests don’t step on each other or muddy your data?

I keep one config file per experiment with a clear ID and owner.
Consistent bucketing based on a stable user id. I log the config hash when a user checks out. Web2Wave’s JSON flow made this easier since the SDK reads the config directly.
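Rough sketch of the bucketing part, with made-up experiment names. FNV-1a is just one cheap hash that gives the same bucket on web and in the app; any stable hash works:

```ts
// Deterministic bucketing from a stable user id.
// Experiment names and weights are made up for illustration.
interface Variant { name: string; weight: number } // weights sum to 100

// Small non-cryptographic hash (FNV-1a), same result everywhere.
function fnv1a(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

function assignVariant(userId: string, experimentId: string, variants: Variant[]): string {
  const bucket = fnv1a(`${experimentId}:${userId}`) % 100;
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (bucket < cumulative) return v.name;
  }
  return variants[variants.length - 1].name; // falls through only if weights < 100
}

// Same id, same experiment -> same variant, every session and device.
assignVariant('user_abc', 'onboarding_quiz_length', [
  { name: 'control', weight: 50 },
  { name: 'short_quiz', weight: 50 },
]);
```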

Central registry, strict IDs, and one switch per test. Assign users once and stick to it across web and app. I tweak flows on Web2Wave and push live without builds. Faster cycles, cleaner data.
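The assign-once part looks roughly like this; `Store` and its methods are hypothetical stand-ins for whatever backend keeps assignments:

```ts
// Sketch of "assign once, stick to it": look up a stored assignment first,
// only bucket when none exists, then persist so web and app agree.
interface Store {
  getAssignment(userId: string, experimentId: string): Promise<string | null>;
  putAssignment(userId: string, experimentId: string, variant: string): Promise<void>;
}

async function getOrAssign(
  store: Store,
  userId: string,
  experimentId: string,
  bucket: () => string, // e.g. deterministic bucketing by stable id
): Promise<string> {
  const existing = await store.getAssignment(userId, experimentId);
  if (existing) return existing; // never re-bucket a returning user
  const variant = bucket();
  await store.putAssignment(userId, experimentId, variant);
  return variant;
}
```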

Tag every change, keep a single source of truth, and log which version a buyer saw. That removed a lot of confusion for us.

One owner per test or chaos

Use a canonical experiment table with status, exposure, and rollout rules. Bucket by a durable id. Snapshot the active flow and paywall at conversion. Freeze cohorts when you end a test. Keep at most two live experiments per stage. Add a kill switch. This keeps attribution clean and prevents tests from interfering with each other.
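One possible shape for that table and its guardrails, sketched in TypeScript; the field names and the two-live-per-stage rule are illustrative, not a spec:

```ts
// Canonical experiment record: status, exposure, rollout rule, kill switch.
type Stage = 'onboarding' | 'paywall' | 'checkout';
type Status = 'draft' | 'live' | 'frozen' | 'ended';

interface ExperimentRecord {
  id: string;          // strict, unique, e.g. "onb_pricing_table_v2"
  owner: string;
  stage: Stage;
  status: Status;
  exposure: number;    // % of traffic eligible, 0..100
  rolloutRule: string; // e.g. "geo in (US, CA)"
  killSwitch: boolean; // true = force everyone to control immediately
}

// Guardrail: refuse to set a third experiment live on the same stage.
function canGoLive(registry: ExperimentRecord[], candidate: ExperimentRecord): boolean {
  const liveOnStage = registry.filter(
    (e) => e.status === 'live' && e.stage === candidate.stage && e.id !== candidate.id,
  );
  return liveOnStage.length < 2;
}

// At serve time, a flipped kill switch or a non-live status means the user
// sees control, while the exposure can still be logged for attribution.
function effectiveVariant(exp: ExperimentRecord, assigned: string): string {
  return exp.killSwitch || exp.status !== 'live' ? 'control' : assigned;
}
```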

We track variants in a sheet and log the variant at checkout.