I’ve been A/B testing onboarding and paywall variants on the web before I touch native screens. It’s fast, and I can watch full‑funnel metrics with UTMs intact. But I’m trying to separate what actually predicts native performance from what only looks good on the web.
What I’ve found so far:
- plan_selected correlates well with in‑app purchase rate. It’s my best mid‑funnel predictor (a rough way to check this is sketched after the list).
- checkout_started lifts don’t always carry over. Native payment sheets have far less friction than a web checkout form, so a variant that mostly smooths web checkout can gain nothing in the app.
- trial_started is helpful for sizing, but I trust purchase_succeeded and 7‑day retention more when promoting variants.
- Device mix matters. If the web test skews desktop, the results will mislead you about the iOS build.
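For what it’s worth, here’s a minimal sketch of the sanity check I mean: line up each shipped variant’s web plan_selected rate against its native purchase rate and look at the correlation. Every number below is made up for illustration, and with only a handful of shipped variants the r value is directional at best (needs Python 3.10+ for `statistics.correlation`):

```python
# Sanity check: does a web mid-funnel rate track native purchase rate
# across variants? Every number here is made up for illustration.
from statistics import correlation  # stdlib, Python 3.10+

# One entry per variant that was tested on web and later shipped natively
web_plan_selected_rate = [0.18, 0.22, 0.15, 0.26, 0.20]
native_purchase_rate = [0.031, 0.038, 0.024, 0.044, 0.033]

r = correlation(web_plan_selected_rate, native_purchase_rate)
print(f"Pearson r (web plan_selected vs. native purchase): {r:.2f}")
```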
Guardrails I use:
- Match device mix to the app audience (see the reweighting sketch after this list).
- Keep payment methods similar to what the app supports.
- Ship the winning copy first, not the exact layout, to reduce UI drift.
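On the device-mix guardrail, what I do is closer to post‑stratification: segment the web result by device, then re‑weight by the app audience’s mix instead of trusting the blended number. A minimal sketch, assuming you can pull per‑device conversion; all mixes and rates are hypothetical:

```python
# Re-weight a web variant's conversion by the app audience's device mix
# (simple post-stratification). All mixes and rates are hypothetical.

app_device_mix = {"mobile": 0.85, "tablet": 0.10, "desktop": 0.05}

# Observed web conversion for one variant, segmented by device
web_conversion = {"mobile": 0.021, "tablet": 0.025, "desktop": 0.034}

# What the blended web number would look like for an app-like audience
adjusted = sum(app_device_mix[d] * web_conversion[d] for d in app_device_mix)
print(f"Device-adjusted web conversion: {adjusted:.3%}")
```

If desktop converts better and your web traffic skews desktop, the raw blended rate can crown a variant the reweighted number demotes, which is exactly the “clear web winner flops in the app” failure mode.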
If you’ve run this play, which web signals predicted native conversion best for you? Any examples where a clear web winner flopped in the app, and why?