I got tired of guessing why people disappear between click and pay. Moving onboarding + the paywall to the web let me test fast and isolate the exact step that bleeds users. What helped most (rough TypeScript sketches after the list):
- Instrument every step with a consistent schema: step_name, step_index, variant_id, offer_id, price_id, trial_length, device, country, utm_source/medium/campaign, session_id, user_id (if known). Event shape sketched below.
- Track time_on_step, scroll_depth, errors, rage_clicks, and whether the next step was reached.
- Split checkout into clear events: paywall_view, payment_intent_start, payment_intent_error (with reason), payment_success, refund_requested.
- Add a light exit-intent survey on the paywall with 3–5 reasons (price, unclear value, signup friction, payment trust, other). Keep it optional; sketch below.
- Run small, quick tests: change step order, remove one field, tweak trial length, swap copy, add a screenshot carousel, adjust price anchors. Ship one variable per test (deterministic assignment sketched below).
- Compare cohorts by traffic source and creative (comparison sketch below). If meta_video_A has a normal CTR but an unusual drop at payment_intent_start, that’s a messaging mismatch, not a broken form.
- Validate “false drops” with session replays or synthetic flows when a metric spikes (synthetic-flow sketch below).
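
To make the first few bullets concrete, here's roughly what one event looks like in TypeScript. Treat it as a sketch under assumptions: the field and event names mirror the bullets above, but the `/events` endpoint and the `track` helper are placeholders for whatever your analytics pipeline actually exposes.

```ts
// Checkout event names from the split-checkout bullet.
type CheckoutEventName =
  | "paywall_view"
  | "payment_intent_start"
  | "payment_intent_error"
  | "payment_success"
  | "refund_requested";

// One consistent shape for every step event.
type FunnelEvent = {
  step_name: string;        // e.g. "benefits_slide"
  step_index: number;       // position in the flow
  variant_id: string;       // experiment arm
  offer_id?: string;
  price_id?: string;
  trial_length?: number;    // days
  device: string;
  country: string;
  utm_source?: string;
  utm_medium?: string;
  utm_campaign?: string;
  session_id: string;
  user_id?: string;         // only if known
  // per-step signals
  time_on_step_ms?: number;
  scroll_depth?: number;    // 0..1
  error?: string;           // e.g. payment_intent_error reason
  rage_clicks?: number;
  reached_next_step?: boolean;
};

// sendBeacon survives page unloads, which matters for exit events.
function track(name: CheckoutEventName | (string & {}), payload: FunnelEvent): void {
  navigator.sendBeacon("/events", JSON.stringify({ name, ts: Date.now(), ...payload }));
}
```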
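For the exit-intent survey, a minimal browser-side sketch. The top-edge `mouseout` check is the usual desktop heuristic; `showExitSurvey` is a hypothetical hook into whatever modal you already have.

```ts
// Reasons offered on the paywall exit survey.
const EXIT_REASONS = ["price", "unclear_value", "signup_friction", "payment_trust", "other"] as const;

// Placeholder: your own survey UI.
declare function showExitSurvey(
  reasons: readonly string[],
  onPick: (reason: string) => void
): void;

let surveyShown = false;
document.addEventListener("mouseout", (e) => {
  // Cursor left the viewport through the top edge (likely heading for the tab bar).
  const leftThroughTop = e.relatedTarget === null && e.clientY <= 0;
  if (leftThroughTop && !surveyShown) {
    surveyShown = true; // fire at most once per page
    showExitSurvey(EXIT_REASONS, (reason) =>
      navigator.sendBeacon("/events", JSON.stringify({ name: "paywall_exit_reason", reason, ts: Date.now() }))
    );
  }
});
```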
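"One variable per test" only holds if a session always lands in the same arm, so assignment should be deterministic, not random per page load. A minimal sketch using a plain FNV-1a hash; the test name and arms are illustrative.

```ts
// FNV-1a: tiny, fast, stable string hash (not cryptographic).
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Same session + same test name => same arm, every time.
function assignVariant(sessionId: string, testName: string, arms: string[]): string {
  return arms[fnv1a(`${testName}:${sessionId}`) % arms.length];
}

// e.g. const variant_id = assignVariant(session_id, "trial_length_v2", ["7d", "14d"]);
```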
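For the cohort comparison, something like the following over flat event rows surfaces a source/creative pair with an outlier drop. The row shape is an assumption; in practice this is usually a warehouse query, but the logic is the same.

```ts
type Row = { name: string; session_id: string; utm_source?: string; utm_campaign?: string };

// Per-cohort conversion from one event to the next, keyed by source/campaign.
function conversionByCohort(rows: Row[], fromEvent: string, toEvent: string): Map<string, number> {
  const started = new Map<string, Set<string>>();
  const finished = new Map<string, Set<string>>();
  for (const r of rows) {
    const cohort = `${r.utm_source ?? "direct"} / ${r.utm_campaign ?? "none"}`;
    const bucket = r.name === fromEvent ? started : r.name === toEvent ? finished : null;
    if (!bucket) continue;
    if (!bucket.has(cohort)) bucket.set(cohort, new Set());
    bucket.get(cohort)!.add(r.session_id);
  }
  const out = new Map<string, number>();
  for (const [cohort, startedSessions] of started) {
    const done = finished.get(cohort) ?? new Set<string>();
    let converted = 0;
    for (const id of startedSessions) if (done.has(id)) converted++;
    out.set(cohort, converted / startedSessions.size);
  }
  return out;
}

// e.g. conversionByCohort(rows, "paywall_view", "payment_intent_start")
```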
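And for checking whether a spike is a real regression or a tracking bug, a synthetic-flow sketch assuming Playwright. Every URL and selector is a placeholder for your own steps; the point is just that the script fails loudly if the flow itself broke.

```ts
import { chromium } from "playwright";

// Walk one variant of the flow end to end and assert the payment form renders.
async function syntheticCheckout(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  try {
    await page.goto("https://example.com/onboarding?variant=control"); // placeholder URL
    await page.click("text=Continue");                                 // step through onboarding
    await page.waitForSelector("#paywall");                            // did we reach the paywall?
    await page.click("#start-trial");
    await page.waitForSelector("#payment-form", { timeout: 10_000 });  // payment UI actually loads
    console.log("flow reachable: payment form rendered");
  } finally {
    await browser.close();
  }
}

syntheticCheckout().catch((err) => {
  console.error("synthetic flow broke:", err);
  process.exit(1);
});
```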
The biggest unlock for me was tying time_on_step + exit reason to the exact variant. That pointed at a single confusing benefits slide, not the paywall itself. Fixing that slide lifted click-through to the paywall by ~9% without touching price.
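
In practice that "tie" was just a group-by over the same rows. A rough version, with the row shape again an assumption:

```ts
type StepRow = { variant_id: string; step_name: string; time_on_step_ms?: number; exit_reason?: string };

// Rough median time_on_step and top exit reason per (variant, step).
function summarize(rows: StepRow[]) {
  const groups = new Map<string, { times: number[]; reasons: Map<string, number> }>();
  for (const r of rows) {
    const key = `${r.variant_id} | ${r.step_name}`;
    const g = groups.get(key) ?? { times: [], reasons: new Map<string, number>() };
    if (r.time_on_step_ms !== undefined) g.times.push(r.time_on_step_ms);
    if (r.exit_reason) g.reasons.set(r.exit_reason, (g.reasons.get(r.exit_reason) ?? 0) + 1);
    groups.set(key, g);
  }
  return [...groups].map(([key, g]) => {
    g.times.sort((a, b) => a - b);
    const median = g.times.length ? g.times[Math.floor(g.times.length / 2)] : null;
    const top = [...g.reasons].sort((a, b) => b[1] - a[1])[0]?.[0] ?? null;
    return { key, median_time_ms: median, top_exit_reason: top };
  });
}
```

Sorting that output by median_time_ms is what pointed at the benefits slide: one variant/step pair sat way above the rest with "unclear value" as its top reason.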
What events or tiny tests actually surfaced your worst drop-off, and how did you verify it wasn’t just noise?