We added an explicit win-back path on the web and experimented with partial refunds and discounted reactivation offers. The results were revealing.
For some churners, price was the issue: offering a prorated refund or a discounted re-entry won them back and left them with a better impression of the product. Other churners accepted a refund and rarely came back, which pointed to a product mismatch rather than price sensitivity.
Two practical learnings: 1) refunds and offers are data points, not just customer-service gestures, and 2) you must watch refund volume, or you risk payment-processor flags.
Has anyone used refunds or targeted win-back discounts as a diagnostic tool rather than just a retention tactic? What metrics did you watch to validate the experiment?
I started treating refunds as experiments.
We offered a partial refund plus a short discounted re-entry and tracked reactivation rate and 90-day retention.
That told us who left for price reasons and who left because the app didn’t fit them.
Win-back flows can be diagnostic if you test them properly.
I segment churners by cancellation reason, run two variants (discount vs. refund), and measure reactivation and 30-day retention. Using a web funnel made issuing refunds and measuring outcomes straightforward.
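To make that concrete, here's a rough sketch of how I bucket and measure. All names (`assign_variant`, `variant_results`, the event fields) are hypothetical; the point is just deterministic assignment so a churner always lands in the same variant, plus a per-variant rollup of reactivation and 30-day retention:

```python
import hashlib

def assign_variant(user_id: str, variants=("discount", "refund")) -> str:
    """Deterministically bucket a churned user into a win-back variant.

    Hashing the user id means re-running the assignment never flips anyone
    between variants mid-experiment.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def variant_results(events):
    """Aggregate outcomes per variant.

    events: list of dicts with keys user_id, reactivated (bool),
    retained_30d (bool) -- hypothetical field names.
    """
    stats = {}
    for e in events:
        v = assign_variant(e["user_id"])
        s = stats.setdefault(v, {"n": 0, "reactivated": 0, "retained_30d": 0})
        s["n"] += 1
        s["reactivated"] += int(e["reactivated"])
        s["retained_30d"] += int(e["retained_30d"])
    return {
        v: {
            "reactivation_rate": s["reactivated"] / s["n"],
            "retention_30d": s["retained_30d"] / s["n"],
        }
        for v, s in stats.items()
    }
```

In practice the events would come out of your billing/analytics export; the rollup is the same either way.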
We tried a refund plus 50% off, and some users reactivated and stuck around.
That showed us many cancellations were price-related, not product-related.
Refunds tell you whether it was price or product.
Treat refunded users as an experiment cohort. Track reactivation rate, days to first use after reactivation, and LTV for that cohort. If refunded users who come back show engagement similar to organically retained users, price was the problem; if not, product fit is weak. Also monitor refund rates closely to avoid payment-gateway flags.
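A minimal sketch of that cohort rollup (field names like `reactivated_on` and `ltv` are made up; swap in whatever your analytics store uses):

```python
from datetime import date

def cohort_metrics(users):
    """Summarize a refund-experiment cohort.

    users: list of dicts with hypothetical keys:
      reactivated_on -- date the user reactivated, or None
      first_use_on   -- date of first post-reactivation use, or None
      ltv            -- lifetime value so far, in your currency
    """
    n = len(users)
    reactivated = [u for u in users if u["reactivated_on"] is not None]
    rate = len(reactivated) / n if n else 0.0

    # Days from reactivation to first use, for those who actually came back.
    days_to_first_use = [
        (u["first_use_on"] - u["reactivated_on"]).days
        for u in reactivated
        if u["first_use_on"] is not None
    ]
    avg_days = (
        sum(days_to_first_use) / len(days_to_first_use)
        if days_to_first_use else None
    )
    avg_ltv = sum(u["ltv"] for u in users) / n if n else 0.0
    return {
        "reactivation_rate": rate,
        "avg_days_to_first_use": avg_days,
        "avg_ltv": avg_ltv,
    }
```

Run the same function over your organically retained users and compare the numbers side by side; that comparison is what tells you price vs. product fit.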
We capped refunds and used discount reactivations instead.
Discounts taught us more than refunds did.
Track refunds per campaign too. Some ads drove a disproportionate share of refunds.