Changing the formula can reveal a lot. I hit a similar issue when I was focused only on installs.
I started analyzing which sources yielded users who actually registered, rather than just downloaded the app. My supposedly top ad networks turned out to be sending users who never even launched it.
Now, I track churn based on actual usage milestones instead of just install dates.
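Roughly what that looks like in code, as a minimal sketch. The event log, the "register" milestone, and the 14-day inactivity window are all made up for illustration:

```python
from datetime import date

# Made-up event log: (user_id, event, day). The milestone in my case
# was registration; the 14-day inactivity window is arbitrary.
events = [
    ("u1", "install",  date(2024, 1, 1)),
    ("u1", "register", date(2024, 1, 2)),
    ("u1", "session",  date(2024, 1, 20)),
    ("u2", "install",  date(2024, 1, 1)),  # installed, never registered
    ("u3", "install",  date(2024, 1, 3)),
    ("u3", "register", date(2024, 1, 3)),
]

def milestone_date(user, milestone="register"):
    """First date the user hit the activation milestone, or None."""
    days = [d for uid, ev, d in events if uid == user and ev == milestone]
    return min(days) if days else None

def churned(user, as_of, window_days=14):
    """Activated but inactive for the trailing window. Users who never
    hit the milestone aren't counted as churn; they're a funnel problem."""
    if milestone_date(user) is None:
        return None
    last_seen = max(d for uid, _, d in events if uid == user)
    return (as_of - last_seen).days > window_days

for u in ("u1", "u2", "u3"):
    print(u, churned(u, as_of=date(2024, 2, 1)))
# u1 False (still active), u2 None (never activated), u3 True (churned)
```

The key move is that the churn clock starts at the milestone, not the install, so drive-by downloads stop polluting the denominator.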
Revenue churn vs. user churn can surface different problems. Had a client with 8% monthly user churn who thought they were fine. Their revenue churn was actually 15% because their biggest customers were leaving first. The average hid that power users were jumping ship while the freeloaders stayed. Now they track both and weight customers by what they're actually worth.
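A toy sketch of why the two numbers diverge. The customer list here is invented to mirror those rough percentages, not the client's real data:

```python
# Invented numbers, shaped to mirror those rough percentages.
customers = (
    [(500, True)]           # one big account churned
    + [(25, True)]          # one small account churned
    + [(125, False)] * 23   # 23 mid-size accounts stayed
)

# User churn: fraction of accounts lost, regardless of size.
churned_users = sum(1 for _, gone in customers if gone)
user_churn = churned_users / len(customers)

# Revenue churn: fraction of MRR lost, which weights big accounts.
lost_mrr = sum(mrr for mrr, gone in customers if gone)
revenue_churn = lost_mrr / sum(mrr for mrr, _ in customers)

print(f"user churn:    {user_churn:.1%}")     # 8.0%
print(f"revenue churn: {revenue_churn:.1%}")  # 15.4%
```

Same dataset, nearly double the churn rate, purely because of who is leaving.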
Same thing happened with our subscription app. We were tracking monthly churn but completely missed the drop-off happening in the first week after signup.
Switched to cohort analysis and started tracking day 1, 7, and 30 retention separately. Turns out 40% of users bailed within the first week. The monthly numbers made it look like gradual churn, but we actually had a massive onboarding problem.
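For anyone curious, a minimal sketch of that cohort breakdown. The signup and activity data are made up, and "retained at day N" here means active on or after day N post-signup, which is one of several reasonable definitions:

```python
from collections import defaultdict
from datetime import date

# Made-up data: signup date per user, plus the days each was active.
signups = {
    "u1": date(2024, 1, 1), "u2": date(2024, 1, 1), "u3": date(2024, 1, 1),
    "u4": date(2024, 1, 8), "u5": date(2024, 1, 8),
}
activity = {
    "u1": {date(2024, 1, 2), date(2024, 1, 8), date(2024, 1, 31)},
    "u2": {date(2024, 1, 2)},  # bailed in week one
    "u3": set(),
    "u4": {date(2024, 1, 9), date(2024, 1, 16)},
    "u5": {date(2024, 1, 9)},
}

# Group users into cohorts by signup date, then measure each cohort
# at fixed offsets instead of one blended monthly number.
cohorts = defaultdict(list)
for user, start in signups.items():
    cohorts[start].append(user)

def retained(user, day_n):
    """Active on or after day N post-signup (one simple definition)."""
    start = signups[user]
    return any((d - start).days >= day_n for d in activity[user])

for start, users in sorted(cohorts.items()):
    rates = {n: sum(retained(u, n) for u in users) / len(users) for n in (1, 7, 30)}
    print(start, {f"d{n}": f"{r:.0%}" for n, r in rates.items()})
```

Once retention is broken out per cohort and per day offset, a week-one cliff shows up immediately instead of being smeared across the monthly average.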
What’d you end up changing? Wondering if you hit the same issue or found something different.