RCTs are everything. Budget for testing audiences your algorithm misses. Track behaviors and interests - demographics aren’t enough. I check for bias drift monthly by comparing my targeting to market data. When gaps appear, I add fresh data from underserved segments. Also cap spend per audience so you don’t get tunnel vision on one profitable group.
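Here's a rough sketch of what that monthly drift check can look like. The segment names, shares, and the 5-point threshold are all made up for illustration; it assumes you can export per-segment spend shares and have market benchmarks to compare against:

```python
# Monthly bias-drift check: compare each segment's share of spend against
# its share of the addressable market and flag big gaps. All numbers and
# the 5-point threshold are illustrative.

targeting_share = {"18-24": 0.45, "25-34": 0.35, "35-44": 0.12, "45+": 0.08}
market_share    = {"18-24": 0.25, "25-34": 0.30, "35-44": 0.25, "45+": 0.20}

DRIFT_THRESHOLD = 0.05  # flag segments underserved by >5 percentage points

def drift_report(targeting, market, threshold=DRIFT_THRESHOLD):
    """Segments whose spend share trails their market share by > threshold."""
    gaps = {seg: market[seg] - targeting.get(seg, 0.0) for seg in market}
    return {seg: gap for seg, gap in gaps.items() if gap > threshold}

for seg, gap in sorted(drift_report(targeting_share, market_share).items(),
                       key=lambda kv: -kv[1]):
    print(f"{seg}: underserved by {gap:.0%}")
```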
Had this exact problem with a dating app campaign. The algorithm kept targeting the same age groups and interests because that’s what converted historically.
Holdout tests saved me. I forced the system to explore different segments by manually throwing 20% of budget at demographics we’d barely touched.
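If you want that 80/20 split as an actual allocation rule, something like this sketch works. The segment stats, the impression floor, and the conversion-rate weighting are all illustrative assumptions, not what any ad platform does internally:

```python
# 80/20 forced-exploration split: 80% of budget follows historical
# conversion rates, 20% spreads evenly across segments we've barely
# touched. All numbers here are illustrative placeholders.

budget = 10_000.0
history = {  # segment -> (impressions so far, conversion rate)
    "core_25_34":   (500_000, 0.032),
    "core_18_24":   (300_000, 0.028),
    "rural_45plus": (4_000,   0.010),
    "students":     (2_500,   0.015),
}

UNDEREXPLORED = 10_000  # fewer impressions than this = barely touched

explore = [s for s, (imps, _) in history.items() if imps < UNDEREXPLORED]
exploit = [s for s in history if s not in explore]

# 20% spread evenly across underexplored segments (if any exist)
alloc = {s: 0.2 * budget / len(explore) for s in explore} if explore else {}

# remaining 80% weighted by historical conversion rate
total_cvr = sum(history[s][1] for s in exploit)
for s in exploit:
    alloc[s] = 0.8 * budget * history[s][1] / total_cvr

for s, amt in sorted(alloc.items(), key=lambda kv: -kv[1]):
    print(f"{s}: ${amt:,.0f}")
```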
I also tracked more than just conversion rate. Added diversity scores to see if we were actually reaching new people. Sometimes slightly lower conversion from fresh audiences meant way better long-term growth.
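One concrete way to score diversity is normalized Shannon entropy over the segments you reach: 1.0 means perfectly even reach, 0.0 means everyone comes from one segment. Just a sketch with made-up reach numbers; pick whatever metric fits your funnel:

```python
# Diversity score as normalized Shannon entropy of reach across segments:
# 1.0 = perfectly even reach, 0.0 = everything concentrated in one segment.
# Reach numbers are made up for illustration.
import math

def diversity_score(reach_by_segment):
    total = sum(reach_by_segment.values())
    probs = [r / total for r in reach_by_segment.values() if r > 0]
    if len(probs) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(reach_by_segment))

skewed = {"18-24": 9_000, "25-34": 800, "35-44": 150, "45+": 50}
even   = {"18-24": 2_500, "25-34": 2_500, "35-44": 2_500, "45+": 2_500}
print(f"skewed reach: {diversity_score(skewed):.2f}")  # ~0.28
print(f"even reach:   {diversity_score(even):.2f}")    # 1.00
```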
Biggest lesson: you’ve got to actively fight the bias. The algorithm won’t fix itself.
For data cleaning, I audit training sets every quarter. Toss obviously skewed examples and balance underrepresented groups by collecting more samples from them.
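A sketch of what that quarterly audit can look like in pandas. The toy data, column names, and 10% floor are assumptions, and the resampling step is a quick stand-in; collecting fresh samples from the thin segments beats duplicating rows:

```python
# Quarterly training-set audit: check how example counts split across
# segments, flag anything under a floor, and patch by resampling.
import pandas as pd

df = pd.DataFrame({
    "segment":   ["18-24"] * 70 + ["25-34"] * 22 + ["45+"] * 8,
    "converted": [1, 0] * 50,
})

FLOOR = 0.10  # every segment should be at least 10% of the training set
shares = df["segment"].value_counts(normalize=True)
thin = shares[shares < FLOOR].index.tolist()
print("needs more samples:", thin)

# Quick stand-in rebalance: upsample thin segments to the floor.
target = int(len(df) * FLOOR)
patches = [df[df["segment"] == seg].sample(target, replace=True, random_state=0)
           for seg in thin]
balanced = pd.concat([df[~df["segment"].isin(thin)], *patches], ignore_index=True)
print(balanced["segment"].value_counts(normalize=True).round(2))
```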
Speed wins over perfection every time. I run two campaigns each week - one optimized, one with diversity constraints. Web2Wave lets me flip targeting settings instantly without waiting for platform approval. I test bias-breaking changes live and track both conversions and reach. If something’s not working, I kill it in days, not months.
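The weekly kill/keep call can be made less gut-feel with a two-proportion z-test on conversions plus a look at incremental reach. This is a generic sketch with invented numbers, nothing Web2Wave-specific:

```python
# Weekly kill/keep check between the optimized and diversity-constrained
# campaigns: a two-proportion z-test on conversion rate, plus raw reach.
import math

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# optimized: 320 conversions on 10k users; diverse: 270 on 10k
z = two_prop_z(320, 10_000, 270, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at 95%

# Kill-rule sketch: drop the diverse variant only if it's significantly
# worse on conversion AND isn't buying meaningfully more new reach.
new_reach_optimized, new_reach_diverse = 1_200, 4_800
if z > 1.96 and new_reach_diverse < 2 * new_reach_optimized:
    print("kill the diverse variant this week")
else:
    print("keep both running")
```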
I test my main algorithm using lookalike audiences built from my worst-converting segments, not my best ones.
Everyone targets top performers, but I put 15% of my budget toward inverse lookalikes. You'd be surprised how often this surfaces users who resemble the rare converters hiding in weak segments.
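Building the inverse-lookalike seed is mechanical once you have per-segment stats: rank segments by conversion rate, take the bottom ones, and export their converters as the seed list. Sketch with made-up data:

```python
# Inverse-lookalike seed: rank segments by conversion rate, take the
# bottom half, and use the converters inside them as the lookalike
# source. Segment stats and user IDs are made up for illustration.

segments = {  # segment -> (users reached, conversions, converter IDs)
    "core_25_34":   (50_000, 1_600, ["u1", "u2"]),
    "core_18_24":   (40_000, 1_100, ["u3"]),
    "rural_45plus": (5_000,  40,    ["u4", "u5"]),
    "night_shift":  (3_000,  30,    ["u6"]),
}

ranked = sorted(segments, key=lambda s: segments[s][1] / segments[s][0])
worst = ranked[: len(ranked) // 2]  # bottom half by conversion rate

seed_audience = [uid for s in worst for uid in segments[s][2]]
print("worst segments:", worst)
print("seed for inverse lookalike:", seed_audience)
```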
Don’t just track initial conversions either. Track lifetime value by segment. Those weird edge cases often become your most valuable customers down the road.
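Tracking LTV by segment is basically one groupby once revenue events carry the acquisition segment. Toy data below; real events would come from your payments or analytics export:

```python
# LTV by segment: sum revenue per user, then average per acquisition
# segment. Toy events; real ones would come from payments/analytics.
import pandas as pd

events = pd.DataFrame({
    "user":    ["a", "a", "b", "c", "c", "c", "d"],
    "segment": ["core", "core", "core", "edge", "edge", "edge", "edge"],
    "revenue": [9.99, 9.99, 4.99, 9.99, 9.99, 29.99, 0.00],
})

ltv_per_user = events.groupby(["segment", "user"])["revenue"].sum()
print(ltv_per_user.groupby("segment").mean())
# The "weird" edge segment can out-earn the core on a per-user basis.
```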