Sub Club Podcast: Stop Celebrating Conversion Wins Before Checking Renewals – Sara Grana, Yousician
Episode Overview
In this concise yet illuminating episode, host David Barnard sits down with Sara Grana, Revenue Strategist at Yousician (formerly at Babbel), to discuss a critical pitfall in the subscription app world: focusing solely on conversion and early-funnel wins without tracking their long-term impact on renewals and true revenue growth. Sara unpacks why maintaining an "experiment and decision log" is essential, how refunds and chargebacks can quietly undermine growth, and how teams often fool themselves by stacking "wins" that don't translate into actual business outcomes. The conversation is packed with actionable insights for anyone involved in app monetization, particularly through subscriptions.
Key Discussion Points & Insights
The Importance of Tracking Experiments and Decisions
- Experiment Logs as Revenue Maps
- Sara’s first action at both Babbel and Yousician: request a historical log of experiments and major business decisions.
- Quote:
"...look at, okay, what is the map of our revenue over the years?... Having the history of how these four buckets evolve can also tell you a lot."
— Sara Grana [01:48]
- By tracking when revenue jumps or changes and cross-referencing it with business actions (like introducing a new plan), you contextualize what drives your metrics.
- How to Track This Log
- Sara prefers Excel for mapping revenue by cohort and annotating it with events and experiments (a code sketch of the same idea follows below).
- The goal is to flag significant changes and understand their causes for retrospective analysis.
- Quote:
"Basically the revenue bucket. So to say to me it's like Excel, Like I would track in Excel..."
— Sara Grana [03:32]
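Sara keeps this map in a spreadsheet; for teams that prefer code, a minimal pandas sketch of the same idea might look like this. The payment rows, the event log, and the 20% change threshold are all hypothetical, not her actual setup.

```python
# A minimal pandas sketch of Sara's "revenue map": monthly revenue
# annotated with the experiments/decisions that shipped that month.
# Payment rows, event log, and the 20% threshold are hypothetical.
import pandas as pd

payments = pd.DataFrame({
    "paid_at": pd.to_datetime(["2024-01-15", "2024-02-03", "2024-02-20", "2024-03-11"]),
    "net_revenue": [9.99, 9.99, 59.99, 59.99],
})

events = {  # the "decision log": what shipped and when
    "2024-02": "Introduced annual plan",
    "2024-03": "Raised monthly price 10%",
}

monthly = payments.set_index("paid_at").resample("MS")["net_revenue"].sum().to_frame()
monthly["mom_change"] = monthly["net_revenue"].pct_change()
monthly["event"] = [events.get(ts.strftime("%Y-%m"), "") for ts in monthly.index]

# Flag months whose revenue moved more than 20%, so the retrospective
# question becomes "what did we ship here?"
print(monthly[monthly["mom_change"].abs() > 0.20])
```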
Pitfalls of Focusing Only on "Conversion Wins"
- Ignoring Long-Term Cohort Outcomes
- Many companies celebrate A/B test wins (like higher conversions or price increases) without monitoring long-term cohort behavior, especially renewal rates (illustrated in the sketch below).
- Quote:
"You rolled this and then six months later you look at the cohort and say, oh, actually the control group actually is performing better than the test group because they are renewing more."
— Sara Grana [05:03]
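To make the six-month re-read concrete, here is a minimal sketch (invented data and column names) that compares variants on renewal rate rather than on the conversion metric the test was originally called on:

```python
# Sketch: re-reading an A/B test six months later. The test variant
# "won" on conversion at launch; the question now is who renewed.
import pandas as pd

subs = pd.DataFrame({
    "variant":   ["control"] * 4 + ["test"] * 6,
    "converted": [True, True, True, False] + [True] * 6,
    "renewed":   [True, True, True, False] + [True, True, False, False, False, False],
})

summary = subs.groupby("variant").agg(
    conversion_rate=("converted", "mean"),
    renewal_rate=("renewed", "mean"),
)
print(summary)
# test converts better (1.00 vs 0.75) but renews worse (0.33 vs 0.75):
# the launch-day "win" shrinks or reverses once the cohort matures.
```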
- Not Accounting for Refunds and Chargebacks
- Teams often overlook refunds and chargebacks, which can entirely erase perceived improvements, particularly with winback campaigns or aggressive offers.
- Example: users request refunds specifically to trigger special offers, making campaigns net negative (the arithmetic is sketched below).
- Quote:
"...a lot of people are canceling, you know, like putting the auto renewal of right at the beginning. And what they are doing is they're asking for a refund and then they're getting the offer. So you are actually like net negative."
— Sara Grana [05:38]
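A toy version of that arithmetic, with entirely made-up figures, shows how quickly a winback "win" flips once refunds, chargebacks, and dispute fees are netted out:

```python
# Sketch: netting out a winback campaign. All figures are made up
# purely to illustrate the arithmetic.
gross_from_winback = 1000.00    # revenue attributed to the winback offer
refunds_to_get_offer = 800.00   # users who refunded specifically to qualify
chargebacks = 150.00
dispute_fees = 10 * 15.00       # assumed $15 fee per chargeback dispute

net = gross_from_winback - refunds_to_get_offer - chargebacks - dispute_fees
print(f"net impact: {net:+.2f}")  # -100.00: the campaign is net negative
```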
- Lead and Lag Metrics
- There's industry pressure to optimize for fast lead metrics (quick payback, day 7/30/90 ROAS) at the expense of lag metrics like LTV and retention (see the sketch below).
- David notes:
"...sometimes it's like it's just a business decision you have to make...But I think what's really important...is like you got to know what you're sacrificing."
— David Barnard [06:53]
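The trade-off is easy to see in a toy comparison (all spend and revenue figures invented): the channel that wins on day-7 ROAS can lose once 12-month revenue lands.

```python
# Sketch: a channel that looks best on day-7 ROAS can lose on the
# lag metric. All spend/revenue numbers are invented.
channels = {
    # name: (spend, day-7 revenue, realized 12-month revenue)
    "channel_a": (10_000, 3_500, 14_000),
    "channel_b": (10_000, 2_000, 22_000),
}

for name, (spend, d7_rev, m12_rev) in channels.items():
    print(f"{name}: day-7 ROAS={d7_rev / spend:.2f}, 12-month ROAS={m12_rev / spend:.2f}")
# channel_a wins the fast metric (0.35 vs 0.20) but loses the slow one
# (1.40 vs 2.20); optimizing on day 7 alone picks the wrong channel.
```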
Specific Examples & When to Worry
- New Plans (Annuals, Lifetime)
- Big shifts in average revenue per user (ARPU) often mask underlying LTV losses if new plans cannibalize upgrades or repeat purchases (a worked example follows below).
- Quote:
"Lifetime is a huge example because the value that you get from the get go like really high. And then every time...like always, there's something else is going to lose because people are looking at what is the LTV of that something else, right?"
— Sara Grana [08:11]
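A back-of-the-envelope version of that cannibalization, with assumed prices and renewal counts rather than real Yousician or Babbel numbers:

```python
# Sketch: a lifetime plan inflates up-front ARPU but can cannibalize
# subscribers whose renewal stream is worth more. Prices and renewal
# counts are assumed for illustration.
lifetime_price = 120.00
annual_price = 60.00
avg_paid_years = 2.5  # assumed average paid years per annual subscriber

annual_ltv = annual_price * avg_paid_years  # 150.00
print(f"lifetime: {lifetime_price:.2f}, annual-plan LTV: {annual_ltv:.2f}")
# Every annual subscriber who switches to lifetime lifts ARPU today
# and still destroys 30.00 of LTV.
```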
- Payment Methods and Funnels
- Introducing web checkout or new payment channels changes renewal behavior; web subscriptions often renew better than native in-app purchases. Cohorts that aren't segmented by purchase channel will produce false positives (a toy example follows below).
- Discounts often deliver a temporary ARPU boost but may attract churn-prone subscribers.
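To see how the false positive arises, consider a toy mix-shift example (numbers invented): the blended renewal rate "improves" purely because more purchases moved to web, even though neither platform's renewal behavior changed at all.

```python
# Sketch of the false positive: blended renewal "improves" after an
# experiment only because the mix shifted toward web, which always
# renewed better. Numbers are invented.
before = {"web": (100, 0.60), "ios": (300, 0.40)}  # (subscribers, renewal rate)
after  = {"web": (300, 0.60), "ios": (100, 0.40)}  # same per-platform rates!

def blended_renewal(mix):
    total = sum(n for n, _ in mix.values())
    return sum(n * rate for n, rate in mix.values()) / total

print(f"before: {blended_renewal(before):.2%}, after: {blended_renewal(after):.2%}")
# 45.00% -> 55.00% with zero change in either platform's behavior;
# only cohorts segmented by platform reveal that nothing improved.
```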
- Paywall Optimization Pitfalls
- Tactics like free-trial toggles have generated large conversion lifts, but the long-tail effects on churn and LTV are only beginning to surface.
- Quote:
"...I've been shocked at like you know, what a big lift that can have. But again, I haven't talked to folks who looked at that cohort 6 months, 12 months down the line..."
— David Barnard [09:10]
- Attribution Errors
- Marketing and product teams may misread improvements as true product impact when it’s often the result of cohort shifts or channel changes.
How to Properly Analyze Experiments
- Granular Cohorting & Retrospective Analysis
- Always segment cohorts by plan, purchase location (web/app), discounting, and potentially acquisition channel.
- Regularly revisit cohorts at set intervals; Sara literally sets calendar reminders (a sketch of generating those recheck dates follows below).
- Quote:
"My Google calendar up there has this experiment recheck with my laby data analysis."
— Sara Grana [14:59]
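A trivial sketch of generating those recheck dates programmatically; the 3/6/12-month cadence is an assumption for illustration, not Sara's stated schedule.

```python
# Sketch: generate the recheck dates to put on the calendar for each
# shipped experiment. The 3/6/12-month cadence is an assumption.
from datetime import date, timedelta

def recheck_dates(shipped: date, months=(3, 6, 12)) -> list[date]:
    # ~30 days per month is close enough for a calendar reminder
    return [shipped + timedelta(days=30 * m) for m in months]

print(recheck_dates(date(2024, 1, 15)))
# [datetime.date(2024, 4, 14), datetime.date(2024, 7, 13), datetime.date(2025, 1, 9)]
```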
- Prioritizing What to Recheck
- High-impact changes (price, plan, large lifts) always merit follow-up, whereas minor copy/layout tweaks may not unless they produce outsized results.
- Quote:
"Anything that is price changes I really care about...everything that is with price or discounting or things like this I would always recheck after."
— Sara Grana [15:18]
- Beware of “Non-Compounding” A/B Wins
- You may have a list of experiments, each with single-digit percentage lifts, yet company growth doesn't match their cumulative impact; something is being missed (the compounding check below makes this concrete).
- Quote:
"...I look all the list of, you know, experiments with like 5%, 10%, I'm like, but wait a second, like what, why we are not growing, you know, like 30 year on year if we have all these wins."
— Sara Grana [16:47]
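Sara's sanity check is easy to run: compound the claimed lifts and compare them with observed growth. A sketch with invented numbers:

```python
# Sketch: sanity-check claimed A/B lifts against observed growth.
# Six "wins" that should compound to roughly +34% are hard to square
# with 8% actual growth. All numbers invented.
claimed_lifts = [0.05, 0.10, 0.03, 0.05, 0.04, 0.03]

expected = 1.0
for lift in claimed_lifts:
    expected *= 1.0 + lift

observed_growth = 0.08
print(f"expected if wins compounded: +{expected - 1:.1%}")    # +33.8%
print(f"observed year on year:       +{observed_growth:.1%}")  # +8.0%
# The gap means the lifts decayed, overlapped, or were never real.
```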
Memorable Quotes
- Sara Grana on cohort analysis and transparency:
"Most of the time, the commercial strategy that you have, what people buy, when they buy it, which price, are going to have such an enormous impact on your renewals and extensions."
— Sara Grana [10:20]
- David Barnard summarizing the core lesson:
"...you're not actually getting that 10% increase if you're not actually getting that 10% increase. And you're never going to know if you do these experiments in isolation and aren't tracking everything."
— [18:16]
Timestamps for Key Segments
- 00:01–01:22 — Introduction and Sara’s background
- 01:28–03:20 — Why experiment logs matter and revenue mapping
- 05:03–06:53 — Pitfalls of premature conversion optimization
- 08:11–10:10 — Examples: plan changes, payment methods, and misleading ARPU lifts
- 12:26–14:48 — Best practices for analyzing retention and cohort data
- 14:59–16:18 — When and how to set follow-ups and rechecks on experiments
- 16:47–18:00 — The illusion of compounded A/B test wins
Episode Takeaways
- Don’t celebrate paywall or conversion "wins" prematurely.
- Always segment your cohorts by plan, funnel, and acquisition channel.
- Systematically track and revisit major experiments at appropriate intervals, especially those involving pricing or big conversion lifts.
- Recall that refunds and chargebacks can quietly kill your ARPU victories.
- If cumulative test wins don't add up to visible business growth, question your metrics and tracking rigor.
Explore openings at Yousician: yousician.com/careers
Join the Sub Club community: chat.subclub.com
This episode is a must-listen for anyone running experiments or monetization tests in the subscription app business. Sara’s practical advice and transparent approach are a valuable blueprint for truly sustainable revenue growth.
