
A
Welcome to the Sub Club podcast, a show dedicated to the best practices for building and growing app businesses. We sit down with the entrepreneurs, investors, and builders behind the most successful apps in the world to learn from their successes and failures. Sub Club is brought to you by RevenueCat. Thousands of the world's best apps trust RevenueCat to power in-app purchases, manage customers, and grow revenue across iOS, Android, and the web. You can learn more at revenuecat.com. Let's get into the show. Hello, I'm your host, David Barnard. My guest today is Eric Seufert, media strategist, quantitative marketer, author, and investor. Eric currently shares his musings in the Mobile Dev Memo newsletter, blog, and podcast, and invests via Heracles Capital, his early stage venture fund. On the podcast, I talk with Eric about how measurement dysfunction paralyzes growth, why diversifying channels for the sake of diversification actually hurts performance, and the futility of trying to interpret why ads win.
B
Hey Eric, thanks so much for coming back on the podcast.
C
Great to be here, David. Thanks for inviting me back.
B
All right, so we put a Google form out there for folks to ask questions and we got some really good questions in and so I'm going to get to those in a sec. But I was kind of surprised that nobody asked about how to use AI. What's going on with AI? I mean, it's just like that's all everybody's talking about on Twitter. So I was kind of expecting more questions around AI. So I'm going to selfishly lead with my own question because I do feel like we're at a bit of an inflection point where, you know, things are still early, but it feels like you're going to be left further and further behind if you're not at least starting to experiment. So what I wanted to ask you is like, where do you think the low hanging fruit is right now? Like teams that you're seeing be successful, what do you see them doing and using that's like effective today? Not like, oh, this, you know, six months from now this will actually get good. But like what's good today?
C
There are a lot of dead ends that I see companies pursuing with AI, right. And my advice here is, if you are going to embrace AI, I think it's important to maybe take a step back and define what we're talking about when we say AI.
D
Right.
C
It's sort of a catch-all term at this point. But it shouldn't be, though. It shouldn't be. So I think when you're talking about AI, you're fundamentally talking about replacing human decision making with some other mechanism, right? That's ultimately the big concept, what you're talking about. And when people talk about AI, they tend to focus on the output. I write a prompt in a chatbot and a bunch of text gets generated, right? Or I write a prompt in a text-to-image tool and some images get created, right? And I think that's probably not the substance of how you transform your business with, quote unquote, AI.
D
Right?
C
Here's my advice. If you want to actually embrace this in a transformative way for your company, not in a superficial way, first of all, start with first principles. What does AI mean for your business, right? So, you know, among other things that I do, I'm the Chief Strategy Officer at Fabulous. We make health and wellness apps. They're all subscription, right? So I've got a genuine reason to be here on the Sub Club today. And I'm leading this effort at Fabulous. What I did was, I worked with the founders and said, okay, let's start with: what are our principles around the use of AI within the company? Let's just define what we want to actually achieve with AI, right? So this company, Zapier, released a document that they were using internally which broke each organizational function down into a line item on a matrix. And going across the columns were the levels of implementation of AI, from unacceptable, or no implementation, to the full embrace of AI, right? And they described what each team should do to cascade across those columns. So we did the same thing. This isn't a top-down mandate, right? We presented this to the teams and we said, right, fill it in. You tell me what the complete transformation of your organizational function through AI looks like, and you tell me what an unacceptable non-implementation of AI looks like, and then we'll plan out a roadmap to get from no implementation to the complete transformative implementation, and then we'll decide what resources are needed, right? And so the teams themselves get to decide how they define that roadmap and pursue that roadmap. But there is no option to not do that. That's how I would start, essentially.
How do we absorb this into the culture of the company and make sure that every single functional team feels enabled to do this, and that they also feel they have the agency to define what that implementation looks like? Because I think that's really important.
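The matrix exercise Eric describes can be sketched as a small data structure. The function names and maturity levels below are hypothetical illustrations, not taken from Zapier's or Fabulous's actual documents:

```python
# Sketch of an AI-adoption matrix: each organizational function maps to its
# current and target level on a shared maturity scale, and the roadmap is
# just the path between them. All names and levels here are illustrative.

LEVELS = ["unacceptable", "experimenting", "integrated", "transformative"]

matrix = {
    "marketing":   {"current": "experimenting", "target": "transformative"},
    "support":     {"current": "unacceptable",  "target": "integrated"},
    "engineering": {"current": "integrated",    "target": "transformative"},
}

def roadmap(function: str) -> list[str]:
    """Return the maturity levels a team must pass through to hit its target."""
    entry = matrix[function]
    start = LEVELS.index(entry["current"])
    end = LEVELS.index(entry["target"])
    return LEVELS[start : end + 1]
```

Each team fills in its own row and target, which keeps the agency with the team while the shared scale makes progress comparable across functions.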
D
Right.
C
So that's where I would start. Then, just on the marketing side, I think, again, not focusing on outputs, but actually focusing on automation, replacing human effort with a machine-handled mechanism, thinking about anything that would move the needle.
D
Right.
C
So if you want campaigns to be optimized in real time, that's not something you're going to build. You're going to rely on Facebook doing that, or Google doing that, or Pinterest doing that, or Amazon doing that, or TikTok doing that. They have those tools. Those are platform imperatives; that's not for you to build.
B
And to your earlier point, those are actually machine learning models, not generative AI models. And I think Zuck tried to make that distinction on the latest earnings call: the generative AI stuff is not what's powering the increase in the efficiency of ad spend on Meta. It's actually just getting better and better at the machine learning models. And that's a little bit of a catch-22 right now, and kind of a confusion in the industry, in that generative AI, chatbots, image generation, all these kinds of things, seem so magical, so people just plaster it over everything. But there's a lot of things it's not good at, like data analysis and other things where machine learning models, which now kind of just get lumped into this umbrella term of AI like you were saying earlier, are the right tool. So part of it's also picking the right tool for the right thing. And so, to your point, the platforms are going to be so much better at that optimization. And they're not using generative AI to do that. They're using these machine learning models that they've built up over a decade, plus all the data that you don't have, all that kind of stuff.
C
Yeah, and that's really important, right, the data. And that's a great distinction; I think a lot of people don't make it.
D
Right.
C
So what Meta calls those buckets, right, is Core AI and Gen AI. And Gen AI, to be fair, I mean, they said they have 2 million advertisers using their Gen AI products for creative production, but a lot of that is still pretty superficial. I mean, they have animations, which is kind of a big deal, but a lot of it is still just, like, e-comm, you know, swapping out the backgrounds. But his point there was that it hasn't yielded that much extra efficiency yet, because they're still in the early stages of rolling it out.
D
Right.
C
But what has, and I called this out in my earnings analysis, and I also wrote another piece called "AI Is Not the Metaverse," saying AI is really generating truly substantive efficiency for advertisers right this second. It's not this far-flung, far-fetched destination. It's working on behalf of advertisers right now and generating regular, you know, improvements or optimizations to their ad campaigns. But those are things like GEM, which they've talked about, Lattice, Andromeda. And I had Meta's VP of AI on the show to talk about all those tools. But it's really interesting, and, yes, every quarter they're pointing to those tools and saying they drove 5% more efficiency. And I think that's something people misinterpret, because they say, 5%, who cares? But no, that compounds. That's 5% a quarter, or 5% every half year. And then, also, when you unlock 5% efficiency in ad spend, what do you get? More ad spend the next cycle.
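The compounding point is worth making concrete. This is just the arithmetic, not a figure from Meta's reporting:

```python
# A 5% efficiency gain compounded quarterly is ~21.6% over a year, not 20%,
# and the gap widens every additional period.
def compounded_gain(per_period: float, periods: int) -> float:
    """Total gain after compounding a per-period gain over n periods."""
    return (1 + per_period) ** periods - 1

annual = compounded_gain(0.05, 4)
print(f"{annual:.1%}")  # ~21.6% after four quarters of 5%
```

And as Eric notes, the realized effect is larger still, because each efficiency unlock tends to pull more spend into the next cycle.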
D
Right.
C
And then that grows faster. So I think people don't quite know how to interpret that. So I would leave that kind of heavy lifting to the platforms. You can't do that, so what can you do?
D
Right.
C
And again, if you're not focusing on the output, you're not focusing on just creating a bunch of creative.
D
Right.
C
That's probably not going to move the needle that much. What should you be focusing on? Just automating tasks.
D
Right.
C
And a big one of those, a way I see people using AI, is for creative prospecting. So: looking at what your competitors are doing, pulling that information into an S3 bucket, whatever that information is, maybe it's ads, maybe it's other things that are visible to you, and then using some sort of agent to interpret it. That would have been a full-time job three years ago. That was a full-time job. And every big scaled app advertiser was doing that. They were looking at the Facebook Ads Library, they were putting that into a Google Doc, and they were sending it around: hey, here's what our competitors are doing this week, what lessons can we take from that? But now you can do that in an automated way, using tools like LLMs to interpret what you're seeing, interpret the concepts from these ads, and tell you why it thinks they're working.
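A minimal sketch of that automated creative-prospecting loop, with the scraping and LLM calls stubbed out. The function names and data shape are illustrative assumptions, not a specific tool; in practice the fetch would hit an ads-library scraper and the summary would come from an LLM API, with results landing in object storage:

```python
# Sketch of an automated competitor-creative prospecting pipeline.
# Both external calls are stubs so the flow itself is visible.

def fetch_competitor_ads(competitor: str) -> list[dict]:
    """Stub: would scrape an ads library for a competitor's live ads."""
    return [{"competitor": competitor,
             "text": "Get fit in 5 minutes a day",
             "format": "video"}]

def summarize_with_llm(ads: list[dict]) -> str:
    """Stub: would prompt an LLM to extract the concepts behind these ads."""
    formats = {ad["format"] for ad in ads}
    return f"{len(ads)} ads observed; formats: {sorted(formats)}"

def weekly_report(competitors: list[str]) -> dict[str, str]:
    """The automated version of the old 'weekly Google Doc' ritual."""
    return {c: summarize_with_llm(fetch_competitor_ads(c)) for c in competitors}
```

Running `weekly_report(["rival_app"])` produces the kind of per-competitor digest that used to be one person's full-time job.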
D
Right.
C
And so that's a big thing that I think a lot of people don't appreciate. It's creative synthesis, but scaled way beyond what people were doing with one person working on that, maybe full time or half time.
B
Interesting vector to be thinking about this on, and, to your exact point, one of the challenges we still have, especially with generative AI crunching numbers, is hallucination. Like, I tried to just get ChatGPT (I used Pro, I used thinking, I still haven't done deep research) to translate a list on a web page into a formatted list that I wanted and exclude some things with specific criteria. It did great for the first 40 things on the list. And then it just started hallucinating, making stuff up, putting things in the wrong place. It just really fell down. And so that's a really great example of understanding the limitations of generative AI. You probably don't want to have an AI in between your ad buying and making decisions on pricing and stuff today. There's just so much risk there. But to research creative, especially with competitors, and come up with a hypothesis and things like that? Hallucinate all you want. We're going to go test that anyway. And the ultimate deciding factor is Meta's algorithm, which is going to pick the winning creative anyway. So you're just generating ideas. Hallucination isn't going to break things or cost us tens of thousands of dollars the way it would if we were trying to buy ads based on it.
C
You know who Jason Lemkin is? He's the SaaStr guy.
B
Oh yeah, yeah. He's an investor in RevenueCat. Yeah, very busy.
C
Oh. So he was chronicling his experience with Replit, like, vibe coding an app from scratch. And every day he was almost journaling: here's what I did today. And then one day he's like, project's over. Replit deleted my production database. I'd have to start over, so I'm giving up. That kind of stuff happens. I mean, I think these tools do really well at very specific, discrete tasks. They don't do well when those tasks are chained together. And there's this idea of temperature in machine learning: you want to add a little bit of noise whenever the output is being selected. With an LLM, essentially what you're doing is this conditional probability to predict the next word, right? And if you think about attention and the transformer mechanism, the big innovation there is that it could look at a very long context chain and figure out how each word affected what word's coming next. But when you do that, you still add in a little bit of noise to determine what the next word could be. Now imagine you're doing that, and then you're stacking that. Predicting the next word is not that difficult, but when you give it a very complex task, you stack that noise, that stochasticity; it compounds. And that's where you get: hey, here's a 40-step process, and at every step along the way there's a little bit of stochasticity, a little bit of randomness being thrown in. By the time you get to the 40th step, it's like a game of telephone. The way I like to approach these tasks is very discrete, and I'm intermediating everything, right? Here's the output. Okay, I'm taking that output, I'm checking it, and I'm giving it back to you and saying, now do the next thing.
And so I think we're still in that phase where there needs to be guardrails, either human oversight or just throw it at something where the downside risk is pretty limited and contained.
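The game-of-telephone effect can be put in rough numbers. This is a simplification that assumes independent, equally reliable steps:

```python
# If each step of a chained task succeeds with probability p, a chain of n
# steps succeeds end-to-end with probability p**n, assuming independence.
# Even very reliable individual steps decay quickly when stacked.
def chain_reliability(per_step: float, steps: int) -> float:
    """End-to-end success probability of a chain of independent steps."""
    return per_step ** steps

print(f"{chain_reliability(0.99, 40):.0%}")  # a 99%-reliable step -> ~67% over 40 steps
```

Which is exactly why intermediating, checking each output before feeding the next step, keeps the errors from compounding: each human checkpoint resets the chain.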
B
Yeah, no, that's fantastic. Any other specific examples, before we move on to the other questions, of things you see working today within those limitations?
C
When you talk to companies, especially in the app world, about the use of AI, they immediately jump to creative. And I think that's probably the least valuable place to apply this. I mean, yes, you can go from 10 creatives to 200, but a lot of times people are just taking their 10 creatives, which are themselves variants, and getting 200 variants of one concept.
D
Right?
C
That's not actually doing anything for you. There are just diminishing returns on taking 20 variants of one concept and going to 200. And they're vanishingly small gains.
D
Right.
C
What you really care about is the concept. And actually, it's coming up with concepts that you yourself couldn't come up with, because if you could just come up with them, the AI is not doing that much more to influence the performance. My sort of aha moment with this was at the last company I worked at. I built this tool called Draper, and that's just what it did. It just created variants of ads, and this was like 2018, so people weren't really talking about AI at that point. This wasn't even machine learning; it just created a bunch of variants, all permutations of these different ads. What I would do is deploy these on Facebook all the time, this constant cycle of deployment, and then we had a stand-up every week with the whole company. And I would say, here's the ad that worked the best this week. And I'd have no idea what it was going to be.
D
Right.
C
But no person touched that. That was just this process running in the background. If you could mass experiment like that, then you could find winners. And Facebook, at that point, value optimization was still like a year old. And so that was the way they did the audience pairing: VO-based, on the sort of value estimate. I couldn't foresee what audiences it was going to be targeted to anyway. So: let me just feed the beast with as much creative as it needs, and it finds those audiences, those high-value audiences, and it tests all this different stuff. And that was the big aha moment for me. I shouldn't have any preconceived notions about what's going to work. And I actually posted about this maybe a month ago and got a lot of pushback, but my point was: given the use of PMax, and it's pretty much the same with Advantage+, there is no point in trying to interpret why an ad won or didn't win, right? There's no point. What you should be interpreting is: when you get a win, or the win rate increases, the process works. Now, maybe the process took a new input, and that's the learning, right? But it's not the output, because that was random. Why that worked was utterly random. And if you try to deconstruct it and take a learning from that, you're just wasting your time. What you should ask is: okay, how am I changing the inputs such that I'm getting a higher win rate? And let the machine do its thing. That output is irrelevant. That output cannot be interpreted by you. You can't understand why that worked. Don't even try. If it worked, though, what did you change about the inputs? That's what you learned, right? The process worked, not the ad.
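A toy illustration of judging the process rather than the ad, with made-up outcome data:

```python
# Evaluate a creative process by its win rate, not by interpreting individual
# winning ads. Each outcome is a boolean: did the ad clear a performance bar?
# The only learnable signal is whether a change to the inputs moved the rate.

def win_rate(outcomes: list[bool]) -> float:
    """Fraction of deployed ads that cleared the performance bar."""
    return sum(outcomes) / len(outcomes)

baseline = [True, False, False, False, False]    # 20% under the old inputs
after_change = [True, True, False, True, False]  # 60% after changing the inputs

improvement = win_rate(after_change) - win_rate(baseline)
# The learning is the 40-point shift attributable to the input change,
# not any story about why one particular winning ad won.
```

The individual winners stay uninterpreted; the input change is the unit of learning.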
B
So I guess part of what you're saying is: use this generative AI because it is noisy, because it is going to come up with random stuff. Let it go crazy on concepting to generate ideas that you would just never generate, and then feed the beast. To your point earlier, all the major platforms now do all sorts of crazy testing if you can just feed the beast. And I like that framing: what's the process to feed the beast, and how do I iterate on the process to feed the beast, versus thinking you can figure something out and then make 10 more creatives that are going to win because you found some insight. Just create an insightful process to feed the beast, and let the beast do the work.
C
Because at the end of the day, you have no idea. You essentially have no idea who that ad was even shown to. But even if you did, even if you did, and here's where we're heading, this is really the reason why people push back against this. People are saying, no, of course you could learn from that, because then that creates a feedback loop. And I don't want to say it's a difficult or complex point, but I think it's just counterintuitive: no, no, no, what you care about is how you changed the process to improve the outcome, not what you can learn from that outcome to then optimize the inputs.
D
Right.
C
That's not something you can interpret.
D
Right.
C
And so that kind of scares people, because ultimately it's saying artists aren't important, there's no creativity, it's SEM. It's just this optimization problem. And I understand why that's uncomfortable, and I can understand the pushback too. I mean, look, anytime you're talking about removing humans from this process, it's scary. You read the news, especially in gaming. King, the company that makes Candy Crush, they just laid off a whole studio. And some of the people that were laid off said, we spent the last six months training an AI tool to do our jobs. So this is having an impact on employment right now. And that's scary.
D
Right?
C
Like, I had a professor in undergrad, an econ professor, who said, you know, you can't talk in this dispassionate tone about dislocation from innovation, because people are losing their jobs. And if someone loses their job, they don't care about the rational argument that, well, actually, the economy is going to be more efficient because we have this innovative thing. They feel like it's a conspiracy against them personally. You do have to approach this with total empathy and sensitivity, right? But that's just the reality, and I understand why it's a sensitive topic. But I think where we're heading, and again, some people view this as kind of dystopian: what if the ad didn't have to make sense to you? What if the ad was just incomprehensible to you, but it triggered something in your brain? And I think ultimately what people don't want to recognize is that's already really the case now. It may be a picture of a shoe, and it's a dog skateboarding wearing this pair of shoes, and you say, oh, people like that because of the cute dog. That might not be the reason at all. We're making a lot of assumptions when we try to deconstruct these ads to understand them, and a lot of times they're just post hoc rationalizations. Oh, I like dogs, so everyone must have responded to this ad because of the dog. We have no idea what caused that impulse to click, and we shouldn't try. I think we can't. We must acknowledge that we just can't do it.
B
The last thing on AI, before we move on to the questions, is just: where do you see all this going? I mean, you've kind of dropped hints along the way. Zuckerberg, I think, said in an interview, or maybe on one of the earnings calls, that he sees, in the not too distant future, and as you just alluded to, that they're just going to generate all the creative. They're going to generate the ideas. They're going to know the person so well that each ad is going to be personalized to the individual, and iterating on creative is just going to kind of go away because they're going to be so much smarter at that. So that's one vector, and I'm happy for you to dig deeper into that one specifically. But what other ways do you see this going over the next 12 to 18 months?
C
My theory is that Facebook could deploy a change tomorrow that achieves what you just described. They are slow rolling this because they know they have to get people onboarded and comfortable with it. If they did that push tomorrow, like, hey, by the way, just give us the money and we're taking care of everything else, people would be reluctant because they wouldn't trust it yet. And so they have to be very deliberate and measured with the pace at which they roll these tools out. And on that timeline, given that comfort restriction, my sense is that in like 18 months we're at brand-specific, full-fidelity video ads being generated without a prompt, just based on past performance. And they're not auto-deployed, but you can go through them and say yes, no, yes, no. I think that's probably on the 18-month timeline. They could do that right now; people would just be very reluctant to use it. Where you go from there is auto-deploy, right? Just do it, I don't need to be a bottleneck. But I think within 18 months you get to that point. Another hypothesis I have, and again it would be regulated by comfort: could you imagine if you just said, hey, Meta, optimize my landing page? I'll put some code in here, you've got the pixel, I'll import another JavaScript library, and you just render that in real time. You decide for this user what to show. And I'll give you some guidelines, I'll give you some guardrails, here's usually what the onboarding looks like, but you optimize it for that person.
D
Right?
C
I was on a podcast called the Marketing Operators, and I was talking about this idea of signal engineering, and for one of the hosts, I could see it really clicked. They wrote later on Twitter that the idea of signal engineering is not limited to the existing events that you have. You can do whatever you want. You could create hurdles for the user to clear that are potentially good proxies for LTV, and if one is, then that's what you do. You created some problem for the user to solve, some hurdle, some obstacle to accessing the product. But the most high-intent users will do it, and that could actually be a very strong signal of ultimate value. And so one of the hosts posted on Twitter that they created a captcha for no reason other than to test the user's intent, and it drove like a 40% increase in ROAS or something. But that's the essence of signal engineering. It's to create that test for intent, and if the user clears it, they're a high-value user; you send that back to the ad platform and let it optimize in real time against that.
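A minimal sketch of that signal-engineering pattern. The event name is hypothetical, and the local event log stands in for what would really be a server-side call to an ad platform's conversions API:

```python
# Sketch of signal engineering: gate a high-intent event behind a deliberate
# hurdle, and only report the signal for users who clear it, so the ad
# platform optimizes against demonstrated intent rather than raw installs.

def record_signal(user_id: str, cleared_hurdle: bool, events: list[dict]) -> None:
    """Report a high-intent proxy event only for users who cleared the hurdle."""
    if cleared_hurdle:
        # In practice this would be a server-side conversions-API call;
        # here we just append to a local event log.
        events.append({"user": user_id, "event": "high_intent_proxy"})

events: list[dict] = []
record_signal("u1", cleared_hurdle=True, events=events)
record_signal("u2", cleared_hurdle=False, events=events)
# Only u1 generates an optimization signal for the platform.
```

The hurdle itself can be anything, even an arbitrary captcha, as long as clearing it correlates with eventual value; the point is that you, not the platform, define what counts as intent.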
D
Right.
C
And so I think that kind of stuff is really fascinating. Now, what if Facebook could do that for you? What if Google could do that for you? And do I think they would do better at that than you could? Yes, I probably do. Now, where that gets a little political is: are you going to get the product manager to hand that over? Because I think the marketing people have been comfortable with this for a long time, especially at SMB-type, you know, direct response advertisers. Everyone that we're talking to right now is like, gimme, gimme, gimme as fast as you possibly can. And someone at, like, P&G or Clorox is saying, over my dead body. There's a mentality divide here. But now imagine you start bringing product managers into the mix, who tend to be a little bit more mercurial. They look at themselves as artisans, I think, to a greater degree than a user acquisition manager would. And they're saying, no way. I control this product experience. I control the first touch point that the user has. I'm going to give that to Facebook? No way. I could see there being a little bit of tension there.
B
Yeah. Thomas and I talked quite a bit about signal engineering on the last episode of the podcast. One of his points, though, and this gets back to PMax, things that Google's done, and things that Facebook is kind of doing with their value optimization stuff, was: do you really want to hand over control of the outcome to Facebook? Because if they know your exact ROAS target, they're optimizing to get you one penny of profit on that ROAS. Whereas what signal engineering provides is an opportunity to fake that in a way that gets you what you need out of the algorithm while still maximizing for profit on your end, versus giving it all over to Facebook. Which is, I think, an accusation a lot of people have made against Google's ad products: that they'll hit your target ROAS and then keep spending, but send it to all their crappy ad placements that they know aren't going to perform. But hey, they already got you what you asked for. So they're optimizing for their own revenue and placements and filling inventory more than they are optimizing for your ultimate outcome of being the most profitable you can be. So that's another level of comfort that UA teams will have to get over: the kind of cynical view of these bigger companies and how the algorithms are going to perform, in order to hand off things like signal engineering to the bigger algorithms.
C
I've clashed with Thomas on this point since 2016, when UAC was rolled out, right? I wrote about UAC when it was introduced, and I said, look, the cynical interpretation of this is that their goal here is to maximize spend and to hit exactly your ROAS target, right? And that became a lot more relevant: UAC was the precursor to PMax, and AAA, Automated App Ads, was the precursor to Advantage+. So we've been dealing with this for years and years. These tools are not new. I did this podcast a while back called Commerce at the Limit, because people are scared of this stuff, for all the reasons I just spoke about: well, it disintermediates humans from the product, so what am I here for, then? But my point was, look, this has been happening. This has been the modus operandi since 2016 with UAC. The reason they started with the app environment is that it's more controlled. The apps go through review. They're downloaded in the App Store. Prior to ATT, you had a totally transparent sight line from the ad to the usage. And so they could much more easily, in this sandbox environment, understand all those signals and automate it. It got hairier with e-comm. But look, there are still people working on apps, right? That didn't displace everybody. Things are a lot more efficient on the marketing side. I wrote this piece called Satisficer's Remorse, right? It's this idea that: do I wish they were optimizing for my spend-adjusted ROAS? Yes, I do. But I also understand that if I was running this, I might not have even been able to spend as much as I did at that level of ROAS; they're probably more efficient. So that's the satisficing part: I'm accepting the outcome that meets my ROAS target. And then the remorse part is: I know they could have done better, but I couldn't have.
I know if they were a nonprofit, I'd be making more money than I am now. If they were a nonprofit and they were just looking out for me exclusively, I would be making more money. But they're not a nonprofit. And if I was in charge of doing it, if they didn't give me the benefit of these tools, I probably would have performed worse. And when people say, no, I could do it better than they can, for the most part, I think they're kidding themselves.
B
Yeah, yeah. You know, it didn't occur to me until just listening to you talk through this, but the ultimate kind of dystopian endgame of all this, especially for the app industry, is that at some point Meta and Google know better than even we do the goal and the outcome that the individual wants out of a piece of software, and then they just generate that in real time for the user. I mean, that's very dystopian, and very far off, because you do have a lot of problems, like you said with Jason Lemkin, where the AI just deletes the production database. I think we're a long way from fully AI-generated experiences that have persistent storage and, like, track your food and macros and calories over time. As you were saying that, I was thinking of your earlier statement about the team at King that was training the AI that eventually replaced them. In some ways, we're going to be teaching Meta's algorithms better and better exactly what the user outcome is, and what they care about, and what they value. And that's very valuable data, very valuable information. But I think that's a long way off, if not impossible. So we'll see.
C
You could take this to any sort of extreme you want. I prefer to focus on other scenarios, because I'm an AI maxi, right? I believe this is transformative. I think this is going to benefit society, an inflection point for the better. Taking it to that extreme, where Facebook is just producing an app for me to download in real time and all the benefits accrue to Facebook, it's like, okay, but then no one's clicking on anything because no one has any money, and then the economy collapses. What I think is really exciting, though, is this idea that anybody can be an advertiser, right? If anybody can be an entrepreneur because of these tools, then it actually becomes a lot more difficult to get attention, because you're competing with a lot more products. And so that's an issue on the app side. But I think there's a whole class of entrepreneurs that exists already, doing things like lawn care, or they run a barber shop, or a bike repair service or something, and they're not advertising because it's just out of scope. It's totally unrealistic. But what if it wasn't? What if it actually was just: I'm typing in a prompt for what my business is, and I'm actually not competing with that many people, because it's a more locally oriented business? And so this is really just unlocking a new group of existing entrepreneurs that can be onboarded into the advertising economy, benefit from it, and drive more business as a result. And that to me is really exciting. You empower a whole tranche of people that run small businesses, truly small businesses, one-person companies or locally oriented companies. You empower them to reach as wide of an audience as is relevant.
D
Right.
C
And that's really exciting.
B
Yeah, and we've already seen that. D2C is a fantastic example, the proliferation of very niche products. I don't use Instagram a ton, but when I do, they're so damn good at finding the exact product that's missing from my life. It's ultimate consumerism, and do I actually need it? There's that dystopian aspect. But there are some really cool products that only exist because Facebook allows those creators to reach an audience like me, niche enough that they would never have been able to sell it in a grocery store or at Walmart. But people like me actually want that product, and there are infinite niches like that in the world. And I know people go back and forth, and you're also kind of an ad maximalist, and a lot of people just wish advertising didn't exist. But getting attention for innovation: should that just be free? Should you be able to build something, and then how do you get attention for it? Advertising is a very effective, market-driven way to build something innovative and then get attention for that thing you're building. So I go back and forth a little bit. I'm a little creeped out sometimes by all the data collection, but man, I bought my Meta glasses and freaking love those things. At some point there is a benefit to society more broadly, a benefit to individuals. There are negative externalities, there are problems; it's not all sunshine and unicorns. But I think understanding fundamentally how advertising creates new markets and empowers entrepreneurs is really powerful and important. Too many people just crap on the ad industry without really understanding it and what it generates in consumer surplus. And Facebook's not taking all the profit.
Like, people wouldn't come advertise with them if they were taking all the profit. Now, are they trying to maximize their profit? Sure. But can you also maximize your profit on top of that? The answer is yes. That's why there's been a proliferation of entrepreneurship and these D2C products and everything else. So, yeah, I'm not quite as ad maximalist as you, but I appreciate advertising's role in the broader market and in empowering entrepreneurs. And exactly to your point, I think we're going to see that accelerate, not decelerate, in the coming years.
C
One argument that I find disingenuous in the extreme, from people who understand this market, goes: well, advertising has always existed. It didn't need data. You could always just advertise on TV, on radio, or in magazines. You didn't need all this data; all this data collection is just a privacy violation, and it isn't what enables advertising. And yeah, that form of advertising always existed and continues to exist. But you know what didn't exist? D2C. You couldn't have D2C without personalized advertising. It just would not be possible, for the reasons that you pointed out. It's too niche. You can't do a national TV ad campaign for a niche product; the economics won't work. But I can reach the individuals that would find it relevant. And by the way, when you reach those people, the click-through rates are still sub 5%, right? And people look at that as an indictment. No, that is not an indictment. That's showing you the natural reluctance, the friction, there is to advertising in the first place. If there was actual manipulation happening, those click-through rates would be 100%, right? If this person is deemed relevant and I was manipulating them into doing something they wouldn't otherwise do, that they don't want to do, then the click-through rate would be 100%, the conversion rate would be 100%. It's sub 5% on a good day; it's 4%. For most products it could be sub 2%, sub 1%, and those could be profitable campaigns.
D
Right.
C
But the thing is, this enabled new sectors of the economy, and that is growth. Are there bad aspects? Of course. But you don't throw the baby out with the bathwater. You identify the things that you want to remedy and, in a surgical way, you remove those from the workflow. You don't just throw everything out. And people push back when I make these sweeping statements and say that's a straw man, no one's actually looking to kill personalized advertising. Yes, they are. There is a bill that was just resuscitated called the Banning Personalized Advertising Act. They did ban it in the EU; effectively, it's banned for the biggest platforms, the gatekeepers. So yes, there are people that want to ban personalized advertising.
B
I'm the one who steered the conversation to D2C, but to be honest, for the proliferation of the app industry, and subscription apps specifically, Meta and Google deserve almost as much, if not more, credit, for the same reason that they empower D2C: they allowed app developers to reach those audiences more effectively and more efficiently. What we see today as the app industry, I think, owes a debt of gratitude to Meta and to Google, because they empowered it just like they empowered D2C. Well, you and I could talk about these sorts of things for hours, but I did want to get to the questions, since we did allow people to submit them. Question one, an hour in. Given the dominance of a few paid UA channels, how risky is it for a fintech subscription app, and I would say any subscription app, to have over 80% of its spend on Meta and Google? What practical diversification levers actually work in this vertical? So first, is it risky? And then if so, how would you diversify?
C
So I wrote a piece about this a while back, a couple years ago, and the point I made was that I think people feel compelled to diversify because they have this abstract notion that being totally concentrated in one or two channels is a bad thing. And yeah, there's risk there. But the reality is that diversifying adds a lot of overhead. It adds overhead in terms of doing data integrations and having to create new creative formats, right? And each platform has sort of a different way that the ads are exposed, so you have to accommodate your measurement to that. So diversifying for the sake of diversifying is oftentimes a bad idea. And the question is, how much could I spend on the new channel?
D
Right?
C
So, like, I might onboard a new channel, and a lot of times you hear this common refrain of: well, the performance is great, but at really low spend. Okay, but that's actually not great. What I really care about is my spend-adjusted ROAS, not just that the ROAS is high on this particular channel at low levels of spend. Because could I have taken all of the overhead of supporting that new channel, eliminated it, allocated that budget to Meta or Google, and seen the same level of ROAS? If I could have, I'm actually worse off, right? So when you have a new channel, its ROAS doesn't just have to meet the ROAS of the other channels that could have absorbed that budget; it has to exceed it, because you're supporting a new channel. And so I think diversifying for the sake of diversifying is often a mistake. You diversify when you've reached saturation on the existing channels, I think, and then you look for other channels, or when you feel like there's some interaction effect that the channel could produce that boosts the performance of other channels. And that oftentimes is the case, particularly if it's a more brand-oriented channel. I wrote a piece a couple weeks ago called Optimization Models and Digital Advertising, where I talk about optimizing towards ROAS versus optimizing towards spend with a ROAS constraint, and they're very different things. If I'm optimizing towards ROAS, then I really want to keep spend as low as possible, spread across many different channels, because I'll get the max ROAS per channel, since ROAS and spend tend to move in opposite directions.
D
Right?
C
But oftentimes that's not really what I'm doing. I'm optimizing towards maximizing spend with a ROAS constraint. And in that case, what you want to do is what I call the waterfall method. Max out the biggest channel until it hits my ROAS threshold, then move on to channel two, which would have smaller potential spend. Max out channel two until it hits my ROAS threshold, then move on to channel three. That's always the approach that I recommend companies take, because it minimizes overhead and complexity.
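As a rough illustration, the waterfall method described here can be sketched in a few lines of code. Everything in this sketch is a hypothetical assumption for illustration: the channel names, the toy linear ROAS-decay model, the budgets, and the thresholds are made up, not real performance data or a tool Eric endorses.

```python
# Hypothetical sketch of the "waterfall" allocation method: fill the largest
# channel until marginal ROAS hits the floor, then move to the next channel.

def roas_at_spend(base_roas: float, saturation: float, spend: float) -> float:
    """Toy model (assumed, not real): ROAS declines linearly toward saturation."""
    return base_roas * max(0.0, 1.0 - spend / saturation)

def waterfall_allocate(channels, total_budget, roas_floor, step=1000.0):
    """Allocate budget channel by channel, biggest potential spend first."""
    allocation = {name: 0.0 for name, _, _ in channels}
    remaining = total_budget
    # Biggest channel first, per the waterfall method.
    for name, base_roas, saturation in sorted(channels, key=lambda c: -c[2]):
        while remaining >= step:
            next_spend = allocation[name] + step
            if roas_at_spend(base_roas, saturation, next_spend) < roas_floor:
                break  # channel hit the ROAS threshold; move to the next one
            allocation[name] = next_spend
            remaining -= step
    return allocation

# Hypothetical channels: (name, ROAS at zero spend, saturation spend).
budget = waterfall_allocate(
    channels=[("meta", 3.0, 100_000), ("google", 2.5, 60_000), ("tiktok", 2.2, 20_000)],
    total_budget=120_000,
    roas_floor=1.4,
)
# budget fills "meta" first, then "google", then "tiktok", each up to the floor.
```

The point of the sketch is the ordering: each smaller channel only receives spend after the bigger one has been pushed to its ROAS constraint, which is what keeps the channel count, and the overhead, minimal.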
B
And so what do you see as the risk? Or do you think it's just actually not risky? The risk is just missed opportunity. Like, how would you classify risk and how people should think about the risk of being so dependent on one or two channels?
C
So, yeah, it's missed opportunity. You could call it opportunity-cost risk. But what most people mean when they talk about the risk there is just that performance degrades. If I've got five channels and one of them bombs, then okay, there's 20% of my spend at risk. But if I've got one channel and it absolutely tanks, which can happen, then that's all my spend. Now, the issue here is these things tend to be pretty correlated. Why would performance bomb on a channel? Probably because a competitor came in and is outbidding you everywhere. And if they're outbidding you on Facebook, they're probably outbidding you on Google and Snap and TikTok and wherever else. So I think the risk is not necessarily per channel. A lot of times people think about it that way because everyone's had experiences where, hey, man, what happened to Facebook yesterday? Click-through rates got cut in half, or cut by two-thirds, for whatever reason, and there was just a blip. But if you're talking about a structural change in performance, that probably would be correlated across every channel, because there's either just a change in consumer sentiment or, more commonly, a competitor came in and crowded you out.
B
All right, what's the single biggest pitfall you see across mobile growth teams right now? Something that even sophisticated companies consistently overlook.
C
The thing that I see most commonly is just a very chaotic approach to measurement, lacking any sort of coherency. That can materialize in a couple different ways. I see people using competing tools, not really knowing which one to trust or how to interpret the output of one relative to the other. I see misalignment across the various stakeholders: oftentimes finance and UA, or UA and product, are not really aligned on what good looks like, what these metrics should look like for success. I see it as having a bunch of tools that aren't working in concert.
D
Right.
C
Like just a couple different data points.
D
Right.
C
Like, I don't know how to interpret them as an ensemble. I'm just looking at the individual ones, and I don't really know what these things mean as a whole. I call it measurement disorganization, and it's the most common thing I see. And the way to overcome it, this is not a satisfying answer, and it sucks as a process. I don't do that much consulting anymore, but this was probably the most common project I was asked to come in on, and it's really, really challenging. It's stressful getting all the stakeholders together and coming up with some sort of plan that satisfies everybody, some sort of model for measurement. And when I say model, I don't mean a machine learning model or a regression model. I mean an operational model for how we're all aligned around what success looks like, where all of our needs are being met by this measurement apparatus. Because I think one thing that companies tend to underappreciate is the fact that your measurement model, your measurement system, is essentially the heartbeat of the company. Everything flows from that, and you really need to be doing it correctly, in a way that's credible, but also in a way that everyone understands and appreciates, and that serves their different use cases. It's getting a group of people together in a room and finding alignment on that. That's the solution. And that sucks. That's difficult, people-oriented work. It's not, oh well, we'll come up with a new machine learning model or build a new dashboard. It's: let's get a bunch of people together and understand their needs. The CFO's got totally different needs than the UA team, and the UA team's got totally different needs than the product team. Let's get everyone in a room, understand what they need to receive as output, and also get everyone to agree on what good looks like.
B
That's a great answer. I wouldn't have guessed you'd go into the people side of things, but we're just bags of meat making decisions, and that's a huge point of friction, a huge point of challenge and disagreement. One of the things we actually see at RevenueCat is we sometimes have a marketing team come to us specifically because the engineering team just won't align with them on their priorities. The engineering and product teams have their own priorities, and UA and the marketing teams are just told, hey, go figure it out. They're left out on an island: they can't get engineering resources, and they can't get time with the product teams to push for those resources. So yeah, it's a problem we see in prospects coming to RevenueCat, trying to solve engineering problems because their engineering teams just won't give them the time of day.
C
The canonical example in my mind, and I've experienced this and have Vietnam-style flashbacks of this very difficult, challenging, human-oriented problem, is when the product team onboards a different LTV model or a different LTV product because they don't trust the one the UA team's using. That's the kiss of death for productivity. You're going to spend the next two months arguing, and that relationship's over.
B
Yeah, man. And it's a great place to go: solve your people problems. Technology is not going to fix everything. You need to solve the people problems in concert with solving the tech problems and data problems and all the other problems. All right, next question. In subscription apps, and again, this maybe was the same person, especially in fintech, but it probably applies to all apps: you often have to make budget allocation or bid changes before key metrics like LTV, retention, or incrementality are fully baked, sometimes within days of launch. How do you balance speed versus accuracy in those early optimization decisions? And what frameworks do you recommend for making confident calls with incomplete data? That's a really good question.
C
Yeah, that is a good question. So, a couple things. One is, I wrote this piece a while back called It's Time to Retire the LTV Metric. It's a little bit of a clickbait headline; I don't really mean that. What I mean is, what you often see teams doing is trying to calculate a terminal LTV on the basis of 10 days of data or something. You only launch a product one time, right? So you can just kind of wait and extend the soft launch. But when you're talking about a campaign-level LTV, well, I'm not going to get that quickly. So what I like to do, because again, ad spend and ROAS tend to be inversely correlated, so as ad spend increases, ROAS goes down, is prove out these frontiers. I'm going to spend 5K a day or whatever, and if I'm hitting 150 ROAS, great. Then I'll let those cohorts age, understand how they progress, understand what their day 20 ROAS is and their day 30 ROAS is, and build more and more cohorts I can track over time, and then I'll push that ROAS frontier out. So now I'm not trying to hit 150 on day three or 200 on day three. Let me just see it: actually, these cohorts seem to be progressing to, say, 150 by day 30. Okay, that's great. Now I'm going to push, and then I change my bid accordingly.
D
Right.
C
And grow the budget. And then I'll track the cohorts more and see where they land at day 60. Okay, well, that's 120, so I'm going to increase the budget more, and that'll decrease the ROAS. What I really care about is where that lands, at 110 or something. Really what I'm trying to do is iteratively progress that frontier of my ROAS target, starting from a place where, yeah, okay, if I can't hit 150 ROAS at day 7 with very low spend, I'm probably not going to be able to grow this, and it's back to the drawing board with whatever I'm adjusting. That's the right way to approach this, I think. In a launch phase, if you're just talking about flighting new campaigns, really what you're talking about is the performance of the creative. And it's a question of when to kill a creative: as soon as it's obviously not a winner, and that's oftentimes very quickly. The same day, or the next day: this is getting no delivery, not a winner. Maybe it could perform at the average or whatever, but it's not going to grow my ad spend, so it's a loser and I kill it. The way I approach creative testing is really just: I'm trying to identify losers as quickly as possible. The winners take time to prove out, but the losers are pretty quick to prove out.
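To make the iterative frontier idea concrete, here is a minimal sketch of tracking a spend cohort's cumulative ROAS as it ages and gating budget increases on it. The revenue numbers, spend level, and ROAS gates are all hypothetical assumptions chosen for illustration, not figures from the conversation.

```python
# Hypothetical sketch: age a cohort, compute its cumulative ROAS by day,
# and only scale budget once the cohort clears the target for that day.

def cumulative_roas(revenue_by_day, spend):
    """Cumulative revenue / spend for one cohort, indexed by day since install."""
    total, out = 0.0, []
    for rev in revenue_by_day:
        total += rev
        out.append(total / spend)
    return out

def decide(cohort_roas_by_day, day, target):
    """Scale if the cohort meets the target ROAS by the given day; else hold."""
    if day >= len(cohort_roas_by_day):
        return "wait"  # cohort hasn't aged enough to judge yet
    return "scale" if cohort_roas_by_day[day] >= target else "hold"

# A made-up cohort acquired at $5,000/day: revenue is front-loaded for the
# first week, then trickles in over the rest of the month.
cohort = cumulative_roas([400.0] * 7 + [150.0] * 23, spend=5000.0)

day7_call = decide(cohort, day=6, target=0.5)    # early gate: 50% ROAS by day 7
day30_call = decide(cohort, day=29, target=1.2)  # later gate: 120% ROAS by day 30
```

The early gate plays the role described above: a cheap, fast check that kills obvious losers, while the later gates let aging cohorts prove out the frontier before more budget is committed.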
B
Yeah, great answer. Next up, what's one opportunity for growth you think most companies in the app industry are missing right now either because it's too early, too messy, or doesn't fit into standard user acquisition playbooks?
C
The real answer is that the opportunity for growth is just that your measurement doesn't support true growth. It's broken or it's flimsy, and so you're really just trying to replicate what you've done in the past. If you don't believe that your measurement can adapt to new channels, new sources, new ad types, you're just going to do what you've been doing, and by definition, you won't grow. If you need to break out of that, you need to build truly robust, incrementality-focused measurement. And that's the challenge, right? It's a challenge for the human reasons we talked about, but it's a technical challenge too. So that's the real answer. And then, assuming that's in place, you have the world as your billboard. I can advertise in influencer channels, I can do out-of-home, digital out-of-home, CTV, podcasts, all these things that are supported by the measurement. But oftentimes that lack of growth, that stasis, is just a function of: I don't trust the measurement to capably run attribution on these new channels or interpret the changes across the portfolio in a reliable way, and for that reason, I'm just going to keep everything the same.
D
Right?
C
And so that's oftentimes what I see. It's like people are just paralyzed as a function of their measurement not being robust.
B
The fundamental problem in marketing: half my marketing is performing, I just don't know which half. And that will be the perpetual problem that needs to be solved in marketing.
C
You can say that because this is your podcast, but I've actually banned that phrase from my podcast. I just feel like it's too reductive and people misinterpret it. I love that phrase; I hate the way it gets used. The way people use it is to say: this measurement's all smoke and mirrors, no one really knows, we need to be telling a story, we need to be connecting emotionally, the only ROAS curve I care about is the curvature of a satisfied customer's smile. That's marketing, that's advertising; this measurement stuff is just a bunch of hocus pocus. That's not what he was saying. John Wanamaker was a pioneer of advertising. He was one of the first people to take out a full-page newspaper ad. He understood measurement. And what that statement is saying is not a bad thing. It's: I understand the outcome given the inputs. I don't know the mechanics of it, and I don't necessarily need to.
B
No, no, you're 100% right. And I love that you brought that up, because it makes such a good point: there will always be some amount of uncertainty in marketing. The good marketers embrace that uncertainty. Not, like you said, by just throwing their hands up and saying, oh, we're just going to do brand stories or whatever. They embrace that uncertainty to ask: how much can I be certain of, and what are the certainties I can build a process around? That's what good marketing is. It's not throwing your hands up; it's figuring out how to be more and more certain, within the constraints that you understand. You can't get to 100% certainty. The people who think they can be 100% certain are just as deluded as the people who think they shouldn't be measuring.
C
So there's this concept of Wittgenstein's ruler, right? I've talked about this a bunch; I wrote a piece about it years ago called Wittgenstein's Ruler and Ads Measurement. The idea of Wittgenstein's ruler is: I take a ruler to measure a table. Well, if I don't trust the accuracy of the ruler, the table's also kind of measuring the ruler. That measurement apparatus isn't telling me anything about the table, and in fact, the thing that I'm measuring might tell me more about the measurement tool. And I say that because with the Wanamaker quote, one reaction is to throw your hands up: it's not possible, so let's go do brand campaigns and go to Cannes and sip rosé. And the other reaction is: oh, good point, I should only do marketing on channels that are deterministically attributable. And then you fool yourself into thinking that anything is deterministically attributable. It's not, right? A lot of times I'll meet teams: hey, we cracked the code, we found a loophole, and we're running all these campaigns and getting deterministic attribution using all these hacks and workarounds. I'm like, you're kidding yourself. You're telling me more about your inability to understand what's actually happening than about how precise your measurement is. And it's a very bad signal when you get teams that are building a consumer product and looking for investment, and it's: well, we've got a secret sauce for advertising because we figured out how to hack these signals together to get deterministic attribution. No, you just convinced yourself of that, but actually you're getting noise and you're going to be wasting a lot of money. That's probably no better than just doing a more holistic probabilistic model.
B
That question went places I didn't think it would go, but I'm glad it did because that was really fun. Last question and we'll wrap up. What's one area you see? Growth teams pouring too much time and budget into that you believe will matter less in the next two years?
C
GenAI creative tools. I think you could use off-the-shelf stuff and get almost all the value that you're going to get. Again, I don't think there's that much value there, period, unless you're doing more of the fundamental stuff that we were talking about at the outset, which is concepting, and there are not really any off-the-shelf tools for that. You have to build a system yourself. Maybe there will be at some point; that's probably a good startup idea. But just cranking out, we went from 50 variants a week to 200 or 2,000, but they're all the same concept? There's no value add there with the last 1,800 or something. You know what I mean? That's one thing, and then there's the testing.
B
50 shades of blue, chasing the incremental lift. There are so many other problems you could be solving than testing that 50th shade of blue.
C
Yeah, exactly. And the other thing is: why are you investing any more time in AdAttributionKit? Do you think that's a genuine source of value or a competitive advantage? Don't burn resources on that.
B
It is so baffling to me, and I don't want to dig up this can of worms because we could talk about it for another two hours, but it is so baffling to me that Apple went through all the things they went through, even the negative press, the tumult in the industry, the loss of App Store revenue that ATT caused, and then didn't actually build something useful for attribution, and then just let everybody fingerprint anyway, which is almost more insidious. I mean, I get it. The one thing that came out of ATT that was good is that it did at least break a lot of the data broker workflows where you could deterministically find one person and track them everywhere, even when they're on their cell phone and the IP address is different or whatever. Anyway, I don't want to dig up this whole can of worms, but it's just such a mess. And it baffles me to no end that Apple didn't take that opportunity to build an attribution tool that actually mattered, and then just let everybody fingerprint. So baffling.
C
I have a couple theories here. One of them is that their hands are kind of tied from a privacy perspective if you want to honor the sort of religious zeal that they have towards privacy, and I do think they're genuine about that. My interpretation is that it was a competitive maneuver; that was the motivation. I don't think it truly had anything to do with privacy. But once you invoke privacy as the stalking horse for that competitive maneuver, then you've got to adhere to the privacy principles of the company, which are genuine. And when you try to build attribution that way, it's just not functional. Take their commitment to differential privacy: that just breaks the data set. And even if you don't implement it in a way that truly adheres to those principles, even if it's sort of superficial, it still just kind of breaks everything. And then you introduce this crowd anonymity that they did, which is kind of like a reverse form of differential privacy, and then you had the scheduling stuff. They couldn't implement this to achieve what they were trying to achieve, which was competitive disruption, without applying their culture of privacy zeal. They just couldn't do it, because they portrayed it as a privacy maneuver, and therefore they couldn't do SKAdNetwork, and now AdAttributionKit, in a way that is actually functional, because when you have to apply all these privacy protections to it, it breaks.
B
Yeah. I'm resisting the urge to dive back in and talk for another 30 minutes on this topic. Part two. But man, Eric, this was so much fun. I really enjoyed the conversation. I think there are so many things for people to take away, and we did get to a lot of practical stuff, but I think it's also really important to think at a higher, almost philosophical level about a lot of these things, which then plays out in your decision-making being better, because you have a better intuitive sense of how all of this works together as a market, as an economy. On so many levels, this was such a fun chat. So thank you for joining me today.
C
Cheers, man. Always a pleasure. Hope to see you in Austin soon.
B
Thanks so much for listening. If you have a minute, please leave a review in your favorite podcast player. You can also stop by chat.subclub.com to join our private community.
Released: September 3, 2025 | Hosts: David Barnard, Jacob Eiting | Guest: Eric Seufert
In this episode, David Barnard chats with Eric Seufert—media strategist, quantitative marketer, and author of Mobile Dev Memo—about the realities of AI in app marketing, the true value and risks of paid growth channel concentration, the paralysis caused by “measurement dysfunction”, and why most common creative and attribution strategies are rapidly losing relevance. Eric draws from his deep operational, entrepreneurial, and advisory experience, contrasting the hype around “AI everything” in marketing with sobering, practical insights that can transform how leading app businesses invest for growth after attribution has become unreliable.
[02:11–18:47]
Notable Quote:
“If you want to actually embrace this in a transformative way for your company...make sure every single functional team feels enabled to do this and they also feel like they have the agency to define what implementation looks like.”
— Eric Seufert [05:01]
[18:10–28:43]
Notable Quote:
“My theory is that Facebook could deploy a change tomorrow [auto-generating creative at the brand level]… They are slow rolling this because they know that people need to be onboarded and comfortable with it.”
— Eric Seufert [18:48]
[34:28–38:24]
Notable Quote:
“Diversifying for the sake of diversifying is often a mistake. You diversify when you’ve reached saturation, or you think there’s an interaction effect.”
— Eric Seufert [36:38]
[38:33–41:52]
Notable Quote:
“Your measurement system is the heartbeat of the company...it needs to be credible, but also in a way that everyone understands and appreciates. That serves their different use cases.”
— Eric Seufert [39:13]
[50:14–51:10]
Notable Quote:
“Just cranking out, we went from 50 variants a week to 200 or 2000, but they’re all the same concept. There’s no value add there.”
— Eric Seufert [50:58]
On the role of AI in creative/marketing:
“If you want campaigns to be optimized in real time...you’re going to rely on Facebook doing that, or Google. That’s a platform imperative. Leave that heavy lifting in the platforms. You can’t do that, so what can you do?”
— Eric Seufert [05:18]
On creative concepting vs. variant generation:
“What you really care about is the concept, and actually, it's coming up with concepts that you yourself couldn't come up with. The AI is not doing that much more to influence the performance [beyond that].”
— Eric Seufert [12:56]
On the future of creative + attribution:
“Within 18 months, we’re at brand-specific, full-fidelity video ads being generated without prompt, based on past performance.”
— Eric Seufert [18:48]
On healthy channel strategy:
“Diversifying for the sake of diversifying is often a mistake...max out the biggest channel to ROAS threshold, then move to the next.”
— Eric Seufert [36:38]
On measurement dysfunction:
“The most common thing I see is just a very chaotic approach to measurement.”
— Eric Seufert [38:33]
On GenAI creative tools:
“I think you could use off-the-shelf stuff and get almost all the value you’re going to get.”
— Eric Seufert [50:14]
This conversation provides an unvarnished, deeply practical map for app founders and marketers navigating the post-attribution era: automate judiciously, unite your teams around reliable measurement frameworks, don’t reflexively diversify channels, and stay skeptical of GenAI hype—but remain optimistic about the long-term transformation these tools can unlock for both startups and scale-ups.
For more insights like these, find Eric at Mobile Dev Memo and connect with the Sub Club community on chat.subclub.com.