C (22:17)
Yeah, but I'm learning from you guys. I think we all have the same rep, but I've met with him and been like, oh, this is how other people are doing it. So I'm always learning, trying to learn from, you know, Connor Dalton, who runs a great program, and some of the bigger brands. I think what you described is probably the best scenario to be in: hey, I think this is working, let me go try it again, spend more on it, and see if it continues to work. That's a scale test, which is amazing. That's what you want: hey, we think this is working, even if it's not going to look as great in an MTA or in platform as we think it is. We're there with CTV right now. We have two good reads on CTV and we haven't really pushed. We got a good outcome in October, and it's like, all right, we want to go spend more. For me, it comes down to: what's the hypothesis? Let's not test for the sake of it. What's the hypothesis, what do we hope to learn, and how will we act on it? "We think this is performing really well and we actually think we should be spending more." Perfect, that's a great actionable hypothesis. So some of the action actually starts with the hypothesis, and that's where you want to be with Meta right now. We've done this in the past, and we're in a situation currently where Meta performance hasn't been what we wanted. That's a little harder and more challenging, because first you need to get a baseline right. It's been, you know, three months since I tested the channel, maybe six. A lot of things have changed. The good thing about Meta, at least for us, maybe not for hexcloud, is we can do a one-week test and get a good read. Yeah, all right.
We ran it, and it wasn't where we wanted. Let's call it 30% above where we need to be on our cost per new customer. But in-house you don't get that much data. You get incrementality, you get new versus returning, so at least new versus returning is the starting point. And then there's just a lot of hypotheses: is it exclusions? Is it this, is it that? You just go and dive in. This is where Claude has been great. My growth manager has been doing a really, really great job just diving in. I've done this in the past and kind of tried to teach it: all right, let's diagnose this from first principles. What in our account has shifted and changed over time? Let's build some dashboards and view stuff. For us it always really comes down to a reach problem. It always seems like that. So we needed a big change, and we just ran another test where we made a bunch of changes. I got this from Connor McDonald like a year ago, where you guys, I think, did a lot of stuff: you added view content, you added video, you added exclusions. We just had a bunch of different hypotheses and we did a lot at once because we needed a step change. My hypothesis was that over-relying on attribution was hurting us, and really hurting reach. So we scaled up partnership ads a lot, because we'd had a really good test on partnership ads. We added a new campaign that had partnership ads with view optimization, theoretically to drive more Reels delivery. We just had some theories and hypotheses based on that. Our account was very heavy on VO volume, so we significantly shifted it to CO, conversion optimization. So we made a bunch of changes, but they were all under the same thesis: we're not doing a great job reaching new people.
And you know, obviously we talk about mid-funnel being important, but I also think it's important to improve your CPMs and your reach even within the purchase objective as well. And that was it. We're looking at that data, we're looking at Northbeam MTA, we looked at our exclusions as well, we saw our percent new going down. There were a lot of signals that kind of told the same story. I guess that's what you're trying to look at to build the hypothesis. And then we tested it, and fortunately we were able to get the numbers to where we need to be, so we were pretty happy with it. It's not completely there, though. So even now it's like, okay, we did analysis, because we don't want to just get complacent and say, all right, we're now where we need to be, the account's good. It's like, well, we moved the account to this much on CO; should it be higher? We have one view-optimization campaign; should all of our campaigns be there? Partnership ads are 35% of spend now; should it be more? Just these different hypotheses that we've gathered over time. And then I just think it's: continue to roll up your sleeves. That's kind of how we're approaching it. And sometimes it's just attacking a channel: let's get four Meta tests in a row, let's test a bunch of different things based on hypotheses, and then let's just roll up our sleeves and test them.