Rick Bruner
Your data is like gold to hackers. They'll sell it to the highest bidder. Are you protected? McAfee helps shield you, blocking suspicious texts, malicious emails, and fraudulent websites. McAfee Secure VPN lets you browse safely, and its AI-powered scam-detection tech spots threats instantly. You'll also get award-winning antivirus and up to $2 million in identity theft protection, all for just $39.99 for your first year. Visit McAfee.com. Cancel anytime; terms apply.
Alan Chappelle
Welcome to the Monopoly Report. The Monopoly Report is dedicated to chronicling and analyzing the impact of antitrust and other regulations on the global advertising economy. If you're new to the Monopoly Report, you can subscribe to our weekly newsletter at monopoly-report.com, and you can check out all the Monopoly Report podcasts at monopolyreportpod.com. I'm Alan Chappelle. This week my guest is Rick Bruner, expert in randomized controlled trials for better advertising ROI measurement. He's recognized as a thought leader in advertising trends, analytics, and performance measurement. I wanted to have Rick on the pod because I think he brings a different perspective on measurement than the one regulatory and even most business folks in ad tech typically receive. So let's get to it. Hey Rick, thanks for coming on the pod. How are you, man?
Rick Bruner
I'm great. I'm excited to be here, Alan. Great to reconnect.
Alan Chappelle
Yeah. So you and I have never actually worked together. I think there was some overlap back in the late 90s at DoubleClick. You were on the research side, I was on the Diameter side. And bless their hearts, the DoubleClick folks kept the Diameter research people, like, locked in a closet and sort of far, far away from almost the entirety of the rest of DoubleClick, for whatever reason. But I don't think we really got to know each other until you started hosting some rooftop ukulele jams all across New York. I think we were part of a ukulele gang, as it were. I'm more of a piano player, I didn't really play much of the uke, but those things were always really fun. Anyway, for those who might not know, give us a little bit about your background here.
Rick Bruner
Sure. Well, in addition to being an enthusiastic ukulele player, I've been in this field of advertising research and analytics for certainly more than 20 years. You mentioned DoubleClick. I started there in 2004 and ran the research department. Then after the acquisition by Google, I ran research for North American ad sales at Google for about a year and a half. And I developed some of my obsessions with experiments, which we're going to talk about, at that time. Then I went on to some other similar roles at Viacom, at Marketing Evolution, which at the time in particular was a real thought leader in mixed modeling and attribution, and at Vayant. And then I set up this company that I run now called Central Control, which is all about using high-quality experiments for measuring advertising effect.
Alan Chappelle
Great, and that's really helpful, because I wanted to drive home to listeners here that I'm more of a policy and law nerd and you're clearly a research and measurement nerd. We haven't spoken a ton over the last couple of years, but your name came to mind for me in connection with a piece I was writing for the Monopoly Report newsletter. I came across a paper from Thomas Hoppner, who is a competition lawyer in Germany, and he was writing about how the walled garden ad platforms don't consistently enable advertisers to rate the efficacy of their media spend. As I was thinking about measurement and research, your name sort of came to mind, and it opened the door to at least a few places where there's probably some overlap between my policy brain and your research brain. And so, you know, just for our audience, most of whom are probably not research folks, can you spend a moment explaining why it's important for advertisers to be able to rate and verify and measure their ad spend?
Rick Bruner
Yeah, I was very flattered that you suggested somebody should give me a lot of money to fix the problems of measurement. I would gladly accept that challenge; it's what I'm trying to do. You know, that paper talked about measurement in a couple of different ways, and there are really two key fields of measurement in the media advertising sphere as regards this conversation and that piece. One is measuring what I characterize as the inputs of the advertising equation: the advertising impressions and all the details that go into that. The reach and frequency and audience definitions and placements where they showed up and, you know, ad formats and things like that. And then there are the outcomes. So there are the inputs and the outcomes. The outcomes are what I focus on in particular. I mean, I have a good understanding of the inputs side of the equation too from my various roles, but to my mind it's really the outcomes that are the key piece, and the one that's more neglected by the industry generally. What I mean by the outcomes is the incremental sales. You know, what the advertisers actually buy and pay for is the impressions, but they're not intending to just put flickering lights on screens. That's not the outcome they're looking for. They're looking for those flickering lights, in the form of advertisements, to change the minds of consumers who would have reached for a different product on the shelf so that they reach for their product. That's the incremental sales. Both are important to measure, really critical to measure, but I would say the second is the most critical. I mean, when we talk about ROI, return on investment, that doesn't mention impressions. It mentions the investment, which is the money the advertiser spends, which they can measure pretty well, and then the return. And that means the incremental return, as in the incremental sales.
And so if you were to just focus on measuring the outcomes and, you know, how much money you spent, you could go from bucket to bucket, you know, this ad, this ad tactic, this publisher, this ad channel, and understand which is working better for you.
Alan Chappelle
So one of the things that sort of struck me is that the focus in terms of measurement is less about accuracy and more about directional consistency. And it's funny, because it triggered a memory. Well, first of all, do you agree with that? And second of all, well, I'm going to share this story anyway. It's my pod, what the heck. So when I was at DoubleClick, I was working with their ad effectiveness product, and our main competitor was a company called Dynamic Logic. And all credit to Nick Nyhan and Tom Deierlein, because they kicked our butts. But the point I'm going to make is that I don't think it's disputable that the methodology of DoubleClick's ad effectiveness product was far superior. What the market really liked was that Dynamic Logic was simpler, it was cheaper, and it gave you a directional sense of where your particular campaign fit. So it kind of checked all of the boxes. And that's one of the reasons I think they ultimately, you know, kicked our butts, and we ended up selling to Dynamic Logic eventually. So back to my initial question: is directional consistency more important than accuracy?
Rick Bruner
Well, again, you know, let's talk about the inputs side of the equation and the outcomes side of the equation. Because what the JICs measure is the inputs, the reach and frequency and distribution of the media, and what Nielsen classically was measuring in television, the ratings. It's like, you know, you're paying for a certain number of impressions; are you getting that number of impressions? I think the reason there has been this movement to form a new JIC is about the accuracy. There definitely are disputes about the accuracy, so I wouldn't say the accuracy doesn't matter. But, you know, with the history of Nielsen, with its dominant position in the market for that kind of measurement, historically there was a sense that, well, we don't necessarily agree that we're being measured accurately, but everybody's being measured by the same yardstick, so we'll go along with it. When you're talking about outcomes, which is what you were talking about with ad effectiveness and Dynamic Logic, which is now part of Kantar, I don't even know if they use the Dynamic Logic brand anymore. What you're talking about with the DoubleClick solution, I think, predates me, or I wasn't really involved in that.
Alan Chappelle
So yeah, I think you came along post sale.
Rick Bruner
You know, what Dynamic Logic measures is brand lift, which is a form of ad effectiveness, but it's not the same as actual sales lift. But I guess I would agree with your premise on the outcome measurement piece. You know, I am in market selling the gold standard, which is randomized controlled trials, which science agrees is the best way to measure cause and effect. And that's what we're talking about: the advertising should be causing people to buy more. And the industry is very satisfied with lesser standards: attribution measurements and matched market testing and a whole field of measurement known as quasi-experiments. So yeah, it's lonely at the top of the hierarchy of evidence, because people want it easy.
Alan Chappelle
Yeah. And so that gets me to, I think, the central premise of the Hoppner paper, which seems to indicate that one of the key challenges here is that the walled gardens in general are just not allowing a level of transparency into, and not providing, that type of information. And that's having a negative impact on, I don't think they draw this distinction, but both the inputs and the outputs. And so I guess what I'm trying to glean from you here is: is that a small part of the problem, a medium-sized part of the problem, or the entirety of the problem at this point?
Rick Bruner
The walled gardens. You know, the technique that my company uses, and some other practitioners, is really geo experiments, large-scale geographic experiments. And when I say large scale, I mean, you know, we typically use DMAs, designated market areas, of which there are 210 in the United States. They don't overlap and they cover the whole country, so you can basically divide up the whole country into these units. We randomize them, which is that gold standard, into test and control groups, and have media companies serve the ads in the test group and not serve any ads in the control group. And then we can measure, using zip codes out of the client's first-party database if they have that, sales by zip code. It's all privacy agnostic, and it tells us with good certainty whether the ads drove more sales in the test groups versus the control groups. And that method works for just about all media. You can run, you know, DMA targeting in linear television and radio and outdoor, as well as in most forms of digital, including Google search and other Google media and Meta. So they do enable that. What they also do is have about 75 other ways available to you to measure effectiveness, most of which are not as high-quality evidence, like attribution. And the most insidious, it's mentioned in the paper, is these new programs that they're coming out with, Performance Max by Google and Advantage+ by Meta, where it's a black box of optimization. The premise is you just give them all your money and they'll optimize in your best interest for you. And I think that's a dangerous proposition, to let the seller grade its own homework. But I wouldn't say they make it impossible. I think the bigger problem is the will by the advertisers to get the right answer. In medicine, this kind of randomized controlled trial, at least for the time being, is required by the FDA.
If you want to bring a pharmaceutical product to market and say that it has a curative property, you have to run clinical trials, which are the same sort of randomized controlled trial. But in advertising, nobody dies. If you spend your money badly, it's just money, and better yet, it's other people's money. So who really gives a darn about getting the right answer? And there's this pervading idea that experiments are hard. You know, our motto is that we make them easy: advertising experiments done right, made easy. They're not that hard; there's just this combination of inertia and misunderstanding. The hardest part is getting leadership at large advertisers to commit to measuring it right. And there are a lot of advertisers that do, increasingly many, but the vast majority take what's handed to them for free and do that.
Alan Chappelle
So okay, all that makes sense. You talked about PMax. One of the challenges that I see, and it sounds like you may think this is less of a challenge, and if so, let's discuss that, but one of the challenges I see is that you are increasingly relying solely on the numbers and the methodology of the black box, and often that's being justified on the basis of, well, we can't give anybody else any data because privacy. But the challenge with that, in my view, is that it may be leading us down a road of, well, okay, can we really measure attribution if the incentive structure is such that whoever happens to touch the thing last is the one who's going to get credit for it? And oftentimes that's the same, you know, walled garden. So, like, am I not thinking about this the right way?
Rick Bruner
No, I think you are. You know, I said that you're able to run these kinds of geographic experiments on Google and Meta, but not within those black box algorithms. If you are using PMax, you can't really hold out randomized DMAs; it's an all-or-nothing proposition. But if you do it the old-fashioned way of just running, you know, a search campaign or another kind of Google campaign, for example, you can apply the holdout. You know, I wrote a piece recently that said that if you're a media company who's not Google and Meta, better measuring return on ad spend is really a survival requirement for you. I started the article saying, do you hear that giant sucking sound? That's all the money in the ad industry draining into the bank accounts of Google and Meta. And I think what they've done is they've made it very easy to buy, you can self-serve very effectively, and they've told a convincing story about the performance of the advertising. You know, there are these ways that you can measure it, but people just rely on the self-reported measurement, by and large. And that, you know, makes it seem like it's doing great. And I think, for other media companies, touting that they have very appealing audiences with some kind of characteristics, or that they've got some fancy new ad format, isn't going to counter that. You know, we showed with some data from Guideline, which is billing data from the biggest advertising companies, that more and more share of the total ad market is going to Google and Meta. So yeah, I think it's about that. People just believe the story they're telling, and I think advertisers should be more skeptical. If the industry were to standardize on being able to target reliably to zip codes, you know, which are anonymous, there's no need for privacy concerns, particularly. I know there's like a... Oh, wait.
Alan Chappelle
Wait, hold on there.
Rick Bruner
I was about to, you know, backpedal on that a little bit. Well, I don't know what you were going to say to that, but.
Alan Chappelle
Oh, I just don't like the idea of somebody thinking that they don't need to engage a privacy lawyer. But that's just completely my own thing.
Rick Bruner
I see, I see.
Alan Chappelle
No, well, to answer your question, I think the answer is that you're right. There's an identifiability challenge, particularly with some ZIP+4s, which might be very small locations. In New York, there are buildings which are just a single zip code. And so there may be implications around identifiability and what you can glean from the data set. And then, in certain cities, certain ZIP+4 codes are very highly skewed toward certain ethnicities. And so arguably, at some point, you're, you know, targeting or measuring based on, you know, someone being of Latin American culture, or Irish or Jewish or whatever. And so there's an ethnicity component to it. But those are sort of the broader points I would make. It's not just about, like, you know, I need to get paid.
Rick Bruner
Yeah, we all need... you know, Bob Dylan said, you gotta serve somebody. Well, first of all, the idea wouldn't necessarily be to use ZIP+4; five-digit zip codes would be fine. In cases where, you know, you could back out identities because there's a small population associated with a zip code, you could just exclude those zip codes. But the point being, you know, I mentioned that not just my company but a lot of companies that are doing outcome measurement are using geographic experiments, or forms of geographic experiments, but mostly with DMAs, of which, you know, there are only 210.
Alan Chappelle
Yeah, that's a much broader set.
Rick Bruner
Right. But from a research standpoint, from an experiment standpoint, it would be better to have a lot more. And there are on the order of 30,000 zip codes in the country. If we were to shave off 5,000 of them that have, you know, small enough populations that the identification issue is germane, that would still be fine. We could still do it with 25,000. But right now the problem is you can't really. And I'll spare the details, but, you know, you can't reliably target to zip codes with the Internet for the purposes that I'm talking about, where you want one family, one household, one person, one zip code. But the big media companies like Google and Facebook and Amazon could today. What they do is they glom lots of zip codes onto your profile, because then, when somebody comes to buy zip code targeting, they can fill up any zip code with a lot of people. But if they were to give the option to target to your primary zip code, you know, your zip code and my zip code where we live, then out of those 25,000 you'd put 12,000 into the test group and 12,000 into the control group, randomly assigned. And then, if the media could reliably be delivered just to the test group, advertisers and measurement companies like NielsenIQ that have zip-code-level information could pull, you know, the sales counts by the list of the test zip codes and the list of the control zip codes. And then we could see whether there was a significantly larger effect in the test versus the control. And if people were to appreciate why this is important and worth doing, and the media companies were to standardize on this, it would solve so many problems. I mean, one of the big red herrings is that identity is important in measurement. It's not. You know, you could measure deterministically, as I'm describing, with something that is not personally identifiable.
And for smaller media companies, this is where, you know, this would be a hurdle for its adoption. But I mean, if the IAB Tech Lab and the IAB were to really want to push forward a way to improve measurement in a privacy-safe way, with a minimum of technology overhead required, you know, they could recommend this to publishers. And for smaller publishers, what I would recommend they do is create a registration requirement to access their content. But you don't need to include your email address; just give a burner ID and password and your zip code. And then all these media companies that currently don't have anything but cookies, you know, the ones that don't have a paywall, would have accounts for all of their readers. They wouldn't know who they were, but they could append behavioral targeting parameters and other things to enhance their ability to sell their audience in the marketplace. But from a measurement standpoint, you don't need cookies, you don't need pixels, you don't need clean rooms, you don't need identity graphs, you don't need to pay all those extra vendors. And the user level is not good for measurement, because you don't match it 100% anyway. You probably match it 80% if you're lucky. And most ad effects you're trying to measure are like 3%. You can't measure 3% if you've got 20% loss from the identity matching. And so I really think it would be a huge solve for the problem.
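To make Bruner's match-rate point concrete, here is a toy sketch. The conversion counts and match rates are invented for illustration, not figures from the episode: a true 3% lift survives user-level matching only if the match rate is identical in both groups, and a few points of asymmetry can flip its apparent sign.

```python
def measured_lift(test_conversions, control_conversions, match_test, match_control):
    """Lift computed from only the identity-matched conversions in each group."""
    matched_test = test_conversions * match_test
    matched_control = control_conversions * match_control
    return (matched_test - matched_control) / matched_control

# True lift is 3%: 103 vs. 100 conversions per thousand users.
uniform = measured_lift(103, 100, 0.80, 0.80)  # identical 80% match rates
skewed = measured_lift(103, 100, 0.78, 0.82)   # match rates differ by a few points

print(round(uniform, 3))  # 0.03  (the true lift survives)
print(round(skewed, 3))   # -0.02 (the true +3% now looks negative)
```

The point of the sketch is that the measurement error introduced by imperfect matching is on the same scale as the effect being measured, which is why Bruner prefers geographic randomization over user-level identity.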
Alan Chappelle
So where I thought you were going to go with this was... and I'll just let you react to it. This is something that sort of occurred to me as I've tried to get myself up to speed on these types of issues. It's like there's a huge endemic challenge where, once you start looking under the hood and discovering problems, you will need to fix them. And the fixes are neither cheap nor easy. And so, at least from what I've read, isn't that sort of the thing keeping folks away from solving a lot of these types of problems?
Rick Bruner
I think the buyers should implement the golden rule, as in, he who has the gold makes the rules.
Alan Chappelle
I know, but that never happens. The amount of influence buyers exert, relative to the amount of ad spend they control, is almost comically small. And I have some theories on that, but that's sort of the reality we're in.
Rick Bruner
No. Yeah, I mean, I said earlier that nobody dies if you get the ad budget wrong. And lots of people have conflicting incentives that work against improving it. The agency, for example: they get compensated on a share of spend. So are they ever going to say you should spend less, you know, on this medium or that medium? And they get paid variably for different media. Hypothetically, they may charge 15% commission for programmatic and 5% for television. So it's no wonder they're pushing more dollars through to programmatic. And the advertisers themselves: if you're in the search department at a large company, are you going to be in favor of testing thoroughly to make sure search deserves as much budget as it's getting in the mix? No. You have your fiefdom. You're going to fight to keep as much budget as you can. As for the ones who really care: it's gradually changing. More and more advertisers are coming around to that point of view. You know, if you're a Fortune 500 company, the smallest Fortune 500 company is like $10 billion in revenue, so you're probably spending about a billion dollars in advertising. And then, sure, nobody dies, but that's a lot of money. It's your job to get the right answer, and so there's more of a priority to get the right answer. But, yeah, there's just a lot of conflicting incentives that also prevent it from happening efficiently.
Alan Chappelle
Well, but also there's sort of an endemic thing within the brand advertiser community in particular. So, like, I don't know that Coke or Pepsi, either of them, care about any of this except for two things. They want parity. Coke does not want to think that Pepsi has some kind of advantage or, you know, some sort of special deal. They want parity in how the systems work, because as long as there's parity, I don't think they care. And the second thing is that they want to minimize the number of times they end up on the cover of the Wall Street Journal for being adjacent to something that they shouldn't be, or, you know, being associated with some shady practice. And as long as number one exists and number two is minimized, I just don't know what the incentives are.
Rick Bruner
Well, I think Pepsi would love to displace Coke as number one in the market if they had a better way of doing it. One example I give is Netflix. Netflix, back around 2017, hired what I describe as the Manhattan Project of incrementality measurement. They hired two scientists who came out of Google, who authored this tremendously influential paper called Ghost Ads, and they were part of a team of like 12 peer scientists at the top of their game in measuring incrementality. And after a lot of measurement, they cut search from their ad budget. And I am in no way suggesting that branded search doesn't work for lots of advertisers; we proved it does for some of our clients. But at the time, Netflix was far and away the biggest streaming TV service, and if you searched any of the titles of their shows, you would find them in the natural results page. And they concluded, through that kind of rigor, that they would just cut search. Can you imagine direct-to-consumer companies cutting paid search from their budget? It's usually the biggest piece. You're never going to come to that conclusion using mixed models and attribution models and quasi-experiments. The only way they had that level of confidence that it was just not incremental was by doing this kind of rigorous work. And those are the kinds of companies that are far ahead in this kind of measurement: Booking.com, Airbnb, Uber, Wayfair. They have teams of people doing this kind of thing. And market share is a zero-sum game. You know, if you're doing a lousy job of measuring while your competitors actually know what is working, if you just get numbers and charts that satisfy your boss and go through the motions, you're losing share, and it's a matter of time before you lose your job.
Alan Chappelle
Fair enough. And that seems to be the quandary is like, you know, do something revolutionary and potentially get fired or ride out the next three to five years. And for better or worse, a lot of people pick the latter.
Rick Bruner
Yeah. I mean, revolutionary, though? This stuff was deemed the best way to do it by science 100 years ago. But causality is hard for people to grok. It really is. It's weird.
Alan Chappelle
Well, let's end it there. But this has been a fantastic conversation, Rick. I really appreciate you coming on and it's great to reconnect, man.
Rick Bruner
Yeah, I feel the same way. Thanks a lot for having me on.
Alan Chappelle
That was a great conversation, and I'm curious to see how Rick's thoughts about measurement are received by the privacy, legal, and business teams within the ad space. We have a bunch of other fantastic guests coming up on the Monopoly Report podcast over the next few weeks, including Rob Leathern and Professor Daniel Solove. Please subscribe to the show at monopolyreportpod.com, or on Spotify, Apple, YouTube, or wherever you listen to your podcasts. Thanks for listening. Thank you for listening to the Marketecture podcast. New episodes come out every Friday, and an insightful vendor interview is published each Monday. You can subscribe to our library of hundreds of executive interviews at marketecture.tv. You can also sign up for free for our weekly newsletter, with my original strategic insights on the week's news. And if you're feeling social, we operate a vibrant Slack community that you can apply to join at adtechgod.com.
The Monopoly Report: Episode 23 Summary – Rick Bruner on The Value Proposition of Measurement
Release Date: March 26, 2025
In Episode 23 of The Monopoly Report, host Alan Chappelle engages in a thought-provoking conversation with advertising measurement expert Rick Bruner. This episode delves deep into the intricacies of advertising measurement, highlighting the critical distinction between input metrics and outcome metrics, and explores the challenges posed by major tech companies' walled gardens in providing transparent and reliable measurement tools.
Rick Bruner brings over two decades of experience in advertising research and analytics to the table. Beginning his career at DoubleClick in 2004, where he led the research department, Bruner later transitioned to Google post-acquisition, overseeing research for North American ad sales. His career trajectory includes significant roles at Viacom and Marketing Evolution, culminating in founding Central Control, a company dedicated to leveraging high-quality experiments for measuring advertising efficacy.
Rick Bruner [02:15]: "I've been in this field of advertising research and analytics for certainly more than 20 years."
Bruner underscores the importance of distinguishing between input metrics—such as impressions, reach, frequency, and audience definitions—and outcome metrics, which focus on the actual impact of advertising, specifically incremental sales. He emphasizes that while input metrics are essential for understanding what is being delivered, it is the outcome metrics that truly reflect the return on investment (ROI) advertisers seek.
Rick Bruner [04:21]: "The outcomes are what I focus on in particular... it's really the outcomes that are the key piece, and the one that's more neglected by the industry generally."
At the heart of effective advertising measurement lies the ability to quantify incremental sales—sales that occur as a direct result of advertising efforts. Bruner argues that without accurately measuring these outcomes, advertisers are left in the dark about the true efficacy of their campaigns, relying instead on less reliable attribution methods.
Rick Bruner [06:42]: "I have a good understanding of the inputs side of the equation too... but to my mind it's really the outcomes that are the key piece."
Bruner champions Randomized Controlled Trials (RCTs) as the gold standard for measuring causality in advertising. Unlike quasi-experimental designs or mixed models, RCTs provide the most reliable evidence of whether advertising efforts are truly driving incremental sales.
Rick Bruner [09:17]: "I'm in market selling the gold standard, which is randomized controlled trials, which science agrees is the best way to measure cause and effect."
A significant portion of the discussion centers on the challenges posed by major tech companies—often referred to as "walled gardens" like Google and Meta—in delivering transparent and effective measurement tools. Bruner criticizes the reliance on proprietary algorithms and black-box optimization tools, such as Google's Performance Max (PMax) and Meta's Advantage Plus, which limit advertisers' ability to conduct independent and accurate measurements.
Rick Bruner [10:53]: "They have about 75 other ways available to you to measure effectiveness, most of which are not as high-quality evidence... it's a dangerous proposition to let the seller grade its own homework."
To circumvent the limitations of walled gardens, Bruner advocates for the use of geo experiments—large-scale geographic experiments that randomize designated market areas (DMAs) into test and control groups. By deploying ads in test areas and withholding them in controls, advertisers can accurately measure the impact of their campaigns on incremental sales using reliable, privacy-agnostic data.
Rick Bruner [10:53]: "We randomize them, which is that gold standard, into test and control groups and have media companies serve the ads in the test group and not serve any ads in the control group."
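The mechanics Bruner describes can be sketched in a few lines. This is an illustrative toy, not his company's implementation: the DMA ids and sales figures are invented, and real designs add stratified randomization and significance testing. But it shows the core idea of randomly splitting the 210 DMAs into test and control, serving ads only in test, and comparing average sales.

```python
import random
from statistics import mean

def assign_groups(units, seed=7):
    """Randomly split geographic units (e.g., DMAs) into test and control halves."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def incremental_lift(sales_by_unit, test, control):
    """Ratio of mean test-group sales to mean control-group sales, minus one."""
    return mean(sales_by_unit[u] for u in test) / mean(sales_by_unit[u] for u in control) - 1

dmas = [f"DMA-{i:03d}" for i in range(210)]  # 210 non-overlapping DMAs cover the US

test, control = assign_groups(dmas)

# Invented sales data: a flat baseline of 100 per DMA, with a 3% lift where ads ran.
sales = {u: 100.0 for u in control}
sales.update({u: 103.0 for u in test})

print(len(test), len(control))                           # 105 105
print(round(incremental_lift(sales, test, control), 3))  # 0.03
```

Because assignment is randomized, any systematic difference between the groups' sales can be attributed to the ads rather than to pre-existing regional differences, which is what makes this design the "gold standard" Bruner refers to.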
While geo experiments offer a robust method for outcome measurement, Bruner acknowledges the inherent privacy challenges, particularly when targeting small zip codes that may inadvertently reveal personal information. He suggests strategies such as aggregating zip codes or implementing registration requirements with non-identifiable burner IDs to mitigate these concerns.
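The suppression step Bruner alludes to, shaving off the smallest zip codes before randomizing, is simple to express. The population threshold and zip codes below are illustrative assumptions, not figures from the episode:

```python
def eligible_zips(population_by_zip, min_population=1000):
    """Keep only zip codes populous enough that zip-level sales data is not identifying."""
    return sorted(z for z, pop in population_by_zip.items() if pop >= min_population)

# Hypothetical populations; "99999" falls below the threshold and is suppressed.
populations = {"10001": 27000, "10002": 74000, "05401": 18000, "99999": 42}
print(eligible_zips(populations))  # ['05401', '10001', '10002']
```

The surviving zip codes would then be randomized into test and control groups exactly as with DMAs, just with far more units and therefore more statistical power.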
Rick Bruner [19:22]: "If people were to appreciate why this is important and worth doing, and the media companies were to standardize on this, it would solve so many problems."
Bruner highlights the conflicting incentives within the advertising ecosystem that deter advertisers from adopting rigorous measurement practices. Agencies, for example, may prioritize higher-spend areas where they receive greater commissions, regardless of the true ROI. Additionally, the allure of easy, albeit less accurate, measurement tools often leads advertisers to settle for subpar methodologies.
Rick Bruner [24:04]: "Nobody dies if you get the ad budget wrong. And lots of people have conflicting incentives to improve it."
Bruner cites companies like Netflix, Booking.com, Airbnb, Uber, and Wayfair as exemplars in adopting rigorous measurement practices. These organizations invest heavily in scientific measurement teams, enabling them to make data-driven decisions that optimize their advertising spend and enhance market competitiveness.
Rick Bruner [26:36]: "Booking.com, Airbnb, Uber, Wayfair. They have teams of people doing this kind of thing."
The episode concludes with a reflection on the necessity for the advertising industry to embrace scientific measurement methods. Bruner advocates for industry-wide standards and collaboration to overcome the barriers posed by walled gardens and internal incentive misalignments. By prioritizing accurate outcome measurement, advertisers can ensure their budgets are effectively driving incremental sales, ultimately leading to more informed and strategic advertising investments.
Rick Bruner [28:45]: "Causality is hard for people to grok. It really is. It's weird."
Rick Bruner's insights shed light on the critical need for accurate and transparent advertising measurement. As the industry grapples with increasing complexity and the dominance of major tech platforms, the adoption of scientifically rigorous methods like randomized controlled trials and geo experiments becomes imperative. For advertisers aiming to maximize ROI and maintain a competitive edge, embracing these measurement strategies is not just beneficial; it is essential.
For more in-depth analyses and discussions on big tech's antitrust issues and their impact on the advertising economy, subscribe to The Monopoly Report and stay informed with the latest insights from industry thought leaders.