Transcript
A (0:00)
The only thing worse than putting research out into the world and getting pushback is putting research out and hearing crickets. Last month I covered new research from Albertsons Media Collective, Ovative Group, and Northwestern's Kellogg School of Management in my column at The Drum. The headline finding was that across 42 real campaigns, iROAS could vary by an average of 6.5 times depending solely on how the measurement was done, and in 83% of those campaigns the result could flip from positive to negative based on methodology alone. That piece really hit a nerve: over 13,000 impressions on LinkedIn, which is above average for me, and a comment thread that turned into a real debate about measurement methodology, the kind you don't always see in retail media, where a lot of the discourse stays polite and surface level. So I reached out to the research team and asked if they'd respond on the record, and they said yes right away. That willingness to engage rather than retreat is worth noting on its own.

So let's jump into what the industry said. What were the questions and pushbacks on that article? Here's what stood out from Professor Koen Pauwels, a Distinguished Professor of Marketing at Northeastern University and a former principal research scientist at Amazon. He raised a methodological scope question: the three approaches compared in the study are quite different from one another, not the minor tweaks the headline might suggest, and he asked why aggregate approaches like marketing mix modeling and geo experiments weren't included. It's a fair challenge, and one the authors chose not to address directly. By the way, Dr. Pauwels writes a very good Substack newsletter that's worth subscribing to; I'll link to it in the show notes.

The second thread that stood out was from Venkat Rahman, the co-founder and CEO of Arima Labs.
He argued that BSTS, the Bayesian structural time series approach, isn't a causal method at all, that it's a forecasting algorithm being misapplied, and that the study's conclusions are therefore built on shaky foundations. His comment drew the most engagement in the thread, including a respectful rebuttal from Moody Khan, the VP of RMN Measurement Strategy at Sakana, who cited peer-reviewed literature supporting BSTS for causal inference. I'm not going to adjudicate that debate myself, but the fact that it's happening publicly, with named practitioners staking out positions, is very healthy for an industry that usually keeps these arguments behind closed doors.

And finally, Dan Waldman, the Senior Technical Product Manager for Ads Reporting and Measurement at Chewy Ads, asked a question that may matter more for day-to-day practitioners than the methodological argument: what are brands actually using iROAS for? Is it proving that campaigns work, or optimizing and allocating budget in real time? Those are different jobs, and iROAS, a lagging metric that can take weeks to reach statistical significance, is better suited to the first, proving that campaigns actually work, than to the second.
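For listeners unfamiliar with the mechanics being debated here: BSTS-based lift measurement fits a time series model to pre-campaign sales, forecasts what sales would have been without the campaign, and treats the gap between observed and forecast sales as incremental revenue; iROAS is that incremental revenue divided by ad spend. The sketch below is a deliberately simplified illustration of that counterfactual logic, not real BSTS: a plain linear-trend fit stands in for the Bayesian structural time series model, and all sales and spend figures are hypothetical.

```python
# Simplified sketch of counterfactual lift measurement. A linear trend
# fitted to the pre-period stands in for a real BSTS model; the debate in
# the thread is precisely about whether this forecast-then-difference
# logic supports causal claims. All numbers are hypothetical.

def fit_linear_trend(y):
    """Least-squares slope and intercept over the pre-period."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    slope = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

def iroas(pre_sales, post_sales, ad_spend):
    """iROAS = (observed revenue - counterfactual revenue) / ad spend."""
    slope, intercept = fit_linear_trend(pre_sales)
    # Extend the pre-period trend into the campaign weeks as the
    # counterfactual ("what sales would have been without the campaign").
    counterfactual = [slope * (len(pre_sales) + t) + intercept
                      for t in range(len(post_sales))]
    incremental = sum(o - c for o, c in zip(post_sales, counterfactual))
    return incremental / ad_spend

# Hypothetical weekly sales: roughly flat pre-period, lift during campaign.
pre = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0]
post = [130.0, 128.0, 132.0, 129.0]  # campaign weeks
print(round(iroas(pre, post, ad_spend=50.0), 2))  # → 2.45
```

Note what the sketch makes visible: the answer depends entirely on the forecast model chosen for the counterfactual, which is why the same campaign can produce very different iROAS numbers under different methodologies, and why the metric only stabilizes once enough post-period weeks have accumulated.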
