Transcript
B (0:05)
Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
C (0:15)
Hi, everyone. Welcome to the Analytics Power Hour. This is episode 290. I'm Tim Wilson and I'm joined for this episode by Val Kroll. How's it going, Val?
D (0:25)
Fantastic. Excited for today?
C (0:28)
Outstanding. Unfortunately, we were supposed to also be joined by Michael Helbling for this show, but he's gone all on brand for the winter and gotten the flu. Luckily, as we're into our 11th year of doing this show now, we've learned a thing or two about rolling with the punches. And as it turns out, learning is the topic for today's show. I mean, it's implicit in all forms of working with data. We're looking at analysis or research or experimentation results and hoping, just hoping, that we come out of the experience with a deeper knowledge of something. And hopefully it's something useful. More knowledge than we had before. It's a simple idea. Sometimes, though, it's a little harder to execute in practice. That's why we perked up when we came across an article from some folks at Spotify about their Experiments with Learning (EWL) framework. We're excited to welcome one of the co-authors of that piece to today's show. Mårten Schultzberg is a product manager and staff data scientist at Spotify. He has a deep background in experimentation and statistics, including actually teaching advanced statistics in a prior role for a number of years. So who better to chat with about learning? Welcome to the show, Mårten.
A (1:45)
Thank you so much. Excited to be here.
C (1:47)
All right, we are. We're borderline, like, giddy about the topic, as we were diving into our excitement before we hit the record button.
D (1:59)
We definitely fought over who got to be on this one.
A (2:02)
Yeah.
C (2:05)
So Mårten, the article that I referenced in the opening, which we're definitely going to link to in the show notes, is a great read. You and your co-authors make the distinction between a win rate and a learning rate for experimentation, and that's kind of the premise of the article. Is this learning rate a proposed metric, or a metric that's actually in use? That seems like a good place to start. So maybe you can explain what you were seeing as a drawback to too much focus on win rate as a metric for experimentation programs?
