A
Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
B
Hi everyone, welcome. It's the Analytics Power Hour and this is episode 297. You know, at this point, I think we've all been handed some kind of output that is obviously AI generated. I mean, it even has its own term: AI slop. Oddly enough, most people don't appreciate being handed AI output without it being thought through by a real person. And it got us thinking. It's not that what AI produces is always bad, but there are starting to be different categories of content based on whether AI produced it or not. And maybe there is something to be said for analytics wisdom that existed way before AI and will keep being true no matter how AI evolves in the future. So we reached back to the formative moments of our own careers to share some of the hard-won wisdom from our careers in analytics. And maybe this episode will lack a little AI, but I think it'll still be worthwhile. So let's introduce the people who make up the show. Hey, Val Kroll.
C
Hey, Michael.
B
I'm so glad you're here. And we've got Tim Wilson.
D
Howdy. Howdy.
B
And I still put up with you somehow, so that's good. And Moe Kiss. How you going?
C
How you going?
B
Oh, Jinx. Jinx.
D
And Julie Hoyer.
B
Welcome.
C
Hey there.
B
And I'm Michael Helbling. So yeah, we've got the whole team together. We've all seen so much in our careers pre-AI. And so what's some wisdom that exists that's going to be useful whether AI is involved or not? Who wants to kick us off with something they've learned over their career?
E
Oh, I will. Because this one, I feel like, sticks with me still to this day. And this was more than a year ago. This was
F
three, four years ago.
E
But it was actually back when me and Tim were working together. And I'll never forget, we were talking about randomization and trying to figure out, like, the best way to represent it, different ways to think about it. I think this was right around, Tim, when we were trying to do a talk around, like, blocking, randomization with blocking and things. And so we could not figure out how to make it, you know, friendly to people who had maybe not thought about it as in depth as we were at that moment. We were, like, way too in the weeds. And I remember Slacking Tim, like, the next morning, like, right at the beginning of work and saying, so I had
D
I thought it was the next morning because it was a glass-of-wine, sitting-on-your-couch thought, I believe. Yeah.
E
I was like, maybe I even Slacked you that night and was like, look at this in the morning. But I was like, I just was having a glass of wine, and it came to me. I was like, I feel like a good way to explain randomization would be through colors. And so I told him how, you know, like, what if each color represented a characteristic? And when you randomize the population, you could see how the colors were split between test and variation. And what if those colors were blended to show, like, a group color? And then you could see that the color comes out really close because it's randomized across the characteristics. And so, Tim, I knew he couldn't resist, and that's why I told him. So I was like, that's about as far as I can take it. Tim made a whole Shiny app to let you choose the size of your sample, how many characteristics, the predictiveness of those characteristics on your outcome, the mean, all these things. And then you could actually run a simulation and see the colors blended. And then you could, like, flip blocking on and off. And there have been multiple times, like, even earlier this year, when I'm thinking about certain things with randomization, I'll still go back to that Shiny app and, like, play with it. And it helps me a lot with, like, talk tracks or, like, simplifying it or reminding myself of different, like, characteristics of it. So that's one of my favorites.
D
I like that one, too. Because we need to. We got a little carried away because it was like, well, will this work? And I think it was built very quickly. And then Julie was like, what about if we. You think we could do this? Like, what if we did a. And I was like, what if we did? And so it got a little involved.
E
My hands never touched the keyboard. That was the best part. I got the whole thing.
D
It was like voice control.
F
Why do you think it stuck with you so much as, like, the antithesis of, like, the AI slop world? Like, what was the. I don't know. What just made it really resonate, or keeps you coming back?
E
I think it's the. The visualization of it. I don't know why the color is what stuck with me. And sometimes, like, a good visual is so much easier for me to go back to. And I think the AI part, this is. I swear I'm gonna loop back to your answer, but I even remember back in college, starting in engineering, they just wanted to give you, like, an output to use. They'd say, just use this output. Use this output, like a formula. And I know I'm not alone this way. I always needed to understand why, so that if I forgot the perfect formula, I could, like, reason my way back to my understanding. And there's something about that Shiny app and the visualization of a color that just, like, dang it, like, hit something in my brain that when things get fuzzy and I haven't talked about those things in a while, haven't thought about those things in depth, like, it kind of brings back some of that understanding. Like, I can have a good starting point and, like, think through it again. And the AI part is, like, again, they just give you output. And that output is actually, like, the variation of an AI, you know, agent or whatever, like, giving you something. It's never exactly the same. So I like that it is steady and a starting point for me.
F
I love that.
D
The reason I got so excited, I think, is this color piece. Because I will chime in that when we talk about random assignment and kind of the power of random assignment, we'd say, oh, so if you have, you know, a thousand people and you randomly split them, you're going to have roughly the same number of men and women in each group and roughly the same household income distribution. And, like, that idea that you're making two groups that are effectively the same, like, it's just. It's abstracted when you talk about all the characteristics of what's in it. And I think when I thought about it, like, I mean, it literally is kind of a color mixer that just sort of generates a palette, and then you see the average color, and it may be, like, this lime green in slightly different shades, but it's like a simple math thing. So to me, it just goes into my head of saying, this is what's happening, why we're making two groups that are pretty damn close to the same. But the fact that, I mean, that was, to me, kind of the really useful part of that, is I was trying to grapple with how do I actually internalize this? And then it was very easy to say, okay, now that extrapolates to these other, more nebulous characteristics of psychographic details and demographic details.
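That color-mixer intuition translates into a few lines of code. This is just a toy sketch in Python, not the actual Shiny app Tim built; the characteristics and the colors assigned to them here are invented purely for illustration:

```python
# Toy version of the "color mixer" idea: each person carries a few
# categorical characteristics, each characteristic maps to an RGB color,
# and a random 50/50 split produces two groups whose *average* color is
# nearly identical -- because randomization balances the characteristics.
import random

random.seed(42)

# Hypothetical characteristics and the colors representing them.
COLORS = {
    "new_visitor": (231, 76, 60),   # red
    "returning":   (52, 152, 219),  # blue
    "mobile":      (46, 204, 113),  # green
    "desktop":     (241, 196, 15),  # yellow
}

def make_population(n):
    """Each person gets one visitor type and one device type at random."""
    return [
        (random.choice(["new_visitor", "returning"]),
         random.choice(["mobile", "desktop"]))
        for _ in range(n)
    ]

def average_color(group):
    """Blend everyone's characteristic colors into one group color."""
    channels = [0.0, 0.0, 0.0]
    for visitor, device in group:
        for trait in (visitor, device):
            for i, c in enumerate(COLORS[trait]):
                channels[i] += c / (2 * len(group))
    return tuple(round(c, 1) for c in channels)

population = make_population(10_000)
random.shuffle(population)
control, variation = population[::2], population[1::2]

print("control:  ", average_color(control))
print("variation:", average_color(variation))
# The two blended colors come out nearly the same, even though nobody
# deliberately balanced visitor type or device across the groups.
```

Shrink the population size and the two blended colors drift apart; grow it and they converge, which is the sample-size intuition the app makes visible.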
F
But it sounds like it's solving the problem of, to quote a Canva value, making complex things simple. Right? It's about understandability. It's about a way to see something so that it clicks in someone's brain. And it's funny, I had someone on my team that was doing some work the other day, and I went through it afterwards. So we had a couple of different strategies, and I was kind of like, we need to go through these strategies. We really need to make sure that they're being based on the data points that we have available. And that is easily a task, for example, where my instinct would probably have been, I'll put it in AI and see: here are the data points, what's missing? And this person went through it so rigorously. And the reason the work came back and I was like, oh, I get this, this is high-quality work, and, like, I can really understand it, is that what made it click in my brain was, like, not "here are the differences," but "here's the assumption I made about why I think this is different. And here is the leap that's been made here. And I think it's probably because of this." And I don't know, I just keep coming back to this point about, like, quality. It's like when you can see the quality, and someone finds a way to put it into a narrative that then clicks in your brain. Like, that's where I feel like the gold is right now. And it doesn't feel like there's a lot of it.
D
That does kind of make me want to go with one of mine, because I'm seeing a direct link from that. It's also kind of just a concept or something that I find myself thinking about and talking about with clients and business partners a lot, and even with analysts: the distinction between outcomes and outputs. And how we tend to pick metrics that are outputs when we really care about outcomes. And I trace this back 20 years at this point, when I was on a United Way committee in Austin and there was this retired social worker. We were reviewing programs for funding. And so we had a lot of proposals coming in, and we were in these series of committee meetings, and we'd all have to, like, read, I don't know, 10 proposals, then we'd meet about them. And he kept having this kind of consistent bit of feedback on multiple programs. You always have to say how you're going to measure the program, and he would say, oh, well, these are like output metrics, and we really want outcomes. And I was 10 years into my analytics career at that point, and I was like, what are you talking about? And slowly he started to explain. He had various examples, because he'd point to them in specific programs. But the one that I come back to was talking about food pantries and how they would count the number of meals served, or like a soup kitchen, number of meals served. And he was like, yeah, they can just count the number of trays that go through the line, and that's the number of meals served. And we know that's good, you're serving meals. He's like, but really what we're trying to do is reduce food insecurity. We're trying to keep people from going hungry, which is related. You want your outputs to lead to an outcome. But he was like, we really want to push them to see if they can get to a more outcome-oriented measure. And I have taken that over the course of that work.
Like, I was like, this is profound for what I'm doing in my day to day with my colleagues at work and trying to get them to think about, you know, this is why a click doesn't matter. This is why the click-through rate. I mean, it doesn't not matter. It's just that those are outputs. And trying to guide discussions early on into outcome-oriented metrics and business outcomes, and then going from there and saying, how close can we get to measuring that? And the guy's name was, I am 99% sure, unless I've been fooling myself for years, Pat Craig. He was a retired social worker. Every four or five years, I go try to find him, because I come back to that again and again and again. But it was another one that made it very tangible. Because it was in the real world, people who are in need, talking about outputs versus outcomes. It really kind of solidified it, made it very tangible. And it then applies in kind of the more. Really, we're not curing cancer here. We're talking about, you know, marketing stuff. But the concept still applies.
E
That's a good one. I love that one. I use it all the time. You know how many people I've said that to, thinking, oh, they'll know about this? And they write it down. They're like, oh, that's good. So I'm like, oh, thanks, Tim.
D
Well, I mean, there are times where, like I said, I feel silly asking. I'm like, do we? Are we? I'm sure some of you are familiar with this, because I've thought about it for so long. To me, it's one of those things: when the light bulb goes on, you can't stop seeing it.
F
But I think, Tim, the light bulb has just gone on for me. Just to be clear, I messaged you a Slack message because I was like, maybe I shouldn't derail the whole show. But I think about it as, like, input and output. Yeah, but here we.
C
But anyways,
F
No, but I've been thinking about it as, like, input and output metrics. And, like, in my mind, an output metric was an outcome. But I just feel like that framing is so much better, and it truly is clicking in my brain for the first time. And I'm sure I've heard you say this before, but sometimes someone just says something a slightly different way, or your mind is in the right place at the right time to absorb it. Anyway, thank you. This is going to be very helpful.
D
You're welcome. And I'll also say, that makes it sound like it's binary. The older I get, the more I'm like, it's not. I say you want to be skewed towards outcomes. You can have debates about, you know, is a monthly active user an output or an outcome? And it's context dependent. But if you can ground it in that overall pure version, I've found it very, very useful.
C
You're just getting soft, Tim. It's binary. There's no gray.
B
Right.
E
The roles have reversed.
D
"Wow, Tim, you're just mellowing out," said no one ever. Seriously, I'm waiting for that to happen.
B
So chill.
E
That's the first time I heard so chill.
B
Hey Tim, have you tried out Vibe analyzing yet?
D
Oh, that phrase hurts me deeply. But I did download my Strava archive to just try to, like, analyze my workouts, and it turned out to be like 28 different CSVs with anywhere from two to 103 columns each.
B
Oh, that sounds messy. What'd you do?
D
Well, I mean, for giggles, I tried Prism by Ask-Y. That's Ask, dash, the letter Y. I uploaded all 28 files and just started asking questions through their chat interface in plain English. Started off by asking, like, how many miles do I run each month?
B
And it worked.
D
I mean, it did initially. It gave me results pretty quickly. It was, like, really fast. It actually wasn't perfect, but that really wasn't a Prism issue. It turned out that Strava data just refers to distance. Like, that's what the column is labeled, and that distance is in kilometers. So it took a few iterations, still a little bit of the human analyst to say, wait a minute. I wish I was running that far. Took a few iterations with the platform to get that figured out, but once we did, it handled that conversion not only on that query, but automatically going forward with future queries.
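The unit mix-up Tim describes is a classic cleanup step once you know the column's real unit. A minimal Python sketch of the miles-per-month question, with file and column names that are illustrative rather than Strava's exact export schema:

```python
# Sum activity distance per month and convert kilometers to miles.
# Column names ("start_date", "distance") are assumed, not Strava's
# exact export schema.
import csv

KM_PER_MILE = 1.609344

def monthly_miles(rows):
    """Sum the distance column (km) per YYYY-MM month, in miles."""
    totals = {}
    for row in rows:
        month = row["start_date"][:7]   # e.g. "2024-03"
        km = float(row["distance"])
        totals[month] = totals.get(month, 0.0) + km / KM_PER_MILE
    return totals

# With a real export it would look something like:
# with open("activities.csv", newline="") as f:
#     print(monthly_miles(csv.DictReader(f)))

sample = [
    {"start_date": "2024-03-02", "distance": "10.0"},
    {"start_date": "2024-03-16", "distance": "5.0"},
    {"start_date": "2024-04-01", "distance": "21.1"},
]
print(monthly_miles(sample))
```

The point of the chat-interface version is that this conversion, once established, gets applied automatically to every later query.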
B
Oh, that's actually pretty cool. It's just like how we want to handle mishmashes of different source and medium names in a consistent way whenever we're working with, like, digital data.
D
Exactly. And I also got to kind of check out their Mingus query language, which is like a more readable form of SQL, with the actual SQL just, like, one click away. And I even, like, built some quick visuals and some quick reports. It's also got, like, a local mode for keeping all of the data on your machine. I actually haven't tried that out yet. I just did the cloud version. But it's a pretty nice feature.
B
Nice. So, yeah, it sounds like something worth checking out. You can head over to Ask-Y.AI, join the Prism beta waitlist, and use the promo code APH when you sign up. That'll move you up to the top of the list. We can guarantee that you'll get access faster than Tim finishes his next 10K, probably. All right, back to the show.
C
Okay, so I want to share one of mine. And I've worked with most of you, and if you're listening, I've worked with you as a co-worker, as my client.
F
You.
C
You will know this one of mine. In the experimentation realm and world, one of the concepts that lots of programs like to think about or keep in the back of their mind is the difference between a local and a global maximum. And, like, the tried and true analogy or visual is, like, you climb to the top of the mountain, and now that you're through those clouds, you see that there's actually a secondary peak to climb. And I get that that works. But there's this visual that actually came from the book A/B Testing. It was the Optimizely book at the time, Dan Siroker, the CEO, and Pete Koomen, who led their statistical arm and function there. There's this visual that has. And I'll try my best to describe it. There's two sides that are being compared.
D
Hold it up. Hold it up longer. That could wind up as a YouTube short, you know. So this will, this will drive people to the YouTube channel.
C
Don't you want to click through to the full episode now? So on the left-hand side, they're describing refinement, and there's kind of like this cone shape where, like, the squiggly is getting closer and closer to a point. But there's actually a star to the side of it. And it says that that was the best solution, and it was missed because they were kind of refining to this point. Whereas exploration is kind of this point that branches out and has, like, lots of arms to it. And so they did find the optimal solution, and the arrow is saying, like, here's where you refine from. And what I like about this image so much more than the local versus global maxima is that it gives you the visual of the consequence of not thinking big first, of not thinking about exploration or innovative types of thinking or testing first, and going straight into, like, how do we refine the microcopy on this page? And it's like, well, was that the right page to send someone to in the first place? And so it's about, like, staying curious at, like, a higher level. Because it's not just, you know, like, we'll find these little wins as we go, which is great. But they're empty calories if you're kind of missing the optimal solution. And so that visual, I bet you could find it. I have pasted that into no fewer than 50 presentations in my life. I'm quite confident, because I think it's just a really nice way to, like, cement the point of that concept home.
F
Val. It's going in one of my presentations. It's amazing.
C
I like it, it's good.
D
But is part of this that, like, the realities of corporate life are that it is much easier to stay in the lane that you're in and kind of refine? Because to do the exploration feels riskier, and it often means you're kind of reaching more broadly with ideas. So, like, business culture drives us to say, let's do little tweaks and refinement. Like, how do organizations see this and say, you're right, we need to push ourselves to think more broadly, take bigger, broader, more exploratory swings?
C
Yeah, I totally. I mean, how many times have you been like, we'll give a marketing context, like, well, when we ran this campaign last year, we only had two versions of this creative, so this year we're gonna have three? And so it's this smaller. Like, you know, I'm obviously being reductive in that example. But it's like, based on what we did last year, we all remember that, here's one new thing we're gonna do different. And that's, like, the optimization or the refinement. Versus, like, instead of just going direct to patients, what if we had a strategy for healthcare providers? And so that would be the bigger swing. But to your point, Tim, I couldn't agree more, because there's no incentive for that. That's more work, you know, more approvals perhaps, you know, more budget overhead, things like that. And so I think people who are really excited about the outcomes of what that's trying to do, to go back to yours, those are the folks who really kind of, like, thrive in finding those. And you'll notice that those are the people that lots of other people really like to work with inside of organizations, I will say, because they're doing more exciting things in service of, you know, shared goals for the organization. But, yeah, I agree, it's not the natural path.
E
No, people want what they can control, like, in their lane, like you were saying. It's hard to, like, look up and do that broader view and then have to collaborate. But I think then, what you said, Val, tying it back to outcomes: if people realized the shared outcomes they should be focused on driving, instead of their individual lane outputs, like, maybe people would be more open to doing that instead of just the refinement.
F
Yeah, I was just going to say, I don't think the shared outcomes are always incentivized. Sometimes they are and sometimes they aren't. But I think kind of the one push I would have on this framework, and I am a huge fan and I'm definitely, definitely going to be borrowing this, is that there's an assumption that you'll always get to the star, like, the point of refinement, through exploration. And sometimes you don't. Sometimes you explore, you do all the extra work, and it doesn't add a significant amount of value. And I think generally those cases are pretty rare. But I think it does happen. Sometimes. It's, again, not binary,
D
without any specifics, obviously, but I think of Canva as a company that, on a product level, is very exploratory. Every time you turn around, they're like, oh, yeah, and now, you know, it can make your eggs for you in the morning. I mean, it feels like I sort of get updates through conversations we're having. You're like, well, yeah, it can do that. I'm like, what the. Okay, there's three other products that. What the hell? So has Canva had, like, a pursuit of something specific, like, this is a whole new area, that has gone nowhere and been shut off? Again, not asking for any specifics.
F
Yeah, I think so. I think the tension right now, though, is that we all need to lean more towards that exploration piece, because of just the pace of AI products and features and how they're shipping. Like, I think what is particularly a trap right now is if you're in that refinement mode: we want to go towards this goal, we want to build this thing. In today's climate, that's more dangerous than ever, I would say, because of just the way things are changing so quickly. So I would say not necessarily things getting completely abandoned, but more things getting refined and changed along the way.
D
I like that. With AI, you can do kind of the MVP in multiple directions, in a way that you can say, I'd rather try five wildly different things with AI in a minimal way, with clarity on how I'm going to determine whether this is the best bet or not, than decide we're going to make the best chat experience using the latest LLMs ever and just, like, pursue that and miss it.
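The refine-versus-explore trade-off the group is describing can be simulated in a few lines: greedy refinement from where you already stand tops out on the nearest peak, while sampling a few scattered starting points first tends to find the taller one. The two-peak landscape below is invented purely for illustration:

```python
# Refinement vs. exploration on a made-up two-peak "landscape":
# a small peak near x=2 and a taller one near x=8.
import math
import random

def landscape(x):
    """Two peaks: height ~1 near x=2, height ~2 near x=8."""
    return 1.0 * math.exp(-(x - 2) ** 2) + 2.0 * math.exp(-(x - 8) ** 2)

def refine(x, step=0.01, iters=5000):
    """Greedy hill climb: only accept tiny moves that improve things."""
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if landscape(candidate) > landscape(x):
                x = candidate
    return x

# Refinement only: start near the small peak and polish from there.
local = refine(1.0)

# Exploration first: sample widely, then refine from the best start.
random.seed(7)
starts = [random.uniform(0, 10) for _ in range(20)]
best_start = max(starts, key=landscape)
explored = refine(best_start)

print(f"refine only:      height {landscape(local):.2f} at x={local:.1f}")
print(f"explore + refine: height {landscape(explored):.2f} at x={explored:.1f}")
```

Refinement alone converges on the nearby peak and never sees the taller one; a cheap round of exploration first changes which peak you end up polishing, which is the whole point of the star in that diagram.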
B
Well, I'll share one that's important to me. I don't remember who first told me this, but it was about five years into my analytics career, and somebody said to me, Michael, trust is hard to build and easy to break. And I think that's more of a general statement, but it applies to the world of analytics. I watched in my own career, sort of, people who believed when I presented an analysis and people who didn't. And losing trust with stakeholders was something I definitely experienced in the early years of my career, and how much that put me in a position where I could no longer influence the business or business outcomes in certain areas. And so it really kind of hit home, and I really held on to that for the rest of my career. It was just sort of thinking about, how do I continue to build trust when I'm working with business stakeholders, when I'm talking about things? And because it's me, I don't have any kind of formal structure to that, but there are little signifiers I look for around, how do I know that trust is still there? And I use that to kind of guide how I act today around my clients or the stakeholders I'm working with. How is that relationship? Which tells me the influence I have as an analyst for that particular situation. So that one has always stuck with me, because I love being influential. And early in my career, I just figured if I showed you the data, it didn't matter who the messenger was. You would just say, okay, yeah, that's the data, and you would accept it. But the reality is, the messenger matters quite a bit. And since I'm not Tim Wilson, you know, I had to, like, you know, ramp up my skills.
D
Honestly, Dear Listener, if you liked that one, go back and listen to our last episode with Eric Friedman, because a lot of what comes up with that is, I think we think it means the data has to be perfect and the analysis has to be perfect. And I feel like what you model, and what we talked about with him, a lot of it is actually showing that you understand the environment they're working in. Building trust has a lot more of the soft skill than "my data is always perfect."
B
Yeah, there's that part of it, understanding the context. And I think also, not to use a bad word with you, Tim, but empathy has a lot to do with it as well.
D
I don't know what that is. Cannot compute. AI does not compute. Yeah, no, because it.
B
It's like. Like, one of the signals, if I'm working with clients, is if they come to me with a separate problem. That tells me I'm building trust, because they're like, okay, yeah, you're doing the project or whatever project, but if they come and say, hey, here's another thing that's going on, you have any insight or thoughts into this? That's a great example to me of, like, okay, we're on a trust path together now. So that's awesome. Let's keep building that. But yeah, you're right. You're absolutely right. It's not just the data or the analysis. It's also the context, to show you understand, show that you care about what they care about. And it's challenging, because as analysts, I feel like sometimes we want to not necessarily be front and center. And the reality is, to influence decision making, you've got to be willing to kind of plant your foot and sort of be the face of the data in a way.
F
So the funny thing is, Michael, you've been chatting about trust, and it actually makes me think. I know, Cassie Kozyrkov, I know, I'm obsessed with her, a big fangirl. But she talks about this so much, and, like, she calls it workslop, which is enabled through AI. And I think, like, the way she articulates it is just so brilliant. And I feel like I'm seeing so much of this, where, like, AI removes friction for shitty ideas, right? And so everyone just. And I get it, I get it, because I'm doing it too. Like, you can move faster, but it's making us look like we're productive, when actually we're just producing more shitty ideas, right? And I think the bit that's really challenging then is, like, being able to differentiate between the shitty ideas and the not-shitty ideas, right? And so I think the thing that she really is honing in on, which has just been, like, flying through my mind, and it's the same as the trust of the stakeholders, right, is, like, AI can be this incredible tool to unlock a lot. But, like, how do we really use it? How do we incentivize the quality over the velocity? Because at the moment we're really honing in on velocity, which is breaking that trust so deeply. And I think about it so much, like, as a manager. Every time you get a piece of work from a person who's producing work that is lower quality, I personally feel like you're fragmenting trust with those around you. Right?
D
Well, but there are dual pressures. You've got the pressure to use AI that's coming down from on high: use it, use it, use it, be efficient. And when you're delivering stuff and it's polished and longer and grammatically correct, no typos, coherent, organized thoughts, the person who's getting it knows. And you're under the gun to say, I can't spend as much time whittling it down. Like, there are competing pressures. For the person who's receiving it, I think you're dead on. If you're sending me total AI slop, stupid sales pitches, there was no trust there in the first place. It does feel like it gets sneakier when it's with a co-worker saying, oh, you know, here are the notes from the meeting. And you read through it and you're like, this isn't your voice, and it's a little off. But I can't really criticize you, because you did it quickly. But I also don't feel like there's a depth of thought. Like, that's a.
F
But that's the thing, right? Like, I almost wish there was a way to know, if I have 10 docs in my pile from my reading list, which one wasn't written by AI, because that's the one I'm going to go and read. But there's no way to know that, right? So then what ends up happening? Like, I feel like we need to create a system that incentivizes the quality of thought and the depth, and the fact that if you want something to be shorter or tighter, like, you can't just give it to AI. And I just feel like we're in this real
D
conundrum. That could be a great. I'm gonna vibe code an app to do that this weekend. One that has colors.
F
Yeah, yeah,
B
Part of this, the thing about maintaining trust in this context, I think, is about transparency as well. So, like, if you use AI, just lead with, hey, this is AI generated, so, you know. Or you say, most of this is AI generated, but here's my synthesis up top. That way you can let people know the distinction. Like, you don't have to go read all this. You know, it's the same thing when you're preparing an analysis, and, like, you want to show all the cool things you did to the data, but you put it in the appendix, because your stakeholders don't care. They just want to know what the McKinsey-style title is and the big insight. And if they trust you, they probably don't need to dig much further. If they want to learn more or have deeper interest, there's sourcing material behind it. So a lot of times AI, for me, feels like that, where it's like, okay, AI can pump out tons of content, and, honestly, beautiful content too. Like, it'll make a better slide than I make on average. A lot of times, you know, if I just sort of take content and plug it in, I don't make the best slides, okay? Like, I'm just being honest.
F
I had AI make a slide deck for me the other day, and there's still some work to do on the design side, I would say.
B
No, no, no. I mostly don't let it, because even though it can make a pretty slide, it doesn't fit what I'm trying to do. So, like, no, I haven't yet been able to, like, really position a whole AI slide deck yet.
D
But it's getting close. Like, imagine if you could just go on a walk with the dog and just talk to the AI, and then it'll generate a deck for you. And it's like, no, there's value in the friction of me needing to go on the walk with the dog, think about it, stew over it, and then come down with, what are the three things that I want to say?
F
Okay, I promise after this, I will get off my soapbox. I promise. But there's an incredible. I mean, we all know I love the Acquired podcast, but there's one on IKEA, which is amazing, and Ingvar Kamprad. I'm gonna fuck up his name. I always do. But Ingvar is the guy who started IKEA, right? And he had this principle, which I've just been thinking about, like, how do you implement this and how do you scale it, basically, particularly as, like, a people manager, right? Like, we have addition bias. So, like, any time things seem hard or tricky, we try and add to it. We try and put more on it. More process, more structure, more things. And his kind of, like, management rule was always simplify. So if we have a problem, what's one thing we can take away? What's one thing we can remove? And I'm trying to, like, really think about that with the team. Like, how do we take stuff away instead of adding? And again, like, my bias is the addition bias. I'm straight away, like, okay, let's have an experiment template. Like, let's have measurement, like, standards. Then I'm like, I'm adding. How do I take away so that we simplify? Because especially with AI, there are a lot of things where we're adding. We're constantly adding. We're adding metrics, we're adding, like, extra reports, extra things, and it's adding to the complexity, which is not the intent that we think we have. All right, okay, I'm off the soapbox. I'm done.
B
No, I like that.
C
I like that one.
B
It goes back to what you said before, Moe, which was sort of like, the cost of execution is going to zero. So the quality of the idea or the quality of thinking now matters more than ever, because you can go execute on a poor idea so fast, but waste everybody's time in the process. And so, like, having some thoughtfulness ahead of getting everybody rolling is sort of, like, even more critical.
D
I love the language of addition bias because I think that it is so broadly applicable. And I'm going to throw a quick one in, and then we'll go back to maybe more broadly. And it's a twofer, because to me it's maximizing the data-pixel ratio in data visualization and clarity of communication, which I've been shouting from the rooftops for years. So, Information Dashboard Design by Stephen Few. I read that, like, in 2006. And Cole Nussbaumer Knaflic. Chapter three of her book, Storytelling with Data: A Data Visualization Guide for Business Professionals, is basically declutter. But that's in a narrow sense. That's the addition bias of how do I provide. I'm going to deliver this to a stakeholder. My instinct to build more trust is to put more stuff in it. And what they really want is to remove stuff. And with AI, when you ask it, summarize this. If you ask it to give you a two-minute script, it will give you a four-minute script. You have to constantly tell it to do less. So it goes for processes, data visualizations, the analysis you do. Let's keep digging deeper, deeper, deeper, deeper. And it's like. Or can we stop and make a decision and move on?
E
But it makes me think about. I was thinking about this initially, and now, you saying that, it makes me really want to try it. And I feel like I've done a little bit of this in passing. But when AI gives you the four-minute thing, the huge long summary, I feel like a really good check on the AI is to actually ask it to give you, like, a four-sentence summary. Because I feel like that's where you can sniff out the BS faster. Like, when I've asked it for a short summary, I can tell it's totally missed the plot compared to what I would quickly give as a four-point or four-sentence summary. Because sometimes, with the long version, you start reading and you're like, I guess it sounds good. Yeah, kind of. And then I think you're more apt to just, like, trust it and maybe use that long format. But it's like the old adage, sorry it took me so long to write you a short letter, or something. I'm quoting it a little off, but it feels like that. So I do wonder, could you stress test the AI output sometimes by asking it for the short thing and be like, ooh, really quickly, good or bad?
F
I do, I definitely do. But then I end up editing it and I'm like, I should have just written it myself. It would have been faster.
C
Always.
E
Always
C
in this episode.
B
Yeah, well it's. But it is wise, because in a lot of ways, where we are right now with AI, AI as an analytical contributor is very much in the "look what I can do" kind of phase. And it's sort of like when you think about coaching a junior teammate, if you want to think about AI like that. It's sort of the same kind of thing as, like, all right, strip all that out. You don't have to say all that. You're going way too far. You're trying to impress. You know, don't try to blind them with science. Just get in, get out, say what's important, you know? Yeah. So that's sort of how I feel about it, because it is. It's trying to do way too much. It's like, look at the cool stuff I can do. It's like, mom, look. Look at me. Look at me. Like, yes, you're very smart. Shut up.
C
Very good. Well, I can go with another one that's. That's not continuing on this, like, beautiful thread that we've been weaving.
D
We'll find a link. We'll make it. We'll ask AI to make it.
E
Wait, Tim, did you do your twofer, though? Because I had cut it off or.
D
You know, my twofer was just the two books. The.
C
The two books.
D
Cole Nussbaumer Knaflic and Stephen Few.
C
So, yeah, another experimentation-heavy one, but another one where I have sent the link to these articles maybe more than anything else. It was actually a Medium post. It was a collection of Medium posts, I should say, on Towards Data Science, and it was written by the Skyscanner engineering team. And the overarching kind of umbrella title of the content is Chasing Statistical Ghosts in Experimentation. And not only does it break down some of the common myths, but it's the things that people still struggle to fully understand why they're not effective to run on. So this is actually very similar to Julie's first one in that it does produce a lot of visuals, although they are not interactive, to kind of illustrate exactly what the issues are with people having these, like, mental models. The one that I've sent the most, and there's, like, four in the series, I believe, is the first ghost, which is "it's either significance or noise." And there's one quote in there, like, towards the middle, and it's about how experiments don't work towards significance, and comparing relative significance of p-values outside of the thresholds is a mistake that will lead to a lot of false positives. But the number of times. And I wish that there was, like, a button I could press and it would zap people in their chairs if they said, like, well, we're trending towards significance. No. But it's so well put. They, like, break it down into its smallest little pieces and kind of build it all back up together. So anyways, every part of this was so well done. But yeah, I've definitely come back to this one quite a few times.
E
Yeah, Val, you turned me onto those and those are amazing. Great, like ground yourself. Be like I'm getting lost in the sauce. Like, let me go read my articles again for a second.
B
That's awesome.
C
It's a good series.
D
I feel like, Julie, there's a natural add on to that around experimentation that maybe you could.
E
Yeah, I think I know which one you're talking about. My well-thumbed-through book with tons and tons of notes in the margins, where I've actually read just the first five chapters about three times. I've done two book clubs on it. Field Experiments. Very simple, nice, straightforward name. Really great book, though. This was actually recommended to me and Tim by Joe when we worked together at Search Discovery. He was leaving at the time, and we were running a randomized controlled trial with a client, and it was put on me to, you know, continue it, do the analysis, and run the next one. And I was like, holy shit, like, what am I supposed to do? And Joe just sends us a link. He's like, it's fine, buy this book. Read the first three chapters. You know, you guys will be good.
F
Julie, you got this. And you did.
E
Yeah, yeah, it was a little.
D
I read it and I was like, I was like, wow, Julie, you got this.
E
Yeah,
D
honestly, thank God this other one's number. So give it.
E
Yeah, thank God we still could contact Joe. Also thank God I was a math major and could read, like, mathematical equations. But it is a really good book. For a dense book, a very informative book. Again, that's one that just sets such a good, clear foundation. Like, the way they write about these complex theories of running these statistical tests, randomized controlled trials. Like, again, the magic of randomization. I mean, that Shiny app I talked about at the beginning came from the same phase of life as reading this book. And it was so good, and I still love it.
D
I remember that. Yeah, my favorite phrase now.
E
Yeah, I love it. But really, if you need, like, a crash course in the foundations of randomized controlled trials, I really do highly recommend that book. Even just the first five chapters. Joe said three. I found I needed to go to at least five. I think there's 10 chapters total. Really good.
C
As a former member of one of the book clubs that you ran for that book. You remember, I think I was in your second one. That was one of the densest books. It's not a quick read, but the valuable nuggets per word is probably the highest density of any book. I'm like, oh, there were, like, two paragraphs and there were, like, three light bulbs that went off. And really good examples in that book, too. So yeah, I liked its stories. Good ones.
F
Tim, you've got to talk about that. There's one you got to talk about. I'm dying to hear it.
D
Was it possibly First, Break All the Rules? Okay, this is one where. Michael accused me of not having empathy,
F
so I was surprised to see this on the list with your name.
B
So first off, Tim, it's not, it's not an accusation, it's just an observation.
F
Yeah,
D
Just do your fucking job. That's right. Yeah. No. So First, Break All the Rules: What the World's Greatest Managers Do Differently by Marcus Buckingham. And there were a couple editions, and other people wrote with him. And it's StrengthsFinder that gets, like, all the play. And I'm not not a fan of StrengthsFinder, which is tied to Now, Discover Your Strengths. But it was a two-book pair: First, Break All the Rules and Now, Discover Your Strengths. And I read it. It was, like, required reading in the early 2000s for managers at the company I was at. And to this day, it gave me a lot of confidence when I started working with people who just weren't the right fit for the job they were in, and getting comfortable with this idea of skills versus talents. You can teach skills; you can't teach talents. And that doesn't mean you can't raise people up to get better. But if somebody is just, like, not good at data visualization or not good at building trust or not good at client communication or whatever it is, you can give them training. And we have this tendency to say, well, that's their deficit, they're deficient in that area, let's spend as much time as we can coaching them and training them to bring them up. And when you start to recognize it, it's like, no, that's just not them. That doesn't energize them. The best you can do is get them to a base level of performance. That was, like, one big aha from the book. The other one that really blew my mind, because the way they write about it is, like, so true, is that our tendency, if we're managing a team, is to spend the most time with our low performers. Because it's the idea that the lowest performer is what the team is going to be judged by, and they're the people who need to be raised up. The rock stars, you're like, they got this. So we'll just dump more and more stuff on them. And the book makes a really, really strong case for saying you're shooting yourself in the foot.
You should be supercharging the rock stars, and you need to support the lower performers. But it's not really your job to try to make a square peg fit in a round hole. Your job is to see if there is a role you can get them into where they will thrive and become rock stars. But trying to coach and train when they're just not a fit. I mean, I read that 20 years ago, and it's one that, multiple times, I will see it. I will be interacting with somebody. Honestly, even though I am kind of a jackass, like, I go in with a presumption of good intentions and good capabilities. And especially with analysts, I have gotten more and more confident over the years at spotting somebody who's just not a fit. When you start digging into their backstories, it also often was somebody who was really struggling. They like the idea of punching buttons on the data and getting glorious insights, but have zero intuition or any of the compulsions that make analysts good analysts. And I've got, like, names in my head of, like, they're never going to thrive in this, and the best thing that can happen for them is to find their way into another area. So, yeah, that's weird. That's, like, a management book. But I find myself coming back to it.
B
And would you say, Tim, this has been more applicable in your work life or your podcast life
D
only. Only so many roles I can shift people around into and I have little control.
B
No, I'm sorry that I made a joke about it, because actually I see this in you, Tim. Like, you doing this, and how it affects how you interact with people. And I think I've learned some of these same lessons over the years in managing teams and people: figuring out how to shape someone's role to help them be as effective as they can be, based on what they're naturally inclined towards versus what you want them to do. And I like that puzzle of figuring out where people fit. It's fun. You're not always allowed in every environment to puzzle with it as long as you'd like to, but it's.
D
The fact is, spending more time with your stars is a lot more fun and energizing, and it helps you grow as well. So, like, that book gives you permission to say, yeah, don't feel bad that you're, like, collaborating with somebody who everyone thinks of as being, like, the star on your team. Here's the reasons why that's actually the right thing to be doing. Again, not neglecting people. It's not like it's a complete.
B
Yeah, binary. But, yeah, it's sort of in the same vein. It goes way back. There was a Radiolab episode back before I even listened to podcasts. This was, like, on the radio station, like, back in 2010, and they were discussing how people use emotion and logic in decision making. And they had a specific case of a person who'd lost the function in their brain that allowed them to bring emotion into decision making. And so all their decision making was completely logical only. And as a result, they were actually paralyzed in every decision they would make. And it literally destroyed their life, because they couldn't decide between, like, oh, I need to sign this document. Should I use a blue pen or a black pen? And then they would go back and forth on the pros and cons of a blue pen or a black pen to sign. And then everything they ever did. Go to the grocery store? They couldn't pick which toothpaste to buy. And it was really fascinating to get a window into this idea that so often emotion drives decision making as much as logic does, and sometimes a lot more than logic does, in a lot of environments. And that was another big, like, light switch for me. And around that same time, I was reading a book called Switch by Chip and Dan Heath, which kind of goes into some similar concepts about how to be influential, or make people see what you're trying to say, through different ways of reasoning and providing good examples of things. So that was sort of a time where I was like, okay, yeah, how do I get people to engage with decision making? And this all goes back to me trying to figure out how to leverage who I was in the context of analytics. Let's just be honest about it, because I wouldn't call myself a traditional analyst by any stretch.
By learning some of these lessons and watching the emotional process of decision making, it gave me really good hooks into, okay, here's how I can engage with people, because I can feel the emotions coming off of people a lot of times and. And kind of intuit what to do. Whereas my ability to create amazing analyses like Tim, and with perfect visuals and everything is not. I have good skills, but they're not as good as yours, Tim, let's be honest. And so as a result, I had to fool people into doing what I wanted them to do.
C
So building trust along the way.
B
Yeah, I mean, that gets.
D
It's almost becoming, like, a trope or, like, conventional wisdom that people don't make decisions based on the data, they make them based on emotions. And sometimes that gets, like, weaponized against analytics. Like, how do you square that in a way that's healthy? Because it gets treated, like, so throwaway. Like, well, then what the hell are we even.
F
No, I think it's.
B
It's such a balancing act. I feel like there. People get a hold of this information and do seek to manipulate with it. And I feel like at some point, that kind of behavior is gonna get caught, and it's gonna have a limiting function. And so you shouldn't do that. It's sort of like if you're a
D
company that people being manipulative and playing to emotions.
B
Yeah, yeah, yeah.
D
Isn't the. Okay, gotcha.
B
So.
F
But that's, like, the whole point of Choiceology, right? And that's why I love that show so deeply. Because each episode is going into one of those biases and how it shows up in the decisions that we're making. And, I mean, Katy Milkman is just so good at explaining what are sometimes quite technical scientific studies in such a relatable way. But I don't know if I see it being weaponized. I actually sometimes see it the other way, which is that I see data people being like, decisions should totally be based on logic and what the data is saying. And I have very much come to this piece of, like, intuition matters more. That is a different piece of data. That is all your past experiences. And that also is worthy of consideration when you're making a decision. There are other quantifiable data points that you also include. But, like, the idea that someone's only going to make a decision based on those, without other factors like intuition. You're like, yeah,
D
you should be bringing together like facts and feelings.
B
Somebody should run with that. Somebody should run with that. But yeah, that was the other word, Moe, that I was going to bring into it as well. Intuition sort of runs this path as well, because people will rely on that more than they will rely on external facts and figures. And there's that famous Jeff Bezos quote of, like, oh, if the numbers look different to you than your intuition, then probably trust your intuition, that kind of thing. And in reality, it's true of the way our minds work as humans. There is a really good book on storytelling by, I think, Will Storr, and one of the quotes out of there was basically: we look to fit patterns. So if the data fits our model, then we'll be more willing to accept it. And if it doesn't, we're less likely to onboard it, just because of the way our human minds work; we're trying to fit things in. And so it matters if it's counter to our intuition or what we think is true of the world. But also, look at even your own decision-making process when you have different emotional states. If you have high cortisol levels and anxiety, your decision making is actually impaired compared to lower levels of anxiety or more calm feelings. Like, it's just true. You make different decisions. And don't smirk. All of you are laughing and smirking because you know it's true.
E
It's just hitting home for me.
B
I'm like, yeah, yeah. And you think about that. So think about that for, like, a CMO, where, like, these next few campaigns literally mean their job, and you're coming in with some analysis needing them to make decisions, and they are so high strung. Like, how do you even pull some of the anxiety out of the room so they can get to a good place to make good decisions? Sometimes it's such a tall order, it's like you end up in a therapy role, practically, when you're trying to provide analysis. And it's not like you should be doing that, but it's sort of a reality of the work environment sometimes. So we don't always do a good job. But there's all this data. You know, Google did that huge project about how people perform maximally in their jobs, and, like, when you provide the right amount of psychological safety and those kinds of things, people's performance improves. Like, there's so much data to support all of this stuff. And so the idea that emotion shouldn't have a place or isn't involved in decision making, like, you really kind of miss that at your own peril. And it's not just sort of, how do I feel about you? It's, how is everything around me happening? And it could be nothing to do with you. You walk into somebody's office with, like, an amazing analysis that you've really thought through and prepared very well, and the meeting might go super poorly, and it has zilcho to do with you. It's because they got screamed at in the meeting before this and they are still coming down off of that, and there's really nothing you could do about it except maybe come back Monday and talk about it again and hope that they're in a little better place.
D
Well, there's another corollary, which is separate from that, is recognizing fear and shame, like, negative emotions. The goal, if you can get them emotionally on board as a partnership, it goes to kind of the trust, it goes to all of the pieces of this. Any idea that you're going to walk in and, like, put up a chart that makes somebody look bad or surprises them in a negative way. Even if you're like, oh, it makes these four people really happy, but this person, it makes them look like I'm finally calling bullshit on them. And that's going to be great, because my manager and their manager are going to love me because I'm going to help them win this battle. Like, that is a short-term win. And you just.
B
Yeah, because now that person is going to be gunning for you every single time from here on out.
D
Even if they're a great person who doesn't want that, it's much, much better to have them along on the journey so that they too are vested. And it's, like, asking the question: what would you need to see in order to make this decision? Or, what would you. Like, bring them on the journey where they are. I don't know if that. That's trying to play to the emotions, marrying it with the data, to say, let's think about how would we feel if we saw that. How would we feel if we saw that? Take a little bit of the sting out of the what-does-this-mean-for-my-self-worth-in-my-professional-career and make it more about, hey, we're doing something good for the company. Wow. Damn, that's me. I'm sorry. Those are good ones. That's too many. Too much lip service to soft skills.
B
I love it. I mean, maybe what we're. What we're finding out, Tim, is, like, all this stuff really matters in a world of AI, you know, there's not
D
a word I've said that I haven't just got straight from Claude. Like, I was just like. Now they're talking about emotions.
B
What should I say? It actually bugs me. Oh, go ahead, Val.
C
Go ahead, Michael.
B
No, it's sort of a tangent, but, like, lately Anthropic has been releasing sort of these things about how Claude has, like, these emotions that it expresses. And it really rubs me the wrong way that they even frame it like that, because I'm like, you can't be teaching people to, like, emotionally support an AI. Like, it's not a good way to deal with LLMs, in my view. But again, you know, whatever. What do I know?
C
Well, you made me think of another quick one. Making sure someone's ready to hear a message, especially if it's a difficult one. Episode 240, Taylor Buonocore Guthrie. The questions. That's one that I definitely reach back for a lot. On the scale of 1 to 10, where, you know, 1 is a dumpster fire and 10 is the best day of your life, where are you? And so if they say four, because I just got my ass chewed out by blah, blah, blah, you're like, well, we're gonna save this for Monday. We're gonna go ahead and come back. Need a couple more days. But, yeah, no, that was. I learned a lot from that episode, from Taylor, and have definitely returned to that one. So that's another one I should have put on there for this episode.
B
That's a good one. All right, we do have to start to wrap up. This has been a really fun conversation. Thank all of you. There's been some really amazing insights. So trying to empathetically put myself in the position of a listener. I think we had some good stuff there, so hopefully it's helpful.
D
You know what?
B
We'll.
D
We'll wait and check the median listen length and let the data tell us whether this was good or not.
B
That's right. The algorithm will tell us. Anyway, as you've been listening, I bet you have things that you've found in your career that are super helpful and guide you even in an AI kind of environment. We'd love to hear from you. So, yeah, please reach out to us. You can do that through our website. You can also do that on Measure Slack or on LinkedIn, or via email at contact@analyticshour.io. And as you listen to the show, leave us a rating and review on whatever platform you listen on. We love to hear those. We love to see them. It helps us out. And if you love to rock a sticker on your laptop or your phone case, something like that, we've got Analytics Power Hour stickers, and you can order those as well on our website at analyticshour.io. So yeah, thanks again, everybody. Moe, Julie, Tim, Val. Really nice. Yeah, I'm feeling really nice. I feel good, right?
C
It's kind of felt like a hug.
B
Yeah, I feel more prepared to face the future. And I think no matter what the future holds, I think I speak for all my co hosts when I say keep analyzing.
A
Thanks for listening. Let's keep the conversation going with your comments, suggestions, and questions. On Twitter @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.
D
Those smart guys wanted to fit in, so they made up a term called analytics. Analytics don't work.
A
Do the analytics say go for it no matter who's going for it. So if you and I were on the field, the analytics say go for it. It's. It's the stupidest, laziest, lamest thing I've ever heard for reasoning in competition.
E
Okay. One of my fallback ones, I have to say to that point, Michael, I was trying to remember, I always like reference this one idea and I was like, I knew. I listened to it on a podcast and I was trying so hard to remember what it was. It was from this podcast so I put it in there as a fallback. But it was this podcast that I had.
B
Good.
D
Please.
E
Episode 213.
B
Oh, nice. Wow.
E
I like. That's maybe a little awkward, but I'll throw it in there just in case.
B
I didn't even like that episode very much, so I'm glad you got something good.
D
Yeah, I didn't let it either,
C
Which
E
is funny cuz it. Yeah, we don't.
B
We don't talk about it. We don't know we can talk about it. It's fine.
D
East like Denison, so.
B
Oh, okay.
E
Yeah, that sounds really tough.
D
Yeah.
B
Shut up, Val. Columbus is not that big, but a 45-minute commute in Columbus, that's a big deal.
C
No, you said south side of Columbus and it just sounded like
B
you come from Chicago.
D
No, no.
B
South side of Chicago. Yeah. Has a rep. See, I feel like
E
I should be drinking a glass of wine to tell that. That one.
F
Do I.
E
Okay, parameters here before I go off and don't do the. The format of the show. Right. Also, I feel like I've picked up a slight Southern twang from being in Mississippi for two days. So if you hear it.
F
Oh, this really isn't.
C
I'm not kidding.
D
Vaguely racist and also misogynistic.
B
Easy. I'm not trying to defend Mississippi.
E
But do we want to give like, the background of like, where it came from or how you first.
B
Yeah, of course.
E
Whatever came across it and then like, how you used it. Is that kind of the format?
F
Let's go with the vibe. Am I freaking. Hold on.
E
No, you're not freaking.
B
Testing my distance from the microphone.
E
Do I need. Is my volume okay? I'm paranoid. So is my mic hot?
F
Your volume's great.
C
And you don't have to be quiet.
B
Yeah, I'm sorry.
E
True. See, that's the thing. Like, I know to be quiet.
C
I can tell. I can tell by the way you move your mouth that you're trying to be quieter. But also I'm like, weirdly observant about things like that. No pressure.
B
All right.
F
My life. One second. I'm so sorry. I'm so sorry.
B
Perfect. I didn't want to start. Right.
F
Okay.
E
Wait, did we get to tell the story about Mo and her saying my life in our lives?
D
It's in the outtakes. It is totally in the outtakes.
B
So good. Oh, it's like.
E
That's Mo's, like, tagline.
D
Yeah. Listen, Moe had to, like. As we were losing co-hosts sporadically for various crises, Moe left and thought that we would have finished wrapping up by then. So she just came back in hot with the.
B
My life.
D
And we just went with it in
E
the middle of the wrap up.
D
Can I finish the wrap up now?
E
And Tim was just going into the final wrap-up. He's like, can I finish the wrap-up? And she was like, oh, my God.
C
I was like, yeah, I think I actually said my life again.
D
Yeah, you did. You did. Yeah.
C
Oh, geez.
F
Anyway. Okay, good to go.
B
That was great. All right, here we go. All right, we'll get started in.
D
I'd just like to say right now, Tony, this is going to be the perfect one. There will be none of what you went through on the last one. This one is smooth as silk. Rock flag and do your job.
Air Date: May 12, 2026
Hosts: Michael Helbling, Moe Kiss, Tim Wilson, Val Kroll, Julie Hoyer
This episode explores enduring lessons and hard-won wisdom from careers in analytics—knowledge that holds true regardless of how AI shapes the industry. In a climate where “AI slop” (hasty, context-lacking AI-generated answers) is on the rise, the hosts revisit fundamental practices, lessons, and philosophies that remain critical for data professionals. The conversation is filled with pragmatic anecdotes, vivid metaphors, resource recommendations, and an honest look at where AI helps—and where it falls desperately short. The episode is lively, candid, and laced with humor and camaraderie.
“Sometimes, like, a good visual is so much easier for me to go back to. ...I can have a good starting point and like, think through it again.” — Julie [04:57]
“We tend to pick metrics that are outputs when we really care about our outcomes. ...You want your outputs to lead to an outcome.” [09:27]
“If you go straight into like, how do we refine the micro copy on this page? ...They're empty calories if you're kind of missing the optimal solution.” [17:28]
“Trust is hard to build and easy to break. ...For early in my career, I just figured if I showed you the data, it doesn't matter who the messenger was. You would just say, okay, yeah...reality is, is the messenger matters quite a bit.” [24:09]
“AI can be this incredible tool to unlock a lot. But, how do we incentivize the quality over the velocity? Because at the moment we're really honing in on velocity, which is breaking that trust so deeply.” [28:55]
“We have addition bias. ...Instead of adding, how do I take away so that we simplify?” [32:50]
“My instinct to build more trust is to put more stuff in it. And what they really want is to remove stuff.” [34:42]
“So often emotion drives decision making more or as much as logic does, and sometimes a lot more... I can feel the emotions coming off of people... and kind of intuit what to do.” [49:15]
The conversation is candid, wise, and peppered with easy laughs, gentle teasing, and humility. The hosts ground their lessons in concrete stories—often referencing pivotal moments, resources, and even their own missteps. They approach AI with realism—not technophobia, but an insistence that timeless analytical practice, trust, and deep thinking cannot be replaced by “slop.” Their final message is arms-wide-open: add your own wisdom to the conversation, and remember—the future is best faced with “durable” analytics principles.
For further wisdom: Listen to Episode 240 with Taylor Buonocore Guthrie and revisit “Chasing Statistical Ghosts” or “Field Experiments” as discussed.
In a world obsessed with AI outputs, wise analytics pros keep analyzing... with heart, brains, and a healthy dose of humor.