
We finally did it: devoted an entire episode to AI. And, of course, by devoting an episode entirely to AI, we mean we just had GPT-4o generate a script for the entire show, and we just each read our parts. It's pretty impressive how the result still...
Tim Wilson
Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
Michael Helbling
Hey, everybody, welcome. It's the Analytics Power Hour, Episode 270. You know, back in 2023, I asked AI to write an intro to the podcast, and in the words of Mo, AI did a pretty shit job. But AI hasn't gone away, and the possibilities, capabilities and potential of these LLMs are expanding, seemingly by the minute. So today we're strapping on our robot helmets and plunging headfirst into the wild, worrying world of artificial intelligence. So what's AI really for? Is it just a fancy predictive model? Maybe just a massive checkbox on your boss's latest buzzword bingo card? I don't know, hype versus help? Automation versus annihilation? And whether or not your next co-worker might be, I don't know, a chatbot with boundary issues. So grab a drink, mute your Slack notifications, and prepare to find out if your career path is evolving or being quietly replaced by a GPT-powered spreadsheet whisperer. Speaking of spreadsheet whisperers, let me introduce my co-hosts, Julie Hoyer.
Julie Hoyer
Hi there.
Michael Helbling
I'm very excited that you're on the show, Julie, because I feel like you're probably one of the most knowledgeable people about AI in our group, so I'm going to be leaning on you quite a bit. No, I've been observing all of us, and I'm pretty sure, yeah, you're going to be.
Julie Hoyer
I'm the sleeper.
Michael Helbling
I mean, we'll see. We'll see. All right, next up, Val Kroll. Hello, Val. I did love the April Fools' stuff that you and Tim put together for Facts & Feelings, so I guess that's a good use of AI.
Val Kroll
Yeah, yeah. Facts and furious.
Michael Helbling
Yeah. April Fools' was a month ago. Everyone knows when April Fools' is, Tim.
Val Kroll
Remembering back.
Michael Helbling
Yeah, welcome, welcome.
Mo
Thanks. Excited to be here.
Michael Helbling
Would it surprise you to learn that good chunks of that intro were written by AI?
Mo
Yes.
Michael Helbling
Yeah, yeah, they were.
Mo
Yeah. Sounds legit.
Michael Helbling
The models have progressed quite a bit. And speaking of people who haven't progressed quite a bit, Tim Wilson.
Tim Wilson
Count.
Val Kroll
Insert cheering sound.
Tim Wilson
Xlookup.
Michael Helbling
Xlookup.
Tim Wilson
That's right. I'm whispering to the spreadsheet.
Michael Helbling
You know, that is actually the first eye roll. No, but really, I literally read Tim's blog way back in the day with Excel tips and tricks. Like, I learned things from Tim Wilson about Excel.
Tim Wilson
So that is true. In 2008.
Michael Helbling
Hey, listen, it's working for you, so don't give up, all right? I'm Michael Helbling. So, yeah, let's... what do the kids call it? Vibecast or vibe podcast? I don't know. Let's do this thing. All right. So, Julie, is it going to take our jobs, this AI thing?
Julie Hoyer
No, definitely not.
Michael Helbling
All right. No, thank you.
Julie Hoyer
In my experience.
Michael Helbling
Great show, everybody.
Julie Hoyer
I'm not worried. See you next time.
Val Kroll
Rock flag.
Michael Helbling
Okay, but why isn't it going to take our jobs? We should probably dig into that a little bit. And let's also maybe dig into what our jobs are a little bit so that we can kind of see where AI helps, where it doesn't. And I guess other people can also chime in too, I guess.
Julie Hoyer
Okay. Most recently, something I'm running into a lot is... and I feel like this is an example we've talked about previously on the podcast multiple times, and a lot of people have written blog posts about it. And it's just funny because now I'm fighting this battle on, like, multiple fronts at work. The same discussion of: I think for analysts, AI is not ready to just replace us. Even for, like, writing queries. There is no, like, talk to your AI and ask it your business questions and have the data insights come from, you know, your big data warehouse or anything. People are still so excited about that.
Tim Wilson
Wait a minute.
Julie Hoyer
From what I have seen, it's not.
Tim Wilson
Debugging versus giving it a... I mean, you, Julie, just said from business question to having it write it. Right?
Julie Hoyer
Yeah.
Tim Wilson
Which.
Mo
Yeah. Fair.
Michael Helbling
Yeah, Yeah. I think the distinction is important, but let's let you keep going.
Julie Hoyer
Yeah. I think it's still that there's a lot of this, like, fantasy of, like, it's gonna be so much faster for an analyst. Like, go into your analytics tool and just, like, type away the questions that you have to answer and, like, get insights really quick. And I have just had some specific, like, experiences recently where I'm like, see, it's still not there. You guys are saying that that's, like, the promise, that's what they want, but it's not true. So I'm still not seeing it even in that sense of, like, for an analyst and reporting, we're not close to that. Which takes a ton of time as an analyst: to synthesize the data, put it into a coherent answer, and have it be insightful for your business stakeholder.
Mo
Okay. Without giving away too much, this is a delicate tightrope to walk. So what we've been trialing, and there's some super smart people at Canva: Adam Evans had a really brilliant idea, and then Sam Redfern, who I used to work really closely with, has been exploring kind of productionizing it. It's been really cool. It's looking at what the top queries are that are getting asked, the SQL queries against a table or a report table or a model table, and then using AI to help generate the best query possible to get back the data. What we've noticed is if we do that, and then we return the data back and then ask our business question, it's doing a better job. And we're starting to test that out across multiple different business streams. And I've played with it a decent amount and I'm pretty comfortable, I think. The thing is, we're not at a point where you don't need a data person involved at all. You still definitely need to QA the data. You definitely need to be looking at the query logic, all that sort of stuff. But it is a lot more promising than I expected, in a faster time. And I'm gonna throw out something controversial, and Tim is, like, sitting on the edge of his seat: yeah, I think we might get to a point where we don't need dashboards. Mic drop.
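A rough sketch of the kind of workflow Mo is describing, purely as an illustration: mine the warehouse history for the most-run queries against a table, hand those to an LLM as grounding when it drafts SQL for a new business question, run the draft, and only then ask the model to answer the question from the returned rows. The `call_llm` and `run_sql` helpers are hypothetical placeholders for whatever LLM client and warehouse connection you actually use, and the whole thing assumes a data person still reviews the query logic, as Mo notes.

```python
# Hypothetical sketch of "top queries as grounding" -> SQL draft -> answer.
# call_llm() and run_sql() stand in for your real LLM client / warehouse driver.

def call_llm(prompt: str) -> str:
    """Placeholder: send a prompt to whatever LLM you use and return its text."""
    raise NotImplementedError

def run_sql(query: str) -> list[dict]:
    """Placeholder: execute SQL against the warehouse and return rows as dicts."""
    raise NotImplementedError

def answer_business_question(question: str, table: str, top_queries: list[str]) -> str:
    # 1. Ground the model in the queries experts actually run against this table.
    examples = "\n\n".join(top_queries[:10])
    draft_sql = call_llm(
        f"You write SQL against the table `{table}`.\n"
        f"Here are the most frequently run queries against it:\n{examples}\n\n"
        f"Write one query that answers: {question}\n"
        "Return only SQL."
    )

    # 2. A data person reviews the draft before it runs (printed here for QA).
    print("Draft SQL for review:\n", draft_sql)
    rows = run_sql(draft_sql)

    # 3. Only now ask the business question, with the returned data in hand.
    return call_llm(
        f"Question: {question}\n"
        f"Query used: {draft_sql}\n"
        f"Rows returned: {rows[:50]}\n"
        "Answer the question in two or three sentences, citing the numbers."
    )
```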
Michael Helbling
Well, yeah, I think I agree with that.
Mo
Oh, maybe it's not surprising.
Tim Wilson
Well, I mean, I think dashboards are generally bullshit.
Julie Hoyer
So I was gonna say more that I think, Mo, hearing your success with...
Val Kroll
This so far though.
Julie Hoyer
The difference for me is I'm not working with clients that are building something homegrown. They want something out of the box that works, and they don't... I think people don't realize the training that goes into it. I mean, it's contextful. It takes a lot. There's a lot to think through, and people aren't connecting those dots of, like, all the steps in between.
Mo
But you're thinking that most people just want to like, buy a tool and be like, here's access to our data warehouse.
Tim Wilson
And now tools are hitting me every goddamn day with "we have solved it." And they stand up the biggest straw man of the problem: business users can't get to their data, and imagine if they could just ask, what were sales like in the northwest region last month, and it would generate that query. And it is the biggest fucking farce. I was having the exact same reaction. Within an enterprise organization, with experts that have captured a lot of queries and captured a lot of expertise and trained it, I do feel like it is very, very different from the promise of the BI platforms and all these Johnny-come-lately upstarts that are like, yeah, we can solve this. That drives me nuts because they're saying...
Julie Hoyer
You come with all your data. We have a really good LLM now. Ask it your questions, and it has the data there so it can answer for you. And it's not taking into account that it's not smart enough. Like, it doesn't know; you haven't trained it how to actually... it doesn't have the context around your data. Does it have the context around your business? Like, that takes so much.
Mo
The thing that we are still having to do is, like, we have a very unique data warehouse and how we've chosen to build it, where we've tried to build a lot more, like, small, lean tables to answer specific questions, which means that we have thousands of tables, right? And so, like, the joins become complex, all that sort of stuff. And the thing that we are still very much having to do is helping point it at the right table and provide context on that table. And so I think my own thinking has developed quite a bit, in that previously I probably used to see our data warehouse as being almost a barrier to us using AI, whereas now I'm starting to see it as much more of an advantage. But you still need that, like, SME knowledge of, like, this is the best table to use. And one of the ways that we've been solving for that is looking at what are the top dashboards that people are looking at at a company level. Because often the report-layer table that's sitting underneath that dashboard is the best possible data source, because it's all structured and clean and, like, has all the right dimensions. And then we point it at that specific table. So, like, I totally hear what you're saying. Like, I have such a different perspective because we do have the SME knowledge. Right.
Tim Wilson
Well, it's intriguing if your training data is actually the history of all the queries that have been run. And I mean, that's kind of the wisdom of the crowds: if your training data is the queries that the experts have written, then now we can estimate the best query.
Michael Helbling
What's definitely become clear to me is that your source data requires many different other pieces of metadata or parallel data. Like the queries being run, like what questions people are asking internally, what reports are being used, what other things are happening in the business that aren't stored in that data set, so that an inference engine like an LLM can actually come up with something that is not just sort of that intern-level "time on site was 42 seconds" type of bull crap you get from big agencies. Did I just say that out loud? Sorry.
Tim Wilson
I think you, I think maybe somewhere you would have named a specific agency.
Michael Helbling
And I didn't want to go that far. But it's also interesting because the companies who are out on the forefront of this, trying to build sort of, like, these, you know, chat-assisted or AI-assisted data exploration tools... probably one of the ones that I'm most familiar with right now is Zenlytic, and they're very upfront about the fact that you have to build this other layer, which they call a cognitive layer, on top of it so that you can actually leverage their tool. And they don't claim to provide you insights at this point; they just claim to provide you ad hoc data. So if you need to get a metric, they can do that for you. And I appreciate both the honesty and the progress, because I am bullish on this. Like, I think there is a future here, but I also think we're nowhere close to asking a question and getting an answer that includes context and insight that gives us a next action.

Picture this: you're stuck in the data, slowly wrestling with broken data pipelines and manual fixes. Suddenly, streaking across the sky, faster than a streaming table, more powerful than a SQL database, able to move massive data volumes in a single bound... it's not a bird, it's not a plane, it's Fivetran. Need a hero for data integration? Fivetran, with over 700 pre-built, fully managed connectors, seamlessly syncs your data from every source to any major destination. No heroics required on your part. That means no more data pipeline downtime, no more frantic calls to your engineers, no more waiting weeks to access critical insights. And it's secure, it's reliable, it's incredibly easy to deploy. Fivetran is the tool you need for the sensitive and mission-critical data your business depends on. So get ready to fly past those data bottlenecks and go learn more at fivetran.com/APH. Unleash your data superpowers: again, that's fivetran.com/APH. Check it out.
Julie Hoyer
So, an example of that cognitive layer that we're running into. We were trying to use Explore Assistant in Looker, and I don't love it. I don't understand... anyway, we won't go down that. I don't love it. Here's an example, two examples. We don't have that cognitive layer, and I don't know how we build that in. We're trying to do this, let's say, a project that's pretty at scale, and, like, a cognitive layer for every client we might use this for, right? Like, that's quite a bit of work to spin up. So we were even doing a test use case where we were working with this, the AI Agent Explorer, and we asked it, we said, show us the top 10 performing landing pages, like, cost per landing page, right? And then we asked it the worst performing, and we were like, look... I was working with some engineers, and they're like, look, we got it to provide the data we were expecting. And then I realized it actually wasn't understanding best and worst, either. Like, even those semantics of me saying best cost per landing page would be the cheapest ones, and they were showing me the most expensive, and vice versa: when I said worst, they were showing me the cheapest. So it's even little things like that. Or we were trying to ask about a specific metric, but we were just using the layman's terms, right? Like a business user asking about it. And because the name coming from the data source is nowhere near that, you know what I mean? It was never going to get to that data point for us.
Mo
Okay, so what is top of mind for me right now is like, why do I not seem to be having these same challenges? Is it just that? Is it just that, like, we also have an enterprise account and we're uploading so much more of our own business context and so then we're like, not having these hurdles? Is that like a big part of it?
Michael Helbling
I think, yeah, because what you can train an LLM on is all about what you get back out of it. Because... so I live in a world, Mo, where I don't have clients who are taking all of their data and storing it in an LLM. Or how about this: consciously executing a data strategy aligned with the growth of AI usage, consistently. I have some clients who are doing quite a bit with it, but what they're seeing is the exact same thing. They now have people full time whose job it is to ensure that the AI is getting fed the right information, which I think is kind of fascinating. And then the other thing is that there's such a big expectation gap because of what AI is able to do in other categories. So, like, for instance, when I sat down with my son recently and we, quote, vibe coded a video game the other night, we had a working video game in, like, five minutes. It kind of blew my mind. And here's why: because I don't know how to write code. But when AI takes a step forward in capability that big, it makes people think, oh, that step forward is available in every context, and it's simply not. And I've thought about this a lot, like, why is it so good at coding already? And I think the reason why is because code lives all in the same place and is logical in its structure. So, like, the code is right there.
Mo
It's good at some code.
Michael Helbling
No, no, it's not perfect at coding, but, like, it's the best. Like, writing code is really the most product-ready thing AI can do, I think, besides making cool animated versions of your own photos. That is what it's really amazing at, and it blows my mind how good it is now at it. Like, it's so impressive. But I also started to realize: oh yeah, because everything it needs to know is right there. It's all in the code.
Mo
Yeah. But. Okay, can I talk you guys through an example? Yeah.
Tim Wilson
I want to call out that, you know, Ethan Mollick did a whole vibe-coding-to-build-a-game piece that's, like, worth a read. It was kind of, like, speaking things into existence, where it was a little bit more involved game, but kind of where he took steps forward and steps back. So yeah, your example just reminded me of that.
Michael Helbling
Way to slip in a last call there, Tim. Nice job.
Tim Wilson
Nope, wasn't even on my last call.
Michael Helbling
All right, Mo.
Mo
Yeah, showing off altogether. Okay, so someone on my team showed this last week, and to be fair, I have not played with Claude at all. I have been quite monogamous in my AI tooling. Basically, what he did is created a new Claude project. He uploaded into it the LookML for an existing look. So LookML is the language that sits behind Looker, which is a dashboarding tool, for anyone listening. You have to write LookML code to basically get the data in the right format to build a dashboard. So he uploaded basically the LookML for an existing look. He then added the underlying data that sits behind it from the data warehouse, as well as the code for how that table is created, then gave it a sample data set, and basically, like, saved these all to his project. And then, within a good 15 minutes, Claude, because he put a lot of thought and effort into the steps and what data and context he uploaded, gave him back the LookML to build a dashboard. He turned that around in, like, 15 minutes and built this whole new dashboard for our stakeholders, which, to be honest, we didn't have the resource and the time to build. He definitely, like, talked us through the fact that he had to make tweaks and changes to this or that, or, like, the wrong visualization was picked here, or he wanted the colors to be this, or, you know, that sort of thing. But that is, like, another example of: it is so much about what you're putting in. And I just wonder sometimes if the expectations of people are, like, here is one very selective bit of data, now answer this really complicated question, which it doesn't have enough business context to do. And that, like, we need to spend more energy on putting quality in. Oh, I don't know. I feel like Tim's rolling his eyes at me.
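For anyone trying to picture the mechanics, here is a minimal sketch of how you might package that same context programmatically rather than dragging files into a Claude project by hand. Everything in it is an assumption for illustration: the file names, the `call_llm` helper standing in for whatever Claude (or other) client you use, and the idea that you hand over the existing LookML, the table DDL, and a small sample extract before asking for new dashboard LookML. As Mo says, a human still reviews and tweaks whatever comes back.

```python
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Placeholder for your actual LLM client (e.g., a Claude project or API call)."""
    raise NotImplementedError

def draft_dashboard_lookml(stakeholder_need: str) -> str:
    # Context pieces like the ones Mo's colleague uploaded, read here from
    # hypothetical local files.
    existing_look = Path("existing_look.lkml").read_text()    # LookML behind the current look
    table_ddl = Path("report_table_ddl.sql").read_text()      # how the underlying table is built
    sample_rows = Path("sample_rows.csv").read_text()[:5000]  # small, de-identified extract

    prompt = (
        "You write LookML for Looker dashboards.\n\n"
        f"Existing LookML for a related look:\n{existing_look}\n\n"
        f"DDL for the underlying report table:\n{table_ddl}\n\n"
        f"Sample rows (truncated):\n{sample_rows}\n\n"
        f"Stakeholder need: {stakeholder_need}\n"
        "Return complete LookML for a new dashboard and note any assumptions you made."
    )
    return call_llm(prompt)

# A human then reviews the output: visualization choices, joins, filters, colors.
```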
Tim Wilson
Hopefully not to generate a fucking dashboard, but okay, that's awesome. We found a fast.
Mo
Generated a fucking dashboard. Do you know how excited I was? Big shout out to Tableston.
Julie Hoyer
I'd be excited and save you all those.
Michael Helbling
Hold on, hold on. You just said we're not gonna need dashboards. Why are we generating?
Mo
People still think they need them now, but in a year I don't think they will because they'll be like, ultimately, you look at a dashboard to be like, are we on track or not? Like, what was our performance? Are we hitting it? Blah, blah, blah. Like, people. I feel like it's a crutch that people need. And it's like, if you can answer that question without having a dashboard, why would you need it?
Michael Helbling
Yeah, I look forward to a future where my brain gets stimulated and I smell apples when sales are down in the Northwest.
Tim Wilson
I mean, that's kind of bizarre. I mean, to me, not to mount a defense of the dashboard, but the only place a dashboard is really useful is actually showing, in a consistent manner, are we delivering against the business outcomes, against our target? So I actually would think that would be useful, as I don't want to ask an LLM every time, what is it I care about, what metric is it that I want to look at? Like, so I don't know. That's maybe a topic for a whole other...
Val Kroll
You just say, where am I underperforming? And have it spit it out.
Mo
Am I on target? Where am I underperforming? And what action should I take?
Tim Wilson
Oh, and what?
Michael Helbling
Oh, okay.
Val Kroll
I didn't say that one.
Julie Hoyer
I think.
Tim Wilson
I think I do need another.
Mo
Tim's gonna need a drink.
Michael Helbling
I think Canva has another breakthrough product category here. Analytics tools.
Tim Wilson
I did a thought experiment where I said, really, the best, the perfect dashboard would be one that only showed where you were underperforming. So you'd have the same structure, but everything would go away if you were actually, you know, delivering, if you were meeting your results. And so you'd wind up with something very sparse. But I still think there's human value in knowing what to look at and where. Because that's been another thing with so much of the hype around AI, and this even goes back to other products pre-AI, where we're still doing the: oh, our users don't want to see charts, they want to know what's going on. And so it basically would barf out text that described the charts. As human beings, a visual representation of data is easier to internalize than prose.
Mo
So some tools do the visualization too. Like, I didn't realize how good Claude is at doing that. Like, it does visualizations for you and like scorecards and all that sort of stuff. So it's like, do you need this dashboard to exist in perpetuity or is it like, you know, you're going to do your check in at whatever cadence it is for whatever meeting, and it just pops it up and there you go.
Tim Wilson
But I hope that it would pull up the same thing every time. Like there's the same. The same. There is a. There's value in consistency.
Mo
Good point.
Tim Wilson
Structure.
Michael Helbling
Right. But I think you could have a prompt that schedules that and runs it the same way every time.
Julie Hoyer
But is it more efficient, like, technologically, and whatever it takes to run AI, to keep asking it the same thing? When you need this, like, just create it once and let it sit, and go look at it, right? Like, is it really worth, like, the energy?
Michael Helbling
Much like computers, I expect the cost to come down over time. I don't know. Who cares about that? I mean, it just feels inefficient.
Julie Hoyer
Like, yeah, yeah, it's exactly what I want. Like, let me build it and save it in a dashboard and I'll go click on it every Monday. Like that to me just seems easier.
Tim Wilson
But the hurdle that is much easier, and it goes a little bit to that example, is: I'm building something, I'm writing some code, I'm writing some SQL. Consider just the traditional task where I might hit a snag and read through and put in comments and try to figure out where the hell it's breaking, and then go and search and read, like, seven Stack Overflow posts that aren't quite on that. I mean, the limited work I've been doing, when I'm like, I want to specifically do... I want to take the system time and I want to convert it from this to this and compare it to that. And it's probably old school now, but I wind up in Perplexity. And I think, Michael, you made a comment, like, offline, that the coding part, it is good. And with the interface I was using with Perplexity, where I'm like, oh, it's watching me, it's looking at the Posit community, it's looking at Stack Overflow, it's basically doing a bunch of Google searches and consolidating and comparing them to my query. And then it's returning me code that is very good and reliable. But that's not me asking it a business question. That's me as an analyst saying, I want to see this, can you help me write some code to do that? And because I'm asking it about doing stuff in R, and I have a decent grounding in R, what comes back... one, it's not coding the whole video game where I know nothing. It's giving me 10 lines of code and I'm like, oh, I didn't know that system function existed. That's pretty cool. I've learned more. So in that case, I feel very comfortable that it's rapidly speeding me up. Instead of me doing 12 searches and winding up on the same unhelpful Stack Overflow post, it's actually returning the right result. I'm like, oh, I've learned something, and moved on. I was like, holy cow, this accelerated my iterations on writing the code. I'm like, that's pretty cool. And that seems wildly better than it was, you know, even six months ago.
Michael Helbling
Yeah.
Val Kroll
So to go back to the original question that launched us into this, which was, like, is AI going to take our jobs? Yeah, I've been holding on to my answer.
Michael Helbling
Listen, Val is trying to talk here, people. Come on.
Val Kroll
No, I just remember... because lots of people have written about that topic. Like, that's, you know, definitely an interesting thing that people read. But one of the best articles I had seen on this, no surprise, was Eric Sandersam's, and one of the concepts that he brought up around this was that, you know, AI is really good at problem solving, and it's getting better and better, but it's not making a lot of progress on the problem-defining part of it. And that's where that human component always is. And, like, that's the business context that we've been talking about: coming up with the hypothesis, structuring exactly what tasks needed to be done in order to, you know, do whatever you were working on. Tim, if you want to reveal your project, I'll leave that to you. But I think that's a really helpful way that my brain kind of organizes and categorizes where there'll continue to be improvements, but where there'll always need to be an assist. And that's why we can be comfortable.
Michael Helbling
And I'll go a step further than that, Val. I actually really think that as AI comes into its own, it'll start to really show who can do that really well and who cannot. In organizations like, AI is going to basically highlight the people who are really shit at understanding the levers that drive the business and driving down into the causes and effects that actually make things happen. And it's actually going to make people look bad eventually because you'll be like, oh, yeah, you're not getting anything of value out of this tool. That's strange. Let me just. Oh, yeah, that's like that. And then suddenly that person's going to be shown to be like, not really of the caliber.
Mo
I don't know. I feel like maybe it's just me being crazy optimistic as usual. I see this as really exciting; like, there are so many boring bits in the data job.
Tim Wilson
No one's saying it's not. I'm bullish, totally.
Michael Helbling
I want those people out. So I think that's great, but it's.
Tim Wilson
The difference between, right... and, Mo, you shared an example that did not make it to a recording, and we won't name who did it. It was some business partner saying, hey, can you generate some hypotheses? Like, literally the prompt was, can you generate hypotheses? Took those, threw them over the wall to you and said, hey, can you prioritize and validate these, you or your team? Compare that to... and I've heard, like, I was talking to John Lovett about how he went about writing his latest book, The New Big Book of KPIs by John Lovett, which now doesn't have to be my last call, and part of his...
Val Kroll
He's stuffing this episode full of last calls.
Tim Wilson
But part of his technique, and I've heard others talk about this, I mean, this is not totally original, but he said: imagine you are a... he gave specific industry personas, and said, you are responding to me as an ideation assistant. And I feel like a lot of people, I mean, Ethan Mollick, Jim Sterne, John Lovett, lots of people are saying, let the AI be a really smart co-worker, use it as a sounding board, still be a human. But instead of saying, hey Julie, can you hop on a call so we can kick some stuff around about that, before you've done that and gone to find time on Julie's schedule, it can instead be: hey, you're an analyst with a, you know, applied math master's degree who's been working in an agency, you know, whatever. Now I have a question about this. What sort of prompts would you... what would you ask me? What would you think? What would your ideas be? So that is an ideation companion. And I've tinkered with that as well. Not saying, give me this, and I want to take and edit the responses, but much more of: I want to use you as a non-judgmental and infinitely patient sounding board. And I think that works for hypothesis generation, because it forces me to actually express: what am I thinking? What do I see? I think it might be this. I think it could be this. Just like I would in more of a human interaction, as opposed to, I want to write the one-sentence prompt and have it just give me the answer. And when you look at some of the people out there who are posting, their prompts are pretty involved. And it is the case, back to the coding, where Cassie Kozyrkov had an article where she said, if you know how to code, it is actually in many cases faster to write the damn code than to write a prompt that describes what you want the code to be. And that's very different from, like, your example with writing the video game.
Michael Helbling
Oh yeah, because I can't write the code. So.
Tim Wilson
Right. So I'm like, so I'll just describe it and I'll work in that prose. And I was like, oh, okay. That makes... I don't know, I just...
Val Kroll
Had on the sounding board front. Wouldn't it be cool if we could make it talk to Julie's gem?
Mo
But see, that's the part of my.
Julie Hoyer
Job that I like. If people were like, oh, I don't want to bug Julie, I'll talk to her gem, I'd be Like I'm gonna toss this gem real quick, like so.
Michael Helbling
Speaking of sounding boards, I built this in NotebookLM Plus: I just took all the personality assessments and leadership-style stuff I've ever done, dumped it in there, and I made an AI chat agent that people can interact with about my personality, my style, and ask questions about how to conduct meetings with me. And I've given that to my team.
Val Kroll
So that, you know, gotta go, sorry guys, gotta go.
Tim Wilson
I'm out.
Val Kroll
I'm busy all of a sudden.
Michael Helbling
But I mean, there's lots of these amazing little things you can do with tools like that. And it's not just idea starters. It can also be things we never thought of as tools. Because before, what I'd do is kind of type up sort of a one-pager, right? Of, like, here's how I work best with people. And people would read it, or throw it away, probably. But, you know, now it's sort of like, if you're curious about something, there's, you know, eight years of leadership and personality stuff I had to take tests on; feel free to just ask it anything.
Julie Hoyer
Kind of fun if you're like, oh, I don't want to ask Michael this question, but I need to know. I'll ask his personality, you know, AI agent.
Michael Helbling
It's not me in there, okay? It's just about me.
Val Kroll
Black Mirror episode. And before anyone asks...
Michael Helbling
I could only share it within my own organization, because that's how NotebookLM Plus works. So I cannot share it with you. So don't ask.
Mo
Do you know? Okay, I need to have a gripe about something.
Michael Helbling
Yeah, do it.
Mo
This is where I'm seeing, like, AI really just, like, mess up my life. I am so sick of reading things that have been written by AI. I am, like, so violently angry about it. Especially, it is getting overused to write work on analysts' and data scientists' behalf, like, put together the findings, and it is crap. Because, like, I think there is a way you can make it okay, of, like, you write it and just have it clean up your text, versus... But I am reading so many documents that are written by AI. And the thing that also frustrates me is, if anyone has, like, a half-baked idea, it's suddenly, like, here's a doc on it, and you're like, great, so now I have, like, 5,000 times more docs to read, and it's a half-baked idea, because you didn't have to spend the day writing it, or a couple of hours writing it. You could basically leave yourself a voice note and then turn it into a doc. And so people are just, like, throwing these docs around, and I'm like, it's...
Tim Wilson
Actually, you should see some of the... so frustrating... most social media promotions that are, like, AI generated, they're the worst.
Michael Helbling
Tell me about it. But Mo, I've got a solution for you. You take those docs, you chuck them in an AI, you get a one sentence summary, move on, and then no.
Val Kroll
I didn't really like your insert one sentence idea.
Mo
The issue is, though, that often, I feel like, the directness or the takeaway gets so watered down that what you're reading starts to turn into, like, smush, and it loses, like, the crispness of what the idea was.
Michael Helbling
And this is, I think, very, very important. There's a point about AI that I think is really important about what you're talking about, which is, the way I say it is: AI is right down the middle, in terms of an average. And basically, when AI does something, it kind of does it just okay.
Mo
Yeah.
Michael Helbling
And sometimes that's really great. Like, it made me a just-okay video game, and that's amazing, because I'm a zero on that. But if I'm pretty good as an analyst and it makes me a just-okay analysis, that's pretty crummy. I can't work with that. I need better than that. And so one of the things that stood out to me about AI and its usage is that knowledge and expertise actually become a massive and important filter for whether AI is going to be beneficial or not beneficial. Like, I was talking to my tax accountant, and he's like, oh, Michael, you wouldn't believe the crazy things people are getting from AIs about taxes. I'm like, yeah, because they have no idea how they should be doing their taxes. You, as a tax expert, can take one look at that and know if it's good advice or bad advice. Just the same way I could take one look at an AI's output on something I'm an expert in and know if it's good enough or not good enough, or, like, 50% of the way there, and I can tweak it upward. But the point is, without knowledge, I can only hope for average. And so that's what everyone has to understand: when you let AI do something you don't have expertise in, you're basically only going to get maybe 50 to 60% good quality. And of course that number is improving; I'm excited for it to keep improving. But the reality is, that's really what we're getting out of it. And we're not getting anything that no one's ever thought of before. We're only getting what's been thought of before, and what's most standard. Because I tested this with data strategy. I went to the deep research in ChatGPT and I said, really put together research around the top themes and things like that with data strategy. Like, what are people saying about it? And it did a great job. I mean, it pulled 40 different sources and wrote this whole thing about it. And then I said, what's the missing thing from all of these different things? And it literally fell over. It couldn't really come up with anything, because it's not there to do that kind of thinking. Now, I can do that kind of thinking, but there are not enough other people in the consensus applying that to it that it can build a knowledge base around, to say, oh, I've trained myself on that information, here you go. And so that's where it's important to think about: okay, my expertise applied to AI gives me a superpower. Someone without expertise applying AI only gets brought up to average. And so now you can see, like, okay, then how should we use it in our businesses? The one thing I do get concerned about with AI, and how we're going to proceed, because we're obviously not going to stop using it, is: what do people without expertise do to build expertise now? Because if AI is writing all of our code in the next three years, how do people who are starting out as software developers build the expertise to be able to coach the AI to write amazing code? Or how does that next amazing breakthrough in coding languages, or the replacement for SQL, or whatever, ever come about if all we're using is the same things AI knows the most about? Because, like, the developers I've talked to say the more esoteric the language is, the less the AI is really doing a good job with it. The more popular the language, the more amazing it is, because there's a bigger corpus of information for it to consume and learn on. So it's a really interesting challenge to think about.
And as analytics people, I think about it for us mostly as, okay, how do we take a junior analyst and make them into an amazing senior analyst down the road? And if AI is coming in and doing a bunch of that job... the nice thing is, AI is nowhere close to doing the analyst job now. Give it two years and my story will change; so much progress is being made, and I'm super excited about that. But that's the thing, I think, for a lot of us, and especially experienced listeners, to think about: how do we make sure there's a bridge backwards, so that we don't lose the connectivity, so that future people can come in and be good at this as well? Because the last thing we need, the last thing we all want, is everyone getting to average and no further.
Tim Wilson
This one I can't remember the source on, but I do remember seeing someone who said they'd used AI for it: had given it kind of what they were wanting to get more expert at, and said, develop a training plan for me, these are the criteria, you know, a half hour a day. Because, kind of along those lines, that's why I'm terrified that people think this is gonna let me skip the steps of hard work and frustration and thinking, about the business, about how code works, about architecture, whatever it is. And I don't think that's what it's gonna do. Like, people still need to develop expertise, and you develop expertise through practice. And there's a degree of accelerating, but I don't...
Julie Hoyer
Yeah, I think it's crazy, that one. Michael, I love the way you were talking about the averages. I've never thought about it that way, and that was definitely, like, a clarity moment for me. Because I feel like people can't start with a blank slate. Like, how do you, to your point, how do you gain the skill? Or, Tim, kind of what you're saying, like, how do you gain the skill to look at a blank screen and be like, I need to go write code to do this, or I need to get my thoughts out in a coherent way? And if you've always had the ability to, like, go to AI and get even just a starting point... like, I don't know. I just feel like that's such a core skill in problem solving and problem definition, and just, like, growing in general in your capabilities. Because something I found, too, is, like, sometimes I struggle or push back, maybe drag my feet, on going and using AI. Because, to Mo's point earlier, I don't like the brain work of going through and slogging through its long, verbose, kind of average answer and tweaking it. Like, I sometimes do better, with my workflow and the way I like to work and, like, the output I get... like, I like a blank screen and I just brain dump, or I just try something. And then, to Tim's point earlier, like, then maybe I go and use AI to help me. But I don't know, it's, like, such a different exercise in my head that I find it exhausting to take an initial AI output and then make it into something good.
Mo
Do you know what's so funny? I'm the complete opposite. Like, I really loved, because I, I am one of those people that literally needs a rubber duck on my desk because I need to, like, have something to bounce off and be like, oh, I'm hitting this wall, like. Or, oh, I haven't thought of this. And like, I am the epitome of the rubber duck when I'm. Especially if I'm writing code. And that's what I essentially am using AI for now is like, to go back and forth and then be like, oh, no, you haven't gotten this right. Okay. Oh, no, I want to look at this now. Or like, I want to change this wording. And I do. I was thinking about this the other night. I was working on something and part of me was like, oh, I feel like this might have been faster if I just did the whole thing from scratch. But I feel the, the output ended up being better for my working style because I got that feedback loop, if that makes sense.
Tim Wilson
But I'd say that's different. You still initiated it. You brought your expertise, your point of view, your thoughts, and you put it in. I think, Julie, if I'm hearing right, you're saying: if I don't come in with a starting point, if I don't come up with something to bounce it off, and I just show up with a prompt, I'm going to write this kind of vanilla thing and I'm going to get vanilla back. And then I'm going to say, send it to my favorite presentation tool and say, generate a presentation of it, and it's going to make a vanilla presentation that checks a lot of boxes but doesn't move anything forward.
Julie Hoyer
I don't know, it's even like when I've asked it to help me, like, summarize a lot of data. Like, I've done a sentiment analysis recently, so I was using a sentiment analysis, like, gem in Gemini, and I, like, stripped it all of PII and all that. But I put in these responses and I was asking, like, help me take these 700 responses and just, like, help me identify some themes. And at first read, it's like, oh yeah, that's great. I could ask for direct quotes that prove each of those themes. But then I'm going through and checking, and I did read through, like, all the responses, and it's just interesting how much rework there is. And I'm not saying that's, like, not a good place to start, but that is, like, exhausting to me, of being like, it wrote up this thing and now I actually have to re-dissect it and take it apart. And it's just a very different, yeah, like, working style. Like, I like when I can come with more, like, a vision of what I'm trying to get. And I guess for the sentiment analysis, like, I don't know a better way, right? Like, how am I supposed to go through all these written things and remember all the quotes and, what, like, physically, like, put them in categories? Unrealistic. But that exercise made me realize, to Tim's point, yeah, like, I like using AI to further something I kind of already have going, rather than it spitting out this initial kind of messy thing and having to rework it. I guess it's just a preference thing.
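A small sketch of the theming step Julie describes, with the verification she ends up doing made explicit: ask the model for themes plus supporting quotes, then check every quote verbatim against the source responses so you know nothing was paraphrased or invented. The `call_llm` helper and the prompt wording are illustrative assumptions, not a specific Gemini gem, and PII is assumed to be stripped before anything is sent.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM you use (a Gemini gem, in Julie's example)."""
    raise NotImplementedError

def theme_responses(responses: list[str]) -> str:
    # Assumes PII has already been stripped from `responses`.
    joined = "\n".join(f"- {r}" for r in responses)
    return call_llm(
        "Identify the main themes in these survey responses. For each theme, "
        'include two supporting quotes copied verbatim, wrapped in double quotes.\n\n'
        f"{joined}"
    )

def flag_unverified_quotes(summary: str, responses: list[str]) -> list[str]:
    # The re-checking Julie describes doing by hand: return any quoted snippet
    # in the summary that does not appear verbatim in the source responses.
    quoted = re.findall(r'"([^"]+)"', summary)
    return [q for q in quoted if not any(q in r for r in responses)]
```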
Tim Wilson
So there's my Cassie Kozyrkov quote, from her piece on what vibe coding is, where she was drawing the distinction between trying to read somebody else's code versus writing your own code and trying to debug it. And she was like, at least when you write buggy code yourself, you understand the flawed thinking that created it. With vibe coding, you're playing archaeologist in someone else's mistakes. So I guess if you went through and did the sentiment analysis yourself, by the time you got to response number 650, you'd be like, oh, I'm doing this differently from how I did it initially, now I need to go back and do it again. But you'd have that kind of baked into what you're doing. If you just skipped all of that, you don't know what it...
Julie Hoyer
Yeah, yeah, yeah, that's exactly. That's the exact example and perfect way to put it. It is. It's like, I don't trust it. I don't know all the assumptions it made. And now I'm kind of having to dig and check it off.
Mo
That's interesting, what Cassie said, though, because I found, like, I've used it quite a lot. Not, sorry, I haven't used it to QA someone else's code, but I have used it definitely to understand someone else's code. And I found that super helpful, because it was, like, a business area that I wasn't as familiar with. I wasn't familiar with the tables and all that sort of stuff. And I kind of wanted, like, a sense check of, like, how is this metric being calculated, all of that sort of stuff. And it helped me understand that at a time when the person who wrote the code was asleep, and it was really useful.
Tim Wilson
That's not her point at all. Her point was, if you ask it to generate the code, then the code you've gotten is from the person who's asleep.
Mo
Oh, God.
Tim Wilson
She was saying this is, like, debugging. So absolutely, if you don't go ask...
Julie Hoyer
It all the questions of like, why'd you choose this? Did you think of this edge case? What happens at this edge case?
Tim Wilson
I mean, I think "you just don't know" is a great point. If you're trying to look at somebody else's... like, if it's a spaghetti hot mess, I could even see asking it, like, how good is this? Like, this seems like it's 4,000 lines.
Michael Helbling
Is.
Tim Wilson
Could this be done better? But again, that's as an assistant, saying: I don't understand what this is doing, and the person who wrote it isn't here; help me out. I think that's a great use case.
Mo
Okay. One of the things that is coming to mind also, with vibe coding... I'm gonna say something that might also be controversial. I wonder if the reason the LookML example is so good is because LookML is so basic. Like, most people can write LookML; it's fairly simple, I would say. Versus, like, if you're trying to write code for something that's very complex and then trying to debug it, I could see it being very bad at that. Whereas, like, I don't know. So maybe it also has to do with the complexity of the problem, what the code is trying to do, or the coding language.
Tim Wilson
But the complexity front has me thinking. If you look at where people kind of jumped to labeling themselves data scientists after they'd taken a Python boot camp, and didn't have the... what are the trade-offs, and the different models that I could choose to run on this? Asking AI, you're probably giving it incomplete information: hey, what kind should this be? Gradient boosting, you know, like, what should I use? And maybe, Michael, it goes back to what you were saying about somebody who doesn't know any better. If the AI says, well, based on what you gave me... it didn't think to probe for some other factors, it didn't know some context or nuance, it could totally send you down a path that wasn't helpful. Whereas somebody who's, like, a legit, experienced data scientist probably wouldn't even need to query it; they'd say, well, given the nature of this, I think we should use X, Y, and Z.
Val Kroll
There was something in what you said there, Tim, because you were saying, like, if they had taken the Python boot camp, they might not know what to think about different models, having that knowledge. And then, when you were juxtaposing that, you said someone who has more experience. Because I think that's a key part of it: like, the tripping and falling down, and, like, knowing what the watch-outs are. Like, I think that's a huge part of it, too. There's nothing to replace the experiences, the scars, that have made us who we are. Right?
Julie Hoyer
Stick with you.
Michael Helbling
I want all of you to struggle with applying Stephen Few's principles to data visualizations in random BI tools.
Mo
Okay, so one concept that has been turning around in my mind a lot of late, and this is tangential to this whole AI piece: I keep coming back to, what happens if we give people more ability to self-serve, to answer their own questions using AI, whatever it is, and they misinterpret it or they make mistakes? And recently someone said to me, well, what if they do? So they make a mistake, they misinterpret the data; they're accountable for that mistake and that misinterpretation, and then they need to fix it, and they don't make that mistake again. And I feel like it's this tension that's been, like, rolling around in me, where I always want to protect people from making the less good decision. And so I'm like, I want them to make the best decision possible the first time, and, like, we can help you do that; that's what data science does. And the funny thing is, though, as we talk about expertise, so much of your expertise comes from making those mistakes yourself. So, yeah, anyway, I'm just thinking out loud about letting people just fuck up themselves and then figure it out, and how there's value in that.
Tim Wilson
Yeah, plus that's actually kind of part of the human experience.
Val Kroll
I mean I was gonna say yeah.
Tim Wilson
Like, there have been lots of thought pieces around, like, if there wasn't hardship and frustrating stuff and mistakes made... I mean, that's getting rather philosophical, but, like, if everything is a smooth path, then what are we? We gotta go, you know, find aliens to fight or something. That's where...
Mo
But isn't that the point, though? That, like, all these people that think, hey, I can just, like, throw a CSV into, like, ChatGPT and it's gonna answer all my business questions, I don't need data scientists, blah blah blah. Why not let them do it? Be like, sure, you want to upload these CSVs and answer your questions? Get some shitty answers back and make some shitty business decisions. That, my friend, is going to be a great learning opportunity.
Val Kroll
They'll make a bad decision, and then it will come back, and they'll be like, oh, it was just really low-quality data. Like, we really just need to clean our data, and, you know, we need some more tools, different tools. It was the tool's fault.
Michael Helbling
The tools are always the ones that are messing us up for sure.
Mo
Oh Val, that hurts. That hurts.
Val Kroll
Well, if someone thinks that that's a solution, Mo, do you really think they're going to have, like, the self-reflection to be like, oh, it's not the tool, it's me?
Tim Wilson
That's the other thing, separate from the "we're going to stand up our little Johnny-come-lately, just ask the question and it gives you the answers" crowd. The other is that so many people have jumped on this: well, with AI, you've got to feed the beast, so you need to get all of your data in. So that has also stood up an enormous number of companies that are now sowing fear, uncertainty, and doubt that we've got to have all the data pumped in. And it's kind of energized... I was talking to a longtime friend, she's a marketer, and she went on a tear about cookie blocking. European-based company, she's in North America, and she's like, we had to fight so we could get the cookie... even if they don't accept consent, they can... if they don't consent to...
Mo
What? What's going on?
Julie Hoyer
Checking her blood pressure.
Michael Helbling
Checking her blood pressure. Tim's rant. She's like, poor Val.
Tim Wilson
But that has been fed as well. Like, it does get to where Val was: oh, and now if a bad thing happens, it's not because I tried to shortcut it. Nobody's going to accept that the AI is no good. It's going to be: we must not have had enough data, the data must not have been clean enough. And they throw it to the data team, and that becomes the problem, when it's just often not. It's like, no, you need to think harder.
Mo
Thanks, Tim. I'm back to pessimistic.
Julie Hoyer
How do you make sure people can still sniff out the BS? You need enough people that can sniff out the BS, and you need enough people to not get stuck in the echo chamber that maybe AI is making worse in some areas. You know what I mean? Like, that's where my head goes: the people who can, like, see beyond it will still rise to the top, because I feel you're gonna get a lot of that, like, echo chamber.
Michael Helbling
Like, it's hard enough to maintain data quality in a single source of data or a single data set. Now map out the four to five data sets you'll need to maintain in complete alignment with complete accuracy. It's not a job that's going to be very easy, very fast. That's the truth. And we have to do that if we want LLMs to be able to house the context for actually doing what we would call analysis.
Julie Hoyer
Blood pressure's back.
Michael Helbling
Blood pressure.
Julie Hoyer
Check mine.
Michael Helbling
Yeah, check mine. I love it. We're the one audio podcast with prop comedy. So, yeah. All right, well, hey, we better wrap up this episode. Congratulations, each of you. Now go ahead and go put AI expert on your LinkedIn profile. Everyone else is doing it.
Tim Wilson
AI strategist. AI strategist.
Michael Helbling
Oh, AI strategist. Oh, I like that. That's better. Yeah. Did you use an AI to come up with that?
Tim Wilson
No, I might have seen that on a long-time member of the analytics community's profile. I was like, oh... anyway. Interesting. Very good.
Michael Helbling
And actually, what stood out to me: Mo, I loved hearing from your experience, because it's a lot different than what I'm experiencing out there, given the context that you're operating in. So that was really great, and I loved the juxtaposition and just sort of learning from that. So that was amazing. Tim... nothing. I got nothing. Tim, of course, name-dropped, like, everybody: Cassie and Ethan Mollick, which I also...
Mo
Most well read individual.
Michael Helbling
Yes, exactly. Continuing on in his quintessential analyst ways. Nice job.
Val Kroll
He can't help himself.
Michael Helbling
And Julie, way to lead the conversation today. Thank you.
Val Kroll
We knew it.
Michael Helbling
Yeah, I said it. Listen, I just typed into Gemini and I said, who's going to be the best? Kind of like, who's made the best gem? And Gemini's like, who are my options? And then: oh, Julie. Julie by a mile. And Val, thank you too. Because I think what you did, Val, which was actually super important for the conversation, was you turned it back to who, and what we're going to do with people around this, which... I think we were all over the place, and you brought us back to probably the more important central element of this, which is: we're analytics people. So, all right. And I went on a few rants, so, yay.
Val Kroll
All right, good ones. I'm a fan of one.
Michael Helbling
Let's just say right now, I bet you're out there passing this whole episode through an AI filter to bring it down to, like, you know, 30 seconds or something. But if you hear something you're interested in, we would love to hear from you. So please do reach out. You can reach us on LinkedIn, or on the Measure Slack chat, or by email at contact@analyticshour.io, and we'd love to hear from you. Please do not send us AI-created emails; Mo does not appreciate that. Or if you do, train the AI to be very succinct and funny. Yeah, and they're getting so much better at being humorous now, so it's good. And then the other thing I'd like to say is, you know, we've been around for a long time, and if you've never thought to go on your favorite platform and give us a rating or a review, I'd say AI can help you with that too. So, you know, we're not above it. Just go out there and give us five stars and a long-winded, AI-driven... no, don't do that. But do rate and review the show. It helps AIs consume the show and then tell people the cool things we say. And then last and certainly not least, a big shout-out to Josh Crowhurst, our producer, for everything he does to help us get this show off the ground.
Tim Wilson
Can we just say that every time you play around with, like, AI-generated images of us as a group, Josh always looks amazing.
Mo
Amazing. Do you know what it is? Do you know? I've worked this out. AI knows what to do with images of men with beards. That is like the summary I have.
Michael Helbling
Okay, that's interesting. Okay.
Mo
Yep.
Michael Helbling
There's... that's probably a whole episode right there. I don't know. But anyways, yes, Josh Crowhurst, who looks amazing in Studio Ghibli form as well as other styles. But yeah, thank you, Josh, for everything you do. And I would just say, and I think I speak for all my co-hosts out there: no matter what part of your job AI is doing, the part it can never do for you, and you've got to keep doing, is to keep analyzing.
Tim Wilson
Thanks for listening. Let's keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst. So smart guys wanted to fit in, so they made up a term called analytics. Analytics don't work.
Michael Helbling
Do the analytics say go for it.
Julie Hoyer
No matter who's going for it.
Michael Helbling
So if you and I were on the field, the analytics say go for it. It's the stupidest, laziest, lamest thing I've ever heard for reasoning in competition.
Mo
Guys, I've got an exciting example to share today. I'm not telling you now.
Michael Helbling
Yeah.
Mo
I haven't even had coffee. Like this is.
Tim Wilson
I am going to need to get another beer.
Julie Hoyer
It's.
Michael Helbling
See? Told you to chug that.
Julie Hoyer
Well, I don't know if you need coffee.
Val Kroll
It should be worth it. Yeah, if we, if we push it right up to a 5:30 central ending time, we might get an appearance of Abby Lou.
Julie Hoyer
Oh, Abby.
Michael Helbling
I. I think that's perfect, actually.
Julie Hoyer
That sounds wonderful.
Val Kroll
I opened up my laptop while we were eating breakfast this morning, just to, like, do something really quick. And she's like, are you talking to Tim?
Julie Hoyer
The way she refers to Tim constantly cracks me up. What was it? She was pretending to be working when she was home sick. Yeah, she was like, hey, Tim. Like she was pretending to talk to Tim.
Mo
To be fair, my kids do the same, but they're like, I'm going to go do work now. And then they sit at my desk and tap away, and I just turn off my keyboard. But they don't...
Julie Hoyer
Yep.
Tim Wilson
They don't name specific co-workers. I mean, you have a few more co-workers.
Val Kroll
They probably have a little more variety.
Mo
Well, they do. They do. When they come to the office, they're like, where's Auntie Priscilla? So, yeah, they do have their favorites.
Michael Helbling
I put in Slack my first attempt to make us into Muppets, and it invented a random other Muppet and put it in there. I was like, there's a ghost. That's Ken Riverside.
Julie Hoyer
I don't know that's Ken, but that's like old Ken. I definitely thought of him as, like, younger, hipper, more dapper. But I like.
Michael Helbling
Yeah, yeah, no, we've already got Ken nailed with AI before.
Val Kroll
So I like how we're all Muppets and Tim is from the Simpsons.
Michael Helbling
Yes.
Mo
Oh, wait, which one's that one?
Val Kroll
Ted is Flanders' cousin, and we're all.
Julie Hoyer
Muppets.
Michael Helbling
So that one didn't work very well.
Tim Wilson
Rock flag and more dashboards through AI! No.
Episode #270: AI and the Analyst. We've Got It All Figured Out
The Analytics Power Hour
Release Date: April 29, 2025
In Episode #270 of The Analytics Power Hour, hosts Michael Helbling, Tim Wilson, Julie Hoyer, Val Kroll, and Mo delve deep into the evolving relationship between artificial intelligence (AI) and the field of analytics. The conversation navigates the promises and pitfalls of AI, exploring whether it serves as a mere buzzword or a transformative tool capable of reshaping analysts' roles.
Michael Helbling kickstarts the discussion by reflecting on AI's journey within the analytics realm. He humorously mentions an AI-generated podcast intro from 2023 that fell short, setting the stage for a candid exploration of AI's advancements and current standing.
Michael Helbling [00:13]: "AI hasn't gone away, and the possibilities, capabilities, and potential of these LLMs are expanding, seemingly by the minute."
Helbling raises critical questions: Is AI just another predictive model, or does it hold the potential to disrupt traditional analytics roles? The hosts agree that while AI has made significant strides, it remains a tool that complements rather than replaces human expertise.
Julie Hoyer emphasizes that AI isn't yet ready to replace analysts. She points out that while AI can assist with tasks like writing queries, it lacks the nuanced understanding required to derive meaningful business insights.
Julie Hoyer [04:15]: "AI is not ready to just replace us. Even for writing queries, there's no talk to your AI and ask it your business questions and have the data insights come from your data warehouse."
Tim Wilson concurs, highlighting the difference between debugging and AI-generated query writing. He underscores that AI still requires significant human oversight to ensure accuracy and relevance.
Mo introduces an intriguing use case from Canva, where AI assists in generating optimized SQL queries by leveraging the company's top-performing dashboard data. This collaboration showcases AI's potential to streamline data retrieval processes without eliminating the need for human intervention.
Mo [05:35]: "We're not at a point where you don't need a data person involved at all. You still definitely need to QA data."
The hosts discuss various experiments and implementations of AI in analytics. Mo shares Canva's approach to enhancing SQL query generation using AI, which has shown promising results in producing more efficient and accurate queries. However, he notes the necessity of Subject Matter Experts (SMEs) to guide and validate AI outputs.
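To make that concrete, here is a minimal, hypothetical sketch of that kind of workflow: ground the model in the warehouse schema plus a handful of queries the team already trusts (for example, the ones behind top-performing dashboards), have it draft SQL, and keep an analyst as the QA gate before anything runs. The names here (build_prompt, draft_sql, call_llm, the sample tables) are illustrative assumptions, not Canva's actual tooling.

from typing import Callable

def build_prompt(question: str, schema_ddl: str, example_queries: list[str]) -> str:
    """Assemble a prompt that grounds the model in the schema and known-good SQL."""
    examples = "\n\n".join(example_queries)
    return (
        "You are helping an analyst write SQL for our warehouse.\n\n"
        f"Schema:\n{schema_ddl}\n\n"
        f"Queries our team already trusts:\n{examples}\n\n"
        f"Write one SQL query that answers: {question}\n"
        "Return only SQL, no commentary."
    )

def draft_sql(question: str, schema_ddl: str, example_queries: list[str],
              call_llm: Callable[[str], str]) -> str:
    """Return a *draft* query; an analyst (the SME) still reviews it before it runs."""
    return call_llm(build_prompt(question, schema_ddl, example_queries))

if __name__ == "__main__":
    # Stand-in for whatever LLM client you actually use; swap in your provider's call.
    def fake_llm(prompt: str) -> str:
        return "SELECT country, COUNT(*) AS signups FROM users GROUP BY 1;"

    sql = draft_sql(
        question="How many signups did we get by country last month?",
        schema_ddl="CREATE TABLE users (id INT, country TEXT, created_at DATE);",
        example_queries=["SELECT COUNT(*) FROM users WHERE created_at >= DATE '2025-01-01';"],
        call_llm=fake_llm,
    )
    print(sql)  # The analyst QAs this draft before it ever touches production data.

The point of the structure is that the model only ever produces a draft; the SME review step Mo emphasizes stays in the loop by construction rather than as an afterthought.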
Mo [06:57]: "I think we might get to a point where we don't need dashboards. Mic drop."
This sparks a heated debate on the future of dashboards. While Mo suggests a day without dashboards might be feasible, Tim and Julie express skepticism, emphasizing the enduring value of visual data representations for business stakeholders.
A recurring theme in the episode is the irreplaceable role of human expertise in leveraging AI effectively. Michael Helbling eloquently captures this sentiment:
Michael Helbling [10:04]: "Your source data requires many different other pieces of metadata or parallel data... An inference engine like an LLM can actually come up with something that is not just sort of like that intern level time on site was 42 seconds type of bull crap you get from big agencies."
Val Kroll references Eric Sandersam's concept that while AI excels at problem-solving, it falters in problem definition—a critical aspect where human intuition and business context come into play.
Val Kroll [25:35]: "AI is really good at problem solving and it's getting better and better, but it's not making a lot of progress on the problem defining part of it."
Michael extends this by suggesting that as AI becomes more integrated, it will spotlight individuals who excel in understanding business levers and deriving actionable insights, potentially rendering others obsolete.
The conversation shifts to the potential obsolescence of traditional dashboards. Mo envisions a future where AI-driven tools can provide real-time insights without the need for static dashboards.
Mo [19:48]: "If you can answer that question without having a dashboard, why would you need it?"
However, Tim counters by emphasizing the importance of consistency and the human preference for structured visual data.
Tim Wilson [20:10]: "There is value in consistency. Structure."
The hosts grapple with balancing AI's capabilities with human preferences, acknowledging that while AI can generate visualizations, the tactile familiarity of dashboards remains valuable.
A significant portion of the discussion revolves around the challenges posed by AI adoption in analytics. Julie Hoyer shares frustrating experiences where AI misinterpreted business terms, leading to inaccurate analyses that required extensive human correction.
Julie Hoyer [07:10]: "It doesn't know that it's not smart enough. It doesn't know you haven't trained it how to actually... it doesn't have the context around your data."
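One common mitigation, sketched hypothetically below, is to hand the model an explicit business glossary with every question so it cannot improvise definitions for terms like "active customer." The terms and definitions are made up for illustration; the technique is simply prepending agreed-upon context to the question.

GLOSSARY = {
    "active customer": "a customer with at least one order in the trailing 90 days",
    "conversion rate": "orders divided by sessions, excluding internal traffic",
}

def question_with_context(question: str, glossary: dict[str, str]) -> str:
    """Prepend agreed-upon metric definitions so the model can't improvise them."""
    terms = "\n".join(f"- {term}: {definition}" for term, definition in glossary.items())
    return (
        "Use ONLY these definitions; if a term is not defined, ask rather than guess.\n"
        f"{terms}\n\n"
        f"Question: {question}"
    )

print(question_with_context("What was our conversion rate for active customers in Q1?", GLOSSARY))

This does not make the model "smart enough" on its own, but it moves the context Julie describes out of analysts' heads and into something the model can actually see.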
Tim Wilson voices concerns about the proliferation of AI-generated content leading to information overload and superficial analyses, hindering meaningful decision-making.
Tim Wilson [33:35]: "Most social media promotions that are like AI generated, they're the worst."
Mo introduces the concept of AI acting as a "rubber duck," where the iterative feedback loop between human and machine enhances the quality of work, but also acknowledges the potential for AI to produce subpar outputs without proper guidance.
The hosts passionately discuss the necessity of cultivating expertise in an AI-augmented landscape. They argue that while AI can expedite routine tasks, developing deep analytical skills remains paramount for extracting actionable insights.
Michael Helbling [34:28]: "Knowledge and expertise actually becomes a massive and important filter for how AI is actually going to be beneficial or not beneficial."
Val Kroll and Tim Wilson emphasize that expertise is forged through experience, including making and learning from mistakes—something AI cannot replicate. They caution against over-reliance on AI, which might lead to a workforce that's only "average" without the capacity for innovative thinking.
Tim Wilson [38:52]: "...the nicest thing is AI is nowhere close to doing the analyst job now. Give it two years and my story will change."
As the episode winds down, the hosts express a mix of optimism and caution regarding AI's role in analytics. They recognize AI's potential to revolutionize data handling and analysis but stress the indispensable value of human insight and expertise.
Michael Helbling encapsulates the episode's essence by urging analysts to leverage AI's strengths while honing their own skills to navigate the complex landscape.
Michael Helbling [34:09]: "AI is right down the middle in terms of an average... without knowledge, I only could possibly hope for average."
The episode concludes with lighthearted banter and acknowledgments, reinforcing the camaraderie among the hosts and their shared commitment to advancing the analytics community.
Key Takeaways
AI as a Tool, Not a Replacement: AI enhances analytical tasks but cannot replace the nuanced understanding and problem-defining capabilities of human analysts.
Human Expertise is Crucial: The effectiveness of AI in analytics heavily depends on the user's expertise to guide, validate, and interpret AI outputs.
Challenges in AI Adoption: Issues like context misinterpretation, information overload, and the necessity for extensive data preparation hinder seamless AI integration.
Future of Dashboards: While AI may reduce reliance on traditional dashboards, the need for structured visual data representation remains significant.
Developing Expertise: As AI takes over routine tasks, the emphasis shifts to cultivating advanced analytical skills to extract meaningful insights and drive business decisions.
Notable Quotes
Michael Helbling [10:04]: "An inference engine like an LLM can actually come up with something that is not just sort of like that intern level time on site was 42 seconds type of bull crap you get from big agencies."
Julie Hoyer [07:10]: "It doesn't have the context around your business... People aren't connecting those dots of all the steps in between."
Tim Wilson [33:35]: "Most social media promotions that are like AI generated, they're the worst."
Val Kroll [25:35]: "AI is really good at problem solving and it's getting better and better, but it's not making a lot of progress on the problem defining part of it."
For more insights and discussions on analytics, reach out to the hosts on LinkedIn, the Measure Slack chat, or via email at contact@analyticshour.io. Remember to rate and review the podcast to help others discover valuable content. Stay tuned for more episodes of The Analytics Power Hour where experts share their knowledge and experiences to empower your analytical journey.