
Judging by the number of inbound pitches we get from PR firms, AI is absolutely going to replace most of the work of the analyst sometime in the next few weeks. It's just a matter of time until some startup gets enough market traction to make that...
Tim Wilson
Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
Michael Helbling
Hi, everybody. Welcome. It's the Analytics Power Hour. This is episode 257. You know, since the Industrial Revolution, it seems like the interest in automation is always around, and in the analytics space there's always a lot of interest here as well. That entails handing off parts of the work to a machine to increase efficiency. These days, AI is the newest entrant into this discussion. How and what can we hand off to an AI when it comes to analytics? Are they going to take our jobs? Will it truly usher in an era of data democratization? I don't know. I guess we should talk about it. And to do that, let me introduce my co-hosts. Mo Kiss, how you going?
Mo Kiss
Hi. Going great. Thanks for having me, Michael.
Michael Helbling
It's awesome. And Tim Wilson. Some say you're already a computer; your results are too perfect now. How you doing, Tim?
Tim Wilson
Ouch. I'm getting to where I'm a computer when it comes to responding to podcast pitches about generative AI for analytics.
Michael Helbling
There you go. That's a part of the job.
Tim Wilson
They're flowing in fast and furious and fairly automated. We reached out to Martin because we're like, how about we go with someone who we reached out to instead of somebody who came in to us?
Michael Helbling
Yeah, there's a lot of, lot of interest in this. And I'm Michael Helbling, and we did want to bring on a guest who is at the forefront of this issue. And luckily, at Marketing Analytics Summit this year, we met Martin Broadhurst. He's a consultant on AI for marketing, the owner of Broadhurst Digital, and he serves on the editorial board of the Journal of Applied Marketing Analytics. And today he is our guest. Welcome to the show, Martin.
Martin Broadhurst
Hello, Michael. Hello, Mo. Hello, Tim.
Michael Helbling
All right, well, we've got a lot of questions, so buckle up. And in the next hour or so, hopefully we'll learn a lot about what AI can do for us in analytics.
Mo Kiss
I'm not gonna lie. I'm, like, weirdly scared of this episode, and it has been on my mind a lot.
Michael Helbling
All right, well, let's dig into that. Maybe this is it, Martin: what we need is reassurance for all of us that we'll still have jobs.
Martin Broadhurst
Oh, I don't see anybody's job going anywhere in a hurry. Not to spoil what's to come, but, yeah, I think you're okay for the time being.
Michael Helbling
Yeah. Well, maybe, Martin, to kick this whole thing off, we can talk a little about how you got into this area in the first place and some of the things you're seeing in the industry right now.
Martin Broadhurst
Yeah, so my background is in the CRM and marketing automation space. This is where I've been working with businesses for years now. And when OpenAI made the GPT-3 API available, I immediately started playing around with it and experimenting with the different tools, seeing what the capabilities were and understanding the mechanisms of how these large language models actually work, to try to push them to their limits. Over time I've built up a lot more experience with that, and this has turned into a nice general addition to my skill set, where I'm working with clients on how to automate and find use cases for AI and generative AI in their workflows and in their day-to-day tasks. And unsurprisingly, data analysis is something that comes up quite a bit. So, you know, I've been trying to test the models as much as I can to see where the limits are before they break. And this month I've just published an article in a journal about how to use large language models with spreadsheets, with a bunch of different techniques for how to think about using generative AI alongside spreadsheets and spreadsheet design.
Tim Wilson
I mean, it's a short article. Don't you just say, "here's the spreadsheet, find me insights," and then it just goes from there?
Martin Broadhurst
I mean that is the, that is the dream, isn't it? Wouldn't it be great if that actually worked like the marketing spiel.
Mo Kiss
Well, okay, the fear I have at the moment is actually not about losing my job, because I see the amazing efficiencies I already have in my own job. What is terrifying me at the moment is the "we want to do AI." We had a conversation the other day: "we want to do gen AI on this thing," and I get really, let's just say anxious, let's call a spade a spade. We're kind of swapping the way we would normally solve a problem, from "what is the problem? What are all the ways to solve it? What is potentially the simplest, most explainable way to get there?" to "we're going to solve this problem with X. How do we do more of X?" And that's the bit that's stressing me out.
Martin Broadhurst
Yeah, prescribing AI before you've even dug into the potential solutions. And that's actually one of the things that clients will sometimes say to me: they'll just come to me and say, "we want to use AI." Why would you start with that as the solution before you've looked at the implementation? I think this is a really common problem. I would always start with looking at those tasks you do that require certain amounts of batch work, where there's just a repetitive nature to the tasks and you can automate that away. But yeah, really understanding the nature of the problem is probably the starting point before you even get into what the solve is.
Tim Wilson
But generative AI seems tangible because it's so easy for somebody to play with it. I would say there's a higher bar for someone to just dabble with SQL or Python or R from scratch. So it's broadened the audience of people who can get a taste of what the technology is. And to me, the massive miss is that just because you get a sense of what it does from a back and forth with ChatGPT, it kind of misses what analysis is. It feels like there's an oversimplification of the steps: saying, oh, AI is just going to do the drudgery of the tasks. And you say, well, the drudgery of my analysis work is doing this data cleanup, and I've played with ChatGPT, so what if I just told it to do that? But that kind of misses what the human component is even in the drudgery of the work, much less the reality of identifying a problem where you're trying to use data to solve it. It feels like this big bucket of a tool, and somehow people are like, oh well, the tool must be smart enough to get to how to fix it. Even the spreadsheets thing you mentioned feels nuanced, because you have to help it understand what a spreadsheet is, which maybe it kind of knows, and then what the data within it represents.
Martin Broadhurst
I think what you're driving at there is that people don't understand the tool, the nature of the tool, and the mechanism behind the tool. I think it's really important with generative AI that people understand things like next-token prediction. What does that mean? What is it doing under the hood? And when you've played with the models a bit, you understand some of the settings, things like temperature, for instance. For anyone that isn't aware of the temperature setting in a large language model: it's a setting between 0 and 2, and the higher it is, the more chaotic the answers are. The basic principle of temperature is that it's like in physics: the higher the energy in a system, the more chaotic it is, and the lower the temperature, the more controlled it is. If you play around with that in the API, for instance, you can get really consistent answers. But with something like ChatGPT, you don't have access to that particular setting. So it's generative, right? It's not descriptive or calculating. It's coming up with a range of answers, and there are subtleties in the way that you write, the way that you input, the way that the data might be structured, whatever it may be. If people think it's like the computer software they've always used in the past, where you press a button and it always does the job consistently the same way every time, they'll be sadly mistaken, because that's not what's going on under the hood.
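To make the temperature idea concrete, here is a minimal sketch of the standard temperature-scaled softmax used when sampling a next token. This is the textbook mechanism, not OpenAI's actual implementation, and the logit values are invented for illustration:

```python
# Sketch: temperature-scaled softmax over next-token scores.
# Higher temperature flattens the distribution (more "chaotic" sampling);
# lower temperature concentrates probability on the top-scoring token.
# Real APIs special-case temperature 0 as greedy decoding to avoid
# dividing by zero.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    peak = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # invented scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform: more variety
print(round(cold[0], 3), round(hot[0], 3))
```

Run it with a range of temperatures and the top token's probability shrinks as temperature rises, which is exactly the "energy in the system" behavior Martin describes.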
Michael Helbling
It's time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.
Tim Wilson
Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.
Michael Helbling
They sure have. They've got an easy-to-use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking, and a customer data platform.
Tim Wilson
We love running Piwik PRO's free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.
Michael Helbling
Yeah, head over to Piwik PRO and check them out for yourself. You can get started with their free plan. That's piwik.pro. And now let's get back to the show.
Mo Kiss
Oh, I've just had this. I don't know if this analogy makes sense, Tim, but hear me out. I'm constantly doing this thing in my head where I'm trying to understand stakeholders' perspectives, and understanding an MMM and how it's different from their worldview of attribution and what attribution gave, where you see a table and it goes: this channel, this many signups; this channel, this many. The concept of MMM results is quite difficult. We start talking about diminishing return curves, we start talking about return on ad spend at different spend levels, and there's just all this complexity there. And I feel like a similar analogy could be made here, right? You expect input, output, but there's actually so much nuance. I don't know if I'm grasping at straws here, but in my mind I was like, this would be the problem of people trying to do data analysis using gen AI without understanding it well enough. That would get you into the danger territory, right?
Martin Broadhurst
I think that works at two levels. There's not understanding the gen AI mechanism well enough, so not really understanding the strengths and weaknesses of the tools, which is going to be a hindrance in and of itself. But then there's also this: people often say that if you use a large language model and you are an expert, you can get expert-level outputs from it. The better the quality of your input, the better the quality of the outputs. But if I, as someone who isn't a seasoned data analyst, throw in a spreadsheet and say "give me some insight into this," I'm asking bad questions and I'm getting very average outputs. So it works on both ends of the spectrum. If you're not giving good context and good prompts, you're going to get bad outputs. But also, if you don't understand the limits of the technology itself, you might not know that it can't actually do the thing you're asking it to do.
Tim Wilson
Cassie Kozyrkov last month wrote a post that was very timely as we were prepping for this episode. It was "The Strawberry Paradox: when perfect answers aren't enough." She sat with a Nobel Prize winner who she worked with while she was working on her PhD, and they just riffed in conversation for a while. What I thought she articulated very well was: imagine the AI that can give the perfect answer, that is perfectly accurate and correct. Just as you said, Martin, if you don't ask it a good question, it's going to be like, "what's the answer to life, the universe and everything?" It's 42, right? It's not a good question. And that's this other piece that has kind of bothered me: the people who are looking at AI all lived in a world without generative AI. So we're bringing our human experience, having worked with data, having dealt with the business problems, having grappled with trying to explain multi-touch attribution versus MMM, and that's the lens we're looking through and saying, oh, here's the future, it's going to take everything. Well, if you fast forward, that's discounting the expert level of the input. So even if that worked for a period of time, it would start to go away, because all of a sudden you'd have people trying to skip a bunch of steps of the human experience to get to the AI, hoping that the AI can close that gap. We're counting on the tool to close a gap that it doesn't seem like the tool is ever going to be equipped to fully close. I don't know if that made any sense. Mo, I really liked your analogy. We can just cut this whole section out. I mean, where do you go with a spreadsheet?
Where are you using it? What's the start and end point of generative AI when given a spreadsheet?
Martin Broadhurst
So I think some context has to be given here, in that these models are changing rapidly. It was only a few weeks ago that we had ChatGPT o1-preview released, which is supposed to be much better at reasoning, although that's its own conversation in and of itself. The models' capabilities are changing all of the time. What I propose is that there are, as it stands, four ways that you can really use ChatGPT or any large language model with a spreadsheet. One, my preferred route, is to just use it as a coach or a mentor. It's that very clever assistant that you're not actually giving access to the data. You get stuck on something: maybe you need a bit of code written that you can stick in a macro, or you've forgotten the function to do a certain thing, or you've got a really long formula that you need optimizing and reducing. It will do all of that for you, and the actual spreadsheet and the language model don't interact. This is where AI is very strong at the moment; it can be quite good for that. There are over 500 functions in Excel, and trying to keep all of those in your head is very difficult, whereas you've got that very smart assistant next to you: "oh yeah, I know exactly what that is." Then you've got file ingestion. This is where you give the spreadsheet to the model, so you can upload the CSV, the Excel file, whatever it may be, to ChatGPT, and it can use Python in its code environment to execute tasks and functions on the data. The outputs from this can be very good; it can do some incredibly powerful things. But there comes a big flashing warning sign: the outputs can also be complete hallucinations. I have got lots of examples. In fact, nearly every single time I do this, the data that it presents back has some errors in it that, if you're not paying attention, you would not spot.
So, case in point, from the Marketing Analytics Summit, I showed an example where we had a bar chart showing cohorts grouped by age, and there were two bars, satisfied or dissatisfied, and it was just about which one was higher, a blue and an orange bar. It seems that the data manipulation and the charts it creates are accurate, but then its written description of the charts is wrong. Consistently it would say, "you can see that for the 35 to 50 year old cohort, satisfied is higher than dissatisfied," and it's clearly the other way around. And this is really consistent; this comes up time and again. So you wouldn't want to rely on it for uncovering the insights.
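One practical guard against the failure mode Martin describes is to recompute the comparison yourself rather than trusting the model's prose about its own chart. A sketch with invented cohort data:

```python
# Sketch: independently verify which bar is actually higher per cohort,
# rather than trusting an LLM's written description of the chart it drew.
# The survey rows below are invented for illustration.
from collections import defaultdict

rows = [
    ("18-34", "satisfied"), ("18-34", "satisfied"), ("18-34", "dissatisfied"),
    ("35-50", "satisfied"), ("35-50", "dissatisfied"), ("35-50", "dissatisfied"),
]

counts = defaultdict(lambda: {"satisfied": 0, "dissatisfied": 0})
for cohort, label in rows:
    counts[cohort][label] += 1

# For each cohort, state which bar is higher based on the actual counts.
higher = {
    cohort: max(c, key=c.get)   # the label with the larger count
    for cohort, c in counts.items()
}
print(higher)
```

A few lines of deterministic code like this is cheap insurance when the chart itself is right but the narration might not be.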
Mo Kiss
Do you know why? I know that me asking why is a stupid question right now; we don't get to look inside the black box. But that's a really strange error. Really strange that it would be able to interpret it correctly in the graph, but then... is it something to do with converting it to the graph and then the graph to the descriptive text? Is that the step too far? How do you know? You don't know where the boundaries are.
Martin Broadhurst
So the graph is separate; the model doesn't see the graph. The model runs the Python and turns the Python script into something that sits in the HTML in the browser window, but the actual model doesn't see the output. Because the model has turned everything into tokens. Where you've got a graph, it's used Python, it's got some numbers attributed to the different cohorts and to satisfied or dissatisfied, but they're just token IDs for the model. So the system doesn't see the raw number; it sees the tokenized version of the number and then has to, in its model, understand the relationship between these. This is my best guess, right? I'm making some assumptions here. What I would like to see, particularly within the chatbot version of these tools, is for it to create the graph, then turn the graph into an image and feed that back in. Because the funny thing is, if I take a screenshot of that graph and feed it back into ChatGPT and say, "tell me what's going on with this data," it consistently does a very good job of that, because it's got the vision capabilities.
Mo Kiss
That is nuts.
Michael Helbling
Sort of like the second order thinking is where it starts to fall apart.
Martin Broadhurst
But.
Tim Wilson
But what? So that was number two, I think, of four: it ingests the file and outputs a result. And maybe that's going to get better with the added reasoning as more models come along. Is it going to be easy for somebody to just wave their hands and say, oh well, the second one, it'll ingest it and output results, and the results will be very, very reliable? It can count the number of R's in "strawberry" and it will always give the right answer. So is that an easy one to check off and say, yeah, that'll get fixed?
Martin Broadhurst
I would expect so, but we don't know at the moment. With o1, you can't do file uploads, you can't upload images; it's just text in, text out. You would expect that to improve. The next method is using the assistants within the spreadsheet software itself, Microsoft Copilot by way of example. This is a really difficult one to judge, because the version that I wrote the paper on was the previous version, and then literally, I think the day the publisher signed it off, they announced Wave 2 of Copilot, which has new capabilities. So the new version, which I haven't yet tested, is supposed to be able to actually write and execute Python on spreadsheets and do more. It can interact with more of the tools and the functions, because the old version would often say that it had done a task when it hadn't, or it would tell you that it couldn't do a task because there was too much data, whereas I think those limitations have been lessened somewhat in Wave 2. If we think about what the ideal is, I think this is the ideal: you want the chatbot in the environment where you're working with that data, and it's able to actually execute, almost agentically, different functions, tools, and tasks directly within the file itself.
Mo Kiss
So, okay, the layperson's version of this: rather than going to something separate and having to ingest the data, yada yada yada, it's built in. And the difference is not only can it work as a "helper," quote unquote, or whatever the smart marketing person called it, it can also actually execute functions on your behalf. So it can do the doing, not just give you steps on how to do the doing.
Martin Broadhurst
Yeah, and the first version of Copilot in Excel was supposed to be able to do some of the doing, but it did it wrong more often than you would ever want. It felt like it was released a little bit too early, which, you know, fair enough, they're iterating on these things really quickly. But yes, it should. And I think the more important thing is that it can write and execute Python within the environment, which just adds a lot more capability to Excel.
Mo Kiss
I'm really curious from a product perspective, because what you're talking about here basically implies that unless you're truly embedding this technology into the product roadmap in a really meaningful way, any kind of tech company will probably fall behind, which I hadn't really thought about. Yeah, okay, I'm having lots of light bulbs. Maybe I should do more recordings in the evening.
Tim Wilson
But so I'm trying to figure out the limits of that. And this is also realizing, again from the slew of pitches we're getting for guests on the show, the term "gen BI": oh, gen AI is going to bring gen BI. I'm trying to figure out where that goes among these categories. I'm a user of Excel, which means I'm a human being on the planet, and I've got a tool that gives me a little bit more of a natural language interface at kind of a micro level, to go bite-sized along the leap. Where that gets to, and I'm not sure if this is included, is: oh well, you're just going to have a natural language interface to ask "how much revenue did we get by channel last month?" That feels like more dangerous territory than saying, hey, can you put a filter in so it flags anything that's within the US as US and everything else as rest of world, which is a more specific instruction. Is that a spectrum, or is there a hard line where you're crossing from a copilot to my hoped-for, wished-for natural language interface to the data that is reliable?
Martin Broadhurst
I think that's where Microsoft would like Copilot for Power BI to get to. I don't have any experience with that, particularly with this new wave of updates that are coming or have recently been announced. What I can say is that Power BI power users who were really interested in Copilot stress-tested it at the start of the year, and one description said it's not ready for CEO-level insights and presentation of data at the moment. It's quite simple. If there are several steps of manipulation of the data that you need to do in order to get the insight you're after, it falls down. It doesn't understand, at the moment, relationships between different entities in your data set.
Mo Kiss
So how are you seeing companies use this, or analysts use it in their workflow? I know we've talked a little bit about the spreadsheets, but if you take the CEO example: amazing boss lady comes to you and says sales are down, what's happening? And you go through that analyst workflow of solving the problem. Do you have any intuition on how people are really leveraging this in their day to day?
Martin Broadhurst
The file ingestion. If you can get your data sources into ChatGPT, you can, with the right prompting, get really good insights really quickly. It can bring together multiple data sets, it can merge them, and if you are very good at describing your data and what you're after, it can give you those graphs and those charts. How much are people doing that day to day? I don't see that a great deal when I speak to people. The most common experience I have is people going, "it didn't quite do it for me; it told me something that was wrong." So there's an element of doubt that is seeded in people's minds. And this is the thing: people are so used to using a spreadsheet, a calculator, something that takes numbers in and gives numbers out, that makes sense and is always true. When you have a tool that you use ten times and two times you go "that's not right," it plants a seed of doubt in your mind. So I think until the hallucination issue is cracked, we're not quite going to get there. On the data analysis side, I would say you can get surface-level insights, or you can get visualizations created very quickly. You can do data manipulation very quickly. If you're someone that doesn't know R and doesn't already know how to manipulate the data, it gives you those additional skill sets, or access to those kinds of skills, in a limited capacity. But how much people are using this in the day to day, I would dare say it's more as an assistant to help them shortcut some code writing rather than really relying on it for insight.
Tim Wilson
So what is the fourth way? I feel like I want to dive back in, and I know that enough of our listeners will be like, "he said four, he said four." So yeah, I want to break that tension.
Michael Helbling
This show, much like an AI, gets lost along the way.
Martin Broadhurst
There is a fourth, and the fourth is actually less useful for analysts in some respects, but it's adding an entirely new function to the spreadsheet itself. A good example of this is Anthropic's Claude model, which has a Claude for Sheets add-on. It's a Google Sheets add-on, and it creates a new function: =CLAUDE(). You put your prompt inside the brackets, and the return of that prompt is what populates the cell. That means you can assemble prompts using data input from other cells. Just like you would with any other formula, you can build a formula, send that to Claude, and get Claude's response straight back into the spreadsheet.
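The pattern Martin describes, a formula that assembles a prompt from other cells and returns the model's reply into a cell, can be sketched in Python. The `call_model` function here is a stand-in stub, not the real Claude API, and the cell names and template are invented:

```python
# Sketch of the =CLAUDE()-style pattern: build a prompt from cell values,
# send it to a model, put the response back in a cell. `call_model` is a
# placeholder; a real add-on would call the Claude API here.

def call_model(prompt: str) -> str:
    # Stub: echo the prompt so the data flow is visible without an API key.
    return f"[model reply to: {prompt}]"

def claude_formula(template: str, **cells: str) -> str:
    """Mimic a spreadsheet formula that references other cells by name."""
    prompt = template.format(**cells)
    return call_model(prompt)

# One "row" of the sheet; the result would populate, say, cell C2.
c2 = claude_formula(
    "Write a one-line summary: customer {A2} {B2}.",
    A2="ACME Corp",
    B2="reduced churn by 12%",
)
print(c2)
```

The design point is that the formula is just string assembly plus one model call per cell, which is why it composes with ordinary spreadsheet references.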
Tim Wilson
Okay, so one thing, and maybe it falls under the second kind, the file ingestion: it seems like a lot of "using generative AI for analytics" winds up really being generative AI for analytics engineering, or for data engineering, or for data observability. There seems to be a whole class of tools that are either pipeline-building assistants or data monitoring, and to me that's not the analysis, that's the upstream. My gut is that 60 or 70% of things that get labeled "generative AI for the analyst" are really generative AI for the data engineer or the analytics engineer. Would you agree with that? Are you seeing things labeled as for the analyst that aren't for analysis, and is that causing some confusion in the market?
Martin Broadhurst
Yeah, I think that's probably true. And I'm just yet to see really strong use cases, and I guess you guys are more at the coal face of this than I am. I'm yet to see really strong examples where people have said, "we used generative AI for this level of insight and analysis, and look how I did it: ta-da, we sprinkled in some data and got this amazing output. Isn't it great? Aren't your jobs all doomed?" I'm not seeing that.
Mo Kiss
See, I find there's two groups of people. There are the people that are doing very cool shit and doing it pretty quietly, not telling people. And that kind of tends to be my way. I mean, I'm not saying that I'm awesome, but when I've taken a shortcut, I'm not going to tell people; let them think I did it all. Not that I, like, transcribed a voice note and then used it to write up my interview feedback and pasted it in in a really efficient manner, and everyone thought that my interview feedback was spot on. But then there's the other group that are like, "oh my God, we did AI! Look what we did over here!" It just seems to be really polar opposite. I don't feel like we're at that maturity of educating people about how to do it well and the pros and cons. It seems to be, I don't know, very polarizing at the moment. But maybe that's just my lived experience.
Tim Wilson
Having gotten buttonholed by somebody who was definitely the latter: it was a really long and exhausting conversation. And it really wasn't a conversation, it was just him going on. What was interesting is that with the probing I did do with him, all of this really cool stuff was around rapidly pulling in data sources, being able to use webhooks and generate code to pull data sources in, and then, with some iterating on the model, doing some kind of mining of these multiple data sources to generate something. Which was all very interesting, except for two things. This fella clearly talks about it to anybody who will listen and does not stop, and he started making these bold claims that any company could go from zero to ten million US with one person, with just AI. This is amazing. But then as I was probing, one, he admitted that for all the stuff he did, he actually had to talk to the subject matter experts to even figure out what it should be doing, which seemed like very much a human task. The thing that we didn't get into: he doesn't not have a technical background, but he went on at great length about how he didn't have to write any code. He would just have it generate the C# and then he'd take it. And that felt like another fragile component. The playing around I've done with code generation is that it'll generate something, but it may not be clean or well written or something you want to live on for the ongoing production of any sort of ongoing deliverable. I talk to my son, who's a software engineer, and get him started on inheriting the downstream from a crappy software engineer, sometimes a faceless person in the past, and I'm like, oh my God:
the ability for a machine, probabilistic in nature with a temperature setting, to generate code that's then going to live on, that some poor analyst or some future generative AI needs to modify. How is that going to work? So if you're going to write something that needs to have staying power, you can use the code assistant, but you probably need to know the code and do some real iteration with it, as opposed to just saying "I don't need code." I mean, I've had multiple people saying no one needs to learn to code, it'll just generate it for you. And I'm like, well, that's something somebody who's never learned to code says.
Mo Kiss
Can I challenge you a bit here? One of the things that is a little bit exciting: when anyone asks me, I will say I'm not that technical. I can do a bit of programming, but I'm way more shit than I was five years ago. And I have people on my team, for example, that would share that; they would say programming is not their strength. They've definitely made endeavors to learn and will try their best, but they're never going to be a gun programmer. They're not the ones QAing 50,000 PRs from other engineers and data scientists every day. One of the things that I find so challenging in data is to find people that are really good at figuring out how to answer a question and solve a problem. And the idea that you could have someone who might not be strong in a particular skill like programming, but has this real superpower to understand and answer a business question, and you can make them better at their job by giving them this free buddy or coach or technical mentor? I find that exciting. That is cool.
Tim Wilson
So I think you glossed over the thing that just gets glossed over all the time. Those people, they tried. It's not saying you have to be an elite-level programmer, and I think Cassie's article that I mentioned, the Strawberries paradox, is very much on that. It doesn't mean you have to be a hot shit programmer. But discounting the effort of learning SQL and learning VLOOKUP and learning what a join is, what a left join is. If you completely skip that and say, oh, but somebody just has a great sense of answering business questions: one, I think that's actually often discounting what they've learned, and part of their ability to answer the business questions comes from struggling through some of the technical aspects of it. Learning that stuff helps you understand how data works, right? If you do a thought experiment where somebody's never had the concept of a join introduced to them and they just say, combine data sets, you wind up with kind of the very casual business user, where you're having a very circular discussion because they don't understand that you need a key to join two data sets. So I think we're really good at skipping that point, of saying, no, no, no, this is going to be great. It's like, well, no, the people have to learn that, even if it's not their interest or their passion. They're learning very, very valuable things that go into their ongoing cognition by struggling through that technical stuff. That is part of who we are. And we start saying, oh, we can skip that, you don't need to do it at all.
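As an editorial aside, the point about needing a key to join two data sets is easy to make concrete. This is a minimal sketch with invented tables, using SQLite so it runs anywhere:

```python
import sqlite3

# Two tiny tables; the data is invented purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INTEGER, amount INTEGER)")
con.execute("CREATE TABLE customers (customer_id INTEGER, region TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 50), (2, 20), (3, 30)])
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "East"), (2, "West")])

# The join only works because both tables share a key (customer_id).
# Customer 3 has no match, so a LEFT JOIN keeps the row with region = NULL.
rows = con.execute("""
    SELECT o.customer_id, o.amount, c.region
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    ORDER BY o.customer_id
""").fetchall()
print(rows)  # [(1, 50, 'East'), (2, 20, 'West'), (3, 30, None)]
```

Without the shared key, "combine data sets" is an unanswerable request, which is exactly the circular conversation described above.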
Mo Kiss
Who's saying that? Like, I know that there is the odd person, but if anyone came to me and said, I want to be a data analyst, and guess what, AI's out there, I don't need to learn any programming, I'd tell them, like, that's a.
Tim Wilson
A thousand percent what the fucking analytics translators were saying. I definitely dealt with that. No, I have had people tell me. I had somebody who was a long-time Google person adamantly tell me no one ever needs to learn code again. He's like, no, you don't need to ever do code. And I'm like, I can't believe it. So, absolutely. And I will say, going back pre-AI, there were people coming in who were enamored with the idea of analysis and the idea of doing stuff with data, but said, ooh, I don't want to learn anything technical, and with this analytics translator role, I can just, you know. And this was way before gen AI. I'm not actually denigrating the analytics translator role, only if somebody thinks that means I don't have to have any technical chops.
Mo Kiss
But I think, yeah, because like, for example, I know analytics translator is a very contentious thing, but when I think of it, I think of something very different to what you think of. And like, and this is the.
Tim Wilson
I'm looking at the people I know who have jumped on that role. Yeah, sorry.
Mo Kiss
There is a spectrum, though. At one end there is, like, I think I can do no programming and AI is going to do it all for me. And then there's the middle people that I kind of talked about, who might be not great at it, a bit rusty, can use it. And then there's the people that are like, why would I ever need AI? I'm such a great programmer. But it's all.
Tim Wilson
But I think people hearing that description you gave, many people would jump to saying, Mo thinks that if somebody looks at programming and is not interested in it, they can therefore completely ignore it.
Mo Kiss
No, I just.
Tim Wilson
But I think that that's how that can be heard.
Michael Helbling
Hold on, hold on, wait.
Martin Broadhurst
No.
Mo Kiss
Martin has talked a lot about the fact that there are so many mistakes made. How do you recognize a mistake if you don't know what the wrong or right output is? And I'm using wrong and right in a very binary sense.
Michael Helbling
But like the people, the people who are super excited about what this is going to do have probably never done it. And that's what we're probably kind of circling around, right? Now, okay, let's put up.
Tim Wilson
No, I want to say more about that.
Michael Helbling
Circle back, let's bring it back. Because the place I want to go next, Martin, is this idea, and this goes into a couple of things. One is, Mo, to your point, people who are using AI for various things aren't necessarily talking about it. And sometimes that's because there's not a scalable process for the way that I might use an AI. I use it kind of in that first use case, this sort of assistant, coach, mentor thing. I'll just pop open my little ChatGPT and be like, hey, I'm thinking about this, what are some ideas you've got, and blah, blah. I've never had ChatGPT look at data for me, ever. I've had Claude look at a couple things, but I've never used them to do any kind of analysis of data. But I want to explore this idea of the agentic process in analytics: let's step through some analysis scenarios and maybe look and see where we could leverage it. And, Martin, where do you see the best places for analysts to use AI in their day-to-day jobs? We can give you some scenarios maybe to help with that.
Martin Broadhurst
Tim, you look like you're about to.
Tim Wilson
No, I think I'm still. I'm waiting for my generative AI to tell me that my blood pressure's come down enough from my last rant.
Martin Broadhurst
Yeah, so give me the scenarios.
Michael Helbling
Okay, perfect. I'll start with one. So, one that I think about all the time: a lot of what we do in analytics is really thinking through basically an experiment of some kind, or some kind of analysis around this versus this, like, we're going to try this. So one of the crucial skills for an analyst, I would say, is being able to design a good experiment, or think through the design of a good experiment. So let's say somebody comes to you on your team and is like, hey, we want to run this campaign, we want to see if this is a better way to do this. Could you use AI to start to work through the answer to that question?
Martin Broadhurst
I think the design of experiments is really interesting, particularly with the new o1 model with the reasoning capabilities. The chain-of-thought capabilities mean that it "thinks," he says in air quotes, through the process, and it can be a very good constructive critic. So giving you feedback, giving you alternatives, to the point that we made earlier about being generative. And it can come up with lots of things very quickly; it can generate huge amounts of content, some good, some bad. So if you want to just throw in an experiment design or a hypothesis or whatever it may be, and ask it to give you feedback, and then just keep going, more feedback, more feedback, it will generate lots of it. Some of it you would disregard, but hidden amongst that there will be some gems. Now, all of this is talking about the current state of these models. I'm quite interested in o1 and where this goes with the reasoning capabilities. I think it's not going to be long before you'll just be able to put in very simple prompts saying what it is that you're looking to achieve, and it will spit out very high quality experiments that you can execute.
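The keep-asking-for-more-feedback loop Martin describes can be sketched in a few lines. `ask_model` here is a stand-in stub, not a real API; in practice it would be whatever chat-completion client you use:

```python
def ask_model(prompt: str) -> str:
    # Placeholder for a real model call; returns canned feedback so the
    # sketch is runnable without any external service.
    return "Consider a holdout group, a pre-registered metric, and a power calculation."

design = "A/B test: new email subject line vs. current one, measured on open rate for one week."

# Each round feeds the prior feedback back in and asks for more.
feedback_rounds = []
for _ in range(3):
    prompt = (
        "Critique this experiment design and suggest improvements.\n"
        f"Design: {design}\n"
        f"Feedback so far: {feedback_rounds}"
    )
    feedback_rounds.append(ask_model(prompt))

# A human still has to sift the accumulated feedback for the gems.
print(len(feedback_rounds))  # 3
```

The design choice worth noting is that the loop accumulates context: each critique request includes the feedback already generated, which is what pushes the model past repeating its first answer.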
Michael Helbling
Yeah, I asked the o1 model how many golf balls will fit inside of a 747. It did a pretty good breakdown, honestly. So those kinds of reasoning problems, I think it does a good job with. I think Mo brought up something else, about the value in being able to take on and understand and answer a business question effectively. How could an analyst leverage AI for that kind of use case, Mo?
Martin Broadhurst
Can you unpack that for me slightly?
Mo Kiss
You're saying if somebody comes with a problem, coming back and saying these are scenarios for well like actually this happened the other day.
Mo Kiss
No shot. I said I would not talk about this. I said I would not talk about this. And here I am talking about it. There was a CMO-type question, and someone put it into ChatGPT to say, what are the possible hypotheses that might be an answer to this question? I was a little bit surprised at how good the answer was. And the reason the answer was very good, and I've found this with my own experimentation, is that a lot of the responses I get come down to structuring things very logically. So it'll be like reason 1, reason 2, reason 3, reason 4, which, as someone who ends up writing things into a lot of documents or write-ups, becomes a very easy structure to work with in terms of writing it up. And so I was like, you know what, we are going to just lean hard into this. We are going to tackle this, Tim, you'll love this, almost as, like, analysis of competing hypotheses. And be like, okay, these are the nine hypotheses that ChatGPT gave us. Let's go through, let's try and knock them out. Let's see what we have evidence against, what we can say is possibly responsible, partially responsible. What are the data that we have for each one? And that's actually how we ended up structuring our analysis, based off the hypotheses generated from ChatGPT. There you have it, folks. I said I wouldn't say it, and I did.
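The workflow described here, model-generated hypotheses with a human recording evidence against each, can be sketched as a simple structure. The hypotheses, evidence, and statuses below are invented for illustration:

```python
# Each entry is one model-generated hypothesis; the evidence and status
# fields are filled in by the human reviewer, not by the model.
hypotheses = [
    {"text": "Seasonality drove the dip",
     "evidence_against": [],
     "status": "partially responsible"},
    {"text": "A tracking outage lost events",
     "evidence_against": ["no gaps in the raw hit logs"],
     "status": "ruled out"},
    {"text": "A competitor promotion pulled traffic",
     "evidence_against": [],
     "status": "needs new data"},
]

# The write-up only carries forward what survived the knock-out pass.
still_open = [h["text"] for h in hypotheses if h["status"] != "ruled out"]
print(still_open)
```

Keeping "needs new data" as an explicit status captures the point made shortly after this exchange: some hypotheses can only be validated by collecting data the model has never seen.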
Tim Wilson
But, one. That's back to that number one in Martin's list of four, the coaching or mentoring, to me. And I feel like, I'm now Jim Sterne.
Mo Kiss
Is it coaching and mentoring? I don't know.
Michael Helbling
Or assisting. Whatever I have.
Tim Wilson
I mean, that is what I mean. That is what I mean. I've now multiple times seen Jim Sterne do various iterations where he's saying, ask it for ideas. And Michael, that's what you were saying, I've used it for that.
Mo Kiss
Sorry, Tim. I apologize profusely for interrupting, but I can't, like, stop my brain from thinking right now. I think of coaching and mentoring as helping you make something you've already got better, or get there faster. So, for example, it might be using a different function. It might be QAing the work or, you know, making the language more concise. Whereas I think ideation is almost its own separate category, which is distinct from coaching or mentoring. I don't know. Maybe I'm being too.
Tim Wilson
I don't know. Martin, how would you define it?
Martin Broadhurst
Yeah, yeah, I. I thought of when I. When I thought about coaching and mentoring, helping you to ask better questions or think about things in different ways was. Was part of that. So I did see that kind of ideation of things being part of. Of that kind of umbrella.
Tim Wilson
But would you also, I mean, Mo, with your, okay, these are nine, maybe two you could say are kind of garbage. Wouldn't iterating do that too? There were some that. But then also, how would I actually validate that? Right. Because there are multiple ways to validate. I mean, you could take it farther and say, what data would I look at to get a causal relationship? Truly, if my life depended on validating hypothesis number three in your list, what would you recommend that I do, assuming infinite resources? All of which, to me, goes into a good iteration. But it's interesting. You asked it for, what are some hypotheses? Not, what are the insights, what are the answers? Right. You had it be that upstream piece, and then, okay, we're going to put a human in the loop who's going to say which of these are worth pursuing and how. And hopefully someone was looking at it saying, for some of these, we just factually know there is no data already in existence that can validate that hypothesis. The only way I could do that is to generate some new data that the generative AI doesn't have access to, because I need to run an experiment, or I need to gather some data from my users or somewhere else. So it's in the process, but it's not. I still feel like it's treated as, oh, it's this close. As it gets better, it'll generate those nine things and they'll be CMO-ready. And it's like, no, it's going to generate those things, and then we need humans at work in the process. And hopefully I'm not coming across as anti generative AI. I just think. Oh, I'm just kidding. We're going to run the transcript of this through Claude and say, who's the asshole?
Mo Kiss
Yeah, it's actually really interesting. Martin started this whole episode talking about, you know, the terrifying scenario where we wouldn't have jobs. And it's funny, I am also using ChatGPT a lot at the moment for testing different ways to explain a technical concept to stakeholders. So the other day I needed to describe probabilistic and deterministic. Probabilistic, you can tell it's late at night. Probabilistic and deterministic. And I was trying to test it out. I actually had a few different models going against each other to figure out what was the best option. But it still comes back to that human component of me looking, knowing my stakeholders well enough, having a good understanding of what concepts they're familiar with or what terminology has stuck with them, so that it will land, and then sometimes using different bits from different outputs to stitch it together. And yeah, I don't know, I'm sure maybe when my kids grow up, maybe that step won't exist. But for now, like, I definitely feel like I still, still, you keep, you.
Tim Wilson
Keep discounting that like, you keep discounting that like, oh well, maybe it'll get to where it's better.
Mo Kiss
I'm not the future reader.
Tim Wilson
Well, but I mean, this goes back. It's not new. Ten years ago they were saying, I don't need to learn R, I don't need to learn Python, I don't need to learn SQL, because the computer will just do it for me. And it's like the half-life of getting partway there: it does something better, and then we have this world of optimism that says, oh well, this other part that it can't do now, I'm sure it will get there. If I just wait, it will get there. I feel like there is a tendency to say, I don't need to become better at communicating, because I'm sure within six months Canva will introduce the next feature where you just say, here's the data set, generate the slide deck. And then you spiral into, I'm going to lose my job. As opposed to saying, no, knowing who the people are that you're working with, that matters. Which of these analogies would work better? What's the fine-tuned right level? And it's not that it's not going to continue to get better. I mean, I'm terrible as a futurist. But saying, oh well, maybe it'll just do this for me within a few years, I feel like is, yeah.
Mo Kiss
Okay, number one, I don't think I'm discounting that stuff, I just maybe don't get quite as passionate about it. So, given Tim's rant, though, about how we all still need to learn programming skills, we all still need impeccable communication skills, computers won't save the day.
Tim Wilson
I did not say that.
Mo Kiss
Okay, now it's just fun. Come on, come on.
Tim Wilson
No, the thing is you're putting it is.
Michael Helbling
I mean, paraphrase what Tim said, you can't.
Tim Wilson
What I'm saying is, there's value in this. And then you put a label on it, that Tim says you need to be perfect at this. That is annoying, right?
Mo Kiss
Okay, sorry.
Tim Wilson
I mean that's not. You're painting it.
Mo Kiss
Yeah, I take it back.
Michael Helbling
Hey, I do this, folks. I don't think so.
Mo Kiss
Okay, Tim and I are going to be banned from being on a show together for a while. But what I was going to say, Martin, is, with the companies that you're working with and the use cases that you're seeing: if you are starting out in the data space, you have finite time, you do have to choose where to spend your energy and your learning. I guess you probably have quite a good intuition of the direction of the industry and where it's going. Where would you spend your energy? Because we always get this. We're like, what is the programming language I should learn? How much time should I spend on learning data visualization, or on communicating results, or writing up analyses? There are so many things to learn. Where would you focus, knowing, I suppose, the pros and cons of AI? Where would you spend your energy if you were new in the data space?
Martin Broadhurst
So full disclosure, I am not an analyst. So giving career advice to future analysts is, you know, I'm not the most qualified there, but I think the fundamentals.
Tim Wilson
Maybe, maybe you're the most qualified.
Martin Broadhurst
So, yeah, as I mentioned earlier, being an expert in the field helps you get more quality outputs from the AI, you know, the questions to ask to steer it. I also think, from an AI perspective and from the gen AI space, there's something really fundamental: play with the tools. Play with it, poke it, prod it, pull it to bits, and really look at the outputs that you're getting to understand where the limits are within these tools. It's very easy to just take it at face value. It's an AI, surely it's a computer, it's told me the answer. And as I've mentioned earlier, this is clearly not true. We can fall asleep at the wheel if we just take the outputs at face value. So, from a data perspective, I would pursue the career, or pursue the skill set, completely ignoring that AI exists. And I would treat learning AI as a separate endeavor in and of itself, to understand what it is and, more importantly, what it isn't at this moment in time.
Michael Helbling
Yeah, that's good. All right, we've got to wrap up. This is interesting. I didn't think we had any passion for this topic at all, but apparently we have quite a bit. So this is awesome. Well, one thing we like to do is go around the horn and share our last call, something that might be of interest to our audience. Martin, you're our guest. Do you have a last call you'd like to share?
Martin Broadhurst
Yeah, so there's Machine Learning Street Talk, a podcast about machine learning. They recently did an episode on: is o1-preview reasoning? So the new OpenAI model, is it actually reasoning? It's about an hour-and-a-half discussion, a deep dive, quite philosophical in nature, about what is reasoning, what is knowledge, are the things that these language models are doing truly reasoning? It's really fascinating for anyone that's interested in learning more about that.
Michael Helbling
Nice.
Tim Wilson
That same post by Cassie talks about how, I can't remember which model, it says "thinking" while it's spinning around. And she's like, it says thinking. It's not thinking. It's kind of poking a little bit of fun at the human. I was like, oh, I never thought about that.
Michael Helbling
Appear to be human. All right, Mo, what about you? What's your last call?
Mo Kiss
Okay, mine's a weird one. So I am talking about something that has nothing to do with gen AI. I'm doing a professional leadership course internally. I'm very lucky: we have internal coaches at Canva, and we get the opportunity to do this. And the topic we covered last week was about our leadership values and what's called our leadership shadow. I had written my leadership values a few years ago. I'd run them through some mentors and people that I chat with, and I was pretty happy with them. And of course I dusted them off the shelf and I looked at them and was like, yeah. I think what really stood out to me is that at the time I wrote them, they were all very aspirational and, I would say, very soft-skill based. And I didn't feel that I had something there that captured the team's output or drive. And I realized over the last few years that is something that's really important to me. So number one, this is a reminder: if you do have leadership values, go check on them. But the other thing that happened is we started talking about our leadership shadow. That's where you say something's important to you, but maybe the way you behave doesn't show up in the same way. An example, not a reflection of me at all, is you say that your team are the most important thing, you really care about everyone that you manage, but then you move your one-on-ones regularly, or you reschedule the team meeting every month, or something like that. So it's about identifying where you are saying things are important, but your behavior, if you were in the team and seeing it, is actually quite different. And yeah, it was a challenging exercise, but a good one, to see how you then overlay that with the values that really are true to you and how you're going to show up, and make sure that you're demonstrating that to the team.
Tim Wilson
Yeah, you should throw those into ChatGPT and say what is my leadership shadow?
Mo Kiss
I don't think it knows me, knows me well enough yet.
Tim Wilson
Yeah, I bet it will. Give it a couple weeks, by the time this releases.
Mo Kiss
Maybe I should let Tim be in charge of the prompting and then maybe we would get some real gold there.
Michael Helbling
All right, well Tim, what's your last call?
Tim Wilson
So I'm going to do a plug, and actually a twofer. My first one's a plug for the Data Connect conference, which is a little less than a year out, in early October of 2025. I've talked about it before, we've done promos for it. It's dataconnectconf.com, and the call for speakers is already open. So if you are, or if you know, someone who is a woman or a genderqueer, gender non-conforming, or non-binary individual who would have something to speak about at a data conference, consider putting in a pitch. It's a great conference, open to all to attend, just limited on who the speakers are. So that's my plug for that conference and getting great content there. And then my actual last call, which maybe does tie into this topic: there's a guy named Peder Isager, I don't know how to pronounce his last name, who wrote a post called Eight Basic Rules for Causal Inference. What's funny is the URL actually says seven basic rules for causal inference, so I am really curious which rule he realized he had missed. It gives simple little diagrams. For the first couple, I'm like, yeah, knew that, knew that. And then it got really interesting. So when it comes to the topic we had today, I think causality is one of those things that is really kind of profound and tricky. And it was a nice post, with simple little diagrams that make you think, oh, this is why all the answers are not just in the data that I've already collected. So, Eight Basic Rules for Causal Inference. Michael, what's your last call?
Michael Helbling
Well, in the spirit of this topic: a couple of people that I know very well, because I hired them both and they used to work for me, have started a startup in the AI space called Moonbird AI. They are building agentic tools and services and things like that, but their first product is an AI agent specifically for looking at Adobe Analytics implementations. So if I were walking into a situation where I was looking at an Adobe implementation today, I would be using that tool to bring me up to speed, give me information, provide me some knowledge. So if you're in that space, it's a great little tool for that. Big shout out to the Moonbird team over there. All right, well, Martin, thank you so much for coming on the podcast. Who knew that little networking session at Marketing Analytics Summit would eventually lead to this? Martin and I were at a table together at Marketing Analytics Summit, we got to introduce ourselves, and here we are. So thank you, Martin.
Martin Broadhurst
Thank you. Yeah, it was some good dim sum we had.
Michael Helbling
Yeah, that's right. All right. And then, of course, no show would be complete without a huge shout out to Josh Crowhurst, our producer, who does so much behind the scenes to make things happen. Josh, thank you. And of course, a big thank you to Tim and Mo, my co-hosts, for bringing so much life and passion to this episode.
Mo Kiss
Arguing, you mean arguing.
Michael Helbling
Yeah, well, you know, I asked Chat GPT, like, give me a positive spin on all this.
Martin Broadhurst
Like.
Michael Helbling
All right, well, this is an awesome topic, obviously one that I think is super interesting, and obviously growing and becoming more and more a part of the conversation. So I think this is probably not the first or the last time we'll talk about it on this podcast, but I like the start we got today. So again, thank you, Martin. And as you're going out there, we'd love to hear from you. What are you using AI for? What kinds of things do you see in your work? It is easy to reach out to us: you can get a hold of us on the Measure chat group or on LinkedIn, and we also now have a YouTube channel, so you can check us out there as well. So go ahead and reach out to us. We'd love to hear from you, unless you're pitching an AI-related topic or guest from a PR autobot type situation. We do get a lot of those emails, but we'll do the picking, and I think we got the right person for this today. All right. Anyways, I know that as you're going through life, you're going to be using AI more and more. So keep the good work going. And I know I speak for both of my co-hosts, Tim and Mo, when I say: keep analyzing.
Tim Wilson
Thanks for listening. Let's keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure chat Slack group. Music for the podcast by Josh Crowhurst.
Mo Kiss
So smart guys want to fit in, so they made up a term called analytics. Analytics don't work.
Tim Wilson
Do the analytics say go for it, no matter who's going for it. So if you and I want the field, the analytics say go for it. It's the stupidest, laziest, lamest thing I've ever heard. For reasoning in competition.
Tim Wilson
I look my best when I'm pixelated and frozen.
Michael Helbling
Yeah, you know, use a little of that sometimes. That zoom feature that kind of smooths out your features a little bit.
Mo Kiss
Yeah, yeah.
Michael Helbling
Best use of AI ever.
Mo Kiss
But, Martin, I do have a bit of a question on that. Oh, Jesus Christ.
Tim Wilson
I talked it right out of your head.
Mo Kiss
As I said it, as. As the words came out of my mouth, it completely disappeared out of my head.
Tim Wilson
Dear Chat GPT, what was my question?
Mo Kiss
Oh, my God. No, I've never.
Michael Helbling
Good questions.
Mo Kiss
Absolutely nothing.
Tim Wilson
Rock Flag and our jobs are safe. For now.
Michael Helbling
For now.
Summary of "The Analytics Power Hour" Episode #257: Analyst Use Cases for Generative AI
Introduction
In Episode #257 of "The Analytics Power Hour," hosts Michael Helbling, Mo Kiss, and Tim Wilson explore the intersection of generative AI and analytics. Joined by expert guest Martin Broadhurst, the discussion delves into how AI tools are transforming the analytics landscape, the opportunities they present, and the challenges they pose for data professionals.
Opening Thoughts on AI in Analytics
Michael Helbling sets the stage by reflecting on the historical trend of automation in analytics, highlighting AI as the latest advancement aimed at increasing efficiency through machine assistance. He introduces key questions: "How and what can we hand off to an AI when it comes to analytics? Are they going to take our jobs? Will it truly usher in an era of data democratization?" ([00:13]).
Tim Wilson expresses apprehension about AI's role, admitting his nervousness about the episode due to ongoing concerns about AI's impact on job security ([02:09]). Martin Broadhurst reassures listeners, stating, "I don't see anybody's job going anywhere in a hurry. Not to spoil what's to come, but yeah, I think you're okay for the time being." ([02:23]).
Martin Broadhurst’s Journey into AI
Martin shares his background in CRM and marketing automation and how his interest in AI was sparked by OpenAI's GPT-3 API. He recounts experimenting with AI tools to push their limits and how this led him to assist clients in integrating generative AI into their workflows, particularly focusing on data analysis ([02:45]).
Use Cases of Generative AI in Analytics
Martin outlines four primary ways generative AI can be utilized with spreadsheets:
AI as a Coach or Mentor: Assisting with formula writing, optimizing functions, and providing guidance without direct data access. "It can be quite good for that. There's over 500 functions in Excel. Trying to keep all of those in your head is very difficult. Then you've got the file ingestion." ([14:47])
File Ingestion: Uploading spreadsheets to AI for data manipulation using tools like Python. However, Martin warns about the risk of "hallucinations," where AI generates incorrect insights. "Nearly every single time I do this, the data that it presents back has some errors in it that if you're not paying attention you would not spot." ([17:50])
Assistance within Spreadsheet Software: Integrating AI tools like Microsoft Copilot to execute functions directly within the spreadsheet environment. "Copilot for Power BI... it's not ready for CEO level insights and presentation of data at the moment. It's quite simple." ([25:39])
Adding New Functions to Spreadsheets: Utilizing AI to create dynamic functions within spreadsheets, such as Anthropic's Claude for Sheets, which allows users to input prompts and receive responses directly in cells. "You can assemble prompts using data input from other cells... it populates that cell." ([28:22])
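The prompt-assembly pattern in that last item can be sketched outside a spreadsheet too. The cell layout and helper function below are invented for illustration; a real add-on would send the assembled prompt to the model and write the response back into the calling cell:

```python
# Toy "spreadsheet": cell references map to cell values.
cells = {
    "A1": "Summarize this customer comment in one sentence:",
    "B1": "Checkout was slow and my coupon code failed twice.",
}

def assemble_prompt(instruction_ref: str, data_ref: str) -> str:
    # Combines an instruction cell with a data cell, the same way a
    # sheet formula would reference other cells when building a prompt.
    return cells[instruction_ref] + "\n\n" + cells[data_ref]

prompt = assemble_prompt("A1", "B1")
print(prompt)
```

Because the prompt is rebuilt from cell references, editing either cell changes the next model call, which is what makes the spreadsheet version feel like an ordinary recalculating function.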
Challenges and Limitations
The conversation highlights significant challenges in using generative AI for analytics:
Accuracy and Hallucinations: Martin explains that while AI can generate charts and data manipulations, it often fails to accurately describe or interpret them. "Its description of the charts... is wrong. It consistently would say you can see that for the 35 to 50 year old cohort, satisfied is higher than dissatisfied and it's clearly the other way around." ([17:50])
Understanding AI Mechanisms: Mo Kiss emphasizes that users often oversimplify what AI can do, neglecting the complexities of data analysis. "It feels like it's this big bucket of like a tool and somehow people are like, oh well, the tool must be smart enough to get to how to fix it." ([07:46])
Human Oversight is Essential: Both hosts and Martin stress the importance of human validation. Without a deep understanding of AI's capabilities and limitations, analysts risk misinterpreting AI-generated insights. "You need to retain the ability to validate and interpret AI-generated insights to prevent errors." ([38:06])
Balancing AI and Human Expertise
The hosts discuss the necessity of maintaining technical skills even with AI's advancements. Mo Kiss argues against the misconception that AI can replace fundamental skills like SQL or Python, emphasizing that understanding these tools enhances one's ability to leverage AI effectively. "Learning that stuff helps you understand how data works." ([38:06])
Tim Wilson echoes this sentiment, highlighting the value of human ingenuity in troubleshooting and refining AI outputs. "I definitely feel like I still... keep discounting that, like, oh well, maybe it'll get to where it's better." ([52:44])
Practical Applications and Best Practices
Michael shares a practical example where ChatGPT assisted in generating hypotheses for a marketing campaign, demonstrating AI's role in ideation. However, he acknowledges the necessity of human intervention to validate and refine these hypotheses. "We are going to just lean hard into this. We are going to then tackle this... and it's actually ended up how we structured our analysis was based off the hypotheses generated from ChatGPT." ([44:56])
Martin advises analysts to use AI as a supportive tool rather than a replacement, emphasizing experimentation to understand AI's strengths and weaknesses. "Play with it, poke it, prod it, pull it to bits and really look at the outputs that you're getting to understand where those limits are within these tools." ([54:29])
Future Perspectives and Recommendations
Martin recommends that budding analysts focus on mastering the fundamentals of data analysis while simultaneously exploring AI tools to understand their capabilities. He emphasizes treating AI as a separate endeavor to grasp what it can and cannot do. "I would pursue the career or pursue the skill set completely ignoring that AI exists. And I would treat learning AI as a separate endeavor in and of itself." ([55:40])
Conclusion
The episode wraps up with the hosts reflecting on the nuanced role of AI in analytics. They agree that while AI offers significant enhancements in efficiency and creativity, it cannot replace the critical thinking and expertise of human analysts. The conversation underscores the importance of balancing AI augmentation with continuous skill development to harness AI’s potential effectively.
Notable Quotes with Timestamps
"Having worked with data, having dealt with business problems... oh, here's the future. It's going to take everything." — Mo Kiss ([05:15])
"People don't understand the tool and the nature of the tool... if you don't understand it, you'll be sadly mistaken." — Martin Broadhurst ([07:46])
"And sort of the human component is much, much less... just the reality of identifying a problem where you're trying to use data to solve it." — Mo Kiss ([07:46])
"If you're not giving good context and good prompts, you're going to get bad outputs." — Martin Broadhurst ([12:22])
"We are leaning hard into AI for ideation and brainstorming, but we still need humans to validate and refine these insights." — Tim Wilson ([44:56])
"You have to retain the ability to validate and interpret AI-generated insights to prevent errors." — Mo Kiss ([38:06])
Key Takeaways
AI as an Augmentative Tool: Generative AI can significantly enhance the efficiency and creativity of analysts by handling repetitive tasks and aiding in ideation, but it cannot replace the nuanced understanding and critical thinking of human professionals.
Understanding AI Limitations: Analysts must grasp the mechanisms behind AI tools to effectively leverage their strengths and mitigate their weaknesses, particularly regarding accuracy and data interpretation.
Essential Human Oversight: Continuous human validation is crucial to ensure the reliability of AI-generated insights, preventing misinterpretations and errors that could arise from AI hallucinations.
Balancing Skill Development: Maintaining foundational technical skills in SQL, Python, and data analysis remains essential, even as AI tools become more integrated into the analytics workflow.
Future-Ready Analytics Professionals: Professionals should focus on mastering data fundamentals while exploring AI capabilities, positioning themselves to effectively collaborate with AI tools and harness their full potential.
Final Thoughts
"The Analytics Power Hour" delivers a comprehensive exploration of how generative AI is both empowering and challenging the field of analytics. By highlighting real-world applications, addressing limitations, and emphasizing the enduring importance of human expertise, the episode provides valuable insights for data professionals navigating the evolving technological landscape. Listeners are encouraged to engage thoughtfully with AI, leveraging its capabilities while maintaining robust analytical skills to drive informed and accurate data-driven decisions.