A
Hey, before we dive in on today's podcast episode, I just want to share with you that AI Business World just wrapped up and the results are in. Marketers walked away with lots of AI strategies they're already implementing. But here's what you need to know: right now, you can still get access to everything with a virtual ticket. You get recordings of every AI Business World session, plus all the content from Social Media Marketing World. We're talking dozens of sessions covering AI workflows, content strategy, Instagram, Facebook ads, and a whole lot more, all from the world's top experts. And what's really cool is you can take your time watching it. You have 18 months to watch it, replay it, and implement it at your own pace. Right now, these virtual tickets are $200 off, but only until May 15th. Head to AIBusinessWorld.live to grab yours before the savings disappear. Welcome to the AI Explored podcast, helping you put AI to work. And now, here's your host, Michael Stelzner. Hello, hello, hello. Thank you so much for joining me for the AI Explored podcast, brought to you by Social Media Examiner. I'm your host, Michael Stelzner, and this is the podcast for marketers, creators, and business owners who want to know how to put AI to work. What if you could move your people from AI beginners to advanced AI employees? Today, we're going to explore how to transform your people using advanced AI training. My special guest is an AI transformation expert who helps businesses scale with AI training and AI governance. He's the author of Ingrain AI: The Blueprint to Strategically Scale an AI-First Culture. His course is AI Mastery for Business Leaders. John Munsell, welcome back to the show. How you doing?
B
Hey, I'm great, Mike. Thanks for having me back. It's great to be here.
A
Awesome to be here with you again. And it's been a little while. So let's start with misconceptions. What do you believe are some of the biggest misconceptions when it comes to training people with AI?
B
Great question. I think the biggest misconception, Mike, is that AI is easy just because the interface is easy. You just type in a few words and questions and boom, it gives you an instant answer. You think you're brilliant, you think it's brilliant. That's probably one of the biggest ones. So a lot of leaders are thinking, well, my employees can already type into ChatGPT, so they know AI, and they confuse that access with actual throughput and capability. Asking AI a question is not the same thing as using AI strategically, consistently, and safely inside of a business, frankly. Let me see if I can put my finger on a couple other ones. I think the bigger problem is that people let their employees teach themselves, which is fine, but they reach a plateau because they're busy, they have real jobs, and so they'll only go so far. And what we find when we're testing people is that we have something we call ten levels of mastery, but they only reach level two. So we have to teach them how to think differently, how to approach AI differently, and that's the way they get more out of it. So I think the real issue isn't whether people can use AI, it's whether they can use it well, whether they can use it consistently, and whether they can actually use it in a way that creates consistent business value in the way that the business needs.
A
You know, I can resonate with what you're talking about because anyone who has employees, if you just hire an employee and you expect them to learn how to do their job on their own, well, they're going to hit a natural plateau, right? Because they're just going to kind of experiment through it and just kind of do it. But if you hire someone who is, let's say, very important to your business, and I would argue almost everyone who hires someone is hiring someone who they hope will make a difference in their business, there's probably a direct correlation between the quality of direction and guidance that you provide them and the actual output or success that they have. And why would we not apply that very same framework or philosophy directly to training people how to use something as powerful as AI? I totally get it. So let me ask you this question. When AI is properly trained throughout an organization, regardless of its size, what is the benefit? What is the unlock? What are the upside advantages, if you will, to businesses who get this right?
B
Well, when it's done well, AI training is going to improve productivity, it's going to create consistency, it's going to increase confidence, and it's going to turn your employees into force multipliers. Now, what do I mean by that? I mean the first upside is speed. People stop wasting time guessing on how to use AI, going back and forth and back and forth, and they learn how to be more consistent. They learn how to structure a prompt in a very specific way, and they learn how to turn that prompt into a skill. There are a lot of steps they need to take in order to do that. But once they do, then they start becoming confident. And when they become confident, then they build their own tools. And when they build their own tools, they share their tools with others, and then other people learn. So instead of keeping the knowledge in these isolated pockets where there's no sharing going on, because nobody understands it the same way and everybody's kind of experimenting, you create, oh gosh, I don't know what you would call it, but it's like a flywheel where it just gets better and better because the people understand how to do it. And when people build their own tools, then they see the power of it and they feel like they're making a contribution. That's one of the most important things for an employee in terms of morale, to know that they're contributing and actually doing something meaningful. And of course, you also reduce risk, right? I mean, good training is going to include some sort of privacy, security, and governance training, some sort of judgment. One of the big things that we found is that people, by and large, if they're not trained properly, will yield their taste, their judgment, their expertise to AI, because AI is so fast. And so instead of thinking through and pushing AI to get better, they just accept what AI gives them.
You need to coach people not to look at AI like it is the final word in what comes out. They're the ones with the domain expertise, and they need to push it to a better level. And training helps you do that. Without training, people just kind of fend for themselves, and you don't get the big benefits of getting everybody on the same page.
A
And for folks who have done training like my organization has, and I'd like to think that my organization is kind of out on the edge because I get a chance to talk to people like you, John, who are, I would argue, early adopters into this new frontier. And you're developing standards and protocols, taking your many, many decades of experience working in different kinds of marketplaces and bringing it to this space. I can tell you from the people that work for me, and I have a small business, but it's not a super tiny business. We've done multiple trainings. And I know, because I'm able to kind of see all the data, that sometimes these trainings go in one ear and right out the other. Okay? It's one thing for people to receive education; it's another thing for them to actually apply it to their work. And folks, that's why I wanted to talk to John today, because there is a method for those of you that have tried this and have just had people fall back into their standard way of doing things, because John hit on something really important. If people don't get the results that they expect out of the AI, then they're going to go back to their old way of operating. And we both know, and everyone who's listening to this knows, that that could be a problem, because other organizations who do go through proper training are going to have an unfair competitive advantage, because they're going to be able to accomplish things with the very same staff that they have at a level that those who have not embraced this could not. So that's why I'm so interested in what we're about to talk about today, because it's really, really hard to get people all aligned, rowing in the same direction, to embrace something that they first see as something that maybe they do not understand. And then eventually they start rowing in the right direction. So I just wanted to throw that over the fence, because I know it's a problem and I know, John, you put a lot of energy into this problem.
Feel free to respond to anything I said there, or we can just start with: where do we start?
B
Yeah, where do we start? Right. Where does somebody begin?
A
Yeah.
B
So I think there are two places that you have to begin. First, you have to begin with governance. Now, what is governance? It isn't just about rules and regulations. It's about monitoring the output that you expect and monitoring people as they go through training. So we start with an assessment of people's skill set, where they are. I want to know exactly how much they know and what they're doing. And the second thing is having an objective. And the objective isn't just, do you know how to use AI? We break it down into four stages, Mike. There are 10 levels, but we break those into four smaller stages real quick.
A
Before you get into the stages, you talked about governance and assessment. Are they one and the same?
B
Yeah, well, the way we run it, it is. So governance is about establishing rules. But think of it this way. As people increase their proficiency and their use of AI, they learn to do more sophisticated things, and they need to connect to, or need to have access to, more sophisticated tools, data sets, et cetera.
A
Okay.
B
So as their level of mastery increases, the level of complexity of the systems they use increases, right? They might have to do API calls, they might have to connect to external data sources. As they do that, the level of governance or oversight or security needs to increase as well.
A
Okay.
B
Those three things work together, but you can't have any of it work together unless the people who are in charge of governance are also overseeing people's acceleration through the process and making sure they start with a benchmark, you know, like a baseline of productivity and a baseline of expertise. And they're watching their people go through that. So what we always look at is: everybody who goes through a training, I want them to produce an artifact at the end of it that shaves off at least three hours a week of their time. So we start off with an objective for them to build their own tool. Okay. Now, I also want to know where they are currently, and then I need to know exactly how fast the team needs them to go through the training.
A
Got it.
B
Okay. And so that's the main thing. If you don't have that as a baseline, and you don't have a group you can go to who are the champions of that, that you can ask questions, then things fall apart. Because what we found is, left to their own devices when people go through training... and our training is a hybrid training, which means some of it is recorded and some of it is live and in person. A couple times a week, we have these office hours. The people who get the most out of it are the ones who show up at office hours and have questions. The people who don't are the ones who don't show up at office hours. And what we found, by and large, is the majority of them don't even get past the first couple of modules, because they're busy. And they didn't start with, oh, if I got it to do this for me, that would be a big win. They just start with, okay, boss told me to go watch these videos, you know, and so nothing happens.
A
So tell me about these assessment things that you were about to get into, please.
B
Yeah. Okay. So we have two assessments, Mike. One of them is, I want to know what their makeup is. I want to know, are they producers? Do they just get a lot of work done? Do people just give them the work, and they love to produce the work? Are they administrators, the ones who like to set up the rules, who like to organize things? Are they the entrepreneurs, the idea people, the creative types? Or are they the connectors or the integrators, the ones that like to create community and like to get everybody working together harmoniously? All right, so I need to know which ones those are. That's the P, A, E, I, if you were paying attention. So when I know what your primary driver is, then I can see what happens as you go through training. And then I want to test their level of fluency. So we start with literacy, which is levels one through three. Fluency, which is levels four through six. Mastery, which is seven through nine. And then what we would call stewardship or AI-native thinking, which is level 10. All right.
A
Yeah. Let's describe each one of those levels just so people can wrap their brain around them.
B
Yeah, sure. So literacy just means people understand what AI is, what it can do, what it can't do, and how to use it safely. They know how to ask questions, how to refine those questions, and how to get a decent output. Fluency means they can use AI regularly in their role to improve work quality and speed, and they've started to build some tools. Mastery means they're building repeatable workflows, they're connecting tools together, they're using reusable prompts that are solving real problems in their job, and they're building agents at that level. Obviously, if you're building agents, you need another level of security overseeing you. Stewardship means they're actually managing people who manage agents, and they're managing teams. So that's level 10.
A
Just real quick on the stewardship thing.
B
Yeah.
A
Are they managing teams of AIs or humans? Like, talk to me a little bit about what that means.
B
Yeah, both. So it could be that they have spun up several AI agents and they're managing those agents, but most of the time they're actually managing people who have been authorized to manage agents. But there's a big level of security that comes with that. We haven't had anybody yet reach level 9 or 10, and that's because security is not keeping pace with the speed of AI. The more these agents get out there, the more people are realizing we have to have some really tight controls over what these agents do, where they were built, what they connect to. I could go on and on. But that's the thing: when we test people, I want to know where they are in that. But I also want to know their specific job, before they even go through training. We're going to help them come up with somewhere between five and ten ideas of tools they can build. And then we'll set a baseline: okay, what does this task take you now? And then we're going to figure out, as you go through, what does it take you after you've AI-ified that particular task?
A
Okay, so I'm gonna re-explain what I heard you say. First of all, anyone who wants to do really great training, first and foremost, you want to assess where your staff are at. Do they have basic literacy, which means they understand the AI tools that are available to them and how to use them in a way that's safe and secure? Fluency means that they're actually using it in their job to help them basically do their job better, and maybe they're starting to build custom GPTs or Claude projects or something along those lines to help them. Right. And then mastery is they've built more sophisticated tools and have started using agentic AI agents to do really powerful things. Presumably we're talking about things like Claude Code or OpenClaw or, gosh, who knows what else, that kind of stuff. Right. And then the stewardship level is more like, these are people that are going to effectively make sure that those that are using AI are using it properly, and are managing all that stuff, for lack of better words, and maybe even managing a lot of autonomous agents to accomplish some bigger thing. Right. And then I also heard you say you've got to come up with some ideas for each individual, depending on where they're at, of how you could use these tools to build something that will allow them to be more successful in their job. Do I have that right?
B
Correct. Yeah. I mean, think about it. We have people from every department come through our training, and somebody in sales, typically, strangely enough, they struggle: what am I going to use AI for? You know?
A
Right.
B
Marketing is the one who always has the ideas, because marketing is typically where it starts in an organization, because they use it to write blog posts or whatever. But then you have somebody in HR who's like, I don't know what to do. Somebody in operations, I don't know what to do. CEOs, believe it or not, have a hard time thinking about what to do. And so we have a process where we help them come up with the ideas, where they identify those gnats flying around their ear that they'd like to get rid of. And then we come up with a path to do it.
A
I would love to hear one of those stories, by the way, because I think when we were prepping, you had one with a furniture company. Maybe you could share that one, because I think it'll help people.
B
Yeah, and this is the interesting thing, because when people come to us, it's largely going to be a CEO. And the CEO is typically saying, look, I know this is important, I know I gotta learn this, I don't know enough to lead. And a lot of them will say, look, I just want to learn what this is all about, and then I'm gonna delegate it. But this particular CEO was like, okay, before I go put my people through it, I want to learn what this is all about. I've been using ChatGPT, I got a pretty good idea of how it works. Well, they're in the furniture business, the office furniture business in particular, high-end offices. So they're working with businesses that are trying to fill up, you know, 20, 30, 40,000 square feet of office space. There's a whole lot that goes into that. So they would get these RFPs in, and an RFP to populate an office building is about a 350-page RFP. Some of it's lighting, some of it's plumbing, some of it's a bunch of stuff. So they would have to sift through it and figure out where's the furniture part of it that they could actually bid on. And he said it would take them about three to six hours just to figure out whether they wanted to bid on it. He called that a go or no-go decision. If they said it's a go, let's bid on this, he would have to put two and a half people on it for somewhere between two and three weeks just to create the response to the RFP. So they consequently would only bid on probably three a year, because it was a pain. But the rewards were great, because it was like a quarter of a million dollars to a million and a half dollars each time. So they had to really figure it out. Well, he built the tool in our training. Now, keep in mind, I don't know anything about office furniture or office space, but he built a tool in our training that would allow him to digest a 350-page PDF and get to a go or no-go decision in 20 minutes.
And then if it was a go, he'd have a full-blown response to the RFP in two hours with one person, himself. So he's like, John, this is going to mean millions of dollars for us, because we're now going to be able to bid on probably three to five per month as opposed to three per year. And that's massive. But he wasn't thinking that when he went into the training. Because we helped him ideate, so to speak, he's like, well, what the heck? Let me see if I can make this work.
A
That's super cool. I talked to a lot of people on the ground at AI Business World last month, and what I heard over and over again is people are having a hard time keeping up with the change and figuring out exactly where they should focus. And it's true that that is hard to keep up with. But my keynote, I think, could help. It was called The Future of Marketing: How to Thrive When AI Changes Everything. And in it, I shared research, real AI applications, and a clear framework for wielding AI as your most powerful ally, making you irreplaceable. And here's the thing: you can still watch it. And not just that keynote, but every keynote and session and workshop from both AI Business World and Social Media Marketing World is available to you right now with your virtual ticket. We're talking dozens of quality sessions focusing on AI workflows, content, Instagram, Facebook ads, and a whole lot more. And you have 18 months to consume it all. Right now, tickets are on sale for $200 off, but only until May 15th. Head over to AIBusinessWorld.live, that's AIBusinessWorld.live, to get your tickets today. Okay, so we started this discussion with the different AI stages of mastery. What comes next after we have properly assessed all of our people's skills... I mean, not skills, but levels of mastery? What comes next?
B
Yeah, okay, great question. So, you know, we kind of touched on it earlier. It's this PAEI assessment. Because what I want to know is, if they're producers or administrators, a lot of that activity is going to be handled by AI. But that frees up the other side of things, right? It frees up the creative thinking, so it frees up a lot of other capacity for them. So I want to know where they are, so I can adjust what they learn and how they learn.
A
Yeah, go over those four levels again. Producer, administrator... what were the other two?
B
Yeah, so if I change the acronym around, sometimes it's a little easier. They're either a doer, they love to get stuff done. They're an administrator, they love to work with rules or create rules. They're an innovator, the ones that have the creative thinking going on. Or they're a connector, the ones that like to, you know, build a team. Okay.
A
Okay.
B
And so I want to make sure, if I'm working with someone and I know that he or she is an administrator, and I know this from experience, this isn't just hearsay, I know that their idea flow is going to be different than a true innovator's. And so I'm going to have to work with them a little bit harder to help them come up with an idea for a GPT, because sometimes they'll be a little confined. The same with a producer. Okay? So once I know who they are, that's really helpful. The other is when I'm building an AI Council or an AI Center of Excellence, that's a team of people that oversee the scaling of AI in an organization. There's a chemistry to that team, and I need to know the PAEI makeup of that team so that it gels and operates the best. If I have all administrators and I have no idea people and no connectors, I'm going to have a really restrictive set of governance, you know? So I need to have the idea people in that team. They're the ones that champion AI. They're the ones that get people excited about it. But I have to balance them out with some administrators that say, oh, not so fast. Okay? So I have to pull all of that together in one thing. And then, when I'm doing the assessments as somebody goes through the training, we have a heat map, okay? Think of it this way. I have dots on this heat map for the different levels, and I can see where our team is. Say the concentration is on levels one through three. As we train people and retest them, I can see that concentration move to, say, levels four through six. And then I can see another chunk move through, say, six or seven. So my heat map shows me where the movement of expertise is. And then I should be able to track against that an output gain or a capacity gain. And the cool thing about this is, when people do their jobs well and they've built these artifacts that we teach them how to build, then what you end up with is this new capacity in an organization.
Now, a business owner has a couple choices with that capacity. They could give people time off, they could lay people off. We don't want them to do that. What they could do instead is sell into that capacity. And that's what the furniture guy did. See, he created a massive inflow of business, and now he has to train everybody to create the capacity. Most people end up creating capacity and go, oh, shoot, now we can sell into it. So that's the whole point of going through that assessment process: so I know where the gaps are going to show up, I know where the capacity is going to show up, and I now have an opportunity to address that newfound capacity. Hopefully that makes sense.
A
Yep. So I do have a question. When those of us who are not you are deciding to assess the skills of some of our staff, because we want this to be very educational for people, what are some of the questions we could ask to assess whether someone has the right kinds of skills? Kind of open-ended questions. What are some things that we can do to assess someone's level? Because if you just ask someone, which of these are you, from a drop-down menu, well, they're just going to pick whatever. But there's obviously some questions we should ask. What kinds of questions should we ask?
B
Yeah, well, that's a great question, no pun intended. But there are a lot of questions. It's like 35 questions. The skill assessment is about 20 questions, and the last three are open-ended. Those are where we want to see a sample prompt. So they literally have to create a prompt and put it in there so I can see how they structure it. Leading up to that, I'm just asking, hey, have you done this? What does this look like? I've tried really hard to create a questionnaire that doesn't lead the witness too much and doesn't make them go, oh, this is what he wants to hear, I'll answer it that way. And then when we get into the PAEI assessment, what we have is a series of questions, I want to say it's like 15 or 20 as well, where we give a scenario and then we just say, which one is most like you and which one is least like you? So they have to pick, out of these four or five scenarios, which one is most like them and which one's least like them. It shifts the thinking a little bit, and it's troublesome for a lot of people, because they're like, well, I'm actually like all five of these, you know? But they still have to make a decision. We do that so that it doesn't make them just guess and say, oh, boss wants me to say this, so I'm going to say that, you know? Yeah. So I would ask them stuff like, what do you like better out of all these things? And then I would list certain scenarios or whatever. I wish I could give you some of the exact examples, but I don't have the questionnaire.
A
No, that's totally cool. I know when we were prepping for this, some of the things that you said you might include for something like this is, have you ever done this, for example, built a knowledge base? And I would imagine the answers would be yes, no, not sure, or something along those lines. Right? Or have you ever shared a prompt? I mean, you have some of those kinds of things.
B
Exactly. Have you ever created or shared something with other teammates? Have you built a custom GPT or a Claude project? Have you built a skill or a workflow? Have you ever connected two workflows together to create something unique? Can they explain when to use AI and when not to use AI? Do they know how to turn on secure features, and do they know where those features are? Can they evaluate outputs critically? That's probably one of the key things I want to know. Or do they just take everything at face value? With our assessment, we try really hard to base it on the artifacts and the examples they give, but the questions that we ask them tell us a little bit about behavior, and they also tell us a little bit about experience. At the end, they have to fill in the blanks that validate the answers to the questions earlier.
A
Love it.
B
Does that make sense?
A
Totally. Okay.
B
Yeah.
A
So up to this point, we've talked about a lot of the assessment.
B
Yeah.
A
Now we really want to get into, obviously, maybe the assignment or the training or whatever. What advice or system do you recommend to people? Because once you've done an assessment, you're just beginning, right? So there's a lot more to talk about here.
B
So in other words, how do we upskill people, right? How do we make it happen? So, you know, we typically start by asking employees a pretty simple question, and that's: what do you do every week that's repetitive, slow, frustrating, or mentally draining? Right? This is where I'm getting them to think through what is the thing they're going to build at the end of the training. So I have to get them excited about it.
A
I love that, because you're not asking them what they want to build, you're helping them identify a problem that AI can help solve. Right. Okay.
B
Yeah. We even built a GPT to help them think through it ahead of time, if they can't think it through, you know. But we want them to go into training with an expectation of delivering something that helps them and makes their job better. If they go into it with the idea that I'm just going to go watch a bunch of videos, it rarely goes well. So then, when they identify these things, we have this process that we call the Perfect Day exercise, okay? And in the Perfect Day, they're saying, hey, what is it that I do? And what are the things that I hate to do that somebody else could be doing, so I could hand them off and know they're getting done with excellence? Okay. So after they go through this whole, let me look at all those dumb things that irritate the heck out of me that I have to do, then we have them ask, okay, do you think AI could help? Could AI be a part of this? And then we'll drill a little bit deeper, and we'll say, well, should we just use it to speed up the current process, or should we redesign and reimagine the process entirely? Because AI changes what's possible. Now, going into it, they won't necessarily know the difference between those two. They won't know which one is possible. But as they go through the training, they start to realize, oh, I could reimagine the workflow. I don't have to shove AI into an existing workflow; I may have to redesign the whole thing. And that's where the real value starts showing up. You stop saying, okay, well, let's use AI inside of an old system, and you start asking, what would this process look like if it were designed with AI from the start? Okay. And then you can identify one tool that you can build, and it could be a custom GPT, it could be a Claude project, it could be a document analyzer, it could be a structured prompt workflow, or something like that. But the goal isn't novelty, so to speak.
The goal is practical time savings, better quality, repeatability. I mean, most of the companies that we deal with, Mike, they have one giant AI initiative, one multimillion-dollar thing that they're building over here that's going to have a financial impact. And they completely ignore the fact that if they had 100 employees elevating their capabilities by 30, 40, 50, or 100%, the impact of that is faster and far more dramatic than a single AI application. It's a hell of a lot less expensive, too.
A
That's the money quote right there. So as I was thinking about this, there are lots of people I've had on this show that you could hire who will come in and take a single process and help you, for lack of a better word, automate it using sophisticated back-end tools like n8n, right?
B
Yeah.
A
But the idea is that if you could scale up all of your staff, if you could get all of them properly trained and all of them creating something, or multiple somethings, that lets them either (a) improve something about their work or (b) reimagine the way a process is done from the beginning, now all of a sudden you're not just reliant on one expert coming in to create one really sophisticated process. You potentially have all of your staff working in parallel, like one of those parallel processors, right? And all of a sudden the whole thing lifts up. It's like all the boats start rising. Is that correct?
B
Totally. Yeah. And the other thing people miss is the idea flow. All right, so think about it this way. Last year MIT came out with a study that said 95% of all AI initiatives fail, and they started to blame it on tool selection and some other things. I personally believe it was a training issue. What I mean by that is, if the collective level, or knowledge density if you will, of the organization in terms of AI is at a level three, and you're trying to take on a level eight or nine project, the only person who knows what's going on is the person you hired to build it. Nobody in your organization knows enough to contribute. And so the odds are you're going to make mistakes, because you're not going to know what AI needs in order to make that thing really successful. Like, we're working with Tulane University right now, and they just went live with a huge Oracle implementation on their new ERP system. But before it went live, they called us in to train their people, because they knew the Oracle ERP system had AI embedded in it and no one knew what to do with AI. They knew that if they leveled everybody up, they would have a whole lot more success with it. And that's what happens. You start teaching a whole lot of people how to build their own tools, and you get them up to that level five, level six zone. The ideas they have for big applications go through the roof, and now they can contribute. So the speed to deliver them goes way up and the cost to deliver them goes way down, because you're not relying so much on a vendor as on your own internal expertise. And you'll have people go, now you're going to need this from me, and you're going to need that from me, in order to make that work. Here's how I see it working.
Because they've already started to build their own tools, and they've started to realize, okay, if I connect it to this database, and I can connect it to that database and have it parse all this data, I can get it to do this. That's a whole lot better than having some builder come in from the outside and say, hey, I can build some stuff for you. That's hard to do if you don't know what you need, right? So training people gives you those ideas.
A
I have a couple questions here. I've done this in my organization, where I gave everybody a challenge. We did it with Google, because we're in the Google ecosystem, and the objective was to have everybody create Gems.
B
Yeah.
A
And you know, we had a little contest. For every Gem you created, you were entered to win something. I think it was a hundred-dollar Amazon gift card or something like that.
B
Oh, that's cool.
A
But give people an understanding of the kinds of things your average, everyday employee could create. Because some of the stuff you're talking about is so pie in the sky for my audience that it seems completely outside the realm of possibility. You know what I mean? So what are the kinds of basic things a lot of people might want to build as they're going through basic training, for lack of a better word?
B
Yeah. Well, here's what I would say. I'll give you a couple of ideas, but I'm going to caveat it with this: when you get into the training, the ideas you come up with as you start to learn more are way more than you ever thought you'd come up with going in. We had one gentleman go through the course and he built a patent analyzer, because he files for about 20 or 30 patents per year in the chemical industry. I don't know anything about chemicals and I don't know anything about patents, but I know he was spending $30,000 a year in legal fees on patents. He built something that would analyze a patent he was trying to apply for, then analyze other patents to see if there were potential conflicts, and then help him rewrite his patent so he could write it all up and hand it off to his attorney. His legal fees dropped by about 90%, and his software subscriptions dropped by 100%, because he killed off a $15,000 software application. We had another gal build a tool that would estimate the cost of building a custom home. She said it was within 3% of this $20,000-a-year software application she was buying. I wouldn't have thought of that. Originally she was like, hey, I think what I want to do is have it analyze the market and tell me certain buying signals. But as she got into it, she switched gears and built a home estimator. But take sales, for instance: we have a process we use ourselves for generating proposals. If I go back five years, our proposal process would take a week or so, which meant the heat with the client would have cooled off quite a bit, and so I may have lost an opportunity. But now we can produce a proposal inside of an hour. When we're with the client on a call, typically over Zoom, we always have an AI note-taker present. If we're in person, we use a Plaud Note, if you're familiar with that.
A
Explain what that is for people who don't know. Is it a physical device you wear around your neck?
B
You can. I have one, but I don't like the one you wear around the neck. Mine is the size of a credit card. The exact dimensions.
A
Do you tell them, hey, by the way.
B
Oh, yeah, yeah, absolutely. You get in a lot of trouble
A
if you say, hey, I'm taking notes with my little note taker here. I hope you're okay with that. Okay.
B
But it's literally the exact dimensions of a credit card, only it's twice as thick.
A
Okay.
B
Okay. And so you just push the button. Hey, you mind if I record it? Nope. And you just lay it on the desk. Very unobtrusive. It's not like, you know, you're slapping a giant phone on the desk and boom, hey, you know.
A
But I would imagine you could do that with your phone too, if you didn't have one of these tools. You could use the standard voice recorder right inside the iPhone, I would imagine.
B
There are plenty of tools you can use. I just like the Plaud Note, because a lot of times when I'm in a meeting, I want to leave my phone in the car. I want them to think they're the most important thing I see, and my phone isn't, okay? So, bottom line, I record those meetings, and they're instantly transcribed. I get a transcript, I can automate that transcript to go into a Google Doc so I have it stored, and then it triggers an automation. That automation analyzes the call, like all of them will do, and gives you follow-ups. But I have it do a couple of extra steps. One is I have it identify the things that would typically go beyond someone's notice, those nuances in a call that would escape our attention. What AI will bring out will blow your mind. The reason I do that is because, first of all, if I'm taking notes in a meeting, I'm missing something. I'm also missing visual cues, right? I lose eye contact; I lose a whole lot by taking notes instead of listening. So I try not to take notes. And that's why I say, look, you mind if I record this? Because I want to pay attention to you. So anyway, it immediately analyzes that transcript and pulls that out. But not only that, the next step is it analyzes the transcript so I understand the buying behaviors and decision-making techniques, if you will. That's probably the wrong word, but you know what I mean. I'm trying to understand the buyer. I'm trying to understand their thought process.
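The automation described here — transcript in, follow-ups plus easy-to-miss nuances out — could be sketched roughly as below. Note that `llm()` is a hypothetical placeholder standing in for whichever model API you actually use (ChatGPT, Claude, or Gemini); this is an illustration, not John's actual implementation.

```python
# Sketch of the meeting-analysis automation described above.
# llm() is a stub: swap in a real call to your model of choice.

def llm(prompt: str) -> str:
    """Stand-in for a model API call; returns a canned string here."""
    return f"[model output for prompt starting: {prompt[:40]!r}]"

def analyze_call(transcript: str) -> dict:
    """Run the two extra passes on a transcribed call."""
    follow_ups = llm(
        "List every follow-up item and commitment in this call transcript:\n"
        + transcript
    )
    nuances = llm(
        "Identify subtle cues in this transcript that would typically "
        "escape a note-taker's attention:\n" + transcript
    )
    return {"follow_ups": follow_ups, "nuances": nuances}

report = analyze_call("Prospect: Looks good, but legal has to sign off...")
```

In practice the trigger would be whatever automation fires when the transcript lands in the Google Doc; the point is that each "step" is just one more prompt over the same transcript.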
A
Yeah, because AI might analyze something you didn't pick up on, right?
B
Correct. Yeah. So we run the equivalent of a DISC profile, but it's called an OCEAN profile. It runs this profile, so now I understand what's going to be important to Mrs. Prospect when I present the proposal. It creates this giant profile on her, and then it generates the proposal based on all the pains and problems, because it has a knowledge base of everything that we do. But then I have it assume the personality of that prospect, read the proposal, give me the objections, and provide questions. Then I loop back and have it rewrite the proposal based on those objections and questions. Then I have it emulate the personality again and read the proposal again. I go through that whole loop three times, and by the time I'm finished, I have a really bespoke proposal. And then I can have my salesperson role-play with the AI persona of the prospect and practice before they make their presentation.
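As a rough sketch, the persona-feedback loop described here (profile the buyer, draft, let the AI object in character, rewrite, repeat three times) might look like the following. Again, `llm()` is a hypothetical placeholder rather than a real model API, and the prompts are illustrative:

```python
def llm(prompt: str) -> str:
    """Placeholder for a real model API call (ChatGPT, Claude, Gemini)."""
    return f"[model output for: {prompt[:30]!r}...]"

def build_proposal(transcript: str, passes: int = 3) -> str:
    """Draft a proposal, then refine it through persona-based critique."""
    # Equivalent of the OCEAN-profile step: model the prospect.
    profile = llm("Build an OCEAN personality profile of the buyer from "
                  "this call transcript:\n" + transcript)
    proposal = llm("Draft a proposal addressing the pains and problems in "
                   "this transcript:\n" + transcript)
    for _ in range(passes):
        # The model assumes the prospect's persona and pushes back.
        objections = llm(f"Assume this persona:\n{profile}\n"
                         f"List objections and questions about:\n{proposal}")
        # Rewrite the draft to resolve the persona's objections.
        proposal = llm(f"Rewrite this proposal to resolve:\n{objections}\n\n"
                       f"Proposal:\n{proposal}")
    return proposal

final = build_proposal("Prospect: We need this live before Q3...")
```

The same persona string could then seed the role-play session for the salesperson, since it already encodes the prospect's likely objections.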
A
This is really fascinating and cool. So I'm discerning, based on what you're talking about and some of the clues I heard from you, that these are the kinds of things one could create with a custom GPT, a Claude project, and/or a Gem, right? Because this is not super complicated. We're dealing with the processing of information. We're possibly going out and collecting some public information using a web search, bringing it back, and maybe producing some new output. Is that the kind of stuff we're asking most people to do: use the general tools that are out there to create these kinds of things, mostly through ChatGPT, Claude, and Gemini? Is that generally where we're going with this?
B
That's exactly what we're doing. I mean, that's where the power is. Those are really inexpensive tools that really accelerate growth and productivity. And like I said earlier, instead of spending six, seven, eight figures to build a single application, you get 50, 100, 200 people building their own little tools. That is massive impact. And the idea flow is inspirational, if nothing else. Their confidence grows, the resistance falls, curiosity rises. It's just the best way I've seen to get where you need to go really rapidly with AI, because those tools are excessively powerful. That's why I say, when we go into an organization, 100% of the time we find that 98% of their employees are operating at level three or below. And the impact comes at level six, which is what we just discussed: building those tools to solve their own problems. Because the only person who really knows the problems they experience is the person in the job, right? And if you can teach them how to solve their own problems using AI, that's gold.
A
I would imagine most organizations are using one of the three tools that we just talked about and possibly Microsoft Copilot as the fourth option for the bigger enterprises. Or are you finding that in some organizations they haven't even standardized on the tool yet? I'm just curious where your thoughts are.
B
Oh, yeah, yeah. There are a lot that haven't standardized on a tool. There are a lot that outlaw tools altogether because they haven't had time to get their heads wrapped around it. They think there's a security issue, which, if they don't use it right, yes, there is. But there are new tools out there that will give you access to multiple models, and I'm thinking of the ones that are more secure. There's, of course, Poe, which has been out for a while. But I'm thinking of tools like BoodleBox, which is largely going after the university market. There's another tool called Nebula One, which is even more on steroids. All of those are, I want to say, HIPAA compliant, FERPA compliant; they have all the compliances built into them. So they're going to be nice and secure, and they give you access to multiple models. Now, they may be a little bit behind. For instance, BoodleBox, I think, just last month gave everybody access to Opus 4.6. So it takes them maybe two to four weeks before they get access to the really frontier models. But it's a great option for people who are scared of the scary side of things.
A
Well, John, we have just scratched the surface. We could go on and on; there's so much more we could talk about. If people want to discover more about your training, where do you want to send them? And if they want to connect with you on the socials, do you have a preferred platform?
B
You know, they can always connect with me on LinkedIn. But here's what I would say. We talked a lot about this assessment process, and on our website we have an assessment we call the AI Impact Analysis. Now, we charge over $500 for it. It's a pretty hefty little product; I think when you get it, it's about 18 pages long. It's a complete, detailed survey, and it takes us about 24 hours to get it back to you. So it's not one of these things that automatically spits out a report no one pays attention to. One of our people, probably me, is going to go through it. But what I wanted to do is offer it to your listeners exclusively, because I'm a huge fan of your work, Mike. You always do such great stuff, and by golly, this is the second time on your show. So here's what I'd say: go to Ingrain AI slash impact, because it's called the AI Impact Report. When you go to checkout, there's a place to put in a coupon code. Just use the coupon code SME, and for probably the next six months I'll drop that to 100% off. So they'll get access to it for nothing. But I just want to make sure everybody knows this is for companies that want to know the actual economic impact of leveling up their employees. What does it look like if I take our employees from a level three to, say, a level five? What would be the economic potential of that, contrasted with maybe some one-offs? I don't care whether you have five employees or 50,000; it's a massive impact. And if you want to know what that number is, the other thing you'll get is an hour with me or one of our certified implementers. But anyway, that's what I'd like to do for you and your crew, Mike.
A
Thank you so much, John. That's super cool. SME, by the way, folks, stands for Social Media Examiner, which is the company that runs all this stuff over here. John, thank you so much for providing that to our audience. Say the URL again one more time for those who want to go there.
B
Sure. It's Ingrain, I-N-G-R-A-I-N, AI, slash impact, and then SME
A
coupon code at the checkout.
B
SME. There you go.
A
Thanks again, John.
B
I would have put Mike in there,
A
but, you know, SME is easier for my audience. Thank you, John, so much for coming on the show.
B
You bet. Thank you for having me, Mike.
A
Hey, if you missed anything, we took all the notes for you over at socialmediaexaminer.com/A105. Be sure to follow this show on your favorite podcasting app. And if you've been a listener for a while, we would love a review on whatever your listening platform is. Do check out my other show, the Social Media Marketing Podcast, and be sure to let your friends know about this show. I'm Stelzner on Facebook, Stelzner on LinkedIn, and Mike Stelzner on X. This brings us to the end of the AI Explored podcast. I'm your host, Michael Stelzner. I'll be back with you next week. I hope you make the best out of your day, and may AI help you become more successful. The AI Explored Podcast is a production of Social Media Examiner. Do you want to go deeper in your understanding of AI? You've been listening to this podcast for a while, but did you know that we have a membership with lots and lots of marketers, entrepreneurs, and creators who learn every single month? Every single month we do live meetups, with professionals who come on and teach, and this is exclusive content only available in our AI Business Society. If you're ready to commit to ongoing development, join the AI Business Society right now by visiting socialmediaexaminer.com/AI.
Title: Upscaling Your People: Advanced AI Training
Host: Michael Stelzner (Founder, Social Media Examiner)
Guest: John Munsell (AI Transformation Expert, Author of "Ingrain Strategy: The Blueprint to Scale in AI First Culture")
Air Date: May 12, 2026
This episode dives into a critical issue facing organizations today: how to move employees from AI beginners to advanced AI contributors. Michael Stelzner interviews John Munsell about advanced AI training, assessment, organizational transformation, and building a culture that extracts consistent business value from AI. The discussion provides a wealth of actionable advice for leaders, marketers, and business owners seeking to “upskill” their people and leverage AI more powerfully across teams.
Interface Simplicity ≠ AI Mastery
Self-Teaching Plateau
Strategic Use & Consistency Is What Matters
Major Upsides
Risk Reduction
Why Training Fails:
Education alone doesn’t deliver lasting change—application in real work is crucial.
Two Essential Starting Points:
Two Core Assessments
Artifact-Driven Progress
"The only person that really knows the problems they experience is the person in the job. If you could teach them how to solve their own problems using AI, that's gold." – John Munsell [43:15]
Assessment Questions [17:32]:
Assessment Insights:
Identifying Opportunities:
Ask, "What do you do every week that’s repetitive, slow, frustrating, or mentally draining?" to surface areas ripe for AI improvement.
The ‘Perfect Day’ Exercise:
Invite employees to envision handing off routines they dislike, then explore how AI could enable that.
Focus on Impact, Not Novelty:
Goal is not inventing for its own sake but impactful, repeatable time savings and quality improvements.
Parallelized Upskilling:
Rather than investing heavily in one top-down AI project, multiplying small wins across all employees lifts overall capacity dramatically.
Examples [36:20]:
Tools Used:
On Misconceptions:
“Asking AI a question is not the same thing as using AI strategically, consistently, and safely, frankly, inside of a business.” – John Munsell [02:21]
On Why Employees Plateau:
“If people don’t get the results that they expect out of the AI, then they’re going to go back to their old way of operating." – Michael Stelzner [07:41]
On Upskilling Benefits:
"Good training is going to include some sort of privacy, some sort of security and governance training, some sort of judgment." – John Munsell [05:58]
On Assessment Framework:
“If you don’t have that as a baseline, and you don’t have a group you can go to who are the champions of that, you can ask questions, then, you know, things fall apart.” – John Munsell [11:00]
On Reimagining Work:
"Should we just use it to speed up the current process or should we redesign and reimagine the process entirely? Because AI changes what’s possible." – John Munsell [31:20]
On Organizational Lift:
"If you could get all of [your staff] properly trained and all of them creating something... Now all of a sudden... the whole thing lifts up. Like the boats start rising up." – Michael Stelzner [33:04]
On Parallel Upskilling:
"Instead of you spending six, seven, eight figures to build a single application, you get 50, 100, 200 people building their own little tools. That is massive impact." – John Munsell [42:17]
This episode is a must-listen for any leader or marketer looking to move from sporadic AI adoption to a truly AI-empowered organization—actionable, candid, and packed with practical frameworks proven in the field.