Transcript
A (0:01)
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B (0:18)
Be honest: how has your working relationship with AI changed in the past few years? Are you still copying and pasting into ChatGPT and hoping for the best? Are you drowning in AI-generated drafts that need more editing than if you'd just done it yourself? Or maybe you figured out a few tricks, but something still feels off. Here's the thing. Most people are still treating AI like a junior version of themselves with better grammar, but the game has changed. In 2023, you learned to prompt. In 2024, hopefully you learned to iterate. In 2025, you started wondering why this still feels so manual. And now in 2026, the skill gap isn't technical anymore. It's managerial. It's human. It's self-reflection. Because the teams winning with AI aren't better prompters, and they don't have access to better models than you. They're just better orchestrators of AI, better at throwing away the old way of work and starting fresh with AI front and center. And that human-AI collaboration shift changes everything about how you work.

All right, does that get you a little fired up to re-examine your working relationship with AI? I hope it does. And welcome to volume four of the Start Here series with Everyday AI. We're going to be looking at human-AI collaboration and best practices for working alongside AI. If you're new here: after almost 700 episodes, we have kicked off our Start Here series. Like I said, this is volume four. These are essential beginner and advanced AI overviews to launch your year strong. So if one of your big goals in 2026 was to learn AI, or to double down, or maybe you've been here since day one, this Start Here series is for you. And we have a special resource, so make sure you go to StartHereSeries.com. That is going to get you free access to our Inner Circle AI community, and it's going to take you straight to our Start Here series page so you can very easily catch up with all of the Start Here series.
Listen: additional resources, related episodes, anything you need to get either started or centered on your AI journey. And if you are very new here, this is part of Everyday AI. It's a daily livestream, podcast, and free daily newsletter helping everyday business leaders like you and me keep up with AI, get ahead, and use it to grow our companies and our careers. So the Start Here series: we're going to do about a dozen or so of these episodes in the first part of 2026, and I hope it's going to be a refreshing look at AI for all of us. Because, yeah, as someone that's done this now almost 700 times, I understand how it can be so hard to just start somewhere. So this is for you. If you missed our last episode, it was volume three of the Start Here series, AI as an operating system. If you are listening on the podcast, make sure to check your show notes. We're going to be updating the show notes of all the Start Here series episodes, so you can very easily flip around that way as well.

All right, let's get straight into it and talk about why. Why is there this big skill gap in 2026? There are those people that are running away with AI, and then there are people that have been using it pretty much every day or every week since the ChatGPT moment of November 2022, and they're still just barely keeping up. It's a shift, a shift from operator to orchestrator. And that's what I think you have to start thinking about: your relationship with AI, if you really want to stay ahead of the best practices. Because your new job isn't doing the work. It's defining what the work looks like, how it gets done, and what information the work needs. You set the parameters, the constraints, and the success criteria, and the AI is probably going to be the one doing the actual work, right? I talked about this literally in 2024, this concept of agent orchestration.
And I believe that's where we're headed. We've really seen that a lot in the last, gosh, only four to five weeks, this big shift coming to mainstream fruition. And I think the winners of this era are those who can effectively onboard, supervise, and audit digital agents. It is really thinking of an entire team that you are now orchestrating, right? And not a team where you're a player-coach. You are the manager. You're not out on the field doing all of the work.

So here's what we're going to cover in today's iteration of the Start Here series. We're going to be talking about the uncomfortable truths of AI in 2026 and our relationship as humans with AI. What so many people do is take their current antiquated processes, which a lot of times are broken, and just find ways to put AI in them. Right in the front, in the middle. They find a broken joint, a leaky funnel: let's slap some AI on it, right? Do you guys remember the infomercial? You slap some Flex Seal on it. That's how people look at AI. They find something that's broken, or something that could be improved a little bit: let's slap some AI on it. That's the wrong way to do it.

I think it's about unlearning, right? I'm going to be griping about a couple of terms that I absolutely hate. One is "human in the loop." We'll get to that later. One is "upskilling" or "reskilling." That alone, I think, has set so many companies back years. Because when the C-suite and the boardroom go around HR and they're looking at AI investment, they're asking, who needs upskilling? Who needs reskilling? They're looking at it as a reactive measure, right? It's almost like a course correction. Who needs upskilling this year? Who needs reskilling in AI? No. You need to unlearn, right?
I've been chirping out this "unlearn" word for a very long time, because that's how I really started getting the most out of AI myself: when I started unlearning good habits that had traditionally led to success. And when you look at your working relationship with AI, that's what you have to start to do. You have to say, no, why would I just upskill with AI? My skills, again, they're not worthless, they're worth less. So if you're still holding on to those skill sets, whether you've been using them for two years or 30, it's probably time to let them go, to unlearn, and then relearn and rebuild from scratch.

Why? Because right now we have, I'm not going to say superhuman AI, but I will say above-human, expert AI that can make millions of decisions per second worldwide, and humans can't keep up, right? So we think of this concept of human in the loop, and that's our job, right? A job we thought was going to last into the late 2020s. It's not. Human in the loop, in my opinion, was dead on arrival, right? Because we thought you could stick any human in the loop and say, okay, let's have this human go look at what the AI is doing. No, it is dead, right? Because agentic workflows create miles-long action traces that humans cannot realistically interpret. We can't keep up. The complexity has far outpaced what a human can review, right? When you're working with a large language model, or with agents that have subagents and can work around the clock, what good is a human going to do at that point? A human in the loop, let's say what it is: it's your last hope that you didn't get something wrong. It's your last hope that a hallucination doesn't slip its way through to production, right?
It's your last hope that you don't make a tragic mistake from what a bunch of agents did. Because if your human in the loop can't push back, pause, or ask hard questions, that's not even oversight. That's a fail-safe, probably in name only, right? No human in the loop can keep up with what agentic AI can do today, let alone next week and next month. And yes, it does change that quickly. My gosh.

So this isn't hype; this is measurable. And the problem is, well, models are smarter than us, so why would we want to keep a generic human in the loop? You need to find what your expertise is, or your team's expertise. What is that one thing that you are always better than the best large language models at? That's what you should be focusing on and doubling down on, and seeing where the overlap is between that one thing and what ultimately makes your company or your team more revenue, right? Because you have to stop comparing AI to perfection. Compare it to human speed and accuracy for the same tasks. And at that point, humans can't keep up in, I'm not going to say every single use case, but the overwhelming majority of use cases. Pick your niche, pick your vertical: financial analysis, marketing, PR, whatever. Pick your job. Find an average person, one of your colleagues, but that colleague that's amazing at AI and knows everything about it. Then pick the smartest person in that same vertical. The smartest person in that vertical cannot compete with the person who knows enough, who speaks the language but is a whiz at AI. It's not a close competition, right? That's like having 20 Michael Jordans on a team versus a peewee team. It's not fair.
So I think that's what we have to get to. We have to understand that these models are now agentic by default. They can reason, they can think, they can plan ahead, they can iterate, loop back, take different paths, use tools, all faster than humans, and all spin off agents that do the same thing, hundreds at a time. They can spin off hundreds of subagents and then come back with what I like to call a mixture of models (like a mixture of experts) that judge all of those outputs, right? Humans can't compete anymore. So why are we still thinking that human-AI collaboration in 2026 means human in the loop? It doesn't, right?

There was a recent study that looked at 700-plus court cases worldwide now involving AI hallucinations and fabricated citations, and that rate is accelerating to a handful of new cases daily. And that's because the AI is too fast, humans are too lazy, and for the most part, organizations aren't training their people. A lot of people assume, oh, if my company pays for a good AI, I'm just going to have it do most of my work. It's going to be right, because my company's paying for it and they're not training me on it. And this is a huge risk. JPMorgan acknowledged this risk openly. They recently talked about how, when systems perform correctly most of the time, human attention drifts, right? If the AI is right 85 to 95% of the time, well, your human in the loop falls asleep, right? Or maybe, if they're a good one, they double down and just approve twice as much. They just wave 'em in, Wendell. Right, Chicago reference right there. They just let everyone go in. They don't care. They're like, all right, everyone's safe. You're in, you're in, you're in, you're in. It's a passive approval of just automating things that need more oversight.
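That "mixture of models" pattern can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual API: `call_model` is a stand-in for whatever real LLM call you'd use, and the toy judge just prefers the longest draft, where a real judge would itself be a model prompted with scoring criteria.

```python
# Hypothetical sketch of the "mixture of models" pattern: fan one
# task out to several worker models, then let a judge pick a winner.
def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[{model}] draft for: {prompt}"

def mixture_of_models(task: str, workers: list[str], judge) -> str:
    """Collect one draft per worker model, then let the judge
    function (usually another model call in practice) choose."""
    drafts = {m: call_model(m, task) for m in workers}
    return judge(drafts)

# Toy judge: prefer the longest draft (illustrative only).
best = mixture_of_models(
    "Summarize this contract's risk clauses.",
    ["model-a", "model-b-large"],
    judge=lambda drafts: max(drafts.values(), key=len),
)
print(best)
```

The point of the sketch is the shape of the loop, not the judge: the human's job moves up a level, to choosing the workers, writing the judging criteria, and auditing the winner.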
And there's a concept here that I want to talk about. It's Amdahl's law, and that's essentially: you can speed up one part of a process, or many parts of the process, but the whole system is bottlenecked by whoever can't keep up, right? It's the old sports cliché, the teamwork cliché, that you're only as strong as your weakest link. And this is true for human-AI collaboration, right? And this is why this is so important.

So few business leaders zoom out and think about this. So few people think of the human-AI relationship. So few people working on front-end AI strategy and back-end AI implementation go through and ask these questions, right? What happens when agentic systems are way smarter than humans and the humans don't know how to run them? When the human that two or three years ago was integral is now a liability? And how do we start to change that human role? Because yes, humans are still needed in all of this. I never said they weren't. But human roles are changing, and I'm going to get to what that looks like. That's why the way humans review everything has to change, right? I'm a huge advocate of this: if you're using front-end large language models, what do you do with all this time savings? What do you do while a thinking model works? You wait, you read the chain of thought, right? You rerun it, you correct it, and you bring in more of your company's data right at the right points, before the model goes too deep into its dive. And that's why the fix is expert-driven loops. That's what I've been advocating, EDL, for a long time now. Not just generic oversight. Because the difference is when you embed experts in building, right?
We don't even have to get too technical and talk about multi-agentic loops. Let's just talk about embedding the right experts and setting up your team's AI processes in your ChatGPT Teams account, right? Or how you're going to tackle work in Claude with coworkers. A lot of times you have one person, sometimes it's IT, sometimes it's your AI champion team (which is important), and they kind of do it for everyone and set it up for everyone, and that's, again, the wrong way.

One thing that was interesting: there was a LegalOn Technologies study looking at this concept of the human in the loop versus an expert loop. One law firm that they looked at put senior partners in the loop. Normally, and not in a bad way, the human in the loop overseeing AI is a younger or more inexperienced person, someone who costs less, because that's what companies think, right? They're like, okay, this is just someone clicking a button, clicking approve. Why am I going to put my senior people on this? Why am I going to put my smartest people on this? In this case study from LegalOn Technologies, they found 86% faster contract review and 65% better issue detection when they had senior partners instead of junior reviewers. 86% faster and 65% better. And that's not versus the human-only baseline; that is better than the AI-augmented junior reviewers, right? Junior reviewers plus AI. So you're getting compoundingly better results the smarter and the more expert the people you put in the right places. And again, it's not putting one expert on one AI-powered workflow or one agent run. It's putting multiple people in there at the right places, right? It's experts driving the loop, not a single human overseeing.
And other studies show this. Other organizations are seeing ROI triple when they move from generic oversight to expert-driven collaboration. So how does it get to this point? It's almost like the better and more advanced the technology gets, the less tech know-how it requires, right? Anyone can go in there, click a button, and set up AI agents that are connected to your data. Literally, right? This show's been going for 19 minutes. You could have set hundreds of agents up in 19 minutes, I kid you not. One click. Very easy. But poorly implemented AI can crush productivity, because you just end up spending more time correcting errors, managing expectations that went awry, and running multiple parallel backups.

So I like to say this: if you had a bad workflow and you upskilled or reskilled that workflow with AI, you can't just throw makeup on an ugly process and think it's going to be pretty. It's still ugly. Now it's just got some makeup on, a little shine that it doesn't deserve. All this does is recreate workflows that weren't working. And if anything, it just creates this augmentation debt that ultimately tanks productivity. Because now, instead of getting more things done in a better way, you're just getting more things that need fixing faster, with potentially more errors. Because you're not putting the right people in the right processes. You're putting anyone in old processes. You have to rebuild them to be AI-native.

So how do you do this? It's a mindset shift. Like I said, if your team wants to excel and outrun the competition in 2026 and beyond, you can't just use AI, you can't just leverage AI, you have to orchestrate it. Right.
What's funny is, this is one of the rare times, I'm looking at my other screen here, that I actually don't have agents running, where normally I would. For the most part, we have to start thinking of ourselves as orchestrators, or, as I like to say sometimes, tastemakers. You need to provide that context. "Context engineering" is going to be one of those buzzwords of 2026. In 2024 I called it first-party company data, and I still think that's more realistic for what you need. Context can be anything; context needs more context to be defined. But context engineering, to put it simply: when I said, hey, in 2023 you were prompting, and then you were iterating, and then you were providing more context? That's just more data and more direction to a model before it goes off and does its thing. Before, when the models were non-reasoning, non-thinking models, they would just spit something back pretty quickly. Now they might go work for five minutes, ten minutes, longer. On the back end, if you're using it via the API, it's pretty easy to get these things working for hours. On the front end, you might have a model work for 5, 10, 15, 20 minutes. I had an actual model, not a deep research run, run for I think 92 minutes the other day. You have to give it the context, and at that point you're orchestrating, right? I'm looking at different models and different agents, what they bring to me, and I'm saying: this is good, this is good, this isn't. Let's rebuild that, right? If I'm in Claude, I'm saying, let's update this skill. If I'm in ChatGPT, I'm going and updating that GPT, right?
You always have to be improving the processes, not just trying to sprinkle some AI on an old process that's broken. And that's why we're talking about these things like agent supervisors and orchestrators. This is a fundamental shift in how work is getting done. This is the difference in human-AI collaboration, and why I think you need to rethink your working relationship with AI, if you're still using it like you were in late 2022 or 2023. We tackled this earlier in our Start Here series: treating AI like an operating system. That part covers the tech. This part, volume four here, tackles human-AI collaboration. That's the mindset shift. That's going from an operator, "I'm the one pushing the buttons," to, nope, I'm orchestrating an agent that's technically going out there and pushing the buttons and coming back, and then I'm telling it how to improve, right? I am building that expertise. Even if you don't know yet what that one thing is, the thing you're definitely smarter at than these AI models that are genius-level on offline IQ tests. You might be saying, okay, what could I be smarter at than an AI model that knows everything if you give it the right context? Well, you don't know until you look at its chain of thought. You don't know until you've put in 5, 10, 15, 30 hours on a project, having multiple AI models go and do something that you know how to do front to back. That's how you carve out your expertise. And then you have to deploy that and duplicate it across your team or your organization. That is how you shift your mindset from operator to orchestrator. Because then that output compounds, right? Your ability to generate revenue compounds. Your ability to do things that you didn't have time to do last year, all of a sudden, that frees up, right?
And this is where the most advanced businesses are shifting right now. So, last thing I want to talk about. Actually, last two things.

Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft, and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on gen AI.

So, where does our human expertise actually belong, right? Where should we be spending our time? Well, you have to go do what I just said. You have to make that transition successfully from operator to orchestrator. But then I want you to think about this jagged frontier of capabilities, right? Because there are things that humans are good at, and then there are things that we're terrible at, and those things are changing all the time, right? And the same thing with AI models. There are things AI models are amazing at that we never thought would be possible, and there are things that they kind of fail at. So right now, this is today: if you're listening to this in January or February 2026, it's probably still accurate. If you're listening to it in, you know, July or September, this could be different.
But right now, humans win at high-context empathy, ambiguous decisions with incomplete data, accountability, and novel judgment. Reading between the lines where there's no structured data or company context to fill the cracks. Right now, AI wins at pretty much everything else: data synthesis, first drafts, pattern recognition, repetitive cognition. All of those are the danger zones if they're your competitive moat, or what you think is your competitive advantage right now. If your department, your career, your company is built on data synthesis, first drafts, or pattern recognition right now, you've got to find your pivot. Because right now, the number one predictor of human-AI success isn't just the model you use. It's the quality of the context and the procedures that you create and provide. Because if garbage context goes in, a worse outcome comes out.

So here's some quick takeaway advice. I can't sit here and give you advice for how to set up your specific agent orchestration, but what I can tell you applies to you as an individual using different platforms, right? One of the biggest mistakes is skipping over repeatable and scalable context. So stop starting from zero. Start this at the individual level, but then take it to your team. Stop starting at zero every single prompt. You have to start building context vaults. You can think of those as skills, right? You can have a markdown file with all your skills, with your company knowledge. If you use Claude Skills, you're probably familiar with these markdown files. Start creating these skill files based on the tasks that you and your team repeatedly do. I'm constantly updating mine, right? If you're using Claude, as an example, you can update them right in chat.
If you're using the GPT builder in ChatGPT, you can do it there as well, right? But you need to be building and reusing these: custom GPTs inside of ChatGPT, Claude Projects and Skills, Google Gems, whatever. You need to have that RAG for your personal use, right? When we think about retrieval-augmented generation, we usually think about vector databases that would have cost millions of dollars to build three, four, five years ago. We think about very complex things. I want you to think about your personal RAG, your personal retrieval-augmented generation. What are those things? I update mine all the time, right? As a small business owner, these are the things that I'm thinking about. These are my important facts and stats, about my role, about what I'm trying to drive. Here are my KPIs. And I'm constantly updating these things. You need to have your personal context, your team context, your company context, your competitive landscape context. You have to have these things be reusable, because if you're just starting at zero, you're wasting time. That's being a button pusher, right? Instead, agents that don't need the button pushed are already doing it, already delivering it. They have all that context and can use it in a repeatable and scalable way.

And then, last but not least, you need to elevate your champions. Here's what I mean by that. Funny, I had Chris Caldwell, the CEO of Concentrix, on a couple weeks ago, and he kind of said, hey, you don't want a hundred Jordans running around, do you? And you don't. But I'll tell you this, and I don't want this to come off the wrong way: you need people like me on your team, and you need a lot of them. Here's what I mean. You need people whose main job, maybe their only job, is to keep up with AI every single day, right?
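The context-vault idea above can be sketched in plain Python. This is a hypothetical illustration, not any platform's actual API: the file names, the vault layout, and the `build_prompt` helper are all assumptions, standing in for whatever custom GPT, Claude Project, or Gem you'd actually configure.

```python
# Hypothetical sketch of a personal "context vault": reusable
# markdown files prepended to every prompt so no task starts
# from zero. File names and contents are illustrative only.
import tempfile
from pathlib import Path

VAULT_FILES = {
    "role.md": "# Role\nSmall-business owner; host of a daily AI show.",
    "kpis.md": "# KPIs\n- Newsletter growth\n- Training revenue",
}

def load_vault(vault_dir: Path) -> str:
    """Concatenate every markdown file in the vault into one
    context block; edits to the files flow into every prompt."""
    return "\n\n".join(f.read_text() for f in sorted(vault_dir.glob("*.md")))

def build_prompt(task: str, vault_dir: Path) -> str:
    """Reusable context first, then the task, so the model gets
    the same grounding on every run instead of a cold start."""
    return f"{load_vault(vault_dir)}\n\n## Task\n{task}"

# One-time setup: write the vault to disk (a temp dir here).
vault_dir = Path(tempfile.mkdtemp())
for name, text in VAULT_FILES.items():
    (vault_dir / name).write_text(text)

prompt = build_prompt("Draft this week's newsletter intro.", vault_dir)
print(prompt)
```

The design point is that the vault is edited once and reused everywhere: update `kpis.md` and every future prompt picks it up, which is exactly the "stop starting at zero" habit described above.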
Every large organization needs dozens of people in my position. All they do is read about AI every day. They're scoping out new projects, measuring, building modular backups, right? Because you don't want all of your systems on one model; the model changes, and your whole company comes to a halt. And then you need to constantly have those people training the non-champions. So you need to find where your time-savings opportunities are. You need to scope those, measure those, and then deploy the ones that can gain back time the quickest. And then you apply those saved hours to creating your dozen Jordans on your team, your domain experts, right? It might feel weird to start automating some of their work or some of their roles, but then you find the people who are actually able to automate parts of their job, and they teach others and build scalable systems. There are still new lines of revenue to build in your company, right? Yes, I do ultimately think AI will cause a net negative in the job market, but there are millions, tens of millions, of new jobs that we have no clue about yet that are going to exist in three years. And you need those people, those champions, on your team. You need to elevate them, challenge them, and deploy them, right? They need to be listening to this show every day, and other AI podcasts, AI YouTube channels, reading newsletters. And you need to be scoping, breaking, and training people every single day. But even the largest organizations aren't doing that.

So stop looking for cool AI tricks. Focus on automating the dull stuff first. Get rid of shiny AI object syndrome. The invoices, the summarization, the filing: all those boring AI things beat the flashy AI every single time.
Because as AI starts to handle more and more digital interaction, face-to-face and high-empathy relationships are going to become your company's differentiators, right? But you can't have that if you're still the one pushing buttons. You can't. The human premium is rising, so you have to use it wisely. And you can only do that if you go back and follow the steps we just laid out: re-examining human-AI collaboration, going through the best practices of shifting away from being the operator, the button pusher, the ChatGPT prompter, into being the orchestrator, the tastemaker, and the champion that's pushing your organization to do the same, top to bottom.

All right, I hope this Start Here series episode was helpful. Volume four, done, in the books. If this was helpful, remember, please go to StartHereSeries.com. That's going to give you free access to our Inner Circle community, and you're going to see all of the Start Here episodes right there. We're going to keep adding more and more additional resources to help you with your journey. So whether you're just starting out, I hope this episode helped you better understand some things, or if you're an expert doing this every day, I hope this challenged you to look at AI a little bit differently and to push even your own human-AI collaboration. Thank you for tuning in. I hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
