Transcript
A (0:01)
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B (0:16)
How are you using AI right now? If you're like most people, it's intermittent, maybe to start a project, somewhere in the middle, at the end, or maybe for the whole thing. But chances are you go out of your way and make a choice to go use AI. It's not going to be like that anymore. I like to talk about AI like the Internet. You can't work without the Internet, right? And I'm not just talking about going to different websites. Your email, your CRM, your project management system, your ERP, everything is connected to the Internet. We don't make a choice to use the Internet anymore. It's just there, and we need it. That's the parallel I'm trying to draw here. I think most people for the last couple of years have had to make a choice to use AI. In 2026 and beyond, it's not like that anymore. I think AI is, for better or worse, going to infiltrate every single corner of where and how we work. So on today's show, the Start Here series, Volume 3, we're going to be talking about how I think you should be using AI as an operating system, not as an afterthought, not intermittently, but as the foundation of how you and your team work. All right, I'm excited for this show. I hope you are too. Welcome, and what's going on? Welcome to Volume 3 of our Start Here series. If you're new, our Start Here series is Everyday AI's essential beginner and advanced AI overview to launch your year strong. So if you haven't heard, you should probably go to starthereseries.com. That will get you free entry into our Inner Circle community and all of these Start Here series podcasts, so you don't have to go fishing for them on our website. They'll all be right there in that space, and you get access to our free Prime Prompt Polish ChatGPT prompt engineering course. So make sure you go check that out. And if you're very, very new here, well, this is Everyday AI, and we've now done almost 700 podcast episodes, and one of the most common questions I get is, where do I start?
And that's why we started the Start Here series. If you missed our last Start Here episode, it was episode 693, so you can click back a couple of times on your podcast player. But we talked about AI without the jargon. We kicked this whole thing off by talking about what generative AI and large language models are, and in Volume 2 we dove into the language and the lingo, because if you don't understand that, you can't compete. And today we're going to be talking about how I think you should be looking at AI as an operating system, not as a tool that you selectively go use. So why? Well, I think AI is no longer an app you visit; it's becoming how you work. If we think back to the early days of ChatGPT, which I think was most people's introduction to generative AI, and even to last year, I think most people using AI still went and sought it out. It's not going to be like that for much longer. Why? Because wherever you work, AI has already infiltrated it, and you can't really get around it, just like you can't do knowledge work without using the Internet. Think about it. If you're in Microsoft Excel, Microsoft Copilot's there. If you're opening up a blank Google Doc, Gemini's there. If you're in Salesforce, Agentforce is there. It's very hard now, regardless of where you work, for there not to be AI already embedded there. Yet you still have to kind of go out of your way to use it. But we're going to talk about how I think that's not only changing, but how you as business leaders need to shift your mindset and understand what I think, what I know, is going to happen: front-end large language models are going to be the future interface of work. And yeah, we've got receipts, y'all, don't worry. So in today's show, we're going to talk about how the Internet is becoming a background service for your AI assistants, probably nothing more, and how that changes knowledge work.
We're going to be talking about the protocols and platforms that are making this shift to an AI operating system happen, such as ChatGPT apps, the new Claude Cowork, and MCP from Anthropic, their Model Context Protocol, and how that helps agents not need us for the things we've been doing a lot of over the last year or two. And last but not least, how business leaders need to change, because that's the reality. A lot of people think of AI as a tech implementation. It's not. It's people management. And you need to know the adoption stats, the risks, and how to choose your AI operating system. So we're going to be diving into all of that, but let's start at the top. Why do you need an AI operating system, and what the heck is it? It's not an official term. I don't know if I made it up, but I've been talking about AI operating systems for more than two years now. The reality is that right now almost 2 billion people use AI almost weekly, and 90% of knowledge workers who use AI report higher productivity than those who don't. Obviously, right? And it's not just how we use AI; AI's capabilities have greatly changed, I think especially in 2025, more so than in the two years prior. The main reason for that is two big advancements in large language models. Number one, bringing in your company's dynamic data. That's big, right? That's what cuts down on hallucinations. And number two, collaboration. Large language models and front-end AI chatbots are no longer just a helpful assistant you chat with. Now your business's data is there and you can collaborate with team members, working in the same chat like you're sharing a Google Doc, all collaborating live in real time. And that's why integrated AI systems are now a baseline requirement.
I like to think most companies at some point in the '90s or early 2000s made a decision. They said, we're a Windows organization, a Mac organization, or a Linux organization. You chose your operating system, and then your front-end processes, what happened before the computer, on the computer, and obviously after, were largely based around the operating system you chose. I think the same is true for the choices ahead of us now as everything becomes AI native. That's because a single person with good AI skills, using the right agents and the right copilot, can accomplish the work of a larger team. So naturally we have an extremely strong orchestration layer that's already in place, and it's just begging to be used. The smartest organizations I talk to, and I'm lucky enough to talk to hundreds of very smart people in AI every single year, have already done this. They've already brought their entire team and their entire processes into ChatGPT Enterprise, or Google Gemini Enterprise, Claude Enterprise, or obviously Microsoft 365 Copilot. And they've changed their processes. Not just, oh, we have a 10-step process and we're going to use AI at steps two, four, and seven. No, they said that 10-step process is now a three-step process. A human orchestrates something, agents go and do something, a human checks it on the back end, and yeah, we went from 10 steps to three and from 10 hours to 30 minutes. Those are the moves that business leaders are making right now. And that's why you need an AI operating system. All right, we're not going to dive too much into which one you should choose; maybe we'll do another show later in our Start Here series on that.
But I think you essentially have four choices, and if you're a Microsoft Copilot organization, you might be able to choose two, so let's start there. Microsoft 365 Copilot, hats off to them, they technically started the enterprise AI craze. This is back when large language models were essentially front-end chatbots and that was it. Microsoft was the organization that brought enterprise AI into the mainstream by launching Microsoft Copilot. It is extremely robust, it is wildly flexible, yet the learning curve is bonkers. That's one of the reasons I think Microsoft Copilot hasn't just run away with it, and they haven't been running away with it for years. Yes, there are a lot of technical intricacies that I don't even understand, and the majority of people, unless you work in IT, don't either. But mainly, I think, it's permissions. Everyone I talk to at a Microsoft 365 Copilot organization sees all these great features that Copilot has, and the overwhelming majority of enterprises don't know how to get them set up. Essentially you have gatekeepers and bottlenecks. Usually it's waiting for approval. People are like, why don't I have this? Someone else has access to this in Copilot Studio, I want that. No, you can't have that. How do we get it? That's what a lot of it is. The organizations that already have robust Microsoft training in place are the ones that have done really well here. But Microsoft 365 Copilot works at the OS level. It's the only AI that's baked into an operating system; Apple is not there, and they're very far from being there anytime soon. Next, Google Gemini. They won 2025 and they're innovating faster than anyone. And similarly to Microsoft 365 Copilot, you can get Google Gemini in a lot of different ways. People don't even know this.
As an example, the Gemini that you use and have heard of is probably not even the best Gemini for teams. About four months ago, Google actually came out with business versions of Gemini called Gemini Business and Gemini Enterprise. They're separate. So I have a paid Gemini account for my personal Gmail, a paid Gemini account for my work email, and a paid Gemini Business account. All different, and they all work a little differently. That's one of the pros and cons of Google Gemini: all the versions work a little differently, and there are pros and cons to each. And that's not even the tip of the iceberg of where you can use Google Gemini. There's AI Studio, and they have so many different vibe-coding platforms. Google Gemini is great, and it's all over the place. Then you have Anthropic Claude. I will say this: I am much more bullish on Claude than I was probably two months ago, and a lot of that has to do with two big pieces I've seen from Anthropic. One is they were the first to really popularize file creation, which might sound crazy, right? Why haven't large language models just been doing that from the start? Well, Anthropic popularized it. They were the first to get Excel sheets right, which is kind of crazy, in some instances even before Microsoft 365 Copilot, and the same thing with PowerPoint. Claude really pushed the boundary there. And then they also just came out with Claude Cowork, a great tool. FYI, we're going to have a show on that tomorrow. But I think in 2024 and the early part of 2025 they were overly focused on coding and software engineering, and I do think that over the last quarter or so they've made a pivot to really go after the non-technical business leader. They also have a great enterprise team product as well. And then last but not least, ChatGPT. In terms of users, no one's coming close.
In terms of retention, no one's coming close. Stickiness, no one's coming close. But it seems like they're really focused on consumers. Yes, they have a great ChatGPT Business tier and a great ChatGPT Enterprise tier, where reportedly 92% of Fortune 500 companies use one of their products. But I do think that right now OpenAI is maybe a little more focused on the consumer side. They recently announced ads, a lot of things like ChatGPT Health. But maybe it makes sense. Maybe their strategy to bring teams into the fold is to become so sticky in your personal life that it's hard to use anything else. So when you talk about bringing your team, like I said, all four of those obviously have great options for individuals, small teams, large teams, and enterprise organizations. You have to choose which one is right for you. We're not going to go too far into that, but in my opinion, those are your choices, especially if you're in the US. Yeah, you could look at open-source solutions; I'm not going to get into that. I think you have to choose, just like you choose Windows, Mac, or Linux. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more but can't really get traction or find ROI on gen AI. Hey, this is Jordan Wilson, host of this very podcast. Companies like Adobe, Microsoft, and Nvidia have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use gen AI. So whether you're looking for ChatGPT training for thousands or just need help building your front-end AI strategy, you can partner with us too.
Just like some of the biggest companies in the world do. Go to youreverydayai.com/partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on gen AI. And this is actually a very telling sign. We covered this a couple of months ago when it was first reported, but there was a report that Microsoft CEO Satya Nadella essentially warned his employees that Microsoft, Windows, and their Office suite could become obsolete. Yeah, he said they had to reboot and focus on AI or the company could become obsolete. Now, if that sounds like a wild statement, well, it kind of is. I mean, Microsoft has, for the past three to four years, traditionally been a top one, two, or three company in the world when it comes to market cap. Yet Nadella warned that without a fundamental overhaul and a focus on AI, Microsoft's most successful businesses could become irrelevant as AI agents replace traditional software. Yeah, saying no one's going to use Office anymore, no one's going to use all of these different tools. And that's why Microsoft reportedly is transitioning from a software factory to an intelligence engine. Satya Nadella, one of the smartest tech people probably to ever live, is saying, hey, even we, the largest company in the world, the company that powers the enterprise when it comes to an operating system, are not going to make it, because the business world is transitioning away from that. It's all about your intelligence engine; it's about how you put your knowledge to work using AI. And he said Windows is becoming an AI-first operating system where the keyboard and the taskbar become secondary. His goal is for every Office application to function as a development environment for AI agents. Think about that.
That's right: billions of people use Microsoft, and his goal is that all of that becomes a base for agents to work. That's the shift, and you should probably be listening and paying attention to what he's saying, because that's the truth. And it's also because, like I said, large language models are much more than chatbots now. They're agentic, they're agents. You go into ChatGPT, and people don't know this, you can schedule an agent, you can give it access to your connected and synced data, you can log into certain websites, and you can say, hey, every day at 7:30, go do these five tasks for me. People are still looking at large language models as fun little toys to help you write better emails, and they're not. The orchestration, the tools, the browsers, code executors, app connectors, bringing in your company's dynamic data, it's huge. And as memory and context improve, so too does an agent's ability to orchestrate, to plan and execute tasks. As the memory, the context window, the personalization, and the ability to properly use data improve in harness with these models, it's nuts. And this is the cognitive layer, the intelligence layer, that Satya Nadella is talking about, as the models get exponentially smarter, get more and more access to our data, and get better and better at using tools. Hello, people, this is what humans do, right? I've talked about it once or twice through this Start Here series, and I'll probably talk about it a little tomorrow when we cover Claude Cowork, which is very much related to this show: I think a lot of the humans in companies that are ahead in AI are acting as duct tape. We're duct-taping and connecting different agentic flows. That's a lot of what I do all day. I have a bunch of agents going to do things.
I orchestrate them, I tastemake. I'm like, good job, Agent One. Hey, Agent Two, that's not what we wanted. I'm duct-taping, man. Even though that sounds weird, I think that's where we're headed, because large language models are not chatbots. They're literally that human intelligence layer Satya Nadella was talking about. So let's get into the web, and how we can't work without it. If you're listening to this or watching this, it's because of the web, because of the Internet. Everything we do is the Internet. If you're a knowledge worker, it is because of the power of the Internet. I like to tell people you need to replace the word Internet with AI, because the two are going to become one. Think of a Google search. Eventually it's going to be just an AI Mode search. Google has already said that that is the end goal: everything's going to go to AI Mode. So are you going to browse the web in the future? I'd say probably not. Browsing the web is an absolutely terrible experience. For the most part, I think the web will be made for agents, and in the same way that we have these protocols that allow agents to talk to each other, they're going to be doing the web browsing as well. I mean, look where we are now. I think some of the best advancements of 2025 were agentic browsers. If you had to give categories a grade, I think niche agents got an A, general agents that we thought were going to take over got like a C-plus, and agentic browsers got an A as well. Agentic browsers performed much better, at a higher aptitude, than agents did. So you work autonomous capabilities into your browser and then pair it with memory, which I think OpenAI's Atlas does really, really well.
That gives your browser the ability to, number one, just perform tasks for you, but also to bring your context and your history over. Think about an example: filling forms. Why should we do that anymore? It's stupid. If an agentic browser has all of that in its memory, has all that information and can navigate those sites, we should never be filling forms. That's why I think so much of what's happening on the web is just going to become for agents, and I don't think we're going to be browsing the web much longer, which does change the future of work, because that's what we do as humans: we go do work on the web. Agents are better, if you know what you're doing and you're using the right model, the frontier, state-of-the-art models. They're smarter than us. Look at offline IQ tests; they're nearly at genius level, smarter than probably 99.99% of anyone listening, a thousand times smarter than me. So what used to take many searches, clicks, and tabs is now delivered as one concise answer. So let's get back to how using AI as an operating system can solve that, because one of the places I think we waste so much time is context switching. Every extra login, every copy and paste. Oh my gosh, this formatting's wrong coming from Copilot to ChatGPT, I need this in markdown, it's not working. We spend so much time reformatting, copying and pasting, switching tabs. Analysts call this the app-hop tax. Going between all these different apps and all these different softwares, you lose so much time. And to be honest, aside from summarizing, synthesizing, and personalizing information, that's what knowledge workers do. We use our brains to summarize, synthesize, and rewrite information, and then we context switch. Okay, I'm going to read this industry report.
I'm going to personalize that context and take it to my team. Okay, whatever the team says, then I'm going to go put it in a blog post, then put it on the web. Okay, our customers are going to read it, and I'm going to sign up calls with the customers. All you're doing is shifting that context over and over and over again. It's got to be agentic. Right now that's friction, because agents can handle most of it already. In an agent-native flow, an LLM-native flow, an agent can execute everything inside of a single thread. Think of those multi-step tasks that you do every single day. I'll even give you a very quick personal example of how it's changed for me. When I first started this show, there wasn't a lot of AI out there. Google didn't have AI, Anthropic didn't exist, there was no Meta Llama. There was ChatGPT, and it was a nice little chatbot. So how this used to work is I just spent a lot more time reading and synthesizing information. The quality of my shows couldn't be as good because it took way too much time. I wasn't able to go as deeply into topics; it was a lot more interviews, a lot more topical things, without being able to dive deep and, I think, have much higher quality. Now I have multiple agents going out, looking at my content every single day, looking at my podcast stats every single day, looking at what people are clicking on in the newsletter every single day. And based on all of that, my context, my point of view, what stats and podcasts are performing well, what people are clicking on, they go in real time and produce reports autonomously.
I have an agent that looks at the reports from multiple agents, and they get back to me and say, hey, based on what you cover and what your audience cares about, here's what you should be covering. And I look through that, as the orchestrator on the front end and the tastemaker on the back end, and I say, okay, great. I still take the time to read it. I always go through and read the chain of thought, you know, Observability and Traceability 101. But it's no longer a step that my team and I have to go do over and over and over. And that's the promise of an AI operating system, and it's a lot of the recent technology that's actually going to make it work. I think one of the things you have to call out by name is Anthropic's Model Context Protocol, and actually a pretty important recent update to it. In December, Anthropic donated MCP, making it open source, to the Agentic AI Foundation under the Linux Foundation, and that same foundation is also supported by OpenAI, Google, Microsoft, and AWS. So right now, the easiest way to think of MCP: people call it the USB-C of AI. Everything plugs into it, it just converts, and it's good. It's like one of those dongle converters where you can put anything on one end, put anything on the other end, and it just works. It's kind of like how websites have APIs and can talk to each other. Websites are built on all kinds of different frameworks, and it doesn't matter. If one thing's on React, one thing's on PHP, one thing's in JavaScript, HTML, CSS, all those websites can talk to each other even though they're built in different languages, because the API gives them a common way to communicate.
And that's what you have now with the Model Context Protocol. It allows AI agents, AI models, and AI tools to all talk to each other. Part of that is great, right? That's amazing. And it's also great that OpenAI, Google, and Microsoft, technically Anthropic's three biggest competitors, have all adopted it and put it to use in their products for their users. But at the same time, you also have to think: wait, doesn't that, in theory, take away a lot of the work a human would be doing? Absolutely, it does. I don't think that's a bad thing, but that's a lot of it, whether you know it or not. If you took the time to really go deep on the Model Context Protocol and look at what's capable, as an example in Claude Cowork, or what's available with OpenAI's Agents SDK, or with the Model Context Protocol in Copilot Studio, you'd see that just about any website or app you use is reachable. Right now, maybe a human is interacting with that data via AI and going back and forth: human one is working with tool one with AI, human two with tool two, human three with tool three. You don't need that, because now you don't need this vertical integration, humans working vertically, just up and down, with a single tool. Those AI tools and softwares can all work horizontally across the plane as well because of the Model Context Protocol. They can talk to each other. You don't need multiple humans using different AI processes to talk to the 20 different pieces of enterprise software that you use. You can have one agent that chains them all together using the Model Context Protocol. That's where expert-driven loops come into play.
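For anyone reading along who wants that USB-C analogy made concrete, here is a tiny hypothetical sketch in Python. This is not the real MCP SDK; the class, tool names, and JSON shapes are invented purely to illustrate the core idea, namely that tools describe themselves in one common schema and any agent can discover and call them through a single interface.

```python
import json

# Hypothetical sketch of the idea behind a tool protocol like MCP.
# Every tool registers a name, a description, and a handler, and any
# agent can discover and invoke tools through one shared interface.
# This is NOT the real MCP SDK, just an illustration of the concept.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, handler):
        """Expose a tool under a common, discoverable schema."""
        self._tools[name] = {"description": description, "handler": handler}

    def list_tools(self):
        """Agents call this to discover what the server offers."""
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call(self, request_json):
        """Agents invoke any tool with the same uniform JSON request."""
        request = json.loads(request_json)
        tool = self._tools[request["tool"]]
        result = tool["handler"](**request.get("arguments", {}))
        return json.dumps({"result": result})

# Two "apps" built on totally different stacks expose tools the same way.
server = ToolServer()
server.register("crm_lookup", "Find a contact by name",
                lambda name: {"name": name, "stage": "qualified"})
server.register("calendar_free_slots", "List open meeting slots",
                lambda day: ["09:00", "14:30"])

# Any agent, from any vendor, speaks the same request format.
print(server.list_tools())
print(server.call(json.dumps({"tool": "crm_lookup",
                              "arguments": {"name": "Ada"}})))
```

The point of the sketch is the horizontal part of the show's argument: once every tool speaks the same request format, one agent can chain a CRM, a calendar, and anything else without a human shuttling data between them.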
And right now, some estimates suggest that 90% of organizations are already using MCP servers. Yeah, especially if you're a Claude user, you're probably using MCP servers without even knowing it, particularly if you're using their desktop application, which is really good; that means you're using some of their built-in MCP servers. So, you want more proof that the AI operating system is the future of work and that you need to be bringing in all your processes? All right, I'm actually going to start with Microsoft Copilot Studio. They just launched their computer use in public preview. That means they want everyone to be using it, which gives them more data, but they have a great computer-use agent. So all the things that us humans do, we've been training models on a lot of that data for probably the last two years, and they're going to get even better at using computers. Which is, wait, what? That's what we do. The biggest companies in the world are creating specific models to go and do the work that we do: navigate between applications on the desktop, files on your local machine, the terminal on your local machine, AI models, the web. That's there. That's what Claude Cowork does. It was just released last week. It's a non-technical version of Claude Code that allows you to navigate and orchestrate files on your local machine, a virtual machine, a virtual terminal, as well as websites. It brings that all together. But I will say, I think the biggest nod to the AI operating system is what ChatGPT is doing. I don't think a lot of people are talking about it because they're comparing it to the iOS App Store, but their ChatGPT apps are all the proof you need that the Internet is going to die and the line between AI and the Internet is going to blur. So, what tools do you use on an ongoing basis? I'm going to name a couple here. Ready?
Photoshop, Canva, Agentforce, Asana, Box, Clay, Dropbox, GitHub, Google Drive, Gmail, HubSpot, Monday.com, Lovable, Ramp, Pipedrive, Replit, Notion, SharePoint, Teams, Slack, Stripe, Zoom. Chances are that's pretty much a lot of people's tech stacks, right? I probably covered at least half of your tech stack, maybe two-thirds; maybe that's all the tools you use. And that's not everything. What are those? Those are apps that are available today in ChatGPT, and let me tell you why this is a huge nod and a big reason why you need to get your team working in an AI operating system and bring all of your day-to-day processes into, as an example, something like ChatGPT Business or ChatGPT Enterprise. Those apps, okay, yes, you can technically use ChatGPT to interact with a graphical interface, but they also bring in all of your data. So, as an example, all of your data if you use Dropbox or Box or Google Drive; all of your company's CRM data if you're using HubSpot or Pipedrive or Monday.com for your CRM, and there's also ClickUp on there; your Zoom meetings, your Slack messages, your Teams messages. All of that can now be remembered and shared in the same context window by multiple people in your organization, by an agent. You get where we're going here. You don't have to go into those 10 different tools anymore and context switch, or go between 10 different people with all these broken lines of communication, even with yourself: oh my gosh, I forgot what I did yesterday. Yeah, I was using an AI tool inside Dropbox or inside Google Drive, and well, now I forgot. That's because when you're going on all these separate runs, when you're using 10 different tabs and five different apps, you're losing all the context. When you bring it inside ChatGPT or something like that, it has it all.
And the ability to triangulate all the different opportunities in there, that is the real promise of an AI operating system. It sees the things that you see and it can compound your impact. But the biggest thing is that it's going to see what you can't see, because it has access to all of that data, real-time, dynamic data, and an agent can just be working for you around the clock. And even if there's a certain piece of software where you're like, yeah, half of our team works here, there's never going to be AI in this. Yeah, there is. A Gartner study said that 40% of enterprise apps are going to have agents by this year. So essentially all enterprise software is going to be agent-led anyway. So don't think, oh well, right now there's no AI present in this piece of software we use, so it's not going to be touched. Yes, it is. Everything's going agentic. Not only that, but with the MCP protocol and some of these other protocols, that really takes away the traditional human work and turns us into orchestrators, tastemakers, and experts who should be driving these loops. So choosing your AI platform is now probably more strategic than choosing your actual operating system, than saying we're a Microsoft organization or we're a Google organization. It is huge. But you have to make sure you understand the data side as well: the governance. You have to make sure you understand what controls you have, human approval for sensitive actions, audit logs, how you tackle prompt injections, making sure your data is integrated the right way, and making sure that whatever systems you're using still adhere to the permissions you've set in those tools. And also portability.
Are you going to rely on just MCP? Are you going to build modularly? Because if all of your workflows depend on one single AI and you haven't been testing and scoping them in any others, and that model has a pretty big change, as an example, going from GPT-4o to GPT-5, maybe things don't work like they used to, right? Especially when we talk about agentic models that are doing a lot of the work by themselves under the hood. What if their tooling really changes? Then switching becomes nearly impossible. So not only do you have to be smart in how you set it up, but you should also always be running backups and building modularly. I'm not saying you need to be working in multiple AI operating systems, because I don't think that's the best way, but you should have at least a small set of your team always looking at, what's our backup plan? Because people are always like, okay, well, with all this extra time, what should humans be doing? Well, hopefully focusing on higher-value work, more creative work, more strategic work. But you need people like me on your team who are keeping up with everything every single day. You need people who are creating backups and redundancies, and reviewing those redundancies. That's what you should be doing, because like I said, human work is changing. So to quickly recap: the line between large language models and the Internet is going out the window. In the same way that all of our work now happens on the Internet and we don't necessarily make a choice to use the Internet, when you turn your computer on, there's your Outlook, it's on the Internet, right? Your Teams, everything is powered by the Internet. And that's where we're headed with AI. If you listen to this in January 2027, I think you'll agree: we're no longer going to be making an active choice to go use AI like we have for the past two or three years.
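To make "building modularly" concrete: one common pattern is to write every workflow against a thin interface rather than against a single vendor's SDK, so a backup provider can be swapped in with one line if the primary model changes behavior. This is a minimal sketch, assuming hypothetical stand-in providers; the class and model names here are illustrative, not real SDK calls.

```python
# A minimal sketch of "building modularly": route every workflow through one
# thin interface instead of hard-coding a single vendor's client everywhere.
# PrimaryProvider and BackupProvider are hypothetical stand-ins, not real SDKs.

from dataclasses import dataclass
from typing import Protocol


class ChatProvider(Protocol):
    """Anything that can answer a prompt. Swap implementations freely."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class PrimaryProvider:
    """Stand-in for your main AI platform (e.g. a GPT-class model)."""
    model: str = "primary-model"

    def complete(self, prompt: str) -> str:
        # In a real system this would call the vendor's API.
        return f"[{self.model}] answer to: {prompt}"


@dataclass
class BackupProvider:
    """Stand-in for the backup platform a small part of your team keeps testing."""
    model: str = "backup-model"

    def complete(self, prompt: str) -> str:
        return f"[{self.model}] answer to: {prompt}"


def summarize_meeting(notes: str, provider: ChatProvider) -> str:
    """A workflow written against the interface, not a vendor SDK."""
    return provider.complete(f"Summarize these meeting notes: {notes}")


# If the primary model's behavior shifts overnight, switching is one argument:
print(summarize_meeting("Q3 pipeline review", PrimaryProvider()))
print(summarize_meeting("Q3 pipeline review", BackupProvider()))
```

The point of the sketch is the seam, not the stubs: because `summarize_meeting` only knows about `ChatProvider`, the "small set of your team" testing alternatives can keep a backup implementation warm without the rest of the org changing any workflow code.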
It is going to be there, it is going to be in our face. And more and more smart teams are going to be pushing all of their day-to-day operations into front-end large language models, the team versions, or what I like to call AI operating systems. So you need to be thinking about it now. What is your plan? When your competitors beat you to this and they can run leaner, they can run faster, and they start gobbling up your slice of the pie, it's too late. You have to start thinking about it now. The technology is there, the capabilities are there, the results and the outputs are there. You just have to meet them there. What's keeping you from doing that? We've talked about it in the first two episodes of the Start Here series, but it is making sure that your team stays up to date and educated, and that you're able to carefully sprint as fast as you can. All right, I hope this show was helpful talking about AI as an operating system. If it was, make sure you go to the starthereseries.com, and that will give you free access to our inner circle community, where you can catch up with all the different Start Here Series episodes in that volume, as well as get free access to our Prime Prompt Polish course. And you get to network with hundreds of other trailblazers in the AI space. I mean, we have leaders from Fortune 500 companies, small business CEOs, marketers, advertisers, anything you can think of. A bunch of great people who are pushing the pace of AI in the inner circle community. So thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
