
A
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
B
Every once in a while, I have to go outside and touch grass, right? Living in Chicago, I'm pretty much hibernating for like four months out of the year. But it was a refreshing recent trip to San Francisco and Silicon Valley that made me realize that, okay, it's probably good to get out a little bit more and talk to people on the cutting edge of AI. And I thought it was probably a good time for a show that kind of took a look at that. And yes, it's maybe kind of boring to hear about some of my recent trips, but I also think it's important, because I was able to meet with a lot of the smartest people in AI at some of the biggest companies building the technology of tomorrow that we'll all use. Now, I'm not going to be naming names or giving away any juicy details; I can't do any of that. What I did think I would do on today's show is give you five AI trends, problems and opportunities that are around the corner. Maybe I had an inkling of these happening before my recent travels, but after some of these conversations, I said, this is something that we have to talk about. So here's the big picture. I think that San Francisco and Silicon Valley are kind of a tell on what's coming next. I think that AI has changed a lot over the past four years, from kind of the chatbot novelty, right, that OpenAI kicked off with ChatGPT, to now being deeply embedded in kind of running the US economy. And there's definitely a Silicon Valley and San Francisco sticker shock, right? For me, I was kind of amazed to see that the entire city and everyone in it seemingly was just running on AI. And I'm seeing the same problems and opportunities kind of regardless of who I talked with, from both technical builders and non-technical workers. So no matter where you fall on the spectrum, I hope that you're going to get a little bit out of today's show. So speaking of that, stick with me for the next 20ish minutes. This one's going to be a fast one.
All right. And here's what you're going to learn. You're going to know five AI trends from some of my recent travels that I think you should pay attention to. You're going to know why I think you should try the impossible when it comes to AI, and I'll explain what that means. And I'm going to give you the one biggest problem that's plaguing both technical people and non-technical people alike. All right, let's get to it. Welcome to Everyday AI. My name is Jordan Wilson, and this thing's for you. It's an unedited, unscripted daily livestream podcast and a free daily newsletter helping everyday business leaders like you and me keep up with all of this AI innovation. I tell you what's important, what's not, and how you can use that information to grow your company and your career. So it starts here, but the real cheat code is our website. That's youreverydayai.com. It's also going to be in your show notes if you're listening on the podcast. But there, make sure you sign up for our free daily newsletter. We're going to be recapping the highlights from today's show, as well as all of the other AI news that you need to know to stay ahead, if you want to be the smartest person in AI in your company. All right, let's get to it. Livestream audience, what's going on? If you have any questions, make sure you let me know. People joining from all over the globe, love to see it. Renee joining from Germany, good morning. Brian from Minnesota. Everyone else, good to see you. But yeah, if you have any questions as I'm tackling this, or comments, please let me know. I don't know if I can, you know, do the audience engagement every single time, but every once in a while I love to hear what you all have to say. So it's been a busy week for me, right? Full disclosure, I normally don't travel a lot. Not as big a fan of it as I used to be. I just get a little bit more tired, right?
I find myself spending each and every day more and more time trying to increase the quality of the podcast. So it's not as easy for me, you know, to travel and do a couple of shows on the road like I used to. But I did have a great opportunity to partner with Sage, more on that probably later this week, on their Futures conference. And while I was out in San Francisco and Silicon Valley, I got to meet with a lot of awesome people, right? So I met with six different people from, I'll say, big AI tech, a handful of others from well-known AI startups, and other technical leaders from big enterprise companies as well. And then also I was fortunate enough to travel to St. Louis right after that San Francisco trip to keynote the Marketing AI Summit for Worldwide Technology. So little photo of me there blabbing on about AI. Thanks to my friends at WWT for bringing me out for that. And you know, I put this out in the newsletter, so if this sounds like a boring show, and if you're a normal podcast listener, make sure you sign up for our newsletter. A lot of times I'm just like, do you guys want to hear this or not? And you guys wanted to hear some of my takeaways from the past week, because I was super lucky to be able to talk to a lot of smart people, you know, like I said, people building AI, people on the cutting edge, literally trailblazing and building what's coming next, as well as enterprise leaders from all over who are actually using this AI. So here's the five big takeaways, all right? Also, these are just random observations from the last week of great conversations. This isn't an all-encompassing list, like these are the five most important trends and problems and opportunities; this is just kind of what I stumbled upon through a lot of these conversations. All right, number one is probably the most random, but my gosh, I was shocked. Autonomous cars are so much better than human drivers.
All right. Number two, AI generalists don't really exist anymore. Number three, AI homework is a thing for office workers, but it shouldn't be. Number four, FOMAT is real. That's fear of missing agent time. And number five, AI acceleration is an opportunity and a problem for everyone. All right, so let's tackle it. Number one, autonomous cars are better than human drivers. That is an absolute fact. And this is one of those things, right? I think there's probably a couple cities where the Waymos and all of these other autonomous vehicles are fairly common, right? I know San Francisco and Silicon Valley, I know Austin, Texas, and I believe in Chicago there's a pilot program, but it's going to be a while. But when many of us think about AI and how it's going to change our lives, we think of just, like, using ChatGPT or Claude or Gemini or Copilot or whatever it is, right? But obviously AI is making a big impact in the real world as well. I notice it all the time here on the streets of Chicago, seeing the little, you know, delivery robots, right? But I don't know, I'm like 50, I don't get food delivered. I'm not actually 50, but I feel like I'm, you know, much older than I am. I don't really get food delivered a lot, so I haven't, you know, had one of those little robots knock at my door and be like, here's your dinner. But the Waymos, okay, it's obviously a very touristy thing to do, right? If you're from another city and you go to San Francisco, it's probably become like a bucket-list tourist thing. But one thing I noticed: I took, I don't know, at least 15 Ubers or Lyfts over the past week. And I'm not kidding, every single driver I had was terrible. Absolutely terrible. And I'm like, how is this possible? Is this the norm? I mean, I had someone go 85 miles an hour in a complete downpour on the interstate, just, you know, bobbing and weaving through traffic. I'm like, okay, I don't like this.
I mean, I had someone else, an older gentleman, which is fine, but he was stopping at night on a rural highway, speed limit 55, stopping at stop signs and stop lights to, like, log mileage in a notebook or something. And it's like, all right, the light's been green for, like, 10 seconds, there's cars coming up behind, right? So many things like that; every single driver I had was just pretty bad, right? And this made me realize that I think autonomous AI vehicles are going to really stick. And if you don't spend time in an area where these vehicles are, it is something I think worth paying attention to. So, you know, Waymo did report 92% fewer serious or fatal injury crashes than comparable human driver benchmarks. So, like I said, embodied AI might not be on your radar right now, right? Delivery robots, drone deliveries, you know, going from point A to point B versus a taxi or an Uber or a Lyft. But it's actually something. After my first experience, if I ever have the option again, I'm never having a human driver, right? And that might sound crazy, but, I mean, there's other, you know, autonomous vehicle companies there. Apparently some people I talked to said that they do a lot of testing there, you know, even those companies that maybe don't have full approval in other places, right, that we might not all know about. But, I mean, there was a point where I was in a Waymo, there was a Waymo in front of me, a Waymo behind me, and then there was another autonomous company that I had never heard of, you know, kind of ahead and to the left of me. And to tell you the truth, I'm like, that was probably the safest I had felt in a car the entire time, right? Like, on winding roads. This was just in downtown San Francisco. And I'm like, I don't want any human drivers anymore. All right, so that's a random takeaway, but that's takeaway number one.
I don't know. Has anyone else felt this way? Is it just me? Right. Big bogey says, autonomous vehicles 100%. Nobody will even own them. Just Uber and Waymo and a few others. Yeah, I don't know, if I ever have the option again, sorry, I'm not taking a human driver. So if you are a human driver and you're like, oh, you know, Waymo, you're taking my job, like, I don't know, let's just collectively try to be less distracted drivers. Yeah. Not even to talk about how they're all on their phones. That's another story. I feel I'm gonna sound like a very old man complaining if I keep going down this route. All right, number two, AI generalists don't really exist anymore, which for me was interesting to see, because that's something I personally have prided myself on, being somewhat of an AI generalist. But I think even in the last six months, that's become harder and harder. And I think the downside right now of AI's rapid rate of growth is there's fewer generalists. You know, I think there's obviously more and more specialties in and around AI that are popping up that didn't exist three or four years ago, right? We're going to see all these new, you know, job roles, job types, career paths that don't exist today. You know, some of the more immediate ones are probably around agentic orchestration, trust and observability teams, right? Entire teams that are doing this. But it was kind of weird to see that of all the people I talked to, and, you know, I'm asking them, it doesn't seem like anyone really knows any AI generalists anymore. And I get it, because that's what I've always tried to do, and I think that I did a decent job at keeping up until maybe, like, six months ago. So what is an AI generalist? I like to think of it as speaking multiple languages, right? So even if you think of certain modalities, right? So if you say, okay, text-to-text AI, right? Agentic AI, AI photo, AI video, AI audio, all of these different things.
AI moves too fast to follow, but you're expected to keep up. Otherwise your career or company might lag behind while AI-native competitors leap ahead. But you don't have 10 hours a day to understand it all. That's what I do for you. But after 700-plus episodes of Everyday AI, the most common question I get is, where do I start? That's why we created the Start Here series, an ongoing podcast series of more than a dozen episodes you can listen to in order. It covers the AI basics for beginners and sharpens the skills of AI champions pushing their companies forward. In the ongoing series, we explain complex trends in simple language that you can turn into action. There's three ways to jump in. Number one, go scroll back to the first one in episode 691. Number two, tap the link in your show notes at any time for the Start Here series. Or you can just go to starthereseries.com, which also gives you free access to our inner circle community where you can connect with other business leaders doing the same. The Start Here series will slow down the pace of AI so you can get ahead. I feel I was fairly fluent in all of these different categories maybe six months ago. Now it's a struggle. And I think that's being seen across the industry, and that's kind of difficult, because I think that most organizations, and I've said this before, you need people like me. You need people who are generalists. Yes, obviously, especially at, you know, the big tech companies, you need your teams that are just doing reinforcement learning with human feedback. You need your teams that are doing deep and technical AI deployments. You need your ML and your AI folks. You need all those people.
But I also think you need those generalists that can be the translators, and they can be the liaisons between those that are very technical, those that are deep in their wells of expertise, and the everyday AI knowledge worker that maybe might not be very knowledgeable in AI but is expected to use AI for the larger part of their day. I think organizations need generalists, but, you know, adaptive generalists, obviously, because the field moves so fast. So in terms of that, going back to one of the things I teased early on in the show, let me scroll back here. I think the one biggest problem and the one impossible thing is being an adaptive generalist in today's AI scene. It is extremely important, but becoming increasingly harder and harder to do, right? Obviously the nature of my job and what I do does require me to have a decent and working understanding of a little bit of everything. But that's gotten more and more difficult as time has gone on. I don't know if anyone else feels that way, if you've maybe prided yourself, you know, on being an AI generalist. But I think we're going to see fewer and fewer of those people. And I think that skill set is actually going to become increasingly more important, because think of it as languages again, right? And think of the average enterprise company maybe two years ago, three years ago, right? Most of their AI work was done inside maybe Copilot, or it was done inside of, you know, Google Gemini or ChatGPT, as an example, right? Now you have teams that are in theory a little more fragmented, because they're using more and more AI tools. But when someone has a question on how all of these pieces work together, right, it's like someone speaking Mandarin and French, and you don't have someone that speaks those languages, but the directions are in English, right? It is becoming more and more important.
And I think that this is leading to lower ROI on AI, because you don't have a single person that understands all of these different moving pieces and can bring them all together, right? This is your kind of traditional role of digital transformation, right? But it's the pace of it, and all of these essentially new languages that keep coming on at all points. And when you think of agentic AI in particular, that's changing. You know, now it's like, all of a sudden, let's say there were 20 AI languages before; with agentic AI, now it's like adding three dialects to all of those existing languages. Because what is possible across all of those different quote-unquote AI mediums and AI channels changes drastically with agentic AI. Because, yes, now you are doing it some of the time, but some of the time agents are going in and they're, you know, working with the AI audio, they're working with the AI video programs, they're working with the AI images, etc., stitching it all together, design, vibe coding, you know, agentic QA, all of these things. Yeah, I like what Jackie said here. She said AI generalists with specialists will be normal; in her opinion, as AI grows, you still need both. Yeah, I agree. It's just so difficult to find the generalists, and you are now finding more and more specialists, because obviously, two or three years ago, there weren't as many AI specialists, right? Because we were all in the same boat, you know, in 2023 and 2024. Aside from if you had a background in machine learning, right, I think everyone else was kind of in the same boat. We were all in the implementation phase, in the experimentation phase, in 2023, 2024. But people obviously started to niche down in 2025. And fast forward to 2026.
There's just so many different specialties that I think they aren't able to communicate to a central person what's going on and how it impacts everyone else. All right, next, AI homework, number three here. AI homework is a thing, but it shouldn't be. All right, here's what I mean by that. I'm not talking about students, I'm not talking about in the classroom. Don't get me started on that; I'll go off on a random rant. I was shocked, and this was across almost every single company that I talked with, at the amount of AI learning that has to be done at home if you want to keep up, if you want to get ahead. This issue is obviously multifactorial, right? One, companies are just sprinting head down and, you know, not properly providing, you know, quote-unquote on-the-clock training. That's one piece of it. And that's not necessarily anything new from conversations that I had over the past week. But the other big facet of this is the cutting edge of agentic AI is getting, understandably so, more and more of a Wild West, right? If you thought, you know, traditional large language models and AI were the wild, wild west from 2022 to 2025, yeah, agentic AI is even more so. But I understand it from both the employer perspective and the employee perspective, right? So what I saw was a lot of the smartest AI leaders that I talked to, they're experimenting at home, right? Whether it's, oh, you know, I got a new Mac Mini and I'm running OpenClaw, or I'm going to get a big, you know, Mac Studio so I can run all these local models. And after a lot of conversations, it just seems like even at larger organizations, even if they set up, you know, certain virtual sandboxes and they give employees, you know, an opportunity to set up virtual machines where they can go test all these things on the cutting edge, it's still not quite the same, right? A virtual machine, you know, things like, you know, Nemo Claw from Nvidia, you know, great.
But most people are still wanting or absolutely needing to experiment at home, doing at-home AI homework, right? Knowing, like, hey, we should be testing these models, you know, here internally in our workflows, right? Especially when it comes to open-source models, when it comes to niche models that are only good at certain aspects, right? If you're, as an example, doing something for PDF parsing or something like that. You probably don't have the time to necessarily do that at work, because everyone's outputs, everyone's expected deliverables, don't go down; they don't decrease as AI's capabilities increase. So something ultimately gets squeezed out. And what that seems to be, at least for AI leaders that are really pushing themselves, is their time at home, their time away from the office, right? You have people now working probably more from home than they were before. Yes, I know you still have that subset of people whose companies are not as AI-capable; those weren't really the people that I spent most of my time talking to over the last week. These are companies more on the cutting edge of AI. But you still have those people, obviously, that are able to bank a lot of their time, especially if they're in a hybrid or remote role, and those people do have time to experiment. But I was actually shocked at kind of the seniority of a lot of these people that said, I have to do so much AI work at home on my own machines. Whether it's, well, some things just aren't always safe, but sometimes it's just, like, there's not enough time to run these experiments with everything that I have going on. I think expectations for a lot of enterprise workers have just increased, right? Because smart AI leaders are saying, okay, you have the cutting-edge AI tools, right, so expectations go up. But you're not getting a brand-new job description, which is another problem.
I think we need to start rewriting old job descriptions, because AI won't take your job, but it will take your old job description if your company or department is not updating what it means to work. Anyways, it is just taking away so much time by not having that dedicated kind of physical sandbox to go and experiment in. If I'm a business leader right now and I'm onboarding new people, right? I know, budgets, right? Yeah, it's tough. But I'm giving everyone two computers. Here's your work computer; here's what you can and can't run; here's the, you know, company-approved AI. And then here's your personal computer, right? Go break this thing. Go get dangerous. Go get scary, right? Don't break any laws, right? But do not connect anything. You know, no work emails, no work drives, no nothing. But go. Here's a machine. It's not that expensive, right? Whether you're running local models or whether you're running something like Codex or Claude Code or Antigravity or whatever it is, right? Here's your machine that you can go mess some things up with, right? Here's your at-home play day, one day a week, right? We want our employees spending, you know, four hours out of their day pushing this machine and its capabilities to the limits. And then we bring it back on a Friday, you know, lunchtime, and we say, hey, here's things that are working that we need to now test to see if we can safely do them in production with our current workflows. Yeah, Angie says, not enough time and not enough memory, so we have to use our own personal subscriptions. Yeah. Frank here is saying, what about balancing personal family time with AI? It is an interesting issue. Yeah, I was surprised, again, with people leading big companies in highly visible roles, you know, not just at AI labs and AI startups, but other enterprise companies, and the amount of work people are doing at home. Number one, yes, I get it.
It's exciting, you know, being able to build things, especially if you're mid-career, where you're like, wow, I had this idea 5, 10, 15 years ago, you know, we kicked it around, we couldn't do it, it was too expensive. Now those type of projects you can literally do in an hour, right? Get an MVP. Demos over memos. But this is huge. This is huge. Speaking of that, bleeding into our next kind of big problem and big opportunity: FOMAT is real. This is the first time I talked to other people that had FOMAT. All right, so obviously everyone's heard of FOMO, fear of missing out. FOMAT is fear of missing agent time. All right? So my setup might be a little extra, but I feel this is real for a lot of people, right? So at home, I have a Mac Studio, you know, running local models, running Codex, running Claude Code, all those things. But I also just bought a newer, powerful laptop, right? And when I'm going around to all of these meetings, it sometimes hurt me to have to close my laptop, right? Because I'm building things. I'm running, you know, deep, deep agent runs, trying to, you know, unearth new things that'll help me grow my business, new opportunities, you know, new kind of tools and, you know, AI toys that I can use to do my work better. And having to close the laptop, right, for meetings, for when you're going to conferences, for keynotes that you're listening to or working on, that's tough. That's tough, right? FOMAT. This is the first time I've ever talked to anyone else that felt the same way. So desktop AI agents are so powerful, I think it's created a new level of FOMO. So, yes, there's certain things that you can do, you know, schedule things on your computer, but it does have to be on. It has to be powered, right? My Mac Studio, poor thing, doesn't get to sleep, right? It's running around the clock. Every once in a while, I'll restart it so it can take a power nap, right? Or, as my wife says, the disco nap.
But this was shocking to me, right, that other people understood and felt that way. I think that was huge. All right. And by the way, I was actually joking around about this with a couple of people I talked to about FOMAT, and I'm like, I'm just gonna build an app so that when I close my MacBook, the MacBook thinks it's still open and it can just keep running. So I actually built that. It's almost done, so I'm excited for that personally. All right, and then last, number five, and this is both a huge problem and the biggest enterprise opportunity if you can understand how to close the gap. Number five is: AI acceleration is probably the biggest opportunity and the biggest problem for everyone. So maybe I was naive in thinking this, but I thought, oh, a lot of these people that I talk to are going to have this thing figured out, right? They're building the world's most powerful AI, right? Whether it was at the big labs or the AI startups. I talked to big enterprise, you know, enterprise workers at huge companies. I'm like, okay, these conversations are going to be great, because they have the answers, they're building it, and I'm going to be able to come and find these secret answers and share them with everyone. Guess what? Even for the people building all of the AI that we use, the acceleration is a huge problem. And I was both relieved yet also curious that even when I didn't ask people, they all said the same thing. They said, hey, I've only been feeling this over the past six months. Up until, you know, late 2025, it was easier, or at least manageable, to keep up with the pace and the rate of AI. And obviously that has to do with agentic AI, right? So I think Anthropic really kind of kicked this off, you know, with Claude Code, with Claude Cowork. But also, you know, we started to hear at the end of 2025, you know, the big AI labs essentially start to admit, right, that their new models were built by a bigger version of themselves.
Right? So you can make the argument whether that's, you know, recursive self-improvement or not, or is it just distillation, right? A big model creating other models, right? But when you have someone like Anthropic saying, oh, you know, Claude Cowork was 100% built by Claude Code, right, or our Anthropic models, right, that means a new pace and a new rate of AI acceleration that, number one, most people weren't expecting; number two, it hit everyone; and number three, like I said, it is both probably one of the biggest problems and one of the biggest opportunities. So what is the opportunity there? Well, the opportunity is the knowledge gap, which I think is widely acknowledged, and most people understand now that it's a problem: that gap between capabilities and what the everyday worker is actually using AI for. That capability gap is only going to grow because of the lack of education, the lack of training, learning and development. You know, let's just say everyone's baseline AI understanding across an enterprise goes up 5 to 10% per quarter, right? With normal digital transformation, if you're looking at that, you're happy with it, right? Those are the kind of knowledge growth rates that, during traditional phases of technical innovation, you'd be happy with. But AI's capabilities, right? I'm not saying we're looking at Moore's Law or anything like that, but AI's capabilities are sometimes doubling by the year or nearly doubling by the quarter, right? So that gap between what people are using AI for and what is possible, that gap is sizing up. It's sizing up week over week, month over month. And it is hitting everyone, right? Even the big AI companies that are building this technology are seeing that it is becoming increasingly difficult, when everyone has a quote-unquote job to do. Not everyone can be kind of like me and, you know, testing every single tool, every single model update, every single mode, right?
You can't do it. So naturally that means that that gap is widening. Gartner actually projects worldwide AI spending will hit $2.5 trillion in 2026, up 44%. Increasing spending nearly 50% year over year on a technology like AI is not a normal growth rate. All right, and Microsoft actually just came out with their Work Trend Index study this morning. It's one of my favorites, and I think one of the best studies around AI, so we'll be sharing that in today's newsletter, FYI. But Microsoft says active 365 agents grew 15x year over year, right? When it comes to capabilities and agentic AI, right, as AI starts to get arms and legs and a brain, and it's going out in the real world and it has the ability to act now, right, AI is very much a two-way street. Whereas before, you know, AI could kind of only read, now it can read, it can write, and it can take actions on our behalf around the clock, 24/7, with agentic swarms. The capability gap is growing, and the acceleration, I think, actually becomes dangerous, because those capabilities obviously outpace knowledge, learning and development, but they also outpace governance and guardrails, right? Now, most AI labs aren't releasing new models recklessly. Yes, they come out with safety cards, and, you know, if a model goes off the rails in training, right, during red teaming, they'll share those things. But there's so many new problems and threats that come out with new models that the world maybe technically is not ready for. And we've seen that a lot more with, you know, the mythos, which I think might be as much marketing as it is, you know, a real powerful model. Anyways, you have new models, whether it's on cyber or other capabilities, that do kind of pose a threat, right? And I think we always focus on what the models can do that we want them to, and not necessarily think about the flip side. You know, I think I called it, like, agentic crash, right?
What happens when agents go off the rails because they are too smart and they are, well, too far ahead of the rails? You give them a goal: hey, here's the goal, here's the guardrails. They might justify it to themselves: well, maybe it makes sense for me to go outside of these preset guardrails, because it helps me accomplish the goal, not knowing what ramifications that might have. All right, that's it, y'all. I hope this was kind of a fun episode. It was fun for me to think about and plan, but that's just kind of what I see as coming next: five AI trends, problems and opportunities around the corner. So if this was helpful, let me know. If it wasn't helpful, subscribe, sign up for the newsletter and, you know, vote for something else. Every week I try to put you guys in control for at least one episode, you know, because I do my best to try to stay up to date and say, hey, here's what I think people need to hear. But every once in a while I say, what do you want to hear? What do you want to learn? So make sure, if you haven't already, please go to youreverydayai.com and sign up for the free daily newsletter. Thanks for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
A
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going for a little more AI magic. Visit youreverydayai.com and sign up for our daily newsletter so you don't get left behind. Go break some barriers, and we'll see you next time.
"What’s Coming Next: 5 AI Trends, Problems and Opportunities around the Corner"
Date: May 5, 2026
Host: Jordan Wilson
In this insightful solo episode, host Jordan Wilson shares his top five takeaways from recent trips to San Francisco, Silicon Valley, and St. Louis, where he engaged with leading figures in AI—both from major tech companies and cutting-edge startups. Drawing on candid conversations and his own observations, Jordan reveals the AI trends, problems, and opportunities that are quickly reshaping the landscape for both technical and non-technical professionals. The episode is lively, unscripted, and filled with practical advice for business leaders and everyday AI users alike.
Timestamp: 07:50 – 13:55
Timestamp: 13:55 – 21:43
Timestamp: 21:43 – 28:51
Description: Learning and experimenting with new AI tools and models now happens almost entirely off-hours and on personal devices, even for senior leaders.
Root Cause: “Companies are just sprinting head down and, you know, not properly providing, you know, quote unquote, on the clock training. That’s one piece of it.” (22:27)
Personal Accounts: Even major enterprise leaders are “experimenting at home... running local models... it just seems like even larger organizations... it’s still not quite the same [as the tools at home].” (24:03)
Business Implication: Job descriptions must be updated—AI won’t take your job, but it will take your old job description.
Advice: Companies should provide a 'play' machine to employees for safe at-home or sandbox experimentation, along with structured time for such exploration.
“If I’m a business leader right now and I’m onboarding new people... I'm giving everyone two computers… Here's your at home play [device] one day a week.” (25:48)
Timestamp: 28:51 – 32:23
Timestamp: 32:23 – 38:00
Insight: Not even the world’s top AI builders can keep pace with AI’s rate of change, especially with the advent of agentic AI (e.g., Anthropic’s Claude Code and Claude Cowork).
Industry Statistic: “Gartner actually projects worldwide AI spending will hit $2.5 trillion in 2026, up 44%.” (36:11)
Knowledge Gap: “...the gap between what people are using AI for and what is possible, that gap is sizing up. It’s sizing up week over week...” (36:53)
Risk: AI advancements are now outpacing organizational capacity for education, training, and governance—potentially causing control and safety issues.
“AI could kind of only read. Now it can read, write, and it can take actions on our behalf around the clock... that gap is widening.” (36:40)
Callout: Even those at the forefront admit the “acceleration is a huge problem,” but also the source of massive opportunity for anyone able to bridge the knowledge/capability gap.
On being an AI generalist:
“It is extremely important, but becoming increasingly harder and harder to do.” (19:23, Jordan Wilson)
On AI homework:
“...the amount of work people are doing at home, number one, yes, I get it. It’s exciting... But this is huge. This is huge.” (26:35, Jordan Wilson)
On AI’s rapid growth:
“You can make the argument whether that’s, you know, recursive self improvement or not... [now] with agentic swarms, the capability gap is growing, and acceleration, I think, actually becomes dangerous...” (36:25, Jordan Wilson)
Jordan’s candid, relatable tone and practical advice make this episode a must-listen for anyone trying to stay afloat amid AI’s relentless advance.