
Jordan Wilson
This is the Everyday AI show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business and everyday life.
Adobe Firefly AI Assistant Announcer
Meet Firefly AI Assistant, now live in Adobe Firefly, the all in one creative AI studio. Just describe what you want to create and the assistant handles the rest, orchestrating multi step workflows across Photoshop, Premiere, Express, and more. In one conversational interface, you direct the outcome. The assistant accelerates execution.
Jordan Wilson
A few weeks ago, the United States government said the quiet part out loud when it comes to open source models, at least from China. That's because in April, the White House sent out an official memo accusing China of using distillation to illegally copy American AI models to create cheaper domestic knockoffs. And that declaration is really nothing new if you've followed AI for years. However, the recent distillation trend has completely reshaped one important landscape of enterprise AI: the decision between using open source models versus proprietary closed models. And in 2026 at least, it can actually be a tough choice between saving potentially millions of dollars versus running up your legal liability. About two years ago, before Chinese distillation was commonplace, there was a sizable gap between frontier models and open source AI models, or those models that you can essentially download or use for close to free. But now the gap is all but closed, which has thrust the open source versus closed source question into every enterprise boardroom in 2026. And although there's no one size fits all answer, we're going to be tackling the toughest topics and the most important takeaways as we take a zoomed out view of open source models on today's show. That's why we're going over open source AI 101: why local models, cheap APIs and AI agents change everything about making AI decisions in 2026, as part of our Start Here series. All right, welcome to Everyday AI. Before we dig in, let's first zoom out and talk about the big picture here when it comes to open source AI. That's because, well, it's actually a legitimate thing now, right? Two years ago, enterprise companies weren't saying let's use an open source AI model in production. Today it's actually happened. It's happening. 
That's because, you know, maybe the most powerful open source models are only about two to six months behind frontier models, but on even consumer hardware you can be running essentially frontier level AI models from just over 12 months ago. And the Chinese labs now distilling US models have kind of crashed the open API prices to pennies, right? So yeah, not everyone out there on, you know, consumer or prosumer hardware can run the most powerful open source models, although I do think Google has something to say about that. But the most powerful open source models run for a fraction of the actual cost if you can't afford to run them locally, which has completely shifted the paradigm when it comes to enterprises making decisions on, well, are we going to use a model from one of the big three, OpenAI, Google or Anthropic, or are we going to use a Chinese open source model and pay for it that way? And well, what this has also led to in 2026 is essentially 24/7 local agents that can run without costing a ton, and companies now having to actively and almost aggressively go through kind of an AI cost triage. But going full open source does strip away the legal protection that the closed models often include. So stick with me for 25ish minutes on today's Start Here series show, and here's what you're going to learn. You're going to know why the open versus closed source AI default just officially flipped. You're going to know how Gemma 4 from Google puts year old frontier capability on your laptop. You're going to understand the two payoffs already shaping and reshaping how individuals and enterprises run AI, and the hidden legal trade off most executives miss when going fully open source. Let's get into it. My name is Jordan Wilson. Welcome to Everyday AI's Start Here series. This is the essential podcast series to learn the AI basics. And if you're an AI expert, this is your chance to freshen up and double down on your AI knowledge. 
Why did we start this Start Here series? Well, after 750 plus podcasts, I never really had a good answer when someone was like, where do I start? What podcast do I start with? That's why we created the Start Here series. It's best, I think, if you listen in order. I think this is now volume 24 of the Start Here series. So maybe we'll wrap it up at 25, maybe we'll wrap it up at 30. I'm not sure. But the whole point of this is you can go to starthereseries.com. That's going to give you free access to our exclusive inner circle community. And in the Start Here series space we make it even easier for you so you can actually go listen. We have a Spotify playlist ready for all of the different Start Here series shows as well as a breakdown on each individual episode all in one place. So make sure you go to starthereseries.com for exclusive access to that inside of our inner circle community. All right? And if you missed our last Start Here series show, that was volume 23. We talked about headless software and why companies are building software for AI agents and not humans and, well, what that means. So today in volume 24 of the Start Here series, we're going over open source AI 101. So here's the reality. Closed AI used to be the de facto choice, right? And I mean, honestly, there was really never even much of a discussion about open source AI in the enterprise maybe until 2025, at least not at serious enterprise companies. Now it's a real conversation, right? So it's no longer, you know, hey, we're just going to choose whichever API works best for us, right? Whether that's OpenAI, Anthropic or Google. Now most companies are looking at some of the open source alternatives, most of them coming from China. And the big kind of thing here is, companies had been standardizing around one frontier vendor in 2025, before this happened, and then they called it an AI strategy. 
And the assumption originally was, well, that worked for three years, until two very specific forces broke the standard paradigm when it came to open source models. First, the open source versus closed capability gap just completely changed. All right, so we talk about Arena on here a lot; it previously had a different name, now it's just Arena. So you put in a prompt, you get two outputs, you don't know which models they're from, and you vote for which one is better, right? So all these different models get an Elo score. And to really zoom out for our non technical audience, because I know a lot of you in the Start Here series are not technical, I should probably even say what an open source model is, right? So the very simplified version is, certain companies can release models open source under like an MIT or Apache 2.0 license. And that gives people the ability to download these actual models and to run them locally on your machine. And that is, well, one of the big trade offs, right? So you're not sending any private or potentially proprietary data to the cloud at all. Everything runs locally on your machine. So number one, it's private. Number two, it's, well, free, right? And then there are open source models that, if you can't download them on your, you know, computer, because not everyone can, some of them are much larger, you can still essentially run those in the cloud for a fraction of the cost of what it would cost to run a proprietary model. So essentially, open source models are ones that you can download, you can modify. In some instances, you can even build products on top of it. All right. Anyways, right, until late 2025, there was a monstrous gap in the Arena scores, right? So these Elo scores, when you put in the same prompt, you look at two outputs, everyone overwhelmingly always chose the best frontier closed source model. And that really started to change, right? So the gap between the Elo scores, well, it cut down by about 90%. 
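For listeners who like to check the arithmetic, the gap figures here work out like this (both numbers are the episode's approximations, not official Arena data):

```python
# The episode's approximate Arena Elo gaps between the best closed model
# and the best open model, then vs. now.
gap_then = 250  # roughly 2023 to mid-2025
gap_now = 30    # give or take, depending on the day

reduction = (gap_then - gap_now) / gap_then
print(f"The gap shrank by about {reduction:.0%}")  # prints: The gap shrank by about 88%
```

Eighty-eight percent is what rounds to the "about 90%" figure in the episode.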
So it went from about a 250 point gap from the best frontier or closed source model. Well, now it's only about 30 points, right? Give or take, depending on the day. Right. But even, you know, recently, a couple months ago, it was like 15 points, right? So at that point, you really have to be an AI expert to be able to decipher the difference. I think at 30 points, you know, most people could look at different outputs over time and, you know, a 30 point gap, you can kind of realize it. If you're looking at the best, you know, open source model versus the best proprietary closed models, 30 points you can understand. But 10 to 15 points, it's kind of a coin flip even for people who are, you know, spending most of their days inside of large language models. But this collapse came from those two different forces, both working in parallel. So I have a little graphic here on the screen for our live stream audience. If you are listening on the podcast, FYI, you can always get the video version on our website at youreverydayai.com. But going from a 250 point gap to essentially a 30 point gap, this is huge, right? Because like I said, in 2023 to mid-2025, it was noticeable, right? It was extremely noticeable. When you looked at the outputs, you could say, my business can use output A, but it cannot use output B, right? And now we're at the point where the open source models, in terms of an Elo score, and I think that's a good metric to look at over time, right? Because, you know, the frontier is always improving. But if you look at the Elo scores of the open source models now, so those that you can kind of, you know, if you have a beefy enough computer, you can download some of the best open source models on your actual local machine, right? Those scores are where we were at with proprietary models three to six months ago, right? So think back to the very end of 2025, you know, and there were some models like, I think at the time it was probably GPT-5.3, Gemini 3 Pro. 
And at that time I think we were at like Opus, maybe 4.6 or maybe 4.5. Right now you have open source models that you can run for free 24/7, run agentically, that are at that same level. And that's why now this is a real enterprise boardroom problem, especially for large companies that have invested heavily into AI, right? So I'm not talking about companies with, you know, a couple hundred employees. I'm talking about companies that were spending seven, maybe eight figures, maybe even more, on AI each year. Now all of a sudden they're saying, hey, in theory, if we switched, you know, part of our summarization tasks alone, right? I've read a lot, I've talked with a lot of people that have done something similar. You know, if we just, you know, chunk off everything where we're using a closed source model just for summarizing text, right? Some of those lower hanging fruit, people are saying, well yeah, we could save $1, $2, $3, $4 million. And this is an actual reality that a lot of companies are grappling with right now. So force one was just the capability moving locally, right? And this, I think we have to credit Google for pushing the edge of edge AI. That's because their Gemma 4 model completely shook up the landscape of open source AI, right? This thing was 20 times more efficient than other open source AI models at the time. So essentially, let me describe it like this. Do you remember GPT-4o? Right. One of the best models, you know, about 14, 15 months ago, it was at the absolute frontier, right? So now you can download Gemma 4 on a consumer laptop and, you know, roughly, if you look at the scientific benchmarks and the Elo score, it's essentially about a GPT-4o level model that you can run on your laptop. So what's the big deal? What's the big difference, right? 
14, 15 months ago, I mean, there were thousands of companies spending millions of dollars a year to get that type of technology, to get GPT-4o level technology for their employees. Now it doesn't take anything, really. It takes a newish piece of Apple hardware, right? I just got a new MacBook Pro. That thing can run Gemma 4 very easily, right? It can run even better models than that. And this really changes, I think, what is ultimately capable when you look at open source versus closed source. Because yes, I think most people look at normal usage and they're comparing apples to apples, right? Here's what our marketing team did 15 months ago with a GPT-4o level model. Oh, now they can do that on Gemma 4? Well, yeah, you can do it, but now you can do it agentically. Because not only in the last year or so have the models obviously improved, with thinking models, reasoning models now being the default, but now we have these agentic harnesses, you know, not just the ones that you can use, you know, inside of ChatGPT, Gemini, Claude, Copilot. But, well, you have these local autonomous AI systems as well, such as OpenClaw, such as, I always forget if it's Hermes or Hermes Agent, right? So now you can have essentially the level of AI from 14 months ago running for free 24/7 agentically, even if you're just doing, you know, summarization, content creation, research, things like that. So capabilities went local, right? Gemma 4 leading the way. But obviously all the Chinese models followed suit because of, right, distillation, which we'll talk about a little bit here in a couple of minutes. But you can't overlook Gemma, because it put frontier capability on literally a laptop, right? Because two years ago, to be able to run something like a GPT-4o level model, right, which rumors have been swirling is a 2 trillion parameter model, right, you would need a small little data center to run something like that, you know, 2ish years ago. 
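A back-of-envelope way to see why this flipped: the memory a model's weights need is roughly parameters times bytes per weight. The numbers below are illustrative assumptions (the 2 trillion figure is the rumor mentioned above; the 27 billion figure is a stand-in for a Gemma-class laptop model), and the math ignores KV cache and runtime overhead:

```python
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone, in gigabytes."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Rumored GPT-4o-class scale: data-center territory even when quantized
print(f"2000B at 8-bit: ~{weights_gb(2000, 8):,.0f} GB")

# An assumed 27B-parameter laptop-class open model, 4-bit quantized
print(f"27B at 4-bit: ~{weights_gb(27, 4):.1f} GB")
```

That works out to roughly 2,000 GB for the big model versus about 13.5 GB for the small one, which is why one needs a data center and the other fits in a recent MacBook's unified memory.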
Now you have these capabilities. So, you know, I've been lucky to talk to a lot of smart people in AI, and now you really have executives grappling with, well, should we be buying a bunch of, as an example, new MacBooks? Should we be, you know, buying a bunch of DGX Sparks for our employees and setting them up with 24/7 always on agentic AI, right, to take advantage of these now local and powerful models that, well, you don't pay for, right? You download them once, you don't pay again, and they work, and they can work while you sleep, like I said, because of some of the new autonomous capabilities from local agents that can run around the clock. So this has obviously led to, and I think Google putting the pressure on the open source world with Gemma 4, like I said, it was 20 times more efficient in terms of what it was able to achieve on the benchmarks relative to its size, right? Because when it came out, if you looked at the other Chinese models, it was about 10 to 20 times smaller in size, right? So you wouldn't have been able to, you know, use the best open source model pre-Gemma 4 on a local machine, right? At least a consumer laptop that you can just go walk into the store and buy. Now you can. And open Chinese models are amazing, and I think they've been getting better and better and smaller and smaller and more and more efficient since Google's Gemma 4. But you have to talk about the elephant in the room, that is, these models are distilled, right? We can say that. All the big labs have said that. You know, I don't know, maybe some of our audience in China won't appreciate hearing that. But I mean, Google, Anthropic, OpenAI have all accused China and have said they have proof, right? And then the White House: in April, the White House actually officially said that China was using kind of illegal tactics to distill US AI models to create cheaper domestic knockoffs, right? 
So it got to the point that at least the White House said that they had enough information or intel to make that declaration. So what is model distillation? And, well, why does it matter? So the easiest way is like, I can spend 10 hours studying for a test, right? Think back to the classroom: I can spend 10 hours studying for a test. Someone behind me can look over my shoulder and spend 10 minutes and get the exact same answers. That's kind of what model distillation is, right? You have the big AI companies here in the US spending billions of dollars, right, on any single new, you know, model pre training as an example. And essentially you have certain actors in China who will use the API. And, you know, different companies have come out with different levels of proof and say, okay, well, they're creating, you know, thousands of spoof accounts, more or less. They're putting in all these inputs and training on our models' outputs versus training it themselves. So yeah, just kind of copying the homework. So what this has led to is China has been able to put out these open source models, really technically just pushing the frontier of open source, by allegedly just copying the best US models out there. And what this has led to as well is a crashing out of the bottom price of intelligence. So DeepSeek V4 as an example. DeepSeek, one of those companies that many of the AI labs here in the US have accused of model distillation. DeepSeek V4 Pro, one of their newer models, now lists its price at 43 cents per million input tokens and 87 cents per million output tokens. That's like more than 25 times cheaper than the premium, you know, closed source proprietary models. And that is the reality that a lot of boardrooms are looking at right now, right? To make that math easy, it's like, okay, we're spending $1,000 per month per employee on the API side, right? Or sorry, let's just say you're spending $40,000 a year. 
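To put numbers on that "more than 25 times cheaper" claim: the DeepSeek prices below are the ones quoted above, while the premium closed-model prices are assumptions for illustration only; check current vendor price sheets before doing this math for real.

```python
# $ per million tokens as (input, output)
deepseek = (0.43, 0.87)    # figures quoted in the episode
premium = (15.00, 75.00)   # assumed premium closed-model pricing

def job_cost(prices, m_in, m_out):
    """Cost of a job with m_in / m_out million input/output tokens."""
    return prices[0] * m_in + prices[1] * m_out

# Example bulk workload: 10M input tokens, 2M output tokens
open_cost = job_cost(deepseek, 10, 2)
closed_cost = job_cost(premium, 10, 2)
print(f"open: ${open_cost:.2f}, closed: ${closed_cost:.2f}, "
      f"roughly {closed_cost / open_cost:.0f}x cheaper")
```

At these assumed premium prices the multiple comes out closer to 50x, which is at least consistent with the episode's hedged "more than 25 times cheaper."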
Okay, we could be spending $1,000 a year if we switch over to an open source model, as an example. So here's what that actually leads to, right? Kind of the model distillation leads to more powerful, cheaper open source models from China, and, well, it leads to people using them but not always knowing the ramifications, right? So obviously Google is doing things the right way. But I think these Chinese models have become increasingly popular even in the enterprise, which is tricky, and I don't think that most executives are fully understanding some of the consequences of using open source models. But this has led to essentially having a workforce of always on assistants, and they've shifted from expensive special projects to, well, that's just now the default operating model. So, you know, this has just allowed kind of this new swarm of agentic AI that couldn't really have existed before. Number one, the technology and the harnessing wasn't there. But number two, you take out, you know, at least Gemma and the Chinese models that have been accused of distillation, and your options aside from those aren't really that good. All right, we're going to talk more here after we take a quick break for a word from our partners.
Adobe Firefly AI Assistant Announcer
Adobe just introduced an entirely new way to create, bringing the power and precision of its creative suite into one conversational experience. Meet Firefly AI Assistant, now live in the Adobe Firefly app, the all in one creative AI studio. Powered by Adobe's creative agent, Firefly AI Assistant lets you start with your vision. Just describe what you want and shape the outcome as it takes form. The assistant orchestrates multi step workflows, drawing on 60 plus pro grade tools across Adobe Creative Cloud apps including Photoshop, Illustrator, Premiere, Lightroom, Express and more, to help bring your ideas to life. You can also get started with creative skills, a growing library of pre built workflows for common creative tasks like batch editing photos, creating mood boards, portrait retouching and creating social variations. Every step the assistant takes is visible, so you can refine, redirect or take over at any time. You stay in the driver's seat as the creative director. Adobe Firefly AI Assistant is now in public beta. See it today at firefly.adobe.com
Jordan Wilson
Aside from always on AI agents, what this open source movement has led to is, well, now enterprises may be moving away from having the one model fits all solution. So now, as an example, you might be able to put out a 100 agent swarm. It goes from, you know, $1,200 plus on Opus to, well, maybe like $60 some on DeepSeek. And now essentially you can look at AI as more of a triage or a categorization of which models to use for which tasks, right? Especially when you're talking about high volume operations. So things like, when you're going through it in bulk, things like summarization, extraction, parsing PDFs, classification, right? Now so many companies, even large enterprise companies, are no longer doing that on the back end using the frontier US companies. Well, I mean, many still are, but you've already seen a big segment of those companies move to these open source, or Chinese open source, models. But if you're thinking right now, if you're like, wow, our bill's pretty high, our API bill, right? I'm not talking about on the front end, you know, the number of seats you have in ChatGPT Enterprise or, you know, in Gemini Enterprise or anything like that, right? I'm talking about back end, all of these special projects that you have running via the API. So if you're looking at your API bill and you're like, yeah, we're going through a lot, or, you know, hey, we're using, you know, Opus 4.7 and GPT-5.5 to run our agents, maybe we should be looking at, you know, Kimi or DeepSeek or whatever it is. Before you do that, you have to know that there is a big trade off. Because just because a model is free or open source or cheapish to run via the API, right, if you are, you know, running some of these open models via the API, there's still an expensive price to pay. And that price might be unknown at this point, but it can cost your company a lot more than maybe just using that closed proprietary AI would have cost you via the API. 
That's because using open source strips away all of that legal protection that you probably overlook or take for granted. What do I mean? Well, when you're using Anthropic, OpenAI, Microsoft, who did I forget? Microsoft, OpenAI, Google, Anthropic, right? When you're using those enterprise offerings, you have a level of legal protection, right? So as an example, I'm not going to go through all the fine print, right? But the four companies at the enterprise level all offer, you know, some sort of, essentially, I won't call it insurance, right, but think of it kind of like that, right? Like, hey, if you use something produced by our systems, and if you use it ethically and responsibly and with guardrails, and it produces something that's not, you know, correct, there is some level of protection there, right? You don't get that with open source models, right? So as an example, you know, DeepSeek, you know, they use different licenses, MIT, Apache 2.0. Yeah, essentially there's no warranty or non infringement agreement. All right? So for regulated work and customer facing output, you have to look at the trade off. Yeah, you might save six, seven, who knows, maybe eight figures by switching the bulk of some of your maybe agentic or bulk workloads, right? Especially if you're a Fortune 500, Fortune 100 company. There's minimum seven, eight figures that you could in theory save by switching some of those heavier agentic or, you know, parsing workflows, I know parsing is a big one, over to open models. But you lose that legal protection. Maybe you've had to rely on it before, maybe you haven't, but there's that one time that you would actually need it. And if you do switch over to open source, you have to understand those ramifications, because at that point you're going to actually be paying for it. 
So that gets us to the real question here as we get close to wrapping up, because I don't want my takeaway here to be, don't use open models, they're not safe, because that's not the takeaway. I think you need to start looking at your AI workflow like a triage, right? At least when it comes to back end tasks, right? Front end, I've always been a firm believer, and I still am today, you need to pick your AI operating system of choice, whether that's Copilot, ChatGPT, Claude, Gemini, on the front end. And that's where you should move, especially your non technical people, the majority of their day to day knowledge work tasks. Those should be happening on the front end there. But you still have a multitude of back end tasks, and I think you have to look at it like triaging in an emergency room, right? You wouldn't send your top neurosurgeon in when someone's having an allergic reaction to honey, right? You wouldn't do that. You would save that neurosurgeon for, well, someone that needs a neurosurgeon. And I think that there's so many companies that haven't gone through the basics of this. For the most part, on the API side, they pick, well, one model and they say, all right, well, we have our AI operating system of choice. And then for everything else, as an example, we go to Sonnet 4.6 or we go to, you know, Gemini 3.1 Flash or whatever that model may be. And maybe that's the right model. Maybe those companies have done their due diligence and have vetted out their different use cases and have priced it out, and maybe that's the right move. But maybe it's not. Because I know from experience and talking to a lot of people, a lot of companies just choose whatever is on the cutting edge and they say, well, this is the best, so we're going to pay for it. Because there is a push internally to use more AI, to use the best AI. We see all these new benchmarks, we want to make sure that we're taking advantage of it. 
Well, is that neurosurgeon going to be able to, you know, properly diagnose the allergic reaction to someone eating honey? Well, yeah, probably, but it's going to cost you a lot more. So you need to think about sending that high volume, low stakes work, right, like summarization, research, content creation, maybe to cheaper open source models that you can either run locally or, you know, run via the API for just cost efficiency, if there are essentially, like, no legal ramifications if you get something wrong, right? So if you're in a highly regulated sector, this is probably not the advice for you, right? You probably shouldn't be taking my advice on a whole lot of anything as truth; you always need to be vetting these things out for yourself, right? But if it is something in a sector that's relatively not highly regulated, where there's not, quote unquote, a lot on the line, that's one of those instances where you need to say, can we shift some of our more expensive API workloads to an open source model? Or if you need to run sensitive private workflows on self hosted open models, that's another thing. I think that there's still, even to this day, even though I think there's plenty of reasons. You know, one thing I always ask companies when they're like, oh, we don't run this through AI, right, because it's sensitive data. And I'm like, okay, well, do you have a cloud provider? And they're like, of course. It's like, okay, well, it's the same thing, more or less, right? As long as you take proper precautions, turn off model training and all that, it's more or less using the same grade of security that, you know, the cloud uses anyway. There are still some things that companies won't even put on the cloud, right? Which I understand. 
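The emergency-room triage idea maps naturally onto a tiny routing rule. This is an illustrative sketch only; the tier names, task attributes, and thresholds are made up for the example, not a prescribed policy:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    customer_facing: bool = False   # output leaves the company
    sensitive_data: bool = False    # data can't leave your infrastructure
    needs_reasoning: bool = False   # hard, high-value work

def route(task: Task) -> str:
    """Send each back-end task to the cheapest tier that fits its risk."""
    if task.customer_facing or task.needs_reasoning:
        return "premium-closed"   # indemnification + strongest reasoning
    if task.sensitive_data:
        return "local-open"       # self-hosted, data stays on prem
    return "cheap-open-api"       # high volume, low stakes

print(route(Task("classify support tickets")))                       # cheap-open-api
print(route(Task("summarize internal memos", sensitive_data=True)))  # local-open
print(route(Task("draft client contract", customer_facing=True)))    # premium-closed
```

The point of the sketch is the ordering: risk and sensitivity gate the decision first, and cost efficiency is only the default for whatever is left.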
But having these now extremely powerful and extremely efficient open source models, now you can start running those private workloads or workflows on prem, right, or, you know, self hosted, so that you can fully control them. And then you can reserve those more premium, those more high value, highly sensitive tasks for those models that can reason and think and offer kind of that level of security and legal support that you don't get if you opt for open models instead. So, you know, I have a nice little chart here. So maybe for the cheap open APIs, you look at simple tasks like summarization, extraction or classification. For local self hosted open models: private workflows, great for that, running agents locally, having more control, right? And then for the premium closed models, which are the ones that a lot of people are using on the front end, on the back end you should still be using these for a lot of reasons, right? Those tasks that, well, carry a lot of business value. Hard tasks that require reasoning, right? Your final review. Maybe you do, you know, a draft version either with a cheap API or a local self hosted open model, but anything, you know, that requires customer facing output should probably be going to those premium closed models for that level of protection. Like I said, you cannot overlook the hidden trade off: these open licenses may disclaim warranty and non infringement, where the enterprise offerings do usually include that IP indemnification. So for regulated work, that protection in almost all cases justifies the premium that you pay. However, as we wrap up here, let me just quickly encapsulate all of this. Local models aren't going anywhere, all right? And I actually think especially as we officially welcome in the era of models that can improve themselves and create, you know, smaller versions of themselves, right. 
All the big companies have essentially, you know, hinted at RSI, or, you know, the fact that their big models make smaller versions of themselves. I think that we're going to not only see a continued trend toward local open source models, I think we're going to start seeing a lot of smaller models for very specific use cases. It's something I've been, you know, predicting now for multiple years. We've started to see it slowly. I think it is going to pick up steam now that we're starting to get some hints of recursive self improvement with these models. So your company has to be paying attention, because this is a trend that is not going away. The open models are going to become more and more capable, they're going to become faster, they're going to become more efficient, and the options are going to start to become even greater, right? Not just great general purpose open models that can run on consumer hardware like Gemma 4, but small open models for very specific tasks that can be highly valuable for your company. So you have to understand the pros and the cons of these local models, when you might use a cheap API, and how this changes the agentic outlook for your company. So don't write them off just because you always want to use the latest and the greatest. Yes, you should do that. But don't send the neurosurgeon to, you know, triage a basic thing happening in the waiting room. Send the right model at the right time for the right purpose. So I hope this was helpful as we recapped open source AI 101 as part of our Start Here series. If this was helpful, number one, make sure you subscribe to the podcast. I'd appreciate that. But then make sure you go to starthereseries.com. That's going to give you free access to our exclusive inner circle community. Right now there's no other way to join except by going to starthereseries.com, so make sure you do that. Thank you for tuning in. I hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
Adobe Firefly AI Assistant Announcer
Meet Firefly AI Assistant, now live in Adobe Firefly, the all-in-one creative AI studio. Just describe what you want to create in your own words and the Assistant handles the rest, orchestrating multi-step workflows across Adobe Creative Cloud apps including Photoshop, Premiere, Express, and more. In one conversational interface, you direct the outcome while the Assistant accelerates execution. Stay in control with the ability to step in and refine at any time. See it today at firefly.adobe.com.
Jordan Wilson
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.
Start Here Series Vol 24 – May 12, 2026
Host: Jordan Wilson
In this episode of Everyday AI's "Start Here" series, host Jordan Wilson demystifies the recent paradigm shift towards open source AI and explores why local models, cheap APIs, and AI agents are fundamentally changing how enterprises—and everyday users—deploy and manage artificial intelligence in 2026. Through clear explanations and real-world analogies, Jordan breaks down the technological, economic, and legal factors behind this shift, showing listeners how to triage their AI workloads and make smarter, safer decisions.
Quote:
"Enterprise companies weren't saying let's use an open source AI model in production. Today it's actually happened. It's happening."
— Jordan Wilson [01:38]
Quote:
"Open source models are ones that you can download, you can modify... In some instances, you can even build products on top of it."
— Jordan Wilson [07:47]
Quote:
"Gemma 4... put frontier capability on literally a laptop... two years ago to run something like a GPT-4O level model, you would need a small little data center... Now you have these capabilities."
— Jordan Wilson [18:23]
Quote:
"DeepSeek V4 Pro... now lists their price at 43 cents per million token inputs and 87 cents per million token outputs. That's like more than 25 times cheaper than the premium... proprietary models."
— Jordan Wilson [21:31]
Quote:
"You wouldn't send your top neurosurgeon in when someone's having an allergic reaction to honey… You need to think about sending those high volume, low stakes work… to cheaper open source models."
— Jordan Wilson [29:52]
Quote:
"Local models aren't going anywhere… we're going to start seeing a lot of smaller models for very specific use cases… I think it is going to pick up steam now that we're starting to get some hints of recursive self-improvement."
— Jordan Wilson [35:02]
Quote:
"You lose that legal protection that maybe you've had to rely on... at that point you're going to actually be paying for it."
— Jordan Wilson [30:15]
| Timestamp | Topic / Segment |
|-----------|-----------------|
| 00:46–04:36 | Paradigm shift: open vs. closed source AI in 2026 |
| 06:46–10:46 | What is open source AI? Technical breakdown |
| 10:46–15:46 | Arena/ELO scores, shrinking model performance gap |
| 15:46–19:40 | Gemma 4 and local model capabilities |
| 19:40–23:36 | China & model distillation, global price crash |
| 23:36–29:00 | Always-on agents, model triage, practical outcomes |
| 29:00–32:00 | Legal safety nets: lost in open source |
| 32:00–36:40 | Strategic selection & future outlook |
Jordan Wilson encourages both beginners and experts to embrace a pragmatic, triaged approach to AI adoption in 2026—leveraging local models and cheap APIs for rapid, cost-effective scaling, but with an eye on legal exposure and strategic fit. The future is diverse, with mixed-model stacks and specialized agents becoming the new norm.
For more, join the Start Here Series Inner Circle: starthereseries.com