A
Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at PracticalAI FM. Now onto the show.
B
Welcome to another fully connected episode of the Practical AI Podcast. Sometimes we do these fully connected episodes where it's just Chris and I, no guest. We talk about what we want to talk about and hopefully keep you up to date with some of the things we've heard in relation to news and trends with AI, but also talk through some things that even we're learning and trying to parse our way through on topics related to AI, machine learning, and data science. So I'm here with Chris, my co-host, who is a Principal AI and Autonomy Research Engineer, and I am Daniel Whitenack, CEO at Prediction Guard. How are you doing, Chris?
C
Doing good. How's it going?
B
It's going good. It's really interesting that some of the questions you brought up to me over this last week seem to have cropped up in a lot of the conversations I've been having, so I'm excited to chat through those. But yeah, getting into summer, I feel like I've had some creative space at the end of this week to think through some interesting things related to my company and work. So that's been really good. And yeah, coming into the summer with some good energy. What about on your end?
C
Same here. So much has happened both in my own life and just out there in the news. We'll talk about some of that stuff today. Autonomy, just the general autonomy space is just exploding in a good way, not exploding in a bad way.
B
Hey, this is an off-the-cuff question which I didn't plan on asking, but I was curious about your take. One of the things that I've sometimes been saying, while also wondering whether it's actually true, is that this whole world of physical AI is very much a trend that we're seeing this year: AI going from being essentially located in people's cloud environments to being embedded around us in our daily lives. You're much closer to that space than I am, and probably following more things in terms of the market and how you see it. How do you think about that physical AI, embedded AI world? What I'm referring to here is AI, of course, in retail kiosks and stuff, or on the manufacturing floor, in our glasses, or whatever devices, cars, et cetera.
C
I think it's a fascinating space to be in right now. It is so much in its infancy, because whole industry segments are being developed for every existing industry, and so you have so many organizations out there. In the scheme of things, I see, or at least deeply see, a fairly small sliver, in that I'm in defense and intelligence and national security use cases. But that's only one little tiny place. It's exploding in retail, it's exploding in marketing. Our local Walmart has a company they're partnering with, which I forget the name of, that's doing drone deliveries. I know you have the same where you're at, and robots going around. Two years ago it was surprising to walk into a retail establishment, maybe a restaurant or something, and have a robot go by you; it was a real novel thing. These days, maybe it's just where we're at, and I don't know how widespread it is, but it doesn't faze me at all, and I doubt it fazes you, to see these moving around. So it's coming on really fast, and we have just barely scratched the surface on where that's going to go. I keep telling friends and family that they still think of it largely as very futuristic, but it's now here, and it's now beyond the vacuums we have running around our houses. So I think over the next year or two you will genuinely see so many offerings. It will be common to go into Walmart or Costco (you can tell where I shop) or lots of retail establishments and have these both built into products and helping customers get the experience they need.
So there are just so many avenues. We have talked on the show to people in autonomous vehicles and things like that, but I think just the sheer number of possibilities is the thing we'll see dramatically change over the next year or two as a lot of companies get into it. So I'm really excited about it. I love looking forward and seeing really great use cases, and I get really inspired as, like everyone else, we go through the AI news that's out there and see some of the cool things people are thinking of. So I'm very optimistic about it. I'm looking forward to it. I love the work that I do; I don't talk about that very much, especially on the show, but it's really fascinating. I love doing this kind of work and being part of that. So yeah, back to you. That's where I'm at.
B
I appreciate the update on that space. It's something I'm excited about as well, and I know it's developing, but it isn't always in people's view in terms of the mainstream stories in relation to AI.
C
It's really democratizing AI for so many people, because, just as a quick follow-up, the models that you're using for these things are much smaller, to be able to fit on hardware. And we've mentioned before that there is a microelectronics revolution going on side by side. The general public, I think, mostly sees AI because that's what the news coverage is, but there's also this massive revolution in microelectronics happening, and the distinctions between different types of computing capabilities are blurring a lot. What's a GPU? What's a CPU? There's a whole array of other names that you can call different types of chips, and those are all kind of merging. And that ability to do things in a smaller context, a low power context, with smaller models that are designed for very specific use cases, is really opening that up. It now means that anybody out there with an entrepreneurial bent can, without having to invest in massive cloud resources, kind of tinker. They can say, well, I'm going to spend maybe a few hundred dollars, buy a few things here and there, download a model, do some work on it, and see if I can make something that nobody else has done. And there are so many opportunities for that. So, to go straight to the learning thing, I really encourage people to explore their passion in the area of their own interests and see what they might be able to do. Because this is the moment; we're definitely in the wild west of physical AI.
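As a rough illustration of why smaller models open up low-power hardware: the dominant memory cost of running a model is just parameter count times bytes per parameter. This is a back-of-envelope sketch with illustrative numbers, not tied to any specific chip or model:

```python
# Back-of-envelope memory footprint: parameters x bytes per parameter.
# Real deployments also need memory for activations and KV cache,
# so treat these as lower bounds.

def model_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes."""
    return n_params * bytes_per_param / 1e9

# A 7B-parameter model at 16-bit precision vs. a 0.5B model quantized to 8-bit
print(model_size_gb(7e9, 2))    # 14.0 GB of weights: needs serious GPU memory
print(model_size_gb(0.5e9, 1))  # 0.5 GB of weights: fits on modest edge hardware
```

Shrinking parameter counts and quantizing to fewer bits per parameter is a big part of what makes the "tinker for a few hundred dollars" scenario possible.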
B
Yeah. And you mentioned something which is kind of the topic of some of what we've been discussing over the past week. Sometimes it corresponds to those smaller models you were talking about; sometimes maybe they are large models. But this is a topic we haven't updated in a while, which is the state of open, or open weight, or open source models, depending on what you call them, versus closed, or proprietary, or productized, or third party models; these are called different things. We've been going back and forth because, seemingly, the gap in performance between open models and closed models was closing for some time, and I think there's an open question now: hey, is that still the case? Has that changed? What's been updated there? Maybe just to start out, though, for listeners that aren't as familiar with this: you might have heard the distinction, but you might not practically understand what it means, and I find this to be continually confusing for many people. Let's take an example that I often use, which is a DeepSeek model. So let's say that you have a DeepSeek model, an LLM or a vision model or whatever it is. This is a model that was built or trained by a particular vendor, and DeepSeek in this case is the vendor that creates the DeepSeek model. OpenAI creates GPT models, Anthropic creates Claude models, et cetera. But let's say we take a DeepSeek model. That DeepSeek model goes through a sort of offline training process, which is the creation process of the model. And what pops out the other end of that process is not data, not code, but kind of a combination of both. You have a set of what's called weights, or parameters, of the model, which parameterize how the model behaves. It defines how the model behaves.
It's just a set of numbers, essentially. And those numbers are then loaded into code which runs the model, or runs the model architecture, which is the structure of the model: how those numbers fit in, how those parameters operate. You need both that set of parameters and the code to run the model, and if you have both of those things, then you can put input into one side and get output out of the other side. You put text in one side and generate text out the other side, or whatever your model does. So let's say DeepSeek, a vendor, creates a model through a training process that results in this set of parameters and a set of code that runs those parameters to actually operate the model. Now, from there a few things happen, or could happen. That vendor could decide to release the model in a productized, closed way. In other words, they could create a SaaS product that you just access over the Internet. You go to a website, whatever, deepseek.com, or I forget what their website is, chat.deepseek.com; you create an account, you log in, and you interact with a SaaS product that somewhere under the hood calls that code that's parameterized by those parameters and runs the model. Now, it also runs all sorts of other things related to the actual product, but somewhere in there is the model that you're accessing. That's thing one that could happen: the SaaS, productized version of the model. Thing two that could happen is they could also release kind of more direct access to the model via an API, but it's still productized access to the model. So you could connect to whatever it is, api.deepseek.com, and say, hey, run this input through the model, and then that API gives you back the output. That's running through the vendor's infrastructure; they're running that code on their side with those parameters, and you're getting the output.
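That second option, productized API access, can be sketched like this. The endpoint path, model name, and payload shape below are assumptions modeled on the common OpenAI-style chat API that many vendors follow, not confirmed details of any particular vendor:

```python
import json
import urllib.request

# Sketch of productized API access: the model runs on the vendor's
# infrastructure; you send input over HTTP and get output back.
# Endpoint and model name are illustrative assumptions.

def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-style chat payload, which many vendors' APIs follow
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_vendor_api(base_url: str, api_key: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("deepseek-chat", "Hello!")
print(payload["messages"][0]["content"])  # the input you'd send to the vendor
# call_vendor_api("https://api.example-vendor.com/v1", "YOUR_KEY", payload)  # network call
```

The key point is that the weights and inference code never leave the vendor's side; all you ever see is this request/response surface.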
So all of that is happening still on the vendor side, the model vendor side, the model builder side. Another thing that could happen is that DeepSeek releases the model weights and the model code in some way publicly or openly: open sources it, or makes the weights open (open weights, or parameters). Most of the time that happens on a website called Hugging Face, which is a repository of models. They could release this under some license such that you could, on your computer or on your server or in your cloud environment, actually spin up the code and run the model yourself in your environment. So that's the next thing that could happen. Then finally, if that happens and DeepSeek releases their model code somehow, other vendors besides DeepSeek might choose to run that model and offer their own kind of productized version of access to it. This would be like AWS Bedrock running a DeepSeek model on their servers, where you connect through an AWS endpoint in your AWS VPC to access the model, which you are not running yourself on servers in AWS or anywhere; AWS is managing that on your behalf as a managed service. Or this would be like Together AI or whoever's running the model.

So, just in summary, there are kind of those possibilities. Now, when we talk about an open or a closed model: the open model at some point goes through that process of having the model weights and code opened to the public so that people can run it, sometimes under a permissive license, sometimes a non-permissive license, which opens up this wider range of possibilities of how you can use the model. With a closed model, those weights and that model code, that inference code, would never leave the vendor's infrastructure, on purpose, because they consider that their IP. So you could connect to their SaaS product and interact with the model, maybe in a no-code way, or you could interact with their REST API to use the model. But that model interaction is always happening kind of behind the curtain in a productized thing. Okay, I feel like I rambled there for a bit, Chris.
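The weights-plus-code split described above can be shown in miniature. This is a toy, hypothetical sketch: "training" produces a file of numbers (the weights), and separate inference code loads them and runs the architecture, here just a single linear function. A real LLM is the same idea with billions of parameters and a far bigger architecture:

```python
import json
import os
import tempfile

# Toy illustration of the weights-plus-code split.

def release_weights(path: str, weights: dict) -> None:
    # What a vendor publishes when it open-sources the weights
    with open(path, "w") as f:
        json.dump(weights, f)

def load_weights(path: str) -> dict:
    # What you download, e.g. from a repository like Hugging Face
    with open(path) as f:
        return json.load(f)

def run_model(weights: dict, x: float) -> float:
    # The "inference code": architecture y = w*x + b, parameterized by weights
    return weights["w"] * x + weights["b"]

path = os.path.join(tempfile.mkdtemp(), "weights.json")
release_weights(path, {"w": 2.0, "b": 1.0})  # vendor releases the numbers
params = load_weights(path)                  # you obtain them
print(run_model(params, 3.0))                # 7.0 -- inference runs entirely on your machine
```

Once you hold both artifacts, the weights file and the code, nothing about inference requires the vendor's infrastructure, which is exactly what "open" buys you.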
C
That was a great explanation. Yeah, I think you nailed it. I think the relevance of going through that is that we are potentially at a bit of a junction right now. One of the things that has happened in the news that is directly relevant to that explanation is the fact that Meta, the parent company of Facebook and Instagram and WhatsApp, has long been, at least within Western countries and the United States, kind of the champion of open source models, in terms of both the software and the weights being available. So you could download them and run them in your own infrastructure, and for a lot of other developers out there, and the organizations they represented, they were one of the go-tos in terms of being able to drive a lot of that. There was a bit of drama in Meta; there's been lots of drama in Meta over time, actually, but one of the big dramas was not too long ago, several months ago. Yann LeCun, famously one of the three godfathers of AI and a major luminary in the field, had spent a decade at Meta doing research, and part of the agreement on him staying at Meta during that time was that the models they had would be open sourced. That is kind of the underlying reason, as I understand it, why the Llama family has always been open source and available. And for a long time, that was the company's position. So a bunch of drama happened a while back, Yann LeCun left Meta, and a lot of change ensued in terms of how Meta was approaching AI. As part of that, during that time period, especially more recently, Llama started trailing farther and farther behind other frontier models in terms of performance. It used to be right up there; it wasn't the top, but it was within striking distance.
And being open source with a major company behind it was one of the things that allowed anyone to download Llama, and it also provided good learning for other companies doing open source. And, as we talked about earlier in this show, for a long time the open source frontier models were kind of closing in on the closed source ones. What's happened is that Meta has basically abandoned Llama. What is already there will remain open source; what kind of updates it gets, who knows if there are any. But they have turned to a closed source model family now, called Musespark, and that's where all of their effort will be going forward. So essentially that puts them back into the same closed source space that we see with OpenAI and Anthropic and the Gemini models from Google, among others. From the Western countries' perspective, at least, that's a bit of a blow, and people right now are looking around at what they can do in the open source context. Going back to all this physical AI we were talking about: how will that affect a lot of the things people want to do there? And right now it's interesting, because I will finish, without diving into it, by noting that, as someone in the industry I'm in, there is at least a perceived national security interest in having Western-created models that are open source out there, and that are kind of preferred over, specifically, models from China. Though at this point, if you look at leading open source AI models, China is definitely taking the lead in that
B
space by a good ways.
C
By a good ways. And so that comes down to whether there is concern in Western countries about potential security issues associated with those models. That's out of my specific expertise, so I don't have any comment or thoughts about that right now. But it presents a bit of a conundrum. So that's kind of where we are right now: what shall open source AI developers do going forward, where are they going to turn, and also what will be acceptable to their customers? In this particular country, in the US, you're not going to create a solution based on an open source model from China and expect to sell it to the US government; it's very unlikely to happen. So it definitely has some fairly big business ramifications in terms of the options available.
B
Yeah. And if we were to try to define how it is that we measure this gap, whether open models are as good as closed models: for those out there that aren't familiar with this space, generally that has been done in the past via benchmarks. These are data sets where there's an example input and some expected output. You run your model with the input, see what the model produces as the output, compare the two, and then there is some scoring metric to tell you how well you did. So those are benchmarks: MMLU, SWE-bench on the coding side, scoring arenas that are interesting if you look them up, graduate reasoning and math benchmarks, all of these sorts of things. And generally, if we look at the more recent closed models, Opus, GPT-5.5, blah blah blah, versus the more recent open models, things like Kimi models and Qwen 3.5, there is still a gap, with the open models scoring a few percentage points, to maybe a little bit more, lower on these benchmarks than the state of the art closed source models. I guess, and I don't know if it's a hot take, Chris, but maybe my divisive question would be: who cares about benchmarks, and why does this even matter? Or maybe another way to put it: it's fun to think about all of these benchmarks, but it has nothing to do with the real world.
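The run-compare-score loop described here can be sketched as a toy benchmark harness. The stand-in "model" and the three-question dataset are made up for illustration; real benchmarks like MMLU follow the same pattern at scale, often with more elaborate scoring than exact match:

```python
# Toy benchmark harness: run a model on example inputs, compare each
# answer to the expected output, and report an accuracy score.

def exact_match_accuracy(model_fn, dataset) -> float:
    """Fraction of (input, expected) pairs the model answers exactly right."""
    correct = sum(1 for inp, expected in dataset if model_fn(inp) == expected)
    return correct / len(dataset)

def toy_model(question: str) -> str:
    # Stand-in for an LLM call; a lookup table for illustration only
    return {"2+2?": "4", "capital of France?": "Paris"}.get(question, "")

dataset = [
    ("2+2?", "4"),
    ("capital of France?", "Paris"),
    ("3*3?", "9"),  # the toy model misses this one
]

print(exact_match_accuracy(toy_model, dataset))  # 2 of 3 correct, about 0.67
```

When a leaderboard says one model scores "a few percentage points" above another, it is ultimately reporting numbers produced by loops like this one, which is part of why the real-world relevance question is fair to ask.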
C
True, it's a good point. There are almost two considerations there. One is: how much do frontier models matter for most people in most use cases? Yes, all of us are using them for our chats and things like that. But, and I'm certainly coming at this from the physical AI side, it's not these giant frontier models that people are investing in for things like wearables and other small devices you may have, for things that are away from data centers, away from reliable power and comms and stuff like that. There's a whole world of models geared towards specific use cases, and therefore some of these big benchmarks we focus on in the news so often, hey, the new model is out, and these are the benchmarks, and look what it did against the existing incumbent; there's a bit of showmanship there as well. But I will say this. For companies that have been without some of these larger open source frontier models, like the position Llama held for a while: you have a lot of startups basing their companies on that access, and if you believe your company's products and services are dependent upon a frontier model, and the gap starts to grow again between closed source frontier models and open source, then you have a big risk in your business, in terms of hoping that those companies, their APIs, and the capabilities those companies are choosing not to get into, still suit your business. As an example, and I'm not particularly picking on anyone, this happens across all of them: two weeks ago Claude design came out from Anthropic, and we've had the same with OpenAI, which also has some new image capabilities.
And so if you've been building a company on their closed source API, you've just kind of gotten eaten if you were trying to provide those specific functions. So there's a lot of risk in entirely building a business exclusively on somebody else's business that may have an interest in taking over your thing. I would be very, very cautious myself before I dived into that territory.
B
So yeah, I love where you're going with this, because you've mentioned some things like Claude design, and we talked about Claude Code in a previous episode, which people can listen to; it was super interesting to get into the innards of that. But I think where I'm at on this whole question is: probably a year ago I would have really been rooting for the open model to win, and we're going to get there, and it's going to be all open models forever. I think now I'm at a point where my mind isn't really thinking about models, and the question of what model you're using is kind of irrelevant. It's not totally irrelevant, and the reason why is for the reasons you talked about. There are very specific cases where an open model is going to be your only choice, and many cases where it might be preferable, either licensing-wise or privacy-wise or whatever. So if you're working in an air gapped environment or something like that, you need an open model; in certain industries you'll need an open model; for certain latency or large scale processing cases, you'll want an open model because it's going to be way, way cheaper. So there are these cases where the open model clearly wins. But ultimately, I think the model is now a complete commodity. And so to me, it's like looking at another commodity. I'm in the Midwest; we have a lot of corn and soy around us, right? Okay, for the average person, what does it matter in their dish at a restaurant that includes some sort of tofu or soy or whatever, or let's say another commodity, corn? They don't care whether they got the premium corn or the mid-level corn or whatever.
For the most part they care how the dish was prepared, which involves a whole system of things that has led to the presentation of a great tasting dish for them, and which had way more to do with a bunch of other things than the commodity itself. And I think that's the situation we're in. The model, although it is a necessary piece of the puzzle, feels like such a small piece of the puzzle at this point. And this is what I keep coming back to on the Mythos thing, the Anthropic thing. The cybersecurity world is up in arms about Mythos, and I think they should be to some degree, because it's powerful, it's going to discover all these vulnerabilities, it's going to totally disturb how we need to do cybersecurity. And I think my response would be: well, whether Mythos is released or not, there's more than enough AI and agentic harness capability out there to already totally disrupt and transform how you need to do cybersecurity. We're already there. It may have a small bit to do with the model, but I think it has more to do with how people are plugging these models into agentic systems, how those are operated, and the transformational capability of those systems, which, whether it's an open or closed model, is going to be transformative even if another model is never released.
C
You know, a good illustration of the fact that that's happening right now: even if you look at these big, marquee names in the closed source model world that we all follow, yes, Mythos came out recently, well, it's not out yet, but Mythos is there, and there is a certain amount of news about that. But the place where they're really focusing, and the place where, if you're following AI news, you're seeing it, is how people are imagining new implementations that address business needs with these models. I just talked about Claude design a second ago, and it's not that they need a whole new model just for that. It's that they're taking existing models and building the infrastructure and workflows around them so that you can productively use those models for the thing that you need. And going back to your point, it points to that commoditization you're talking about when you recognize that, whatever model you're talking about, whether it's a big frontier model or a much smaller model for a specific purpose, there are models to work from out there, and more continually being developed. But the value people are focusing on in 2026, that I'm observing, is in developing the infrastructure and workflow to make it so that instead of being just a chatbot, it's now a hundred different products and services they can leverage. And that's where I think the smart thinking is right now, in terms of what can we do with what we have.
B
Yeah, and I've been really wrestling with this, Chris, because to your point, there are a lot of companies out there, and I've seen it, even some of my friends building companies, that are incredibly innovative and get sort of knocked out as one of the model vendors releases X new capability built into their stack, or it just becomes easier to do that with some other tool, or by building it yourself with Claude Code or something like that. And so I want to make sure that we are providing value to the industry. So I've also had to wrestle with this: what is going to survive in this world, and what is the value that people could build as they build ventures in this space? Someone I was talking to yesterday made a good point and illustration for me. If we go back to the world, Chris, that you and I both went through, of everything as microservices: we went into this world of, okay, now everybody's going to have microservices, and at first you have four or five, and then you have 10, and then 50, and then a hundred microservices, and then thousands of microservices. And if you're Uber or someone, you just have an innumerable amount of microservices. There are problems related to the complexity of operating in that environment which are really profound, which is why a product like Datadog or Splunk, which ties into all those endpoints and helps you monitor, do root cause analysis, and all those things, works. Number one, you would never think of building your own thing like that. And number two, once you adopt that product, it's very, very sticky; there's no way you can operate hundreds or thousands of microservices without it. It's super sticky.
So I think if we draw that parallel to the agentic world: a lot of people might be creating their very first agent right now. Like, I created an automation that does X. And if we look around that automation, there's kind of a harness around it, there are connections to systems. So that agent actually has a number of things in and of itself: it's multiple models, maybe an LLM, embeddings, a reranker; it's a connection to an MCP server or more; it's API calls; it's some workflow code; it's a user interface that people interact with. And so that's agent one. Then you have to imagine that, as these agents proliferate, you have tens of agents, hundreds of agents, thousands of agents all operating within your operational environment, your enterprise. If you want to do that and manage that complexity, I think those are some of the problems that are really going to be high value in this space. We're thinking about some of those as related to governance and policy enforcement and monitoring, but there are innumerable other problems there: how do you manage all those MCP servers, how do you handle agent to agent communication, how do you do goal tracking, all of those sorts of things. And if you put it in that context, yes, there are models operating in that environment, but they're operating as this embedded thing in such a distributed system of things that, yes, they have influence, but they have influence in a similar way to a dependency in a software product. Certainly a dependency in a software product has an impact, should be tracked, and might have vulnerabilities, et cetera, but the overall project is much more important than that individual dependency. And that individual dependency could, in reality, be swapped out for any number of things.
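The "agent = model plus harness" picture can be sketched as a minimal loop, where the model is one swappable dependency inside code that handles tools and workflow. The stub model and the single tool below are hypothetical stand-ins, not any real agent framework:

```python
# Minimal agent-harness sketch: the model decides, the harness executes
# tools and feeds results back. A real system would call an LLM (open or
# closed) where stub_model is, and real tools/MCP servers where TOOLS is.

def stub_model(prompt: str) -> dict:
    # Stand-in for a model call returning a structured decision
    if "weather" in prompt:
        return {"action": "call_tool", "tool": "get_weather", "arg": "Chicago"}
    return {"action": "finish", "answer": prompt}

TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}

def run_agent(model_fn, user_input: str, max_steps: int = 5) -> str:
    prompt = user_input
    for _ in range(max_steps):
        decision = model_fn(prompt)
        if decision["action"] == "call_tool":
            result = TOOLS[decision["tool"]](decision["arg"])
            prompt = f"Tool said: {result}"  # feed tool output back to the model
        else:
            return decision["answer"]
    return "step limit reached"

print(run_agent(stub_model, "What's the weather?"))  # -> Tool said: Sunny in Chicago
```

Notice that `run_agent` never cares which model is behind `model_fn`; the harness, tool registry, and the operational questions around thousands of these loops (governance, monitoring, goal tracking) are where the complexity, and arguably the value, accumulates.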
C
I'd like to expand a little bit on that last point that you made there, and that is the ability to swap it out. I think that is one of the things that we're seeing now. We've talked about 2026 being with Claude Code and others codecs from OpenAI as well as even open source models that are available for coding. And there's some really good ones. So that's. But one of the things that I'm seeing is the ability to kind of get through some of the crud work or make your systems more dynamic because you can take major components and redo interfaces and stuff like that. I've seen value added there, but it really calls out the fact that while that has changed in terms of how you might execute on something, you still have to produce value for what you're trying to accomplish, what your organization is trying to accomplish. I think while some of the tools that we have now are speeding things up, it still takes getting maybe a novel idea or maybe just an iteratively better idea than what's already out there. And being able to bring that to fruition maybe a little bit quicker these days than we used to do something that might have taken months, might only take a few days or weeks at this point with the new tooling. And I've been having a good time exploring that. But yeah, I mean, to your point, some things are changing But a lot of things are still the same and a lot of those fundamentals still apply. And I think early in the year everybody was wondering, things were moving so fast that as Opus came out and that kind of changed the way people were thinking about coding and producing products with whatever models they were using, open source or closed source. I think the thing that I've reached as we hit May at this point is that not everything has changed, that we got some cool new things to play with. But at the end of the day, you still have to work on something novel. I know without going into detail. 
The thing I'm working on is not something you could just prompt Claude Code to generate in a couple of prompts. There are still some novel ideas in it that will change the business it's trying to change. So the tooling has accelerated, but it hasn't really changed that fundamental, and as I've worked more and more on this, that's really been drilled into me.
B
Yeah, yeah. So maybe a good way to put it for people, kind of coming down to this, is that maybe we should be talking not so much about the open versus closed model gap, but about the development of this sort of agentic workforce and where the value lies now. I do like to acknowledge where it matters, as you did. There are clear cases where open models make sense: in terms of scale economics, when you have really high volume workloads; in places where you need data sovereignty or control; or where you need some sort of infrastructure alignment, like air-gapped scenarios. And there are places, I think, where closed models make more sense, especially where you don't have some of those infrastructure constraints, or maybe it's not a high trust environment in any way. They certainly offer products, right? These are closed products, and in the same way as any other closed product or managed service, you get a high level of reliability, you get SLAs, et cetera, things that make them highly reliable. There are certainly trade-offs for that, but it's worth acknowledging. But then you look at really the set of other things that happen in agents: RAG and automation and MCP tool calling and agent-to-agent communication and code generation. All of those things can reasonably be done with a whole host of models, depending on the kind of agent harnesses around them.
C
Yeah, I think that is the focus. If we could steer people in the right direction to be highly productive and not get caught up in the hype, as we kind of wind up here, it would be to focus on what your business and your needs are around these agentic harnesses, and how they can solve your business problems in novel ways, maybe iterative ways that people haven't thought about, and to worry a little bit less about the AI hype out there. Even though I'm guilty of opening it up talking about Llama.
B
Well, it's good. It sparked a good conversation, and it's good to chat about it, Chris. And who knows, maybe we'll be wrong and Mythology will release, and then we're back next week talking about how models matter so much and how we could have ever thought otherwise. I don't think we'll be there, but who knows?
C
I don't think so, but it's always fun. Things are moving so fast, and I hope folks are as inspired as we are about finding these new tools and new capabilities and going and doing something really cool with them. Let us know on social media. We're out there on all the usual platforms, so we're looking forward to hearing from you. I'm on Bluesky quite a lot, LinkedIn as well, and I'm really enjoying engaging in conversations to find out what people are doing.
B
Yeah, please share. And also a reminder for folks: we have set a date for the Midwest AI Summit, which Chris and I were both at last year, and which at least I will be at in the fall in Indianapolis. It's October 15th, and if you just search Midwest AI Summit or go to midwestaisummit.com, there's some early bird pricing right now. We'd love to see you in person as well; it's going to be a great set of practical discussions happening there.
C
Last year was a lot of fun, so I encourage people to show up.
B
All right, talk to you soon, Chris.
C
Take care. Bye bye.
A
All right, that's our show for this week. If you haven't checked out our website, head to PracticalAI FM, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show. Check them out at predictionguard.com. Also thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
Episode Date: May 7, 2026
Hosts: Daniel Whitenack (CEO @ Prediction Guard) & Chris (Principal AI & Autonomy Research Engineer)
Main Theme:
Exploring the shifting landscape of open vs closed AI models in 2026, the implications for real-world applications and AI development, and why the “model wars” narrative may be missing the true focus of value creation in artificial intelligence.
In this “fully connected” episode, Daniel and Chris dive deep into the evolving dynamics between open and closed models in AI, the rise of physical and embedded AI, and why focusing on raw performance and "model wars" may miss the more significant trends. They challenge prevailing narratives, discuss industry shifts (including Meta's move from open to closed models), and examine where future value lies—arguing persuasively that the commoditization of models means infrastructure, agent frameworks, and real-world integrations are now driving progress.
Connect with the hosts on LinkedIn, X, or Bluesky for more discussion, and check out the Midwest AI Summit (October 15, Indianapolis) for deeper, practical conversations on the future of AI.
If you haven't listened to the episode, this summary will give you a robust understanding of where AI industry minds are focused in 2026, and what truly matters beyond the open vs closed debate.