Transcript
A (0:00)
The wait is over. Dive into Audible's most anticipated collection, the Best of 2025. Featuring top audiobooks, podcasts and originals across all genres, our editors have carefully curated this year's must-listens. From brilliant hidden gems to the buzziest new releases, every title in this collection has earned its spot. This is your go-to for the absolute best in 2025 audio entertainment. Whether you love thrillers, romance or nonfiction, your next favorite listen awaits. Discover why there's more to imagine when you listen at audible.com/BestOfTheYear. Mistral AI has just...
B (0:41)
Released a brand new model, Mistral 3. This is what they're calling a 10-model AI suite. It's pretty interesting. They're making some attempts to redefine having these open source models built into enterprise, making them more efficient. They're doing a bunch of really interesting things that you don't typically see out of Silicon Valley. We're going to get into all of that on the podcast today, but before we do, I wanted to mention: if you want to try out all of the latest Mistral models, everything from Google, OpenAI, Anthropic and all the other top AI model creators without having to subscribe to all of them, go check out AI Box AI. That's my own startup. And for $20 a month you get access to over 40 of the top AI models. So anytime I'm talking about a new model and its new capabilities, you can go check it out on the platform and try it side by side with all of the other models. You can compare the results of one question against a bunch of different models and see which one you like the best. So if you want to go try it out, we have an AI Box playground at AI Box AI. I'll leave a link in the description. Okay, let's talk about what this French AI startup Mistral is creating right now. I also have to say, anytime it's around Christmas time and I'm talking about France, deep down in my heart it kills me a little bit, because when I was a teenager I did a volunteer mission for my church, and one of the areas I was in was Strasbourg, France. At the time it was the Christmas markets, and if you have ever been to France at this time of year, the Christmas markets are absolutely insane. So I am wishing I was over there right now. But nonetheless, news about a French AI startup at this time of year is going to have to do for me.
So what I think is really interesting here is that Mistral is definitely pushing back against what most of Silicon Valley has been preaching for a long time, which is this scale-at-all-costs philosophy. They just released this new Mistral 3 family. It is essentially a collection of 10 different open weight models, and one thing I think they're doing is challenging the idea that AI innovation is synonymous with these massive closed systems controlled by a handful of American companies. We see this with Google and OpenAI and Anthropic and a bunch of the other top US players. I appreciate that Mistral is doing this in an open source way, so I think that is a big step for them. This new release has one new frontier model, their best model, the one doing the best on the benchmarks and everywhere else, and it has multimodal and multilingual capabilities. So a lot of different languages, and you can input text, audio, all the different modalities. They also have nine efficient small models that can run on consumer grade hardware. This is, I think, a really interesting, no pun intended, edge case for them that will give them a slight edge. The launch is, I think, Mistral's most aggressive attempt yet to position itself as one of the top voices for open source AI models. It's really customizable, it's ready for enterprise adoption, and that has been their path: going to the enterprise. Just two years ago they had already raised $2.7 billion and had a valuation of $13.7 billion. So this isn't a small company. Mistral's been doing a lot for a very long time. It's just not the one we typically talk about.
They're probably not talked about as much over here in the US as in Europe, even with the numbers they've put up. I mean, raising $2.7 billion when it's only two years old. All of that seems like very small numbers when you're comparing this to OpenAI and Anthropic, who have raised tens of billions of dollars, right? But I will say that Mistral is definitely betting that even though it is smaller and has raised less money, that's actually a strategic advantage. They're leaning into running a leaner company, making their models more open, more cost effective, more deployable. These are not monolithic frontier systems, and I think overall their strategy shows that there is a growing countercurrent in the enterprise market: businesses are looking for something they can run on premises, or in their own private clouds, so you're not having to suffer from some of the latency issues you might see using one of the big frontier models, or the costs, which are obviously quite staggering depending on the scale of your usage. And of course you can't customize the closed models as much. With an open source model, you can put your own data in there; maybe it's private, maybe you're in a regulated industry. Here's something that Mistral's co-founder Guillaume Lample said recently about this. He said customers are sometimes happy to start with a very large closed model that they do not have to fine tune. Then they deploy and realize it's expensive and slow. That's where they come to us, to fine tune small models that handle the use case more efficiently. And I think this is actually, overall, a really good strategy.
It's the recommendation I make to most organizations that come to me when I'm doing consulting or other things: grab something like ChatGPT, grab something like Anthropic or Gemini, use the API, build out your use case, and then once you have it set up and running and you've determined this is the direction you want to go, come and fine tune a model after the fact. Do that if you want some of those cost savings, some of those speed savings, or if you have a special use case with private data that you can't put into ChatGPT or something else like that. So I think this is essentially where Mistral is trying to close the gap and fill in the hole. Now, what I will also say is they have different ways that they charge. They do have some variation of a ChatGPT competitor, where you can pay monthly to use their best model. They also have the ability where they'll help you fine tune and train. OpenAI has those same programs, and OpenAI also has a top tier open source model that they released this year that did fairly well in a lot of the benchmarks. I think Mistral is competing with that specifically, because anyone can grab those open source models and deploy them themselves, and Mistral has a bunch of cloud services and fine tuning tools to help you do that. One thing that I thought was interesting about all of this is they said that in practice, the huge majority of enterprise workloads can be solved by small models if you fine tune them. And the reason why I think this is so important is because we're really used to using these really big models; obviously we just want the latest thing from ChatGPT or Gemini. But they are really computationally intensive, they use a lot of power, and they have a lot of costs associated with them.
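The "prototype on a big hosted model, then swap in a fine-tuned small one" path described above can be sketched as a pattern. Everything here is hypothetical: the class, model names, and per-token prices are illustrative, not a real SDK or real pricing. The point is just that if your app talks to one interface, the later swap is a configuration change rather than a rewrite.

```python
# Hypothetical sketch: one interface, two interchangeable backends.
from dataclasses import dataclass

@dataclass
class ModelBackend:
    name: str
    cost_per_1k_tokens: float  # illustrative numbers, not real pricing

    def complete(self, prompt):
        # Stand-in for a real API call or local inference run.
        return f"[{self.name}] response to: {prompt}"

# Phase 1: validate the use case on a large hosted frontier model.
prototype = ModelBackend("large-hosted-model", cost_per_1k_tokens=0.01)

# Phase 2: workflow proven, point at a fine-tuned small model instead.
production = ModelBackend("fine-tuned-small-model", cost_per_1k_tokens=0.001)

# The calling code never changes between phases.
for backend in (prototype, production):
    print(backend.complete("Summarize this support ticket"))
```

The calling loop is the whole argument: nothing downstream of `complete` cares which backend answered, so the cost and latency savings come for free once the small model is ready.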
And so if you can fine tune a model that's much more efficient, which is what Mistral offers, you save a lot of money, a lot of power, a lot of cost. Especially for these large enterprises, I think this is kind of a no brainer; this is the direction you should move in in the future. Right now this is kind of a big flagship launch, right? This is Mistral Large 3. It is a frontier model that essentially helps a company go up against GPT-4o or Gemini 2 or Llama 3. It's not GPT-5, but GPT-4o is a great model, and that's what it's competing against. It is one of the first fully open weight frontier systems that has multilingual and multimodal reasoning in the same model, so I do think that is an impressive feat. It's like you're basically getting GPT-4o, but you can put this on your own server somewhere, running without having to pay an API fee to OpenAI all the time. It does use what they call a granular mixture of experts. Mixture of experts is something we've been talking about for a long time on the show, but essentially, when you ask an AI model something, it has kind of a screening network that determines which specialized sub-model could answer that question best. A crude example: you might have a math expert and a science expert and a creative writing expert, models tuned for those use cases. The screening network takes the question you ask and routes it to the expert it thinks can answer best. Sometimes it will actually send it to multiple experts; they each give a response, and the best response is determined based on how they all answered. So there's a bunch of cool things being done here.
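The routing idea above can be sketched in a few lines. This is a toy illustration, not Mistral's actual implementation: the expert count, the raw scores, and the top-2 choice are all made up, and in a real model the gate scores come from a learned network inside each layer rather than a domain-level screener.

```python
# Toy mixture-of-experts routing: score experts, keep the top-k,
# renormalize their weights so they sum to 1. Illustrative only.
import math

def softmax(scores):
    """Turn raw gate scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(token_scores, k=2):
    """Pick the top-k experts for one input and renormalize their weights.

    token_scores: this input's raw affinity for each expert (in a real
    model these come from a learned gating network, not hand-set values).
    """
    probs = softmax(token_scores)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# Four hypothetical experts; this input leans toward experts 1 and 3,
# so only those two would actually run.
chosen = route([0.1, 2.0, -0.5, 1.2], k=2)
print(chosen)  # expert indices with their routing weights
```

Because only the chosen experts execute, most of the network stays idle on any given input, which is where the efficiency win comes from.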
But at the end of the day, this is the mixture of experts approach now being built into this open source model, which makes it very useful. That design activates 41 billion parameters out of the 675 billion parameter pool it has access to, so it's interesting the way it does that. I think the structure allows it to have really great capabilities while also being very, very efficient. Combine that with a 256,000 token context window and you can give it a ton of data and it can still understand what's going on. I think it's really setting itself up for the agentic workflow space, document analysis, complex reasoning, software development, some of this high volume content automation that people are looking to do more and more. And I think they're setting themselves up to compete at a high level, especially considering they're an open source model. For me, one of the most interesting signals from this launch is Mistral's growing focus on physical AI and also edge deployment. They're integrating their small models into a bunch of different industries, including robotics, defense tech, vehicles, and industrial systems. And again, they're looking at areas that OpenAI isn't targeting heavily. I think this is something Anthropic does really well when they target the developer community. Everyone has to carve out something and approach some of these problems that might not be the biggest mass market thing that OpenAI or Google are approaching, and I think they're doing a fantastic job at this. Some of the partners they're currently working with: Singapore's HTX, which is working with Mistral on robotics, cybersecurity models and emergency response systems; and a German defense startup called Helsing, which is collaborating on drone-focused vision language action models.
There's also Stellantis, which is exploring an in-car AI assistant based on Mistral's small models. These are really cool, because these companies, imagine a car for example, can grab one of the open source small models and fine tune it to understand what needs to be done in the car. There aren't that many things: turn the AC up or down, set it at a certain level, adjust the volume of your music, go find a specific piece of music. There's only a handful of things the car can do, and these small models could do them quite well, without ever having to access the Internet or pay some sort of API fee forever. So I think these are really cool use cases. And especially when you start talking about Helsing, for example, doing defense: a lot of times you might want to send off some sort of autonomous drone to go do reconnaissance or something. We see the problem in Ukraine right now with Russia, where all of the drones are getting jammed. So it's tricky; they have to attach the drones to fiber optic cables that literally spool out behind the drone as it flies so that you can't jam its signal. There are all these crazy workarounds. But if you think about it, having an onboard AI model that could run a drone or any sort of tool without an Internet connection, without being able to be jammed, would be a big competitive advantage. It's also sort of terrifying to think that, theoretically, China or anyone could release a million drones into America with AI models running on them. You can't jam their signal; there is no signal. They were programmed to do something, and the AI model on board acts like a human on board, intelligence on board, to go complete that mission without any sort of signal that could be intercepted or stopped.
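As a toy illustration of why a constrained in-car task suits a small offline model: the assistant only has to map speech to one of a handful of intents. Below, a simple keyword lookup stands in for the fine-tuned model, just to show the flow runs with no network access at all; the intent names and commands are made up.

```python
# Hypothetical offline intent routing for a car assistant. A fine-tuned
# small model would replace this keyword table; the surrounding flow
# (utterance in, local command out, no network) stays the same.
CAR_INTENTS = {
    "ac": "climate.set",
    "temperature": "climate.set",
    "volume": "audio.volume",
    "play": "audio.play",
}

def handle_utterance(utterance):
    """Map an utterance to a car command, entirely on-device."""
    for word in utterance.lower().split():
        if word in CAR_INTENTS:
            return CAR_INTENTS[word]
    return "assistant.clarify"  # fall back instead of guessing

print(handle_utterance("turn the AC up"))    # climate.set
print(handle_utterance("play some jazz"))    # audio.play
print(handle_utterance("open the sunroof"))  # assistant.clarify
```

The closed world of commands is exactly what makes a small fine-tuned model sufficient here: there is no open-ended question it has to answer, so there is nothing a giant hosted model would add.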
So that is definitely something to be concerned about, but it is also a competitive advantage when you think about making capable tools and weapons. So, very interesting. It seems like Mistral is really carving out an interesting niche for itself. I think they're definitely keeping pace with a lot of industry leaders, specifically in open source. They're not trying to beat OpenAI or Gemini at the biggest mass market thing, but they are carving out some very interesting, unique use cases that are very powerful, and I think they're doing those well. Thanks so much for tuning into the podcast today. If you learned anything new and you would like to say thank you, the number one way I would appreciate it is if you could leave a rating and review on the podcast. If you're listening on Apple, you can hit some stars; if you're listening over on Spotify, it's on the About tab. Either way, it really helps out the show a lot to get reviews from amazing people like yourself so that other people can see it. The number one way the show gets found and ranked is by reviews, so if you leave a review, it'd mean the world to me. Thanks so much, and I hope you have a fantastic rest of your day. Don't forget to try AI Box AI.
