Transcript
A (0:00)
If you want to get access to this episode and my next 30 episodes, all ad-free, so there'll be no ads on them, go check out my podcast, AI Chat. You can search for that on Spotify or Apple; it's AI Chat. I'm going to post all of these news episodes there, and I'm also posting interviews. I just interviewed the CEO of Cohere, which has raised over a billion dollars for its AI models, and we talked about what they're going to be spending the money on and the direction of the AI industry, along with all of this new stuff. So if you want to check it out with no ads for free, it is AI Chat. Elon Musk and Sam Altman are both in federal court in Oakland for day three of the $130 billion trial that could force OpenAI back into nonprofit status and also remove Sam Altman from the board. Before that, Stanford's 2026 AI Index says transparency on frontier models just collapsed from 58 to 40 out of 100. Runway's CEO is positioning the company behind AI video toward world models at a $5.3 billion valuation. Ex-Twitter CEO Parag Agrawal just tripled his agent infrastructure startup's valuation to $2 billion in five months. And the White House is drafting an executive action to quietly walk back its Anthropic ban. We're going to get into all of that on the podcast today. The first one I wanted to cover was the Stanford story. Stanford HAI just dropped its 2026 AI Index, and we're seeing something very interesting in the report. The Foundation Model Transparency Index showed that the average score for how transparent these AI model companies are dropped from 58 to 40 out of 100 in the last year. The direct quote from the report is, quote, "the most capable models are now the least transparent." So we're talking about Google, Anthropic, OpenAI; all of them have stopped disclosing their dataset sizes, and they've also stopped disclosing how long the training runs were on their latest models.
So we basically have China right now narrowing the U.S. capability lead; they're something like 2.7% behind. We also have generative AI hitting 53% U.S. adoption, which is faster than the PC or the Internet. And at the same time, we have all of the labs at the top publishing less than they did a year ago. Stanford's Russell Wald said on X that capability is going up and the ability to understand the systems is going down at the same rate. So if you're an enterprise buyer, this is something I think is fascinating. When your procurement team asks for model card data, the answer is increasingly going to be, "we just don't share that anymore." And I think this is a very interesting new move. Next up, we have Runway CEO Cristóbal Valenzuela, who went on a podcast recently with Rebecca Bellan, I believe it was the TechCrunch Equity podcast, and basically said what a lot of people have been saying. In fact, I think I said this a couple of days ago on my show. But basically, the CEO of Runway, the biggest video AI company, said that AI video is basically a feature on a much bigger product. And the bigger product is these world models. Because in order to make AI-generated video, you basically have to have these physics models that understand everything going on in the world. Using tools like Blender is one way some people have done it in the past. But this is a much bigger thing than just AI-generated video, it would appear; you have to actually understand so much. And that dataset, that ability to create these world models, is, they're saying, incredibly valuable for robotics. So Runway right now is sitting at a $5.3 billion valuation with $860 million raised to date. And if you line them up against Sora and Veo in the video generation tier, I think it's interesting.
The argument that they're making right now is that the real product surface isn't just the videos that are generated. It's this "nonlinear media," that's what they're calling it right now, which is basically real-time generated content that responds to a viewer or player instead of playing back a fixed sequence. The line that really stood out to me is what they said, quote, "the real constraint on filmmaking has never been technology." And what I think they're really getting at there is that better tools alone aren't going to unlock new creative categories. If you just give some random person AI video generation tools, they aren't going to be able to go make a blockbuster movie, per se. I mean, maybe there are some diamonds in the rough hidden there, right? But typically, the people making incredible films are great storytellers, people who have been working in this field and honing their art. And while some random people might have that hidden skill set in them, right, it reminds me of the Olympics, when you had that guy from, what was it, Turkey, who entered the shooting event, crushed it, and had basically just picked it up on the side. Maybe there are some people like that with filmmaking out there, right? But I don't think that's something that's incredibly common. So there's definitely a huge human element to it, and I think we talk about that a lot. But the element we don't talk about a lot is behind the technology: not just the plain video generation model, but the actual physics behind all of it. This is the bet Yann LeCun is making with his new world-model lab. It's the same bet that Fei-Fei Li's World Labs is making, and the same one that OpenAI just signaled when everyone was saying, oh my gosh, they're shutting down Sora.
The big story they didn't talk about is that the Sora team's resources are getting put into long-term world simulation research. So if you're building anything in games, robotics, or interactive media, this is the thesis that I think all of the smart companies and all the smart money are converging on right now. Yes, these AI-generated video models are kind of cool, but the implications are much bigger than just being able to generate clips for filmmakers. These world models are going to go into robots, and robots are an absolutely insane disruption to every industry in the world. Okay, the next thing I want to talk about is Parallel Web Systems. This is the agent infrastructure startup founded by Parag Agrawal, the guy who was Twitter's CEO when Elon Musk fired him in October 2022. He just closed a $100 million Series B at a $2 billion valuation. The Series A was five months ago at a $740 million valuation, so it's basically tripled in five months, which I think on its own tells you what is happening at the agent infrastructure layer right now. Sequoia, one of the top VC firms, led this round; Kleiner Perkins, Index, Khosla, First Round, Spark, and Terrain Capital are all in it. The company runs web search and research APIs purpose-built for AI agents: basically the layer between the model and the live Internet. Agrawal says they have over 100,000 developers on their platform right now, with Clay and Harvey among the named customers. The reason this round actually matters, even though basically every infrastructure startup right now is raising a lot of money, is that agent traffic is fundamentally different from human traffic. Agents don't just browse; they hammer endpoints, they batch reads, and they need structured outputs.
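To make that batching point concrete, here's a rough, hypothetical sketch. This is not Parallel's actual API; the function names and response shape are made up for illustration. A human runs one query and skims a results page, while an agent fires off a whole batch of queries concurrently and expects structured, machine-readable records back:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def search_one(query: str) -> dict:
    # Stub standing in for a real search backend: returns a
    # structured record instead of an HTML page to skim.
    return {
        "query": query,
        "results": [
            {"url": f"https://example.com/{query.replace(' ', '-')}",
             "snippet": f"stub result for '{query}'"}
        ],
    }

def batch_search(queries: list[str]) -> list[dict]:
    # Agents batch reads: run the whole list of queries
    # concurrently, preserving input order in the output.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(search_one, queries))

if __name__ == "__main__":
    records = batch_search(["series b valuation", "agent infrastructure"])
    print(json.dumps(records, indent=2))
```

The point of the sketch is the shape of the traffic: many concurrent reads with structured JSON outputs, which is exactly the pattern a page-oriented consumer search API was never designed to serve.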
So most search APIs were never actually designed for this, and I think they've basically just built theirs from scratch around this idea. In Anthropic news, which, you know, we love to cover: over at Axios, Marina Pizzowski has an interesting story that the White House is drafting an executive action to walk back the supply chain risk designation on Anthropic and clear federal agency access to Mythos, which is Anthropic's cyber model. I think this is a clear 180. Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk. In February, Trump signed a directive ordering federal agencies off of Anthropic. Now, eight weeks later, the chief of staff, Susie Wiles, and Treasury Secretary Scott Bessent met with Dario Amodei, and all of them basically said it was a super productive meeting. So now it feels like the administration is walking back a lot of those designations, and some of the beef that happened with Anthropic looks like it's water under the bridge, because we have bigger problems to solve right now. We know the NSA is already using Mythos under a separate exemption, and the reason they're changing their mind on this is definitely not a shocker: Mythos is the only frontier model purpose-built for offensive and defensive cyber, and the intel community made the case internally that you can't run a cyber stack without it. Was this a giant 4D chess play by Dario Amodei to get Anthropic used in the government again? It seems like OpenAI was a little bit jealous, because they said, hey, no, we have a cool cyber model as well that's too dangerous to release. I'm not sure where we end up with all of that, but before we get into the next story, which is Elon Musk and Sam Altman's trial, I would love to tell you about my own startup, AI Box.
If you're already paying for ChatGPT, Claude, Gemini, Grok, or ElevenLabs for audio, or any of the image models right now, I would love for you to check out AI Box. It's what I personally built. You get access to over 80 different AI models in one place; basically every frontier model and a ton of really cool open source models are all on there. It's an easy $8.99 a month. So instead of paying $20 for ChatGPT and $20 for Claude and $20 for everything else, it's $8.99 a month and you get access to all of the different models in one place. It's what I recommend to my friends. It saves you a ton of time, and there are a lot of cool features, including a workflow builder. I would love for you to go check it out; there's a link in the description to aibox.ai. Okay, the Sam Altman trial. Day three just wrapped up today in the US District Court for the Northern District of California in Oakland. Elon Musk is suing Sam Altman, OpenAI, and Greg Brockman for $130 billion in damages, and he's asking the court to do two things: number one, force OpenAI back into a nonprofit form, and number two, remove Sam Altman and Greg Brockman from the board. According to NBC's Rohan Goswami, Elon Musk has been on the stand for two days now, and they're running a live blog of the whole thing. This is his second straight day under cross-examination by OpenAI's lead attorney, William Savitt of Wachtell Lipton, who is famously the guy you hire when you absolutely can't lose a corporate trial. The core fact pattern we're seeing right now is that OpenAI was founded as a nonprofit in 2015 by Musk, Altman, Brockman, Ilya Sutskever, and a bunch of other people. Elon Musk donated $38 million. He left the board in 2018 after, depending on whose story you believe, either losing the internal power struggle to make himself CEO or refusing to bless the for-profit conversion.
But in 2019, the for-profit subsidiary was created, and in 2023, Microsoft put in $10 billion. Today, OpenAI is reportedly worth somewhere around $500 billion, although if you look at the secondaries market, people are buying shares and selling stock options at an $850 billion valuation. So I'm not sure where that $500 billion figure that CNBC is reporting comes from; people are paying close to a trillion dollars. And by the way, Anthropic is trading on the secondaries market at a trillion-dollar valuation, which is crazy. But either way, Elon Musk claims that the for-profit conversion was a breach of the founding charter and that his donation got used for unauthorized commercial purposes. I think the big moment from the cross-examiner, Savitt, was when he walked Elon Musk through internal exhibits showing that Musk himself proposed a for-profit structure in 2017 and 2018, with Musk holding majority control of the cap table and the board. NPR's Bobby Allyn flagged that as the most damaging exchange in this trial. Elon's defense was that under that deal he was eventually going to minimize his control; according to the cross-examining lawyer, that wasn't actually on the term sheet. So that seems like kind of a loss for Elon. What is interesting, though, is that Elon brought up the point that Microsoft was a bit of a tipping point, because the $10 billion investment was too large to be a traditional donation, and Microsoft was clearly looking for a financial return. Savitt's counterargument for OpenAI was to point out that Elon Musk founded xAI eight months later and that his lawsuit landed shortly after. So OpenAI's narrative is clear.
And basically what they're trying to say is: look, Elon Musk lost the for-profit fight, so he started a competitor, and now he's using the courts to try to slow that competitor down. Casey Newton over at Platformer wrote this morning that the case will turn less on the legal merits and more on whether the jury believes Musk's harm is real or strategic. Sam Altman is expected to take the stand next week. Brockman, Sutskever, Mira Murati, and Satya Nadella are all on the witness list, which is going to be just wild. I think the strongest argument against the take I've given here is basically the corporate structure case: an AI company has to be able to do something like this. Anthropic did the public benefit corporation; OpenAI did the capped-profit subsidiary. I think the simple reality is that you can't really raise $50 billion as a nonprofit, and the original 501(c)(3) charter was always going to be revisited the moment the compute bill became real. So if you look at it from that standpoint, Elon's lawsuit is kind of selectively enforcing something that nobody in the industry, including Elon Musk and xAI, I might add, actually believes anymore. And I think that is a pretty fair point. But why I still think this trial is important goes beyond what a lot of people are saying. The legal question, whether donations made under a nonprofit can be retroactively used for for-profit subsidiaries without donor consent, has implications that go way beyond OpenAI. Because if the jury sides with Elon, then every AI lab that took foundation grants or research donations under a charitable mission has a new kind of legal exposure, more risk there. And I think that's why Anthropic's lawyers are sitting in the gallery. A lot of people are watching this case because they want to see the precedent that comes out of it.
And I think that's not the way it's getting framed in a lot of the news, but that's what's happening. All right, that's the show for today. Thank you so much for tuning in. If you enjoyed the episode, it would mean the world to me, and it really helps the show out a lot, if you could leave a rating and review wherever you get your podcasts. And as always, make sure to go check out aibox.ai; I'll leave a link in the description so you can go find it. I'll catch you in the next episode.
