Transcript
A (0:00)
Today on the AI Daily Brief: why work AGI is the only AGI that the big labs care about now. And before that in the headlines, IPO fever starts to take hold. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Blitzy, Assembly and Mercury. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors@aidailybrief.ai. While you are at aidailybrief.ai, you can find out all about the ecosystem of projects surrounding the podcast, with a big fun one this week being Agent Madness. Our round of 64 is live. People have contributed some really awesome agents to the bracket, but you're gonna wanna get your voting in by Thursday before we move on to the round of 32. Again, you can find that at agentmadness.ai. We kick off today with some fairly significant IPO fever. CNBC recently got hold of documents that they describe as resembling an OpenAI IPO prospectus, with the documents warning of numerous risks to OpenAI, like their close ties to Microsoft. Potential investors were told that Microsoft is responsible for "a substantial portion of our financing and compute," and OpenAI also disclosed concentration risk, saying, "If Microsoft modifies or terminates its commercial partnership with us, or if we are unable to successfully diversify our business partners, our business prospects, operating results and financial condition could be adversely affected." Now, this is particularly relevant given reports that Microsoft is considering a lawsuit to block certain parts of OpenAI's partnership with Amazon. Additional risk disclosures include OpenAI's significant capital expenditure, reliance on compute resources, ongoing litigation with Elon Musk, and their unusual structure as a public benefit corporation. 
They even mention geopolitical risk related to Taiwan. Now, while CNBC kind of sold this at first as a pre-IPO prospectus, it appears that this document was shared with potential investors in OpenAI's recent fundraising round, meaning that it doesn't actually seem to have been prepared for the IPO. And yet the list of risks will likely closely mirror disclosures once they actually go public. Sources additionally said that OpenAI is seeking a further $10 billion from investors to add to the $110 billion already raised from SoftBank, Nvidia and Amazon. And as we'll hear in the main episode, it sounds like Sam Altman is changing his focus to be able to concentrate more closely on things like fundraising. Now, an OpenAI spokesperson basically said that this is just a legal nothing burger, commenting, "This is a standard legal risk factor disclosure unrelated to any potential IPO prospectus. Similar language has been in place for years. Microsoft is and will remain a critical long-term partner." Now, much more tangibly in IPO news, SpaceX is aiming to file their IPO paperwork as soon as this week. Sources speaking with The Information said that SpaceX, and by extension xAI, are finalizing the details of their prospectus and could file documents with the SEC this week. The stock is expected to begin trading in June if all goes to plan. That would make xAI the first out of the gate as the three large AI startups head towards IPO. SpaceX is said to be aiming to raise $75 billion in the public offering, which would make it the largest IPO in history by a wide margin. They were originally aiming for $50 billion, so this would be a substantial upsize. In fact, if it works, that single IPO would surpass all the money raised in IPOs last year combined. SpaceX last raised money at a $1.25 trillion valuation, suggesting that it would debut as around the 12th largest company in the world. When the prospectus does come out, we'll get our first look at xAI's books. 
Analysts expect that SpaceX as a whole is losing money and xAI is deep in the red right now. This IPO is also expected to have a few unconventional features. Elon Musk has said that he wants to make IPO shares available to retail investors in larger quantities than usual. Typically, companies offer around 10% of IPO shares to retail prior to the listing, but SpaceX is expected to bump that number to 20%. In addition, the SpaceX IPO won't feature the standard six-month lockup for existing shareholders. That safeguard is usually put in place to stop insiders dumping their stock and crashing the price right out of the gate. Sources said that a custom arrangement is still being sorted out, although it's unclear if this means a shorter lockup or actually a longer lockup. The Information finally reports that Goldman Sachs, Morgan Stanley, Bank of America, JP Morgan and Citi have all been preparing IPO plans, even though none of them has officially been hired. Continuing the theme of bucking convention, SpaceX is said to be considering an approach where each investment bank is assigned a different task as part of this largest IPO in history. Now, when it comes to xAI's role in all of this, there is plenty of skepticism to go around. Contrariancurse on Twitter writes, "The obvious reason to merge xAI and SpaceX is because xAI is a fourth-rate lab that Elon knows is screwed unless they get oodles of compute for free. So they'll raise the 75 to 100 and jam it into GPUs. SpaceX barely needs the money." And yet, I don't think that there's going to be any shortage of retail excitement. A new ETF is sending pre-IPO AI stocks to the moon, although that's not necessarily a good thing. Last week Fundrise listed their Innovation Fund, which holds shares in SpaceX, Anthropic and OpenAI. While some have praised these pre-IPO wrappers for giving retail investors early access to startup equity, this isn't necessarily what most had in mind. 
Shares in the ETF are up 1500% since launch, most recently seeing a 64% jump on Tuesday while being halted twice for volatility. By the end of the day, the fund was valued at more than 16 times the value of the shares it holds. There's obviously some wiggle room on how Anthropic stock is valued, but the current ETF price implies almost a $5 trillion valuation, and since they last raised in February at a $380 billion valuation, it is unlikely, no matter how good we think Claude Code is, that in that time they have jumped to become worth more than Microsoft. Now, of course, this is actually just a market structure issue. It's not possible to create more shares to satisfy the demand, so the ETF can completely detach from its underlying value. To some, it's an early indication that AI startups will have screaming-hot IPOs with a ton of retail demand, while others think it's just a sign that meme stock trading never went away. Jack Shannon of Morningstar said, "With the implied valuations, when you have this premium, your upside is gone. Clearly it's going to attract some meme crowd and get some high-octane trading, but if someone is in this for the long term, frankly it's a horrible investment at the current price." Matt Malone of Opto Investments also pointed out how this demonstrates why staying private for a really long time is really rough on retail investors. Malone said that these numbers are great for investors who want to get out, but if you're coming in, you're paying a huge, huge premium. This shows the dynamic from private markets to public markets: public markets are often held out as the preferred pricing mechanism, but in this case the public market price doesn't really make sense. Staying on market themes, SoftBank is apparently pushing the limits as they scrounge up funding for their OpenAI bet. The Financial Times reports that SoftBank is testing their self-imposed borrowing limits. 
After committing another $30 billion to OpenAI, SoftBank had previously held themselves to a 25% loan-to-value ratio, meaning they won't borrow against more than 25% of their stock holdings. Last year's $22.5 billion in funding already stretched them pretty thin, with SoftBank selling all of their Nvidia holdings and taking out billions of dollars in margin loans against their ARM stock. Responding to the FT's reporting, SoftBank CFO Yoshimitsu Goto said, "I don't deny the possibility in the future that we may temporarily go beyond 25%." Still, apparently SoftBank won't permanently change their policy, just temporarily work around it as they hit a cash crunch. Basically, more than ever, Masayoshi Son is betting the company on OpenAI. Speaking of OpenAI, a big new deal between that company and Helion Energy has Sam Altman stepping down as chairman and board member of the fusion energy company. Sam Altman personally led Helion's $500 million Series E in 2021 at a $3 billion valuation. At the time, it was the largest ever venture investment in a nuclear fusion startup. Axios reported that the new deal with OpenAI would guarantee the company 12.5% of the energy initially produced. The goal would be to scale that to 5 gigawatts by 2030 and 50 gigawatts by 2035. Lastly today, the Pentagon's battle with Anthropic has now officially landed in the courts, with a federal judge dragging the Pentagon for their conduct against Anthropic. In the latest court hearing on Tuesday, Anthropic's application for an injunction was heard in Northern California, and Judge Rita Lynn was very unimpressed. She said the Pentagon's actions were troubling, her word, as it appeared to be punishing Anthropic for speaking out. Now, the genesis of all this is that Anthropic sued the Pentagon two weeks ago, claiming that their designation as a supply chain risk was unlawful retaliation. Anthropic is seeking for that designation to be overturned. 
The case is currently in its earliest stages, with Anthropic seeking an injunction to suspend the designation until there is a full trial. Now, the Pentagon's lawyer suggested the impact of the designation could be narrower than previously stated. He said that his understanding was that the designation would not prevent a military contractor from using Claude Code to write software for the military. Instead, he told the court the designation only stopped Anthropic's technology from being used within Pentagon systems. For those following the story, that is obviously a complete 180 from Secretary of War Pete Hegseth's tweet where he said, quote, "Effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Pentagon is now arguing that this comment was so obviously beyond the scope of the law that Anthropic shouldn't be allowed to raise it in court. The judge was unconvinced, stating it looks like the Pentagon is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which of course would be a violation of the First Amendment. What's more, in this case, the chilling effect of Hegseth's words is just as much of an issue as the actual designation. Anthropic said this has already caused harm among their customers. The judge acknowledged that point, commenting, "Everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor. I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law." Now, even little old Superintelligent recently got our first letters from customers asking us to send them plans on how we will stop using Anthropic because of their relationships with the U.S. government. That, it should be clear, is not something that we are going to do. 
Ultimately, the case comes down to this. The Pentagon lawyer argued: what happens if Anthropic installs a kill switch or functionality that changes how it functions? That is an unacceptable risk. The judge retorted, though, "What I'm hearing from you is that it's enough if an AI vendor is stubborn and insists on certain terms and asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy. That seems a pretty low bar." Anyways guys, there will be more on this, I'm sure. For now, however, that is going to do it for today's headlines. Next up, the main episode. Alright folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move, not as a tech initiative but as a total operating model shift. And here's the real unlock. That shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with infinite code context. Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency with a complete contextual understanding of your code base. Enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services, components and standards. This isn't AI autocomplete. 
This is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with our AI experts at blitzy.com. That's blitzy.com. If you're building anything with voice AI, you need to know about AssemblyAI. They've built the best speech-to-text and speech understanding models in the industry, the quiet infrastructure behind products like Granola, Dovetail, Ashby and Cluely. Now, as I've said before, voice is one of the most important modalities of AI. It's the most natural human interface, and I think it's a key part of where the next wave of innovation is going to happen. AssemblyAI's models lead the field in accuracy and quality, so you can actually trust the data your product is built on. And their speech understanding models help you go beyond transcription, uncovering insights, identifying speakers and surfacing key moments automatically. It's developer-first, no contracts, pay only for what you use, and scales effortlessly. Go to assemblyai.com/brief, grab $50 in free credits and start building your voice AI product today. This episode is brought to you by Mercury, banking for people who expect more from the tools they rely on. If you're building a modern business but still using a traditional bank, it just doesn't make sense. I use Mercury for all of my AIDB family of companies, and it honestly feels like financial software built for how people actually operate today. It's fast, clean, no in-person visits, no minimum balances, and the things that used to take forever, like sending wires or spinning up new accounts, take seconds. Everything lives in one dashboard: cards, payments, invoices, team permissions. And you can automate a lot of the busy work so you're not constantly manually managing your money. Of all of the services I use to run AIDB, I never thought banking would be one of my most painless and most happy experiences. But with Mercury, that's exactly what it is. Visit mercury.com to learn more and apply online in minutes. 
Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC. Welcome back to the AI Daily Brief. The hallmark of 2026 so far has been big, inflection-point-style change. Obviously that's been the case for individuals, but it also clearly is the case among companies who are competing in the AI space. Some of the dominant themes have been the clarification of everything and the convergence of features, and nowhere has the AI race gotten more focused and acute than in OpenAI's strategic shifts as it watches an insurgent Anthropic start to dominate the enterprise and coding conversation. Now, we are coming up on six months of a renewed focus on coding and knowledge work from OpenAI, going all the way back, frankly, to the release of GPT-5, when it was increasingly clear that code AGI was going to be a big part of their strategy as well. And yet for most of 2025 we were still in the OpenAI paradigm of letting a thousand flowers bloom while Anthropic kept their head down and focused on knowledge work. OpenAI was a bit more voracious in its appetite, competing strategically in some ways much more closely to Google's approach. But then we got OpenAI's code red in December, and with it came renewed focus. And what's more, the focus seemed to pay off. Codex is increasingly a real choice alongside Claude Code for many AI builders, and over the last week there's been lots of reporting about the ways that OpenAI is going to consolidate its focus even more. CEO of Applications Fiji Simo in fact confirmed reports from last week that said exactly this. She tweeted, "Companies go through phases of exploration and phases of refocus. Both are critical, but when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions." 
Today we got the latest story on that front, and if anything, it shows that OpenAI is quite serious about the idea of putting away side quests. Now, some of the news was managerial. Sam Altman told staff on Tuesday that he would be changing, and in fact in some ways reducing, his role. Altman will no longer have direct oversight of OpenAI's safety and security teams and will narrow his focus to raising capital, supply chains and the data center buildout. The safety team will be folded into the research organization headed by Chief Research Officer Mark Chen, and the security team will move into the so-called scaling organization under President Greg Brockman. When it comes, then, to core commercial strategy, Altman's reduced role seems to put CEO of Applications Fiji Simo in the driver's seat. Her core team, the product division, will be renamed AGI Deployment, clearly in line with the company's ambitions. Last week the reporting said that Simo had told engineers that the next big project would be combining ChatGPT, Codex and the Atlas browser into a desktop super app. Now, interestingly and somewhat unexpectedly, the latest reporting also gave us some information around OpenAI's next big model. In a memo, Altman told staff that the company had finished pre-training the model that is codenamed Spud. He said things are moving faster than many of us expected and told staff that they expect to have a, quote, "very strong model" in a, quote, "few weeks" that the team believes can really accelerate the economy, his words. Now, people jumped all over that phrase "accelerate the economy." Shubhrat writes, "Accelerate the economy is doing a lot of heavy lifting. That's either AGI or a really confident marketing team." 
Now, obviously this is an internal communication, and while at this point I think if you're OpenAI you kind of have to assume that anything big you say is going to be leaked at some point, it is an interesting choice to use that type of phrasing, which of course runs the risk of overpromising and underdelivering. Ever since the challenges of the release of GPT-5, which had misaligned expectations, OpenAI has really shied away from that sort of big, bombastic overpromising. Of course, someone like Altman has multiple constituencies that he's got to deal with. In addition to getting users excited, he's got to keep his team excited as well. And so that communication and idea could be more squarely aimed at rallying the troops in a moment of intense transition. Maybe the most discussed news, though, as it relates to OpenAI's new focus, is the fact that the mandate to end side quests has claimed its first victim. As part of his memo, Altman announced that Sora would be sunset and OpenAI would discontinue all products that use their video models. Within hours of the report breaking, the official Sora app account on Twitter tweeted, "We're saying goodbye to the Sora app. To everyone who created with Sora, shared it and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work." The decision was apparently largely due to constraints in compute resources. The Wall Street Journal reported that some OpenAI staff had been surprised at how compute-hungry the Sora app was, given the comparative lack of demand relative to all their other products. With Sora winding down, Altman said the substantial compute resources could be redeployed to run that Spud model once it's released. Hater on Twitter wrote, "OpenAI has finished training a new model codenamed Spud, which they expect will greatly accelerate the economy." 
"They're also renaming their products division to AGI Deployment. Basically, they want more compute for Codex, which is why they discontinued Sora." Now, on the one hand, this makes obvious sense for Sora to be the primary casualty of the renewed focus, because not only is it distracting from a consumer perspective, it also is extremely resource-intensive, and as we know, there's just not enough compute to go around. But I do think it marks a pretty significant moment, in that this is maybe the first time that we've seen OpenAI really have to choose, at least in such a public way, to not do something that they had clear interest and ambition in because of compute constraints and their need to compete in the market. Yes, we have had Altman and other OpenAI executives at various points in the past say that one model or another was delayed because of compute constraints. But shutting down an entire application that had been unveiled not that long ago with much fanfare is a pretty compelling demonstration of just how big the stakes of these decisions are. Speaking of which, one bit of fallout from the end of Sora is the end of the deal with Disney. You might remember that after the Sora launch last October, instead of suing OpenAI, Disney chose to partner with them and planned to do more with the technology. In the wake of Sora ending, Disney announced that they had canceled the partnership and will not be following through with their billion-dollar investment into OpenAI. Still, the split seems amicable enough, with Disney commenting in a statement, "As the nascent AI field advances rapidly, we respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere." 
"We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms and to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." Now, one part of the response to Sora ending from some parts of the community was dancing on the grave. ThePrimeagen writes, "Good. Sora accelerated one of the worst aspects of the new AI economy. Absolutely horrible thing for OpenAI to create." This of course relates to the feeling some had around the announcement of Sora that by creating AI TikTok, whatever OpenAI's intentions were, they were effectively behaving like just the latest tech company trying to steal all of our attention for the sake of ads. Ahmad Osman agreed, saying, "OpenAI just killed Sora and nothing of value was lost. Put those GPUs to good work rather than making stupid videos. Maybe even try to cure cancer like your original mission said." Yet while some said that this was an indictment of the AI video space as a whole, Min Do, who's about as deep in that space as anyone out there as the co-founder of Machine Cinema, writes, "It's funny seeing people retweeting the demise of Sora as evidence that AI video is doomed, not realizing that there's a whole ecosystem now. When Sora was first announced, it was just Midjourney and Runway in the game. Now it's over 100 companies clamoring into the space, and marketing departments, agencies and studios are all locked in." OpenCode's Dax also made the point that even if you didn't like the Sora experiment, this type of experimentation is just part and parcel of figuring out what actually is valuable. He writes, "It's lame to see all the people saying, ha, called it, I knew Sora wouldn't work. Yeah, duh. Everyone thought that, including the people who were working on it. They probably learned a lot trying to make it work anyway. 
For every successful thing that exists, a hundred efforts like this had to fail, and those learnings are fed into making something that ultimately does work and provides you with your steady paycheck." Now, on this idea that these resources could be better spent elsewhere, not just in terms of compute but in terms of talent, it is worth noting that the end of Sora is not coming with job cuts. OpenAI's head of Sora, Bill Peebles, basically said that the Sora research team would be moving into the world model space, focusing on systems that deeply understand the world by learning to simulate arbitrary environments at high fidelity, with the prize, as he put it, being automating the physical economy. Altman reaffirmed this in the memo, saying that the Sora research team will, quote, "prioritize longer-term world simulation research, especially as it pertains to robotics." Now, for some, the natural next question was: with the end of Sora, would we also see the end of OpenAI's ad push? The short answer is that nothing there has been cancelled yet. In fact, OpenAI has hired former Meta executive Dave Duggan as their new VP of Global Ad Solutions. The pilot phase of ads is over, and ads will be rolling out to all Free and Go subscribers in the coming weeks. And yet, apparently there's still a lot of work to be done. Ad buyers have complained that OpenAI doesn't have a modern ad sales platform and is providing very minimal metrics, with multiple ad agency executives saying that they were unable to prove to their clients that ChatGPT ads were working. On shopping, OpenAI is dramatically paring back the feature. The Instant Checkout feature, which allows customers to buy directly from the ChatGPT window, hasn't been a success. 
OpenAI announced on Tuesday that they would be revamping the feature, writing, "We found that the initial version of Instant Checkout did not offer the level of flexibility that we aspire to provide, so we're allowing merchants to use their own checkout experiences while we focus our efforts on product discovery." Basically, OpenAI will now support a variety of checkout paths, encouraging merchants to deploy their own ChatGPT apps as well as clicking away to external shopping platforms. Still, one does have to wonder if there are bigger changes in the offing. ClickHealth's Simon Smith writes, "Now when does OpenAI kill its ad side quest, since it's like a $680 billion market dominated by incumbents versus the largely untapped, roughly $40 trillion plus market of automatable knowledge work?" Simon's implicit argument here is, of course, that even if the path to get there is more vague, the opportunity to reinvent how work happens in the world just feels quite a bit bigger than the opportunity to reinvent how people buy stuff on the Internet. Now, with the renaming of the product team to the AGI Deployment team, we've had a renewed wave of conversations about what AGI actually means. In an appearance on the Lex Fridman podcast, Jensen Huang was asked a question where AGI came up. Fridman basically asked when Jensen thought an AI would be able to start, grow and run a successful technology company worth more than a billion dollars. Jensen responded, "I think it's now. I think we've achieved AGI. It is not out of the question that a Claude was able to create a web service, some interesting little app that all of a sudden, you know, a few billion people use for 50 cents, and then it went out of business again shortly after. Now we saw a whole bunch of those types of companies during the Internet era, and most of those websites were not anything more sophisticated than what OpenClaw could generate today." 
Now, when Fridman drilled down, Jensen noted that his prediction only really applied to novelty software for the moment, rather than anything more complicated. He said that he wouldn't be surprised if some social thing happened, or somebody created a digital influencer or some social application that feeds your little Tamagotchi or something like that, and it became, out of the blue, an instant success. A lot of people use it for a couple of months and it kind of dies away. However, he continued, the odds of 100,000 of those agents building Nvidia is 0%. 80,000 Hours' Benjamin Todd wrote an essay, Do We Already Have AGI?, with his short answer being no, and his longer answer being that on the most prominent definitions, current AI is superhuman in some cognitive tasks but still worse than almost all humans at others. That makes it impressively general, but not yet AGI. Now, regular listeners will know that I don't think the AGI question is particularly useful in practice. However, one thing that I have been thinking about recently, especially as we had that discussion around what the atomic unit of AI disruption should be and why it should be tasks rather than jobs: I think effectively what we have, and something that might kind of explain the jagged frontier of AI capability, is almost like task AGI. Almost anything that you can ask AI to do that is specific and discrete, it can do really well. The problem is that a lot of work is stringing tasks together, which is where AI capability starts to break down. And so to the extent that one's definition of AGI involves long strings of those tasks working together effectively without a lot of human oversight or intervention, then sure, it's more debatable if we're there or not. I kind of think Ethan Mollick has the right of it when he tweeted, "Maybe we should retroactively all just agree with Tyler Cowen that o3 was AGI so we can stop arguing about it." 
Also, doing so will drive home the lesson that AGI alone is not enough for transformation. As all the stories recently of OpenAI and Anthropic trying to partner with consulting and private equity firms suggest, they are well aware that even if the models are AGI-capable, it's going to take a lot of work to actually get them to diffuse, fully work, and reinvent the systems inside big companies. Still, if you take away anything from all these moves from OpenAI and from the relentless pace of shipping at Anthropic, it's that right now, more than ever, the only type of AGI that matters to AI companies is work AGI. For now, however, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.
