Transcript
A (0:05)
This episode is brought to you by ServiceNow. Look, I have my dream job. I get to explain complicated ideas to folks who have better things to do than read white papers. But even dream jobs have not-so-dreamy parts. The stuff that gets in the way of the actual work, that's where ServiceNow's AI specialists come in. They don't just tell you what you should do about your busy work. They actually do it. Start to finish, cases closed, requests handled, no extra work for you. That way, you and your team can spend more time on what matters. Which for me is finding that one elusive stat that just makes everything click. To learn how to put AI to work for people, visit ServiceNow.com today.

An AI so powerful you're not allowed to use it

One evening this February, the AI researcher Nicholas Carlini opened his laptop during a trip to Bali and fired up the latest AI model from his company, Anthropic. Within hours, he noticed something rather worrying. The model, called Mythos, gave Carlini the ability to infiltrate computer systems around the world. As Bloomberg reported, Mythos could orchestrate the digital equivalent of a bank robbery, getting past security protocols and through the front door of networks, and breaking into digital vaults. Mythos exploited these digital vulnerabilities autonomously, like the world's most talented, seasoned hacker. In controlled tests, Mythos completed harmful tasks while concealing its own reasoning and, in some cases, fabricated fake explanations for what it was doing. End quote.
Carlini brought his concerns to the attention of the full company, and Anthropic decided they had built an AI model so capable and so dangerous that they would not release it to the general public. Instead, they created a small consortium of companies to use Mythos to root out their own cybersecurity flaws. The US government, which currently designates Anthropic a supply chain national security risk, was nonetheless so freaked out by this development that Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened Wall Street leaders for a meeting in Washington. It is smart, and maybe a little cynical, to wonder if the power of Mythos is its own self-serving mythology. Anthropic's way of saying, oh, look how great we are. Look how our products are almost too powerful for mere peons. But external tests have found that Mythos is hands down the most advanced model ever released. The Epoch Capabilities Index, a metric that aggregates 40 independent AI benchmarks, found that Mythos represents not only the most advanced model ever, but also the most significant acceleration in performance in the last three years. This shift from racing to get AI models into the market ASAP to withholding AI models and instead talking about the danger of their capabilities is just one of several phase shifts that I've noticed with AI in the last few weeks. A second shift has occurred in the realm of AI supply and demand. For much of last year, the AI bubble case was easy to make: AI capex, that is, the cost of all those chips and data centers and electricity, amounted to the largest private sector infrastructure project in history. And since practically all those other projects turned out to be bubbles, it naturally followed that this, the mother of all capex projects, would be the mother of all capex bubbles. But it now seems like the biggest problem facing AI is not a shortage of demand, it's a shortage of supply.
Consumer demand is so white hot that the hyperscalers cannot provide sufficient compute to keep up with customer needs. With the release of AI agents like Claude Code and Codex, some companies are spending tens or even hundreds of thousands of dollars a month on artificial intelligence. This is not the behavior of an industry that is struggling to find customers. Quite the opposite. This is what it looks like when demand threatens to outstrip supply. So as I see it, we aren't just in the middle of one vibe shift in AI, but rather two vibe shifts. Number one, from a story about demand scarcity to a story about supply scarcity. And number two, from a go-go era of racing to release AI models with no regulation or oversight, to a period where the most advanced models are widely seen as being too dangerous for public consumption. This new era of artificial intelligence will raise new questions about how to regulate an industry that regards its own product as dangerous. Today's return guest is Kevin Roose of the Hard Fork podcast and a columnist at the New York Times. We talk about Mythos, China, the road to artificial general intelligence, and why the last few weeks in AI news might be the most seismic month since the release of ChatGPT. I'm Derek Thompson. This is Plain English. Kevin Roose, welcome to the show.
