Transcript
A (0:00)
If you're someone who's using Claude Code, you're kind of already in the top, like, 0.1% of AI adopters. And then if you're able to use Claude Code really well, then you're just so much further ahead. I think at this point I've kind of given up on trying to call the top of AI, and you just kind of have to ride out the line and see where it goes, and just be very prepared for human, if not superhuman, capabilities coming soon. You see that the models are doubling in their time horizon approximately every 104 days. I think that, like, Mythos will just be the beginning. I think this will only get more and more of a national security concern as AI companies build towards so-called AI superintelligence. How are you supposed to run a government if you have private AI systems that can just outmaneuver the government? If the government's not really getting its head around the national security implications of these AI systems, and knowing how they could be used against us, knowing how we could be deploying them against our adversaries, and also knowing how AI might be an independent threat actor; I think it's just very important that the government is taking a leading role here.
B (1:12)
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Peter Wildeford. Peter is head of policy at the AI Policy Network, and he's also one of the best forecasters in the world, especially when it comes to AI. Peter, welcome to the show.
A (1:28)
Thank you. Thanks. It's good to be here.
B (1:30)
Great. So the setup I want to do here is for us to walk through some popular narratives in the AI space, and you tell me how you approach actually rigorously forecasting what's going on there. I'm especially interested in your methodology, or how you think about the question: where do you begin thinking about these things? Does that sound good?
A (1:54)
Yeah, that sounds good. I'll do my best.
B (1:57)
Amazing. Great. Okay. I'm thinking we start with whether AI is a bubble, and we can define a bubble however we want. That's maybe part of the methodology: defining what you mean when you ask whether AI is a bubble. But how would you begin answering that question?
A (2:15)
Yeah, I think AI is very clearly not a bubble. I guess this was maybe more of a popular talking point earlier last year, when AI capabilities seemed to be plateauing. When you think about a bubble, you kind of think about the dot-com era, where you had companies spending a dollar to earn 80 cents, and it was just clearly unsustainable. But AI companies have very clear demand. Anthropic has more demand than they have the ability to serve. They're batting away users right now because there's just too much demand, and they have very clear revenue. And they also have this path to very immense value. Anthropic and OpenAI are basically creating machines that can potentially replace entire companies and entire portions of the economy. That's just such a huge opportunity, and I think it's clearly panning out. So it's just very clearly not a bubble.
