Transcript
A (0:00)
Everyone has 10-year AGI timelines right now. It started when Sam Altman put out that post, "The Intelligence Age," saying superintelligence may be just a few thousand days away. And at the time it was kind of odd, because when he wrote that post last year, everyone was saying AGI is one year away, AGI is two years away. It was fast-takeoff time; everyone was very excited. And then he came out and said it's a few thousand days away, maybe a decade. Then Andrej Karpathy goes on Dwarkesh just a couple weeks ago and says AGI is a decade away. And then Dwarkesh posts this probability density of when AGI will be achieved, with some chance that America does it and some chance that China does it, and the median, the 50th percentile, was exactly 2035.
B (0:47)
Then you know that chart, that very schizo chart, "Periods When to Make Money"? Have you seen this?
A (0:55)
No. No.
B (0:56)
And it was created, I guess, about 100 years ago. People reference it any time it actually aligns to events, because it basically has years in which panics have occurred; years of good times, high prices, and a time to sell stocks; and years of hard times, low prices, and a good time to buy stocks. So it's basically astrology for stock picking.
A (1:20)
And what is it saying right now?
B (1:21)
2035 is the year they're predicting a panic will occur.
A (1:26)
Oh, interesting. Well, that certainly aligns with all these AGI timelines.
B (1:30)
There you go.
A (1:30)
And then I was looking at METR, and this one we'll have to debate a lot more, but METR has been tracking AI's ability to complete long tasks, and it's growing exponentially. It used to be like six seconds; now it's like two hours. And when you talk to anyone in the AI field, they'll tell you the agents are getting more and more capable of handling longer and longer time-horizon tasks. The question is, I feel like humans don't have a time horizon. Humans are just born and the goal is: survive, be fruitful, multiply. Right? So I feel like if you're tracking the METR data, you need to get out to like 30 years, a full career. The prompt needs to be like, "go make money," and then it just goes and becomes a lawyer, lives its full life, and retires after a 35-year run. And of course, when you track out the doublings to 2030, based on that log graph, METR's trend projects that AI will have a time horizon in the decades. My read on the METR data is AGI 2035, and again, that's maybe the messiest, least definitive read. But what's interesting is that it just feels like 10 years is the consensus right now, and there's much less diversity of opinion. There aren't that many people saying two years anymore; there aren't that many people saying 50 years anymore. Everyone's saying 10 years. And I just wonder: let's put aside trying to accurately predict when this thing happens, and let's just analyze it from a psychological perspective. What does it mean when the tech community all has a consensus on something that's a decade away? A decade away could just be what people say when they don't know. If you ask me when flying cars are going to happen, I'm going to say a decade. Quantum computing? Oh, that'll be a decade. Mars? Yeah, that's a decade.
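The doubling extrapolation in that turn can be sketched in a few lines. The ~2-hour current horizon is the figure from the discussion; the ~7-month doubling time is an assumption roughly in line with what METR has reported, so treat all the numbers as illustrative, not as METR's official projection.

```python
# Illustrative sketch of the "track out the doublings" argument.
# Assumptions (approximate, not exact METR figures): a ~2-hour task
# horizon today and a ~7-month doubling time.

def horizon_hours(years_out: float, start_hours: float = 2.0,
                  doubling_months: float = 7.0) -> float:
    """Projected task horizon after `years_out` years of steady doubling."""
    doublings = years_out * 12.0 / doubling_months
    return start_hours * 2.0 ** doublings

# A rough "full career" horizon: 35 years * ~250 workdays * 8 hours.
career_hours = 35 * 250 * 8  # 70,000 hours

# Find when the projected horizon first exceeds a full career.
years = 0.0
while horizon_hours(years) < career_hours:
    years += 0.1

# Under these assumptions the crossover lands roughly nine years out,
# i.e. the mid-2030s, which is the "AGI 2035" read in the transcript.
print(f"Horizon passes a 35-year career after ~{years:.1f} years")
```

Changing the doubling time shifts the crossover substantially (exponentials are sensitive to the exponent), which is one reason this read is "the messiest, least definitive."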
