Transcript
A (0:00)
It's good to have you at the New York Stock Exchange. So when do you believe that we're going to actually achieve the technological singularity?
B (0:07)
I think we're in the middle of it right now. It already happened. I think we achieved artificial general intelligence at the very latest by the summer of 2020, when "Language Models are Few-Shot Learners" was published by OpenAI. And I don't think the singularity is a single point in time. I've argued that it's more of an extended interval in time, and we're in the middle of it right now.
A (0:29)
And how big a part does recursive superintelligence play in it? Tell me about recursive superintelligence. How does that actually work?
B (0:36)
The idea of recursive self-improvement is that AI develops better AI. This is a notion that goes all the way back to I. J. Good in the mid-20th century, was then repackaged and repopularized by Vernor Vinge as the notion of a technological singularity, and then fully popularized by Ray Kurzweil. And then Peter Diamandis and I have been running with the concept. The notion of intelligent systems being able to build smarter versions of themselves is at the very core of the notion of an intelligence explosion or technological singularity. And even in the past few months, we've seen the frontier AI labs all be very public and announce that the latest versions of the GPT model series, Claude, and other models are now intimately involved in the development of their successors. So intelligence building smarter intelligence: that's the recursive self-improvement notion at the core of the singularity. And we're there.
A (1:38)
You're one of the smartest people in the AI space, and most importantly, you're not part of one of the LLMs, so you're in many ways independent.
B (1:46)
Are you sure? By the way, there are a lot of people who are convinced that I am an LLM. So maybe...
A (1:50)
Okay, so maybe you're either the fifth LLM, or you're independent. When you think about these LLM wars probabilistically, where do they end up? And do you even think that these LLMs have value?
B (2:03)
Well, there are a few questions there. Do LLMs have value? Yes, absolutely, enormous value. I call it the innermost loop: this broader notion of recursive self-improvement that includes LLMs, but also includes robots and energy and chip fabrication facilities. This notion of an economy with an inner spiral that's going to ultimately consume and disrupt the rest of the economy is, I think, front and center to the notion of the singularity. As to the other half of your question, about where all of the competition goes: all of my friends at the frontier labs call it a rat race. There is very much a race to the bottom, in some sense, of driving the cost of intelligence so low that it's effectively too cheap to meter. Five years ago, it used to be maybe an annual event that we'd get a new frontier model that would push the state of the art. Then it was every quarter, when we saw the move from LLMs to reasoning models (we can talk more about that). And then more recently, with models that are recursively self-improving and designing the weights or other properties of their successors, we're seeing new frontier models come out on arguably an almost weekly basis. Soon I think it's going to be daily, hourly, minutely. We're going to reach, to the extent we haven't already, some sort of
