Transcript
Lex Fridman (0:00)
First of all, we just didn't realize how much we didn't know about human evolution. Just like the story you learned in high school, all of it is like, at least somewhat false about how, when, where, who.
Jack Clark (0:08)
What do you mean?
Lex Fridman (0:09)
Like, did it happen in Africa?
Jack Clark (0:11)
Did it?
Lex Fridman (0:11)
A big chunk of it.
Jack Clark (0:12)
Didn't we have stuff right up to, you know, a certain amount of history though, right? Okay. Yeah. At least there's something we can hold on to. Dwarkesh, I've been really looking forward to this. Thanks for making time for it.
Lex Fridman (0:22)
Thanks for having me on.
Jack Clark (0:23)
So I want to start by talking about your thinking around the state of AI. You obviously are very close to it. You're a user of it. You have gone really deep with a lot of people who know it on many levels. And you recently wrote this really interesting blog post called "Why I Don't Think AGI Is Right Around the Corner." And I want to ask you a little bit about that and just this general topic. A lot of my guests so far, probably myself included, have been a little breathless, like, you know, this is here, and if we just deployed all the AI research or capabilities that we have today, we would have insane GDP growth. I think you have a slightly different take than some of my other guests, so I wanted to start by asking you how you see the current state of AI.
Lex Fridman (1:09)
I'm in a similar position as you, where I've also interviewed a lot of people who are breathlessly anticipating what's coming with AI, sometimes in a very optimistic way, as with the AI researchers. In other cases, they're worried that the world's going to end in two years. And I think what's changed my mind around how soon we're going to get to these super transformative outcomes is just trying to use these AIs to help me with very simple, script-kiddie kinds of tasks for my own podcast. I have a lot of friends who think, look, if the Fortune 500 isn't using AI all across their stack right now, it's because the management is too stodgy; they're just not being creative enough about how to get o3 into their workflows. And look, I'm thinking a lot about how to use AI in my podcast post-production setup. I've tried for 100 hours to get it to be useful for me, and it hasn't been that useful. And I think that's because it's just genuinely hard to get human-like labor out of these models, fundamentally, because these models can't learn on the job the way a human can. So if you think about a human employee, probably for the first three to six months they're not even useful, especially when it comes to knowledge work. The reason they become more useful over time is not mainly their raw intellect, although obviously raw intellect matters; it's rather their ability to build up context, to learn from their failures in a very rich way, and to interrogate those failures. With the models, you currently just get whatever they can do in a session. You talk to them for 30 minutes, and then they totally lose awareness or understanding of how your business works, what your preferences are, et cetera. And a lot of tasks just require you to do a 5-out-of-10 job at something; then you talk to your boss, you go out to the consumer, and then you learn what went wrong.
You ask yourself what didn't go well, and you just keep iterating on that. And they just can't do this on-the-job kind of training, which I think is what makes humans valuable.
