Stephen Wolfram (33:54)
Yeah, it's interesting that this was a moment of consumerization. It's just like with computers in general. I started using computers when I was a kid in the 1970s, and most people were like, ah, computers, who cares? Maybe they're in the back end of something, doing some bank processing thing, or figuring out how to launch rockets, but I don't care about that. And then the personal computer comes out and people discover things like word processing and games, and suddenly it's, oh, we really care about computers. I think that was the ChatGPT moment: the one where people could see there was something there. And by the way, nobody knew ChatGPT was going to work, including the people who built it. I'd been playing with language models and sequence prediction and so on, and none of that stuff had been terribly interesting. Then, for whatever reason, and we don't understand why yet, maybe one day we will, we suddenly got over a threshold, some number of billions of parameters, and that was enough. That was human-like enough, so to speak.

Now, that scale is probably determined by something to do with us. We humans have about 50,000 words in our typical languages, say, and 100 billion neurons in our brains, and that provides a definite scale. If you wanted an AI that was like a dog in its linguistic capabilities, you'd probably need a much smaller LLM, but we wouldn't be as impressed by it. So somehow the size of the system was such that it matched enough of what we need to be impressed by that we got to that point.

By the way, just as a side comment: one thing I've been curious about is, let's say we had a thousand times as many neurons in our brains. What might we be able to do? What's the next level of intellectual capability? It's just like with cats and dogs. Dogs, at least, know a few individual words: fetch, sit, whatever else they deal with. The big invention of our species, I suppose, was human language, this idea that you can put together an infinite collection of sentences by combining all these different words. So a big question is, what happens next? If we could go beyond that, what's the next level of abstraction and sophistication? That's an interesting thing to think about, but you're also stuck thinking: if I were the dog, could I really understand human compositional language? Is there something bigger? There undoubtedly is, but maybe it's something that just isn't a fit for us, just like the natural world has lots of things going on that we don't understand except through this sort of bridge of natural science.

But in terms of people's reaction to ChatGPT, a lot of people were like, it's magic. This magic has happened; there's going to be more magic. This was such a surprise that surely there will be many more surprises. I don't think that's right. People always think, when there's a technological surprise, that it's not going to stop here.
It's going to keep going. But it doesn't always keep going. In fact, it usually doesn't. Usually that was the surprise: one got to that sort of threshold level, and that was it. So I was a little bit surprised by people saying it's magic. And then it's, well, how does it work? Oh, it's this AI thing, it's this neural net. And I started thinking: can I actually figure out some idea of why this works? What I realized is that there's at least a picture of why it works, and it's something that, in a sense, was a feature of human language and human understanding that maybe we should have understood thousands of years ago. People have known about the grammar of language, that you form sentences in English with noun, verb, noun, and so on. But that structure of grammar doesn't tell you whether the sentence is going to be meaningful. There are lots of sentences that go noun, verb, noun and are completely meaningless.

What I realized is that when the LLM is looking at hundreds of billions of sentences, it's seeing patterns not only of the noun-verb-noun type, but of the "this is a pattern of words that makes sense" type. That's the big thing it statistically learns. People were very surprised at the beginning, for example, that LLMs could do logic, that they could draw logical conclusions. But how was logic discovered in the first place? Presumably Aristotle went through and heard lots of people giving speeches and identified: these are the patterns of speech that make sense. Those became syllogisms, and they became what we now think of as logic. I think the LLMs learned it the same way. So one of the things that was interesting to me was that people say it's magic, and then they realize it's actually not such magic. There's probably something here that's telling us an interesting piece of science, something maybe we should have learned a long time ago, that's at the core of why this can possibly work.
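A minimal sketch of that idea, purely for illustration: the toy bigram model below "learns" which word patterns are plausible from nothing but transition frequencies over a tiny invented corpus. This is vastly simpler than a neural-net LLM, and the corpus, the `plausibility` function, and the scoring are all invented for the example; but it shows the same spirit of statistically picking up "patterns of words that make sense" without any built-in grammar or logic.

```python
# Toy bigram model: learn which word-to-word transitions are "familiar"
# from a small corpus, then score sentences by that familiarity.
from collections import Counter, defaultdict

# Invented corpus of syllogism-flavored sentences (illustrative only).
corpus = [
    "all men are mortal",
    "socrates is a man",
    "all dogs are animals",
    "fido is a dog",
]

# Count how often each word follows each other word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def plausibility(sentence):
    """Score a sentence by how familiar its word-to-word transitions are."""
    words = sentence.split()
    return sum(follows[prev][nxt] for prev, nxt in zip(words, words[1:]))

# A syllogism-shaped sentence reuses familiar transitions and scores high;
# the same words scrambled score zero, even though the "grammar" module
# doesn't exist anywhere in this code.
print(plausibility("all men are animals"))  # -> 3
print(plausibility("men all animals are"))  # -> 0
```

The point of the sketch is that the high-scoring sentence has a logic-like shape simply because logic-shaped word patterns dominated the training data, which is the same picture, scaled down enormously, as an LLM appearing to "do logic" after seeing hundreds of billions of sentences.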