B (37:20)
Yeah, it's a good point. This is my main beef, personally, with folks in the AI Doomer camp, folks like Eliezer Yudkowsky. Because I just don't understand, and maybe I'm just not smart enough to understand, but I don't understand why we can't put the cat back in the bag a little bit, you know, why human institutions can't rally to create the right regulations and to sandbox new models, for example, until they're truly ready. There are things that can be done. They're not easy, and they would take a lot of societal will. But I think it's not the time to feel despair and think, gosh, we've already passed over some crucial threshold. In some sense, we passed that when ChatGPT was first released, or you could say we passed it earlier. There are arguments about whether the Turing Test has been passed, but certainly we've crossed some sort of incredibly important line.

I think, too, you know what it gets at? One of the things working on this series really taught me is the importance of storytelling and imagination in this technology. And that goes all the way back to Alan Turing. I didn't really understand this, because I understood the Turing Test as a kind of benchmark, a benchmark of machine progress. So, for example, if I could chat with a machine and not know it was a machine, then, wow, it's achieved some sort of milestone. And that was the Turing Test, what he called the Imitation Game. But in fact, what Turing was doing all the way back in World War II, and right after the war when he introduced this idea, was not just saying, okay, this is a benchmark for machines to pass, and once they pass it, we can say they're on their way to really being thinking machines. He said that, but he was also taking what was at the time a really complicated philosophical debate, can machines ever think, and treating it like an engineer. He said, you know what? We just need to create an observable metric by which we can say that they're thinking. Then we don't have to deal with the philosophical discomfort of asking, well, can machines think, and what would it mean if they are thinking, and so on. If they pass the Turing Test, then they're on their way to being thinking machines. And by doing that, he not only freed engineers from the philosophical angst and set them a kind of path to follow, which they certainly followed, and which kind of leads to ChatGPT today, but he also realized something very important: that we would not recognize machines as thinking until we started interacting with them in a human-like way. That when they started using language and talking back to us, that's when we would recognize them as thinking. And you could get very philosophical about this. You could say trees do a lot of thinking, but we don't think of them as thinking. There are a lot of other living things on this planet that think, but their intelligences don't interact with our own, and so we're not really that concerned about them, or many of us aren't.
And so what Turing felt was that in order for us to really respect machines, to use that word, respect, they would need to interact with us like this, the way ChatGPT interacts with us. But the danger of that, right, is that we then don't see, and this gets back to something we talked about, the alienness of it. We start to interact with it maybe too much like a person, like a fellow human, or a human-like thinking entity. And thus we make very important cognitive mistakes in interacting with it, and we perhaps trust it or distrust it in the wrong ways. And this is also where our imagination gets channeled, or challenged: we don't see how the models are vastly different year over year, because we're interacting with a model right now that interacts with us, yes, like the fastest human we've ever talked to, but it's still recognizable in its thinking in some ways. It's something we can respect but recognize. And so it's very, very difficult for us to then think, okay, wait a second, this is just the current iteration. We have to imagine a different kind of intelligence that this could grow into, and ask, what would I be in that situation?

If I could say one more thing, it honestly reminds me of reporting in Ukraine, or in Afghanistan, or in South Sudan. You talk to people who are in the middle of a war, and they say, we knew the war was coming, but we just didn't imagine what it would feel like to be us when it was here. And they want you to know: you don't understand, I was just planning my daughter's wedding in that building over there, which is now a bombed-out hall. They still see the place where they were planning the wedding. They're still thinking about, and frustrated about, the money they spent on the wedding invitations. They haven't quite transitioned from the old world to the new. And I don't want to make this sound like a doomer forecast, because I think the future could be quite bright. But it does take an active imagination, whichever of these versions of the future we think we're headed toward, to put ourselves in that new version of the future, to play with our imagination, and to imagine that the world is not going to be the same as it is now.