Transcript
Commercial Narrator 1 (0:01)
When the holidays start to feel a bit repetitive, reach for a Sprite Winter Spiced Cranberry and put your twist on tradition. A bold cranberry and winter spice flavor fusion, Sprite Winter Spiced Cranberry is a refreshing way to shake things up this sipping season, and only for a limited time. Sprite. Obey your thirst.
Commercial Narrator 2 (0:26)
Ford BlueCruise hands-free highway driving takes the work out of being behind the wheel, allowing you to relax and reconnect while also staying in control. Enjoy the drive in BlueCruise-enabled vehicles like the F-150, Explorer, and Mustang Mach-E. Available feature on equipped vehicles. Terms apply. Does not replace safe driving. See Ford.com BlueCruise for more details.
Commercial Narrator 3 (0:54)
Running a business comes with a lot of what-ifs, but luckily there's a simple answer: Shopify. It's the commerce platform behind millions of businesses, including Thrive Cosmetics and Momofuku, and it'll help you with everything you need. From website design and marketing to boosting sales and expanding operations, Shopify can get the job done and make your dream a reality. Turn those what-ifs into [inaudible]. Sign up for your $1-per-month trial at shopify.com/specialoffer.
David Shapiro (1:23)
David Shapiro here, your personal Chief AI Officer. So what I wanted to do today was unpack some of the recent patterns and trends that we've been seeing. Now, I made a video recently where I talked about all the reasons I think AI is slowing down. And of course, I'm not the only one. There are plenty of people who disagree with this story, and I'll address that in a minute with respect to the potential emergence of echo chambers. But first I want to address: okay, what does it mean now that AI is slowing down? Or at least, there are initial signs that it might be slowing down in terms of progress. And that's not to say that it's stalling; it's just that the rate of acceleration is deteriorating. So when I say slowing down, we're still at the very early stages, if this trend is indeed reversing. So the first thing is safety. This is really great news for people in the safety crowd, because it means that the singularity is not going to happen in 2027. We can kick the can down the road a little bit further before we get an intelligence explosion, if an intelligence explosion is even possible. Personally, I've started to have doubts that we're going to get those accelerating returns, particularly as I've seen some new news about the way the human brain might work. There is increasing evidence that the human brain is not just a matter of computation based on neural synaptic connections, but that it could be a combination of that, the electromagnetic waves that propagate across the brain, and quantum effects. There is increasing evidence that human consciousness and human intelligence are actually the combination of several energies and several parts of physics all working together. So I'm just like, maybe there's a lot more to intelligence than we thought. And of course there are going to be a lot of people out there saying, see, I told you so. But, you know, it is what it is.
And these are also just possibilities. But according to this possibility, it might be that there are going to be continuing diminishing returns with respect to neural networks, or even silicon-based computing. That means it will just be increasingly difficult to either reconstruct or capture human-level intelligence. And another thing that's emerging, to me, is that we are going to see a very distinct bifurcation between human intelligence and machine intelligence, meaning that it's going to be kind of like comparing apples to oranges. And it really already is, because we look at large language models, which are very clearly processing information. I remember I had a conversation with some philosophers a year ago or so, and they made the somewhat asinine claim that, oh, they don't know anything, there's no information there. And I'm like, that's literally all they're doing: processing information. But it depends on definitions. To these philosophers, this couldn't be a machine that processes information, because their definition of information was stuff in human brains. I'm like, okay, well, that's just a bad definition of information. Anyways, I'm going down a rabbit hole. My point is that it really depends on how you look at intelligence and how you define intelligence. And I really don't like those gotcha questions, because it's like, how do you define intelligence? Well, it depends on who you ask. There are a million definitions of intelligence. And the fact that we don't have a good definition of intelligence means that, by extension, we don't have a good definition of artificial general intelligence. When you ask a mathematician what intelligence is, they're going to give you one answer. When you ask a neuroscientist what intelligence is, they're going to give you a different answer.
If you ask a psychologist or a philosopher what intelligence is, again, they're going to give you fundamentally different answers. So, moving on, another thing this is good for, and this is going to be really reassuring news to many of you out there, is that if AI is indeed slowing down, that means the threat to jobs and the rate of change for jobs is going to be slower, which means the status quo we have is going to persist a little bit longer than perhaps some of us would like. Now, what I do want to address is that there are going to be mixed reactions to this. Some people are like, you know, let's just get it done, replace my job, I'm ready to get out of the workforce. Give me UBI and get me out of the workforce for good, I don't care. And other people are going to be like, well, you know, this will give us time to create new jobs; I don't want to lose my job yet. And so on and so forth. Now, if I had to guess, and keep in mind that I'm speculating here, which is a lot of what I do on this channel, my gut check now is that it's going to be five to ten years. And I've talked about this before, where you look at the adoption curve and it's like, you know, seven years, so maybe 2030. And 2030 seems to be a pretty sticky date. So, you know, anywhere from 2027 to 2030 is when we might start seeing some really drastic change out there. Now, I could be wrong. We could have a confluence of multiple technologies. Again, I'm really waiting to see how GPT-5 and robotics mix, because you see the number of bipedal, humanoid robot chassis being built around the world. And remember, this is only gen one. So GPT-5 and, you know, Claude 4 and whatever else, you combine that level of intelligence with robots, and that really could change a lot of things.
I don't know if it's proven out, or to what extent, but I've heard that Tesla is already using their robots in the Tesla factories. And the economic carrot for that is really high, so don't underestimate the power of that economic incentive to get things really going. But overall, if the advancement of AI intelligence is indeed slowing down, it just gives us all more time to adapt on a cybersecurity level, on an economic level, on a military level. So it means that, you know, your life is not going to get upended soon, hopefully. This leads me to want to address another thing: about, what, 12 months ago, a little bit more, I predicted that we would have AGI by September 2024. So that's just a few months from now. Now, what I was looking at at the time, and you know, if you go back and watch my videos, there's a whole bunch of charts and data I was looking at. This is right along the curve of what Ray Kurzweil originally proposed for when we would have human-level intelligence in a single computer, which is actually 2023. So that was one piece of data. I was also looking at parameter counts going up exponentially, which they have been, but they've slowed down. And the one thing I was not looking at, the piece of data I did not include in all of those calculations, was the exponentially rising cost of training subsequent generations of models. As I think Demis Hassabis was saying on a podcast recently, every subsequent generation from GPT-2 to 3 to 4 costs 10 times as much to train, if not more. So while all these other things are going up exponentially, so is cost. And that did not figure into my calculus. If I had recognized that, I might have said, well, we're probably going to get diminishing returns sooner rather than later.
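The 10x-per-generation claim above can be sketched with a bit of arithmetic. The numbers here are illustrative assumptions (a normalized base cost of 1 and a constant 10x multiplier), not real training figures, but they show why cost growth compounds so quickly:

```python
# Illustrative sketch: if each model generation costs ~10x more to train
# than the last, per-generation cost grows geometrically, and the latest
# run dominates all prior spending combined.

def training_costs(base_cost: float, factor: float, generations: int) -> list[float]:
    """Cost of each successive generation, assuming a constant cost multiplier."""
    return [base_cost * factor**g for g in range(generations)]

costs = training_costs(base_cost=1.0, factor=10.0, generations=4)
print(costs)       # [1.0, 10.0, 100.0, 1000.0]
print(sum(costs))  # 1111.0 -- the most recent generation is ~90% of total spend
```

Under these assumed numbers, four generations in, the newest model alone costs nearly as much as everything that came before it put together, which is the "diminishing returns sooner rather than later" dynamic in a nutshell.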
Now, I have been talking about diminishing returns pretty much for the life of this channel, and I've been wondering…
