B (21:19)
Definitely going to have an impact. The place to start with where AI is going is the people who are using it the most. If you look at AI usage among those closest to it, developers, people in Silicon Valley, people working in media, usage is just skyrocketing. Go to your average startup here in Silicon Valley, over in Palo Alto, and it is all AI. That is what they are using, and from what I've heard they are doing a phenomenal job with it. I'm not an investor in this area, but I do follow it very closely, because it is the start of a lot of things, so it's important for me to keep tabs on it. And everything I hear is that the startup founders coming in today are all AI native, their teams are all AI native, and their output per worker is significantly higher. In fact, there are entire groups of investors and firms that will only invest in startups coming out of Silicon Valley if they are AI first, meaning they are using AI first. And this is on a two-year time frame, or at least a measure looking at the two-year time frame. Look at any of the usage statistics, the number of tokens used. A token is generally a word: any time you type a query into ChatGPT or any of the other bots, every word that comes out is a token, and that token is predicted. We won't get into how it's done, but that's a token. Token usage is just absolutely skyrocketing. Over the past 12 months, since about November of last year, a little more than 12 months, Google has seen, if I remember correctly, a 50x increase in token usage. You saw the same thing with Grok once it released its models. Every chart you look at is going up and to the right at a phenomenal rate.
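The idea that "every word that comes out is a token, and that token is predicted" can be sketched with a toy next-token predictor. This is only an illustration of the prediction step, not how a real language model works: real models use learned neural networks, while this sketch just counts which word tends to follow which in a made-up corpus. All names and the corpus itself are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram "model" that counts which token follows
# which, then predicts the most frequent continuation. The corpus is
# made up; real models learn these statistics from vast text corpora.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the token most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Generating a reply is then just this prediction step applied over and over, each emitted token feeding back in as context for the next one.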
So what we're seeing is the power users, the super users, the people who use AI a lot. And it makes a lot of sense: the product fits for them, so they're using it. I think it's only a matter of time before that trickles farther and farther down the adoption curve: the early adopters, then the next group, then the late-stage people. All the people on the front lines are using it a ton, and developers are already fully immersed in AI. Part of the reason for the token usage is that, as a developer or an engineer, you can say you want this code written, and then it writes all this code and queries itself on the back end. It does all this stuff that's more than just answering a question. So if we look at what is happening on the engineering side, it's hard for me to think that that's not going to come to the rest of the professions. And the reason for that is quite simple. The paper that was put out in 2017 that got this whole thing started was titled "Attention Is All You Need." It's the transformer architecture that does this whole prediction thing, and it can be applied to just about anything. It was written for text, and that's why we have chatbots; that's why we have done all of this. It takes the corpus of, you know, everything we've ever written, books, posts, whatever, shows it to the AI, and trains it over and over and over again. It takes a chunk of text, blanks out a word, and says, predict the word that's blanked out here. The model gives its prediction, and if it gets it wrong, it adjusts its assumptions. That's how AI works. But it works for everything. It works for images: if you break an image down into different sections, you can use the same process. If you take video, you now have a 3D matrix that represents pixels and time. So you can do this for so many different things. The thing that stops it from moving forward, though, is compute.
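The training procedure described above, take a chunk of text, blank out a word, ask for a prediction, can be sketched as follows. This only shows how a masked training example is constructed; the actual "adjust its assumptions" step in a real transformer is gradient descent over billions of such examples, which is omitted here. The sentence and function names are illustrative.

```python
import random

# Build one training example of the kind described above: hide a word
# and keep the answer so the model's prediction can be checked.
text = "the quick brown fox jumps over the lazy dog".split()

def make_training_example(tokens, rng):
    """Blank out one random token; return (masked tokens, hidden answer)."""
    i = rng.randrange(len(tokens))
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
    return masked, tokens[i]

rng = random.Random(0)
masked, answer = make_training_example(text, rng)
print(" ".join(masked), "->", answer)
```

Training repeats this millions of times: predict the hidden token, compare against the answer, nudge the model's parameters when it is wrong.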
That's why you see all of these data center build-outs. If you're going to predict a word, it's a handful of numbers. It's not easy matrix math, but it's relatively easy compared to predicting the next frame of video. And so you need more compute. That's why these companies are looking at data centers and saying, we're going to build one the size of Manhattan. They're looking into the future and saying, all right, we've got text, but that's not really all that useful on its own. What's really useful is being able to see the world. Not to go on about the glass-half-full, glass-half-empty thing, but you know what's going to happen in five, ten years. Right now we have text, and we're getting a lot of knowledge from it. But think about what knowledge and wisdom really are. I can type a question into the computer and it's going to give me a very good answer, right? But AI doesn't know what the world is. So consider this: I could show a banana to a five-year-old. They know what it tastes like, they know what it feels like. That is incredibly deep knowledge that AI does not have. AI has no idea what a banana is. It can tell you based on what it's read, but it has no idea what a banana actually tastes like, what it feels like, what it smells like, or how to predict its weight. These are all areas of knowledge and reasoning that we humans have and that AI is nowhere near. Just with our eyes, we can predict the way a shadow is going to move as the sun changes. We can predict how long it's going to take to walk to that gate. A five-year-old can do that; a ten-year-old can do that. What we take as extremely smart intelligence from AI is just a subset of what humans can do. So if the data center build-out continues, which will be a struggle around electricity, in five to ten years we're going to be so much deeper into this than we really know. So where it goes is really a question of electricity: are we going to allow these data centers to be built or not?
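The compute gap between predicting a word and predicting a video frame can be made concrete with a back-of-envelope comparison. The numbers below are rough assumptions for illustration, not measurements of any real model.

```python
# Back-of-envelope sketch: how many values must be produced to predict
# one text token versus one HD video frame. Assumed numbers only.

# Predicting a text token means choosing one item from a vocabulary,
# so the output is essentially a single token id.
outputs_per_text_token = 1

# Predicting an HD frame means producing a value for every pixel
# channel: width x height x (red, green, blue).
outputs_per_video_frame = 1920 * 1080 * 3

ratio = outputs_per_video_frame // outputs_per_text_token
print(f"~{ratio:,} values per frame vs {outputs_per_text_token} per token")
```

And video is a stream of such frames, many per second, which is why frontier labs talk about data centers the size of Manhattan rather than incremental upgrades.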
That's becoming a real struggle. But in five to ten years, robotics will absolutely be a thing, if data centers are allowed to take root.