A (42:29)
And so I already conceived of the brain as a Bayesian active inference machine. The way that we learn as humans is we come out of our mother's womb and we're curious, and we have these sensors, eyes and ears and so forth, and we're paying attention and we're learning patterns. It takes a lot of tries, but eventually you learn how to stand up, you learn how to walk, you learn how to hold a cup without it falling out of your hand. And so you're just training your mind in the same way that we're training these models to build knowledge. So I knew this about the brain, I intuitively understood it, and I was interested in the frontiers of neuroscience. And the difference between us as active inference machines and our AIs at present, as more passive inference machines, is that we are agents out in the world and we can go take action to do stuff. So we can be like, I'm detecting thirst, and I'm going to go walk over to that coffee shop to buy a cup of coffee and satisfy my thirst. Right? Whereas an AI right now doesn't. You know, we don't have any humanoid robots out there at scale yet, so they're not out taking agency in the world. They're constrained to an experience on these silicon transistors in our data centers. I think that will change, by the way. But for me, as soon as I grokked, not to overuse that word, understood this parallel, it was this aha moment of, oh shit. Holy cow. Everything we do is, you know, these LLMs are working in the same way our brains work, only it's a 5-petaflop data center instead of the 86 billion neurons in the human brain. Oh my God, right? So I was already tuned to that because of this experience of meditation. Now, at the same time, we're using these tools internally, we're in the tech industry, so we're kind of early adopters as they're coming out.
And you know, as amazing as the launch of ChatGPT was, which really woke all of us up to this transformer technology that was developed at Google, by the way, not at OpenAI, it was both an amazing experience and comically limited. Right? If you remember, when ChatGPT first came out, it couldn't search the web. You were interacting with knowledge that was, like, two years old, because it was trained on the Internet from two years ago. Still powerful, but you're not worried for your job. And so the next breakthrough for me was, I think, OpenAI's o3 model, their reasoning model. Okay? That was the moment where I was like, oh, this is going to totally transform our company and every industry and everything else. That was basically the ability for the model to teach itself how to think about new and novel problems. Okay? So if you ever see it thinking, like you type into ChatGPT and it says it's thinking, or it's explaining its reasoning to you, it's like, oh, Chris has asked me a question: how many ping pong balls can fit in a 747, some cheesy consultant question like that. It will teach itself how to answer that question. It won't just say, I think the answer is this because I've connected ping pong balls and 747 and my next predicted token says whatever it says. Instead it's, ah, the user wants this thing, and I'm going to go build a chain of thought to analyze this problem set like a McKinsey analyst would, and give you the McKinsey analyst answer. And what the model companies, OpenAI and these foundational model companies, were doing around the time that reasoning came out, the wave one for these models, is going out and traversing all of the Internet, all of the publicly recorded knowledge, and as we know, probably some of the not publicly recorded knowledge too, and building a neural net of that knowledge.
But eventually you sort of consume all the knowledge that's in textbooks, all the knowledge that's in public web pages. And really what you want to get at is the knowledge that's locked up in people's brains. Like, when I ask Chris about an industrial deal in Texas, what is his thought process? What goes on inside his mind, patterned from decades of experience? And so then the foundational model companies switched to hiring literally thousands and thousands of experts, math Olympians, McKinsey analysts, you name it, who were then training the models how the best humans in all these different fields approach problems. And so you could get this recursive runaway loop where the models could start to train themselves how to improve themselves, how to become smarter, become better. So for me, that o3 moment was the, ah, this is going to totally change everything at our company. And then, like anybody in my position, leading companies or leading organizations, it's humbling how hard it is to get people to pay attention. And it's humbling how hard it is to get people to change. It's just amazing how humans are hardwired to do what they've always done. So then there's a period of, how do I get the company to really pay attention to this and take it as seriously as I'm taking it? And you're yelling and pounding the table at every opportunity till you're blue in the face: yeah, but what about AI? What are we doing with AI over here? What about AI in our billing process? What about AI in our customer support process? Then I had a moment this spring where I said, okay, I'm not worried about this being a fad. I'm not worried about over-rotating on AI. The risk of not moving fast enough is far greater than the risk of over-rotation, so I have to do everything I can to get the company to embrace this. Remember Top Gun, the need for speed? It's like we all need the need for speed.
Because that's actually what I think is going to matter, since everyone will have access to the same technology. I don't think there are any moats in data. So GPs that are out there saying, well, I've got my special sauce in data, I don't think there's any moat there. I think the models wash that moat away. So the actual moat that GPs build is how quickly you can go from the next breakthrough in model technology, to figuring out how that's going to impact your business process, to figuring out how you can implement it in your business to your advantage, and then changing human behavior to implement that loop and take advantage of it. And the faster you can spin that wheel, the more competitive advantage you will build. And so, yeah, I had a moment. We had all our leaders, the top 75 leaders at the company globally, gathered in San Francisco for a week. And I said, we're going all in on AI. Every leader at the company has an imperative to come to me with a plan for how you're going to embrace AI in your business function. And then we are going to have a bunch of expectations of not just leaders but every employee at the company: if you're not actively using AI to redefine your job, then you shouldn't be here. Because this is going to be an organization of people that are on this vector of change. Now we're in the process of incorporating AI competencies into all of our job performance frameworks. Every job at the company, and we have hundreds of different types, will have certain AI competencies that we're looking for. We're ensuring our performance evaluation takes AI into account, and then we're evaluating for AI competency as we hire people. So by now, at this point in time, if as a candidate you can't sit down with a prospective manager and talk about how AI has transformed your life and the zillion different ways that you are using it, then you shouldn't be at our company.
And then what we're doing, you know, Rome wasn't built in a day, so imagine taking all the business processes that happen inside of your organization and cataloging them. Then imagine stack-ranking them for how well aligned they are to the current AI technology for rewiring and disruption. And then we're just patiently working through those business processes, taking a few of them a quarter, not trying to reimagine 2,000 different processes overnight. Pick the high-impact ones first, and you build a muscle. I think the biggest, hardest factor for organizations is getting all of your team on board. And, you know, we have almost a thousand employees, so most GPs don't have the scale of employee base that we have, many do, but most don't, so it's easier when you're smaller. But it's getting all of your employees to adopt this growth mindset of being excited about the change and leaning into it and being curious, versus this defensive mindset of, AI is coming for my job, and what am I going to do? Because you see it in different pockets. Like, if you work in support: Marc Benioff, the Salesforce CEO, is on the record just a few weeks ago saying that they fired 4,000 people out of their 9,000-person support team because of how well agents are now fielding the lower-level, tier one and tier two tickets that are coming in. So now imagine you're one of the 5,000 employees left on the support team, and you're looking at this going, wait, the model's getting better at a compounding rate. What if the support team is going to be 900 people next? What if it's 90 people? What if it's 50 people overseeing a thousand different agents? Right? And so what can happen, if you're not careful, is you devolve into a scarcity mindset, the opposite of an abundance mindset.
That's, I think, the hardest leadership challenge: how do you get people optimistic, skating toward that future, when it takes so much effort to rewire, and human nature is to start to fear for one's safety and livelihood? And I will admit I don't know all the answers, but I'd say that is the leadership challenge.