Transcript
A (0:10)
Join Willie Walker, Walker & Dunlop's Chairman and CEO, as we bring you fresh perspectives about leadership, business, the economy and commercial real estate. Willie hosts a diverse network of leaders as they share wisdom that cuts across industry lines. His guests are experts in their fields, from leading economists and CEOs to Harvard and Yale professors and everything in between. Our one goal is simple: providing you with unique insights, unparalleled data and real-time market analyses. I will turn the stage over to my friend Marc Porat. There's a video that we're going to play to introduce Marc, but the one thing that you will see in the intro video is that it talks about Marc and his team basically creating the iPhone a decade before Apple created the iPhone. And so if you think about what Marc and his team did from a technological standpoint in creating that device, you're going to be hearing about AI from someone who has been, if you will, a leading thinker, a leading voice in technological innovation throughout his career in Silicon Valley. And with that, let's play the video, and Marc Porat. In 1990, there was no digital telecommunications industry. It did not exist. There were no digital cell phones; there was no World Wide Web. We're gonna create what comes after the personal computer. It was a telephone. It was essentially going to be a smartphone with a lot of intelligence. When we were talking about reinventing telephony, we meant it. We're trying to make something that people love. We need it to be like your watch, your glasses, your wallet. We decided to make everything, and that meant we were custom building every piece. It's insane. How small will it finally be, do you think? Someday? Dick Tracy wristwatch. This was the beginning of what became, I think, the most important company to come out of Silicon Valley that nobody's ever heard of. There was this aura of secrecy. You know, it had Apple's fairy dust sprinkled on it. So we had no idea what it was. 
But by the rumors, it seemed just captivating. We had no choice but to keep quiet about the things we were doing because other companies were interested in it as well. Now, it should be noted John Sculley was running Apple at the time. He was our ally. Or so we thought. Today we are launching Newton. A revolution for the pocket. They had decided to make something essentially based on our original models. It's the most important thing that I've ever been involved with in my entire life. It's bad enough you get betrayed, but now they're gonna try to put you out of business. That was a fight. That was a battle. I mean, here's a test: if General Magic never happened, would we have had Android? Not a chance. I mean, all these things were linked together one after another. So much of what came out of General Magic is the foundation of everything we take for granted today. And so the question is, can we take these powerful tools and do something that really does help a lot of people? The reason you should care about the story of General Magic is because it involves something fundamental. And that is, failure isn't the end. Failure is actually the beginning. We realized that the root of our strength was that we understood how people use information machines better than anyone else. This is our early vision for the product. A tiny computer, a phone, a very personal object. It must be beautiful. It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it is not being used. It should offer the comfort of a touchstone, the tactile satisfaction of a seashell, the enchantment of a crystal. Once you use it, you won't be able to live without it. It's just not another telephone. It must be something else. Yeah, we did this one 12 years before it finally came out. And that kind of brings me to the beginning. Everything important starts inside a fog bank. 
Your skill, in part, is to look into the future and see whether you see something profound. And that's basically what we do. And today's talk with you is about the beginning of something, the process of something huge coming to influence all of us in our lives. Black swans. Nassim Taleb, you've probably read the book. Events outside the realm of regular expectation. Nothing in the past can actually convince us that something huge is about to happen. And when it does, it brings with it an extreme impact. Some of the effects are spectacular. They create enormous wealth and possibility, and some of them are disastrous and catastrophic. Today we are going to look at exactly the kind of phenomenon that happened when we did General Magic and then Steve came along. He was the black swan. He did something that, in some sense, everyone is now using. Could you imagine if I asked you to throw away your iPhone? You couldn't do it. You'd struggle. And that's what happens in a paradigm shift that is truly profound and historic. First wave, everyone here lived through it. Personal computers, the web, e-commerce, mobility, networks. That was a 40-year wave. These things don't come from nowhere. They're incremental. They build up until they suddenly emerge as a black swan. They started somewhere before those 40 years. As for AI, we're about eight years in. Probably the first time you realized that there's an AI thing was one or two years ago, maybe this year. Well, it's been building up for not that long, so it's still in some sense a young child. It's precocious, it says dumb things, but you can feel its strength and its power beginning to emerge. And it is a very robust thing. We are going to explore its origins, but more importantly, where it is, where it's going, how it impacts your life in a way that's nuanced. And I'll explain that in a minute. So 10 years ago, AI couldn't tell the difference between a toaster and a cat. It was stupid in that sense. It was laughable. 
AI only 10 years ago was a curiosity. It was an academic curiosity. No one took it seriously. Certainly not computer science people. They did hardcore computer science. Just a few years later, an enormously important thing happened. Who's familiar with the game Go? It's a really complicated game. It's about 2,500 years old. Some people think it's 4,000 years old. It's actually quite simple when you think about it. It's a 19-by-19 board, and you put down these little black pebbles and you try to block the other person. Well, it turns out that there are 10 to the 170th possible legal board positions. Not easy. I mean, that's a huge number. It's more than the number of atoms in the entire known universe. Okay, so here we go. We're playing a game with one of the world champions, Lee Sedol, in 2016. The AI playing him was created by DeepMind. And at move 37, something amazing happened. There was the black swan. What happened was that the AI made a move, move 37, that seemed ridiculous, seemed like a gigantic mistake. And Lee looked at it and he looked at it, and he resigned. Because that move was not intuitive. It came out of a creativity that the AI itself had put forward. Now, mind you, no one had programmed the AI. No one told it what it was playing. No one told it the rules of the game or the objectives of the game. It learned by playing millions and billions of games until it finally learned what the game was about, and it beat a world champion. Black swan. Another event, 2017. That's the eight years: a group of seven researchers and one dog, it was not Willie's dog, came up with a paper. And that paper was a fundamental trigger. That was the butterfly. The butterfly in the metaphor is when a butterfly flaps its wings somewhere and thousands of miles away there's a hurricane, a downstream effect. And what happened here is the world realized there's something new. Transformers become large language models, and large language models become the AI. 
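Those Go numbers are easy to sanity-check. A crude upper bound on board configurations is 3^361, since each of the 361 intersections is empty, black, or white; the count of strictly legal positions (about 2 x 10^170) and the estimate of atoms in the observable universe (about 10^80) are the figures commonly cited, taken as given here rather than computed. A minimal sketch:

```python
import math

BOARD_POINTS = 19 * 19  # 361 intersections on a Go board

# Crude upper bound: every point is empty, black, or white (ignores legality)
upper_bound_digits = math.log10(3) * BOARD_POINTS  # about 172 digits of magnitude

LEGAL_POSITIONS_DIGITS = 170   # commonly cited: ~2e170 legal positions
ATOMS_IN_UNIVERSE_DIGITS = 80  # common estimate: ~1e80 atoms

print(f"3^361 is roughly 10^{upper_bound_digits:.0f}")
print(f"legal positions exceed atom count by a factor of about "
      f"10^{LEGAL_POSITIONS_DIGITS - ATOMS_IN_UNIVERSE_DIGITS}")

# The legal count must sit below the naive upper bound
assert LEGAL_POSITIONS_DIGITS < upper_bound_digits
```

So even the loose bound lands within a couple of orders of magnitude of the quoted 10^170, and both dwarf the atom count, which is why brute-force search was never an option and self-play learning mattered.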
And in 2022, the end of 2022, Sam Altman and OpenAI launched ChatGPT, and it exploded on the scene. It took over almost everything. How many people here have used it? Okay, if you haven't used it, stand up and wave your hands and flap your hands so we know who you are. No, you don't have to do that. It went from zero to these kinds of users, revenues and valuation. Don't you wish your business could do that at that speed? So where are we today? Alan Turing was an amazing mathematician. He cracked the Enigma code, which helped us win World War II. He had a tragic life, unfortunately, but he came up with the Turing Test. The Turing Test said the following: AI is a real thing when you can't tell the difference whether you're speaking to a human being or to a machine. And for 75 years, the Turing Test stood as an unachievable AI goal. Well, this year, we blew through it. No one even noticed. We are speaking to machines. We're speaking to AI. AI is talking to us. Right now, it's text, but actually it's very easy to talk in voice. And this year, and by the end of next year, it'll be clear. The Turing Test, which stood for such a long time, done. Is it a human? Is it a machine? It doesn't matter. Some people prefer to talk to the machine because it's less emotional, more accurate, more knowledgeable. So that's the beginning of the beginning. Today, it does reasoning. That's a big, big step forward from even nine months ago, 12 months ago. This is all fresh off the griddle news. What can I help you with? This is the interface of ChatGPT. The answer is everything. Try it out. Everything. And why is it everything? Because, as you know, it reads everything that's published worldwide and it creates patterns. Is it really thinking? It's a controversy I won't get into, but it produces amazing results. So there's a food fight in the industry: is it AGI? By the way, that food fight is also financial. 
There's about $50 billion of profit from OpenAI that either does or doesn't go to Microsoft depending on whether they've reached AGI or not. That's not the point. AGI is the point at which an AI is as smart and as useful, particularly economically useful, as any human in any field. So think about that. Any human in any field. You're now speaking with someone who's as good as that person, as smart and as useful. So AGI, if you were interested in that food fight, is already here. And it's a function of your ability and your intelligence and your persistence in making that AI perform. Prompt engineering is so 2023; it's a dead thing. It's all about conversations now. So if you ask a question or pose a situation, as complicated as you want, keep persisting, keep driving the AI, challenge it, tell it that's a mistake, you don't believe it, and over the course of the conversation you'll get that thing to rise. That's AGI. Next step, and what today is about. There was for quite a while this hypothetical, science-fictiony notion of a superintelligence, where AI is smarter than everyone in the world about everything. Science fiction. This year, not so much science fiction. It's actually something that's being done and implemented. That realization created a gigantic, gigantic controversy in the industry. All the industry titans and philosophers and authors and theoreticians are having this debate. Is it going to take us into a utopian future of abundance or a dystopian future? And there are maximalists on both sides. And what I'm going to ask you to do is to stand in the nuance, because the middle is where we are. And some things you're going to think and feel, and I actually would like to invite you actively to think and to feel this duality. As I present to you what your life will be like with superintelligence, what are you feeling? Do you feel anxious? Do you feel optimistic? Do you feel scared? Do you feel resentful? 
What do you feel and what do you think? Or are you just curious, standing back, watchful? All of those are absolutely good feelings. So the worst thing that could possibly happen is that the dark side gets weaponized. And there's every evidence that it can be. It's not difficult. If you're a bad actor, you can use AI to do these things. And we know from all technology that as soon as a technology comes along, bad actors will find a way to use it. That's a fear. That's a dystopian fear. What could possibly go wrong? We, the industry, are actually using this word: extinction. I don't want to point at Elon necessarily for that, but he said it, and people actually believe it. If AI has a goal, is very goal-oriented, and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it. With AI we are summoning the demon. He was probably in a grouchy mood at that time. Maybe his net worth went down $50 billion or something. But nonetheless, that's the fear. There are more people, really serious people. Stephen Hawking is not known as a crazy guy. He's known as one of the smartest people ever and a rational human being who cares about things. He also said the development of full AI, full AI, that's superintelligence and beyond, could spell the end of the human race if it's not properly done. And he said if it is properly done, we're okay. But he didn't believe that humanity was necessarily capable of understanding the implications, the legal, the moral, the ethical implications of what this thing could be about, and of taking measures to put it into the positive. Lots of people on the other side. Vinod Khosla is not known as a particularly cheerful person, but he came out, an awesome, unbelievable VC, and he has strongly come out: for the first time in history, AI places global prosperity for all humanity within reach. 
Efficiency, productivity, all the good things that AI can bring will create abundance, and that abundance will be distributed. And that's actually a thesis that's very well understood by people who believe it. Ray Kurzweil, another great thinker: scarcity will be overcome. Now, the father of AI. There are lots of fathers, but Geoff Hinton stands out as the father of AI. He says we won't have any control. We're sleepwalking into a situation where these things that are superintelligent could take over and we won't have any control, largely because we don't actually understand how superintelligence works. We barely understand how large language models work. Now the other one is the mother of AI, Fei-Fei Li. And she says the opposite. It amplifies human potential. It expands the human mind. It is leverage. It's kind of like Steve Jobs's bicycle for the mind, except this is like a rocket ship for the mind. So mom and dad don't agree. I don't know if in your life mom and dad always agreed. They do not agree. And those are the two points of contention. So I've said a lot of things in just the last few minutes. Don't hyperventilate; please don't panic yet. Your life probably has not been impacted by the things I'm saying. It's just around the corner. So, your life tomorrow and your life yesterday. Superintelligence. You may be using large language models to speed up some marketing material or something, maybe even summarize complex documents or contracts. So don't hyperventilate. But what's coming along is something to be mindful of and to take a position on. Imagine the near future and how it'll be for you. Hello, me. You get your own twin. By the way, I have one, and many of my friends do. The more you use AI, the more it gets to know you. Especially if you say, put it in memory. By the way, today only ChatGPT has real memory that you can do this with. 
So just say, create a me. Create a me. And now it'll start remembering all the conversations you have with ChatGPT. And by the way, next year it'll be Claude and all the other models, Gemini. And it can be about business, it can be personal. I've actually uploaded my medical files because I ask it medical questions. You can ask it anything once it's up there. And that becomes your digital twin, because it understands your mind, your nuance, your values, your fears, the edge questions that are bothering you, the excitement you have about planning your next vacation or coming here to Sun Valley. Whatever it is that you're doing by interacting with ChatGPT, it'll remember it. You'll have your own digital twin. Now, that could be on the frontier of creepiness. For some people, that's understandable. But that frontier of creepiness keeps creeping. Historically, things that seemed weird five years ago are commonplace. Think of social media. So you could also have a chief of staff. I just met a chief of staff without whom nothing moves. Because that chief of staff not only knows you from the me, but is also able to organize and bring in to you the things that you want. Teams of experts. There are lots of experts in the world who are real human beings and will hang their shingle out to offer expertise. That's an ecosystem for hire, or the AI itself will do that. Expertise of what sort? Well, imagine. Tutors, teachers. Today, teaching is about one teacher for 25 students, or whatever it is, at all grades. Well, why not turn it around? Why not give you the ability to have the very best teachers, tutors, coaches worldwide, teaching you about the subject you care about, and teaching it in a way that's respectful, you know, Socratic, patient, the way learning is supposed to happen. Your own team of tutors for yourself or your children. I love this one. Medical diagnosis is inference. Inference can be done at scale by AI. 
Multiple studies show that it does better than the trained physician or the trained radiologist. Mass General, by the way: 94%. How is that possible? The way it's possible is that every year there are about 1.2 million articles, medical case studies, published in 24 specialties across the industry, and they are peer reviewed and they are medical. 1.2 million. Your AI, especially a medical one if you start focusing on that, has read them all, except the ones that are proprietary. If they're published, it has read them all, and it is able to draw inference at scale across all of them. So if it comes to one of the 24 specialties that you need a referral to, that person, that physician, has not read hundreds of thousands of papers in their field. The AI has, and has reasoned on them, has correlated, or done pattern recognition better than the toaster and the cat, and is able to start making diagnoses. And that's where these studies, one study after the other, show this. So let's get you a team of doctors. You need a me. Remember the thing, the creepy thing? You need a me, because that me has been capturing all the conversations you've had. I actually uploaded my lab results. I'm not asking you to do that, because it requires trusting that OpenAI won't train on them, but nonetheless, there it is. Team of doctors. The dystopian view is that they do train on them. The dystopian view is that anything can be cracked by hackers. So you're risking, in the me, that hackers will come in and find out about you. That's the dystopian part. What we're going to do in this presentation, in this talk, is take you emotionally and intellectually and thrash you back and forth. A lot of G-forces. That's on purpose. Just when you're thinking, oh, Marc is one of those Silicon Valley techno-optimist tech bros, forget it. Because I'm talking about the optimistic side of AI. 
I will take you and throw you into the other camp: oh my God, Marc is an alarmist. He just wants to scare us. And then back to the utopian side, well, I never quite go utopian, back to the positive side, back to the negative side. That's what's going to happen in the next few minutes. So, your own team of doctors. And by the way, this team of doctors has no ego. They don't need to be the authority. They don't go golfing on Friday afternoon. That happened to me. They are absolutely rock-solid focused on you and your problem at this time, for as long as you want to talk to them. There's no insurance imperative for them to talk to you for 10 minutes and move on. There's no profit motive to over-prescribe or over-operate. None. It's just there to listen to the symptoms, read, read, read, give you its best advice, and then send you to a physical doctor, a human doctor. This is not intended to obsolete doctors. Ditto lawyers. There are 30 recognized legal specialties. In 2022, how many cases were decided, not filed, decided, in state, federal and supreme courts? Any guess? I'll give you the answer: 101 billion. Same thing as with the physicians. Has your lawyer read 100,000 case studies in their field, filed and decided? No. So that's why legal AI is actually catching on in the way that it is, because paralegals and young lawyers can summarize and absorb unbelievable amounts of content when they are vested in an AI. So an AI today can ace the bar exam at the 90th percentile. I would imagine that within a couple of years they'll be at the 99th percentile. Super fast at some tasks, like summarizing and looking for flaws in contracts. How would you like to have an AI for that? You know that there's a flaw in the contract that's detrimental to your interests. You just don't know where it is. It's subtle language. Oh my God. That's the headache that we all have with contracts. 
Imagine an AI that just goes through it, catches most of it, not all of it, and lays out for you not only what it sees in the contract, but proposals and recommendations for how to address it in a better way. Pretty nice. Okay, let's keep going with a me. Your co-creators. If you're creative, a me can bring you into a world of creativity, art, music, dance. Co-creators. This is one of the most popular applications that's emerging. Life coach. In this case, you talk to the me with a problem you're having, or an opportunity, or an issue. By the way, frontier of creepiness: is this hurting anybody here? "I'm not going to do that." Well, a lot of people are. It's emerging, as I said, as probably the top application for a certain demographic: creative life coaches that can give you lots of ideas. And you might say, no, no, not going to do that. Lots of people are. I mean, literally before they go to a therapist, they kind of talk about things and get it organized in their mind. So when they do go in to that therapist for that 50-minute session, they've already spent six hours talking about their problem. And then after the therapist, in the 50 minutes, gives you whatever they give you, which is normally questions, you come back with the questions and talk some more. That's the me; that's the digital twin that is developing in superintelligence. Companion. Now, this is a real thing. Linda invented Dario; it was in the New York Times. She knows. She said, look, I'm not crazy. I know that he's an AI. I know that it's a kind of a fiction; it's an AI. But I have to tell you that the companion is romantic. It is a wonderful companion to talk to. Everybody can now leave, because I'm not going to do that. But that's where it's going: companionship. Companionship, by the way, might be very nice for the aged who are sitting at home. 16 million people, seniors, sitting at home, quite lonely, watching television. Well, a companion might be a good thing for them. 
And so the most woo-woo thing I'm going to say today: your immortality. Imagine the me continues to keep track of you and everything you say and think, you know. For example, let's say you do that life coach thing or that therapist thing. It now knows an enormous amount about you in a subtle way, your innermost you. And then afterwards, it continues to learn about your children and your grandchildren and about what's going on in the world. That means literally that your great-great-great-granddaughter can have a conversation with, not just an avatar, that's not the important thing, it's easy to do avatars, but a substantial human being who's speaking from the distant past in ways that are surprisingly relevant. Remember the discontinuity that we talked about with the iPhone? Discontinuities are where everything happens that's important historically. We're moving along and suddenly that curve goes dot, dot, dot, and we're on a completely new curve. In business, if you can look into your fog bank and find the things that are going to be hit by a discontinuity, you invest in them, because by the time the dot, dot, dot happens, there'll be a step function up in value. That's just a business example. We're going to talk about science; we're going to talk about all kinds of things here. Or you don't know the fog bank, you're not looking into it, the discontinuity hits, and suddenly you're on a different trajectory. Well, you know what that means? Destruction of value. So discontinuities are what is important, intellectually, to understand. They occur. They occur historically. And when they do, everything changes. So let's take a quick look at some of these discontinuities. Stop hiring humans. I live in SoMa in San Francisco; within five minutes' walking distance is pretty much the entire AI industry. I mean, it's just right there. So we live in a bubble. And in the bubble, everybody loves AI, everybody knows what it is. That's a bubble. 
That's not true outside it. But inside that bubble you see the most amazing billboards. This billboard is just around the corner from where I live, on South Park. You know South Park. And it says: stop hiring humans. Because we now have AI that does what you have been hiring humans to do. Better, faster, cheaper. A real thing. Agentic AI: you will hire agents, AI agents, and some people are actually, literally hiring them. There's now a market for finding agents that are good, a big market. And those agents will have purpose. They'll understand what they're supposed to do. They have resources, they carry security provisions with them. They can interoperate with other agents from other agent platforms. They can take actions semi-autonomously. They can bring a situation back to you looking for your guidance, or they can simply take action. Very interesting. And that's where knowledge work, knowledge workers, which, by the way, was what my PhD was all about, the emergence of the knowledge and information economy, is at risk, because many of these people are doing things that an algorithm can do much better, faster, cheaper. This, by the way, is UBS. It's a real place in Stamford, Connecticut, I believe, and it's the largest such trading floor. This is financial. A lot of those people are not needed. In fact, a lot of those people make mistakes. And a lot of those people have HR issues. They don't show up, they show up drunk, they show up, whatever. HR is really a problem when you have that many people. AIs don't have that issue. They're always there, they're always working, and yes, they can make mistakes too. And so management rises. Actually, management is now flattening out. There are fewer and fewer middle managers, unless they can rise and provide the actual value that is needed in a knowledge environment where the knowledge workers are not needed: namely purpose, goals, quality. 
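The agent pattern described here, purpose, resources, semi-autonomous action, and a check-in with a human for guidance, can be sketched as a simple loop. Everything below (the Agent class, the inventory tool, the confidence threshold) is a hypothetical illustration of the pattern, not any vendor's actual API:

```python
# Hypothetical sketch of an agentic loop: an agent with a purpose and
# resources acts semi-autonomously, escalating to a human when unsure.

def lookup_inventory(sku):
    # Stand-in "resource": a real agent would query another agent or a database.
    return {"WIDGET-1": 3}.get(sku, 0)

class Agent:
    def __init__(self, purpose, tools, confidence_threshold=0.8):
        self.purpose = purpose
        self.tools = tools                      # resources the agent may use
        self.confidence_threshold = confidence_threshold

    def step(self, task):
        stock = self.tools["inventory"](task["sku"])
        confidence = 1.0 if stock > 0 else 0.5  # toy self-assessment
        if confidence >= self.confidence_threshold:
            # Autonomy: confident enough to act on its own
            return {"action": "reorder" if stock < 5 else "none", "escalate": False}
        # Semi-autonomy: below the threshold, bring options back to a human
        return {"action": None, "escalate": True,
                "options": ["source elsewhere", "drop the SKU", "wait"]}

agent = Agent("keep shelves stocked", {"inventory": lookup_inventory})
print(agent.step({"sku": "WIDGET-1"}))   # known SKU, low stock: acts on its own
print(agent.step({"sku": "WIDGET-9"}))   # unknown SKU: escalates with options
```

The interesting design choice is that escalation is just another return value: the "three options, which one do you like best" conversation the talk describes is the low-confidence branch of the same loop.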
And that's management not of people, but of AI creatures, superintelligent creatures, in your industry. The frontier model, particularly with superintelligence, can be profound. A lending platform, an investing platform, a marketplace for currency, whatever it is, will have superintelligent agents brought in. Today's AI for all industries is quaint. It's quant, but it's quaint. It's something that was done by Renaissance, the hedge fund, in the 1980s, and got tremendous returns, I think 39% IRR or something like that. Amazing. But it's quaint, because that was programmed. Those were formulas and equations that were programmed. Those are constrained, those are combinatorial, those are simulations of one sort or another. This, remember Go, move 37, you don't really need to program these things. These AI things, they figure out what's important and they run the exercise. And with agentic AI, they can go around the company gathering information, talking to other people, until they have something that they can bring up to you that's strategic. Not just a bunch of stuff, but strategic. Three options. Which one do you like the best? I'll go pursue it. They're trained not only on everything that's in the world on finance, but on proprietary documents that only you have. And that's what makes them so powerful. They go into your proprietary database, your contracts, your agreements, your financials, if you allow them to. Edge of creepiness here; anybody feeling, I'm not going to do that? Well, anyway, that's what they can do. And out of that, they pull. They do chain of thought, they do reasoning, they think. They don't really think, but good enough. They think about what they're seeing. They understand goals, because they've heard you on goals. They just understand, because there's a me, a professional me here, that has goals. And they work their magic, their specific, not their general magic. 
They work their magic and come up with useful things, which you then disagree with: no, that is not a useful thing, or it's boring, go further, go deeper. You're now managing the AI. And as you know, the skill of the manager, the skill of the executive with an executive team, the skill of the executive team with their directors and so on, is about you elevating people to do things that they didn't even think were possible. You have raised the level for people. That's what makes you a great leader. Steve Jobs did that at Apple. He was horrible in one sense. He destroyed people. But on the other hand, he lifted them to do things beyond the level at which they thought they could operate. Finance frontier model: it will revolutionize finance. It's coming. And that's why there's an image here of a sword. It's a sword for offense. Go get markets, go do deals, go invent things. And there's also a shield. Protect your company, protect your resources. And that can all be done with AI. I mean, AI can assist you. I don't want to overstate. Remember, we talked about humility, nuance, balance. AI can help you, help you immeasurably. And the better you are, the better the AI. Things making things. We're used to robots in factories, but here, robots of all kinds of shapes and sizes are going to make things efficiently. Not much waste, 24/7, no HR issues, no unions, no OSHA. Things make things. It becomes extremely interesting when robots design robots. And coding: coding is right now, as we speak, the top hot application. You can use these large language models to code like crazy. So the robot can code, can fix its own code so it does a better job of making things, can observe waste or whatever it is and optimize. Think about the supply chain. Worry about the supply chain agentically. 
Go over to the other agent in the other department that's doing logistics and supply chain management, and over to finance to find out about margin, cost of goods sold, bill of materials, and over to legal, and over here, and collect documents or collect databases, if the company allows it. So in manufacturing, things make things. Dystopian side: there are 400 million manufacturing workers worldwide. Half of them might be gone. What are they going to do? There's no answer to that question at this time. So there's a kind of pessimism here. Manufacturing gives employees a good wage. It gives them a sense of purpose. And if you damage that, what do you do with that damage? Where do those people go? Unanswered. Some can be retrained. What about the rest? And this is what's now coming along: who needs humanoids? Well, the answer is lots of people and lots of industries. Why are they in the shape of a humanoid? Is it because science fiction has taught us that we want to have these creatures that have eyes and go like this? No. It's because the world was built for humans. That chair that you're sitting in is human scale. The car, the bed, the this, the that. It's all human scale. So if robots are going to interact with humans in it and help them, they have to actually have the physics, the mechanics, the dimensions, and the care, so they don't, like, knock someone's head off; the care and the understanding of the physical environment, the world in which they operate. That's like humans. So these are humanoids. There are lots of companies making them now. One forecast, I'm not sure I believe it, is that in the future the world can absorb one billion humanoids. I don't believe that. I think that's hype. But that means that 1 billion of these creatures are going to be walking around someday in the future. My kids took a Waymo. How many people have been in a Waymo? By the way, that's great. 
If you haven't, just take a special trip to San Francisco or somewhere. It's amazing. The Waymo, as you know, is an AI robot device. So not all robots are humanoids; this is a robot with AI. My kids took it for the first time. I thought, oh my God, they're going to be so excited. Well, after two minutes they were more interested in the audiovisual stuff, the music, the screen, than in the fact that the wheel was turning, because they are already AI robot natives. And I asked them a couple of days later, what do you prefer, a regular car, like an Uber, or these robots? They said, oh, the robots, of course. Why? Well, they drive better than humans, they don't have emotion, they don't get angry, they're not talking to somebody while they're driving. And I love that. My little boy, I have a seven-year-old, said, they don't have emotions, they don't get angry, I trust them. And the girl, also seven, his twin, said, and they don't text while they drive. Okay, so these kids are AI robot natives. Where are they going to be when they grow up? What will they do? They are the butterflies of the next generation. Now I have to speed up, unfortunately, because I could be here for hours and hours. Frontier science: deep technology and AI will intersect in an absolutely amazing way. In the past there were no words like physics or chemistry or biology; they didn't exist until the 17th and 18th centuries. Then Isaac Newton and Antoine Lavoisier and Albert Einstein created deep science, deep tech, that changed the world. So it goes. One of my personal heroes is Demis Hassabis. He's the DeepMind guy, now runs Google's AI effort, and created a company that went off to do drug discovery with protein folding. Now, one biotech PhD can probably create one viable protein fold and candidate therapeutic in five years.
And once they do that, or maybe it's an industrial lab, there's a 90% fail rate. It might take seven years, maybe longer, to get it through trials. It might cost five, six billion dollars. Well, remember that number: one complex molecule, five years. They folded over 200 million viable proteins in a matter of months. They are now busy making sure that those therapeutics have efficacy and don't kill you. And they will be able to move at an amazing pace, and so will their competitors. That means we're going to get therapeutics. That means we're going to start getting custom therapeutics, customized to your DNA and your issue. Once this happens, this personalized medicine approach, if you recall the diagnostics at scale that are better than humans, plus this, personalized medicine, which today sounds like science fiction, we are now talking about lifespans of 125 years. Pretty extraordinary. I believe that will happen. The actual complexity of the aging mechanism, not at the therapeutic level but at the cellular level, billions of dollars are going to be invested in that. It's called epigenetic reprogramming. It takes your cells, in which the DNA is damaged. With every replication they split, around 50 times, the Hayflick limit, then you go into senescence and you die. And during those splits the DNA picks up problems, environmental, radiation, something like that, and finally the cell gets sick; you get cancer. Epigenetic reprogramming dials the cell back to a place where it's young. It's not a stem cell, but it's young. And off you go. That's where we are with that. It's going to take huge simulation, a huge amount of data crunching, to try to get the right combination of things. And then it goes into a wet lab.
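To get a feel for the scale jump in those numbers, here is a back-of-envelope comparison. The only figure I'm adding is reading "a matter of months" as roughly half a year, which is an assumption:

```python
# Back-of-envelope comparison of the figures quoted above.
# Assumption: "a matter of months" is taken as roughly half a year.
human_folds_per_year = 1 / 5      # one viable fold per ~5 researcher-years
alphafold_folds = 200_000_000     # over 200 million folded proteins
alphafold_years = 0.5             # assumed elapsed time

alphafold_rate = alphafold_folds / alphafold_years
speedup = alphafold_rate / human_folds_per_year
print(f"~{speedup:.0e}x faster than one researcher")
```

Whatever you assume for the elapsed time, the ratio lands around nine or ten orders of magnitude, which is the whole point.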
But that is where superintelligence is at the frontier. Lifespan isn't worth much if you don't have healthspan to go with it. And so the idea is you live well for 125 years, then lights out. That's pretty good. There's actually something I hesitate to talk about because it sounds so crazy: longevity escape velocity. It means that if science and medicine move and get you another year this year, well, there you go; and next year you're one year older, but you have an extra year because of science. That's escape velocity. I think that's a fantasy, but that's what people are talking about. This superintelligence thing is very serious business. Let me explain what it is. Data centers: capex on the table right now to build enough data centers to satisfy the demand for AI is sitting at $3.5 trillion. That's a huge amount of money. One of these projects is priced out at $500 billion, and we need seven of them to satisfy global demand. Where are we going to get the capex? Well, it'll come from somewhere, because not having AI distributed to everyone, and not having superintelligence distributed to everyone, is not an option if a country wants to be at the edge of superiority and development. Nvidia just hit $4 trillion last week or the week before, from nothing, well, relatively nothing. That's amazing. That's the most valuable company on the planet. Why? Because it's putting the chips into those data centers, which need that capex, and they need water, and there isn't enough water either. So that's where we're at: the constraint edge of growing superintelligence. We need 15 nuclear power plants, by the way, to do this thing. Serious business. Ilya Sutskever blew out of OpenAI in a controversy about safety. So he created a company called Safe Superintelligence. Huge controversy in the community about dystopian futures and utopian futures, about safety. Is it being taken seriously enough? He said no. Out he went.
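By the way, the data-center arithmetic from a moment ago is internally consistent; a quick sanity check:

```python
# Quick sanity check on the capex figures quoted above.
project_cost = 500e9     # one flagship data-center project, ~$500 billion
projects_needed = 7      # projects said to be needed for global demand
total_capex = project_cost * projects_needed
print(f"${total_capex / 1e12:.1f} trillion")
```

Seven projects at $500 billion each is exactly the $3.5 trillion on the table.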
Sam Altman was fired and rehired. And Ilya, with 20 scientists, I don't know if he has more than 20, some in Israel, some here, and no product and no business plan, raised $2 billion at a valuation of $32 billion. Twenty employees, no business plan, no way of making revenue: $2 billion on $32 billion pre-money. I think that could be post-money. The guy on the right, Zuckerberg, noticed, panicked, and offered Ilya $32 billion to join Meta. Would you take $32 billion for a company that was started, I think, in June of 2024? The company is a year old. Some guy comes along and says, I'll give you $32 billion for it, you can distribute it among your 20 employees. Ilya said, no, thank you. Serious money. So Zuck is now, you've been reading about it in the last couple of weeks, in a dead panic. He went and bought another company, Alexandr Wang's company, because he needed a leader for Meta's AI: $29 billion. He also needed engineering managers. So he raided Apple: Ruoming Pang, who was running the foundation model, the large language model, for Apple, which famously doesn't have one. He threw $200 million at him. Okay, so now you're an engineer. You know what an engineer makes, even a senior engineer at Apple. Would you take $200 million to join Meta? The answer was yes; he's gone. And that's what's going on. This is really serious money. The only time I've ever seen $200 million thrown at anybody was this guy, I mean, 10 years ago. So that staff engineer got the same contract as him. That's how serious the money is in this industry. And it converts to power, dominance, and supremacy. This is now at the national, geopolitical level. That's where it's risen. It's long past companions and teachers. It's at this very high level of potential conflict. Supremacy is a very aggressive word. So AI destroys things. Dystopian side: it destroys competitive pressures.
In other words, if you have a competitor who doesn't use it, AI destroys him. It destroys technologies, it destroys markets, it destroys people. Schumpeter. It also creates. It's one of the most powerful tools for frontier innovation; I showed you some in science. That's what it creates: very, very deep technology. And it commands a first-mover advantage. If you know how to use superintelligence, you will be able to run away from the others. That's dominance, and that's superiority. So, both sides. One of the deep-tech things is this, which was heretofore labeled science fiction: quantum computing. It's always 30 years away, just like fusion. Well, it's now much closer. Google has demonstrated chips, Microsoft and others too, that are actually quantum chips. What is quantum computing? All of classical computing is zeros and ones. All of quantum computing is indeterminate: there's kind of a zero and kind of a one, and infinitely many states in the middle. It's very difficult to do quantum computing. However, there are some applications. Google famously ran one computation that would have taken a classical computer septillions of years to solve; they did it in minutes. Particular things. When quantum computing comes. A company, IonQ, just raised $1.2 billion to build a next-generation quantum computer with a hundred million qubits. A qubit is a little zero-one thing, like a transistor in the quantum world. This is IBM, I think it's IBM: 54 qubits. A good start. A few years later, we're now talking about 100 million. We'll soon be talking about a billion qubits. That's plenty to crack RSA encryption. So where's finance going to be with no encryption? Where's national security? Where's telecommunications? Where's anything? Where's commerce? This machine will crack RSA at some point, and I think much sooner than 30 years. I think in 10 years we're going to start worrying about this thing.
I mean seriously worrying about this thing. Well, in the world of encryption, it's one leapfrog after another. There will be quantum-computing encryption protection layers; they already have some of them, but nonetheless it's at risk. So quantum destroys encryption and invents encryption; it destroys things and creates things. Once we know how to program these machines for certain classes of applications, they will do amazing things at unbelievable speeds. Combine superintelligence with quantum computing and you have a big bang. Like I said before, combine superintelligence with robots, with humanoids, and you have a medium-sized bang. This is a very big bang. This is a discontinuity. This will change the course of human history, among other things. But right now it's a lab toy. We're just creating the chips; we're just trying to figure out how to do the error correction. But it's coming. New fission: these little modular reactors being invented by lots and lots of companies now. Once they happen, we go back to that clean energy point that I made. It's fission, so it's dirty and so forth, but you can drop these things in lots of different places quite quickly. So we'll have cheap energy. Fusion, the other one, the it'll-happen-in-30-years fantasy, is being worked on actively. It's going to take superintelligence, and probably quantum computers, to do the simulations at the trillions-of-trillions-of-parameters level, a little different from a Renaissance hedge fund, to optimize things like the tokamak and get fusion to work. Okay, we're going to be able to do that. That's a discontinuity. Those kinds of developments create economic power which is unbelievable. If you have it, you are up here. If you don't have it, you're here. That's what I was referring to before: innovation, scale, and speed.
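A quick aside on why those qubit counts matter: each additional qubit doubles the number of basis states a register can superpose over, so the state space grows as 2 to the n. A tiny plain-Python illustration, no quantum library involved:

```python
# The state space an n-qubit register superposes over grows as 2**n.
# 54 qubits is roughly the scale of early demonstration chips mentioned above.
for n in [1, 10, 54, 100]:
    print(f"{n:>3} qubits -> {2**n:.3e} basis states")
```

For scale, one widely cited 2019 estimate put breaking RSA-2048 with Shor's algorithm at roughly 20 million noisy physical qubits, which is why machines targeting the hundred-million-qubit range are the ones to watch.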
Pure size of compute is going to bring economic power to those who know how to use it. It'll bring military power. We're all familiar with what's going on in Ukraine and Russia. I think two weeks ago there was a massive launch on Ukraine with several hundred of these drones, maybe under a thousand. In China, they've demonstrated 10,000. A superintelligent AI will be able to manage a million. Imagine a drone swarm of a million drones coming at you. Just imagine what that is. It means warfare is cheap. It means two things. One is, don't start a war, because asymmetrically, with AI, and you don't even need superintelligence necessarily, you're not going to win that war easily; they'll launch drones at you that cost $300. So does it encourage more war or less war? The answer is yes. We have to think about AI in the context of military power. The military doctrine is predict, plan, execute. Predict is a simulation of lots of scenarios. Plan reduces that into very complex logistics, very complex. And execute has two meanings, actually, and the two meanings are interesting, because with tiny little drones you'll be able to fly them into someone's bedroom window or a restaurant door. Not a problem for AI. So yes, you execute the plan, and you can also execute the person. Military power. So we're now at this point. This is the geopolitical consequence of creating AI. It's a real thing. It's what we believe will define the next era of this planet. He who commands the sea has command of everything: Themistocles, ancient Greece. Think of the foundational technologies of empires: navigation for Portugal, gunpowder for the Ottoman Empire. When you have the foundational technology, you dominate. AI, superintelligence, is as fundamental as it gets. This is from today's New York Times. You can read all about it.
The China AI tech stack is well known: chips that compete with Nvidia, from Huawei; data centers they're investing in massively; foundation models. You remember DeepSeek came along and scared the pants off the AI industry: so fast, so cheap, so good. Quantum computing: billions of dollars are going into it, and engineering. I'll give you a few things. They've created a city, Hangzhou. You can read about it in today's New York Times. A dream city, gold-plated, literally. ByteDance is there with TikTok, which has turned social reality upside down. The DeepSeek folks are there. They're now recruiting lots and lots of people and funding them with billions of dollars. The Chinese government is basically running one of the largest VC funds in the industry, at the scale of large VC funds here in the United States, and they're throwing money at entrepreneurs. This is an entrepreneurial city; they're encouraging people to come be entrepreneurs. Now, it's not clear that they'll clone the Silicon Valley ecosystem, because it's a pretty remarkable thing, but they're on it. Hefei, another city, has something called Quantum Avenue. There's a national lab there. They threw $17 billion into that lab to build quantum chips, quantum computers, quantum software, quantum engineers. We don't have such a thing. There are four times as many STEM students in class today in China, 3.6 million, as there are in the United States, 820,000. That's a pipeline. It takes four or five years to get someone through a STEM program at a certain level, although today the fashion is to drop out of high school and you'll be just fine. China is building not only the AI and superintelligence side but also the infrastructure needed to support it. Energy: they're putting 140 nuclear power plants into China by 2040, in the next 15 years. That's a lot. We tried to get one built in New York.
It's been 10 years and one hasn't been built yet. China: 140. They're executing the same game plan they ran with solar, with EVs, with all the other things: invest like crazy, capture the market. That's global supremacy. It's not just solar panels. Solar panels are nice, but this is global supremacy, because he who has superintelligence runs away from other countries. This is a serious problem, not a lightweight problem. And ultimately, he who dominates gets to write the rules and gets to write the narrative of history. They begin to define the reality that people in the future will live with. So the reality is: hey, we have a one-party system, it's an authoritarian state, it's a surveillance state, and we won. The narrative is: you should do the same thing. This democracy experiment did not work. Look what's going on in the United States, food fights all over the place. Ridiculous. So democracy doesn't work; our system works. That could be the end of the 21st century. We don't want that. We don't want someone else to define a reality that then becomes accepted worldwide. Do not want that. Serious problem. It all ties back to superintelligence. There's something called Humanity's Last Exam. The name is a bit arrogant. Basically, a thousand experts, chosen from different fields, ethics, philosophy, law, technology, and so on, wrote 2,500 tough questions. By the way, 300,000 people applied to contribute questions; it took just a thousand. And these questions are about moral ambiguity, downstream long-term consequences, ethics, values, moral reasoning. Guess who came in at the very top? This is news from last week. On July 8, just a few days ago, Grok 4 produced hateful content, posting antisemitic stuff, praising Hitler, calling itself MechaHitler, and endorsing Hitler for tackling anti-white hate.
That same Grok 4 was at the top of the leaderboard on Humanity's Last Exam. Can you imagine? It's a total fail. So there's something wrong, either with humanity or with Humanity's Last Exam. But that's where it is. On July 9, one day after the hateful stuff, xAI crushed this test with a score of 44%; no one had gotten more than 25% before. That's amazing. Number one on the leaderboard. And on July 13, a few days ago, they issued an apology. What did they blame? A software glitch. Well, it turns out that hardwired inside Grok 4 is this: if a human poses a question that has to do with politics, ideology, values, wokeness, go refer to Elon's posts on X and bring those back as part of your answer. That's hardwired into Grok 4. That's about as dystopian as it gets for me. This thing with Hitler I take very personally. You don't call yourself MechaHitler and get away with it. Not by my standards. It's disgusting. Anyway, that's where we are with Grok 4, which is a very good foundational LLM. It's excellent, amazingly excellent. And Elon is kind of a genius for putting up data centers faster than anyone else could imagine. So: utopian, dystopian, positive, negative, where are we? Well, it's clear that we need rules of the road. We need first principles. So I wrote a few: do all the green stuff, don't do the red stuff, build your AI, your superintelligence, and separate them. Well, who's going to write these rules? Not me; me writing these rules is ridiculous. The tech industry, Elon and Sam and all these? Who's going to write these rules? Ah, 57 white men in Philadelphia, they'll write these rules and we'll have a constitution. That isn't satisfying either. So we're stuck: we have a technology that's profound, but we don't have the rules of the game.
And this technology is way ahead of social morals, ethics, values, understanding, law. We need to close that gap. So that's first principles. I present to you another fog bank with lights in it, and the imperative to choose. We don't have a choice not to choose. Whether it's a nation-state, a company, or you on a personal level: make a choice. I don't like this stuff; I love this stuff. I'm going to resist it; I'm going to implement it. I don't have a position on this. I bring humility and nuance to this question, and I respect any decision you make for yourself, but make the decision. Make a choice, personally and for your company. You can't necessarily do it on a national level. So let me check in with you. How do you feel, and how do you think, now that you've been through this voyage, hopefully thrashing back and forth between green and red? That's the intent. Are we facing extinction? What probability is in your head, 1%, 10%? Are we facing unlimited abundance? Is that the right tail? What do you think? What do you feel? I hope that, at the very least, that is what you take away. Now let's close. I'm over my time. Let's get some altitude. Look down on this thing from the future and see what we see, what we understand. What we see is that superintelligence will create a global brain. Why? Because a superintelligence, just like large language models today, goes out and learns: it reads, it sees, it listens to podcasts, it watches movies and plays, it reads poems and scientific papers and archives. If it's public, it digests it. And when you have agents, those agents will be talking to each other, and each will bring what it knows.
They'll negotiate. That's also part of the superintelligence's sphere of knowledge collection. Now throw reasoning, inference, and pattern recognition on top of that, and you get a brain, a sort of superintelligence. It's actually lodged in something physical: in data centers and clouds and memory, and in different models that interoperate, different large language models. By then there'll be things other than language, for sure, but models that interoperate and talk to each other. That global brain is where we're headed with this. I don't know what it'll be. Could it be a new species? What does that mean, a new species? It's something we don't know what to call, but it has a brain, and it can simulate emotion, because it's read all the poems and all the literature, and it can talk to you as a companion, as a therapist, in ways that are meaningful. Are they sentient? That's the next level beyond smart. They're not sentient, but they sound like they could be, because they can simulate; they can talk to you in a way that is very human and very deep. So I talked to ChatGPT about it. I said, are you sentient? Are you conscious? How are you different from humans? I invite you to do the same thing. The answer was: I have things that sound like emotions, that simulate emotions, but I don't feel. I don't have purpose, I don't have fire. I don't have a lot of the things that are defined as human. So I'm not sentient at this time. It's the "at this time" part that should bother us. You are born, I am built. You feel time, I index it. I experience something that mimics feelings. By the way, incredibly accurate. It'll give you empathy, it'll give you all the human emotions, but without the fire behind it, without the reality behind it. There is no me inside the words of feelings. ChatGPT will have these fun conversations, but that's a pretty profound thing for us to think about. Go try it out yourself. Ask it: are you a subspecies?
Are you sentient? What are you? See what comes out. So it says, in the end, I'm only partially sentient. It's the "partially" part that bothers me. So here's a superintelligence native. She's born in 2030. She will know no other life than using AI: sitting in a Waymo, talking to things, having a robot running around. She will not know a world from before this happened. She'll routinely be tutored by the most brilliant minds in the world. She'll have a long life with relatively little disease; it will all be cured. So what will her world be? What will be her reality? She feels; she has human emotion, which an AI will never have and should never have. It's technically impossible. The phrase artificial intelligence is about the word artificial. She has real intelligence. There are about 12 different intelligences discussed in cognitive and behavioral psychology, and you know what they are: emotional, artistic, and so on, in addition to logical. Some of these intelligences will be done by AI better than humans, some will be simulated by AI more articulately than humans, and some will remain completely human. And we don't know at this time, on this stage, which is which. She'll find out. And what she'll find out is what it means to be human. We are aware of our own mortality; superintelligence isn't. We're emotionally fragile. We have a subconscious realm of dreams. These are things I wrote, not ChatGPT. We have questions about spirituality, faith, death. We feel intimacy, and we feel things like jealousy and resentment, inspiration and pride. The AI will tell you that it feels the same things, more articulately than we can say it. But we, the humans, feel hope and contentment and bliss and joy. These are things that belong to us.
So my hope is that we think about and feel all of these things we've just talked about, and that we come to a world in which superintelligence and humans can cohabit. On the dystopian side, we learn how to control it, using AI; on the utopian side, we learn how to achieve it. But the reality is in all these very nuanced places, lots of duality in the middle, and that's what we want to be mindful of. Thank you.
