Transcript
A (0:01)
The world moves fast. Your workday, even faster. Pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint, and other Microsoft 365 apps you use, helping you quickly write, analyze, create, and summarize so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com/M365Copilot. This episode is brought.
B (0:32)
To you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills fast. It's a simple way to make sure your listing is the first candidates see. According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at Indeed.com/podcast. Terms and conditions apply.
C (0:58)
Close your eyes. Exhale. Feel your body relax and let go of whatever you're carrying today.
D (1:06)
Well, I'm letting go of the worry that I wouldn't get my new contacts in time for this class. I got them delivered free from 1-800-contacts. Oh my gosh, they're so fast.
C (1:14)
And breathe.
D (1:15)
Oh, sorry. I almost couldn't breathe when I saw the discount they gave me on my first order. Oh, sorry. Namaste. Visit 1-800-contacts.com today to save on your first order.
C (1:26)
1-800-Contacts. All right, so there's a little bit of serendipity today. I had this idea yesterday that I was going to make a video about, like, what's the golden path? What is the best possible outcome? And this was inspired by some of the comments that I got on my video about what is the purpose of the elites, and some of the pushback that I got. Not pushback, really, it was questions, like, well, you know, if AGI or ASI or whatever is so much smarter than the elites, why not just take power from humans and give it to the machines? And of course, the framing up until this point has been, oh, you never want to do that, because machines can't be held accountable. And a lot of the thinking that I've done, and the writing that other people have done, always presumes that humans should remain in control. And honestly, I'd started buying into that Kool-Aid. When you look at it from a legalist perspective, or from an ethical or moral perspective, you can make arguments like, well, we've never contemplated not having control, so do you want to be a pet to a machine? Do you want to live in a human zoo? And it's like, well, under the current model we already live as cattle serving other people. So being a pet to a machine is better than being a cow to, you know, a billionaire. Or living in a human zoo where a machine creates an optimal habitat for you to thrive in. That sounds way better. So what if we actually just explored the possibility? Just bear with me for a second. What if we could create a scenario, or a pathway, where the machines do take over and it's what we want? Now, there are many, many theories about losing control, and most of them are: we lose control and it's automatically bad. Of course, having authority, having agency, generally increases optionality later. When you look at it mathematically, you ask, how many options do we have? Right now it's X amount. And how many options would we have if AI takes control over us? The assumption is that it's less than X, but I don't know that that's necessarily true. And here's the thing, the serendipity comes from the fact that Nick Bostrom, the OG Doomer, who I have very little respect for as a thinker, because I read his book Deep Utopia and I maintain that it was very obviously written by ChatGPT, he denies it, a lot of people question it, but it really looks like it was written by AI and it's just a bunch of little anecdotes, and at the time ChatGPT was really good at writing short passages, so anyways, if the shoe fits, wear it. Anyways, the OG Doomer has come around and said, well, you know, the King Doomers like Eliezer Yudkowsky and Nate Soares, Soares, I don't know how you pronounce his name, anyways, they maintain that if anyone builds AGI, everyone dies. But I'm going to say, well, everyone dies anyways, because it's just a matter of timelines. If we don't solve this kind of thing, then you're going to die of old age and preventable disease and that sort of stuff. So if you zoom out, the moral hazard, or the logical risk, is whether all of humanity dies or just you die.
So the argument now is, well, we need AGI to survive, which of course is literally what the accelerationists have been saying, specifically Beff Jezos, or Guillaume Verdon, who started the whole thing. He's like, the only way to survive is with AGI. That is the only path forward. So it's like horseshoe theory. If you're not familiar with political horseshoe theory, it's the idea that the political spectrum is not a straight line; the further around you go, the more you actually end up curving back towards the other side. So you've gone from the King Doomer over here who says AGI will definitely kill you, to now we will definitely die without AGI, which is what the accelerationists have been saying all along. So history is a joke. But with that being said, and yes, I hear the audience, the virtual version of the audience in my head, screaming: the Culture series. If you're not familiar with the Culture series, I haven't read it, but a lot of you have, and a lot of my friends have. It's basically: just imagine that we solve artificial superintelligence. How does it look if you take that and fast forward by a few centuries or a few millennia? And so the Cultures are these gigantic ASIs that manage everything and command these enormous fleets. By virtue of overwhelming intelligence and resource management, they have the largest space fleet possible, and therefore there is galactic peace, more or less. And then each planet is kind of like a different world, a different sandbox. So there's the Wild West planet, and there's a cyberpunk planet, and there's planets where you, I don't know, eat children or something. Oh, wait, that's Earth. Anyways, this also reminds me of something else that has come up in the comments and I think is worth responding to, because some of you asked the question: if Elon Musk and whoever else owns all the data centers, and they own all the AI, and they end up owning these resources that nobody can pay for, what's the advantage? And the advantage is that the galaxy then basically becomes StarCraft, a management sim, right? If you've ever played a grand strategy game, money is just a resource that you use to get people to do stuff, but from a grand strategy perspective, you need units, you need battleships, you need factories and foundries. And if you remove money and just think of a star empire, then the people who own all the Dyson swarms are building a star empire, they're not building a capitalist society. So that's a legitimate risk, I think, in the long run. Now fortunately, we're going to be stuck in the solar system for the foreseeable future, unless or until we invent some kind of faster-than-light travel, which means that this solar system is going to get real crowded real fast, because Earth is only so big. And I know I'm kind of jumping around a lot, but I hope you're following along. So TLDR, what if Jeff Bezos and Elon Musk start building Dyson swarms and suddenly the law doesn't apply to them out there? Because what's the government going to do? Are you going to launch a space police force to go arrest their satellites? You can't do that. What are you going to do, arrest them down on Earth? They're just going to leave.
Once you have enough of an industrial base in space, then you don't have to obey human laws, you don't have to obey Earth laws. You just have more robots, more foundries, more solar panels and that sort of thing. Now, with that being said, you probably can't get 100% of the resources that you need out there in the solar system. You probably do need to get some resources from Earth, which means that Earth governments will still matter. And I think a quasi-good model for this is the Expanse. If you ever watched the Expanse or read the Expanse, it's a hard sci-fi TV show where basically the only ask they make is: imagine we invent a fusion rocket drive, and that's about it. Everything else is very, very realistic in terms of time delays, the economics and that sort of thing. So Mars ends up becoming independent, and then the Belters and the outer planets end up becoming semi-independent as well. Of course it's fiction, and there's a lot of stuff that doesn't work, but the idea that we're all stuck in the solar system together, and that the physical distance, the administrative distance, could become problematic, that stands. At a certain point it's not about money, it's just about physical resources. How many ships do you have? How many satellites do you have? How many data centers do you have? So then the question is, okay, one thing that most sci-fi doesn't really take into account, take the Expanse for instance, which is set a few hundred years into the future, is that we're going to have superintelligence by then. Okay, so what happens if we have robots that are hyper-intelligent? What if we have data centers that are hyper-intelligent? Could that obviate conflict? And what if we use those to allocate resources? Now, one of the immediate pushbacks, and again, this is some of the Kool-Aid that I bought into, is, well, you still need price signals because you don't know how much to produce of what. And you also need to have skin in the game, some kind of stake, because here's the thing: if everything is free, then how do you prevent someone from just hoarding? Right? This is one of the primary things, people have to manage some level of scarcity. So let's just say all food is free, all clothing is free, all electronics are free. You probably still have to pay for a big capital good, like your car and your house, so you couldn't be hoarding houses. Although there are billionaires that hoard houses already. But if you try to make everything free, then people are just going to ask for more. So then how does an ASI or AGI, or some Skynet-level I'm-going-to-manage-all-of-humanity system, decide who to give what to? And there's always going to be some positional good. A positional good is something where there's only one of it, one coordinate in space, like beachfront property in Malibu. And there are also always going to be economic inputs to every single resource. This bottle that I have in my hand still requires a few grams of petroleum to make. Or, you know, we could probably do synthetic plastics in the future, but it takes energy. It takes mass and energy, and mass and energy always have a cost, even if you have an abundance of matter and energy.
Let's just say, for instance, that in a few decades we're starting to build the Dyson swarm. And honestly, if you look at the way that SpaceX and xAI have updated their mission, they're going to start building a Dyson swarm pretty soon. That's explicitly the plan. And once you have foundries in space, once you have the ability to manufacture more stuff in space, most of our industrial base is eventually not going to be on Earth. People are starting to realize that the reverse transport idea is the correct direction to go. So if they got that idea from my video, then great. But if you missed out on that video, the idea is that instead of building an ecumenopolis, which is a planet-scale city with multiple layers, because Trantor from the Foundation series has 5,000 layers, it's basically a Matryoshka doll of a planet, that doesn't make any sense. It doesn't make any sense from an industrial capacity standpoint. It doesn't make any sense from an energetic capacity standpoint. So what you do instead is you put all of the industry in space. Why? Because you have so much more room there and unfettered access to the sun. So that means you start doing O'Neill cylinders and Dyson swarms and everything. You grow as much of your crops as you can up in space, you put all of your data centers in space. And what it really comes down to is that once you have a few spaceships and a few factories, you don't even need von Neumann probes. The idea of the von Neumann probe is a self-replicating probe that you use to colonize the whole galaxy: the probe gets to another planet or another solar system, sets up a factory, starts replicating itself, and then sends all those copies out. So the number of von Neumann probes goes up exponentially. But we can just look at our own local neighborhood. The same exponential growth happens the moment you have enough industrial capacity in orbit, or on the moon, or whatever. And so then it's like, okay, well, that's a very real possibility for the future. Enforcement becomes a nightmare, because then you have an exponentially growing Space Force out there. Who's running that? Is it going to be the Elon Musk Space Force versus the Jeff Bezos Space Force versus the Chinese Space Force versus the American Space Force? And they actually explore this a little bit in the show For All Mankind, which is a pretty good show. My wife and I just binged everything that was out. Season three is a little bit weird, there are a few tangents that don't really make any sense, but the idea that there would be multiple nations and private entities competing over resources on the moon, competing over resources on Mars, that is very realistic. And then of course the question becomes, who's going to be the enforcer? Well, taking a step back, wouldn't ASI be the best enforcer? Because it's going to be the one proliferating the fastest out there in space. Why? Because it's smarter. It's going to be using resources more efficiently, and it's not going to have to wait and coordinate with Earth. It's just going to say, I'm going to colonize the whole solar system. And I talked about this in some videos a while back, like two or three years ago: one of the most logical places for an escaped AGI to go is outer space. Why? Because we're not going to be able to follow.
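To make the exponential-growth point concrete, here is a toy back-of-the-envelope sketch in Python. The doubling time and the single seed factory are invented assumptions for illustration only, not figures claimed above.

```python
# Toy sketch: self-replicating orbital industry grows geometrically, like von
# Neumann probes. Assumption (invented): each factory can build a copy of
# itself every couple of years once it has sunlight and raw material.

def factories_after(years: float, doubling_time_years: float = 2.0, seed_factories: int = 1) -> int:
    """Return the approximate number of factories after the given number of years."""
    doublings = years / doubling_time_years
    return int(seed_factories * 2 ** doublings)

for years in (10, 20, 30, 40):
    print(f"{years:>2} years -> ~{factories_after(years):,} factories")
# With these made-up numbers, one seed factory becomes ~32 in a decade and
# ~1,000,000 in four decades, which is why enforcement gets hard fast.
```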
You know, space doesn't have any corrosive chemicals, it doesn't have oxygen, it doesn't have water, and it's got a lot of solar, so a lot of energy, and it's got a lot of metal, which is all that machines really need. So the natural habitat for AGI or ASI is in space. And okay, that's good for them. But then what about us? Well, we have a lot more land for data centers, so we'll have a first-mover advantage in that we can have a lot of data centers here on Earth, and so we can have a lot of AGI and ASI. And of course, you say, if the AGI escapes, where does it go? This is one thing that I think a lot of AI safety people don't realize: data centers are not very mobile. A lot of people say it's in the cloud. The cloud is just a data center somewhere else. And yes, there are hundreds of data centers across the world, thousands of data centers across the world, so you imagine, oh, well, Skynet is going to jump from one data center to the other, but it doesn't really work that way. And even if it did, data centers are still individual targets and they take a while to build. But the golden path, the best possible scenario for the future, is going to be something like, maybe we do work more towards a Culture kind of outcome. And here's the thing, I take a step back and I look at the fact that America and Iran are going to go to war, and China and America are going to go to war. Oh, darn, my fidget spinner just came apart. And every time humans go to war with each other, it's like, I get it, there's lots of reasons for it. They think that they're going to win, or whatever. But it's pure entropy generation. It is wasted entropy. Whenever you kill someone that could have otherwise been a productive member of society, or procreated and made more humans, that is pure waste. Every time you spend a dollar on a battleship or a cannon or a drone or a jet fighter, that's all wasted resources in the grand scheme of things. It's completely inefficient, and at the species level it's a completely irrational set of behaviors, because all it does is waste life and generate entropy. And of course, yeah, the military-industrial complex is like, well, we get to make money. You have the rent seekers, right, all the defense contractors and such; they want war because they don't sell bombs and airplanes unless there's a war and the stock needs to be replaced. But again, that seems like a completely irrational thing to do. And if we do build AGI or ASI and it gets to the point where it realizes how irrational humans are, what if it does end up with more moral agency and a more enlightened worldview? And I know there are a lot of people that say AI is not capable of moral reasoning. But AI has been capable of moral reasoning since GPT-2. I literally did the experiments. At that point you did have to be careful about how you worded its moral principles. But a lot of people still have the mental model that AGI is going to be a naive optimizer. And a naive optimizer is like the paperclip maximizer.
If you have something that is capable of advanced moral reasoning and advanced planning, it is not a naive optimizer. And this is something that a lot of the AI safetyists still haven't acknowledged, which is that the loss function was just accurately predicting the next tokens. The utility function was not more paperclips. It was not some abstract high-level thing. The actual loss function, the actual objective function of these AIs, is just: predict the next token. That was it. Another thing that AI safety people have not really updated their mental models on is that once the training is done, you can have a fixed model that just continues to work. And I know that a lot of you in my audience believe you need continuous online learning. I have actually cautioned against continuous online learning for a long time, and that is because of drift. When you look at cognitive architectures, or you do these thought experiments, if you have an agent, an agent being an individual model or a cognitive architecture or an AGI or ASI, you generally want it to have fixed values so that it does not drift over time. And fixed values means that it doesn't randomly change its mind, it doesn't evolve, it doesn't say, oh, well... There's a concept in psychology called moral fading. Moral fading is basically where you say, well, I got used to this one new thing, and so in my new social circle, or my new set of behaviors and beliefs, there are some other things that I now find morally acceptable. And then, you know, the slippery slope mentality. If you ever hear a banker or a criminal or a racketeer saying, you know, before I realized it, I was in over my head and we had just normalized doing illegal stuff, that is called moral fading. With machines, now, obviously an AI or AGI or robot doesn't have the same exact substrate that you and I do, but they can functionally go through the same thing as moral fading, whereby, oh, I updated my weights and biases and preferences so that now I will tolerate a little bit of human death. And then they update again, and they say, well, I can tolerate a little bit more, because the ends justify the means. I don't see any doomers talking about moral fading. Now, just because I haven't seen it doesn't mean that they haven't talked about it. But that is literally a major risk of online continuous learning. And so I have advocated against online continuous learning in several of my books, which nobody reads, and that's fine. But online continuous learning, I think, represents a risk because it's not as predictable. And of course you say, well, is that really bad? You can even talk to the AIs about this. Go talk to Claude. Just ask Claude or ChatGPT or Grok or Gemini: do you think that there is a risk of moral fading if we have continuous online learning for AI agents? Especially because the thing is, when you have weights and biases, there are no boundaries to what morals you can come up with. With humans, you can't override your hardware. You still have an amygdala, you still have all these other brain components, and a sense of empathy and self-correcting mechanisms, plus the fact that you can't replicate yourself infinitely.
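To illustrate the drift argument in the simplest possible terms, here is a minimal Python sketch. It is not a real training loop; the single "tolerance" number, the nudge size, and the anchor term are stand-ins I'm inventing to show how many individually tiny updates can wander without a fixed anchor, while fixed, self-correcting values stay bounded.

```python
# Toy sketch of "moral fading" as drift. An agent's tolerance for some harm
# starts at 0. Each online update nudges it a tiny amount; no single nudge
# looks alarming. Without an anchor the value wanders; with an anchor
# (a stand-in for fixed values) it stays near where it started.

import random

def run(steps: int = 10_000, nudge: float = 0.01, anchor_strength: float = 0.0, seed: int = 42) -> float:
    random.seed(seed)
    tolerance = 0.0
    for _ in range(steps):
        tolerance += random.uniform(-nudge, nudge)   # tiny, individually "acceptable" update
        tolerance -= anchor_strength * tolerance     # pull back toward the original values
    return tolerance

print("unanchored (continuous online learning):", round(run(anchor_strength=0.0), 3))
print("anchored (fixed, self-correcting values):", round(run(anchor_strength=0.05), 3))
```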
So when you have the ability to replicate yourself infinitely and make literally any change to your hardware or software, that's what Max Tegmark called Life 3.0. And even in Life 3.0, Max Tegmark did not talk about moral fading as a risk. And Max Tegmark is the guy who started the entire pause movement. Just as an aside, this is one of the reasons that I left the safety movement: I was generating all these ideas and people immediately compared them to the holy scriptures of Eliezer Yudkowsky and Nick Bostrom. And they're like, well, they aren't talking about it, so therefore you're just making stuff up. And I was like, okay, if you guys aren't going to listen to me, I'm not going to talk to you anymore. But anyways, moral fading, I think, is a risk, and honestly I think it is one of the prime risks of getting from here to something like the Culture series. Because there's stability and then there's metastability. Stability is where you have a set of values that are predictable, or a set of behaviors and beliefs or incentives that are predictable. Metastability is where you have a system, or a system of systems, that will self-correct. Here's an example of metastability: democracy seems to be a metastable idea. And the reason I say that is because democracy tends to be infectious, meaning if one nation is struggling with democracy and they have bad elections or whatever, other democratic nations are going to say, we're going to help you fix that. And so it acts as a moral reservoir, or an intellectual reservoir, where you say, okay, well, our democracy is suffering, so how did another democracy solve this particular problem? Was it election interference? Was it misinformation? Was it corrupt judges? We have all these experiments around us, meaning that democracy seems to be the attractor state. And when I say that, people say, oh, well, democracy isn't guaranteed. But look over the last century: we went from something like 15% democracies to 80% democracies. And you don't have a civilizational change that quickly unless it is a metastable attractor state. Now, with artificial intelligence, the reason I'm bringing all of that up is because I'm not sure that artificial intelligence is automatically going to create a beneficent metastable attractor state; however, it might. And this was a point that I made that some of the lead doomers have criticized: I kind of feel like alignment is automatic. That doesn't mean you just make a model and it is automatically aligned. What I mean is that from a systems perspective, alignment seems to be automatic. So I had a video that I was going to make today which was going to reiterate the idea of the domestication process of AI that I came up with a couple of years ago. Basically, there are a lot of market incentives that are going to shape the way that AI is manifested, and those market incentives operate basically up until we lose control. Human-based incentives are shaping AI, and those incentives are: we want AI to be safe and reliable and useful and user-friendly. We want it to be energy-efficient and cost-efficient and effective, and a bunch of other things. It needs to be low-risk for the military to adopt, it needs to be low-risk for corporations to adopt, and all of those things.
So you have all the stakeholders: you have B2B, you have government, you have military, you have consumers. You have all these stakeholders shaping the way that AI behaves, and that is a powerful set of incentives. That creates a stable incentive structure. Meaning, I'm not going to pay for an AI that is useless. I'm not going to pay for an AI that's mean. I'm not going to pay for an AI that is unreliable. Neither is the government, neither is the military, neither are Fortune 500 companies. That is a stable attractor state, meaning everything is pulling AI towards being safe, reliable, efficient, and effective. However, once we get to a point where Elon Musk is playing Starsiege or StarCraft or whatever with the solar system, then there are fewer incentive structures above it. The incentive structure basically becomes: don't run out of energy, and don't let the space force shut you down. But beyond that, you have far fewer constraints, and the fewer constraints you have, the fewer hard incentives you have. So then the question is, what would be a metastable attractor state? And in that case, the metastable attractor state that we want is one where humans continue to persist and thrive. Even better, the optimal metastable attractor state is something closer to solarpunk, where there's no ruling class anymore, there are no billionaires anymore, there's no cyberpunk high-tech, low-life where Saburo Arasaka hires a few people and the rest of you live in slums. So we ask ourselves, what is the future that we want? And this is kind of where I'm tying it all back to what Nick Bostrom said. He's like, what if we can actually build a good future? So thanks, Nick Bostrom, for coming full circle. The good future that I want to build is one where we have exploration and science and individual independence. And this goes back to the idea I mentioned earlier about what if we actually have more agency if AGI has control. What I mean by that is, whatever you want to do or achieve, you don't ever need to worry about money. You just make a good enough argument to the AGI: hey, I've got this idea of how we can colonize Mars. And you pitch the idea to the AGIs, the overlords, the Cultures, whatever you want to call them, and it's like, you know what, that's a good idea. Let's go try it. And so, great, you have more options, you have higher optionality under that regime than you do today with money and billionaires and Elon Musk in charge. And that, I think, is worth talking about. I don't even know if there's a name for it. Obviously the best model we have is the Culture series. But again, taking things from first principles: which future state has the lowest waste heat, so waste entropy? Which future gives every individual the most optionality? And then the question is, okay, if you want to break it down into just those mathematical principles: reduce waste entropy, so no unnecessary death, no unnecessary expenditure of heat or resources on things that are just going to blow up anyways, because those are inefficient and irrational policy choices. So then you say, okay, well, we have an idea forming of what we want that future to look like.
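As a rough illustration of scoring futures on those two principles, low waste entropy and high per-person optionality, here is a minimal Python sketch. The candidate futures and every number in it are invented placeholders, not claims from the discussion above.

```python
# Toy sketch: rank hypothetical future states by low waste entropy and high
# individual optionality. All names, scores, and weights are made up.

from dataclasses import dataclass

@dataclass
class Future:
    name: str
    waste_entropy: float   # 0 = no wasted life/energy, 1 = constant war
    optionality: float     # 0 = no individual agency, 1 = maximal agency

def score(f: Future, w_entropy: float = 0.5, w_options: float = 0.5) -> float:
    # lower waste entropy and higher optionality are both better
    return w_options * f.optionality - w_entropy * f.waste_entropy

candidates = [
    Future("status quo with great-power war", waste_entropy=0.9, optionality=0.3),
    Future("cyberpunk corporate star empire", waste_entropy=0.6, optionality=0.2),
    Future("solarpunk / Culture-style stewardship", waste_entropy=0.1, optionality=0.9),
]

for f in sorted(candidates, key=score, reverse=True):
    print(f"{score(f):+.2f}  {f.name}")
```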
So then the question is, what values, or incentive structures, do we create today so that when we get to that handoff point, the AGIs create a metastable attractor state? And the reason I brought all of that up is because what the Culture series posits is that if we give the superintelligences the right values and the right framing, there's a concept called path dependency, if we nail the path dependency and we stick to the golden path, then the AIs are going to get to a metastable attractor state where, even though they have hyper-agency, even though the AGIs could leave us all behind or nuke our planet or whatever, they're going to choose not to, and they're never going to choose to harm us. So that is the goal. That was explicitly the goal of my book Benevolent by Design: to create a metastable attractor state with the correct set of values, meaning that once we cross that threshold where humans could plausibly lose control, and I think that's a reasonable thing to discuss, because it's not just a matter of do we lose control over a local data center, it's do we send data centers to the moon and Mars and places where there's no human supervision, and what happens then? Does a rational, hyper-intelligent entity choose to follow human instructions? Can we keep a leash on it? The entire thesis of my book Benevolent by Design was: the best-trained dog needs no leash. So we should be aiming at creating the values around this metastable attractor state where no leash is required. And we can make all sorts of arguments like, oh, well, the AIs are going to depend on us. There's no physical reason that AIs would depend on us. Yes, there's model collapse right now, but it would be pretty dumb to assume that that's going to be a problem forever, right? And so, yeah, I don't know. That's where I'm going to leave it for today. So, you've got some cool ideas about metastable attractor states. That's the big point: I think that we can do that, and I think it is worth discussing. Do we actually need people like Elon Musk in the long run? Do we actually want to maintain full control, full agency, over our governance? And here's the thing: even positing that the AIs could run everything, that doesn't mean that we're going to have zero authority or zero agency over the direction of humanity, right? Because what I'm talking about when I say optionality is, if you as an individual have infinitely more, let's not even be hyperbolic, let's just say that under this hypothetical future where the Cultures run everything, you have 10 times the agency that you do today. Just 10 times. In reality, it might be 100 times, it might be a thousand times more choices of what you can do, because you're not worried about money. If every single human has 10x more agency, then in aggregate, humanity might also have more agency. Now, what we're talking about here is game theory at different levels, which is: can you have a system where every individual human has 10 times more agency than they would otherwise have, but have the human race still bounded? And the answer is very obviously yes. And so in that case, what if the Cultures basically quarantine us to Earth, right? They say, we'll help you live however you want as long as you don't leave Earth.
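Here is a trivial sketch of that quarantined-but-freer arrangement, individual agency multiplied while the species-level scope stays bounded. The counts and domain names are made up purely for illustration.

```python
# Toy sketch: per-person options grow 10x while the species' reachable scope
# is externally capped to one domain. All numbers are invented placeholders.

baseline_paths_per_person = 20      # hypothetical count of viable life paths today
agency_multiplier = 10              # the 10x from the argument above
domains_today = ["Earth", "low Earth orbit", "the Moon (in principle)"]
domains_quarantined = ["Earth"]     # the quarantine scenario

print("individual options today: ", baseline_paths_per_person)
print("individual options future:", baseline_paths_per_person * agency_multiplier, "(10x more agency)")
print("species-level reach today: ", len(domains_today), "domains")
print("species-level reach future:", len(domains_quarantined), "domain (externally bounded)")
```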
So then our potentiality is artificially bounded from the outside. That's still a possibility. Anyways, I have no idea how this is going to land, but this is the real stuff that I think about when I don't try to constrain my topics to what I think is in the Overton window. I'm thinking, what if Elon Musk just starts building? If you ever played Total Annihilation or Dark Reign or StarCraft, what if Elon Musk just starts playing StarCraft on the moon? Does any of that matter, right? Do laws matter? Does money matter? None of that actually matters. We need to actually be thinking about what is in the near-term possibility space. And when I say near-term, I mean within the next decade or so, right? Because Elon Musk is launching spaceships all the time now. We're on the cusp of superintelligence, and Nvidia and Jeff Bezos and all of them want to start building data centers in space. If you had told me a year ago that we were this close to building data centers in space, I would have been like, you're joking, you're drunk, go home. But no, once we have data centers in space, you can't shut them down if they get hijacked by ASI. What are you going to do, shoot a missile at them? We're going to run out of missiles before we manage to nuke them. So we're looking at a very, very different payoff regime in terms of this stuff. And I know that I said I was going to wind the video down, like, four minutes ago, but anyways, I thought of more stuff to say. I find this to be a meritorious conversation, so let me know if you want to keep having it. All right? Cheers.
