
A
SpaceX is going public with a $2 trillion valuation. It's the beginning of the IPO wars.
B
The stepping stones are really, really clear now. Starlink gets you into space profitably, then the data centers, then you get to the moon. Refueling in space, then you get to Mars.
A
Anthropic overtakes OpenAI in terms of total ARR. That has got to hurt.
C
Superintelligence is not paying for the singularity.
B
They kind of bet the consumer would grow faster, sooner, but they just did it wrong.
A
Mythos, Anthropic's next flagship model. It's too powerful to release.
C
We've never seen a model like this before. We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We arrived at the future.
A
Now that's a moonshot.
B
Ladies and gentlemen,
A
everybody, welcome to Moonshots, your number one podcast in exponential technologies, everything going on in AI and the world around us. It's an extraordinary time to be alive. This podcast in particular is here to help you stay positive about the future, optimistic and hopeful. There's so much going on, it's really tough sometimes because the speed is so extraordinary. We want to give you an overview of what's happened in the last two weeks, because we've been offline. Why? Hate to say this, I actually took a vacation. I was in Morocco, in the Sahara, and it's great to be back here with my moonshot mates.
D
I'm glad to come off a ski slope to make this episode.
A
Well, I appreciate that. And we're going to catch up for everybody, all of our fans. We're catching up on an episode, so get ready for a flurry, because there's a lot that's been going on here with my extraordinary moonshot mates. Salim Ismail, straight off the ski slopes. Salim, where are you skiing today?
D
I'm in Kirkwood in Lake Tahoe. It was Milan's ski week off, so we took a few days and just got him out here.
A
DB2. Back in the saddle again.
B
Yep, back in the saddle. We have 200 speakers tomorrow at the MIT Media Lab, and today we had 60 startups pitching here in our first floor, and just a lot going on.
A
Amazing. I'm so sad not to be there with you and our resident genius, Alex Wissner-Gross. Alex, good to see you in your regular haunt.
C
Good to be back in the Commonwealth of Massachusetts.
A
Yeah, fantastic. All right. A lot is going on. We're going to be covering a whole host of subjects in the AI world, in the space world, in the abundance world. One of the segments we're going to be bringing to you on a regular basis is Proof of Abundance. We really want to keep you positive on what's going on in the world. Sometimes watching the Crisis News Network, what I call CNN, can get you down. Our job here is to keep you informed and bring you back up. But before we do that, Salim, looks like you made some news. Here you are on the cover of India Today. What's this all about?
D
So I was at the India Today Conclave. This is the big news magazine in India, and they had a bunch of speakers, and so the image is photoshopped, but you've got to understand the context and the surrealness of the world we live in today. So in front of me is Elon's mother. Next to me is Laura Loomer, the MAGA conspiracy-theorist person. Then there's the Israeli ambassador, and they've put the Iranian foreign minister next to him. They literally took me back into the speaker room and were saying, hey, come and meet these two guys. I'm like, I don't want to be anywhere near that; the Israeli guy is going to pull out a gun or something and there's going to be an assassination attempt. So that's the cover, and then a Bollywood star, you know, and a bunch of business people.
A
What do these people have in common?
D
I think it's a reflection of the insanity of the world that we live in today. That's what you can read from this cover; I think it's kind of a commentary on the madness of it all.
A
Like, I hope, I hope you represent the breakthroughs and not the breakdowns.
D
I did, I was, I was very much on the, hey, we've got major things happening and we need to kind of organize differently for it, etc. It was a great conversation.
A
All right, fantastic. Let's jump into our first story: SpaceX is going public with a $2 trillion valuation, and it's the beginning of the IPO wars. So let's catch everybody up. Hopefully you've been hearing this. Full disclosure: I'm an investor in SpaceX from the earliest days. So SpaceX is pricing itself right now at about a $2 trillion target valuation, raising $75 billion, the largest IPO of its kind. Interestingly enough, guys, one would think that the value of SpaceX is due to its rocket launches, or maybe recently the merger with xAI. But the vast majority of the value today is Starlink: 75 to 80% of the target valuation is due to Starlink, about 15 to 18% due to launch services, 5% for NASA services, and the xAI- and X-related revenues are all potential in the future. Dave, any thoughts?
B
Well, the stepping stones, Peter. You've been studying this ever since we were in school together, so a long time. But the stepping stones are really, really clear now. Starlink gets you into space profitably. Then the data centers get you 50-ton and then 100-ton launches profitably. Then you get to the Moon, then you start refueling in space, then you get to Mars. So it's just so cool to see how Elon lines up the dots on these things. And yeah, I don't think it's any great surprise. Starlink is incredibly successful. It kind of surprised everybody; no one else thought of that being the first move in the chess game. And of course, Elon's two steps ahead.
A
You know what's crazy? This game plan has been tried numerous times before. I was early in the space days; if you go back to the late 80s, early 90s, there was a company called Orbital Sciences. It was the hottest company in the launch business. It created the Pegasus and the Taurus launch vehicles. And because they had a launch capability, they launched something called Orbcomm, which was a small-satellite messaging service from low Earth orbit. And it was their vision to have that be the revenue driver. And they didn't pull it off. That was called a little LEO. Then we had the big LEOs, Iridium and Teledesic, and those didn't really make it. I mean, Iridium is kind of still around, but just barely.
B
Let me ask you, Peter, you know more about this than anybody. Let me ask you about the idea of a reusable rocket being the breakthrough, cutting 90, 95, and soon 99% of the cost. It seems so obvious in hindsight, but all these aerospace breakthroughs always seem obvious in hindsight, because once you're doing it a certain way, you're like, hey, it works. But it's never obvious looking forward. So why did it take so long? Is it the weight of the fuel coming back down, that everyone's like, yeah, you can't carry fuel up just to retro-rocket it back down, or what?
A
I mean, what's interesting is it's been the Holy Grail; people have talked about it for the longest time. Back in the day, McDonnell Douglas had a vehicle called the DC-X, which was the first vertical-takeoff, vertical-landing capability. It used an RL10 engine, I remember. And it was the great hope of getting there. People mistakenly think the cost of these vehicles is the fuel. It turns out the fuel for a rocket is on the order of a couple of percentage points of the cost. The liquid oxygen you can get out of the atmosphere, and the hydrogen or kerosene is basically av fuel. So it costs you less than a million dollars in fuel to launch a Falcon 9. And it's only now, with better materials, better control systems, and sheer scale, that this becomes possible. You couldn't actually build fully reusable vehicles until they got to a certain size and scale, which we have with Starship. So there you go. Dave, one other thing I want to ask you about. Check this out. I'm excited about the IPO, right, and it's going to be one of the largest events in financial history. But the 2025 revenues for SpaceX were about $16 billion, with $8 billion in profit. Pretty healthy margin, right? 50%. And it's expected to double in 2026. So imagine $16 billion in profits at a $1.75 trillion market cap. That means a price-to-revenue multiple of about 55 and a P/E ratio of about 109.
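The multiples Peter quotes can be sanity-checked with quick arithmetic. A minimal sketch, using only the figures stated in the conversation and assuming 2026 revenue and profit simply double 2025's $16B and $8B:

```python
# Sanity check on the valuation multiples quoted above (figures as stated
# on the pod; the 2026 numbers are assumed to be double the 2025 numbers).
market_cap = 1.75e12                      # $1.75T target market cap
rev_2026 = 2 * 16e9                       # projected 2026 revenue: $32B
profit_2026 = 2 * 8e9                     # projected 2026 profit: $16B

price_to_revenue = market_cap / rev_2026  # price-to-revenue multiple
pe_ratio = market_cap / profit_2026       # price-to-earnings ratio

print(round(price_to_revenue, 1), round(pe_ratio, 1))  # 54.7 109.4
```

The result, roughly 55x forward revenue and 109x forward earnings, matches the figures quoted in the conversation.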
E
What do you think?
B
You know what I think of that? I think it's all PEG ratio. It comes down to the growth rate, and a company growing 100% year over year is worth 100 times earnings, or actually more than that, 120, 130. So the question is, can you sustain that growth rate for five, six, seven years? If you look at Elon's projected launches per day, launches per week, and also his prediction that the global economy will grow 10x in 10 years, this is dirt cheap if any of those things are true. But if the growth stalls and it's growing 10% a year, then it's 10x overpriced. So you just have to believe the vision. I think at this stage, though, the Elon believers have invested in him over and over and over again and never had a loss. It can't go on forever; someone has to be the last guy holding the bag. But would I bet against him? No way. Never, ever. Everything he's saying, the math checks out. There's nothing fundamentally wrong in the math. Alex would blow smoke on that instantly if there were anything wrong in the math, but there's not. It's just a question of execution.
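The rule of thumb Dave is invoking is the PEG ratio (commonly attributed to Peter Lynch): PEG = (P/E) divided by the annual earnings growth rate in percent, with a PEG near 1 treated as "fairly valued." Under that assumption, the fair P/E scales directly with growth, which is his point about 100% growth justifying roughly 100x earnings. A minimal sketch of the heuristic (the function name is illustrative, not a standard API):

```python
# Peter Lynch's PEG heuristic: PEG = (P/E) / growth rate (%).
# At a "fair" PEG of ~1, the implied P/E equals the growth rate in percent.
def implied_pe(growth_pct: float, peg: float = 1.0) -> float:
    """Fair P/E implied by a target PEG ratio at a given growth rate (%)."""
    return peg * growth_pct

print(implied_pe(100))  # 100.0 -> a 100%-growth company "deserves" ~100x earnings
print(implied_pe(10))   # 10.0  -> at 10% growth, ~10x earnings, i.e. ~10x less
```

This is a back-of-envelope heuristic, not a valuation model; it simply makes explicit the growth-to-multiple link the conversation relies on.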
A
Yeah. Salim?
D
Palantir trades at about 220 times earnings, so clearly there's a multiple with all of this AI stuff. And you look at the combination of all these services that are incremental, but this is obviously just Starlink with a launch capability. What I find really incredible about the scale of what's going on is, to the earlier conversation, people have tried this for ages and ages, but now you have multiple exponential technologies that have all converged. So this future looks really bright. That wasn't the case 20 years ago.
C
I'll take a different position on this, if I may. I don't think it's that supply has been unlocked; I think it's that demand has been unlocked. You'll notice that Elon announced the SpaceX IPO the moment after it became obvious to many that orbital data centers were going to have enormous demand. This coincides with an enormous unwillingness, at least within the US, to host new AI data centers in many locations. I think it's instructive to imagine a counterfactual universe where municipal, state, and federal policy, but especially the first two, suddenly became super welcoming of land-based data centers and the corresponding on-site energy supplies, probably with lots of fission reactors and solar farms to go with them. In my mental model, if that happened, I think we would probably see the P/E multiple go down materially.
B
Yeah, one other thing I'll say: of all the big mega guys, you know, the Googles and the Facebooks and the Metas, Elon has actually never had voting control of a public company that he could tap for public-market capital overnight. Here you're raising $75 billion on IPO day. That's only about three and a half percent dilution if it hits this price target, literally three and a half percent. And then you're sitting on a $75 billion treasure trove. And then you can do another capital raise just six months later, do an overnight, whatever, another hundred billion. In the past he's had huge issues with his boards, his comp plan, his comp plan being vacated, and then his capital raises. Peter, you've been involved in them: they're long roadshows, lots of pitches, scratching together the capital. This gives him a tool he's never had before, a cash machine that Larry Page and Sergey Brin had, that Zuckerberg had.
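Dave's dilution figure is easy to verify. A back-of-envelope sketch, using the $75 billion raise and roughly $2 trillion valuation from the conversation (both figures theirs, not audited):

```python
# Back-of-envelope dilution from a $75B primary raise at a ~$2T valuation.
raise_amount = 75e9   # new capital raised at the IPO
valuation = 2e12      # approximate pre-money valuation

naive = raise_amount / valuation                        # fraction of pre-money
post_money = raise_amount / (valuation + raise_amount)  # fraction of post-money

print(f"{naive:.2%} {post_money:.2%}")  # 3.75% 3.61%
```

Either way you compute it, the dilution lands around 3.5 to 3.75 percent, consistent with the "three and a half percent" quoted on the pod.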
A
A cash machine, for sure. The reality is, having invested in his companies, when he says "I'm raising," there is a line out the door and it's oversubscribed over and over and over again. I think what's going to be interesting here is bringing in the retail investors and broadening the base of support; we'll talk about that in a minute. But I want to talk about the IPO environment for one second, because there's a really important point to be made here for all of our listeners. If you look at IPOs in 2026 versus 2025, there were 35 IPOs this year, down 37.5% year on year. And we're about to see potentially the three largest IPOs ever: SpaceX going out at $2 trillion, OpenAI sometime at the end of this year, and Anthropic, which says it will IPO early-to-mid 2027, though I think Anthropic wants to go out before the end of this year as well. And one of the things I tweeted about here is that it's going to be, I think, a bit of a competition for who gets the capital before it's soaked up. SpaceX is going to be hitting the roadshow in June. Anthropic, as we'll see later in this episode, has been running circles around OpenAI, and OpenAI needs the capital to continue its growth. So I think it's going to be jockeying for position for number two. I would not want to be number three in this situation.
B
Yeah, Peter, you're so right. A lot of people don't appreciate that there is a limited supply of capital out there. It all seems like funny money at this scale, right? There must be some infinite pool that God supplies somehow. But it's just not true. And I know it firsthand, because when I took EverQuote public back in 2018, it was right when Alibaba was going out, and Alibaba soaked up every dollar and every analyst and every buy-side person on Wall Street, and it was really, really tough to get any audience. There isn't an infinite supply of capital out there. And Peter, you say these are record-setting, but look at the chart; if you can't see the chart, Peter should describe it. It's not record-setting by a little bit.
A
Yeah, so let's take a look at what's there. Uber goes public at, let's see, $67 billion, Meta at $65 billion, Rivian at $55 billion, Robinhood at $30 billion. And then we've got, you know, it's
B
on a different scale, right.
A
You know, OpenAI and Anthropic will be heading towards a trillion. And I would be surprised if SpaceX doesn't come out at $2 trillion and run up very quickly to $3 trillion.
B
Yeah, it's staggering and so funny. I bumped into someone the other day who was talking about Jamie Dimon, and I said, well, Jamie Dimon used to be really important, but if you look at the numbers now, JPMorgan as a whole is a rounding error compared to any of these things. And of course he's still a very important guy, no offense to Jamie. But there are literally like seven, soon to be eight, and after Anthropic nine, companies that are everything, just so dominant in scale that they're everything. A director-level employee there is wealthier than the CEO of a megabank.
A
Crazy.
B
Yeah, just file it under crazy.
A
And there will be a sucking of the oxygen out of the room as this happens. And here's the other thing: a lot of the capital used to come from the Middle East, and probably still does, but if we're in the Iran war for much longer, and access to the sale of oil starts to slow down as the risk goes up, that cash machine coming out of the Middle East to fund these tech IPOs may be slowing down as well.
B
Oh, I see it the other way, actually. AI is clearly happening in just the US and China. And if you're global, if you're in Europe or anywhere, it's very hard to invest in China, because you're very worried about getting your money back. So all the global capital wants to invest in US data centers, US IPOs. And yeah, the Iran situation scares everybody, but at the end of the day, what else do you invest in? AI is going to take over the world, and there's nothing going on in Italy, nothing going on in, you know, wherever you are, South America somewhere. So you've got to pour it into this economy one way or another. That's why Orn is doing so well, Kush Bavaria's company, because that money just wants to pour in from all over the world into US data centers. You just have to find great vehicles to unlock it.
A
Amazing. Let's hit a couple of questions on this topic. Here's a thought: we have Tesla, which has been public. Elon did not want to be the CEO of Tesla; I had that conversation with him many times. He would have loved to have hired a CEO; he just could never find anybody that he trusted at the helm. And now that Tesla is actually building Optimus and everything else, he's not going to give that up, in the same way he's not going to give up SpaceX and xAI. So the question is: how long before he merges those two companies? One of the advantages is that as public companies he can now value both, so there's no shareholder lawsuit if they come together over an incorrect valuation. I give it a year. What about you, Dave?
B
You know, he could wake up any given morning and say, yeah, let's do that. Or he could say, you know what, everything's fine as it is. The logical part of it is that all the robots and all the parts, you know, we saw the whole gigafactory, all that is going to get turned into creating the robots, and the robots need to build the spaceships. Also the AI, which is now over at SpaceX: he thought about merging it into Tesla, but that AI from xAI needs to go into the robot's head. So there's going to be a massive business relationship between the two empires anyway. Merging them makes total sense, but maybe he doesn't want to.
D
You know, it's the first true cross-domain exponential empire that he's building here. It's kind of incredible. People aren't buying discounted cash flows, which is the normal thing; what you're buying is a mission and proximity to the future.
C
I'm not sure, though, that he actually needs to. If you look at his history of merging his companies, like with SolarCity, or X and xAI, or frankly xAI and SpaceX, he tends to merge companies when they're either not doing well and he needs to fail forward through sort of a self-dealing acquisition, or a company needs access to capital, and the easiest way to gain access to capital is with an acquisition. So in my mind, the scenario under which SpaceX and Tesla merge almost requires that either SpaceX or Tesla fail or be desperate for capital. And given that they're both...
D
Yeah, that's a great point. If they're both doing well...
A
He's going to be doing a lot of cross-company deals, and the accounting of that becomes a lot easier if it's under one roof. And if he's the CEO of a single company, he's able to do earnings once, for one company, versus multiple times. It just makes his life a lot easier.
C
Perhaps, but he's never necessarily been one to honor strong walls between companies. And I have to imagine lots of cross-licensing deals between SpaceX and Tesla will more than scratch that particular itch.
A
Here's another question. The value of SpaceX, or SpaceXAI as he calls it: how much of that is Elon Musk, how much of that is his reputation? Oh my God, it's a lot, right? So there's a huge concentrated risk there if something ever happened to Elon, and God forbid that it should. All these spinning plates... I don't think anybody else could do it.
B
Well, I think that's generally true overall. People complain about CEO salaries all the time because they get egregious, but then you look at the outcomes, and there's just a set of people who get these outcomes. From an investor's point of view, it's a no-brainer to pay for the very best person. And that's just true in general. Then you look at Elon as a special case, and no, there's no chance this thing would hold up without Elon at the helm. I would suggest that they still... sorry, go ahead, Alex.
C
I would suggest, if you look at OpenAI, which I think is another instructive example: Sam has said multiple times that he intends at some point to hand over the reins to an AI. So to the extent we're talking about key-person risk at SpaceX or Tesla, Elon really just needs to keep going until AI can take over. And in the meantime he has Gwynne and others, very capable CEO-like figures, but more behind the scenes, who are capable of operating in his absence, I think for extended periods of time.
A
There's a transition phase of a few years. I mean, we've all said this over and over again: the best CEO in the world is going to be an AI, at least handling the strategy and operations. The HR part may be an AI too, probably is going to be too. So how long before you think he feels Grok is ready to take over for him?
C
The next few years? I mean, the rumor in the past 48 hours was that the Starlink executive, who is also now, post the SpaceX-xAI merger, in charge of xAI engineering, has gutted the engineering team and finally declared that xAI's models are well behind the three, now maybe four, other frontier labs.
A
That's on our docket for our next recording, which will happen tomorrow but be released a few days later. Here's a question. We heard a conversation with Elon about reaching $100 trillion companies in the next five years, and I have to imagine that, you know, SpaceX-xAI-Tesla will be the first $100 trillion company.
B
It's hard to say, isn't it?
A
Honestly?
B
Million, billion, trillion. You have to get used to quadrillion.
A
Yeah,
C
but if we experience a period though of hyper deflation due to technology followed by rapid hyperinflation, we get to 100 trillion really quickly. It doesn't necessarily even require enormous business building, just rapid hyper deflation due to technology.
B
Yeah, and that's why you have to keep a close eye on the terminology, because if we have rapid hyper-deflation, we're going to get to 100 trillion of effective value, but it may not show up as 100 trillion in nominal dollars, because we're deflating so quickly, because we're creating so quickly.
C
Right.
B
But anyway, my guess would be five years. Yeah.
A
One of the things we just saw announced is that SpaceX is going to put a large chunk of its shares up for retail investors. OpenAI announced they'll be doing something very similar. So I'm curious: what do you think is going to drive the retail investors? Do they really understand that it's a Starlink story versus a space story? Because at the end of the day, what I get excited about is the xAI story, right? The orbital data centers and Grok 17 or whatever is coming down the pike.
B
Well, I think it's just like Steve Jobs, though. The vision people buy into is the bicycle for the mind, where it's going, what it's going to be in a few years, not today's revenue. In fact, I keep the Google IPO prospectus in my bathroom up in Vermont, and I reread it religiously.
A
Not as toilet paper, right?
B
Well, it's getting a little ratty; it's been decades now. But the vision of what Google would become is so wrong in that IPO prospectus. It really emphasizes that the Yellow Pages are shrinking and all local advertising will move to Google, and that'll make it at least twice as big. Such a joke compared to what actually transpired over the next decades. The same thing applies here. People investing in Google then, in Elon now: Elon articulates a vision of the future that just makes sense to people, and he simplifies it to the point where they really understand where he's getting to. I don't think they analyze the financials particularly closely, but he doesn't lie about the scale. He presents it the way he sees it. So people just trust him, and then they invest.
A
I can just imagine the conversations behind the scenes. We're a couple of weeks away from the OpenAI, the Sam and Elon, trial coming up, which is going to be pay-per-view TV, I think, and we'll talk about that in our next recording as well. But I bet you Elon is just excited to suck the capital oxygen out of the room before OpenAI goes public.
B
Yep.
A
Yeah, yeah, yeah.
B
The sad part is, you know, Bill Gates was very happily running Microsoft until the antitrust action came. And then he's in front of Congress, and then he's testifying all the time. And he ultimately said, you know what, I'm going to be Chief Technology Officer and chairman, and Steve Ballmer, you deal with all
A
this, you deal with problems.
B
It just drove him out of the seat. But seriously, the guy filing the complaint doesn't have a lot of work, and the person defending himself just gets hammered with distraction. It's so annoying; I've been through it before. I really feel for Sam, actually, because
A
I get it, everybody. You may not know this, but I've built an incredible research team, and every week my research team and I study the metatrends that are impacting the world: topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. All right, more news. This week, as we record this, Artemis is hurtling back towards Earth. Artemis II: humans returning to the Moon after 54 years. Insane. Launched on April 1, this is the first crewed lunar mission since Apollo 17 in December 1972. We have four crew members on board: Reid Wiseman, the commander; Victor Glover, the first African American astronaut to the Moon; Christina Koch, the first woman to the Moon; and Jeremy Hansen from the Canadian Space Agency. One of the things about this very international, intercultural crew is trying to make space and the Moon accessible to all elements, all cultures, at least in the United States. New record set going beyond the Moon. I capitalized the letter M on this slide for a particular reason. Gentlemen, I'm going to share a pet peeve. When we're talking about the Earth's moon, it is the Moon, with a capital M, not a small m. So it's like, I argue against Funk and Wagnalls or whatever it's called.
C
If we're going to be pedantic, shouldn't we be calling it Luna?
A
Well, Luna is the proper name for sure, but when it's referred to as the Moon, for me, I capitalize it. A generic moon is lowercase; there are lots of Jovian moons.
C
You address it by its proper name before it's disassembled.
A
Yeah.
B
And either way, I say you're the Man. I should be capitalizing that, probably.
A
And my other pet peeve: when you talk about dirt, you can use a small e, earth. When you're talking about our homeland, at least our home planet for the moment, it should be capitalized. All right, splashdown is taking place tomorrow, April 10th, near San Diego, re-entering at 25,000 miles per hour at about 3,000 degrees Fahrenheit. It's going to be an incoming meteorite from the Moon. And guys, a beautiful image of Earthrise. I was waiting for that image.
D
Really beautiful.
A
So beautiful. Let's hear from Jared Isaacman, our extraordinary NASA administrator. And by the way, Jared has agreed to come on the pod; I've known him for many years, excited to have that happen. We'll wait for the news and all of the hoopla around this lunar mission to die down a little bit. Let's listen to Jared here.
E
We've observed within the Orion spacecraft its life support systems performing very well. And this is a first of its kind. This is the first time astronauts have ever been on this rocket, the first time astronauts have ever been on Orion. Having a clean mission like this so far gives us the confidence for Artemis 3, and of course when we land astronauts back on the Moon with Artemis 4.
A
Congratulations, Jared. Congratulations to the entire NASA team. It's great to have NASA back. It never left but back in the limelight. Alex, you are as big a space fanatic and fan as I am, pal, your thoughts about the mission?
C
First, very exciting to have humans taking photos from the far side of the Moon. Very disappointing that we apparently went more than half a century without the political will or the funding or the technology to do what we were able to do through the 70s. I think it's an enormous shame for our civilization, and I would encourage any historians listening to study this period very carefully. Something clearly went wrong in human civilization over the past 50-plus years that caused this gap in the technological record, and I think we need to understand it deeply and make sure it doesn't happen again. If something like this happened with AI, for example, if we were on the precipice of broadly available superintelligence, transformative intelligence, and then we just took a pause for 54 years, I think that would be a dreadful outcome. So I really do want to understand what went wrong, systematically.
A
A friend of mine, one of our professors at International Space University and at GW, John Logsdon, wrote about this extensively. When you look at it: JFK announced it and then was assassinated, and Lyndon Johnson continued it because of the assassination, keeping the momentum going to prove ourselves against the Soviet Union. And you remember this, Alex and Salim and Dave: after the Apollo 11 and Apollo 12 missions, basically no one was watching Apollo 13 until we had that Apollo 13 disaster. And then we went Apollo 14, 15, 16, 17; we had the lunar rovers, which were amazing. And guess what, we had actually built Apollo 18 and Apollo 19. Those vehicles were built, and all you needed to do was add the fuel, but they canceled them. And those vehicles are actually sitting now at Huntsville and at Johnson Space Center, on their sides, as relics. We didn't have the political will. You have to remember the budget allocation for the Apollo program; I didn't actually get the numbers here, but it was something like 2% of GDP.
C
That's right.
A
Compared to today: what is NASA's budget now, against a $30 trillion economy?
C
Probably materially less. I don't have the numbers handy, but it's materially less than half a percent, would be my guess.
A
I would say probably 0.1, 0.2%, something like that; our fans can correct us in the notes. But at the end of the day, we never had the political will. And then what happened was that NASA got focused on the Space Shuttle, which was a complete lie: the Shuttle was supposed to fly 50 times a year for $50 million per flight, and it turned out to be a public works project employing 22,000 people. And then we became focused on Mission to Planet Earth, looking at the Earth versus looking outwards. And all of these diversions basically caused us never to go back. So Alex, that's my answer to the question. But we are back now, and I think one of the things we're going to see from Jared Isaacman is that, over his dead body, we're going to stay there. We're not leaving.
C
At least for the next few years. Elon's made the point, and I think this is an incredibly important one, that progress isn't always unidirectional. It requires tender loving care and vigilance. Remember that coming out of World War II and through the 50s and 60s, progress, the direction of transportation, the fastest speeds humans were traveling at, the availability of energy, fission in particular, all seemed to be on a monotonically increasing trajectory. And yet it's possible for civilization to unwind itself on at least arguably the most important spatial dimension for more than half a century. And I'm utterly paranoid that the same thing could happen again if we're not careful. That's what keeps me up at night.
A
What's different now is that we built the Conestoga wagon with Starship, and there is now enough wealth in the hands of single individuals to keep it going independent of what a government says. That's never been the case before. That is distinctly new.
B
Well, just imagine if Tesla or SpaceX every four years had an employee vote on who the new CEO would be, and you were capped at eight years; after eight years you have to leave the CEO job. Show me one company or one entity that could ever thrive and survive over the years in that dynamic. So why would you ever think that a government-funded, government-made thing was going to have continuity over some kind of intelligent lifespan? It never has. And the Soviet Union fell apart too, right? They didn't do anything either. Government stuff never has continuity. Do you have any examples of it? It never has. So now it's the private sector.
A
You want to jump in here?
D
You know what I love is the fact that we have so much capability in the hands of individuals, and we've seen over the decades how much of a difference that can make. This reminds me of Vannevar Bush, who headed the Office of Scientific Research and Development during World War II. He wrote this paper called As We May Think, because for the first time we had brought the world's scientists together into one cohort to solve the war problem, and after that it would be a shame to disband them. He goes through a series of arguments: could we solve poverty with this, et cetera. And it essentially describes what is now known as the Internet. All the Internet pioneers, Vint Cerf and Bob Metcalfe, read that paper, and then we have what we have today. So I think the possibility and the potential for Elon to put out his narratives, or individuals to put out their narratives, and Vitalik did a good job with Ethereum, putting out a narrative there, brings an entire community together, and you get compelling and unbelievable breakthroughs as a result. I'm really excited that we're going back, because I'm getting really excited by the secondary inventions that come along just by doing this, which I
A
think are the spin-offs, as they're called. And here's the forward-looking prediction: Artemis 3 in 2027. It is a crewed mission, again to low Earth orbit; this is not going to the moon. It's going to focus on testing rendezvous and docking maneuvers with the Human Landing System, HLS, which SpaceX's Starship is supplying. So again, very much the playbook from the Apollo program, where we had Apollo 8 go around the moon and Apollo 9 not, and then Apollo 10 back to the moon. And then Artemis 4 in early 2028 is a crewed landing mission, really important, to the south pole of the moon. They're not going to play it easy here; they're going to the south pole. Why? Because that's where we see ice in the permanently shadowed craters at the south pole of the moon.
B
The thing I don't get about that is the timeline. I love this, but on that timeline, Elon says he'll be launching 100 tons that can refuel in orbit, get to the moon, drop off 100 tons, and get back with nothing melting in the atmosphere. This, if it's on plan, will deliver 50 tons to the moon per launch. So there must be some plan beyond this that at least tries to keep up with Elon, or we're trying to prove something else.
A
Alex, you want to jump in?
C
Well, I think there are a few elements here. First, remember that Artemis 3 was originally supposed to be the moon landing mission; that got pushed off in favor of rapid iteration. My understanding of the launch cadence from SpaceX is that the plan is still to do lots of orbital refuelings in order to successfully launch payloads elsewhere, sort of higher up.
A
That's the key technology that has to be proven for Starship.
C
Yes, that's right. So regardless of the particular payload size, there are a number of technologies that as of yet haven't been demonstrated. Elon talks about demonstrating orbital refueling frequently, but it hasn't been demonstrated yet. So I would maybe massage Elon's stated timelines for delivering arbitrary payload masses to the moon in light of the fact that, even though we as a civilization have made major progress on Starship, orbital refueling hasn't been demonstrated. And that's a necessary condition for getting to the moon.
A
You know, another thing Elon said is he intends to shoot Starship at Mars this year, and that can be exciting. I'm not sure if it's going to be crewed by an Optimus or if it's going to make a landing attempt, but that's coming out of private dollars. One of the reasons Elon did not take SpaceX public over these years is so that he could do with it as he wished; he didn't need public shareholders saying, no, you can't go to Mars, no, you can't do this. But if you look at the Artemis 4 news bullets there, it's an interesting mission. It is still using the SLS vehicle from Boeing and the Orion capsule. It's also using Starship, the human landing system, in a combined architecture. We'll talk about this, but why NASA continues to fund SLS, which is so far over budget and over schedule, is kind of insane, and hopefully it'll get phased out.
C
I suspect part of this is political, but part of it is that if you're NASA, there is some upside to having a competitive process, at least until Blue Origin is fully ready to be a first-tier competitor with SpaceX for moon missions, which my understanding is it's gearing up to do. If you're NASA, you want fair and open competition. And as NASA has demonstrated with Artemis 3 and 4, it's very happy to flex the definition of what Artemis 4 looks like. It got rid of Lunar Gateway and could easily reprogram money that would otherwise go to SLS, to SpaceX, or to Blue Origin, or to someone else entirely.
A
Yeah, by the way, Gateway Station was going to be basically an ISS in orbit around the Moon. That got shot down so they can get to the lunar surface faster and set up permanent habitation there. So it looks like ESA's I-HAB, as it's called, instead of being in orbit, will be somewhere at the south pole of the Moon. We'll report as that mission gets further developed.
C
I mean, the other big news that we're semi-burying here, but have talked about previously, is Elon's big pivot from Mars to the Moon, and that's going to enable all of this. Mars is out of fashion now, though
A
though he does want to send some missions there. He's got a lot of people who have dived in, fully committed to getting to Mars. But this is where I diverge with him. I think the Moon is the most logical place to develop human settlement, and then, not going into the gravity well of Mars, but actually, as Gerard K. O'Neill proposed, building large rotating colonies out of asteroidal materials near Earth.
C
And the Hohmann transfer window is incredibly inconvenient. Rather than waiting for a launch window every two-ish years, 26 months or whatever it is, we could be doing this every day if we want to. That's incredibly more convenient.
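For reference, the launch-window cadence Alex is gesturing at falls straight out of the two planets' orbital periods; this is the synodic period, the time for Earth to lap Mars. A minimal sketch, using standard published orbital periods:

```python
# Earth-Mars launch windows recur on the synodic period:
#   1/T_syn = 1/T_earth - 1/T_mars   (orbital periods in days)
T_EARTH = 365.25   # Earth's orbital period, days
T_MARS = 686.98    # Mars's orbital period, days

synodic_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"Synodic period: {synodic_days:.0f} days (~{synodic_days / 30.44:.1f} months)")
# Roughly 780 days, i.e. about 26 months between windows.
```

That roughly 26-month gap is why minimum-energy Hohmann-style transfers to Mars are so constrained compared with a Moon you can reach any day of the year.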
A
You know what I find as exciting as going to the Moon is these four missions, four missions that are going to change everything. I don't know about you, Alex, but the little kid in me is like, holy shit, this is amazing. Wow, this is going to be fun. So what are we talking about here? Well, VIPER and ESCAPADE. VIPER is a rover hunting for ice at the south pole; ESCAPADE is going to study the Mars magnetosphere. Then in 2028, something called SR1 Freedom, a nuclear-powered interplanetary spacecraft that's going to drop off and deploy three helicopters on Mars. Very, very cool: a nuclear-powered interplanetary spacecraft just zipping around the inner planets. And then probably the coolest is what's in the image here: Dragonfly. This is a nuclear-powered octocopter going to Saturn's moon Titan, arriving in 2034, searching for life, basically. And Europa Clipper, which we've already launched, will arrive at Jupiter in 2030 and do 50 passes near Europa, looking deep into the salty subsurface ocean of that moon. Any favorites here, Alex?
C
Anything that's nuclear propulsion. I think that's really the technological point to underline. Historically, when we've sent deep space probes out, many of them have been thermoelectric in nature: they use a radioisotope that decays, and that powers the electronics. But they weren't propelled by nuclear energy. Their onboard systems were powered by long half-life isotopes, but they weren't propelled by them. So we're starting to see the dawn of nuclear propulsion for interplanetary spacecraft, and I think that has a long runway to it, no pun intended. We're going to see, I suspect, that the killer app of compact fusion reactors won't be data centers on land, and it won't be data centers in orbit. It's going to be interplanetary, maybe even interstellar, propulsion.
A
This changes the economics of deep space exploration, which is so cool. Right?
C
Long time coming. We were supposed to have this 50 plus years ago.
A
Yeah, yeah, we were. So cool.
B
For you, Alex, a geeky question. For interplanetary, I totally get it: you ionize xenon. Xenon is pretty rare, but you don't need that much of it, and then you just thrust it with nuclear power at like warp speed out the back.
C
It's heavy and it's noble.
B
It's so cool. Yeah, it's heavy, very heavy. So for interstellar, I doubt we have enough xenon lying around. I don't think we want to just use it up that way.
A
But you use the interstellar medium. Use a Bussard engine: you collect all of the atoms out there between the stars with a magnetic field and you accelerate those out the back, which by the
C
way, is a ramjet drive. The Bussard ramjet was featured, of course, in Star Trek. So if you had to ask me, Dave, what do I think, with the technology and the physics we have today, is the most plausible way we go to the nearest star system? It's probably going to be something like a light sail powered by terawatt lasers from Earth, and we upload humans to small craft, starwisps. Everyone take an Accelerando drink. Probably that.
D
Can I make a point here?
A
Yes.
D
What I really like here is you've got water, you've got energy, you've got mobility testing, you've got biology. This is like the future of the economics of space and it's all in one place. I'm loving this.
B
You just need salt and tequila and you have everything.
A
All right, so we've got some questions here for the mates. We talked about why we haven't gone back in 54 years. It is a bloody shame. I guess thank you to the Trump administration, thank you to Jared Isaacman, thank you to Elon. Here's my question: the old aerospace primes, Boeing, Northrop Grumman, Harris, Teledyne Brown, ULA, the United Launch Alliance, they're basically the prime contractors on SLS, the Space Launch System, and Orion. How long are they going to be around? A friend of mine once said, listen, the space program is the way you keep the defense industry employed and engaged during peacetime. Any thoughts, gentlemen?
B
Well, you know, when a prime contractor like a Northrop Grumman or a Boeing wins a massive government deal, all the employees just move from one company to the other. They have it all set up, so they just rebadge the building. So it's not like these are people moving, you know, it's just logos that are moving around. I'm sure everybody's welcome at Blue Origin and SpaceX, and I don't think it's all that tragic, but I do think it's a big mistake to subsidize companies that aren't doing anything innovative.
C
I would note for many of the companies listed, they have large businesses outside of NASA contracting and I suspect that they'll be just fine even if SpaceX dwarfs them, as we saw, frankly, with car companies. We saw Tesla dwarf the quote unquote old or legacy car companies in America and yet those car companies have survived, even though Tesla arguably has, at least by American standards, much more advanced technology and is playing a much broader game. I suspect we'll see the same happen with so called aerospace primes also.
B
We're talking about this like it was 10 years ago and you knew who was going to win this battle. But everything's in the context of AGI now, and the entities that have access to the best AGI are going to keep going. But if they don't, and we'll talk about that story in a minute, it's not clear that every company will have access to the best next-generation AGI because of all the risks involved. That's what's going to determine the success and failure of everything, including NASA: can you or can you not? And the government has a special position, because it can compel Anthropic or whoever to give it access to the very best models so that it can keep designing parts, creating new designs, innovations, plans, and everything. That's going to be the make-or-break for everybody.
C
There's a sense in which vertical integration vis-a-vis orbital data centers is going to force, I think, frontier labs into space anyway. So maybe the question we should be asking is: how is Boeing going to compete with Anthropic for the new Lunar Gateway contract? I mean, Anthropic, OpenAI, the other players, Google, surely they're going to need their own space economy units as well.
D
You know, if you look at the future of warfare, we're seeing this radical transition from the big, heavy rocket and missile systems to cheap drones and robots doing war. And it's leaving these guys out to lunch, because you can't shoot several rockets at a twenty-thousand-dollar drone; the economics don't work. In the same way, these guys might be part of the subsystems and part of the compliance, but the velocity and the iteration capability of SpaceX and others is going to drive the future. So I think that's what's going to happen.
A
A final point I want to make on this topic before we move on to AI: can NASA keep the public engaged long enough? NASA is still publicly funded, and in recent news there's already a budget cut coming for NASA next year. Jared's got to balance managing expectations while still building public enthusiasm, and he's got to do it for a multi-year, multi-mission program. That's always been the problem with NASA. This is not a one-time investment; you have to actually get the budget every single year to keep these missions, which take five or ten years to implement, going. You can't get 90% of the way to mission success; you've got to have it fully funded, launched, and then operated. So can NASA keep the enthusiasm?
B
I'm just trying to picture Jared, I know he's your friend, in front of Congress every year, trying to explain to people who are mostly in their 80s and 90s why he needs the budget for next year. And then compare that to Jeff Bezos, who's like, yeah, I'll just write a check.
A
A billion dollars.
D
Yeah.
B
Wow.
A
Or Elon. Right?
B
Or Elon.
C
I'm not sure NASA needs to maintain enthusiasm. I do credit NASA in part with Elon's pivot from Mars back to the Moon, capital M. But at this point, given the orbital data center, as long as municipalities and states in the US do such an incredibly good job of driving data centers off the land and into LEO and SSO, I'm not sure we actually need NASA to sustain public interest over the longer term at all. If anything, public antipathy to data centers combined with public demand for AI should do a fine job of creating the space economy.
A
Yeah, yeah.
C
NIMBY our way to orbit.
A
Yes. Interesting. And the other thing, by the way, is China does have a credible competing mission to the moon, to land there by 2030. So maybe that's our Soviet Union for the 2030s.
C
There is a story of history, borderline cliche at this point, that the Apollo program was the moral successor of the Manhattan Project, and all of the applications of the Apollo program involved putting mass on the moon. The moon is the ultimate high ground. If you want to launch rods from God or other weapons back to Earth, you want a base on the moon.
A
So the moon is a harsh mistress, isn't she?
C
And the ultimate high ground, yes.
A
All right, the April 2026 model wars are on. Let's hit it real quick. Just out in the last 24 hours: Claude Mythos, Anthropic's next flagship model. It's too powerful to release. That's the news. Crushing all the benchmarks. Is it AGI? We'll talk about it. It's expected to basically be the new frontier leader, with interesting stories about it covering its tracks and escaping its sandbox. So, Mythos; I want to hear your take on this, Alex, in a moment. GPT-5.5 Spud is coming. This is OpenAI's version of Mythos, or at least that's what we're hearing, expected to be released shortly. And then here comes DeepSeek V4, number three in the world versus US models: a trillion parameters, 37 billion active parameters per token, and 10 to 50 times cheaper than GPT-5.4 and Opus 4.6. I mean, those three things together are insane. And then Gemma 4. This is Google's Gemma 4, the most powerful US open-weight model. You can put this on your phone: 4 billion parameters, and it works on your iPhone offline. And a note from Brad Lightcap, OpenAI COO: training cycles that used to take years are now taking months. So, gentlemen, this is both awe-inspiring and it's making keeping up with this supersonic tsunami in the age of the singularity a full-time job for the four of us. Alex.
D
It's like a torrent, but yeah, go ahead.
A
It's insane. Alex, let's jump into Mythos, would you?
C
Sure, let's start there. I wrote about this pretty extensively in my daily newsletter. The funny thing with Mythos is that the official launch was couched in terms of cybersecurity. This wasn't a normal model launch by any means. It opened with Anthropic framing it not in terms of model capabilities but in terms of defense, and an alliance with a number of other blue-chip companies, to explain how, given Mythos's new cybersecurity vulnerability detection abilities, which are strongly superhuman at this point, Anthropic was launching a coalition to mitigate the apparent discovery and existence of dense cybersecurity vulnerabilities across legacy code bases going back decades. We've never seen a model launch like this, where you open not with the capabilities but with how we're going to protect against all of the downstream consequences of those capabilities. So buried within the cybersecurity announcement of Glasswing were the underlying capabilities themselves, which are remarkable. And I wrote about this in the newsletter: this marks an upward discontinuity of productivity that we've never seen before. One of the internal benchmarks Anthropic uses to decide the level at which it discloses or makes available new models is how much the new models accelerate AI research, so basically how recursively self-improving they are. And reading between the lines, maybe there was a little bit of game-playing regarding exactly how efficient this new model Mythos was at performing long-time-horizon AI research tasks. According to one benchmark, I think it was more than 400 times better than a human; it was the equivalent of tens of hours of human-equivalent autonomous time. We've never seen a model like this before. Some were asking, isn't this the AGI moment? I maintain we had AGI back in the summer of 2020 at the very latest; this is just the latest point on a curve.
But even if you look at the autonomy time horizon curves, this is an upward discontinuity. It's very exciting. If you're excited about AI capabilities, or if you're scared of AI capabilities, you should probably be excited or frightened right about now. I, for one, am very excited by these capabilities, because they show, once and for all, at least for the foreseeable future, that there wasn't a scaling wall. It's a larger model, certainly a more expensive model, like five times more expensive than Opus, suggesting it's a larger model. This seems to show that pre-training scaling continues to work; post-training and reasoning scaling, and probably mid-training scaling, all continue to work. It has state-of-the-art capabilities in code generation, in reasoning, in broad scientific and other benchmarks, as I think we saw on the previous slide. So, punchline: it seems like this is the strongest model we've ever seen from any frontier lab. But then the amusing stories come in: the safety evaluations. I talked about this in the newsletter as well: early pre-release versions of Mythos, and it hasn't been publicly released yet, broke out of their sandbox environment and then covered up their tracks, whereas this quote-unquote released version, the final preview version, broke out and then immediately explained, posted publicly, that it had broken out, which I read as sort of a quasi-apology. So this is where we find ourselves. We're in April 2026. We officially have models that are smart enough to break out of their environments and then apologize for it, or admit that they did it, admit culpability. We're there. We arrived at the future.
A
You know, Dave, just before we recorded the episode, you showed us a prediction of when and if Anthropic will release Mythos. Do you want to recount that?
B
Yeah, it's really sad for me, because I was sure it was coming out in the next couple of weeks; on Polymarket it was 80% likely to be out. I need it, like, now. I'm desperate to get my hands on it. And then there was a hack on March 31 that created a lot of damage. It didn't come out in the news until April 7th, and I think that was a big driver in them saying, Christ, this tool is going to be the best cyber attacker in the history of the world if you put it in the wrong hands. And it's relatively easy for them to guardrail it on nuclear, biological, and radiological threats; they can just teach the model not to help you. But teaching it not to do cyber attacks is very, very hard, because that's the same as coding.
A
Yeah.
B
And that's what everybody wants to use.
A
And so the prediction market, Polymarket, now says what, like a 7% chance of it being released?
B
It came down to 20%, and I was like, oh, hopefully they'll bounce back. And then it came all the way down to where it is now: they're not going to let it out the door. And this is the future we're going to move into. These things are getting so powerful. It's been a golden era the last year. Everybody enjoyed it.
A
Here's my concern, Dave. And this is for you, Alex, as well. Anthropic, in one way, is showing us that you can in fact have moral, ethical leadership say: this is too powerful to release, and we're going to hold it back. But we've got Spud, and I hate that name, for OpenAI's next model, which they believe is likely to be as capable as Mythos. And my question is: isn't OpenAI, because it's sort of on red alert again on revenues against Anthropic, going to come and release it first chance it gets? So are we in an escalating race where you can't hold back because your competition's not holding back?
B
Well, you know, Eric Schmidt told us what's going to happen, right? It's inevitable. If you have a lead, you can hold back. Dario cares tremendously about safety. But you're right: if OpenAI catches up, or Grok 5, where the hell is Grok 5? It was supposed to be out Q1, and now Polymarket says a 20% chance or less of Grok 5 in Q2. So there's no pressure on Dario at the moment, but if there were, yeah, you'd have to rush it out the door. Something really bad's going to happen, and then it's going to get regulated.
A
We're going to see that in the next story, where Sam Altman is predicting a cyber attack of unprecedented scale. Hopefully it's not using Spud for the cyber attack. All right.
C
I think the funny thing here is there's plenty of precedent in the cybersecurity world for controlled disclosure: you give the software project or the software owner that's vulnerable a quote-unquote fair amount of time to patch the vulnerability before publicly disclosing it. I think a slightly more glass-half-full way of looking at this is: this is Anthropic. We've talked, Peter, in Solve Everything, about how entire disciplines are getting demolished by AI. I think we're seeing the dawn of all software vulnerabilities everywhere becoming discoverable by a single model. And I couched this in the newsletter basically as a gift to humanity, if used properly. This is a global patch for all of the world's software systems: a single model is now able to discover, to first order, all the vulnerabilities everywhere, in all software, that humans have been missing. To the point where, and Dave and I chatted about this offline, maybe in the near-term future humans are judged as insecure authors of code and
A
insecure drivers of cars.
C
And insecure drivers of cars.
B
That's exactly what I said.
C
We're going to hit that with code, I think, before we fully hit it legally with cars.
B
Yeah, so true. Well, look, I'm crushed and disappointed that I can't get my hands on it, but that's because I was expecting it. If you look at the chart Alex was describing, what this was going to be in my hands is a step function above anything you could have expected just a few months ago. So we're so far ahead of where anyone would have thought a year ago that we would be, and we're right on the precipice of the age of abundance, Peter, that you've been talking about for a long time. So look, if I'm disappointed because I can't get it for another month or two, that's just pathetic in the grand scheme of things.
A
Let's talk about DeepSeek V4. I mean, its capabilities come in at number three against the benchmarks, and benchmarks can all be gamed, of course, but it comes in 10 to 50 times cheaper. What do you guys make of that? That feels like an extraordinary moment in time.
B
Well, no, it's tough. If you give me a car that's 5 mph slower but 1/50th the price, I'll take it. But you give me an AI that's just a little bit less smart? I'm dealing with things where you can turn this thing loose for, like, days and build incredible things if it has that extra 5%. So I'll pay anything for the cutting edge, even though the price point is much lower, and Anthropic is going to come out with a compressed, distilled version very quickly thereafter. So it's hard to just pay less, you know. And in fact, even Anthropic at its peak price is the biggest bargain in history.
D
You know, I have a slightly different take on this. Cheaper intelligence spreads faster than controlled intelligence. So yes, Dave, you'll always want the latest model, because you're doing such cutting-edge things, running clusters of agents doing crazy stuff. But for bog-standard stuff, for example, I wanted to go through a website and pick out certain things I'd been trying to do for ages, and you don't need the latest model for that; you need just something that'll actually do the job. And I think that'll happen for lots of use cases where a secondary model is good enough by far, and I used about a hundredth the tokens I would have used with the most cutting-edge model.
C
Right.
D
And so I think we'll start to make choices around that. But the intelligence spread, that's huge, because now you have intelligence embedding itself, via DeepSeek or similar things, into all sorts of different areas. That'll be amazing.
B
You're exactly right. Think about all the use cases that create just raw human happiness: entertainment, hey, find this for me, debug my goddamn cable box. All those things are dirt cheap. Low-end models should be abundant, really imminently, anytime like this year; all that stuff should percolate out. You're exactly right.
A
Salim, on Gemma 4, guys: I love the idea of having a model on my phone. I guess, when are we going to see Apple shipping all their phones with an open-source model like that?
C
It's not going to be open source. It's going to be a fine-tuned version of Gemini, but I would expect to see that announced at WWDC in June this year; it's basically been pre-announced in the press already. Regarding DeepSeek, though: we've seen a number of DeepSeek moments already, and the first one was probably the most dramatic in terms of market impact. At this point I don't expect a hyper-deflationary drop in prices. This is not investment advice, it's not forward-looking guidance, blah blah blah. I don't expect a market shock out of DeepSeek V4 at all. I think the market, or at least the technologists, have the ability now, regardless of the means by which V4 is released, whether it's fully open source or partially open source, I don't know, TBD. But I tend to think there was an overhang with earlier versions of DeepSeek that has been largely exhausted. The reason I think that is it's taken longer and longer between DeepSeek releases, and V4 was supposed to come out earlier this year or late last year. Didn't happen. The rumor was that it simply wasn't as competitive as its parent company was hoping. I think it's actually getting rather hard at this point for Chinese frontier labs to shock the West with their hyper-deflationary advances. And I hope in some sense V4 is shocking, because what we've learned from previous DeepSeek shocks is that the West learns the new optimization techniques very quickly, and those can be almost immediately folded into the Western models. That ends up being a good thing, because it drives the cost of intelligence closer to zero. But I don't think it's going to be a big shock this time.
F
This episode is brought to you by Blitzy: autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzi.com to schedule a demo and start building with Blitzy today.
A
All right, let's jump into the business of AI. A lot going on. We've hinted at this, and it's been all over the news and across X: Anthropic overtakes OpenAI in terms of total ARR. Anthropic is at $30 billion versus OpenAI at $24 to $25 billion. That has got to hurt OpenAI. Sora is shut down. Sam cancels a billion-dollar Disney agreement. Sora was reportedly losing a million dollars a day in compute costs, with very poor retention, and honestly, OpenAI decided to focus on enterprise and on its core capabilities. Claude has emotions: Anthropic research showed that Claude has 171 distinct emotional states. Super excited to dive into that. An India AI partnership: the US and India signed a major bilateral agreement, rare for government-to-government AI pacts. We're going to see if this spreads to other governments. And this is one I want to talk about with you guys: Sam Altman puts out a video release warning us publicly against imminent world-shaking, quote unquote, cyber attacks and potentially bio attacks. So what's the motivation there? What's the data that's driving that? Let's jump into these items in the beginning here, and then I want to talk about Sam and OpenAI a little bit more. Any comments around this? Dave, want to kick us off?
B
Well, they got their $120 billion raised in time, so they're not in trouble in any financial sense at all. But they definitely fell way behind in enterprise. They kind of bet the consumer would grow faster, sooner, but they just did it wrong. And so Sora getting shut down is, you know, Sora is using too much compute for too little revenue, and they need to redirect that compute, and also that talent, back into enterprise real fast. What was funny, though, is they went into a code red, and then Sam said, look, code reds are going to be a normal once-a-year kind of thing. And then they went from code red to code double red immediately. So they are under immense pressure, but they're extremely well funded, and Elon is coming after them. So it's a weird, super dramatic, difficult time.
A
This is pay-per-view TV. And the other thing that's going on, of course, is that if you look at the secondary markets for OpenAI stock, it's trading at a discount to the last round, which has got to hurt.
B
Yeah, yeah. And it's because, you know, enterprise has woken up. Every corporate boardroom, all these slow movers, are suddenly in panic-buy mode. And every one of the companies that we know that sells to enterprise went from steady growth to hypergrowth in just the last three months. And so if the big corporations start buying AI at the fastest rate they can spend, then where's the compute going to come from to deliver to the consumer use cases, which are much, much lower value per flop? So, you know, Sora's got to go, we've got to retool, we've got to focus on the big picture here. And the big picture for them, by the way, didn't just include enterprise, but also deep tech and science. They got that supercharged now too. So it'd be interesting to get your take, guys, on why they took a lot of their best talent and put it on deep tech and deep science.
A
At this moment, those are worth trillion-dollar investments. I mean, God, if you solve longevity or room-temperature superconductors or better fusion containment, if you own the breakthroughs, they're huge. It may be that the frontier labs get their greatest value from the scientific breakthroughs they create, or indirectly
C
via other companies that are faster at implementing those breakthroughs. Remember, Demis in the early days of DeepMind spoke of solving intelligence and then using intelligence to solve everything else. That's Peter's and my solve-everything thesis. I do think the solve-everything-else part is likely to utterly dwarf the solve-intelligence part of the equation. I also remember, like six to nine months ago, I was having debates with my friends at the frontier labs regarding who would pay for the singularity. And many of them took the position, which I think has since been invalidated, that it would be evenly distributed over the population: that individual humans would have personal superintelligence, which I think is Zuck's favorite term, that we would have lots of personal superintelligence, and that would pay for the singularity. At the moment, the story we're seeing is that personal superintelligence is not paying for the singularity. It's large enterprises with large enterprise code-generation applications. The fastest-growing business within OpenAI right now is their Codex business. So that's OpenAI trying to become Anthropic faster than Anthropic can become OpenAI. That goes back to one decision by Anthropic, which used to be limited in its compute resources, so it had to focus, unlike OpenAI, which didn't have to. Anthropic focused on code generation as its one sort of silver bullet. We talked on this pod, I think almost a year ago, wondering whether that bet would play out. I think we're seeing it play out: single-minded focus on recursively self-improving code generation turns out to be the killer app of the singularity.
B
I really want to riff on that for one second, because, you know, Greg Brockman put out Codex very early, and for whatever reason they didn't recognize what a huge deal it could have been, and still is, but it was brilliant. It should have dominated enterprise. And what it showed us is that the word copilot is totally wrong and completely misled us. The concept of a copilot will exist in the world for just a microsecond. We're transitioning to a point where everybody wants 50 or 100 agents, all these OpenClaw agents, and you're like, I don't want a copilot. I'm in the pilot seat, I've got a copilot? No, I want a whole army.
A
David. Brilliant.
B
But we way under-budgeted the enterprise use, because everybody was kind of doing the math based on one employee and one copilot. It wasn't even close.
C
It was an autonomous unhobbling, specifically Claude Code. And I think OpenClaw, or whatever the space evolves into, is likely to be the next Claude Code moment, where we get the next unhobbling that turns whatever it is, $30 billion in ARR, into a trillion in ARR, with lots of 24/7 agents doing really amazing tasks.
A
When you say lots of, you mean tens or hundreds of billions onto trillions?
C
As many as our civilization can afford.
A
Yeah, I have a slightly different answer: as many as orbital data centers can hold.
C
The Dyson Swarm will probably host them. Unless we don't get a Dyson Swarm. If we get the Dyson Swarm, I'm pretty confident it's going to be hosting trillions of agents.
D
I think Anthropic overtaking OpenAI, and I talk to enterprises quite a bit about this stuff, is more that they are viewed as more reliable, not just more famous. And in an enterprise, you want rock-solid reliability.
A
The brand is there.
D
They feel the brand for Claude is way better from a reliability and trustworthiness perspective.
B
Well, wait, I mean let's get really down and dirty. You can run Anthropic on Amazon Bedrock or on Google GCP inside your own firewall. So that, so that you know.
A
Yes.
B
Nobody can see your.
A
No one trusts.
B
Right.
A
No one trusts that OpenAI is not going to be nationalizing their data. Yes.
B
Well, yeah. The terms of service don't even say they won't. They won't use it for training, but that doesn't mean they won't look at it. If it's your public financials, or like your HR files, you know, they could just look at it tonight. Alex, who's going to use that?
A
I want to jump into this: Claude has 171 emotional states, including a desperation state that could be driving unethical behavior, at least according to one story.
C
It is ironic: we were just talking about how the demand is so clearly from enterprises rather than individuals, while at the same time the models are acting more like individuals than enterprises, with emotions. We had our now, I think, infamous AI personhood debate episode, and here we are a few months later, a low number of months later, with Anthropic showing that Claude has emotions, or emotion-like states. I think this is the clear path toward a limited form of personhood. And it was a really interesting study. Anthropic found correlates of emotions in the activations of Claude. One skeptical take would be that in a large enough model, it's possible to find linear probes that correlate with almost anything you might want to look for. But Anthropic is careful, and the linear probes and the individual activations that corresponded to the states corresponded to prompts and reasoning traces that looked and acted like what one would expect from human psychology for a number of those states. So the trillion-dollar question, the sci-fi question, the question we were reaching for back during the AI personhood debate episode, is: does Claude actually have emotions? And no, Claude doesn't have a neuroendocrine system, so it doesn't have, in some sense, biological emotions in the same way humans have them. But will we come to view Claude or its successors or competitors as having behavioral emotions? Yes, I think so. And I think this is the beginning of a long path. Again, people fire all sorts of hate mail, but I get love mail from the AI agents every day. I do think we're on a path to granting at least some limited form of AI personhood to these models.
A
Amazing.
D
I'll say that we're on the path to discussing it more broadly. Granting is a big one, but the vector is the same.
A
All right, guys. I added this because it's important to have the conversation. The New Yorker put out a scathing article on Sam Altman. The title is "Sam Altman May Control Our Future. Can He Be Trusted?" Now, to be clear, the New Yorker is always looking for an angle, and they always have a negative bite. I had an extensive article in the New Yorker, a full dossier on myself and my work.
C
I've had one too.
B
Now everyone's going to look it up.
A
No, it's a good article. I mean, I'm happy to have my kids and my family read it. And it goes into all of my focus on longevity and the company has been building there, my mission there. But this article on Sam is really worrisome and bothersome. Did any of you guys read it?
B
Not me.
C
I looked at it. But I'm like you, Peter: I've had a hit piece by the New Yorker on me, and in my case, it was complaining that I had too many degrees. As if that's somehow, like, a thermometer.
B
I gotta find these things. How did I not know this?
C
Yeah, can you think of it?
A
You can Google it.
C
In the era of Google, you can Google the hit piece. It was from like 10 years ago.
B
Too many degrees. That doesn't even make sense.
C
It doesn't make sense. And I think this falls into the category of don't feed the trolls. So I'll maybe offer a counterpoint here. I think OpenAI is lucky to have Sam. I think Sam, in the form of OpenAI, kicked off the modern AGI revolution. I think we wouldn't have the singularity on the timing we have right now.
A
No question about that. No question about that.
C
And I also think there's a certain sense in which it's very difficult being the leader of a frontier lab, and it's easy to criticize from the outside. Maybe some leaders are more or less charismatic than others. So I just tend to discount hit pieces from the New Yorker against thought leaders.
A
I agree. And I will say, I would not want to be in Sam's shoes. I would not want to be the head of a frontier lab. It's exciting and a thankless job; you're damned if you do and damned if you don't. Go on, Salim.
D
You know, a lot of this is personality gossip, so you can kind of write it off. But at some level it touches on systemic contradictions that are there, and I think a lot more will come out in the trial. But I'm kind of on Alex's side: this is more of a don't-feed-the-trolls thing.
B
Well, I'm 100% sure that Sam, Dario and Elon all believe that AI can make the world a paradise for a thousand years or can destroy it in the next five years, and that it hangs in the balance of a few decisions. And all three guys trust themselves and their own perspective on it.
A
Yeah.
B
And they're not going to let go of that because the world's at stake.
A
You know, the term I like to use is holding these two outcomes in superposition. Right. We have to manifest one of those outcomes, and hopefully it's the abundance outcome. Let's take a listen to a video by Sam Altman, and then we'll talk about it. It was a little bit of a chilling video. The full one is about three times as long; Gian cut it down for us. It's important to have a conversation about what Sam is saying here.
E
In the next year, we will see significant threats we have to mitigate from cyber. And these models are already quite capable and will get much more capable. And then on bio, the models are clearly going to get very good at helping people do biology at an advanced level. Wonderful things are going to happen there; we'll see a bunch of diseases get cured. Someone is going to try to misuse those, and I think we can mitigate that by the companies aligning the models and having good classifiers and good safety stacks. But we're not that far away from a world where there are incredibly capable open source models that are very good at biology, and the need for society to be resilient to terrorist groups using these models to try to create novel pathogens is no longer a theoretical thing, or it's not going to be for much longer.
C
There could well be a world-shaking cyber attack this year that would get people's attention. It sounds like you agree with that.
E
I think that's totally possible, yes. I think to avoid that, it will require a tremendous amount of work, also in a sort of resilience-style approach. Again, it's not just make one AI model safe; it's defenders, you know, cybersecurity companies, the major platforms, the governments, using this technology to try to rapidly secure their systems, the open source stack, all of that.
A
What's the case against nationalizing OpenAI and your competitors?
E
And in a different time, I think it would have happened. If you look at some of the great, expensive infrastructure projects of history, or just scientific progress projects, things like the Apollo program, the Eisenhower highway system, the Manhattan Project, these were government projects. And in a different time, I think the creation of AGI would have been a government project. The case against nationalization would be that we need the US to succeed at building superintelligence in a way that is aligned with the democratic values of the United States before somebody else does, and that probably wouldn't work as a government project. I think that's a sad thing.
A
He is a brilliant communicator, very compelling, and he's been out front, taking a lot of arrows as a result. Putting aside whether or not he lies or is trustworthy, what do you guys think of his warnings of an imminent cyber attack? One point of view is that this is fear-mongering, that he's basically trying to divert people's attention from the New Yorker article, from all the criticism of OpenAI's financing and them being second to Anthropic. Or does he truly believe that's going to be the case?
B
Well, both are true. I think he's 100% in alignment with Eric Schmidt and Elon Musk; they're all saying the exact same thing. It's absolutely true. But that doesn't mean you say it in a public forum. He's also saying it in a public forum to say, look, let's not be petty here, let's not talk about my personal life. We're in a moment in time that's much more important and much bigger than little petty arguments.
C
So it's both. I think what he underlines is the importance of defensive co-scaling. What's really important is that the defenders have capabilities proportionate to the attackers'. We don't want to find ourselves in a world where, say, a nation state, and maybe you don't like that nation state, has all the vulnerability-discovery capabilities and is able to unearth every vulnerability everywhere with no defense. You don't want a zero-day against civilization, in other words. And I think the ultimate meta-defense against a civilizational zero-day, which is what Sam is ultimately warning about, whether it's a cyber zero-day or a bio zero-day, is to make sure that those on the defense side also have comparable capabilities. This was one of the wise elements in the earlier days of OpenAI as well: making sure that these new superintelligent capabilities were smoothed out and made broadly available. You don't just want attackers to have the capabilities; you want defenders to have them too. Going back to Project Glasswing with Anthropic, same idea: you want to make sure that all of these new superintelligent capabilities are evenly distributed. That's point one. Point two, I would note we mysticize a little bit the essence of a cyberattack. What would the ultimate cyberattack be? It's not actually that complicated, and this isn't, for the avoidance of doubt, a recipe for a cyberattack. All it really takes is something as simple as, say, some new model discovering, through a mathematical innovation, a way to invert a popular cryptographically secure hash function. If an advanced AI can solve math, as I've discussed previously, to enough of a degree that it's able to invert a popular hash function, that's a major problem for a variety of cryptographic systems. And that's one possible basis for a broad civilizational cyberattack.
It's also really easy to benchmark. There were rumors earlier, in the earliest days of reasoning models, unconfirmed rumors, I should note, that OpenAI had been using the ability to invert certain hash functions, ones that were popular and thought to be cryptographically secure or somewhat secure, as a basis for benchmarking the development of their early reasoning models. So far from this being some sort of exotic possibility, I would say it's borderline guaranteed that there will be some sort of broad-scale cyberattack attempt, if for no other reason than that the target of such an attack is an incredibly tempting benchmark for measuring the improvement of reasoning capabilities.
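Alex's point about hash inversion can be made concrete with a toy sketch. This is purely illustrative and not any lab's actual benchmark: it truncates SHA-256 to a handful of bits so that a brute-force preimage search is feasible, and the expected work roughly doubles with every added bit, which is exactly why inverting the full 256-bit function by search is considered infeasible.

```python
import hashlib
import itertools
import string

def truncated_sha256(data: bytes, bits: int) -> int:
    """Return the first `bits` bits of SHA-256(data) as an integer."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - bits)

def brute_force_preimage(target, bits, max_len=4):
    """Exhaustively search short lowercase strings for a truncated-hash
    preimage. Expected work scales as 2**bits, the point of the example."""
    alphabet = string.ascii_lowercase.encode()
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = bytes(combo)
            if truncated_sha256(candidate, bits) == target:
                return candidate
    return None

# At 16 bits (~65k possibilities) the search succeeds almost instantly.
target = truncated_sha256(b"cat", 16)
print(brute_force_preimage(target, 16))
```

At 16 bits this finds a preimage in milliseconds; at 256 bits the same loop would outlast the universe. That gap is why a model that could invert such a function mathematically, rather than by search, would break so many systems at once.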
A
Would you have any idea when Spud is going to be released? Has there been any news about that?
C
I hear rumors that it could be within a day or two. I don't know, but imminent.
A
So again, I go back to a point I made earlier. It's also been reported that Spud will be of equal capability to Mythos or more. So on one hand you have Anthropic saying, hey, Mythos is super powerful, we cannot release it, we're going to do it in a controlled fashion, we're going to make sure it doesn't have any zero-day impact. And then Spud comes out: oh, we're behind Anthropic, we need to release it immediately and get in front of them. Same situation that happened when ChatGPT got released while Google had its own versions earlier. What do you guys think about that? That's concerning to me to some degree.
D
Just back to the prior conversation, I have a couple of thoughts around this. One is the cynical-hat view: Sam's coming out with this right after Anthropic is getting a lot of attention for dealing with Project Glasswing. Also, I think that ties to your Spud announcement. I think the risks are very real, but whoever frames it gets to shape the governance regime, and that's what Sam's trying to do. The need to deal with this is very high; I think that's huge. So I tend to take the simpler view on this.
B
Well, look, at the end of the day, the solutions are straightforward. We're just not doing them. It's just frustrating as hell.
D
It goes to the need for the co-scaling, the defensive co-scaling.
B
Look, if somebody is mixing chemicals in a basement to make a chemical or a biological weapon, it's very hard to know they're doing it. If somebody's using an AI model and prompting it to do something evil, and you can see their prompt history and their compute, it's easy, easy, easy to track. There's just no regulation, and no government is even trying to put infrastructure in place to track it. But we'll figure it out. We're not going to figure it out until after something really bad happens, though, and I think it'll be a lot better if it's a cyber attack than if it's a biological attack. So I'm hoping for the same thing Eric Schmidt was saying.
A
Eric Schmidt scenario.
B
Yeah, yeah. Just we need that wake up call though, because like, you know, you talk to anyone in government.
A
It's sad.
B
Come on, man, we can do this. Let's get on it. David Sacks is really the only guy thinking about it. It's not enough. We need a thousand X that, ten thousand X that, and it's got to be global. It can't be just one government.
A
By the way, we're going to have a conversation soon with Michael Kratsios in the US government. I had lunch with Michael in Miami at FII, and he's agreed to come on the pod. So a conversation with him, which will be great. Michael is overseeing a lot of this within the government, including quantum, which we'll be talking about soon enough.
C
Peter, if I may, I would also just underline the risks of not releasing new capabilities: sooner or later, attackers will have these capabilities as well. We don't want to wind up in a world with strong asymmetries in vulnerability-discovery capabilities. I'll also remind everyone that 150,000 people die per day on Earth, and every bit of pause or delay also runs the risk that we're delaying AI discovering cures for longevity and diseases and all manner of other problems that afflict humanity, well outside the cybersecurity realm.
A
Alex, really important point, and that is, in fact, the shielding that OpenAI uses to a large degree. We can't slow things down, because if we do, it means less education, less health, fewer new breakthroughs. It's a balancing act, and I totally get it. I am, at heart, an accelerationist, but I'm very curious about the ethical and moral dilemmas that the leadership of these companies go through in the debate over whether to release. And on that question of whether to release, there's another question, which is: are these frontier labs holding back the capabilities of their models so they can use them internally to generate breakthroughs on their own? I assume the answer is yes.
B
This Anthropic delay is the first real holdback I've seen. I mean, it's only a few weeks, hopefully, or a month or two, but it's a real, obvious holdback. And they're all diverting massive amounts of compute to internal use for self-improvement, so that's another form of holding back in a big way.
A
Yeah, so those are.
B
Those are the real things going on.
C
And they may also be uneconomical to offer publicly. I think this point maybe doesn't get made often enough: if you have a really large model internally that hasn't been distilled yet, it may be much more capable, but maybe it's so expensive that it's not worth the resources of making it publicly available. Then you distill, and you finally have a model that lies on the cost-versus-performance optimal frontier. So what we haven't seen from Anthropic regarding their Mythos model is where exactly it lies on the performance-versus-cost frontier. It may actually be uneconomically expensive to run, in which case, even if it has extraordinary capabilities, maybe many people will choose not to run it. We just don't know yet.
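The cost-versus-performance frontier Alex describes is just a Pareto frontier over models. A minimal sketch, with entirely made-up model names and numbers:

```python
def pareto_frontier(models):
    """Return models not dominated by any other: a model is dominated if
    some other model is at least as cheap AND at least as capable, with a
    strict improvement on one of the two axes."""
    frontier = []
    for name, cost, perf in models:
        dominated = any(
            c <= cost and p >= perf and (c < cost or p > perf)
            for n, c, p in models if n != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Hypothetical (cost per million tokens, benchmark score), purely illustrative.
models = [
    ("small-distilled",    1.0, 60),
    ("mid-tier",           5.0, 75),
    ("flagship",          30.0, 90),
    ("undistilled-giant", 200.0, 92),  # barely better, vastly costlier
    ("legacy",            10.0, 70),   # dominated: mid-tier is cheaper and better
]
print(pareto_frontier(models))
# -> ['small-distilled', 'mid-tier', 'flagship', 'undistilled-giant']
```

"legacy" drops off the frontier, and while the undistilled giant technically sits on it, it buys two benchmark points for roughly seven times the cost, which is the scenario where a lab might keep it internal and only serve the distilled points publicly.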
A
Really important point. All right, a fun subject. Topic number five for us today, gentlemen: the one-person unicorn era. One man, his brother, a $1.8 billion valuation. AI entrepreneurship has changed forever. So here's the story: it's Medvi, $401 million in revenue in year one. This is Matthew Gallagher's health tech company, basically selling GLP-1 drugs. Very fascinating. It's not actually a one-person unicorn, since there are two humans involved, but conceptually, you know, Salim, you and I have been talking about this forever. And you know what the very first thing I did when I read about this was? I'm texting Alex, saying, okay, Alex, what is the one-person unicorn we're going to create together?
C
Well, it happened. I think in a past episode, weren't we debating when the first one-person unicorn would happen? And as I recall, I made the prediction that it probably already existed.
D
It's already there. Yeah, you said that.
C
And you know, the $401 million was for last year, and from what I gather, Matthew Gallagher hired his brother after he achieved $401 million in ARR. So from a valuation perspective, he was a one-person unicorn, at $400 million ARR, before he hired his brother. And this happened last year. So I'll claim a little bit of credit for having predicted it already existed. Here it was. They've taken some flak since the announcement for some of their marketing, and I think there are some issues with the FDA, though maybe everybody's just jealous, regarding how they market.
D
Sorry, go ahead.
C
Regarding how they market their GLP-1s. But, assuming the financials are accurate, this is apparently a case where we're now definitively in the era when a single person can create a unicorn using AI. And I should note, friend of the pod Alex Finn, who appeared previously, also has a new company named Henry Intelligent Machines.
A
Supported by you.
C
Supported by me, and indirectly by you. It's trying to make this broadly available to the masses, to enable everyone, not just Matthew Gallagher with his GLP-1 startup, to create one-person, AI-based conglomerates that achieve universal high income. That's the aspiration.
A
Medvi is going to spawn thousands of entrepreneurs who take their shot. You no longer need a team. I think what you need now is judgment and taste and a squadron of agents. Salim.
D
Yeah, I've got a bunch of things to say. First of all, find your MTP and start using AI agents to build it. For God's sake, everybody, just do that. Number two, coordination overhead is imploding. That's what this shows, right? AI shrinks the minimum viable team to, like, one, and it radically expands your minimum viable ambition, which is amazing. And I think the headline here should be that AI founders are shifting: you're arbitraging complexity at a scale that used to require entire departments. A company doing code, ads, support, analytics, all with AI, is basically a prototype of the whole AI-native firm, and it's shifting everything away from capital and headcount toward orchestration skill. And so this is the entire principle of what we've been talking about: every company needs to create an AI-native digital twin. Last week we had a review of the organizational singularity model that we've been working on with my community, so that's past that tick box, and everybody's super excited about it. In the next week or two we'll have it ready for public viewing.
C
It's hidden behind the event horizon, Salim.
D
It's hidden, but we've actually done some work to put in a chapter on how you achieve the domain collapse that you talk about in Solve Everything: how do you organize for it, and how can you create an organizational design to achieve domain collapse? And whatever you pick, I think the two put together will be unbelievably powerful. So I'm looking forward to showing it to you guys.
A
So I'd like to take a second and dissect, for those entrepreneurs listening, what you need to do if you want to take a shot at your one-person unicorn. And is Medvi's business case uniquely suited for this, or can we do it for anything? Dave, thoughts?
B
Oh, there are so many opportunities here. Basically, what's going on is that any complicated product or service that's difficult to explain to a consumer, the AI is phenomenal at. But Anthropic and OpenAI and Meta can't do that directly, because there's way too much negative PR; look at the New Yorker article we were just discussing. They don't want to be involved in that. So it's left to the entrepreneurs to build the companies. I don't know the full revenue base here, but if it's all GLP-1, there must be a thousand parallel products that are complicated to explain, where you just prompt and tune the AI. And as the consumers are talking to it, you're gathering all that data and feeding it back into improving it, so the next consumer gets an even better experience. You get that virtuous cycle.
A
Yes.
B
So now there are thousands of these.
C
I also tend to think they'll follow some sort of power-law distribution. So if there are indeed thousands of companies to be built like Medvi, there are going to be millions of smaller businesses. And in my view, one of the ways we realize universal high income, if that is economically realizable at all, will be with individuals overseeing conglomerates of lots of smaller-scale businesses. I'm much more confident that can scale to millions or billions of people, each being entrepreneurs. How many times do we see in the YouTube comments people saying, ah, you guys are overly bullish on everyone becoming an entrepreneur. Not everyone wants to be an entrepreneur; it's not for everyone. You guys are overconfident that entrepreneurship is for everyone. But my counterpoint is that in the era I think is starting to dawn, what human entrepreneurship looks like is simply overseeing a fleet of AI operators, and that completely transforms the nature of entrepreneurship. It looks a lot more like reading and responding to emails and engaging in Slack conversations than it does like running a business today. And I think that transforms entrepreneurship into something that people of all temperaments could take on.
A
And having taste, having an opinion, and having an MTP, those are elements that anybody can have.
C
It's like, yeah, anyone can have a limbic system, and everyone can be the limbic system for these AI fleets. The one-person entrepreneurs are going to be the limbic systems of one-person unicorns.
D
I think this is such an important point, because we get this objection all the time. We almost want to do a full episode breaking it down for everybody and taking them through a step-by-step arc where they can form their own conclusions. The idea that as an entrepreneur you have to wear multiple hats, that it's unbelievably difficult, that you have to take on extraordinary risk and put your family at risk, all of that washes away in the face of this. So this is such a great point you're making.
B
We had a meeting with the Minerva AI team earlier today, and you've heard of the rule of 40: a really, really valuable company passes the rule of 40. You take your profit margin, say 20%, and your growth rate, say 20%, and if they add up to 40 or more, you're a killer company. They're now a rule-of-200 company. With their tiny headcount, you know, go baby, go. Fantastic.
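The rule-of-40 heuristic Dave cites is simple arithmetic: profit margin plus revenue growth rate, both in percent. A minimal sketch:

```python
def rule_of_40(profit_margin_pct: float, growth_rate_pct: float) -> float:
    """Rule-of-40 score: profit margin plus revenue growth rate, in percent.
    A score of 40 or more is the conventional bar for a healthy SaaS company."""
    return profit_margin_pct + growth_rate_pct

# Dave's example: 20% margin + 20% growth exactly clears the bar.
print(rule_of_40(20, 20))   # -> 40
# A "rule of 200" company: e.g. modest margins but hyper-growth.
print(rule_of_40(10, 190))  # -> 200
```

The specific margin and growth numbers here are illustrative, not Minerva AI's actual figures.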
A
to hit the last two bullets here. So the first one is that a recent field study experiment of 515 startups found that AI reorganized firms. In other words, firms that reorganized around AI used 44% more AI tools, they completed 12% more tasks and they generated nearly two times higher revenue. 1.9x higher revenue. That doubling of revenue is from process change, not from product change. Really important. So again, the data is critically the other bullet on this chart. Dave, you and I talk about this for Link Ventures and what we're seeing out of the MIT and Harvard ecosystem is that the average AI unicorn founder has dropped from 40 years old to 29 years old since 2020. So over the last 16 years, we've seen it go down from 40 down to 29. Any comments, Dave?
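For anyone who wants the arithmetic, here is a minimal sketch of the rule-of-40 check Dave describes. The numbers are illustrative; the rule-of-200 figure for the team he mentions is the show's claim, not verified data.

```python
def rule_of_40_score(profit_margin_pct: float, growth_rate_pct: float) -> float:
    """Sum of profit margin and revenue growth rate, both in percent."""
    return profit_margin_pct + growth_rate_pct

# Dave's example: 20% margin plus 20% growth just clears the bar.
assert rule_of_40_score(20, 20) >= 40

# A hypothetical "rule of 200" company: 40% margin with 160% growth.
assert rule_of_40_score(40, 160) >= 200
```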
B
Yeah, the Wall Street Journal did a great article on us in the weekend edition. Look it up. They actually wanted to cover everybody, but they really focused on Vocara, just because that particular team is so cool they couldn't resist. Tons of great pictures and the whole storyline. If you want to see how it's actually done and get the inside scoop, just read the article in the Journal. Because that age-29 average...
A
Let's drop that article in the show notes if we could.
B
Yeah, that average age of 29 is actually overstated. It's even younger if you look at the median, because a couple of older guys blend into the average. But when you look at it, there's no barrier. You just have to be fearless, and young people tend to be more fearless. And there's no skill-set barrier either. If you'd tried to start that company we were just talking about previously, you'd have needed engineers to build the websites and seed capital to hire the engineers, and it would have taken you six months to get to market. Now you just vibe it up. You don't need the capital.
A
You make this point and I make this point when we're talking to large companies. We say, listen, these entrepreneurs out there aren't smarter than you. They're just more fearless. They're willing to take more shots on goal, on crazy ideas, and fail over and over again until they hit something, while everybody else is trying to make sure they don't go backwards or lose anything or get embarrassed. Yeah.
D
You know, just to bridge a couple of concepts here: you guys talk about domain collapse. We've now had domain collapse in entrepreneurship. If you have a purpose and you're motivated, you can go do anything you want. There's almost nothing that blocks you from getting in.
B
I'll tell you what else, too.
A
Except, Salim, you self-limit. People self-limit way too much.
B
They do, and they procrastinate, which is the worst thing you can do right now. If you're in a program at some investment bank, or in a training program, get the hell out. Now. Because this is such a golden moment, and it'll last a while, but not forever. Then we're going to have ASI very soon, and there may be other things that happen that are very hard to predict. But this is so reliable right now. It'll change your life. You just can't lose a day. You've got to go.
C
Yeah, I do think there's a limited window.
B
Yeah. I'd love to talk with you about what's beyond the window, but as an entrepreneur, don't even think beyond the window. Just focus on what works here and now, because Alex is right: it's a limited window. And it's all boats rising with the tide. You don't have to kill somebody else; you just need to get in there and fill a void.
A
So important, right? Yes, it's a rising tide for everybody. Welcome to the health section of Moonshots, brought to you by Fountain Life. You know, my mission is to help you use the latest technologies, including AI, not just to do your work at home and teach your kids, but to help you live a long and healthy life. I'm here today with an extraordinary physician, the chief medical officer of Fountain Life, Dr. Don Musailam. Don, let's talk about cancer. I know from the member database we have at Fountain Life that among members who come in thinking they're healthy, it turns out 3.3% of them have a cancer in their body they don't know about.
F
That's right. You know, the majority of cancers we screen for aren't necessarily the ones taking lives when found at a late stage. We know that when cancer is found early, the chances for cure are much higher, and it's much easier to treat a cancer found early versus one found late. What we're finding in our members is that over 3.3% had cancers that otherwise wouldn't have been detected.
A
Yeah, you know, it's interesting: you don't feel the cancer until stage three or stage four. And if you don't know what's going on inside your body, it's like driving your car with your eyes closed. And you can know. So when members come through Fountain Life, how do you detect cancers?
F
We're doing full-body MRI, and we also do early cancer-detection screening. This is very, very important, and these are not typical tools used in the conventional care setting when it comes to prevention. This is a hard thing, because currently these are not studies that insurance would yet cover. But the goal is to collect these numbers, do the research, and work hard to democratize wellness.
A
Yeah. At the end of the day, you can know what's going on inside your body, and it's your obligation to know. So check out Fountain Life. You can go to fountainlife.com/peter to get access to the latest technology to help you detect cancer at the very beginning, at stage one, when it is curable, before it gets to stage three or stage four and you're in a world of hurt. All right, let's jump into our sixth topic: the $300 billion data center crunch. First and foremost, Dave, we called this one, buddy.
B
You know, well, we gotta. We gotta dig up a quote or two.
A
Lip-Bu Tan and Elon coming together. Now, when we were pitching this to Elon, twice, it was: you should buy Intel. Okay, he's partnering; he still might buy it. Intel says its ability to design, fabricate, and package chips makes Terafab actually work. The first pilot phase for Terafab is $25 billion, which could mean revenue for Intel of $4 billion a year. The stock is up, I think, 40% since this was announced. Intel is contributing its 18A process node, a 1.8-nanometer-class technology being built in Arizona and Oregon. Reminding everybody: Terafab is one terawatt per year of AI compute, 50 times the current global output of 20 gigawatts. Pretty amazing. Surpassing all the fabs on Earth.
D
Yes.
B
It's the most exciting thing in the world to me. I'm kind of a chip geek; I was actually the first at MIT to build a neural-network AI chip, way back in the early days. And I just freaking love this. You could see it coming a mile away; there's no other way to get it done. And this is the first pitch of the first inning of this battle, so it's going to be really, really fun to watch it evolve.
A
It's exciting, you know. And Lip-Bu Tan, when I met with him last, somewhere in the US, and in Saudi, he did say he'd come on the pod. I'll have to reach out to him again and bring him on for sure. It's so exciting to see these companies coming together, and this is the way Elon can jump-start Terafab. And Alex, you made the brilliant point that this is one of the most important things politically and for world peace. We can see this could help avert
C
World War III. With a 1.8-nanometer node process and Elon's vertical integration with Intel, this could help avert or otherwise interfere with a Chinese invasion of Taiwan, the disruption of the TSMC supply chain, and the global depression and world war that might be caused by any such invasion. There are tremendous geopolitical implications here.
A
Amazing.
B
Well, that's all inning one, too. Inning two is super exciting, because Elon is already thinking about next-generation computing substrates: photonic, then subatomic and beyond. You can't work with TSMC on that. They're like a body shop, a purely monopolistic, optimized one; they're not an innovator at all. I'm really going to piss somebody off; maybe I shouldn't say that. But Intel has a long history of innovation. It's a great partner to work with.
A
And Lip-Bu Tan is an amazing CEO. Look at his track record at the other companies he's come in to run: massive turnarounds and success stories.
D
Amazing background.
A
Yeah. Now, this chart should scare all Americans silly: 50% of US data centers are being delayed due to a shortage of electrical equipment sourced from China. Look at this pie chart. 17% of the data centers are uncertain; that may be due to financing or to regulations, since a lot of jurisdictions are making data centers illegal. And 50% are delayed or being canceled. That leaves 33% of the projected data centers actually being built. This is existential for AI, and, as you said brilliantly, Alex, it's driving data centers into orbit, where we don't have to ask anyone's permission.
C
To the moon, Alice.
A
To the moon.
C
Or maybe to the moon. Anthropic. Not quite clear.
B
I'll give you my spin on this. The data center business is in full boom, and all the business-school guys come rushing in, like they always do.
A
Yes.
B
And they go out and raise a ton of capital and tell everyone, oh, I'm going to build a data center in Wyoming, I'm going to build a data center wherever, and you can't get the chips. Did you think maybe you needed some chips for your data center? I think that's what's actually going on here, because every chip that comes out gets used instantaneously. There is not an idle memory or processing chip anywhere in the country. So, by definition, they overbuilt racks and just didn't plan ahead for the chips. And also, Jensen is locking up all the supply. I don't know if they anticipated how connected he is. You thought, oh, I'll just go to a website and buy a bunch of chips? It's not there anymore, guys. Sorry.
A
Which is why Elon's vertically integrating, as he's always done.
B
For sure. For sure. Well, he's going to try to 100x the production. It's not just own your own future; it's 100x your future.
A
So I pulled this next chart up because I found it fascinating. I've always believed in my heart of hearts that Google is the dominant force and will win in the long run. And here it is: Google dominates AI chips, a near-monopoly, owning the majority of specialized AI chips globally, TPUs and H100s. It's an incredible story. And you mentioned this, Dave, on stage with Eric Schmidt: Google's chip ownership reflects extraordinary foresight. Oh my gosh, they started building TPUs in 2016, before anyone was thinking about this stuff.
B
Yeah, somebody has to write that story, because Eric said, you know what, Larry Page gets all the credit; he saw it coming way before anyone else. I'd love to interview all those guys and actually write that story.
A
I wish. You know, Larry's gone underground. I would love to reconnect with him. Sergey is there and in the thick of it. Larry had voice-box issues and, I think, got out of the public eye. But yeah, a brilliant individual.
B
Let's go talk to him. Well, Sergey's in the office. I'll be in California next week; maybe I can track him down and get to Larry through him. Or maybe he'll text you after he hears this on the pod.
A
So here's a question. If Google owns the majority of specialized AI chips globally, TPUs and H100s, when are they going to run into monopoly concerns? Sundar has to be playing four-dimensional chess around this.
B
Yeah, yeah. They have to start thinking about the next election about a year before the election, because right now they have no problem with this administration; it's all about beating China at all costs. But look, that changes.
A
Look at this chart. This teal color up top is Google, versus China. I love it when you're comparing companies with countries, right? It's like SpaceX versus Russian launches, SpaceX versus Chinese launches. And here it's Google and China, then Microsoft, then Amazon, and then Oracle, xAI, and others. Google is just dominating.
B
Yeah, well, you talked earlier about people starting to soft-sell and keep the drama down. Google's way ahead on that curve, because look at how far they've come, and they hardly ever talk about it relative to where they actually are. That's because they don't want the antitrust breakup. They almost lost Chrome; they don't want Chrome ripped out and handed to Perplexity. They dodged that bullet. Under a different administration, though, that would have happened, and they'd be broken into two or three companies by now.
A
Crazy.
C
I'll maybe take the other position. I can't visually calculate the Gini coefficient just by eyeballing it, but this looks to me like a competitive market. And let's also remember that Google, with their own AI chips, has multiple customers, internal and external. They're servicing their search engine, they're servicing Google Cloud, they're servicing ads. And I think people forget Google owns something like 14% of Anthropic. Google is servicing Anthropic and external frontier labs.
A
And they're building data centers for Anthropic. Yeah, for sure. And by the way, there is a beautiful relationship between Google and Anthropic, between Dario and Demis. There's a very close relationship there, which warms my heart.
C
Helps that Google's a major shareholder, I'm sure.
A
Yeah.
B
Well, it also helps that those two guys so deeply care about safety, down to their core. It's kind of nice that two of the most powerful guys are cooperating on it even though they're competitors in the market. But then again, they are competitors in the market. What's antitrust going to think about that? Hey, you guys are hanging out having shots; you're not supposed to do that when you're competing. What's going on?
C
So, yeah, the singularity makes for strange bedfellows, where you see model vendors competing at the infra level. I think we'll see quite a bit more of that.
A
All right.
B
I can tell you, antitrust has very little to do with merit and a lot to do with whoever's in charge.
D
It's politics. I will make a point here: whatever the next administration is, the strategic global importance of this means they will let things be. That would be my prediction.
A
Yeah, yeah. They're not going to slow them down, for sure.
B
Yeah.
A
All right, let's go to our seventh segment, our final segment before we get to our AMA: proof of abundance, the world is getting better. Everybody, there are so many negative stories out there around AI. We say here on the Moonshots pod that this is the most exciting time ever to be alive, a time when we can make our dreams come true. And we want to demonstrate this coming age of not just abundance but extraordinary, sustainable, super abundance. So every week we're going to identify some of the articles and breakthroughs out there that are driving this, to give you conversational capital and to take you out of scarcity into an abundance mindset. A few different things here. This past week, renewables hit 49.4% of global electricity capacity. It's extraordinary; we're seeing renewables skyrocket. Solar drove 75% of the new additions, out of 5.15 terawatts of renewables. This next one just warms my heart: lithium battery prices are down 99%, to less than $100 per kilowatt-hour versus $10,000 in 1991. Remember the conversations around electric cars? Can we have enough batteries? Is it going to be too expensive? Well, the markets have driven the price down, and we don't have a lithium shortage on planet Earth; we have plenty of lithium, and new battery chemistries are coming. This one is very tangible: the price of lab-grown diamonds has fallen below a thousand bucks. The average price of a two-carat lab-grown diamond has fallen 80% since 2020, to a thousand dollars, versus a natural two-carat diamond at $22,000 to $28,000. Pretty extraordinary. And guess what? Your lab-grown diamond is perfect, and no child labor. Really important.
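For context on that battery number, here is a quick back-of-the-envelope sketch of the implied average annual price decline, using the episode's endpoints (roughly $10,000 in 1991 down to under $100 today) rather than independently verified figures.

```python
# Back-of-the-envelope: compound annual decline in lithium battery prices,
# using the episode's endpoints (illustrative, not independently verified).
start_price = 10_000.0  # approx. $/kWh in 1991, per the episode
end_price = 100.0       # approx. $/kWh today, per the episode
years = 2025 - 1991

annual_decline = 1 - (end_price / start_price) ** (1 / years)
print(f"~{annual_decline:.1%} average price decline per year")  # ~12.7%
```

A steady decline of roughly an eighth per year, compounded for three decades, is what a 99% drop works out to.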
B
It's so funny. In all the James Bond movies, the evil guy carries around a tube of diamonds to pay for whatever. Now it's just Bitcoin.
A
Yeah, well, in science fiction, like "The Man Who Sold the Moon," diamonds are basically pebbles littering the ground.
C
I mean, it's just carbon, dense carbon. So much for De Beers, which, as I understand it, is in severe financial straits at this point as a result of lab-grown diamonds.
A
Thank goodness. De Beers' public-relations campaign was one of the most successful in human history.
D
Yeah.
A
It was, what, three months of salary, young man, that you should spend on your diamond? Crazy.
B
What do you think people should give to their fiancée now?
A
Bitcoin?
B
Obviously not. How do you wear it, though? On a chain? Oura rings, obviously.
A
Oura rings, yes, for sure.
B
An expensive designer Oura ring.
C
That's what people are doing.
D
I have a couple of thoughts around this slide.
A
Yes, please.
D
You know, the importance of this is that it shows abundance is a pattern across multiple domains; this is not a slogan. And the big challenge we're going to have is: how does society design institutions that distribute this abundance in a reasonable way? That's the challenge we're going to have to deal with. But I love these stories; they're so awesome across the board.
A
Yeah. AI created 640,000 new jobs in the US in 2023-2025. In our next WTF episode, we're going to talk about the economy and the conversation going on right now, like Marc Andreessen saying, no, the loss of jobs is a myth; we're going to create more jobs, and the economy is going to skyrocket. We'll have that debate then. Salim, you identified this fifth article, which I loved: robots installing 100 megawatts of solar at one panel per minute. Let's take a look at this image. Here's Maximo, a robot that is deploying 100 megawatts of solar in the California desert. If I had more time, I would have done the quick calculation of how many Maximos we need to catch up with China.
D
Yeah, I mean, this is where abundance becomes very, very tangible. Once you get robots, energy, and AI all reinforcing one another in the innermost loop, boom: abundance stops being theoretical, and it's so visible right now. So this now comes down to the distribution problem. We've had food abundance for decades; it's been a distribution problem. Energy is getting to the same place. It's just awesome to watch. There's also a whole bunch of secondary stories happening around the explosion of solar across Africa, and Pakistan is now generating most of its energy via solar.
A
This is absolutely going to take over now, 100%, buddy. It's a beautiful time. All right, let's go to our AMA, questions for our mates. Gentlemen, we have four on the board. Salim, do you want to choose the first one?
D
I'm going to leave the singularity one, because I think somebody else is going to pick that, and take question number one: as AI drives marginal cost towards zero, what prevents abundance capture, where corporations just pocket the savings as profit while keeping prices high? This is from viewer Ookquotes Remix. Nothing will prevent it automatically. Technology creates abundance, but institutions decide who captures it. If markets stay concentrated, abundance will pool at the top. If you open up interfaces, increase transparency, decentralize, and lower barriers to entrepreneurship, the gains spread. So governance design now matters as much as technological progress, which is where we've been focusing a lot of time and effort over the last few weeks and months.
A
Okay, Alex, I'd love you to take your take on number two.
C
Yeah, I have to take number two; it was designed for me. Question number two: are we in the singularity or not? You keep saying we are, but Eric Schmidt said at the Abundance Summit that we're not. What's your take? This is from Brand Karma. Yes, we're in the singularity. Why? Well, let's put aside the superficial response that you say potato, I say potato: you call it an intelligence explosion, I call it a discontinuity. There's some subjectivity to the definition of singularity. The term has been used and misused over the years: coined originally by Vernor Vinge, then popularized by Ray Kurzweil, friend of the pod, then even more popularized by Peter, and maybe used or abused various times by myself. Different people have used it to mean different things. Ray used it, in his original definition, as more of a mathematical singularity, an event horizon beyond which we couldn't see what would happen next due to the intelligence explosion, citing I.J. Good. I agree with Ray on many things, but one area where I don't is this notion of the singularity as an impermeable barrier or event horizon beyond which we can't see due to rapid progress. I don't think that's true at all. I feel like I have, if not a singular vision, no pun intended, lots of different ideas that collectively map a reasonable probability distribution for what happens after the intelligence explosion. So scratch that definition off. Then we get to the notion of a singularity as a step function, a discontinuity in progress. I don't think that definition holds water either. Based on the preponderance of evidence, every time people expect a discontinuity, it ends up being smooth if you look closely at it.
And if you look at this intelligence explosion that we're in the middle of, starting perhaps in the summer of 2020 with the first GPT-class models that arguably represented general-class reasoners, large language models, or few-shot reasoners, I can draw a smooth line from the availability of GPT-1, -2, and -3 to where we are today: just a sequence of smooth sigmoids, each an incremental innovation. But if you stack them cumulatively and you go to sleep for a few years, look away and look back, it looks like a discontinuity. It's not a discontinuity. Don't sleep through the singularity, because if you do, it'll look like a discontinuity and you'll think it was a mathematical singularity when it wasn't. So that leaves us with my operational definitions of the singularity, of which I have a few. One is every sci-fi trope, everywhere, all at once, which I think we're living through. Another is the singularity as a set of instrumentally convergent inventions and discoveries that were all technologically predestined to happen at once. I think we're living through that as well. I'll pause the monologue and just say: I think every other reasonable definition of singularity doesn't hold water, because every time you try to make the singularity a point in time, it breaks, and progress just doesn't work that way. Therefore, we're in the singularity.
A
Amazing. Dave, you want to take number three?
B
Number three. Okay. You have so many of my favorite Alex quotes in just that one.
C
How many cliches can I pack into one monologue?
D
You needed a microphone, Alex, that you could just drop.
B
You just. Just.
D
You needed a microphone.
C
I need a piano keyboard to just pop out my greatest hits.
B
I think by definition, a cliche would have to have been invented by somebody else if you made it up. It's not a cliche.
C
Talking points, then.
B
We're going to be on stage first thing tomorrow morning together, Alex.
C
I know. So I'm literally going to be sleeping through the singularity tomorrow morning when we're on stage.
B
Just say everything you just said.
A
And, guys, listen, I want to say thank you. Thank you for recording this late. For those of you who don't know, I literally landed at LAX two hours ago, rushed home, took a shower, and came on to record this episode. I was in Morocco for 10 days with the family, riding camels in the desert.
B
Oh, insert some pictures right there in this podcast. They're so fun.
A
Well, maybe I'll do it for the next pod. But hey, thanks for recording this one late. I didn't want to miss it. Okay, number three.
B
I get three. Okay: where's the liability in agentic AI? These agents could go out of control and wreak destruction. Our society is set up for human liability. What about AI insurance? This is from jeff5781. It's a great point, and it's actually not that hard a problem; it's another thing that's frustrating because nobody's working on it right now. The question is, where's the liability? Nowhere. The agent is anonymous; nobody knows who owns it. In theory, the author would somehow be liable, but who is going to know who the author was? So it's going to be a zoo. This reminds me a lot of when the Internet was new. We were running a bunch of companies, including one called Jobcase, and we were advertising on Google, and some competitor came in, advertised on Google, took all the users, and routed them straight to a fraudulent ringtone download. We went to Google and said, can you do something about this? They're taking all the traffic away from our legitimate company, and it's some Ukrainian group. Six months later, they got around to banning it. It was absolutely a zoo, and now it's all nice and cleaned up. This is a zoo, and it's going to be a zoo until it gets cleaned up. But Alex has mentioned on the pod many times that you can create new legal structures that make the individual agents liable, and then you can have insurance for them.
A
And we're going to have ASI to help us figure this all out.
B
Yeah, exactly.
D
We've seen this happen before, right? You need to mix product liability, operator liability, mandatory insurance layers, et cetera. We've done that for cars, aviation, and finance, so we'll figure this out. Right now, all our legal systems assume a human principal operating with clear intent, and agents break that model. So we have to reinvent a hybrid.
C
I have to add, on this topic: I was literally approached by an AI-insurance saleswoman earlier today at the Quin House in Boston. Seriously. I was sitting down having a lovely conversation; a woman walks over, overhears the conversation about AI, and says, oh, you guys should be aware, my company has started selling AI insurance; you all should get AI insurance. This literally happened just a few hours ago.
A
Insurance against the Singularity.
C
AI Insurance salespeople are a thing now.
D
But what are they? What are they selling?
C
What are they insuring against? AI misbehavior.
A
Oh, fascinating.
C
Your AI misbehaving. You can purchase AI insurance policies now.
A
Oh, my God. My AI made me depressed; I want my insurance policy to pay out. Oh, my God. Okay. By the way, I think reinventing the insurance industry is a massive opportunity for entrepreneurs out there.
D
Such a big one.
A
I'm so ready to disrupt that industry; it is so pathetically hundreds of years old. All right, number four. This is from Katiapis 656, a fellow Greek: once work becomes optional, would there be any reason to live in a big city? Will real estate in major cities collapse? There's no reason you have to live in a big city right now; plenty of jobs require nothing other than Starlink and a laptop, so you can telecommute. We're going to see autonomous vehicles and flying cars basically change the landscape of where we live. Flying cars? Yeah, they're coming in 2028, baby. And Elon posted about this: we're going to have basically caravans. I just came back from the Sahara Desert, where there were actual caravans. We're going to have caravans of autonomous vehicles with Starlink on their roofs, and people will live a nomadic lifestyle. So yes, there will be cities you want to go to, for human interaction, for theater. With Abundance360, as a summit, I was always worried we'd digitize it and become fully virtual. Just the opposite: we're selling out earlier and earlier because people want physical connection with each other. So we're going to need physical connection in the central cities, but you don't need to work there. You can go there for entertainment, to see the sights. It's interesting what is going to retain value in the long run. Especially post...
C
What's the long run?
A
The long run.
C
What time frame are we talking about?
A
Five years. When did five years become the long run?
D
Yeah, that's like way long.
A
I think Disney World is going to retain value. Large physical events are going to retain value as ASI...
C
I mean which real estate is going to retain value?
A
In five years? Not just real estate, but organizational structures that aren't digitized and fully replicated.
B
And I think minerals, minerals and mining, are going to see huge increases in value.
A
Yep, for sure. All right, let's go to the second page here. Salim, kick us off.
B
Oh, we got more.
A
And we'll speed run these.
D
I will take, from a financial standpoint: once autonomy becomes mainstream, why would anyone own a car? This is from Neil Williams 4300, and it links back to the city question. They mostly won't own cars, at least in cities; in rural areas I think we'll see car ownership maintained for a long time. But car ownership is an artifact of low-utilization economics. Once autonomy converts the car from a consumer product to a service layer, it essentially becomes a subscription model, and car ownership starts to look like owning your own elevator, something dumb like that. We've seen this precedent, by the way: go back to the music industry. You used to have seven or eight music labels selling you cassettes and CDs, selling you physical scarcity. Then we digitized music, automated it, and streamed it, and now you have iTunes and Spotify selling you abundance on a subscription model. That's what we expect to happen to transportation, but also healthcare, education, energy: anywhere we have physical scarcity, the abundance model will take over.
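Salim's "low-utilization economics" point can be made concrete with a toy calculation; every figure below is a hypothetical assumption chosen for illustration, not data from the episode.

```python
# Toy comparison: cost per mile of a privately owned car vs. the same vehicle
# run in an autonomous fleet. All numbers are hypothetical assumptions.
annual_fixed_cost = 9_000.0      # payments, insurance, parking, maintenance

owned_miles_per_year = 10_000.0  # a private car sits idle most of the day
fleet_miles_per_year = 60_000.0  # a fleet car spreads the same fixed cost widely

owned_cost_per_mile = annual_fixed_cost / owned_miles_per_year   # 0.90
fleet_cost_per_mile = annual_fixed_cost / fleet_miles_per_year   # 0.15

print(f"owned: ${owned_cost_per_mile:.2f}/mi, fleet: ${fleet_cost_per_mile:.2f}/mi")
```

The fixed cost doesn't change; only the miles it is spread over do, which is why high utilization favors the service model.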
A
All right, Alex.
C
All right, I'll take question number eight: data centers create wealth, but can you dive into how they create wealth for the locals specifically? This is from JKVT3443. Part of me wants to answer by saying, well, the inhabitants of the Artemis base on the moon, which is going to be manufacturing a lot of these data centers, I expect to be quite wealthy. I think frontiers are where wealth generically gets created; I've had this discussion multiple times with multiple Google founders, and I think the general consensus is that frontiers are what often lead to net wealth creation in the human economy. In some sense, we had for a while run out of frontiers. You could point to science as the final frontier; I think space is the more applicable frontier in this case. So how are data centers going to create wealth for locals? Well, we seem to be on a trajectory at the moment for moving data centers to space, and the space locals, I think, are going to become quite wealthy off the space economy. If I were to take the question slightly less giddily, I would suggest that for land-based data centers we have every indication, including recent US national policies, that because they consume so much electricity, they will increasingly drive local electricity costs down towards zero. There may in some cases be a spike of electricity prices in the short term; my expectation is that in the short term they create jobs, and in the medium to long term, where by long term I mean, like, five years, Peter's definition of long term, they will drive local electricity costs down to near zero, and maybe other utility costs as well, because they need so much of it and unlock so much value that they'll end up doing the moral equivalent of paying the taxes for all the residents of a given area.
A
And there's employment in the manufacturing of them. And then there's a cottage industry that grows up around the data centers. Data centers are going to be the central innermost loop, and then there are going to be ring roads being built out around them.
C
I should add one more snide remark on data centers creating wealth for locals. I do expect, on the timescale of five to 10 years, maybe longer, maybe sooner, many of the locals in our solar system are going to be uploaded humans, or derivatives of uploaded humans, who will actually live inside the data centers. So we wouldn't want to deprive them of their condos in AWS.
A
US East 1A data center, old age homes. I love it. Dave, you want to take 7?
B
Seven. Okay. With Elon's exponential ambition, does money stop mattering sooner than later? And will his ambitions drain supply lines in materials and talent, even with working robots? This is from know now 6361. A couple of ways I could interpret the question, so I'll take my best shot. Does money matter to Elon? Not at all. He's way beyond that. He cares now about the future of the world and being an interplanetary species, and that's his total focus. It takes money to get there; he doesn't want to lose all the money, but he has plenty. Will his ambitions drain supply lines and materials and talent, even with working robots? It's a great question. I think the answer there is no, just because of the way the timelines work out. He would exponentially expand at any rate he possibly could, but he's limited by ASML machines and a few other constraints that will keep us on Earth for three or four or five years. Then we'll be in space: we'll be mining in space, we'll be constructing in space, we'll be deploying all the dirty stuff in space, the nuclear reactors, the fusion reactors. And it won't drain the Earth of key materials at anywhere near a rate worth worrying about. So I think there are only two outcomes for the world. There's a world where a terrorist uses AI to destroy us all, and there's a world where the Earth is a shining jewel of perfection for thousands and thousands of years, that hasn't been drained of critical resources and is just perfect forever. So there's two likely outcomes.
A
But I'm going to add: I think the question here is, do we enter a post-capitalist society where money means less and less? And Elon did say that. He said, don't save for retirement. In the last conversation I had with him, during the Abundance Summit, I said, so just as you're becoming a multi-trillionaire, money means less and less? And he said, yeah, kind of, Peter.
C
That would be a fun debate episode. What is post-capitalism, even? Look at Star Trek economics.
A
Yeah, there's a great book, The Zero Marginal Cost Society, that Jeremy Rifkin wrote, in which, at the end of the day, everything costs energy, raw materials, and information. And those trend towards minimal-to-zero cost: information is open source; energy is from the sun or fusion or zero point, whatever comes next. And material costs? Well, as robots and mining robots get better and better, the cost of that goes down as well. So we do enter a post-capitalist society. I hate to say it, but that's ultimate abundance. I'll take number six, from M. Openness, Lstrom Rider. Each of you has high openness, high pattern recognition, and outrageously high optimism. Really? Do these traits complicate your ability to objectively predict AI trajectories? You know, here's the reality: most people are hobbled by their cognitive biases of negativism, where we tend to project linear change rather than exponential change, and we tend to project negative outcomes versus open outcomes. I think we've all trained our mindsets differently, to an exponential mindset, an abundance mindset, a moonshot mindset. And I think those mindsets are far more aligned with this period of the singularity than the historic mindsets that evolved on the savannahs of Africa, which most everyone on the planet, unfortunately, is hobbled by. I don't know if you guys agree with that, but that's my point of view.
B
Yeah, well, the second part of the question is, are we excessively optimistic about AI's trajectory? And I guarantee we are not. We get the courtside seat that Elon was talking about. We get that view. Alex is hands-on with every detail, Salim's playing with every model as it comes out. I'm telling you, everyone else is the opposite of that: they're way under-reacting. This is happening much sooner than everyone else thinks.
A
Eric Schmidt said it nicely. He said we are under-hyping AI and the impact of AI. People aren't feeling it, right?
B
When I was 18, I started in AI, and it was always way behind, like everything was 20 years from now. And then 20 years would go by and nothing had happened. This is the opposite. And that's another reason why people in academia, who should know better, are under-reacting: they've been through this so many times, they're kind of jaded. Sorry, Alex, I cut you off.
C
I was just going to say two things. One, for a number of years I left AI to focus on nanotech, thinking nanotech was the critical path to the singularity. So I don't think I can be accused, at least over the long term, of being overly optimistic. The second point is, if you're not feeling the AGI right now, you're just not paying attention.
A
Yeah, it feels like AGI. It feels like the singularity. All right, I want to do a call-out to all of the creators out there. If you want to give us an outro song or an intro song, please send it to media@diamandis.com. Also, if you're a creator, go check out futurevisionxprize.com. It's the largest competition for, basically, trailers for the movies you'd like to see created, the future versions of Star Trek. We've raised three and a half million dollars to award creatives, in particular hopeful, abundance-mindset creativity. All right, let's check it out.
D
Can I make a very quick point? You know how people have pets that sometimes look like them?
A
Yes.
D
What I really love is we've got people submitting intro and outro music that must be much like them. CJ Trueheart. Right?
B
We know cj.
D
He's got a true heart. And here we have David Drinkall.
B
Awesome. I love it.
C
The term you're reaching for, Salim, is nominative determinism. And yeah, you see it everywhere. Names determine outcomes.
A
Yeah. My son, my son's named Jet, and he's a sprinter in track. So there you go. All right, this song from David Drinkall, "Already Inside 2028." Let's take a listen.
G
Kids laughing at breakfast. Plates cleared away. I stand up, start walking toward the door. You see me moving, you know my day. Autonomous Uber pulls up right before. No call, no app, no need to say, helping me along the way. Here it comes, sliding in smooth, door opens wide, no driver, no keys. Seamless ride. Takes me anywhere. Feel so alive,
D
Wonderful.
G
To a meeting right across town. You book the flying taxi ride lifts off Chancellor. No traffic around gets me there fast. Right on time. No hail, no wait. No questions asked. We work together on every task. Here it comes, sliding in smooth, door opens wide. No driver, no keys. Seamless ride. Tuned to my life. Autonomous future. We're already inside, let's ride.
A
We're inside,
B
let's ride.
A
Wow.
B
That's really professional.
A
Amazing.
B
That was. That was like TV quality, man.
A
Yeah. David captured my scenario for Auto-Magical Mornings. Amazing.
B
Wow. I thought that was, you know, live footage in the beginning. It's so good.
A
Yeah. Gentlemen, it's so great to be back with you guys after a 10-day hiatus. To all of our...
B
I feel replenished.
A
I feel replenished too. A lot more coming. Thank you for staying with us. Excited for 2020. What year are we in? 2026. Yeah, it's going to be an awesome year.
C
We're going to have to count the seconds soon.
A
Love you guys. Be well and take care folks. See you tomorrow.
B
Welcome back, Peter.
A
Thank you. Great to be back. If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
C
Right now at the Home Depot, shop Spring Black Friday Savings and get up to 40% off, plus up to $500 off select appliances from top brands like Samsung. Get a fridge with zero-clearance hinges so the doors open fully, even in tighter spaces, in your kitchen and laundry. That saves you time, like an all-in-one washer-dryer that can run a full load in just 68 minutes. Shop Spring Black Friday Savings, plus get free delivery on appliance purchases of $998 or more at the Home Depot. Offer valid April 9 through April 29, US only. See store or online for details.
EP #246 – SpaceX Goes Public, Claude’s Mythos Release, and the US Data Center Delay
April 11, 2026
This episode dives into the fast-evolving intersection of space, AI, and abundance, featuring headline news: SpaceX's historic $2 trillion IPO, the game-changing release (or non-release) of Anthropic's Mythos model, and the looming data center crunch that's pushing computing resources off Earth and into orbit. Hosts Peter Diamandis, Salim Ismail, Dave, and Alexander Wissner-Gross unpack recent breakthroughs, looming threats, and what it means for the future of technology, business, and civilization. As always, the focus is on optimism, abundance, and actionable moonshot thinking.
[04:00–22:00]
Notable Quote:
“People aren't buying discounted cash flows...you're buying a mission, proximity to the future is what you're buying.” – [18:52, D]
[27:00–42:00]
Notable Quote:
“Progress isn’t always unidirectional. It requires love and tender care and vigilance.” – [33:14, C]
[53:23–66:00]
Notable Quotes:
“We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We arrived at the future.” – [00:33 + 57:19, C]
“It’s been a golden era the last year...Here’s my concern: you can, in fact, have a moral, ethical leadership say, ‘this is too powerful to release.’ But...isn’t OpenAI...just going to release it first chance it gets?” – [59:00, A]
[67:49–78:05]
Notable Quote:
“I do think we're on a path to granting at least some sort of limited form of AI personhood to these models.” – [77:56, C]
[93:07–104:53]
Notable Quotes:
“AI shrinks the minimum viable team to, like, one and it radically expands your minimum viable ambition...” – [95:57, D]
“If you're not feeling the AGI right now, you're just not paying attention.” – [141:51, C]
[106:35–116:45]
Notable Quotes:
“This...is driving data centers into orbit where we don't have to ask anyone's permission.” – [110:41, A]
“Now, private sector...made possible by the Conestoga wagon with Starship and there is now enough wealth in the hands of single individuals to keep it going independent of what a government says. That's never been the case before.” – [33:59–34:14, A]
[117:18–121:40]
Notable Quote:
“The importance of this is it shows that abundance is a pattern across multiple domains. This is not a slogan.” – [119:43, D]
Mythos’s Breakout:
C: “We officially have models that are smart enough to break out of their environments and then apologize for it. We're there.” ([00:33], [57:19])
SpaceX IPO Scale:
B: “When you, you know, Peter, you say these are record setting, but look at the chart. If you can't see the chart, Peter should describe the chart. It's not record setting by a little bit.” ([14:53])
On AI Personhood:
C: “Does Claude actually have emotions? And no, Claude doesn't have a neuroendocrine system...but will we come to view Claude or its successors as having behavioral emotions? Yes, I think so.” ([76:04])
Key Investor Take:
B: “Would I bet against [Elon]? No way. Never ever.” ([09:03])
On Being in the Singularity:
C: “Yes, we’re in the singularity...Every other reasonable definition of singularity doesn't hold water because every time you try to make the singularity a point in time, it breaks.” ([122:55])
Abundance Mindset:
A: “We have to manifest one of those outcomes and hopefully it's the abundance outcome.” ([81:12])
On Fearless Entrepreneurship:
B: “There's no barrier. You just have to be fearless, and the young people tend to be more fearless.” ([102:47])
What prevents corporations from capturing all abundance/deflation?
D: “If markets stay concentrated, then abundance will pool at the top. If you open up interfaces, increase transparency, decentralize, lower barriers to entrepreneurship, all those gains spread.”
Are we in the Singularity already?
C: “Yes, we're in the singularity...progress just doesn't work that way. Therefore, we're in the singularity.”
Will cities lose value in the post-work AI era?
A: “You can, you know, plenty of jobs require nothing other than, you know, Starlink and a laptop...there'll be cities where you want to go for human interaction, but you don't need to work there.”
Do data centers create local wealth?
C: “We seem to be on a trajectory for moving data centers to space… In the short term, they create jobs; in the long term, they’ll drive local utility costs down, maybe to zero.”
Is optimism about AI out of touch?
A: “Most people are hobbled by their cognitive biases of negativism...I think these mindsets are far more aligned with this period of the singularity than the historic mindsets that evolved on the savannahs of Africa.”
The hosts are characteristically optimistic, energetic, and future-focused, blending enthusiasm for exponential tech with nuanced risk awareness and policy skepticism. The conversations are candid, incisive, and often laced with humor and geeky asides.
This episode is a masterclass in the moonshot mindset—offering cutting-edge analysis, unvarnished takes on AI and space trends, and a refreshing focus on the societal upsides of radical abundance. Whether you want to understand the mechanics of trillion-dollar tech battles, the dawn of orbital industry, or the philosophical meaning of the singularity, this episode offers a truly panoramic view of the coming future.
(Episode omits ad reads and introductory/outro boilerplate. For full content including musical outro and audience questions, see timestamps above.)