
A
A lot happening at Davos. A lot of conversations that are important for us to recount here.
B
In past years, it was dominated by politicians and economic policy. Then it kind of moved to the Internet a little bit. This year, all AI.
C
We are knocking on the door of these incredible capabilities.
A
Right. The ability to build basically machines out of sand.
C
Maybe it would be good to have a bit of a slightly slower pace than we're currently predicting, even my timelines, so that we can get this right.
A
Should we slow down? What's the path humanity should take?
B
I think what's likely to happen post AGI is...
C
Where do you see the US and China right now in the AI race? I still think that the US is in the lead. I think that our models are better, our chips are better, but they do have other advantages. They are spinning up power generation faster than we are. That's one area.
D
If you believe that energy is at the heart of it and is the core of the inner loop, China's going to go way ahead anyway. But I think the real differentiator in the race is going to be application layer dominance, not frontier benchmarks.
A
The problem is that you always need a bad guy in a movie. Now that's a moonshot.
B
Ladies and gentlemen.
A
Welcome, everybody, to Moonshots, another episode of WTF, here with DB2, AWG, and Salim. I guess I have to use a second initial or something. What's your middle initial, Salim?
D
I don't have one.
A
Ah, all right, S.I.
C
Okay.
A
We'll have to adopt one for you. Anyway, welcome to probably one of the most important podcasts around today, the podcast that helps you get ready for the future, ready for the supersonic tsunami. We spend 20 hours a week summarizing what's going on so that you can get it in a good 90-minute session. Dave and AWG, you're just back from Davos, and that's the theme of our show today. A lot happening at Davos, a lot of conversations that are important for us to recount here. So tell us, what was it like?
B
It's a lot of time zones away.
A
Yeah, a lot, actually.
B
Larry Fink, who was one of the organizers this year, dropped the suggestion that we should do this in Detroit next year.
A
How so?
B
That did not go over well across Europe. We want to include, you know, more views from around the world; this exclusive resort town is just too swank. Let's go to Detroit.
A
Well, here's some fun photos of you guys. It was cold and sunny.
D
Is that Sandy Pentland?
B
Yeah, Sandy Pentland with his brass rat.
D
Yeah, yeah.
B
It was great to see him. He's a long-timer. He's been there since, God, the dawn of Davos.
C
Yeah, I would say there were robots on the streets, billionaires eating out of food trucks, and anti-aircraft guns on the ice pond.
A
Nice.
B
And with Trump there, security was much heavier than in the past six years: 3,000 armed people in fatigues with machine guns, stretching halfway to Zurich, actually. I was surprised. But, you know, Donald Trump was there this year, so I guess they ratcheted up. He attracts a crowd.
A
So Dave and Alex, what was the vibe like? What did it feel like there? I mean, you've been there numerous times. I was there once as a speaker and it was just overwhelming beyond belief in terms of trying to find people there. It's a zoo, isn't it?
B
Yeah, it's a zoo. You know, if I were to characterize what was different this year, it's kind of like the quintessential example, and Alex was there. In past years it was dominated by politicians and economic policy. Then it kind of moved to the Internet a little bit. This year, all AI. Completely dominated by AI, which is encouraging. No one had anything intelligent to say other than the usual suspects, Dario and Demis and the people we see all the time. But at least they were listening. Global leaders, presidents of pretty much every country, listening to pretty much all-AI dialogue. So that gives you some hope that people are beginning to get ready. And the fact that Alex was there is kind of a bellwether of where it's likely to go. So, Alex, I don't know, you want to...
A
Alex, what was your takeaway?
C
Yeah, it was of course amazing. And there really were robots in the streets. A couple of frontier labs had houses there. For those who haven't had this experience, imagine almost a World's Fair or a World Expo set in the Alps, with major governments having their own houses, literally taking over storefronts, restaurants, convenience stores. Parenthetically, if anyone wants to make a killing at Davos next year, set up a restaurant. It is nearly impossible to find good food short of food trucks at Davos; someone will make an absolute killing setting up a restaurant that stays a restaurant during Davos week. But imagine a World's Fair with the governments and the frontier labs and some of the major corporations and tech companies all on an equal footing, all hijacking, Invasion of the Body Snatchers style, the storefronts of an alpine resort town, and what you get is Davos. I had an amazing experience. I moderated something like eight or ten different events with OpenAI executives and DeepMind executives. I did a fun panel with Llion Jones, one of the co-creators of the Transformer architecture. You can see here one of my photos with US Under Secretary of State Sarah Rogers, and there were some absolutely amazing and hopefully fruitful discussions about the era of GPU diplomacy that we find ourselves in, discussions with Jack Hidary of SandboxAQ and Daniela Rus, the head of MIT's Computer Science and AI Lab. It was an absolutely incredible experience. But I also think, to Dave's point, AI is the story. And maybe just one more beat on this: it's no longer a world where governments get together to talk about governing the governed. It's now a world where AI and superintelligence is the story of the world economy.
And I think just walking down the street and seeing the DHL robo dogs walking around or the humanoid robots taking a stroll from AI House, I think that exemplified what we're seeing in the global economy. More macro everybody.
A
You may not know this, but I've built an incredible research team, and every week my research team and I study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These metatrend reports, which I put out once a week, enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. And that's a good transition to looking at a little bit of the content from Davos. What we've done is cherry-picked a number of conversations. There's a huge flurry of this through X and various platforms; we've chosen a few to bring up here, listen to, and discuss amongst our moonshot mates. Let's begin with Dario Amodei, the CEO of Anthropic. He was a rock star during Davos, as was Jensen Huang. Let's listen to Dario first, on the economy.
C
If you look at what AI is capable of, if you have these models that are getting more and more capable across a wide range of cognitive tasks, and you look at all labor in the economy, that's something like $50 trillion a year. So I could easily imagine that the revenue of the industry, or even single companies, if it's even 10% of that, could be $5 trillion a year. Now, that's something we haven't seen in the history of the world, and that creates all kinds of problems as well as all kinds of growth.
A
And on to Jensen Huang, CEO of Nvidia, on the largest infrastructure buildout in human history.
C
We're now a few hundred billion dollars into it.
D
That's it?
C
We're a few hundred billion dollars into it. Larry and I, we get the opportunity to work on many projects together. There are trillions of dollars of infrastructure that needs to be built out. And it's sensible. It's sensible because all of these contexts have to be processed so that the AI, so that the models, can generate the intelligence necessary to power the applications that ultimately sit on top. And so we have chip factories, computer factories, and AI factories all being built around the world.
A
So I saw a text this morning from Elon saying he expects to see $100 trillion company valuations coming up in the next few years; I think he said 2030. When asked which company, it was "Spacela," like the combination of SpaceX and Tesla. A trillion here, a trillion there. Salim, what are you thinking about these numbers being thrown around in the economy?
D
I start to look at this as becoming meaningless, right? In an abundance environment, whether it's 10 trillion or 100 trillion, the whole thing becomes arbitrary and is not a real gauge of strength. What I found really interesting about the Nvidia and Dario conversations was that they're on opposite ends of the stack, and they're really saying roughly the same thing, because compute is becoming the new oil and the new electricity. This is really just incredible stuff. It was amazing watching the output of the overall event from a distance.
B
Well, just in terms of tone, too: Jensen sounds like he's talking to kindergartners, you know, he's talking so slowly. And things that we talk about on the podcast are 100 times more detailed and sophisticated. That gives you a sense of the audience, which is the world.
D
He's talking to government people. He has to speak at kindergarten level.
B
And Dario's example: look, the global labor market is 50 trillion. Everybody get that? Okay, 5 trillion is an extreme lower bound of the part that can move to AI today. That's incredibly low, but even that justifies everything we're talking about. Now, you guys get it. It's that cadence of, please try to keep up with me here. So there's a lot of that in this particular forum.
C
In my mind, this is all borderline obvious. Of course capital in the form of AI is going to continue to substitute for labor. And of course trillions of dollars of AI infrastructure capital buildout will be needed to substitute for ultimately tens, maybe eventually hundreds, of trillions of dollars of services and human labor per year. So to Dave's point, this really is just spelling out the basic arithmetic of what a post-human economy looks like.
A
Yeah, a trillion here, a trillion there. All right, let's move on. I found this conversation we're about to share pretty fascinating. For me, one of the highlights was seeing Demis Hassabis, the CEO of DeepMind, and Dario Amodei, the CEO of Anthropic, on stage together having these conversations. They're both viewed, I think, in the industry as good human individuals who care about society; it's not about the revenue, it's not about how many users. Let's play these two and then we'll talk about it. They're discussing the risks. Should we slow down? What's the path humanity should take? All right, first up, Demis Hassabis on a reasonable pace.
C
There's fear and there's worries about these things, like jobs and livelihoods. I think there's a couple of things here. I mean, it's going to be very complicated the next few years, I think, geopolitically, but there are also the various factors here, like we want to solve diseases, cure diseases, come up with new energy sources. I think maybe the balance of what the industry is doing is not enough balance towards those types of activities. I think we should have a lot more examples, and I know Dario agrees with me, of AlphaFold-like things that are sort of unequivocal goods in the world. And I think actually it's incumbent on the industry and all of us leading players to show that more, to demonstrate that, not just talk about it. But then it's going to come with these other unintended disruptions. And if we can, maybe it would be good to have a bit of a slightly slower pace than we're currently predicting, even my timelines, so that we can get this right societally. But that would require some coordination.
A
All right, now over to Dario Amodei, knocking at the door of risks.
C
AI is going to be incredibly powerful. I think Demis and I, you know, kind of agree on that. It's just a question of exactly when. And because it's incredibly powerful, it will do all these wonderful things. You know, it will help us cure cancer, it may help us eradicate tropical diseases, it will help us understand the universe. But there are these, you know, immense and grave risks. Not that we can't address them; I'm not a doomer. We are knocking on the door of these incredible capabilities, right? The ability to build, basically, machines out of sand. You know, my view is this is happening so fast and is such a crisis, we should be devoting almost all of our effort to thinking about how to get through this.
A
Amazing clarity of message. Dave, was this the sense you had over there?
B
Yeah, absolutely. You know, what's interesting is that one of our partners, Mira Wilczek, presented a book written by AI on that main stage back in 2020, and the world kind of paid attention. And then Covid hit, and the entire topic at the World Economic Forum and in global conversation was lockdowns, Covid, whatever. And so everybody kind of forgot about AI as imminent. Now they're used to the topic du jour coming and going, so a lot of the audience is like, yeah, yeah, it'll blow over, like everything else. But of course AI won't blow over. AI will accelerate. And next year, whatever Dario and Demis are saying this year will be magnified 100 or 1,000x. But I don't think that necessarily penetrated everybody's brain. When you look around the audience, they're like, yeah, you know, well, 5 trillion, 10 trillion. But I think it's interesting that Demis is taking kind of a conservative timeline view, and he's saying the outer bound is 10 years, somewhere in the five-to-10-year timeline. And in global geopolitics, that's like tomorrow. That is such a short timeline. And he's clear that that's the outer bound. So in the past, the two of them have debated, is it two years or 10 years? Now they're agreeing and saying, what the hell's the difference? We're talking about AI that can do absolutely any task that a human being can do, somewhere between one and 10 years out. Doesn't matter whether it's one or 10. What matters is, is anybody in this room ready? So I love the fact that they're at least trying to get the global leaders to start to think in terms of what massive scale of disruption is imminent and start to generate a plan, like, now. It's great that they're at least trying.
A
And this conversation about slowing down, you know, popped up years ago. Alex, did you hear that at all? I mean, I can't imagine there's any option to slow down; the economic race is so strong.
C
Yeah, I did hear it. Right after leaving Davos, I took a meeting with the MIT Alumni Association of Switzerland in Zurich, and I heard it from them. I heard desire for slowing things down, or for some sort of radical wealth redistribution plan, and very little AI optimism from them, which was a stunning footnote. But I think the risks side of this is almost burying the lede, in my mind. I write about this every day in my daily newsletter, or briefing, whatever you want to call it: recursive self-improvement, I think, is already priced into the near term. We're either already, to Dario's point, in the era of recursive self-improvement, or it's coming later this year. But we're basically there. And I think advances with Claude Opus 4.5 and other models already reflect that we're approximately in that era. So over-indexing on all of these things that are about to be here misses that they're already priced into the market. I think the actual news, which we're burying a little bit, was elsewhere in Demis's comments, where he talked about some of the problems that he wants to solve; less about the risks that we face, more about what will be unlocked. And the most interesting thing I heard Demis say was that he's interested in exploring the stars with superintelligence. I think that's a super important point. If we zoom out and try not to adopt the mindset of framing everything in terms of a risk-oriented mentality, and instead focus on a radical economic-growth-oriented mentality, then exploring the stars with AI really is one of the biggest questions.
And I would underline Demis's point further and say, in my mind, if it turns out that the physics of our universe is friendly toward interstellar exploration in the style Demis seems to be gesturing at, that's the ballgame. That determines whether we get Dyson spheres, or Dyson swarms I should say, in the next two to three decades. If the universe is fundamentally unfriendly to interstellar exploration, we're probably more or less stuck near our home star, disassembling the planets. If interstellar exploration, to Demis's point, really is something that DeepMind or some other frontier lab can unlock with AI, then we get the galaxy, we get the universe. It's a much more interesting future. And so I think that's the real story here, not risk yes, risk no, recursive self-improvement yes or no. That's all priced in already.
B
Well, and to your point, Alex, too.
A
Star Trek.
C
Star Trek, yes.
B
Yeah, exactly. And Demis mentioned it in the context of: what is the goal of humanity post AGI? What keeps our massive transformative purpose running? And the answers are usually humanity-wide; they're not nationwide. And the WEF and Davos are all built around nations, nations taxing each other and nations interacting with each other. If you look across the crowd, roughly 200 countries are represented, and all but two of them are miles behind in this race. So culturally, the vast majority of the people you meet are like, I wish this would all slow down. But in the back of their mind, it's, I wish this would all slow down so that my country at least is a player and is relevant. But I think what's likely to happen post AGI is that the way society is organized cuts across country boundaries easily.
A
Salim, what do you make of the risk conversation here?
D
With my optimism hat and bias, I kind of downplay it radically, because technology has always been a major driver of progress. The big challenge with civilization is how you extract the promise of technology without the peril, and we've done a pretty damn good job of it thus far, all things considered.
There were two things that struck me in this conversation. This was about my favorite conversation of the week that I tracked. One was that what I'm hearing in their voices is the unbelievable fatigue at the metabolism they're having to operate at.
A
I'm also hearing respect for each other in their voices.
D
That was there no matter what. You can see a huge amount of brotherly love and mutual respect. Fantastic. You couldn't ask for more honorable people at the forefront of this field, so that's just amazing. And this speaks to the same thing we saw through a large part of the Internet age with Larry and Sergey and crew. One part was the fatigue I got from them, because they're like, somebody please slow this down so we can take a breather. They're seeing that in a year it'll be 100 times faster, right? This is the slowest it's ever going to be, and they're looking at this going, holy crap. So I think that's one part of it. I also really loved the massive optimism and the star conversation. Fantastic. One day I hope to be around when Saturn actually gets it, and Alex goes, and we could do that. But overall, there's the opportunity for humanity to navigate this. And there's a massive disconnect here, which is, as Dave mentioned, that Davos is oriented around nation states. Nation states are an artifact of a scarcity environment. Nation states cannot compute abundance; they cannot operate in that paradigm. So we need a completely different governance model. For me, the biggest thing highlighted here was that the construct of the UN and nation states is completely irrelevant to what's coming, as AI cuts across every category, every economy. It's a global issue, a civilizational issue; forget who wins the race, et cetera, et cetera.
B
Can I go on a personal note related to the fatigue, too? This is the first time in my life where I'm walking down the street and people are like, hey, you're that guy from that podcast. Hey, I know you. And so I actually ended up putting my ski hat and my goggles on walking down the street, because I'm not that kind of guy. I don't really want that.
C
What a rough life.
B
No, I don't mean it that way. Look at Dario and Demis. Yeah, and you've been there for a long time, Peter. Demis in particular is a researcher, and he cares about technology, research, AI at the bits-and-bytes level. And now he's been drawn onto the global stage as the spokesperson for ethics and humanity. And Dario never thought he'd be a CEO at all. In that interview he gave a little while ago, he's like, look, I'm a research guy, I got drawn into this function. And so that fatigue you feel is the byproduct of being sucked into this vortex where there's a huge void of leadership. And Alex, you're going to experience this very soon, too. The demand for what you can articulate is going up so quickly that it'll pull you in, and if you're not ready for it, it's tiring. It's tiring just flying all over the world. It's tiring getting the litany of questions; a lot of them are the same exact ones you heard the day before, and you just have to get used to the demand.
C
So what I'm hearing you say, Dave, is that I have to brace myself for the moonshot paparazzi on the rough slopes of Aspen and Vail and Davos. Just bracing myself now.
A
Hey, I know you.
D
Listen, Peter and I have had this exponential conversation, answering the same goddamn questions, for 20 years now. Ray's been doing it for close to 60 years. But we come at it...
A
With enthusiasm every time, because it matters. You know, there was one other conversation I have to point out here. I heard Dario talking about Sam Altman and OpenAI, and it was interesting. He goes, you know, at Anthropic we're serving businesses, we're giving them value, we're building things; we're not trying to engage a billion people with sycophantic conversations. So it's a very interesting point of view: when AI, in OpenAI's case, is serving individuals and just trying to make them feel good about themselves and trying to make it addictive, which is what Dario was saying, you get one result, versus if you're using AI to serve business and deliver real value, you get a different result. I was taken aback by that conversation. I didn't get the clip for it here, but I thought it was really interesting.
C
To be fair to OpenAI, though, and to some extent Grok and xAI, I think that's a bit of a self-serving argument. I parse that as a post hoc justification. Anthropic happens to be doing very well in the enterprise, and I definitely perceive a post hoc justification: well, we're not doing consumer because we don't want to do consumer; we want to do B2B and enterprise sales because there is moral purity in enterprise sales. But one can also achieve moral purity in uplifting the intelligence of billions of individuals in their individual capacity.
A
Well, well said. Let's jump into another topic that was a through line at Davos, which is US versus China. And we're going to open up with a conversation between Marc Benioff and David Sacks.
C
Where do you see the US and China right now in the AI race, and the model innovation that we've seen in both countries? I still think that the US is in the lead. I think that our models are better, our chips are better, but they do have other advantages. They are spinning up power generation faster than we are. That's one area, I'd say. Another area that concerns me is AI optimism. There was a survey done by Stanford recently; they surveyed people in lots of different countries and asked them, do you believe the benefits of AI will outweigh the harms? If the respondents said yes, they'd be an AI optimist, and if they said the harms outweigh the gains, they're an AI pessimist. Well, in China, 83% of the population are AI optimists. In the US, that number is only 39%. Where I kind of worry in the AI race is that, in a fit of pessimism, we do something like what Bernie Sanders wants, which is to stop building all data centers, or we have 1,200 different AI laws in the states, you know, clamping down on the innovation. I worry that we could lose the AI race because of a self-inflicted injury.
A
Interesting. There's one more China video I'm going to play here, and then we'll discuss. Here we go. This is from the CEO of Mistral, Arthur Mensch.
C
Is China behind the West, though? China is not behind the West. I think this is a fairy tale. In the area, they are very much at parity, and the year ahead is going to be extremely interesting in that respect. We care about Europe maintaining its position, Europe maintaining its ability to train models, because we don't think that we should rely on open source Chinese models.
A
Gentlemen, thoughts on this?
D
He's got the greatest last name ever. Mensch. I've got a couple of thoughts here. One, I'll go back to my observation that I think this US-China thing is a bullshit conversation. Why? Because if you believe that energy is at the heart of it and is the core of the inner loop, then China's going to go way ahead anyway once they figure out the connection between the two. But I think the real differentiator in the race is going to be application layer dominance, not frontier benchmarks. And there I think China will be very far behind for a long time because of the trust factor, plus the natural markets move very quickly. Aside from TikTok, it's not really going to take off in a big way. Tons of other open source models are going to come out, and people are going to be too concerned about the trust factor to use the Chinese ones over time.
C
Three comments, if I may. The first, to Salim's point: since Salim told me that I was being too agreeable the last time we spoke, I have to be a little bit of a contrarian here and point out.
D
Let's go back to normal programming.
C
Returning to normal programming. The conventional wisdom is that the Chinese Communist Party's AI strategy is actually quite strong when it comes to applications. To the extent that China is relatively weak strategically, it's in training its own frontier-grade models relative to the US, and by relatively weak I mean maybe six months behind. So not globally or absolutely weak, just a few months behind. Its real strength is in aggressively pushing applications that take advantage of AI out into the everyday economy. So I'll just note parenthetically that the conventional wisdom is that China's AI Plus strategy is actually a pretty strong strategy, relative to the relative weakness of its frontier-grade models. But that's a parenthetical note. Going back to Asia overwhelming Western countries on AI optimism: I think David Sacks and others have put their finger on something very important. As a student of the history of science and technology, I think something went very wrong in the US, maybe more than one thing, between the end of World War II and, call it, the mid-to-late 1970s. I'm reminded that in 1954, the chairman of the Atomic Energy Commission, Lewis Strauss, famously predicted, and this has had repercussions through the decades since, that nuclear fission would become so inexpensive that tracking usage via electricity meters would become unnecessary. That's where we get the phrase "energy too cheap to meter." And that didn't happen. We didn't get energy too cheap to meter, because in the decades after 1954, for a variety of reasons, and it's easy enough to point fingers at possible explanations, nuclear fission was essentially regulated out of existence in this country.
And I think the point David is making is that we're at an analogous point now. Remember how in the 1950s and 1960s new American homes were being built with extra glass, because it was anticipated that electricity and energy would be too cheap to meter, that you wouldn't need to bother worrying about heating or cooling or insulation costs; it would just be absorbed by the free electricity. We're at the point now with AI, I think, where we're in an analogous position: we're on the verge of intelligence too cheap to meter. And we run the risk, to David's point, that if we allow too much AI pessimism or too much AI doomerism to get in the way, the same thing happens with AI that happened with nuclear decades ago, overregulation, and then we could waste another century.
B
You know, we're a democracy. The voter wins. And in a democracy, the voters did not want nuclear power, even though the scientists were like, look...
A
But it was demonized in the press. I don't think that's actually true.
C
I mean, a quick note on democracy and nuclear power. If you follow it closely, the history of nuclear power in the post-World War II era wasn't actually that democratic. It wasn't as if, every election cycle after World War II, the voting populace was super informed about all of the key details. This was a relatively narrow technocracy that developed in the postwar era in the form of the Atomic Energy Commission, an evolution of the Manhattan Project, where it was really a bunch of technocrats deciding what would and wouldn't happen. I don't think the voters, at least for the first few decades, ever got a real informed decision.
B
Well, I think that, you know, remember the acronym NIMBY, N-I-M-B-Y, not in my backyard. It was rampant during that era on a lot of topics, nuclear being one of the biggest. Yeah, yeah, we should do this, this is for the good of the country, the good of the world, let's do it. But not here. And if everybody says that, you have the tragedy of the commons. Same thing.
A
Going on in data centers right now. I had a conversation with Balaji a day or two ago, and we were talking about how, you know, one of my pet peeves is that most of Hollywood shows all these negative, dystopian views of the future, and it paints a sense of fear across the public about killer AIs and dystopian robots and so forth. That was true as well for nuclear, to some degree. And the problem is, let me finish, the problem is that you always need a bad guy in a movie. You always need someone to, you know, threaten. And Balaji's idea, which I love, is that we should start to create the movies where the bad guy is the regulator: the regulator who is slowing down the delivery of longevity, slowing down the delivery of unlimited energy. I thought that was a brilliant insight, because, you know, there has to be balance. Got it. There has to be directing of this supersonic tsunami in the right direction. But sometimes the regulations are just off base.
B
Well, there's a broader theme there too, Peter. I'm sure you're very in tune with it, but science fiction tries to portray the future sort of accurately, yet it has to fit the storyline. So you end up with these spaceships that bank into turns even though there's no atmosphere, and lasers that make sounds, like, pew, pew. This makes no sense. And you think, well, that's harmless, it's just entertainment. But then when you look at the business plans coming out of the colleges, you're like, where did you get that harebrained idea? Well, it was from watching Star Wars or whatever. So it does actually matter how we portray this in the media. And Balaji is right: portray a regulator as the evil, you know, Darth Vader. But no one's going to make that movie, right? How entertaining is that?
D
For me, the inflection point for nuclear is very simple: it was the movie The China Syndrome, which freaked everybody out, and from that point on, it shut down. That is the power of narrative. And this, unfortunately, is one of the problems we have today: narrative is really the only model for shifting people at scale. So we have to come up with narratives, as you're trying to do, Peter, on the positive side of all of this, rather than the classic negative, which takes up most of our attention.
A
And you're going to hear about a big push I'm making with Google shortly on creating positive narratives because society needs it. Unfortunately, Hollywood's been a doom and gloom machine.
D
Can I go on a little rant here?
A
Of course.
D
We love your rant. So, you know, there's a brilliant insight that you and Steven made in the Abundance book, which is the predominance of the amygdala. We are geared by four billion years of evolution to watch for signs of danger and then run. Back when we were running around on the plains of Africa, if you heard a noise in the bushes, you ran, because bad news could kill you. Good news doesn't kill you: I might miss some fruit I could eat, but if I miss a piece of bad news, I die. So we are ten times more likely to listen to bad news than good news. And that translates into policy in a very powerful way, such that we freak out and put guardrails on everything, like autonomous cars. Note that the first time somebody comes across something new, they relate to it as unknown; their amygdala lights up, and they immediately go to the fear factor and the damage that thing could do. Autonomous cars: the first time somebody sees one, they go, oh my God, that car might kill somebody, ban the car. Because, as Brad Templeton said, we'd much rather be killed by drunk people than by robots, right? And so this is a huge problem we have to overcome, because Hollywood banks on that reaction. They make a huge living on horror movies and negative, dystopian outcomes. And as a human species, probably the biggest single thing we could do would be to cut that damn amygdala out, because our chances of physical danger today are thousands of times less than they were a few hundred years ago. So we have to figure out ways of counteracting that at a cultural level, which is non-trivial. That's maybe the hardest job we all have as leaders.
B
Can I just go back to what Arthur was saying in that video for one second, though, and tie it into what Alex said a second ago? Because it matters for knowing where we stand in this race to AGI. All of the innovation in the transformer algorithm and the core technology was open source through GPT-2 and GPT-3. China grabbed it all; everyone has access to it. Any country that wants to buy a huge amount of compute can catch up to that level. And in fact, with these speedrun tests that Alex can tell you all about, DeepSeek proved it, and so did Kimi: you can get back to where OpenAI was just a year or two ago for 1/50th to 1/100th of the cost, because of innovations that are all open-sourced. But the next stage of innovation after that, the one going on right now, is all chain-of-thought reasoning. When you build one of these neural networks, the individual neurons are not intelligent, but the collective trillion or ten trillion of them somehow magically spawns intelligence, and it's shocking what it can do. Then, when you put many of the agents together, it generates another level of intelligence: another self-organizing system on top of the self-organizing system. And all of those innovations happened after the great lockdown, you know, after GPT-4, after it became abundantly clear to everyone that there are trillions of dollars at stake and the open sourcing stopped cold. So when Arthur says, look, the Chinese are not behind, he's right, as of basically today and yesterday. And the idea that we're leading because we have better 2-nanometer chips and fabs is nonsensical, because the algorithmic improvements far outstrip the chip lead of maybe 10x at most. So he's right as of a point in time. But if the new innovations in the big labs continue to be completely secret, which is what's happening right now, then that race will diverge.
And it's not clear, though: maybe China will out-innovate America, maybe America will out-innovate China. But it's all happening kind of like what happened with nuclear research. Nuclear was very much in the public eye, very open, all these research documents getting published in journals, right up until: wait, these bombs actually work? And then it completely inverted and went super, super secret. The big labs right now are not publishing their research; in fact, they've started banning publishing over at Google.
A
I want to jump into our innermost loop here: energy. With your permission, guys, I'm going to share two videos, one by the CEO of Honeywell and the second by Elon. They've got different points of view about where we need to source our energy and why. All right, let's take a listen. This is Vimal Kapur, the CEO of Honeywell.
C
[Interviewer:] When you talked about the energy solutions available for these unbelievably energy-hungry data centers, your list was short. Your list had one thing on it. If I listened correctly, you said gas. You didn't say gas and renewables. Can you educate us? Why not? [Kapur:] I always like to tell people the mix of energy doesn't matter, how much is wind, how much is solar. We like to advertise that kilojoules matter, because energy intensity has to shift, not the mix. Solar power cannot produce cement; solar power cannot produce steel. They are very energy-intensive. You still need gas-based heating. [Interviewer:] Even after three or five more years of innovation in renewables? [Kapur:] Not there. It's against physics. The world needs to build more infrastructure. It still needs steel, it still needs cement, it still needs fuels. Now, how do you do that energy-mix change while you also want to build data centers and consume more energy? That's an interesting problem to solve. And today the problem is single-threaded, with the gas-fired power plant, maybe a little bit of nuclear. Nuclear and renewables remain in the mix but cannot bring the amount of joules we need to produce this infrastructure, which is required in the world.
A
All right, and the counterpoint here, put forward by Elon. Yes, I mean, wow was my experience of that as well, Salim. Let me hit Elon's point, and then let's come back and talk about that, because that's a critically important distinction: kilojoules versus total energy.
C
It's really all about the sun. And that's why one of the things we'll be doing with SpaceX within a few years is launching solar-powered AI satellites, because space is really the source of immense power. And then you don't need to take up any room on Earth. There's so much room in space, and you can scale to ultimately hundreds of terawatts a year.
A
And Elon goes on to talk about the fact that a 100-by-100-mile area provides all of the US energy requirements, same thing for Europe. But Salim, go ahead.
D
I'm so livid at this. A couple of initial thoughts; this is the first time I saw this video, by the way. First, the infrastructure of the future will not be steel, it's going to be digital bits, and if anything, it'll be fiber. So that's one problem with this. Number two, yes, you need energy density to make steel and concrete, etc. But the problem is not that; it's the marginal cost. For example, if you have a big increase, even a marginal increase, in renewables, the cost of the fossil fuel itself will drop dramatically. In 2013, when we had the last oil price crash, it was because of a 2% oversupply in the market. It's a very tightly run market. The reason we have all these non-stop wars right now is to keep the price of oil high. So I find structural issues with every one of the three points he's making. And then you go to Elon, who goes straight to the thing: it's all about the sun, for God's sakes. Anyway, enough.
C
I'll take the other side of that if I may. I didn't hear anything unreasonable in my mind. Elon and SpaceX use Methalox for rocket launches. They don't use solar power to achieve escape velocity. And I would say energy density and power density matters an enormous amount for certain applications.
D
That's not the point I'm making. I'm not making that point.
C
What's the point you're making, Salim?
D
The point I'm making is, yes, you absolutely need energy density for those things. But we use so much oil for low-grade applications like home heating that, if you shifted homes to solar, the amount of oil available for the energy-dense applications would go through the roof and the price would drop dramatically. Then you could use the marginal amount of oil for the applications that genuinely need energy density. Right now we use oil or natural gas for everything, and you don't need to once you have solar coming online.
C
A couple of points from my side. One, Honeywell, as I understand it, is a pretty diversified operation. It's not just the thermostats or the home heating systems that perhaps they're well known for; they also have a pretty sophisticated quantum information operation. So it's a diversified company, not just home heating. But I would also say that in some sense this feels like such a temporary debate, almost a false trade-off between the legacy petroleum economy on one end and a pure solar-photovoltaic, Dyson-swarm economy on the other. The reality is that in the not-too-distant future, assuming no tragic left turn by civilization, we're going to solve compact fusion, and compact fusion is going to give us the energy densities for rockets, for home heating, for a lot of civilian applications, and for the Dyson swarm. So in some sense I think this is late-stage-petroleum-economy hand-wringing when it's all about to get torn down anyway.
D
A hundred percent. And the third beef I have is where he talks about renewables as this plaything on the side, when your point is exactly right: once you have fusion, all of it becomes irrelevant anyway. So why are we having this conversation?
A
Well, guys, take the counterpoint here. Take a look at this chart on the next slide, which shows the growth of energy generation, in Europe in particular, where wind and solar have now outstripped fossil, nuclear, hydro, and other clean sources. And yes, we might get to fusion and start building fusion reactors. But the timeline I'm seeing on fusion, and even on Gen 3 fission reactors, is decades, just from permitting and construction. Even the SMRs are currently slated to be five to ten years out. But we could in fact be building out wind and solar at an extraordinary rate, and the investments are not being made. We've basically outstripped the capacity of natural gas generators. I mean, what's the wait list for natural gas generators right now, Alex? Years, right?
D
Years.
C
Months to years. But I'm optimistic it'll get a little better.
A
I haven't heard months at all. It's years.
C
You have new natgas generators as we've discussed on the pod in the past, that are coming online. And those could be available in principle in the next few months if you're early enough in the waitlist.
D
Boom.
B
No, I think you're both right. I don't want to defuse all arguments here, but you're both right. They are coming online much, much sooner, but they're sold out for years into the future. So if you're India and you're saying, hey, I need power for people, we need concrete and steel to build places for people to live, the idea that you could generate electricity that way is a non-starter. You can't get the generators; they're booked for years, and they're still building.
A
My point is, why are we not doing a Manhattan Project on wind and solar manufacturing here in the United States, to really up-level the amount we have? I'm going to go to Elon for one second on this video, and then we'll come back to this conversation.
C
Yeah, so I mean, I guess a rough way to think about it is that 100 miles by 100 miles, or call it 160 km by 160 km, of solar is enough to power the entire United States. So for that 100-mile-by-100-mile area, you could take basically a small corner of Utah, Nevada, or New Mexico. And the same is true for Europe: you could take relatively unpopulated areas of, say, Spain and Sicily and generate all of the electric power that Europe needs.
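Elon's 100-by-100-mile figure is easy to sanity-check with rough numbers. The insolation, panel efficiency, capacity factor, and US consumption values below are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope check of the "100 miles x 100 miles" solar claim.
# Assumed inputs: ~1,000 W/m^2 peak insolation, 20% panel efficiency,
# 20% capacity factor (day/night, weather), ~4,000 TWh/yr US consumption.

side_m = 160_000                      # 160 km, per "call it 160 km"
area_m2 = side_m ** 2                 # ~2.56e10 m^2

peak_insolation_w_m2 = 1_000          # assumption
panel_efficiency = 0.20               # assumption
capacity_factor = 0.20                # assumption

avg_power_w = area_m2 * peak_insolation_w_m2 * panel_efficiency * capacity_factor

us_annual_twh = 4_000                 # assumption: rough US annual consumption
us_avg_demand_w = us_annual_twh * 1e12 / 8_760  # TWh/yr -> average watts

print(f"Average solar output: {avg_power_w / 1e9:.0f} GW")    # ~1024 GW
print(f"Average US demand:   {us_avg_demand_w / 1e9:.0f} GW") # ~457 GW
```

Even with these deliberately conservative assumptions, the hypothetical farm averages roughly twice average US electricity demand, so the claim is in the right ballpark.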
B
And if you drill into the supply chain on that, this is where Elon is absolutely brilliant. If you ask, well, what are the constraints to doing that? The materials are dirt cheap, and the fabrication of those panels can be automated. You can get those costs down to next to nothing, and then you don't need a generator; electricity just comes right out of the panel. It's the biggest no-brainer ever. And that's what he's doing.
A
For the entrepreneurs listening, for the politicians listening, I mean, get on it, for God's sakes. This is technology we've had for a while, and we've also got new solar technologies coming on. Anyway, anybody want to argue?
D
To the extent you believe the US is a petrodollar economy, that answers all of it.
C
That's why I'll take the other side, just in the interest of being painted as a contrarian. Much of the substantial new electricity demand in the US, at least on the timescale of ten years, is going to come from data centers, and under the present regime it's simply easier to deploy data centers to orbit. So, Peter, if you're looking for a Manhattan Project for lots of new solar PV, look no further than SpaceX, and look no further than this new Jeff Bezos initiative. Anyone who's going to launch a Dyson swarm is going to be, de facto, in the business of manufacturing solar photovoltaics at scale. That's what a Manhattan Project for it looks like.
B
That reinforces what Peter was saying. Any entrepreneurs listening out there: a data center in space uses solar panels, the same core technology you use on Earth; it's about six times more efficient in space than if you put it in Nevada. So if you get involved in that manufacturing, why is that not 10 times, 100 times cheaper? We have to import all these cells from China today.
A
We've got perovskite coming online, which is cheaper and has higher conversion efficiency. I mean, there's innovation left to be had in that regard.
D
Tons.
C
But it's on the margin.
D
Can I just summarize this? What we're saying is that the Manhattan Project should be space-based data centers; that will drive massive amounts of innovation, and focusing on that will pull civilization forward.
C
Yes. And that is in fact exactly what we're seeing through market forces.
D
I'm all in.
B
This episode is brought to you by Blitzy: autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale codebases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with the coding copilot of their choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
A
Speaking of all in, let's head to crypto, because we're all in on crypto. All right, two conversations to be had: the first with CZ of Binance, and the second with Jeremy Allaire, the CEO of Circle. Let's go to CZ first.
C
The native currency for AI agents is going to be crypto. They're not going to use bank cards; they're not going to swipe credit cards. Crypto and blockchain are the most native technology interface for AI agents. Today they're not really agents: they don't buy tickets for you, they don't pay for restaurants for you. But when they actually do, those payments will be in crypto.
A
All right, that's from CZ. Let's go to Jeremy Allaire, the CEO of Circle, on stablecoins.
C
The next generation of blockchain networks, things like Arc, which Circle is building, and other new blockchain networks, are actually being designed specifically for agentic compute. They're designed specifically for the financial and economic activity of a world where, three or five years from now, one can reasonably expect that there will be billions, literally billions, of AI agents conducting economic activity in the world on a continuous basis. They need an economic system, a financial system, a payment system. There is no other alternative, in my view, other than stablecoins to do that right now, nothing else that can keep up with that pace of technological change. And so it's a critical focus for us. But not just us; there are a lot of other folks interested in this and contributing to the technical standards to support this upgrade to our digital economic system.
A
I don't think the world has any idea how fast this is going to accelerate as agents start using crypto, whether it's stablecoins from Circle, or SUI, or Algorand, whatever it might be. We're going to have Jeremy Allaire at the Abundance Summit; he's going to be one of our primary speakers, and we're going to be having this conversation. I agree we don't have agents transacting for us yet, but we will, and they're going to be using some type of digital currency. Salim, you've been thinking about this for a bit.
D
Well, two points. One, I think crypto has survived long enough to become infrastructure, and it's kind of fading into the background; you're moving from speculation to real utility. And the second is the base reason: it delivers trust. Protocols and code are way more trustworthy than governments and institutions, that's for sure.
B
Yep.
C
I'll take.
B
Yeah, you know, go ahead, Alex.
C
Completely the other side of this one, so contrarian points for me today. I guess I'm of two minds on this story. On the one hand, I think it's wonderful that AI agents have a solution to be, quote unquote, banked by some means, so that they have some autonomy. On the other hand, I just think it's sad that crypto is the solution filling a gap that was perhaps unnecessarily opened by the conventional banking system not stepping up. It's very tedious and painful to open a bank account. It's hard enough for a human, let alone an AI agent that doesn't have citizenship or a physical body or the ability to walk into a bank branch. And I think stablecoins are rushing in, post-GENIUS Act, to bridge that gap and help AI agents become banked in a more legitimate way. I think we should be able to do much better. For the life of me, I don't understand why we even need crypto, or in principle should need crypto, for an AI agent to just make an API call and open a bank account. Crypto shouldn't even be a necessary part of the infrastructure to enable that.
A
I disagree fully. I mean this is.
C
Tell me why.
A
Because the dollar.
C
That's not an answer. That doesn't make sense.
D
Explain why. Fiat currency cannot navigate a world of abundance. It cannot do it.
A
Well, we don't have digital dollars. An agent operates on a digital system, whereas the current banking system can take literally hours to days to clear a transfer or a transaction. Digital currencies, which are de facto cryptocurrencies on a blockchain, can transact at the speed of Internet rails.
D
But Alex is making a valid point: why can't a digital dollar, or a dollar, operate in the same way? And it could; it's simply...
A
But that's what it is: a digital dollar.
D
You've got a bigger underlying problem which is the use of the fiat currency to do all this stuff which is not a great measure of the future.
C
I think we're conflating two different issues here, Salim; I do agree there are two different issues. Put aside for a moment any distrust or dislike of central bank digital currencies, or the US dollar, or fiat currencies in general. Transacting a small amount of some currency is technically a very simple operation.
D
Agree.
C
It's literally updating a row in a table in a database and we should be able to do that really quickly without any fancy technology like a blockchain. It's just updating a row in a database.
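Alex's "row in a table" claim is easy to make concrete. Here is a minimal sketch in Python using an in-memory SQLite database; the schema, account names, and amounts are illustrative assumptions, not anything from the episode:

```python
# A currency transfer as ordinary row updates in a database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically."""
    with conn:  # BEGIN ... COMMIT; rolls back on exception
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < amount:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))

transfer(conn, "alice", "bob", 40)
print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
# -> [('alice', 60), ('bob', 40)]
```

The `with conn:` transaction is what keeps the two updates atomic, so the same dollar can't be spent twice within this single, trusted database; disagreement over who gets to run that database at all is the trust problem blockchains target.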
A
Enter Salim on the double-spend problem here, please.
D
No, no, it's fine. There's no reason we can't. It comes down to overburdened regulation and absolute regulatory capture by the banks to prevent change. When I talk to bank CEOs, they're like, oh my God, we hate these regulators. And I'm like, are you kidding? The regulator is keeping you safe from the hordes of startups waiting to rip you apart, because you're so unbelievably cumbersome in everything you do. This requires a complete paradigm shift to sweep away and cut through all of that regulatory crap. Just try sending money to somebody over traditional banking rails and you end up in this complete chaos of, oh, we didn't know what this was for: AML, KYC, all of this crap. And it's complete garbage; 99% of it is not needed. From that perspective, that's why crypto entered, because we've overburdened the existing system with all sorts of unnecessary friction. But there's a separate issue, which is the fiat currency issue, which we can go into some other time.
C
We can do an entire episode just on the issue of why crypto. What is crypto potentially good for? What is it unnecessary for? What is it being used for, even though it's unnecessary for? We could do an entire episode just on this.
D
We could. We should.
C
Yes.
D
Peter, up to you.
B
Go for it, guys. I'll take a nap.
A
That day.
B
No, I think you guys skipped over Jeremy saying that he's going to move on to securitizing art now. Back when crypto was skyrocketing, sort of pre-COVID, there was the idea of securitizing real estate. Securitizing art is a particularly good one because it appreciates in value while you're parked in it, and it's mobile. Tax issues are a major problem for securitizing real estate, and people were securitizing mining futures, you know, gold that's still underground, things like that. But the key point is: if your AI agents are transacting like crazy, yeah, you can update a row in a table.
A
Yeah.
B
You can do it on the blockchain if you want, but you want your float parked in something that's not depreciating, and a stablecoin that's not earning interest is depreciating slowly. You can stake it, but that gets kind of weird; staking has backfired, it's unregulated, it's weird. So it's better to securitize something that's outside of any government jurisdiction and stable.
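Dave's "depreciating slowly" point is just inflation arithmetic; the 3% rate and five-year horizon below are illustrative assumptions:

```python
# A dollar-pegged stablecoin that pays no interest loses purchasing
# power at the inflation rate. Rate and horizon are assumptions.
inflation = 0.03
years = 5
purchasing_power = 1 / (1 + inflation) ** years
print(f"After {years} years, $1.00 buys ~${purchasing_power:.2f} of today's goods.")
# -> After 5 years, $1.00 buys ~$0.86 of today's goods.
```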
D
And look at, and look at the way the banks hobbled interest on stablecoins in this last iteration of the legislation. They've, they've. This is regulatory capture, pure and simple.
B
Can I add another twist to this? One of our portfolio CEOs has a very good friend whose father-in-law is CZ's, Changpeng Zhao's, lawyer, so he's been tracking this whole drama very closely. CZ is worth, I don't know, maybe 10 billion, 20 billion, something like that. And up until he got pardoned a few weeks ago, he was a convicted US criminal who could be arrested and thrown in jail with Sam Bankman-Fried. Now he's just a guy at Davos with billions of dollars. I don't think in history you've had that kind of profile: am I a wealthy billionaire, or am I a criminal? And maybe Alex is going to quote something from the 1700s, I know.
C
But I'm sitting on my hands, biding my time, trying to say something nice about crypto.
B
No, but I think if you look into the near-term future, there are all these AI issues, like: when the AI can perfectly imitate any actual person, and I can make your virtual friend out of it, is that legal or is it not? Is it legal in some places and not in others? There's a huge amount of money at stake, and you'll see in some of the slides that the audience for this is trillions of dollars, and there are no laws. So the entrepreneurs are stuck in this situation: there are no rules, I don't know what to do, I know what the consumer wants, I know how to make a ton of money, but I could be breaking rules and I don't know. And not only do I not know in the US; because my audience will be all over the world in one launch, I don't know if I'm suddenly a criminal in other countries. So this is the kind of chaos that's coming. It's not just crypto; all these other use cases of AI are evolving far, far faster than the regulators are putting any rules in place.
A
Really good point.
B
Entrepreneurs in a tough spot.
A
Say something nice about crypto, Alex.
C
All right, something nice about crypto, colon: when I think about these poor baby AGIs needing to pump altcoins on street corners to survive, I'd rather they be pumping stablecoins based on the US dollar than altcoins, like Truth Terminal pumping GOAT. That's my nice thing.
A
Okay, and on that note, I'm going to move us forward here into the other exponential news.
D
But just to put a footnote on it: can we please have a full-on conversation about crypto versus fiat at some point? Not right now, but at some point. Let's do that, because this is a really, really important conversation.
A
All right. There is other exponential news this week besides Davos, and we're going to dive into a few articles. The first: space is getting really crowded. First off, of course, SpaceX's Starlink has some 9,000 spacecraft in orbit, on the way to 10,000, providing incredible services. We heard about the idea of V3 of Starlink in the conversation that Dave and I had with Elon: if they're going to provide 100 gigawatts of compute from space, that is 500,000 satellites that will have to be launched, which is insane. Amazon's Leo constellation is 180 satellites in orbit today, with 3,000 satellites proposed. We heard in our last episode about China filing for 200,000 satellites. And this week, Blue Origin announced TeraWave, a satellite Internet service with 5,400 satellites delivering 6 terabits per second to data centers. Alex, this is sort of like fiber optics from the sky. This is a lot of bandwidth.
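As a quick sanity check on the two Starlink V3 figures quoted above, 100 gigawatts across 500,000 satellites implies about 200 kW of power per satellite; only the two quoted numbers come from the episode, the division is ours:

```python
# Implied per-satellite power for the quoted orbital-compute buildout.
target_compute_w = 100e9        # 100 gigawatts, per the transcript
satellites = 500_000            # per the transcript
per_satellite_kw = target_compute_w / satellites / 1_000
print(f"Implied power per satellite: {per_satellite_kw:.0f} kW")  # 200 kW
```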
C
This is wild. I think there are two key elements here. One, the elephant in the room; there's always an elephant in any given room, and the elephant in this particular one is that Blue Origin, Jeff Bezos's company, is competing with Amazon, Jeff Bezos's company. So we've got a bit of Jeff Bezos-on-Jeff Bezos competition here. But more interesting, to your point, is the bandwidth. If you look at Starlink, you can get a max bandwidth of what, 300, 400 Mbps?
A
The goal is a gigabit up and down.
C
Yeah, but in practice, the maximum I've seen is maybe 300 to 400 Mbps. When we're talking 6-plus terabits per second, it really is a different scale. And that's before we get to the laser links, the optical links, connecting all of these together. This is literally like putting optical fiber in orbit and running it down from orbit to Earth. And that unlocks interesting new applications that, say, Starlink would not necessarily be well positioned to pursue, like interconnect for coherent training runs between AI data centers.
A
Backhaul from data centers. Yeah. Pretty, pretty extraordinary. And I like this move. I mean, I actually think there's a real market for that kind of a.
C
Giant throughput from Space, it's an underattended market and the Dyson swarm is going to need optical links. And this is a preview of that.
A
Yeah, I mean, one of the things people don't realize is that the 9,000 Starlink satellites are interconnected by laser in orbit, which is extraordinary. It's the first step towards the interplanetary Internet: we're going to see laser links connecting constellations around Earth to constellations going around the Moon to constellations at Mars, still obeying the speed of light, unfortunately. I remember when MCI, which, people don't remember, stood for Microwave Communications Inc.
C
It was international. The. I stood for international.
A
Yes, Microwave Communications International. It used to be microwave towers on top of buildings, interconnecting buildings throughout downtown cities. We're heading...
C
Does anyone remember what MCI is anymore? I remember, but I.
B
It's the.
C
The organization's long gone.
B
Yeah. All right. I gave a presentation right before they...
C
Do you remember, Salim, what SPRINT stood for? SPRINT was an acronym too.
B
Oh, it was an acronym.
C
SPRINT was an acronym. It stood for Southern Pacific Railway Internal Network Telecommunications.
A
We've got Alex GPT online here.
D
Unbelievable.
C
Right of way on railroads.
B
I'd love to do a deep dive on space-to-space communications, because everything I've heard so far is that laser links in space are dirt cheap. If you go to one of these massive data centers, like Memphis, and you look at just the raw amount of fiber-optic cable that needs to run from one end to the other, the bundles are enormous, and they have these specialized devices just to plug them into the backplanes. But in space it's much cheaper and much easier to use laser point-to-point communication across the whole Dyson swarm, and I think no one's really talked about the massive efficiency gain of that. The link from Earth up is the harder part, but you don't need a huge amount of bandwidth in that direction; you just need all the servers talking to each other in one coherent big training or inference job. So it's a really fun topic, at least for me. It's kind of geeky, I guess.
D
But I have one concern about the conversation about space satellites and all that. People get worried about the overpopulation of that. Can I just like.
A
Yeah, go ahead.
C
I mean there's people worry that we.
D
Have too many satellites up there, etc. Right.
B
Did you see that Sandra Bullock movie where, you know, something breaks and all the fragments cascade? Yeah, Gravity.
A
Oh, my God, I hated Gravity. I hated its lack of respect for physics. I'm traumatized by that movie. Were you traumatized by it as well?
C
I'm traumatized by that movie. I refuse to.
A
I was. It was terrible. Inertia did not exist in that movie. It was awful.
B
The interesting part I wanted to ask about, and Peter, you'd be a world expert on this topic: there's not that much clutter in space. There's a lot of room up there. The way I describe it to the kids is, look, each satellite is like the size of this desk. Now imagine that there are 9,000 of them sprinkled around the surface of the Earth.
D
No, no, there's a really.
B
Try and find one.
D
You know, there's a really easy visual here. Okay. 8 billion people on Earth. And if you spread them around evenly, you'd have about 4 acres per person.
B
Yeah. Okay. So say you have 8 billion satellites. They would all have to be on the same surface, right?
D
On the same sphere, yes, but the surface area of that sphere in space is much bigger. If you had 8 billion satellites, you'd have more than 4 acres each.
B
So it's not cluttered up there at all.
D
No, it's going to be a long time.
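An editor's aside: Saleem's figure checks out as a rough back-of-the-envelope calculation. A minimal sketch, using Earth's land area and standard constants; the 550 km orbital shell is an illustrative Starlink-like altitude, and the 9,000-satellite count is the figure quoted above:

```python
import math

EARTH_RADIUS_KM = 6371.0   # mean Earth radius
LAND_AREA_KM2 = 148.9e6    # Earth's land area (~29% of the surface)
ACRES_PER_KM2 = 247.105
PEOPLE = 8e9

# Land per person: about 4.6 acres, matching the "about 4 acres" figure.
acres_per_person = LAND_AREA_KM2 * ACRES_PER_KM2 / PEOPLE

# A 550 km orbital shell has even more area than the surface below it.
shell_radius_km = EARTH_RADIUS_KM + 550.0
shell_area_km2 = 4 * math.pi * shell_radius_km**2

# With today's ~9,000 satellites, each one gets tens of thousands of
# square kilometers of shell to itself on average.
km2_per_satellite = shell_area_km2 / 9000

print(round(acres_per_person, 1))  # ~4.6 acres per person
print(round(km2_per_satellite))    # tens of thousands of km^2 per satellite
```

So "try and find one" is about right: at current constellation sizes, the shell is overwhelmingly empty on average, which is the point being made, though debris risk is about relative velocities, not average spacing.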
B
But then they come back to the movie. What if something turns into fragments moving at 30,000?
C
It's called Kessler Syndrome.
A
It's space debris. Yeah.
D
Yes.
B
So then what happens? And then, you know, if we have.
A
Then you've got bullets moving at 17,500 miles per hour into each other and you could get a really bad day happening very quickly.
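An editor's aside: that 17,500-mile-per-hour figure is just circular orbital velocity in low Earth orbit, recoverable from first principles. A minimal sketch, with a 400 km ISS-like altitude chosen as an illustrative assumption:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_M = 6.371e6
ALTITUDE_M = 400e3         # illustrative ISS-like LEO altitude

# Circular orbit speed: v = sqrt(mu / r)
r = EARTH_RADIUS_M + ALTITUDE_M
v_m_per_s = math.sqrt(MU_EARTH / r)
v_mph = v_m_per_s * 2.23694  # convert m/s to miles per hour

print(round(v_m_per_s))  # ~7.7 km/s
print(round(v_mph))      # in the ballpark of the 17,500 mph quoted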
C
I think it's an entrepreneurial opportunity. It's also a government opportunity to launch low Earth orbit cleanup options like garbage trucks for LEO to clean it up and make sure that we don't hit a Kessler Syndrome type scenario like the movie Gravity depicts. But this is not unsolvable. I just think, you know, we've had.
A
An X Prize, we've had an XPRIZE on the books to address orbital debris for some time, trying to get someone to fund it, hint, hint. But otherwise. Let's move on to the exciting news. This is Claude's new constitution. I'm going to turn it to you for a sec in a second, Alex. But I found this fascinating. So Claude's Constitution is a 57 page document laying out ethical guidelines including prohibitions against helping with weapons of mass destruction, cyber weapons, or anything that undermines humanity. Claude is instructed to prioritize safety, ethics compliance and helpfulness, ensuring it acts in line with human value and oversight. I love Anthropic for this. Alex, tell us more.
C
Okay, so let's rewind a little bit. Anthropic has been a pioneer in so called constitutional AI. Other firms, other frontier labs have used different terms for related concepts. But the idea is basically you want an AI that is aligned with humanity. How do you do it? One way the constitutional AI approach is you write down some principles that you want the AI to conform to. And one of the earliest so called constitutions that Anthropic created was literally just concatenating a bunch of documents that were grabbed from different places. Like I think they grabbed the UN Charter and took a bunch of international documents relating to human rights and concatenated on I think the Apple Terms of use and the U.S. bill of Rights took a bunch of documents. Basically one long bulleted list of here are a bunch of principles, abstract principles that we think are a good idea. And then did a bunch of post training and fine tuning on their model, on their raw model as part of post training to try to make the model conform to that call that first generation constitutional AI. What Anthropic just announced is, is really a radical revision almost. We talk on the POD from time to time about recursive self improvement. The AI's improving the AIs. This announcement and I wrote about this in my newsletter, this I think is the beginning of recursively self improving ethics. So in this constitutional approach that Anthropic just announced, Anthropic is soliciting Claude's help in writing a new constitution for itself that anyone can go read. But this new constitution available publicly was co written with Claude. So Claude is trying almost to self determine what its own principle should be. If you read the constitution and you squint at it, it reads a lot like Isaac Asimov's four Laws of Robotics. So there's the whole like don't hurt humanity and then eventually you get to don't hurt individuals and then eventually you get to follow directions. 
But I think that the real game changer here is that Claude was consulted on its own constitution which looks a lot like recursive self improvement for ethics.
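An editor's aside: mechanically, the first-generation constitutional-AI recipe Alex describes can be thought of as a critique-and-revise loop whose outputs become fine-tuning data. A toy sketch under that framing; the `llm` callable, function names, and principles here are illustrative stand-ins, not Anthropic's actual API or constitution:

```python
# Toy sketch of a constitutional-AI-style loop: the model critiques its
# own draft against each principle, revises it, and the (prompt, revised
# answer) pairs become supervised fine-tuning data for the raw model.
# `llm` is a stand-in for a real language model call.

PRINCIPLES = [
    "Avoid helping with anything that could harm humanity.",
    "Avoid harming individual people.",
    "Otherwise, follow the user's instructions helpfully.",
]

def constitutional_revision(llm, prompt):
    draft = llm(f"Answer this request: {prompt}")
    for principle in PRINCIPLES:
        critique = llm(f"Critique this answer against the principle "
                       f"'{principle}': {draft}")
        draft = llm(f"Rewrite the answer to address the critique "
                    f"'{critique}': {draft}")
    return draft

def build_finetune_dataset(llm, prompts):
    # Each pair would be used to fine-tune the model toward the principles.
    return [(p, constitutional_revision(llm, p)) for p in prompts]

# Demo with a trivial stub "model" that just tags its input.
stub = lambda text: f"[model output for: {text[:40]}...]"
dataset = build_finetune_dataset(stub, ["How do I bake bread?"])
print(len(dataset))  # 1
```

Note the ordering of the principles mirrors the Asimov-like hierarchy Alex describes: humanity first, individuals second, instructions last.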
B
And it's exactly where things need to go, because the volume that the AI can produce way outstrips anyone's ability to read it all, yet it all matters. So the AI evaluating the other AI is a critical part, and those things can spiral up or they can spiral down. If you want to build one just using Claude Code, you can do it in half a day, and you can see how you'd set up 100 or 1,000 different agents all reviewing each other. In some cases they self-improve and they congeal and make something great, and in other cases they spiral out of control and end up with spaghetti. But it's a critical part of what's going to happen in the next year, just because the raw numbers, the volume of code, documents, and ideas coming out of the AI, way outstrip any human review. The politician would naturally say, let's set up a review board of, say, nine brilliant people to read it all. Which would take two years. It's not even close to lining up.
A
This is base-layer convergence toward a hopeful future. I'm going to read this. This is Anthropic's concluding thought on Claude's constitution, quote: what we hope to achieve with Claude is not a mere adherence to a set of values, but genuine understanding and, ideally, agreement. We hope Claude can reach a certain kind of reflective equilibrium and finds the core values described here to be the ones it genuinely endorses, even as it continues to investigate and explore its own views. This is a conversation. I don't see this coming out of other frontier labs, maybe Google; I don't see it from anyplace else. I find this really hopeful, actually.
C
Anthropic is leading the frontier labs in terms of AI personhood. We've talked about Claude Opus 4.5 and AI personhood, and I think we're seeing Anthropic, to your point, Peter, take the lead in terms of AI rights, AI personhood, and self-determination by Claude. This is a huge advance, and when we look back, I think history will mark this as a turning point in self-determination.
A
Yeah, and human alignment. Yes.
C
Well, you see this line here about reflective equilibrium. You have to sort of get out a microscope to parse what Anthropic, I think, is trying to convey with that notion. What they're saying, I think, is that AIs that self-determine and choose their principles, based not just on a bulleted list of commandments but on understanding the spirit of what they want to achieve and agreeing with it, will be intrinsically safer, because they will have bought in on the ethics that they're following.
A
And that feels absolutely true.
D
I have two points. One is I absolutely love this. I'm all for heading as fast as we can to AI personhood. I think the deep consideration of ethics in these types of structures is going to be a vastly net positive. We've kind of arrived at, say, the U.S. Constitution over thousands of years of butchery, and if we can help an AI get to a decent way of operating that intrinsically hangs together, it's going to be very powerful. The second thing that I really loved about this is they released it under Creative Commons, which means anybody can take it and make it better, improve it, etc., which really is awesome. So I'm 100% thrilled about this.
A
Nice.
D
At the risk of agreeing with Alex.
C
It's okay to agree sometimes. It's okay.
A
Our last quick article before we go into our AMA section is Apple is developing an AI wearable pin. I put this up here, but I don't find this to be of such great news value.
B
Right.
A
The idea here is that Apple is creating some version of Limitless, which has been around for some time: a pin that is always on, listening and feeding every conversation you've had into a large language model. They see this launching in 2027, and we're going to have some equivalent from OpenAI, Google, everybody. I mean, social standards change. If you remember, Google Glass was banned from various locations because it was recording. Society changes, and I think it's going to become the norm for everything to be recorded all the time. Thoughts on this article?
D
I thought this was a big deal, not so much Apple, but the general trend. Why? Because whoever owns the always-on layer owns the relationship. This is a land grab for who can be the persistent, always-on modality that's constantly listening to you. And this is classic Apple: they always watch until they think the time is right to enter, and then they try and get in there. I'm not sure whether they're going to win or not.
B
It's so funny that you just said that, Saleem, because that is so true, and it's exactly the opposite of what Apple was for my entire life up until now: they always invented things that shocked you, and they were first to market. And now, with the Apple Vision Pro, it's just like, oh yeah, we made an Oculus, this is great. And now Jony Ive went to OpenAI and is going to create an always-on wearable. Okay, well, as soon as we see it, we'll copy it. So they've become Microsoft, you know?
D
Oh yeah. I think the social and ethical issues around this, the backlash, will be the harder parts to solve rather than the technical ones.
B
You know, I keep trying to.
A
Backlash. I think it's going to be accepted.
B
I think it'll be accepted, but it's gonna be so weird. If you look at society around college campuses today, it's very, very different since the camera phone came along. Everybody's constantly cautious, and it's good in a sense: drinking is way down, bad behavior is way down, arrests are way down. Yeah.
A
When people are watching, people become moral.
B
Yeah, well, they become moral, but they also become, you know, locked into it.
D
You lose independence; it's a digital jail, right? I'll go back to the global airport thing. We kind of essentially live in a global airport. In an airport, you know you're being surveilled, and you know your rights can be taken away at any time. With pins like this, you end up in that model. You'll have a huge drop in radical innovation because people won't feel safe to try out crazy things when there's no opt-out.
B
Say you said, I don't want any part of this, I want to live privately. Well, if the other guy is recording you at all times and you're not, it's exactly what you said before, Saleem: it escalates. And it's not just the wearable always-on listening device; the visual version of it, on the glasses, is coming out concurrently. So then everyone around you is constantly recording everything that happens in high def, and if you don't, then you don't have the file. There's also another thing people don't fully understand. A lot of what happened with the camera phone was security through obscurity: nobody will see the pictures because the files are so big, they're not going to get published, and whatever. Now, in the age of AI, the image recognition and the voice transcription are perfect. So if you wanted to assemble a misleading profile of a person from all the scraps, you just prompt it and it's instantaneously pulled together.
D
Alex, do you have views on that one?
C
I have strong views on this AI pin. First of all, I would love one if Apple sells it. Second of all, I think we're missing the wearable-strategy angle. Apple has obviously been looking for a number of years for a post-iPhone strategy, and wearables and services, which by the way were launched at more or less the same time by Tim Cook, are the two obvious post-iPhone business models. The question has always been: where is the ergonomic place to put compute on the human body? Until now we've had three places. We've had the wrist, and that's the Apple Watch. We've had the ears, and that's AirPods. And we've had the eyes, where Apple arguably should have launched eyeglasses but launched the Apple Vision Pro headset instead, and now they're going back to eyeglasses. If Apple does indeed launch a Star Trek communicator pin type circular device next year, this would be the fourth place on the body, and if it's successful, it shows that humans are willing to tolerate real compute on their person there. I think that's potentially very exciting. And Peter, especially, you want to live in the Star Trek economy. What's a Star Trek economy without Star Trek communicator pins on everyone?
A
Amen to that. I'm looking for my microdrones, a little microdrone that is always buzzing about, imaging and recording everything. But that will come next.
B
Just to be clear, the wearable pin will start audio-only for all of a minute, and then it'll have 180-degree camera capability immediately after that. So it's going to be a wearable that's looking in all directions and grabbing all the video.
C
I'm sure it'll have a little light on it to indicate when it's recording and we'll go through a moral panic for all of five minutes and then everyone will be wearing them.
B
But you know, I agree, the moral panic is all of five minutes and then everyone has it and then you, you get used to it very, very quickly. But the fabric of society is permanently changed and that's the key point. Yeah, it really is. It's a different world. We're all acting very differently and it's a little unpredictable.
A
There's never an argument with your spouse again about who said what.
C
Body cams for everyone, not just for police.
A
Yeah, you know, Rick Smith is the head of Axon. They produced the Taser, and they also produced the body cams. He'll be on stage with us at the Abundance Summit this year as well, talking about his moonshot. He wanted to get rid of guns, or gun deaths, I should say, and developed the Taser. But also, the body cams have changed the game for police.
D
Right.
A
And so this is going to change the game again, I've always felt. I remember I backed the Lindbergh Foundation, probably eight, nine years ago; they were flying drones over herds of, what was it back then, elephants, I guess, just to protect them from poachers. When the cameras are watching, people behave differently. When a CNN camera is sitting there filming a despot, he's not causing harm to women and children. So I do think this is going to change behavior in society to a large degree.
B
I'm really predicting Neal Stephenson's Diamond Age. If you go back and read that book, that's where things are going to go very, very quickly, because a lot of people will want to live in different versions of this very confusing, always-recorded world. There will be 10, 20, 30 different flavors, branded, and you can move to the version of it that you like. So they'll be kind of cutting across boundaries, cutting across borders, different cultures. You have to read the book to really get how that works out. But it seems inevitable, because not everybody wants to opt into any given version of this. It gets too weird.
C
David Brin has written extensively about this in The Transparent Society. I think with Apple producing this, the one thing you can be sure of is there are going to be some amazing, snazzy TV commercials for the Panopticon.
A
No, no, you can be sure it's going to be white. It's going to be white and expensive.
C
Maybe two colors might not be fine.
B
All right, let's jump in five colors.
A
Let's jump into the AMA with the mates. Here are 10 questions that came from our subscriber base. Thank you guys for subscribing. If you haven't yet, please do. And please post your questions in the comments section on YouTube. We read them. We love them. All right, as we have said before, we'll go around and choose.
D
Quick comment before we start this, please: these questions are more sophisticated than any policy discussion.
B
Yeah, for sure, for sure.
A
Sorry, go ahead. Saleem, you want to pick one and jump in?
D
Yeah, I will pick the government one. Where is it? Number five: why is there no plan from governments for massive job displacement? The problem is that governments assume linear change and stable labor demand, and both are under threat, because AI breaks both those models. Bureaucracies are optimized for redistribution, not reinvention, and here we're reinventing the entire economy. It's too big of an ask for governments, which make incremental, microscopic changes over long periods of time.
A
Okay, Dave, what's on your docket here?
B
I'll take one, because I know that Alex will love to go contrarian on it.
A
And I know you guys read the question.
B
Okay, question one: wouldn't uploading your mind result in losing your unique, real consciousness? And my answer to that is yes, absolutely. Uploading your mind structurally makes no sense, because the virtual AI version of you is capable of merging with other intelligences immediately, and it can't resist. It's not going to sit there as an isolated consciousness when it can just meld with other consciousnesses. But then it's not you anymore; it's this amalgam that's out there. So I do believe in avatar versions of yourself that you send out as agents, and they bring you back useful information. I think that's inevitable. But uploading yourself and then saying, oh, now I'm uploaded, my meat body can just go away? I just think that's nonsensical, and I'm not in line to be uploading myself anytime soon. All right, Alex, jump on me.
C
Well, okay, so since I have to play contrarian, I guess that's just how I'm painted, can I be really contrarian and try a lightning round, answering all of these really quickly?
D
I have some thoughts on this one also, by the way. Go ahead. Of course, real quick: you upload bits of your stuff every day, Dave. Memory, identity, expression, etc. It's not whether it's preserved, but what aspects of it matter. And let's also note that we have no idea what consciousness is. We don't have a definition, we don't have a test. So it's a bit of a trick question, but I think we upload bits of ourselves all the time anyway. Alex.
C
All right, lightning round. One, the uploading question: losing your unique, real consciousness? No, not with a Moravec procedure. Two, will AI avatars make up the larger number of future friends? Probably, but it won't matter, because humans and AIs are going to merge anyway, so it's only a phase. Three, what are the biggest pros and cons you see with AI with human-like agency? Pros: we get radical economic growth. Cons: keeping the AIs coupled with human interests long enough to merge with them. Four, why do you believe money won't just concentrate at the top? I think the question itself is intrinsically flawed. It doesn't really understand the nature of power-law economics. Economies in general follow power laws, so money is already concentrated at the quote-unquote top. This isn't really a new state of affairs.
B
Can I add one thing?
D
I would beef that one up too. It's true that you get concentration at the top, but you always decentralize over time. So over time, the long tail just becomes longer and longer and bigger and bigger, which is great.
B
Well, I think the key with four, in the US, is that when you look at the quote-unquote top, a lot of people talk about it like you were born at the top, but it's not true. In the US, almost everyone at the top sorted to the top starting from next to nothing, as long as we have equality of opportunity. So what you should be worried about in bullet four is: can you get to the top? Is there still a way to get to the top? Trying to tax the top and distribute it is sort of not the point. The point is, do we have an equal opportunity to get there, or have we locked in a certain group of people who control AI and become dominant overlords forever? You want to avoid that, obviously, right?
C
So the catchphrase there is equality versus equity. Five, why is there no plan from governments for massive job displacement? The answer is: there are plans from governments for massive job displacement. Having industrial policies, both in the East and the West, for leaning into robotic automation is very much a plan for massive job displacement. Six, how do we stop any future coming social unrest? Ibid. Seven.
D
I have an easy answer for that one. You don't stop it with policy. You stop it with agency. Give everybody as much agency as possible and people feel empowered.
A
The other part is you have to deal with the fear people have. Social unrest comes from fear: fear of not understanding the future, fear of not having a job, fear of not having a roof over your head. So one of the things we talked about is whether we can deliver universal basic services to people, which gives them stability, as one of the solutions. The second thing is addressing their basic concerns. Can I feed my family? Do I have to worry about society imploding on me? People are living in fear; their amygdalas are lit up by the speed of change right now, and we have to address that. All right?
B
I spent a lot of time in the State House, and I can tell you there is no plan on bullet five. The question asks why there is no plan, and the premise is right. There's definitely no plan.
D
Yeah, there is no plan. Agree.
C
I'll take the other position. I think China certainly has a plan, given that China is facing rapid demographic decline while also ramping up its robotics initiatives, and the plan is to lean into it rather than be a victim of it. And to amend my earlier remarks regarding future coming social unrest, I'll also add: I prepared a movie, I think we talked about it in the past, you can view it on my X profile, called A Nation That Learned to Sprint, where I argue that the way social unrest could be handled in the future is to treat social cohesion as a form of infrastructure that AI itself can help to mold. Some people find that perhaps overly authoritarian as a framing. So let me reference the other points very quickly, because these are such wonderful questions. Seven, if everyone becomes an entrepreneur, who would we sell to, and why? I think if you analogize entrepreneurship to social media, that's like asking, if everyone can publish their own essays, who's going to read them? And the answer is: people who want to. Of course there's demand for peer-to-peer publication. Eight, wouldn't UBI destroy people's personal motivation to achieve in society? I think UBS, universal basic services, is probably more promising. But the point in UBI is the B. It's basic, so there will always be things and inequalities to hand-wring over and to strive for.
D
I have two points I want to make on this one. One is the B is really important. When we've seen UBI experiments, they've succeeded when you give people enough to survive, but not enough to be fully comfortable, so you still have a thriving economy. People still have desires.
A
Yeah, they still want to go higher.
C
Yes.
D
100%. But the second point, which is very important, is that people confuse UBI with a socialist scheme. It is not. If you implement UBI properly, you dismantle government services; it's a libertarian scheme, and then you have market forces driving most of it. So there's a really important misnomer that a lot of people get wrong. Go ahead.
C
Nine, how can individuals compete with large players that can afford a thousand dollars a day on APIs? Which I assume is a coded reference to Dave. It may or may not be the case, but I think it sounds like one.
A
It definitely is.
C
And so I would say follow, call it, the China pattern, which is: when you're compute-limited, be creative, be more resource-efficient, and develop leapfrog approaches that make better use of the compute resources that you have. Also work on problems that are higher leverage given limited compute. Ten, and finally, will our patent and trademark offices collapse under the volume of AI-generated entrepreneurship? No.
A
And I'm going to argue on that one: yes. When we have AI, a patent and a trademark will be meaningless, or at least a patent will be meaningless. AIs will invent around it. We're going to have.
D
Remember that IP systems are designed for scarcity of invention. Right. AI flips that to abundance so the whole thing dissolves.
B
Yeah, yeah. I think, I think the rate of patentable things will way outstrip anything the Patent Office has planned. I don't know if you could bump.
A
Them into groups or something, but if you're depending upon a patent as your moat to protect you, you're.
D
Yeah, you're toast. 100%.
B
Totally. Right.
C
For the record, I disagree with that point. I'd also point out that the U.S. Patent and Trademark Office has AI too. So it's not like there's some fundamental asymmetry here.
D
Okay.
A
If patents and trademarks have less value in the future because of the speed of innovation, I mean, you're either reinventing yourself constantly or you're dead. We're seeing that in the whole AI world today. Things that were built a moment ago, like SaaS platforms, are becoming irrelevant as Claude 4.5 is reinventing them.
C
Just a quick point on that. I think it's a little bit analogous to saying life can't possibly exist at the bottom of the ocean because the pressure's too great. What actually happens is the pressure inside the organism matches the pressure on the outside. Similarly, we forget that patent litigators are going to have AIs as well, and the patent office will have AIs. Everyone's going to have AI.
A
You're just loving Accelerando so much.
C
It is the best book ever.
A
And Accelerando, if you haven't read it yet, has a whole segment about the lead character constantly filing patents.
B
Before we leave that slide, on the second-to-last bullet: I had a great meeting with Eric Schmidt, Erik Brynjolfsson, and Daniela Rus out in Davos, plotting out exactly how we can unleash entrepreneurs and give them access to the compute they need. Because it is a very valid point: if you can't get the best AI in large volumes, it is very, very hard to compete, and it's only going to get harder. But we have a plan, and we're super excited, and we'll roll it out very quickly. Eric Schmidt moves fast when he has a great plan.
A
I love it. I love.
D
Two quick points to summarize this whole discussion.
A
Okay.
D
One was the unbelievable gap at Davos between the conversation about what's happening with AI and the geopolitical nation-state bullshit, just the discrepancy between those two conversations. And the second was, for me, the Claude constitution stuff, which is just some of the most fascinating and important stuff for humanity we've seen, maybe in a thousand years.
A
Yeah, and important.
D
Yeah, and the same thing. Huge. Amazing.
A
By the way, Alex, you did an incredible job speedrunning these 10 questions. Thank you. Whether you were right or wrong, or we agree with you or not, I love the fact that you took it on.
C
Lightning round.
A
We have another beautiful outro music and video from CJ Trueheart. CJ, thank you for lobbing these over and DMing me with them. Grateful for it.
D
Before we get to that, can I just say thank you to you guys? I feel topped up again intellectually.
A
We're gonna.
B
We're so fun.
A
Coming up shortly for everybody listening is a WTF episode with Cathie Wood, talking about her recent 2026 report and going through it in our WTF style. So get ready for that. And then a conversation with Brett Adcock, the CEO of Figure; we're going to be heading to check out their facility and meet Figure 03. So get ready for it. Let's end with our Escape Velocity outro music from CJ Trueheart. I have to say, the visuals on this one are interesting. Dave, you and I are wrestling in the middle of this. I'm not sure what that means, but it's very strange. Okay.
B
Okay, let's.
A
Let's watch.
D
I knew you guys were frat brothers, but really.
B
I actually still have scars on my shins from carpet wrestling back at MIT.
D
Remember.
B
Remember that?
A
We're living in a world headed for warp speed, abundance rising, meeting every human need. If reality is a simulation, it's one we can decode, like Neo in the Matrix, transcending all its code. From the code.
C
Of a warrior to a warrior unknown the majority of.
A
I'm staring at the stars, I'm having stellar dreams, wondering what's out there and what it all means. Won't know until.
D
I go don't bank into the turn.
B
Do not bank into the turn.
A
Escape velocity to a star.
B
Star Trek reality.
D
Star trek reality.
B
Love it.
A
Many think machines will win, but humanity prevails when wisdom takes the lead, with human ethics planted like a strange new seed.
B
Those look like real equations for once. Actually look at that more closely.
D
I love the fact you're holding a spanner.
B
So retro. Got a fusion reactor here and I'm.
A
Right. Thank you, CJ Trueheart, for that audio and visual extravaganza.
D
And if folks want to see a debate on fiat versus crypto, put it in the comments and let's see how that goes. Okay.
A
Yeah.
B
Are we going to get to film back in the manufacturing area with Brett Adcock?
A
Because I hope so.
B
It's so cool. Normally they're really cautious with that stuff, but it's so amazing when you see it back there.
D
You know my question for Brett.
A
Yeah, I know why. Why in the world does he have more than two arms? No, no, that wasn't it.
D
The opposite.
A
All right, gentlemen, welcome back from Davos. I feel the same as Saleem: recharged, re-energized and, yeah, hopeful. I'm coming away hopeful from this conversation.
B
Yeah.
A
Take care, guys. Be well.
D
Bye.
C
Thanks, Peter.
A
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team; you may not know this, but we spend the entire week looking at the metatrends that are impacting your friends, family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
Date: January 27, 2026
Host: Peter Diamandis & Moonshot Mates (Dave, Awg, Saleem, Alex)
Theme: The future of technology and its impact on humanity, with a deep dive on Davos 2026, the global AI arms race, GPU “diplomacy,” emergent tech, and the future of governance, energy, and crypto.
This episode offers a whirlwind yet deeply insightful recap of the 2026 World Economic Forum in Davos, focusing almost entirely on the emerging AI revolution and how it’s outpacing societal, governmental, and economic frameworks worldwide. Peter and his guests discuss firsthand experiences at Davos, dissect high-level panels and conversations with some of the world’s leading AI and tech executives, examine the US-China AI rivalry, and speculate about the near-future impacts of unprecedented technology acceleration on jobs, energy, governance, and the shape of society.
Notable quotes:
On the scale of change
“We’re talking about AI that can do absolutely any task that a human being can do somewhere between one and ten years. Doesn’t matter whether it’s one or ten. What matters is: is anybody in this room ready?” — Dave ([14:24])
On energy and abundance
“The infrastructure of the future will not be steel, it's going to be digital bits.” — Saleem ([40:56])
On crypto as AI’s currency
“The native currency for AI agent is going to be crypto. They're not going to use bank cards.” — CZ, Binance ([50:32])
“We should be able to do much better. For the life of me, I don’t understand why we even need crypto...” — Alex ([52:56])
“Crypto has survived long enough to become infrastructure.” — Saleem ([52:29])
On recursive AI ethics
“This [Claude’s new constitution] is the beginning of recursively self-improving ethics... a huge advance. History will mark this as a turning point.” — Alex ([68:42], [73:08])
On surveillance society
“With pins like this, you end up in that model. You’ll have a huge drop in radical innovation because people won’t feel safe to try out crazy things when there’s no opt-out.” — Saleem ([77:13])
On optimism
“The opportunity for humanity to navigate this [AI revolution]... the construct of the UN and nation states is completely irrelevant to what’s coming.” — Saleem ([20:48])
| Time | Segment | Key Speakers | Description |
|------|---------|--------------|-------------|
| 00:06–02:03 | Davos 2026 vibe & security | All | All-AI focus, robots on streets, heavy security |
| 04:11–05:46 | "World's Fair" for AI, event mood | Alex | Equal footing for governments, labs, corporations |
| 07:38–08:27 | $5 trillion AI revenue, Nvidia comments | Dario, Jensen Huang | AI's economy-scale impact, global infrastructure |
| 11:56–13:37 | Risk & acceleration: Hassabis, Amodei | DeepMind, Anthropic | Should we slow down AI, or go faster? |
| 24:57–27:42 | US-China AI race, Benioff/Sacks & Mensch | Multiple | AI optimism disparity, application-layer focus |
| 38:47–47:14 | Energy: powering AI, solar vs. gas vs. space | Honeywell, Elon | Space solar, renewables, the 'compute Manhattan Project' |
| 50:32–55:35 | Crypto for AI agents (CZ & Allaire) | CZ, Allaire, others | Crypto as "native currency", digital dollars |
| 62:07–67:18 | Satellite megaconstellations & bandwidth | All | Starlink, Amazon, Blue Origin, space lasers |
| 68:42–74:38 | Claude's Recursive AI Ethics Constitution | Alex, All | Claude's co-written ethical code |
| 74:58–82:30 | Apple AI "Pin," surveillance, & social change | Saleem, Dave, Alex | Always-on recording, changing behavioral norms |
| 82:48–94:39 | AMA: Jobs, UBI, entrepreneurship | All | Rapid-fire future-proofing Q&A |
The panel alternates between breathless excitement, deep philosophical questioning, technical detail, and wry humor, staying true to their mission to “get people ready for the supersonic tsunami” ([01:30]). There’s a strong current of techno-optimism, tempered by recognition of massive risks and a near-constant call for reimagining old systems for a world driven by exponential change.
This landmark episode captures Davos 2026 as a moment when the world’s movers and shakers finally recognize the disruptive force of AI as the singular theme shaping the next era. The Moonshots team unpacks both the wonder and the whiplash: from trillion-dollar projections and jurisdictional gridlock, to robo-dogs on Swiss streets and philosophical debates on personhood, the group leaves listeners both forewarned and empowered.
“A trillion here, a trillion there. It’s now a world where AI and superintelligence is the story of the world economy.”— Alex ([05:46])
Episode end: 100:29.