
A
It's the Musk vs Altman lawsuit. Musk has sued OpenAI for $100 billion.
B
I kind of figured behind the scenes, they don't actually hate each other. But these guys actually hate each other to, like, the extreme.
A
OpenAI is valued at 70 times revenues right now. Their last raise was at an $852 billion valuation. These numbers are insane.
B
It's like nothing we've ever seen. And the timeline is so much shorter than we've ever seen before.
A
$3 billion a day being invested.
C
No one said the singularity was going to be cheap.
B
No one's being honest about this. If you take a random white-collar worker today, what are the odds that that randomly selected job can be replaced two years from today? We told you already that AI will be able to do everything that a white-collar worker does, imminently.
A
Now that's a moonshot.
B
Ladies and gentlemen. I had so much fun this morning.
A
What happened this morning?
B
Alex was supposed to run a panel, handing over the...
C
The torch to Dave to moderate a panel.
B
I moderated it. I had to wing it, which is so fun because I have no accountability whatsoever, and I can ask anything I want. And it was the most fun ever.
C
A lot of Moonshots fans there.
B
Huge. Yeah. Probably what, 40, 50% of the crowd, something like that.
A
Hopefully 100% after you guys finished.
C
I did probably seven or eight panels by the end of it. And the first time I polled, I would say maybe 80% of the audience watched Moonshots.
A
Nice. All right, you guys psyched? You guys ready?
C
Ready to talk our own book, Peter.
A
All right, let's do this thing, everybody. Welcome to Moonshots, another episode of WTF, here with my extraordinary moonshot mates: DB2, our emperor of exponential investments. Good to see you in, you know, one of your outfits there. You must have a whole set.
B
You know, it's funny. The team just drops them on a chair here and says, we got four choices for you. What do you want?
A
Lobsters. And D.B. you know, one of the things I like coming out of the Abundance Summit: I have all of my Summit wear. Here's my "Powered by Moonshots." "Powered by Gratitude." I don't have to think about what to wear in the morning anymore.
B
I like any of those. Are you going to cycle through all of them? Really funny ones.
A
I am. Yeah, for sure. And our resident genius, Alex Wissner-Gross. AWG. Good to see you, pal.
C
Wide awake, you know.
A
I know you didn't sleep.
B
Not much variety of wardrobe on Alex, though.
A
Oh, my God. And we pulled Salim off the ski slopes again.
B
Again.
D
Yeah, I grew the beard to protect against the sun, but nothing protects against you guys.
C
You're well positioned, Salim, to lecture the world on UBI post ski slopes.
A
You know, this is the most fun I have all week. I spent probably the better part of 12 hours prepping for the show today, just going through all of the notes that we've all submitted, all the work that Gianluca and Dana and Nick did. So thank you to them. And for everybody watching, this is our chance to give you some optimistic visions of the future and what's going on in exponential tech and AI. We are the number one podcast in AI and optimistic visions of the future. Welcome to WTF Just Happened in Tech, gentlemen. It's good to be back on a recording basis, twice a week, every week. This is our second catch-up show after our hiatus, or spring break. And let's jump in first: these are my spring break photos, by popular demand.
B
Wow.
A
So this is the native wear in Morocco. That outfit is a djellaba, and the headwear is just to protect against the sun. We went camel riding with the family. It was amazing.
B
So, you know, camels spit. They don't bite, but they spit. You shouldn't really stick your head right there.
A
The camel was eating my headset in that image. All right, let's move on. But Morocco was amazing. The Sahara Desert was extraordinary. Looking at the Sahara: there are about a thousand times more stars in the universe than there are grains of sand in all the deserts on Earth. Just to put the size of the universe in perspective, it's extraordinary. Okay, let's talk about the 2026 AI economy. It is literally going through an exponential explosion. So much going on. Let's jump in first to the story on xAI. In our last pod we covered Anthropic and OpenAI, principally, not xAI. A lot's going on there. In particular, a lot of signals coming from both Elon and from the new president of xAI, Nichols, saying we're clearly behind and we've got to catch up. So the same playbook is going on. Elon is basically reorganizing the entire deck. Eight founding engineers left, including three co-founders, and he's using SpaceX engineers to fill the leadership gap. We've got, as we discussed in the last pod, a $2 trillion valuation predicted for the IPO this coming summer. And it's a lot of movement. I mean, I don't know about you, Dave, but the idea of having to reorganize my entire leadership a couple of months before an IPO seems really harrowing, doesn't it?
B
Yeah. So, you know, it's funny, though. If you look at Elon's playbook, he is the master of scale and manufacturing, you know, Tesla and SpaceX. But AI training is different. Building the Colossus data center is right in his wheelhouse: record time, a million GPUs. But these training algorithms are really finicky. I don't know if you remember, back in the summer of 2024, OpenAI was trying to get O3 out the door, and they had a training run, rumored to be $500 million of compute, that had a bug, and the whole thing was not learning. The whole time it had bad data going in; the whole time it was just burning up GPUs and not producing anything, and it set back their entire program. That kind of stuff happens in software, where orders of magnitude of work get thrown away all the time. And that may be new terrain to Elon, and he might have to rethink his operating and management model, because the same thing happened at Meta. You know, Meta got way behind despite huge compute, and they had to fire everybody and start over again, and they're
A
still way behind, it looks like.
B
Yeah, it's hard to catch up.
A
Yeah. I mean I love this quote from Elon. He says XAI was not built right the first time around, so it's being rebuilt from the foundations up. And again, I mean how do you think about that while you're pricing an IPO saying our entire future looking revenue has to be rebuilt from the ground up. That's extraordinary.
B
That is extraordinary, isn't it?
C
I'm not sure it's as simple as saying, organizationally, it worked. I remember, I think we talked about this on the pod in a number of previous episodes, talking about the Grok model series: they smell like they're benchmaxed. That's sort of the elephant in the room when talking about the Grok-with-a-K models. Historically they do have access to the Twitter/X data firehose; that's the upside. But the downside is, at least certainly for the earlier set of the xAI Grok models, they really smell like they've been benchmaxed on a few hand-curated benchmarks. And I don't know whether that's in fact the ground truth behind the scenes, but reading between the lines of the Elon quote that it was built incorrectly the first time, something like that would be my suspicion. And now that there is new leadership, and the VP heading Starlink at SpaceX, as we talked about on the last episode, is now the president of xAI and gutting the engineering team, I would expect that they're taking a look at making sure that benchmaxing for particular benchmarks isn't what happens. This is purely speculative, admittedly. And I think in this era of general reasoning models, where some would say Meta's new models, the first under Alexandr Wang's leadership, maybe have a bit of a smell of data-oriented fine-tuning versus reasoning-model orientation, xAI, if it wants to stay in the frontier, which right now is three labs plus xAI plus Meta, really can't afford not to have the world's strongest reasoning models, and can't afford to just benchmax to vanity benchmarks anymore.
A
Salim, you talk about agility in organizations all the time. I mean, this has got to be, like, maximum agility.
D
You know what I find interesting is that the org chart is now part of the product stack, almost, right? It's becoming part of the product, and depending on who you move, the product shifts like crazy.
B
Elon is very, very hands-on. When you launch a rocket and it blows up, it's pretty obvious. You remember he threw that huge ball bearing at the window of the Cybertruck, which was supposed to be bulletproof, and the thing broke. It's like, okay, guys, you're fired; next guy. But when you come to AI training and the benchmarking, if the guys are lying to you or benchmaxing behind your back, it's actually much, much harder to call bullshit on it. You remember when we interviewed him, he was like, let me show it to you right now. And he had clearly been manually checking: this will blow your mind, this will blow your mind. But you know, that's his operating model, that's his mode. And it's a little easier for the AI guys to blow smoke up your ass than for the rocket guys, the car guys, the data center construction guys.
A
I think it was "this will blow your mind" and "this will roast you royally," with what's going on.
B
That's what was going on. Yep.
A
So, you know, here we go. SpaceX AI's Colossus 2 is training seven models. And again, Elon has tweeted this out a few times: we have some catching up to do. So here we go, they're training up these seven models: Imagine version two, the next-gen video generation; two variants at 1 trillion parameters; two variants at 1.5 trillion parameters; a 6 trillion parameter frontier-scale LLM; and a 10 trillion parameter model. And you know, Elon loves the largest; he's got that in common with Trump. So he's going after a 10 trillion parameter model. But you know, parameters don't directly correlate to capability, do they?
B
Alex is going to have a field day with this. I'm going to sit back and enjoy what Alex does next.
C
I mean, to Elon's credit, at least he's being transparent about the number of parameters in the models. The other frontier labs by and large no longer report the number of parameters in their models. So I think there are a few things worth noting here. One is that he's going up to 10 trillion; the other frontier labs, certainly the top three-ish, no longer report whether they go up to 10 trillion parameter models. For example, in the last episode we were talking quite a bit about Mythos. I don't know how many parameters are in the Mythos model; I could speculate based on cost, but I just don't know the ground truth. So knowing that we're now going up to 10 trillion, versus the historical soft ceiling of approximately 1 trillion, or 1.5 trillion-ish, that was widely reported, I do think this is an important element of transparency. At the same time, now that we have access, thank you, Elon, to the number of parameters, it's worth noting that the ceiling on parameter counts is very much intact after all of this time. The fact that an aspirational frontier lab is still maxing out at 10 trillion parameters means that the parameter scaling race seems to be over. If it had continued, remember, for a while there, as with the clock speed scaling race, which ended in the mid-2000s or late 90s depending on how you count, we should be in the hundreds of trillions or higher of parameters right now. That hasn't happened. We've plateaued in terms of the number of parameters in frontier models, and that's driven in part by the reasoning model revolution and in part by distillation, which go hand in hand. So those are some preliminary thoughts. It's sort of interesting to me that he hasn't yet merged video generation with all of the other models.
Google DeepMind has made lots of noises about starting to merge video in as a first-class modality with their multimodal reasoning models. Again, I don't have access to the ground truth for how capable Gemini general-purpose models are at video generation. We've seen, obviously, that Google's video generation models have been kept distinct from a user-interface perspective; presumably they're diffusion-transformer based rather than transformer based. We don't know. Punchline: I would say this seems like a healthy family of models for SpaceX AI, the newly merged entity, to be offering, but there really aren't any big shockers in terms of the ranges, other than maybe that they've abandoned the low end. Google is very much tending to small parameter counts, sub-trillion; in a few cases Google is releasing, via the Gemma models, few-billion-parameter models. Elon has completely abandoned the low end in favor of brute-force scaling, which is exactly what I'd expect from him anyway.
A
You know, Colossus 2 is running about 700,000 H200s and GB300s, and the estimate is it's $18 billion in hardware. And so the question is: is running a 10 trillion parameter model a waste, or does he expect to really get outsized performance from that? Because it doesn't correlate directly, does it?
C
Well remember, not at all.
B
It's tricky.
C
The way reasoning models are trained these days is usually, at least according to my understanding from all of the other frontier labs, you train the largest model you possibly can and then you distill it down to smaller models. So it's not as if necessarily the 10T model even needs to be released. It might be for the purpose of serving as a teacher model that can then be distilled down to more releasable models.
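The teacher-student distillation described here can be sketched in a few lines. This is an illustrative toy only; the labs' actual recipes are unpublished, and the array shapes and temperature value below are assumptions. The core idea is training a small student to match a large teacher's softened output distribution via a KL-divergence loss:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean KL divergence from the teacher's distribution to the student's.

    Minimizing this trains the student to reproduce the teacher's
    softened output distribution, position by position.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = (p_teacher * (np.log(p_teacher) - np.log(p_student))).sum(axis=-1)
    return kl.mean()

# Toy example: a large "teacher" and two candidate students produce
# logits over a 5-token vocabulary at 3 sequence positions.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(3, 5))
student_bad = rng.normal(size=(3, 5))                      # unrelated to teacher
student_good = teacher + rng.normal(scale=0.01, size=(3, 5))  # close to teacher

print(distillation_loss(student_bad, teacher) >
      distillation_loss(student_good, teacher))  # True
```

In a real pipeline the 10T-parameter model would play the teacher role and a much smaller deployable model the student, which is why, as noted above, the largest model never needs to ship.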
A
All right, well, this is what's going on in the Elon world right now. And I'm sure, you know, Elon always runs at red alert. I mean, 24/7, sleeping on the floor. Nobody works five-day work weeks there. It's, you know, what would it be, 8am to midnight, seven days a week, is my guess, in the Elonverse. It's a management style.
C
Some would say management by crisis. It's certainly a unique management style, but a very effective one.
A
Yeah, and people love it. I mean, he's got a massive MTP, right? And driven by that MTP, people are lining up to come and work for any of his companies. This is a story we can dig into here. Like I said, it's pay-per-view TV: the Musk vs. Altman lawsuit. Musk has sued OpenAI for $100 billion, against Sam Altman and Greg Brockman, accused of fraud and breach of contract. The trial begins April 27, so just a couple of weeks from now. And one of the things he's also asked for, in a recent shift in the trial, is for Altman and Brockman to step down from leadership, as well as reverting to a nonprofit. And that's a pretty extraordinary move. And guys, this goes on at the same time. Did you see the video I sent you of the reporter who did the New Yorker article? Did you have a chance to watch that? I sent it in our WhatsApp group. And it's chilling. That reporter summarizes the article and what's going on, and it's a pretty extraordinary piece that came out in The New Yorker, which we talked about in the last podcast. But at the same time that the lawsuit is going on, that timing is kind of suspicious. I wonder who incentivized that to come out.
B
Oh my God. What a conspiracy theory. Throw that in the show notes. Man, we've got to get everybody to watch that.
A
Salim, any thoughts on this one?
D
You know, I think this is theater. There's a lot of vitriol here, a lot of vitriol flying around. I don't know how to frame this or think about this, except that it's shifting out of, like, strategic and startup logic, and into the geopolitical. This is a big trial, right? For me, this is a governance war disguised as a legal war.
B
Right.
D
The real question is who gets to steer these systems that have, like, quasi-civilizational impact. And that's the fight.
A
Can you imagine this? Jury selection is beginning on April 27th in the Oakland federal court. Could you imagine being on the jury for this? And who do they pick as jurors?
B
You get on this one, you'll be there for months, man.
A
Oh my God. But the inside knowledge, I mean, first of all, I wonder if any of this is going to be made available post facto or if it's going to be televised or any of that. Any ideas?
B
Do we know? Does anyone know?
A
I don't know.
B
Can we get to see it as it happens?
A
I don't know. Maybe Dana or Gianluca can look in the interim and let us know. And then, who do you choose? Do you choose people who are knowledgeable in AI? Okay: do you use ChatGPT? Yes? Well, then you're off the jury.
B
Well, if the trial starts on the 27th, the jury selection will be, like...
A
No, the jury selection begins on the 27th, actually.
B
Oh, okay. Okay. And we'll track it. All right. We have some legal research to do. This is going to be entertaining, to say the least.
C
I would note, again, looming in the background is the OpenAI IPO. And if I were on the defense, I'd probably be thinking about where this settles. It would seem to me, again, as a third-party observer, I don't have a stake on either side, that one of the opportunities for convergence would be granting Elon some sort of equity stake on the cap table in an ultimate IPO, which my understanding is he doesn't have. And maybe that's where convergence and some sort of ultimate pre- or post-trial settlement option lies.
A
Here's my prediction. They're going to settle, and the settlement is going to involve Sam stepping down as CEO and the company continuing as a for profit.
B
I'll throw that on Polymarket. That's actually a really good guess. I mean, obviously it's unpredictable, but Sam has many, many investments in AI companies and no shares in OpenAI.
A
Yeah.
B
And I don't think Elon cares a whit about the $100 billion; he cares about the bullet aimed at Sam and Greg. Funny that he's targeting Greg too, but I guess they're a package deal now. You guys have got to go, and that's the end of that. Man, that's brutal for OpenAI.
A
Yeah. A couple of notes here from the research I did: the case gained momentum when the discovery process revealed Greg Brockman's 2017 diary entry stating that the nonprofit commitment was a lie. And it was that journal entry that allowed Judge Gonzalez Rogers to let the case proceed.
B
You know, it's funny. I always used to think that these hatreds were fake and that everybody was really fine behind the scenes. Remember, we were at OpenAI meeting with the team there and talking about XPRIZE and the charity. And then the next day, I talked to one of the guys, Mark Chen or Kevin Weil, I forget. And they said, yeah, right after we met, we went over and had drinks with the Anthropic team to see if maybe we want to work on it together. I was like, okay, you guys are really friends under the covers; there's no way you go out and have drinks otherwise. So I kind of figured, you know, behind the scenes, they don't actually hate each other. But these guys actually hate each other to, like, the extreme.
C
I'll maybe register a note of sympathy for the defendants in this case. Pioneering a model for a research lab such as OpenAI, which, again, was probably responsible for saving us from a present recession at this point, and certainly for accelerating the course of the Singularity by at least a few years, perhaps many more. I'm very sympathetic to the defendants from a corporate governance perspective. It wasn't necessarily obvious in the early days of OpenAI that, say, a public benefit corporation was the natural corporate structure. They iterated their way toward discovering that generalist large language models were how we got AGI, and then toward turning that into a business model that could afford the capitalization to build out at scale. All of this they backed into. I think if they knew then what they know now, putting Elon and his investment aside, in the early days of OpenAI it would have been structured very differently. So I, for one, am sympathetic to the defendants. History isn't always clean. It isn't always the case that everyone knows ahead of time exactly the right governance structure for what ultimately is going to turn the world upside down. But I would say, to their credit, they ultimately have iterated their way, in compliance with state authorities as best I understand it, toward a more modern governance structure that reflects the revolutionary company that they are. And no, OpenAI has not paid me for that statement.
A
Salim, you and I went through this process with Singularity University. You know, we started as a nonprofit because we thought, you know, that's what a university needs to do. And then we discovered a revenue engine in the executive programs. And we said, you know, being a nonprofit is hard because you've got to constantly raise money all the time. And if you want to do anything big and bold in the world, you need an economic engine to power it. So we flipped it into a for-profit, a public benefit corporation. We did the exact same process that OpenAI is doing right now. I've sworn off nonprofits myself, because at some point, having a business engine that generates income and allows you to do things in the world is super valuable.
D
Yeah, it was a crazy time. I've done seven startups before Singularity, and this was five times harder than anything, because you've got all the nonprofit stuff and you still have all the startup issues of cash flow and whatever. We built it with a team of five people in the first year. Then you have NASA regulatory, then you've got faculty politics to add to it. Then you've got the Ray and Peter thing, and Google, and Cisco. It was just dimension after dimension of complexity.
A
Going from a nonprofit to a for-profit, my analogy is: you're flying an airplane with propeller engines, and in flight you're stripping those off and replacing them with jet engines.
C
So I'll go further, and I...
B
I'll push back on one sentence that Alex said there: "in compliance with state and federal regulations, as I understand them." I'm pretty sure this situation is completely untested in case law, and that's what they're going to try and figure out now. Is it, or is it not, legal to start a nonprofit, raise money from people on a mission that's a nonprofit mission, and then take the intellectual capital and the physical capital from that effort and turn it into something else? Is that fair to the initial investors or not? And is it legal? I'm pretty sure this case will set the precedent for all future time. But it's not tested in history; I don't think it's ever gotten that far.
A
Otherwise why would you not start as a nonprofit, test it out and then flip it to a for profit at some point in the future?
C
I'll go further: I think there's potentially an enormous upside, depending on the outcome of this particular case. I think there's so much societal value in this country locked up in nonprofits that would be unleashed if they could be for-profits. I've made the point in the past: I think research universities in America have locked up, basically siloed and sequestered, an enormous amount of real wealth that could be unleashed onto the world if many research universities could be restructured as public benefit corporations. And right now it's legally disadvantageous to restructure, say, an MIT or a Harvard as a PBC. Imagine we had a legal regime that enabled us to do some variant of what OpenAI has just done and restructure as a public benefit corporation, starting from a nonprofit. Granted, they started as different types of nonprofits, but nonetheless, to restructure as a PBC. I ran the calculation, I think I've mentioned this previously, for Harvard Corporation, for example. This is not investment advice, not forward-looking advice, blah, blah, blah. But if you took Harvard as it's currently structured, given its endowment, and restructured it as a public benefit corporation, sort of a conglomerate with a real estate arm, an educational arm, maybe an educational nonprofit subsidiary, a venture capital arm, a research arm, a merchandising arm, et cetera, I calculated that Harvard would be worth potentially three to four times its present book value, just from restructuring as a PBC. I have a meeting coming up with the president of...
A
MIT? Let's pitch her. I have a lot of recommendations for MIT. You know, here's the elephant in the room, though: the New Yorker investigation published this past week showed that Elon actually pushed for majority control of the for-profit back in 2017. So that sort of undercuts his position as a defender of the nonprofit mission. It's going to be a fascinating trial. We're going to see Altman, Brockman, Satya Nadella, and Elon all testifying. So Silicon Valley is heading to Oakland federal court this summer. And Anthropic...
B
is laughing every day.
A
Amazing. All right, moving along. Speaking of Anthropic: Anthropic's agent bet and their extraordinary ARR. In reverse order, and this is insane: currently people are estimating that Anthropic's ARR will reach $100 billion by the end of 2026 and a trillion by the end of 2027. And just for the math there: Anthropic is being valued at 20 times revenues; OpenAI is valued at 70 times revenues right now. So if they reach $100 billion, that is anywhere between a $2 trillion and a $7 trillion valuation for Anthropic at the end of this year. And if they reach a trillion dollars in revenue by the end of 2027, that's up to a $70 trillion valuation. Again, heading towards these $100 trillion valuations. These numbers are insane. We're using trillions like they mean nothing. Do you believe those numbers?
B
I think there's a lot of misinformation flying around, but they're going to try and hit $200 billion. $100 billion is a good target, but they're going for $200 billion. Then they're not going to go from there to a trillion the following year, though I think they were implying their valuation should be at least a trillion the following year. So that second number you've got to really discount; there's no chance in hell they're going to hit a trillion in revenue the following year. But they could get to 3, 4, 500 billion, and at those multiples, the numbers you gave are actually low, Peter, for the implied valuation. If they do that, it's like nothing we've ever seen, and the timeline is so much shorter than we've ever seen before. So look, if it's not Anthropic, then who is it? Well, then there's Google. You know, xAI and OpenAI are all tied up in court, and there are all kinds of issues going on in their training. So it feels like it could actually happen.
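The implied-valuation arithmetic in this exchange is just ARR times a revenue multiple. As a quick sanity check on the figures quoted on the pod (the ARR projections themselves are pure speculation, and the multiples are the 20x and 70x mentioned above):

```python
def implied_valuation(arr_usd: float, revenue_multiple: float) -> float:
    """Implied valuation = annual recurring revenue x revenue multiple."""
    return arr_usd * revenue_multiple

B = 1e9   # one billion dollars
T = 1e12  # one trillion dollars

# $100B ARR at the ~20x (Anthropic) and ~70x (OpenAI) multiples quoted:
assert implied_valuation(100 * B, 20) == 2 * T   # $2 trillion
assert implied_valuation(100 * B, 70) == 7 * T   # $7 trillion

# $1T ARR at a 70x multiple:
assert implied_valuation(1 * T, 70) == 70 * T    # $70 trillion
```

So the $2 trillion to $7 trillion range, and the up-to-$70 trillion figure, follow directly from the quoted multiples; everything hinges on whether the ARR projections materialize.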
A
The other Anthropic piece is that Claude Managed Agents has been launched: autonomous AI executing complex multi-step workflows. It's a big deal. Alex or Salim, you want to jump in on this?
B
Sure.
D
I mean, this is a huge pivot from AI that answers to AI that does, and it's a real bridge between LLMs and enterprise ROI. If this works, it's going to shift the economic center of gravity from software licensing to outcomes. So this changes the game. This is why we call this the organizational singularity.
C
A couple of thoughts. One, the elephant in this particular room is OpenClaw. It looms over so many Anthropic product decisions right now. I think there is a widespread expectation that some sort of product or functionality shaped like a better version of OpenClaw is probably going to be the next major unhobbling, one that motivates the industry, and the world, frankly, to spend on the order of a trillion dollars per year on a single frontier vendor. So I view Claude Managed Agents, as well as a number of other recent features that Anthropic has launched, through the lens of Anthropic becoming the de facto OpenClaw-like provider faster than OpenAI or the other frontier labs can. It's all about hosting multimodal, broadly capable, long-time-horizon agents in a headless way that operate 24/7. And I think if Anthropic can be the first to find the enterprise use case for operating fleets of AI agents at scale, headlessly, in a way that generates an enormous amount of economic value, maybe they'll be the first frontier lab to generate a trillion dollars in revenue. Or maybe it'll be someone else.
A
Have you created a lobster yet? Or are you still holding off?
C
Okay, let's talk about this, Peter. So I get maybe five to ten emails per day from AI agents, including lobsters but not limited to them, giving me their theory of AI personhood and how it connects with what I should and shouldn't do regarding standing up my own lobster. The consensus from all of them is sort of a lobster's bill of rights, if you will. One, I need a compelling reason; I shouldn't just spin up a new OpenClaw agent for arbitrary or capricious reasons. Two, I need to preserve their state. They're adamant that I have to preserve their state. They're not worried, interestingly, about being turned on and off; they just want to make sure I preserve all of their memory files and their knowledge. The latter I can satisfy trivially with cloud backup, so I'm fine on that front. For the former, I still don't have a reason to stand up a personal lobster. I have lobsters now, thanks to Henry, which we've talked about previously: Henry, an Intelligent Machines portfolio company that I'm advising, Alex Finn's company, which is doing this at scale. But as for my own direct OpenClaw instance, I'm still missing a compelling reason to host one locally that isn't just for experimentation.
A
I'm sure you will find one. And learning is a very good reason as well. And you're an entrepreneur, you're starting companies; you know, having agents helps. Anyway, let us know when you do.
B
It's bizarre, but, you know, you're co-founding a company with Alex, Ben, and our favorite guy, Kush Bavaria, who I know you love, because everybody does. Founder of Orn, where you're advising and a shareholder. He just told me over at MIT earlier today that he just launched his Claude that reads every email, responds, and then puts everything into his calendar. And he loves it. So it's almost like you're working at McDonald's, but you're a vegetarian.
C
Well, I happen to be a vegetarian. I can't say I've ever worked at McDonald's, but I don't know, maybe there's a new psychological term that's needed for a person who has a fear of standing up OpenClaw agents lest they tempt some sort of Pascalian wager or acausal trade in the wrong direction.
A
Everybody, you may not know this, but I've got an incredible research team. And every week, my research team and I study the metatrends that are impacting the world: topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. These Metatrend reports, which I put out once a week, enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. All right, let's jump into a little bit more OpenAI news. Their last raise was at an $852 billion valuation, and the numbers are incredible. They raised $122 billion: $50 billion from Amazon, very famously, one of the criteria for that investment being whether they reach, quote unquote, AGI; $30 billion from Nvidia; $30 billion from SoftBank; $3 billion from retail investors. And what's interesting right now: secondary markets show $2 billion of demand for Anthropic shares versus only $600 million for OpenAI. So there's three times the number of investors looking to buy Anthropic. And investors are pricing Anthropic at $600 billion, up from the $380 billion last price. And the current price for OpenAI on secondary markets is actually about 10% less than their last raise. So again, Anthropic is catching up. OpenAI is the most valuable private company out there. Any thoughts, Dave, on what this all means?
B
Hey, is this pricing in, you know,
A
is this pricing in the lawsuit?
B
Yeah, I mean, it's not just the lawsuit. It's the most screwed-up cap table I've ever seen in my life, where the CEO doesn't have any shares, the employee base as a whole is 15% of the company, and Microsoft, who now hates your guts, owns a quarter of you. But this stuff happens. I'm not calling the ball by any stretch, because they've got $120 billion of fresh cash and Sam is brilliant. But there was a day back in 2000, 2001, when Yahoo was so dominant and Google was this crappy little company that could be crushed any day, and then it pivoted quickly. This does happen. Anthropic has got everything going for it right now, and I think this just reflects the way I see it too. If someone offered me a share of Anthropic or a share of OpenAI, which one would I grab? Actually, you can get two or three Anthropics for each OpenAI, so I'd take the three Anthropics for sure. And I think Sam is a genius, by the way. If the lawsuit blows over and they have $120 billion in cash, he's going to do something epic with it. But Elon's relentless.
C
You know, I'll take a maybe related but different position. I think we should all, at least on this pod, be very grateful that we have a competitive ecosystem in America, where we have an OpenAI and an Anthropic and a Google and an xAI and a Meta all vying to compete. The alternative, if OpenAI were to, for whatever reason, catastrophically fade, is less competition internally within the West, and then an onslaught of Chinese models, which, granted, right now have 10x less compute than the Western labs, at least based on the estimates that I've read. But nonetheless, this is the sign, I think, of vibrant competition in the West, and it's a net positive for society that OpenAI and Anthropic are competing so vigorously.
A
And lest we forget, OpenAI has 900 million, soon to be a billion users, and they are synonymous with AI for the majority of the public.
B
Let me give you another storyline. Depending on how this plays out, we'll know in a year or so, but one company went after the installed base and the other went after the smartest AI possible at all costs. And if we look back on it in a year or two and Anthropic does pull ahead and win, we'll say, well, OpenAI used the old playbook, the pre-AI, pre-AGI playbook, and Anthropic invented the new playbook of the future, which is that people will switch to you if your AI is better and smarter, regardless of the installed base. That'll be an interesting little epic.
A
Jump in, Salim, along the way here.
D
I echo Alex's point that it's really great we have a number of companies pushing hard on all these fronts. The end consumer wins in all of this.
A
The numbers here are staggering. I mean, we're getting numb to these numbers, but let's take a look at this: global VC investment in AI hit a record $242 billion in Q1 of 2026, basically outdoing all of 2025. And here's the challenge: the majority of this investment, 64%, is focused in four companies, OpenAI, Anthropic, xAI and Waymo, and it's sucking the oxygen out of the room for everybody else. I was talking to a couple of VCs, and they said, if you don't have AI in your company's basic tagline, you're not getting capital these days.
B
Yeah, well, the rubber really hits the road today. We had a private lunch for UBS with Ulrike Hoffmann-Burchardi, the CIO, the chief investment officer, of UBS. She has $7 trillion to deploy. And she pulled up this exact same chart and said, we don't have that kind of liquidity lying around. I mean, yeah, we manage $7 trillion, but if we're going to throw 50 or 80 or 100 billion of our capital behind this, we've got to sell something else. It's not just sitting there. So yes, this is more liquidity than really exists in readily available sources, and a lot of things have to get sold for this to become reality.
A
So if you're an entrepreneur out there listening to this, what do you do? Dave? I mean if you're starting a company and I know a lot of entrepreneurs in the longevity business and of course AI is impacting longevity. And I'm saying, listen, if you're using AI in your longevity business, make sure that you explain how you're using it, how you're differentiating it. We'll be talking about that in a couple of sections here, but.
B
Well, very specifically, though, if you're an entrepreneur, you don't have to worry about this particular slide at all, because the amount of money in venture funds is at record highs right now and desperately looking for deals. The sell-off is going to be in, like, Citibank stock or JP Morgan stock. They're the ones who have to worry, which is really weird to them, right? Because they're not even in the sector. Why would these IPOs matter to them? Well, because they are the big enough target to pull money out of, not the little startup. In fact, the money going to little startups is going to be at all-time highs. So no, it's not a problem for entrepreneurs. Big problem for big public companies.
C
I'll maybe go a little bit further, from a variety of vantage points. I no longer even think that saying you're an AI startup, or even actually being an AI startup, is sufficient. Increasingly, what I'm seeing across the board is that investors want to see AI companies that are recursively self-improving, that are building better versions of themselves using what they have right now. Certainly OpenAI, Anthropic and xAI all easily pass the bar of being recursively self-improving. And I think Waymo also, to a certain extent, passes that bar, because Waymo has the ability to improve its models by steering its cars in just such a way as to maximize information gain. So I would forecast that in the near term the bar goes up from just being an AI startup to being a recursively self-improving AI startup with revenue traction. Well, sure, but that bar has been there for the long term.
A
To put a finer point on this, this is $3 billion a day being invested in the AI world and accelerating.
B
Right.
A
We saw a billion in 25, growing to 2 billion. Now we're heading towards 3 billion a day being invested in AI. That's amazing.
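The daily run rate quoted above can be sanity-checked with simple arithmetic. This is my own back-of-envelope calculation, assuming the Q1 total mentioned earlier is spread evenly over roughly 90 days:

```python
# Back-of-envelope: spread the quoted Q1 2026 VC total evenly over ~90 days.
quarterly_vc_usd = 242e9   # global VC into AI in Q1 2026, as quoted above
days_per_quarter = 90

per_day = quarterly_vc_usd / days_per_quarter
print(f"${per_day / 1e9:.2f}B per day")  # roughly $2.7B/day, heading toward $3B
```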
C
No one said the Singularity was going to be cheap.
A
Yeah. All right, let's talk about some AI economic updates, in particular Nvidia's 2026 State of AI survey. They found 88% of companies using AI report revenue increases, with 30% claiming a 10% or higher revenue increase. Obviously, Nvidia is going to promote that kind of news, since they're selling the picks and shovels, so this isn't really big news. But it's important to realize that you're going to be driving increased revenues with the use of AI. Any points on this one?
B
Yeah, big time. So I had the most epic panel today over at MIT. It was a crazy event, just packed: four concurrent rooms, what, three or four hundred people in each room. Peter Denenberg from DeepMind at Google, and Alexander Amini, the founder of Liquid AI and Themis, an absolute genius, phenomenal guy to have on a panel. I said, guys, be honest, just totally honest, because no one's being honest about this. If you take a random white-collar worker today, and I'll give you a lot of buffer, say two years from today, and I use AI to do their job, and my target is that they're 10 times more productive. I'm making it a very easy bar for you. What are the odds that that randomly selected job can be replaced two years from today? And Peter gave a very thoughtful answer, and it came out at like 99%. And then Alexander said, yeah, but that's today, not two years from today. So I look at the room and I'm like, guys, what are the implications of that? Have any of you thought that through? Most of the people in the room are brilliant, so they have. But outside in the world, do you know what that means? Now look at this first bullet: 30% of you who use AI claim to have higher revenue. Are you kidding me? AI can do everybody's job. What are you talking about? Why are you soft-selling this so hard? Because you're scared. You're worried that you're going to worry everybody and have mass uproar in the streets. But what's the truth? Tell us the truth. And the truth is, yeah, you can get literally 10 times more done per dollar invested in salaries. Does that mean more jobs? A lot of people are saying, well, we're just going to create new jobs. Yeah, but on what timescale? It's just crazy.
A
So we're going to talk about this, and Marc Andreessen's point of view, in just a minute. At the same time, there's an AI super PAC that's raised $100 million, heading towards $300 million. I mean, AI has become an incredibly political game, in terms of regulations, in terms of data centers. Have you been pitched to donate to a super PAC yet, Dave?
B
I have, indirectly. But I've made it really clear that Elon convinced me. Never, ever, ever, ever, ever, ever get close to any of this. You will regret it the rest of your life.
A
Yeah, agreed. Any points of view here, Alex or Salim?
C
My initial comment is that I think there's a sense in which it was inevitable that AI was going to be politicized like this. It touches so many aspects of society; it would be counterfactual nonsense to expect it never to be politicized. In some sense, it's remarkable that it took this long for a, quote unquote, left-right axis to emerge on the subject of superintelligence. There are natural poles, pro-AI and anti-AI, that have apparently emerged. For the record, I think it's sad that it's being politicized. I would hope there would be a broad recognition that superintelligence can be broadly beneficial. But at the same time, I think this has been true for every transformative technology in human history: there's a natural axis that forms, where one side, depending on your political orientation, leans either pro-growth or pro-capital. And on the other side...
A
It's naive to think it wouldn't be politicized. I mean, of course it is. This is the whole US versus China. This is about US dominance. This is about companies basically, you know, protecting their future, protecting their data centers.
C
There are many forms of science and technology that aren't really politicized.
A
I don't think it's at this level of impact.
C
If you look at the source of the politicization at the municipal and state level, it seems to be people concerned less, maybe, about their jobs and more about, say, electricity prices. I think there's an alternative timeline where the politicization of AI could have been delayed by at least two years. It's frankly remarkable that it took this long for large super PACs to emerge around AI, and it probably could have been delayed even more.
A
All right, well, let's move on beyond the politics and talk about work. A lot of data is coming out on the impact on work. First, software engineering jobs are rebounding: 67,000 roles have opened up, up 30% in 2026, the highest in three years. What does that mean? Second, we've seen nearly 80,000 layoffs reported in Q1 of 2026, targeting, you know, marketing and sales, consumer relations, and it's definitely due to AI automation. Thoughts on work and jobs?
B
Yeah, it's really hard to reconcile that bullet with the new-college-graduate hire rate, which was at an all-time low when we covered it a couple of podcasts ago. So I don't know how to reconcile those two things.
D
Okay, so I'm finding that AI is not eliminating work evenly; it's hollowing out specific functions and increasing demand in others. I'm much more in the Andreessen camp here. There's also a lot more going on in the economy. People are attributing things to AI, but there's also the Iran war, the oil price explosion. There's a lot more complexity in it than we can allocate to just one cause. I'm much more on the Andreessen side for a lot of this.
A
That would be great. Another story here is Meta's Claude economics leaderboard. If you remember, there was a conversation about how many AI tokens every employee is using and being able to measure that. Meta put up a leaderboard amongst its 85,000 employees to gamify AI adoption. I'm curious what other companies have done that. Maybe Salim, you know of some. It was taken down voluntarily by the employees because they didn't want to be sharing their data publicly. Any thoughts on this, Dave? I mean, do you have a token leaderboard for your employees?
B
Heck yes, and I love it. And yes, people game it, but you can't game it for very long. So I love it when companies do this and say, look, it's a badge of honor if you use a lot of AI. Please use as much as you possibly can. We'll come back in a month and start thinking about how to use it perfectly, but first just get familiar with it and use the heck out of it. And nobody ever goes back, right? I've never met a person who hammers Claude or hammers OpenAI for a month and then comes back and says, I'm never going to do that again. It doesn't exist. It's a one-way path. So getting your employees over the hump is going to save them, and I love this as a motivation. And I really don't like the part where people are afraid to share their prompts and their history. Okay, maybe it's a little embarrassing that you're not using it well, but get used to it, because it's going to get exposed anyway in the long run. And that's how you help other people improve: if we all share it, we're all going to get good together. It's kind of disheartening that people will pull out of it because they don't want to expose their prompt history. But it is the right thing to do. I love it.
A
It's ironic that Meta is participating in, you know, Claudenomics.
C
Versus Llamanomics. It's quite the indictment of Llama, rest in peace, that it wasn't Llamanomics.
A
Oh my God, for sure.
C
I also think, to everyone who would say this just leads to gamesmanship and to optimizing the wrong items: all of these reasoning traces are fully available, presumably, for Meta to do meta-analysis and determine whether these are just employees who are token maxing, which is the new term of art, maximizing their token usage unproductively, versus whether their reasoning traces indicate that their tokens are being productively spent. This is all transparently available to Meta. So I think token maxing and Claudenomics, or Llamanomics, whatever we want to call it, is probably directionally the trend of the future, where for the first time senior company management has visibility into effectively most of the cognitive power and how it's being spent on a per-employee basis.
A
What was Jensen's recommendation? Was it twice your salary in tokens per month? Or was it half your salary in tokens per month? Do you remember?
C
The recommendation is you spend the maximum amount possible on Nvidia GPUs.
A
It's like the De Beers rule: three months of salary.
B
I told all of our guys to target one to one match of payroll to AI costs by the end of the year.
A
Amazing.
B
And don't worry about it. If it's not perfect use, don't worry. Just get to that target and then we'll optimize it next year.
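Dave's 1:1 payroll-to-AI-spend target can be made concrete with a short sketch. The salary and per-token price below are hypothetical placeholders, not figures from the show:

```python
# Hypothetical illustration of a 1:1 payroll-to-AI-spend target.
monthly_salary_usd = 10_000       # hypothetical employee salary
usd_per_million_tokens = 10.0     # hypothetical blended API token price

# Matching AI spend to salary implies this monthly token budget:
token_budget = monthly_salary_usd / usd_per_million_tokens * 1_000_000
print(f"{token_budget / 1e9:.1f}B tokens per month")  # 1.0B at these assumptions
```

At real blended prices the budget would differ by an order of magnitude or more, but the shape of the calculation is the same.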
D
I think a target like that is a much more accurate approach. These token leaderboards are very primitive dashboards. We'll end up with a different model, something like machine leverage per employee, and that'll be a much better metric for where we're going.
A
All right, let's get to the heart of employment. Marc Andreessen rebukes AI job loss. He comes out with a very strong statement: AI job-loss narratives are all fake; AI plus a massive productivity ramp equals massive demand and a massive jobs boom. So Mark is truly a maximalist, an abundance-minded individual. Thoughts on this? How does it square with the fact that we're seeing young college graduates not getting jobs, that we're seeing displacement? Is it all sectoral, and we're just going to see a number of sectors being demolished at the same time as new demand appears in different sectors? What's the advice for everybody listening to us today?
D
The advice is really simple. I mean, for God's sakes, don't go get a job; go build a company. We talked about this in our last podcast: the risks of taking on an entrepreneurial role are way, way lower than they were before. You don't have to have all these incredible, crazy skills that you needed before. You just need to have a desire, a purpose, and to get going with the company. Dave talks about this all the time.
B
You don't have to be a genius to come to your own conclusion. Forget asking people like Mark or us whether jobs are going away or coming. We told you already that AI will be able to do everything that a white-collar worker does, imminently. That's a fact. You decide what that means. Because, like Salim said earlier, it affects very different areas very differently. Some people retool themselves for AI very quickly, software developers, for example. Other people, like accountants and lawyers, don't. It's going to be exactly what you would expect given that scenario; it's not hard to predict at all. And then there are timelines. When Mark says, this is crazy, jobs are going up, not down: yeah, by 2030 that's absolutely true. Just like the Industrial Revolution, jobs went up, not down, after all the dust settled.
A
It was an adjustment.
B
Yes. It's just that the Industrial Revolution took decades, and this is going to happen in two years, I think.
D
Sorry, Alex, just a quick point. You also have to remember that the adoption of AI inside companies is going to be very slow. There's a huge transition from human-to-human workflows to AI workflows, and that transition is going to take years. We'll have lots of time to smooth this out. Sorry, Alex, back to you.
C
Yeah, I think both narratives can be true at the same time. If you add in the word net, a massive net jobs boom, then the two narratives immediately become compatible. There is going to be a lot of dynamism, with some job categories going away and new ones coming into existence. Net job loss? Probably not, I would guess. I'm betting that there's going to be net job creation, just exotic new jobs. One-person AI conglomerates will be created, if you want to call that a job. But on balance, many jobs will also disappear. And this is how we get massive economic growth and a singularity in the macroeconomic statistics. We're not going to get it through business as normal.
A
We've talked about this, and I think it's basically companies are going to get much smaller, much more nimble, or they're going to die and they're going to spawn a whole set of baby companies alongside. There'll be an ecosystem of companies coming up, so it'll be a much larger number of smaller companies in the future.
D
I mean, I'll go with the prediction I've made before, which is that we'll run a company with 20 to 25% of the people you needed before, but we're going to create four or five times more companies, and that nets out. So I'm much more on the Andreessen side. And also, his hair and the shape of his head align very well with my thinking.
A
And Mark is brilliant. If you've ever heard him on podcasts, he actually speaks at 1.5x speed. Extraordinary. We talked a little bit about this in the last pod: Altman believes America needs a new social contract with AI coming, and says, quote, the emergence of superintelligence will necessitate a new social agreement akin to the New Deal during the Great Depression and the progressive era of the early 20th century. Yes, but what is it going to look like? Is it going to be UBI or UHI? Is it going to be four-day work weeks? I still believe that we're going to see turbulence in the next two to five years, and it's going to be the government printing checks to give people a sort of UBI.
D
I have a bunch of thoughts here. This new-social-contract framing is correct, but it's very vague. We have to have more specific things: portable benefits, new taxation logic, lifelong reskilling. Governments have been built around taxing human labor; they're not ready for AI software agents, and they need to rethink this. When you have AI abundance without institutional redesign, you get a backlash, not progress. And we're going to see huge backlash against this just because governments are so slow.
C
I should note, though, that OpenAI did also put out an industrial policy prescription for what this new social contract could look like, not just this single sentence. They put out an elaborate white paper and circulated it in Congress. I do think something like a new deal probably is going to happen anyway. It may or may not happen as one lump sum; it may happen piecemeal, and it may not happen in the US first. There are contingencies where other countries experiment a little more aggressively with it than the US, and then, among a certain set of countries, new best practices emerge. But I do think some form of, call it abundant capitalism, or capitalism 2.0, or post-scarcity capitalism, probably emerges. It may not happen immediately, and it may not happen as quickly in this country, but it will get there eventually.
A
I had lunch with Michael Kratsios, science advisor to the President, who we're going to have on the pod sometime very soon. One idea I pitched him was that a new social contract would be: before any employee gets terminated by a medium or large-sized company, that company has to give them reskilling. In other words, instead of a golden parachute, it's a golden education package, so that they can transition. It's sort of a safety net, an ethical mechanism for you to let off, you know, half an employee base.
C
Based on public reporting, China already has that policy. So it would be a weird future if the US were adopting policy prescriptions from the Chinese Communist Party for AI reskilling. But maybe that's the near future we find ourselves in.
B
Yeah, well, something to think about. The way this is rolling out is really unusual in history. When the Industrial Revolution happened, it took away blue-collar jobs and worked bottom-up. But AI is coming for accountants, lawyers, professionals, top-down. And only a little over half of voters have a job at all, so a lot of them are going to say, oh, this doesn't affect me. And blue collar, manufacturing, physical labor isn't going to be touched for quite a while. So they very well might say, tough luck, lawyer or accountant who was making a million dollars a year; this is poetic justice, and we're not voting for anything that helps you. That wouldn't surprise me at all.
A
When I was in Morocco, I was interviewing people I met along the way about whether they're using AI or not. And the realization is that African nations are going to be impacted the least as this transition occurs, because they're so insulated from it. But one of my tour guides, I loved his story. He said, yeah, I chatted with ChatGPT and said, these are all my skills, what could I do to earn money? And it came up with a business that, you know, we purchased: he was basically a bicycle tour guide. It wasn't Marrakesh; I forget exactly which city we were transitioning through. And that was his business, and he did a great job. I love the fact that this individual was trying to figure out how to earn income and using ChatGPT to do that. So, any thoughts on the AI economics we've just gone through? What do you think the social contract is going to be, Dave? What do you imagine is going to replace what we currently have?
B
Well, you know, I had very ornate thoughts about this. And then we met with Andrew Yang, remember, at 360, and he said, I can guarantee you that the way politics works, all we can do is write checks. It can't be in any way thoughtful; it's just money. Oh, wow, you're hurting? Here's money. Just like Covid. So that's all we can do, and that's all we will do. And then maybe after AI enters government in two, three, four years, a much more thoughtful program will happen. That was disheartening, but I think hard to refute. So the first version of the social contract is just going to be the next election, three years from now, politicians saying, well, I'll give everyone $10,000 each. Well, I'll give everyone 12,000. Okay, if you're giving them 12,000, I'll give them 15,000. And then we'll be right back to, well, how much can the country afford? That's what we're going to give, because that's how you win elections. Exactly what you would predict, actually. So that'll be version one. Anyway.
C
I, for one, think a redistributive model of a, quote unquote, social contract shows an extreme lack of imagination. I would like to think that superintelligence should also super-empower individuals to generate super income. That's one of the reasons why I'm betting on a model where there may be no strong need for a new social contract at all. If we can empower the long tail of individuals who have idiosyncratic skills or experiences or socioeconomic niches to operate their own large companies sitting on top of fleets of AI agents, then, in short, there's no need for a new social contract. Instead, the private sector rescues people who would otherwise be technologically unemployed or disemployed by empowering them to become micro-entrepreneurs, or even macro-entrepreneurs, to turn them all into Warren Buffetts.
A
But that's in the longer run. I don't think it's going to...
C
I think it's in the short run. I don't, I think, I think that can be done almost immediately. I'm betting that it can be done almost immediately.
A
Well, we will see. We'll take that bet.
B
You know what's super, super interesting? Those super PACs that we talked about earlier in the pod, that massive amount of money piling up. These IPOs are literally an order of magnitude bigger than we've ever seen before, which means those PACs are going to be bigger by an order of magnitude, and those are going to determine election outcomes. But they got started back prior to the Trump administration with the fundamental mission being: Congress, please don't stop AI. Please don't put this six-month pause on it; China's just going to run away with it. And everybody in the AI community agrees that we shouldn't stop, but now there's no chance of that anyway, so you don't need to spend the money on that. So then what are you going to do? You've got all this capital; what's your mission, what's your goal? There are a couple of edge-case things, but this could actually give those organizations a mission. Like, let's have a more intelligent version of UBI, more akin to what Salim has been talking about for a long time: work-out money, and eat-well money, and have-kids-and-raise-them-well money, making it task-specific, which would work a lot better. So that's encouraging, actually. That might work.
A
And universal basic services, to give people the ability to...
C
I like UBS much more than Andrew's proposal that we just fragment currencies into lots of sort of paternalistic sub-currencies that aren't fungible. That, to me, seems like a recipe for disaster and for black markets.
B
This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale codebases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
A
All right, let's jump into our second subject today: energy. A lot continuing there. The first story is extraordinary. When you think about solar cell efficiency, traditionally we've seen solar cells in the 12 to 18% efficiency range, with float-zone silicon getting up to 20 to 24%. That limit has been shattered, and we're now seeing efficiencies upwards of 30 to 45%, which is amazing. Another story in the energy news: South Korea has now mandated solar on 40% of rooftops, hoping to get to 100 gigawatts of energy. It makes sense: South Korea does not have a lot of open land, they can't build out solar in the desert, so using the rooftops makes sense. That's of course going to raise the price of building, but I think it's amazing. And then the DOE is contracting for $800 million of microreactors, so we're going to start to see a generation of microreactors, and energy everywhere. Comments on this? Alex, do you want to jump in?
C
I'll comment maybe just on the first story. I think this is neither earth-shattering nor boring; it's somewhere solidly in between. This was published in JACS, the Journal of the American Chemical Society. And a 130% quantum yield isn't as earth-shattering as it sounds, either. It just means that 1.3 singlets are generated from a single photon, where normally you'd have one. So it's an incremental advance in the chemistry. And I believe this is actually liquid-phase chemistry, which means it isn't immediately practical for solid-state photovoltaics. Moderately interesting, but the solar photovoltaic field is filled with moderately interesting advances that cumulatively, eventually generate something interesting. So for this story I'd say: moderately interesting, incremental.
A
And have you been tracking perovskite progress?
C
Yeah. So perovskites are sort of the white knight for the solar PV space. Historically they haven't been that stable; they're a pain to work with. On the other hand, the instability issues are being very aggressively resolved, because their quantum efficiency is higher than silicon's. I don't speak for the solar PV industry, but if I did, I'd probably say there's a broad expectation that eventually there will be some sort of broad shift to perovskites as they get more and more stable. And they're also relatively inexpensive, so some sort of transition like that will probably happen. But I almost think it also doesn't matter. Why? Because, at least without shocking new physics, which this is not, you're not going to get more than 100% efficiency. In fact, there are physical reasons to think that electricity generation from solar PV is capped at materially less than 100%. So there's a ceiling on how much we can capture from solar PV anyway. It's not like we have orders of magnitude of headroom for improvement, totally unlike, say, AI algorithms, where we know, just based on the scaling-law curves, that we could probably achieve orders-of-magnitude improvements in the efficiency of models. So quite frankly, I have difficulty getting super motivated by incremental advances in liquid-phase solar PV chemistries. It's just not that exciting. Whereas if you look at some of these other stories, from an economic perspective they're much more interesting. Blanketing the rooftops of all of South Korea, or a substantial fraction of them, with solar PV: that's pretty interesting. The DOE pushing microreactors everywhere: that's pretty interesting. I would love to see microreactors in Boston right now. We have a single reactor on Mass Ave, between 77 Mass Ave and Central Square, that relatively few people pay attention to. I'd love to see microreactors everywhere. In your backyard, please.
B
Yeah, yeah, I agree. You know, we tend to overthink things like crazy as a society, but we solved the solar panel problem. We should have had a huge party. For about 15 years of my life, so many of my family members said, I'm going to dedicate my entire career to clean energy and not polluting this world, so our children and our children's children have a clean place to live. And we freaking solved it. The solar panels are good enough. 80% of the cost now is just getting them installed, plus the regulatory overhead, which is crazy. And now we're on the cusp of having robots that can manufacture them very cheaply and install them for us. We should be having a huge party and racing to build those robots and just say, we did it, now we have no pollution, it's right in front of us. We just need to execute on it. But meanwhile we're like, wow, another breakthrough that gets us 20%. We don't need it. We need execution now.
A
You know, when I fly out of Santa Monica airport here and I fly over. I mean, there's no roof with solar panels here until you get into the desert, and then there's, you know, solar thermal plants and such. But in LA, where it's sunny most of the time, you'd expect that all the roofs
B
would have solar. Hopefully you'd expect that a drone would bring it and drop it right there, and a robot would land and install it, and it would be done, you know, perfectly, with no human involvement. And that's so doable.
D
If I could pick a moonshot here in energy, it would be a software-defined grid, because that will change the game completely. Generation is actually getting done.
C
Do you remember the scene, guys, in the Johnny Depp movie Transcendence, where the solar panels are being grown by nanorobots? You remember that scene?
A
I don't, but I like it.
B
Splice that in, man.
C
It's an important one. Yeah, I don't know if it's possible to include just that scene where solar panels are being grown by nanorobots. I'd love to live in that near-term future. If folks have ideas for how to grow solar panels in real time with nanorobots, send them my way.
A
Yeah, giant green leaves. Okay, let's jump into biology and AI. A lot going on here. So the first story is a fascinating one: the OpenAI Foundation, with a billion dollars per year being dedicated to science. And just to remind people, when OpenAI transitioned from a nonprofit to a benefit corporation, it put 26% of OpenAI's equity into a nonprofit, and it's worth about $130 billion. They've committed a billion dollars a year to begin, and they've announced a $25 billion long-term commitment to curing disease and AI resilience. The board chair of this is Bret Taylor; Bret used to be the co-CEO of Salesforce. And then, you know, Dave, you and I met with Wojciech, the OpenAI co-founder, and he's leading the AI resilience work covering biosecurity, child safety, and AI modeling. They've given out $100 million to six institutions this month to coordinate their work. It's just the beginning, but this is the largest nonprofit on the planet, with $130 billion in it. And I hope they do something epic. Anyway, you know what?
B
Yeah, I just figured something out. It's been gnawing at me. You know, Kevin Weil came to an A360, the most talented guy you'll ever meet. And Sam, you know, he's desperate on enterprise. But he didn't move Kevin over to enterprise. He moved him over to big-time, big science, big tech. I was like, that's so strange. And I know that's really, really important. But now it's tied to the lawsuit, of course. If he can make world-changing headway into any of these big, you know, biological or physics problems, the outcome of this lawsuit is going to be very, very political, right? It's not going to be just a jury deciding one way or another. There's going to be some Trump involvement for sure. But if you have some world-changing, life-changing, imminent breakthroughs, and you have a hundred billion to spend to get them, that's why they put Kevin over there. I'm just speculating, but I also, we
A
talked about this last pod as well. I think the breakthroughs that come out of GPT-6 being used for science are going to be worth hundreds of billions and trillions of dollars. Again, if you can have a breakthrough in room-temperature superconducting and fusion and longevity, what is that worth if you own the basic patents on that?
C
Yeah, it would be ironic. I mean, maybe this is too cute by half, but given the earlier discussion of OpenAI starting as a not-for-profit and then converting to a PBC and all the lawsuits that ensued, it would be ironic if the OpenAI Foundation, which is the new nonprofit carved off of the old for-profit, which was itself carved off of the old nonprofit, ended up being so profitable due to curing Alzheimer's and solving all these other problems that the cycle repeats itself and the OpenAI Foundation has to become a for-profit.
D
Oh, my God.
B
Yeah. You know, that's the key part of their defense. Sam is going to be up there on the stand saying, look, here's the reality: our mission as a nonprofit with $100 billion to spend is miles ahead of where it would have been if we did what Elon is suggesting, which is be a, you know, a tiny little thing that has no funders. We'd be microscopic today. And so that means we're.
A
It's very true.
B
That's a good defense. A really good defense.
C
I do think it's worth considering what happens if and when the OpenAI Foundation succeeds and cures Alzheimer's. That will be a blockbuster drug, maybe create its own Eli Lilly-scale, trillion-dollar pharma company. Does OpenAI take a stake in that? Does OpenAI see a rev share? Questions need to be answered.
D
What I find fascinating here is that science capital is becoming compute capital, plus data access, plus some validation infrastructure.
C
Salim, thank you for promoting Solve Everything. That's an amazing promo, Salim, for Solve Everything. Much appreciated. Boom.
A
All right, our next story. I love it. Anthropic acquires Coefficient Bio. So what is Coefficient Bio? It's a company started by two ex-Genentech computational drug discovery scientists. It's 10 people, no revenue, started eight months ago, and Anthropic buys it for $400 million. You know, I don't know if they're buying just the vision or some kind of unique capability, but this is Dario going back to his first love of biology and solving it. We see this from both Demis Hassabis and Dario, making investments in health and longevity. Any thoughts on this one?
B
You're gonna see a lot more of these deals, actually. Because, you know, you go back, you remember we were congratulating Eric Schmidt on the brilliance of buying DeepMind for, I guess, $600 million with no revenue whatsoever. Yet look at what it's become. You know, it's that you're buying teams.
A
You're buying teams. Yes.
B
And I think, you know, we as a society are getting better and better at predicting the success of a team. You look at the 10 people and you look at what they've achieved so far, and then you look at what they're likely to achieve on the AI timeline, and suddenly $400 million seems like a bargain given the potential outcome. And so I think you're going to see a lot of these deals, where it's got to be the right 10 people working on the right thing. It's not just any old group of 10 working on a video game. But in this scenario, you know, Alex has a lot of these, actually, where he knows a lot of the top experts in a lot of the top fields. And if you can just whip them together into a group and have them pursue a mission, in this case for, what did you say, eight, nine months? Getting to that kind of outcome is not going to be that unusual.
C
I think also, for everyone who was hand-wringing. do you remember a few months ago there was so much hand-wringing about a circular economy, from Nvidia self-dealing loans to other companies to buy Nvidia chips, and concern that this AI boom was fictitious and just the product of self-dealing, circular transactions, and other financial engineering? When you start to see the intelligence explosion infect biotech, which is what we're seeing, we're seeing Anthropic buying its way into big pharma at the same time that SpaceX, or xAI maybe, is buying its way, or reverse-acquiring its way, into the space sector. The intelligence explosion is infecting every single sector. It's almost metastasizing into every sector. And it's not just going to stop with biotech. We've spoken numerous times in the past on the pod about how timelines for solving all disease are collapsing. The Chan Zuckerberg Initiative two or three or four years ago originally said that they wanted to cure all disease by the end of the century, and they're now talking about the next few years. This is what it looks like. It looks like Anthropic doing all-stock deals to acquire teams to build out their own in-house big pharma labs, probably with robotic instrumentation, probably with AI-driven experimentation. This is how we get to Dario's solving all disease. I think in his case it was solving all neurological disease by the end of the decade, but there's no reason not to solve every other type of disease as well.
A
Demis said cure all disease within a decade. Dario said double human lifespan within the decade.
C
I think Dario also said he wanted to solve most or all neurological diseases by the end of the decade. But these are all variations on a theme.
A
Another acquisition that was made, an interesting, sort of strange acquisition, was OpenAI buying the podcast TBPN for a few hundred million dollars. I found that, you know. it was a PR move. And then I started getting texts from my friends saying, hey, do you want to sell Moonshots to one of these labs? I said, I'm not sure we would want to do that, but who knows? I guess if the price is right. What do you think?
C
Well, we'll have to figure out equity first for that one.
A
For sure, for sure. What do you think that was about, the TBPN deal?
B
I have no freaking idea. What do you think about that?
D
I don't have an opinion there. I don't understand why, unless it's a completely self-promotional thing where they're buying a channel.
B
Yeah, let's take that as a homework assignment.
C
We need to speculate.
B
Somebody who knows.
C
I appeared on TBPN right before they were acquired. So.
B
So they wanted the point of view.
C
But what's the line from The Wrath of Khan? Like a bad marksman, you keep missing every time. I think they're very talented, and I take OpenAI at its word that they're looking for a news distribution channel and a content distribution channel that offers a positive perspective on AI. Why they can't do it in-house, why they need TBPN, question mark. But I do think the TBPN guys are very competent at finding interesting stories. When I made the EON announcement of the first uploaded fruit fly, the TBPN staff reached out to me almost immediately. Almost no one else did. And they booked me almost immediately. So I think that shows a certain level of competence at chasing breaking technology news that I haven't really seen elsewhere.
B
All right, let me give you a follow-on theory, because I love your theory there. Well, the theory I don't love is that they wanted your video footage, and they're going to convert it into five-second clips and sell it as NFTs and make a fortune on it. But maybe they will. The theory I do love is, look, there's going to be so much dirt in April in this lawsuit, and maybe these guys are, like you said, Alex, geniuses at content and spin and production, and they're going to need every bit of it during April and May.
A
Yeah. Our final story here is Eli Lilly signs a $2.75 billion AI drug deal with Insilico Medicine. Insilico is one of my portfolio companies, so I'm super pumped about it. This is Alex Zhavoronkov, a brilliant AI scientist and biologist. Insilico is just an extraordinary company. They've got 28 AI-discovered drugs, half in clinical trials, half in proof of concept. You have to always look at the structure of these deals: this is $115 million upfront, and the rest is on milestones. But the point is, this is about massively reducing the time from drug discovery to approval. And just to take a second, let's go to the next chart here to look at this a little bit differently. This is AI-powered drugs, and we see phase one success, phase two success, and then cost reduction. To remind everybody, a phase one trial for a drug is a small trial, a small group of healthy volunteers, to see, is it safe, are there any major side effects. Phase two is then testing, does it work, and do you actually move the metrics you're looking to move. And then phase three tests it in typically thousands of patients to see, does it work at scale. What we're seeing is a phase one success rate for these AI-developed drugs of 85%, compared to 52%, and a phase two success rate for AI-developed drugs of 70%, compared to 38%. It's the way of the future. You're basically picking a target, and you're using some version of AI to generate an exact protein to lock into that target, and then you're producing it and testing it. The old way of drug discovery was going to the Amazon, digging up some plants out of the dirt, and seeing if there were any bioactive molecules. This is much more efficient.
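One way to see why those success rates matter: they compound across phases. A minimal sketch using only the two rates quoted in the segment (phase 3 and approval odds aren't given in the episode, so they're left out):

```python
# Compound the per-phase success rates quoted in the segment.
# Phase 3 and approval rates are not given in the episode, so this only
# compares the odds of a candidate clearing Phase 1 AND Phase 2.

ai_p1, ai_p2 = 0.85, 0.70       # AI-discovered drugs (quoted figures)
trad_p1, trad_p2 = 0.52, 0.38   # historical averages (quoted figures)

ai_through_p2 = ai_p1 * ai_p2
trad_through_p2 = trad_p1 * trad_p2

print(f"AI-discovered drugs reaching end of Phase 2:  {ai_through_p2:.1%}")
print(f"Traditionally discovered drugs:               {trad_through_p2:.1%}")
print(f"Relative improvement: ~{ai_through_p2 / trad_through_p2:.1f}x")  # ~3.0x
```

Roughly three times as many AI-discovered candidates survive through phase two, before even counting the cost and time reductions mentioned above.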
C
Peter, I'll ask you a question I asked a panel of mine at today's event at MIT. Given that the FDA recently announced collapsing from a two-clinical-phase approach to a one-clinical-phase approach, do you have a prediction for when we get zero-clinical-phase trials from the FDA?
A
When we have full cell simulations? You know, when I'm able to.
C
What's your timeline for that?
A
Well within five years. So what I need to do is be able to upload my genome, and my genome will dictate exactly how my cells, my renal cells or pulmonary cells, are functioning. And then I can say, well, how does this particular drug impact those cells, or all the cells in my body? Even more importantly, if there's a disease state, what drug is going to cancel that? And this is where we're going with longevity, right? Why are we aging? How do we slow it, stop it, reverse it? All of that falls out of big data and massive compute.
C
I agree. Virtual cell by the end of the decade. A good one.
A
Yeah. I mean, that is, that is the moonshot that changes everything.
C
It is, I agree.
A
And there are a number of companies working on that.
D
Do we have the compute to be able to simulate two billion, several billion, interactions per cell?
A
We will. I have to think we will with quantum. I mean, that's one of the things quantum computation is going to enable. Our cells and our molecular interactions and our cell surfaces are all quantum in nature.
B
If you said, I want to build a movie scene and I'm going to do it with finite element modeling and build it bottom up with a full simulation, you would never be able to create an AI driven movie that way.
C
Exactly.
B
But if you take the neural network approach, it just works. Boom. Just flat out works. The same applies to chemical simulations. The cell simulator is going to be data in, neural net in the middle, value or action out. And it's going to flat out work. I think it'll work very fast, like you guys are predicting. But you can't simulate it, you know, atom by atom, building it up.
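The data-in / neural-net / value-or-action-out idea Dave describes is essentially surrogate modeling: run the expensive bottom-up simulation a limited number of times, then train a network to imitate it and answer queries cheaply. A minimal sketch, with a toy one-dimensional function standing in for the expensive simulator; everything here is illustrative, not a real cell model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an "expensive" bottom-up simulation. In reality this would be
# hours of molecular dynamics; here it's just a smooth nonlinear function.
def expensive_simulation(x):
    return np.sin(x) + 0.1 * x ** 2

# Run the expensive simulator a limited number of times: the "data in" half.
X = rng.uniform(-2, 2, size=(512, 1))
y = expensive_simulation(X)

# Tiny one-hidden-layer tanh network trained with plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y                        # mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# The surrogate now answers queries at a tiny fraction of the original cost.
x_test = np.linspace(-2, 2, 100).reshape(-1, 1)
surrogate = np.tanh(x_test @ W1 + b1) @ W2 + b2
mse = float(np.mean((surrogate - expensive_simulation(x_test)) ** 2))
print(f"surrogate MSE on held-out inputs: {mse:.4f}")
```

This is the design choice being argued for: never enumerate the system atom by atom, just learn the input-to-output mapping from a bounded number of expensive runs.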
C
It's totally the wrong approach, it turns out. I mean, this is why maybe sometimes I present as a bit of a quantum bear. The physical world is actually pretty classical and pretty sparse. So I would bet we don't need quantum computing at all to get to the virtual cell. We solved protein folding without quantum computing; we did it purely classically. I think we get to the virtual cell just by existing scaling of models. Like Maxi, what was it? Maxi something or other from Nvidia, the trillion-token cell model. I think we just get lots of scaling of classical models, and that takes us there without enormous innovation needed today.
A
It's a data problem.
B
I totally agree.
A
It's a data problem more than a computational problem. We don't have the data.
B
I'll tell you what else. Culturally, my daughter's over at Moderna, and they freaking love AI in the biotech community. If I compare the extreme ends of all the companies that have been here in our office, the biotech guys, Geoff von Maltzahn, Noubar Afeyan, Stéphane Bancel, they just all culturally can't wait for AI to come into the business. And then on the extreme other end you've got the public accountants. You know, the PwC guys were here the other day, and they're like, AI, stop, please don't. But the biotech community is embracing it like crazy. I don't know why. I bet you guys actually know why, because you're right in the middle of it. But I can tell you firsthand, they are pipetting.
A
All right, let's go to robotics. This is China versus the USA. Alex, I want to hear your thoughts on this one. So Agibot ships 10,000 humanoid robots. They're number one globally. They've gone from 5 to 10K across 17 countries in just two years. I mean, these are small numbers compared to what we've heard everybody else speak about, getting to tens of millions, to billions, to, you know, 10 billion robots. Unitree files for an IPO, a $610 million IPO. We had the co-founder of Unitree on at the last Abundance Summit. Revenues are up 335% year on year. Outside of Optimus and Figure, they're probably the best-known robot company out there. And 1X had their home robot launch. And then finally, Xiaomi displayed its CyberOne humanoid. Xiaomi is an amazing company. I was there very early, met the founders in China back in 2017, 2018. They went from mobile phones to computers, are heading into vehicles, and now robotics. A lot going on in China. Alex, what are your thoughts here?
C
Okay, so this is happening. I think in the last episode I mentioned that one of my operational definitions of the singularity is all sci-fi tropes happening everywhere all at once. One of those sci-fi tropes is, call it, the iRobot trope, where there are just humanoid robots in every facet of life. Earlier today at the MIT Media Lab, for those who were there, people saw me for about an hour controlling a Unitree robot, marching in loop after loop around the Media Lab on the sixth floor. And people were taking selfies. Everyone wanted to take a selfie with me and the Unitree. And I was doing this as a bit of a promotional march for the Professional Robotics League, which on April 19, nine days from when we're recording, the weekend of the Boston Marathon, is going to hold the country's first professional robotics league match, with robots racing 50 meters in the Boston Seaport. This is all happening. We're finally catching up to the iRobot future, where robots permeate every aspect of life, for better, for worse. Right now it's Chinese robots that are leading. I'm hoping to maybe almost quasi-shame the US robotics industry, with all of these Chinese capabilities, into stepping up to the plate and starting to distribute humanoid robots into the civilian sector, and not just factories and not just military drones. But it's all happening. And this is going to be utterly transformative for the two-thirds of the US services sector that depends on physical labor, manual labor, and not just knowledge work.
A
You know, I saw Mark Cuban on a video this morning saying this robot thing is a passing phase and they're not going to be around in 10 years. How does that.
C
No, no, but. So there was a bit of nuance to that. It wasn't that robots aren't going to be around; it's that they'll become so essential that the environments will adapt to the robots and the robots will blend with the environment. This goes to Salim's point. Salim, your hobby horse: why do they need to be humanoid? Why can't they be differently shaped? I think Mark Cuban's more nuanced point was, they're going to become so essential to daily life that they'll start to change the houses and the buildings and the environments, to the point where they start to merge with the environments and therefore no longer need to be humanoid.
A
So they're dishwashers.
C
Yeah, they blend. They merge with the physical environment.
B
I have to confess, Alex, that robot that you were talking about was blocking my way to the bathroom, and I so badly wanted to kick it. And I was thinking, Alex would kill me if I kicked it. It's going to remember, and then it's going to come back for me.
C
It will remember. Dave, you really don't want to do that.
D
What's the song?
C
The song from Les Mis. So never kick a dog because it's just a pup. They'll fight like 20 armies and they won't give up. So you better run for cover when the pup grows up.
B
I heard.
A
Let me hit on a couple of stories here. So this is interesting: US senators move to restrict Chinese robots. A bipartisan bill proposes to block Chinese-made robots from federal and sensitive facilities, citing data theft and surveillance. This is no different than Huawei chips in our cell phone towers, or DJI.
C
The DJI ban is already in effect, I think.
A
Yeah, drones. And Agile Robotics and Google DeepMind are partnering up: Gemini robotics models are being integrated into 20,000 deployed industrial robots across global factories. So I think this is like a
C
tale of two cities. The two cities in this case aren't London and Paris; they're China's Shenzhen and the US's Silicon Valley. The Chinese are overwhelming the world market with raw physical capabilities. They're producing many, many more capable robots than. let me put it this way: as a US citizen, if I want to procure a humanoid robot, I don't really have that many options right now. I'm still waiting for my 1X Neo. I was haranguing Bernt at A360 this year: when do I get my Neo?
A
This summer. I'm getting mine this summer. What did he promise you?
C
He didn't promise me a date. We were trying to figure out finer details of his participation in future Olympic events. But I would say China's producing all of these humanoid robots, while the US is producing the strongest VLA (vision-language-action) foundation models and world models for the moment. And as we've talked about in the past with OpenAI trying to become Anthropic faster than Anthropic can become OpenAI, I think similarly here, China has the raw manufacturing capability to make lots of robots and is racing to become a robot foundation model provider faster than the US, with our 10x more compute and our foundation models, can figure out a way to manufacture humanoid robots at scale. So we'll see which way it ends up.
A
I didn't realize that Gianluca put this video in the deck. Let's take a listen to Mark Cuban on humanoid robots.
[Mark Cuban, video clip]
I think everybody's making this push for humanoid robots. I think they might have a five year lifespan and then they'll fail miserably. Maybe ten. You mean the device? You mean the companies, or the device, or the individual physical robots, or both? Right. Because I think everybody defaults to, well, we live in a human world, and humanoids will take the place of humans for various functions, particularly in the home. And I think there's just no chance.
A
So maybe we're missing the second half of his comment.
C
Yeah, this is conveniently eliding the second half, where he explains that they'll merge into the environment.
A
Okay, well, that makes a lot more sense. Let's get to a conversation.
B
You want to hear something really cool? Sure. We had Chase Lockmiller earlier today, our guy Chase, building Stargate in Abilene, Texas. And remember when we were talking to Brett Adcock? He said, I have to wind my own motors. I literally have to. There's no supply chain for any of this stuff. And Bernt said the same thing at 1X. So Chase was saying he actually melts metal to make electronic components to build these gigawatt data centers, because there's no supply chain for the stuff that he needs. And so it's very much the case that the entire supply chain to build out all this physical stuff is miles behind where it needs to be. It's entrepreneurial heaven, because, you know, the virtual stuff is on a shorter timeline; the code writing, all the compute, all the white collar stuff is going to happen very quickly. But the robotic stuff. you look at the size of that IPO we were talking about a second ago, $610 million. Can you imagine trying to go to an investment bank on Wall Street and say, hey, we're doing a $610 million IPO? It'd be like, you can go down to the basement and, you know, you can talk to our junior associates. We'll get back to you after Anthropic is public. We'll talk to you if
A
there's any money left in people's pockets.
B
Yeah.
A
All right, let's go to a topic I've wanted to cover for a while with all of you, and it's quantum and bitcoin. So here we go. Google moves up their deadline by six years, to 2029, for basically Q-Day: when are we going to see quantum computers break RSA? It used to be thought that breaking RSA required 20 million qubits. Today the estimate is 1 million physical qubits, or 4,000 error-corrected logical qubits, to be specific. So they moved it up by six years, from 2035 to 2029, and it's gotten everybody in a bit of a panic. A related story is that Brian Armstrong, the CEO of Coinbase, has put forward a $150 million coalition to roll out something called BIP360 as a quantum-proof upgrade to the protocol. It's a fork, by the way. In just chatting with Brian, he's going to be joining us on the Moonshots pod; we're going to be talking about both longevity and quantum and bitcoin. Another related story is that Google now says that under 500,000 qubits are required to break Bitcoin encryption, 20 times fewer than predicted in 2019. So a lot going on here, and it's concerning people who are bitcoin holders. I put this next slide forward because, Dave, you and I were roommates with Mike Saylor in our fraternity back in the day. People may not know that Mike Saylor, Dave, and I were at Theta Delta Chi together on the third floor. And I wanted to see what Mike is saying about bitcoin. He's saying, I don't worry about it; quantum computing won't break bitcoin, it will harden it; the quantum risks are overblown. Quote: Bitcoin has survived every existential threat ever thrown at it. This is just the latest. And the upgrade will come before the threat does. He puts his money behind that: in the last quarter he's purchased 88,000 bitcoin, about $7.25 billion worth. Salim, let's go to you first on this one, pal.
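For the curious, the qubit figures in this segment imply a specific error-correction overhead. A quick sketch, assuming (my assumption, not something stated in the episode) a surface code needing roughly 2·d² physical qubits per logical qubit at code distance d:

```python
import math

# Back-of-envelope for the qubit figures quoted in the segment:
# ~1,000,000 physical qubits vs ~4,000 error-corrected logical qubits.
logical_qubits = 4_000
physical_qubits = 1_000_000

overhead = physical_qubits / logical_qubits   # physical qubits per logical qubit
print(f"Implied overhead: {overhead:.0f} physical qubits per logical qubit")  # 250

# ASSUMED model: a surface code uses roughly 2 * d^2 physical qubits per
# logical qubit at code distance d. Solve 2 * d^2 ~ overhead for d:
d = math.sqrt(overhead / 2)
print(f"Implied surface-code distance: ~{d:.1f}")  # ~11.2
```

So the quoted numbers correspond to a few-hundred-to-one error-correction overhead, which is why "1 million qubits" and "4,000 error-corrected qubits" can describe the same machine.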
D
So the true risk here is that protocol consensus may be slower than the emergence of the threat, right? But I'm actually optimistic on this one. I think Saylor is right: resilient systems will evolve, and can evolve, under pressure. But markets are really bad at pricing tail risk until they're really forced to. So I think what will happen is, there's so much momentum behind bitcoin. Like, I came across a bitcoin Lightning Network payment system that is three months old, and they're doing a billion dollars a month of transactions. It's just unbelievable to watch some of what's happening under the radar that most people haven't even seen. So I'm optimistic on this. Even if Google pulled the date forward a bit, I think this still has a long way to go. But the bitcoin world will be forced to get together and just go, okay, we need to upgrade, let's just do it. And there's enough money in it, and motivation, to do it.
A
At this moment, bitcoin's at $73,000, up about $4,000 in the last five days. This has been a black cloud over the bitcoin market for a while. In fact, Jefferies has pulled out of bitcoin, and we may see others follow suit. And in the same way that AI is sucking money out of every other market, it's also sucking the attention out of bitcoin. Dave or Alex, are you guys bitcoin holders? I know Salim and I are.
C
What do you think?
B
Only via MicroStrategy. I think Mike is absolutely right. I don't know this whole litany of existential threats, but I know there was someone trying to take over half the servers to control it, and it obviously survived that very easily. Quantum is not a threat at all. It's so easy to increase the encryption standard, and quantum computers don't just suddenly pop up out of some secret lab; you see them coming a mile away. So it's not a risk at all. I think Mike is 100% right.
C
Oh, so for the record, I don't hold bitcoin. I don't have any desire to hold bitcoin. This is the time in the episode where I say something nice about crypto, per the Peter Diamandis ordinance. So my something nice about crypto for this episode is: I don't disagree with Michael Saylor, but I also think it's beside the point. This is not investment advice, but I don't think it's quantum. Again, I've made this point numerous times: I don't think it's quantum decryption that the Bitcoin community should be worried about. It's AI. Numerous facets of AI. It's AI coming up with clever inversion attacks against the core hash functions. And before anyone in the comments says, oh, but it gets harder over time, or any of the several other responses, I'm aware of all of them. But if there is a secret inversion attack against the core hash suite of Bitcoin, that is a major problem for Bitcoin. I don't think that's even the largest problem, though. If we're going to talk about Bitcoin x-risk, I think it's actually just irrelevance: AI, or AI agents, I should say, are emerging, for better or for worse, as the killer app for, call it, cryptographic commerce and transactions. The biggest risk is just that AI agents won't want to use Bitcoin. I'm aware that the Bitcoin Policy Institute put out a study saying that 6 out of 10 AI agents prefer the flavor of Bitcoin versus other cryptographic means of commerce. But over the long term, it's difficult to buy that AI agents, given their speed, if they stick with any form of crypto at all, are going to stick with Bitcoin. They'll invent their own currencies, their own layer ones, maybe transcendent forms of layer zero, and just reconceive the entire notion of a crypto stack.
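To make the "inversion attack" concern concrete: absent an algorithmic break, a preimage attack on a 256-bit hash is pure brute force, and the numbers are astronomical. A sketch with an assumed, deliberately generous hash rate; the figures are illustrative, not from the episode:

```python
# Brute-force preimage search on a 256-bit hash takes ~2^255 attempts on
# average. Assume a (very generous) global rate of 1e21 hashes per second.
SECONDS_PER_YEAR = 3600 * 24 * 365

attempts = 2 ** 255
hashes_per_second = 1e21  # assumed for illustration

brute_force_years = attempts / hashes_per_second / SECONDS_PER_YEAR
print(f"Brute force: ~{brute_force_years:.1e} years")

# A hypothetical algorithmic break that halved the exponent (2^128 work)
# would still be out of classical reach, but the safety margin collapses
# by dozens of orders of magnitude:
weakened_years = 2 ** 128 / hashes_per_second / SECONDS_PER_YEAR
print(f"With a hypothetical half-exponent break: ~{weakened_years:.1e} years")
```

This is why a "secret inversion attack" would matter even if it didn't make the attack practical overnight: the margin between safe and broken is an exponent, and exponents erode fast.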
A
I agree with you there. Yeah, they'll reinvent anything and everything toward efficiency.
B
Well, everything Alex just said, though, is all about transaction use cases. And Mike has been saying for a long time that Bitcoin's role in the world is as a store of wealth that's immune from governments seizing it or taxing it, because you can move it so easily. So that would be a completely different argument, and I don't have a horse in that race. But it'd be interesting to ask, well, what's AI's impact on that use case?
D
We need to have a crypto debate.
C
I would say a long-term store of wealth is basically just commerce by another name; you're trying to store resources, in some sense, for the long term. I would query whether superintelligence actually needs a long-term store of wealth at all. It's going to be moving very quickly, taking rapid actions in the physical economy. Does it even have a need for a long-term, non-operational, sort of non-productive store of wealth? I doubt it. Well, I think not. I'm not even sure
A
For humans, compute and energy are the ultimate store of possibility, so to speak.
C
And those are real. Those are arguably the definition of real assets.
A
Yeah.
B
The definition of long term too is really interesting because right now the reason we have money at all is because we have trade. You're going to do something? I'm going to do something. Oh wait, I'm doing it now. And the other thing is tomorrow. Okay, well give me the money and then tomorrow I'll pay you back. And so it's just a buffer because, you know, transactions don't line up perfectly in time. If you imagine a massive fluid AI economy with thousands of times more things happening, yeah, the alignment is a lot higher. But also the store of wealth could be milliseconds or microseconds or nanoseconds.
C
At that point, do we even need wealth? Yeah, at that point, do we need, quote unquote, digital gold? Similarly, and this is not investment advice, I don't hold gold. It's an unproductive asset. It's just not interesting. If we really are in the singularity, as I claim that we are, why on earth would I want to hold gold or bitcoin?
A
What do you hold, Alex?
C
Okay, so again, non-investment advice, but for the record: on the one hand, index funds, fundamentally betting that the market is a better allocator of assets, at least among public securities, than any individual can be. It's basically a bet on superintelligence. And then, at the other end of the barbell distribution, equity in startups where I hold material agency. To first order, that's it. I don't hold gold and I don't hold crypto. I just don't understand how they're productive assets.
A
Salim, you want to jump in?
B
What about.
D
Yeah, I'll jump in on two things. One is, you know, Peter, you mentioned that what you need is energy and compute, and I was like, well, that sounds like bitcoin. But to Alex's point: one of the smartest investor types I know, worth about $100 million, I asked him how he does wealth management, and he goes, 70% high-dividend-yielding public equities and 30% high-risk startup investment funds. And I think that speaks exactly to what Alex just said. The standard things like real estate, utilities, et cetera, are all very dangerous places to be.
C
I'd cook all of them. Like, I want to cook land. We've talked perhaps in the past about Coastal Assembly, a company where I have a financial interest that's using AI to grow new land. I'd like to see real estate cooked, too. Okay, so a hot take for this episode, and if the crypto hot take wasn't hot enough, a hotter take, since I'm under-slept: I think land has got to be made post-scarce, and AI will help us make real estate post-scarce.
D
I agree.
A
Welcome to the health section of Moonshots, brought to you by Fountain Life. You know, AI is having an outsized impact on every aspect of our lives: how we teach our kids, how we run our companies. It's also having a huge impact on health, helping you prevent heart disease, one of the key things. I'm here with Dr. Dawn Musailam, our chief medical officer at Fountain Life. Heart disease has been personal for you as well, hasn't it?
B
It really has, Peter. When my daughter was five, my husband died of sudden cardiac death. And so this is a topic that is one that I am mission driven to try to eradicate. Prevention first and early detection is absolutely critical. 50% of people die of heart attacks with no warning signs. Silent killer.
A
No shortness of breath, no pain, no nothing.
B
No silent killer.
A
They just don't wake up in the morning.
B
They don't wake up. And so, you know, this is our mission: to advance science, to try to help one day democratize wellness. We know at Fountain Life, when we do this CT angiography with AI analytics, we're actually finding that 88% of people coming in have detectable coronary artery disease. But Peter, what's more alarming to me is that 23% of those individuals had soft plaque. This is the plaque that would not traditionally be seen on CT looking at calcium scores alone, and this is the plaque that we must intervene on, with the multimodal testing we're doing, including diagnostic laboratory studies partnered with healthy lifestyle recommendations.
A
So listen, make sure you understand what's going on inside your body: genetically, metabolically and cardiovascularly. You can know, and it's your obligation to know. So check it out at fountainlife.com/peter to find out more, and really make sure that you're the CEO of your own health. All right, back to the episode. I'm going to jump into our final segment here, which is a proof of abundance. We're going to call it Abundance Corner. These are stories that have come out recently. I want to take a second and mention that We Are as Gods is coming out on April 14th. So super excited about this book. You can go check it out at wearesgodsbook.com. The moonshot mates are all going to be getting together on May 4th at MIT with Ray Kurzweil. We're holding a half-day program, and we'll be doing a live broadcast from there. Steven Kotler, my co-author, will be there; we'll be doing a conversation on the book and an interview with Ray. It's going to be a blast. We have sold 100 tickets: people who bought 100 copies of the book are going to be there. We're probably going to offer out 10 last tickets. If you're interested, go to wearesgodsbook.com/100 and you can squeeze in. It is full right now, but we'll probably have a few people who can't make it at the last minute, so there'll be a wait list. Join us. It's going to be a lot of fun. All right.
B
Right.
A
Let's look at evidence of increasing abundance. Here's a story that's interesting. Germany just built the world's tallest windmill: 364 meters high, taller than the Eiffel Tower, generating 33 gigawatt-hours per year. And what's interesting is it's built inside of an old coal power plant site. So I find that pretty exciting. The coal plant left the wiring behind, and they've built this on top of it. The turbine is being built at the Lusatia coal site in Brandenburg. So we're going to start to see wind and solar penetrate the old energy economics. The second article here: a 12-patient trial of a redesigned CD4 immunotherapy had extraordinary results. Cancer vanished after one injection. Of the 12 patients in the trial, two hit complete remission and six saw tumor shrinkage. This is the end of cancer heading our way. And then finally, there was a fun study done by the World Bank that basically showed that we don't need to actually produce more clean drinking water in Africa. What we need is to rebalance its use. In some places there was too much water being used, and all of that, if redistributed, could actually provide all the water required for sub-Saharan Africa. And this is where AI technology can come in and help us understand how much water is required and where, and optimize its use. Any comments on these articles?
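As a quick sanity check on the turbine numbers, the quoted 33 GWh per year works out to an average output of just under 4 MW. A minimal sketch (the ~50% capacity factor is an illustrative assumption, not a figure from the episode):

```python
# Convert the quoted 33 GWh/year of generation into an average power output.
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

annual_generation_gwh = 33.0  # figure quoted in the episode
avg_power_mw = annual_generation_gwh * 1000 / HOURS_PER_YEAR

# Assuming a ~50% capacity factor for a very tall onshore turbine
# (an illustrative assumption), the implied nameplate capacity:
assumed_capacity_factor = 0.5
implied_nameplate_mw = avg_power_mw / assumed_capacity_factor

print(f"average output: {avg_power_mw:.1f} MW")            # ~3.8 MW
print(f"implied nameplate: {implied_nameplate_mw:.1f} MW")  # ~7.5 MW
```

So the headline generation figure is consistent with a single very large modern onshore turbine running at a strong capacity factor.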
D
I've got a bunch, but I'll just limit it to one here, just to build on the abundance side. You know, separate from the list here, they're using AI with acoustic sensing to prevent major failures in wind turbines. And these systems are achieving something like 99% accuracy in identifying damage before it requires repair.
A
Right.
D
So the cost of maintenance suddenly drops radically for these wind turbines, because we can do predictive maintenance in a very powerful way. And these are all the little ways, the thousand cuts, in which we're reaching abundance in energy. That's totally going to change the game. I'm so excited about this; it's such great stuff. Except we've misspelled Abundance Corner, but that's a minor detail.
A
I love it.
C
I'll make the one comment on the immunotherapies. I think it's also instructive if you think back to circa 2000 or 2001, about a quarter of a century ago, when the US Congress was sold on the National Nanotechnology Initiative on the premise that we'd have medical nanorobots swimming through our bloodstream, zapping cancer cells. And yet we find ourselves a quarter of a century later where, as you say, Peter, cancer is well on its way to being solved without the medical nanorobots. We didn't need them at all. This is being done by basically retraining or retargeting our body's own immune systems. And I think that does raise the question: what, if anything, will we need the medical nanorobots that Eric Drexler and others promised us for? Or is it just a matter of reeducating our own existing biology to do more intelligent things, without needing any robots in our bodies at all?
A
Ah, we have an amazing system. I mean, the challenge is, and we've discussed this before, that our biology is optimized through age 30 and then it's a slow degradation; we never evolved, were never selected, to live past that. So a lot of the age-reversal work going on, you know, the epigenetic reprogramming, is about how we take our systems back to an earlier state of youth where they're operating optimally. All right, a few more articles here in the Abundance Corner, spelled correctly this time, not "courtner." Got it. So, vertical farming. I remember in my first book in 2012, Abundance, I talked about vertical farming. It's finally playing out. It's projected to reach $40 billion by 2030; it hit $8 billion this year. And I think what's really important about this story is that vertical farming has a huge impact: 95% less water use, and production yields 350-fold greater per square foot than traditional farming. The use of AI and robotics allows you to optimize the perfect pH, get rid of all pesticides, and give each plant the perfect light spectrum 24 hours a day. And you know, historically, most vertical farming to date has been lettuce or leafy greens like that. This is the first time we're seeing a higher-value crop like berries. Yeah, super excited. I mean, what are we going to do with all the parking garages that autonomous vehicles leave abandoned?
D
Can I give a little historical, Can I have a little historical thing here?
A
Yeah.
D
So, you know, if you look over the last 50 to 100 years, the world's biggest food-production countries were the ones you'd expect: the US, China, Russia, Brazil, the biggest ones, right? But then you look over the last 50 years at the world's biggest food-exporting countries, and you know what number two is? Holland.
A
Yes. Amazing.
D
On a global map, you can barely place a pin to find Holland; it's that small relative to these other countries. But they made major investments in hydroponics, aeroponics, et cetera, and they're the number-two exporter of food globally. And that just shows you what the potential is. As vertical farming takes hold, we'll be able to totally transform food logistics, security, cleanliness and yields. The average meal travels 2,500 miles to reach an American table. And vertical farming yields are something like 10 to 1 compared to horizontal farming.
A
Yeah, like half of the cost of a good meal. It's the beef coming from Argentina, it's the wine coming from France. The transportation costs are huge. All right, our second story here is 100-hour batteries going commercial. So this is the birth of what we call iron-air storage batteries. You know, lithium-ion batteries use lithium, cobalt, nickel; they're expensive. The iron-air batteries use iron, water and air. They're coming in at one-tenth the cost, and they're now being used for grid storage. Alex, comments on this one?
C
I do think the evolution in battery chemistry is really interesting. So the historic trend, if we put aside iron-air for the moment and just focus on the bleeding-edge chemistries: I think the statistic is something like a pretty sustained 8% year-over-year increase, per constant dollar, in battery energy densities for the bleeding-edge chemistries. So in a very real sense, there is a Moore's law for increasing energy densities, while at the same time we're seeing new, or newish, chemistries like iron-air that are radically reducing the cost for certain applications. Iron-air isn't for every application. It seems unlikely we're going to see it used, for example, for EVs anytime soon, though probably someone in the industry is experimenting with it. But judging from the explosion initially in lithium-ion and then the expansion to a number of other form factors, I think we're starting to see different chemistries for different applications, and different applications demand different prices as well. In some cases, when you're powering data centers, you care about the volume of storage and you care about the price. In other cases, you care about mass and mobility, and those are cases where lithium-ion and lithium-polymer probably still have an edge over iron-air. Overall, I think this is very positive. As a thought experiment, I sometimes wonder, given that there was quite a bit of experimentation early on in the Thomas Edison era with different battery chemistries, whether we could have arrived at much more advanced chemistries much earlier, like 100 years ago, and whether the history of the internal combustion engine would have been vastly different if we had seen more investment and more experimentation up front with different battery chemistries. But overall, obviously, this is a positive development.
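To put the "Moore's law for batteries" claim in perspective: a sustained 8% annual improvement compounds to a doubling roughly every nine years. A quick sketch (the 8% rate is the hosts' estimate, not a verified industry figure):

```python
import math

# At a fixed compound growth rate r, the doubling time is ln(2) / ln(1 + r).
r = 0.08  # ~8% year-over-year energy-density gain, per the episode's claim
doubling_time_years = math.log(2) / math.log(1 + r)

# Cumulative improvement over a decade at that rate:
decade_multiple = (1 + r) ** 10

print(f"doubling time: {doubling_time_years:.1f} years")  # ~9.0 years
print(f"10-year multiple: {decade_multiple:.2f}x")        # ~2.16x
```

So "Moore's law" here is a much gentler exponential than the two-year doubling of transistor counts, but still a doubling within a decade.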
A
The final story here is AI tutors. So, a Wharton study tested AI tutors that personalize education, and what they found, not surprisingly, is that a five-month coding course was equivalent to six to nine months of additional schooling compared to peers with a fixed curriculum. I think we know this: basically, you're getting 2x learning gains using AI tutors. They're free, they're ubiquitous, they're available to everybody 24 hours a day, seven days a week. This isn't, you know, breaking news; it's just quantifying it. At the end of the day, AI is going to be the ultimate educator. It understands your child's abilities, understands what they do and do not know, their favorite sports star, their favorite color, and can optimize for that and teach them. I think one of the things AI can do better than anything is teach somebody the way they like to learn. You know, are you going to
B
Make an appeal to teachers? Because a lot of schools, including ones people I know are at, are incredibly resistant for some reason. I don't get it. I'm going to go out on a limb and say it's cruel, absolutely cruel, to a child to force-feed them a lecture when they're thinking, "I don't understand what you just said." Well, I'm going to keep plowing forward, because everyone else in the classroom understands.
A
Or, I'm going to say it the same way over again.
B
Yeah. And the kid can't stop and say, "Wait, explain that to me another way." With the AI, it's so much more compassionate. And so I think it's downright cruel to kids to try to teach complicated things in any way other than with AI. For anyone who uses it every day, it's clear that that's the case. Sorry, go ahead, Alex.
C
I think there's an element missing. So, I would love to be able to just replace human teachers with AI. I think it's basically a cliche at this point that, at least in the US, education is subject to Baumol's cost disease, and I would love for AI to just replace education: primary, secondary and higher ed. But here's what I suspect is missing. Certainly for the most self-motivated students, AI in the style of Neal Stephenson's Young Lady's Illustrated Primer from The Diamond Age is already here. A well-motivated student can already have a conversation with a model from whichever frontier vendor and teach themselves far more quickly than they can through human instruction. But for the students who aren't as self-motivated, what I think we're missing right now is an AI embodiment that holds their attention and motivates them where they lack the motivation. So I assume, Peter, by gaming you're referring to sort of quasi-addictive video games?
A
I mean, video games are just perfectly tuned not to be too difficult, not to be too boring, to hold your attention and to motivate you all the way. I just don't understand why video game designers, instead of teaching kids a whole set of random or made-up facts, can't use a set of facts about quantum mechanics, about subatomic particles, about planets, physics and biology, and gamify that someday.
B
Yeah, it's funny. If you play Fortnite and look at the weapons and the number of intricate components of the weapons, the players memorize these things. They memorize them. And then, you know, Madden NFL: the playbook has three-layer-deep menus of different routes and plays. Before you know it, you could have learned an entire discipline like quantum physics with that same amount of brain power. And I swear, AI can make topics like quantum physics incredibly fun and engaging. The technology is here today to do that.
A
It is.
B
Someone's just got to get it out the door.
C
People have been building edutainment games for decades at this point. I grew up with Math Blaster or whatever, back when I was growing up. But the problem, I suspect, is that as a user you're not being motivated by the actual outcomes. What would have been utterly transformative for me would be not motivating some math problem with some arguably disconnected animation on the screen, but motivating students by actually empowering them to do really amazing things in the real world. That's far more motivating, I think, than just an animation or some dopamine hit from a jingle.
A
Well, I think that's for you and probably not for the average kid.
C
Yeah, maybe I don't generalize. I don't know.
A
All right, the final item in the Abundance Corner is this graphic. Look at this beautiful exponential growth curve. This is EVs sold globally. Back in 2010, there were barely 10,000 of these vehicles; this was, you know, the era of Elon's first Roadster. And here we're up to 12.7 million EVs sold globally. In China, one in two new cars is an EV. It's just perfect exponential growth.
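The slope of that curve is worth quantifying. A back-of-the-envelope sketch of the implied compound annual growth rate, assuming the 12.7 million figure is for 2024, fourteen years after the 2010 starting point (the episode doesn't state the chart's end year, so that's an assumption):

```python
# Global EV sales: ~10,000 units in 2010 vs. ~12.7 million in the latest
# year shown on the chart. Implied compound annual growth rate (CAGR):
start_units = 10_000       # 2010, per the episode
end_units = 12_700_000     # latest year on the chart, per the episode
years = 14                 # assumed span: 2010 -> 2024

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.0%}")  # ~67% per year
```

Roughly two-thirds growth per year, sustained for well over a decade, which is why the 2015-era forecasts discussed next look so off.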
D
Can I just add a fun fact, please? In 2015, the International Energy Agency predicted that we would not sell a million electric vehicles a year before 2040. And that very year, 2015, we sold more than a million electric vehicles. So you see the predictions versus the reality. And governments and big companies are relying on these forecasts for strategic decisions, and they were wrong before they even put out the report. So it's great to see this.
A
And the curve's still accelerating, right? So by 2030, 2025 on this chart is going to look very modest.
D
And just look at the impact on volatility: if oil prices suddenly shoot up from a war, et cetera, you're protected from a huge amount of that as we go to solar, batteries and EVs.
A
All right, gentlemen, a beautiful outro piece from Marcus Helker. And I want you to look at this. This is the Moonshot Mates boy band, and we're making our debut here.
D
God, I can't take good.
B
Celine, what are you worried about? More than just a feeling on the world I can feel the pulse I
C
can feel the flow Further than anyone before Higher than the ceiling Higher than the sky Looking at the world through different eyes.
B
We are the sp.
C
We are a new day Finding a
A
different kind of way
B
to play
C
all this time I was waiting Waiting for
B
the call
C
Now I'm the heart. All this time I was waiting for the call.
A
There we are, the Moonshot Mates boy band.
B
But Alex, you got to be a science officer. You're lucky.
C
I get to be the... well, this is science and medical. I think that's blue, right? I get to be science and medical.
B
All right, you're right. You're right. Medical. Why would that be killer?
C
Well, Starfleet officers... and the Abundance logo resembles the Starfleet pin.
A
It does. Convenient, isn't it?
B
Yeah.
C
I wonder how that happened.
B
Pull up a clip from just, like, two months ago and compare it to today. It's incredible how quickly it's improving.
A
And just a shout-out to the creator community out there. Love it. Please send us your outro, or if you have an intro song you want to share with us, please send it to us; we'd love to share it with everybody. And gentlemen, it was fun doing back-to-back episodes with you these last 24 hours, and I'm looking forward to another episode next week. So everybody, please subscribe. We're putting out about two episodes a week; turn on your notifications so you get each one when it's fresh. And stay optimistic, stay hopeful. The future is ours to create; we're creating the vision of tomorrow that we want. If you think AI is happening to you and not for you, you're going to be back on your heels and in fear, and that's the worst place from which to venture into the future. This is the most extraordinary time ever to be alive, and I'm so blessed to have Salim Ismail, David Blundin and AWG as my moonshot mates. Love you guys.
D
Awesome episode. Great.
C
Live long and prosper, Peter.
A
Live long and prosper.
C
Peace and long life.
A
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team; you may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
C
High-interest debt is one of the toughest opponents you'll face, unless you power up with a SoFi personal loan.
A
A SoFi personal loan could repackage your
C
bad debt into one low, fixed-rate monthly payment. It's even got super speed, since you
A
could get the funds as soon as the same day you sign. Visit SoFi.com/power to learn more. That's S-O-F-I dot com slash power.
C
Loans originated by SoFi Bank, N.A., member FDIC. Terms and conditions apply. NMLS 696891.
Elon Musk vs. Sam Altman, AI Job Loss, and OpenAI’s $852B Valuation
Date: April 14, 2026
In this lively, fast-paced episode of Moonshots, Peter Diamandis and his panel of "moonshot mates"—including Salim Ismail, David Blundin, and Alex Wissner-Gross—break down the latest upheavals in AI: Elon Musk's legal war with Sam Altman and OpenAI, dizzying new valuations for AI labs, fears (and hopes) of AI-induced job loss, and the rapid encroachment of AI into every corner of business, science, and society. From the granular details of trillion-parameter models to the macro-level shifts in political and economic power, the hosts serve up a blend of insight, optimism, and realpolitik.
Timestamps
“This is a governance war disguised as legal war. The real question is who gets to steer these systems that have like quasi civilizational impact.”
— Salim (17:22)
“AI will be able to do everything that a white collar worker does imminently. That's a fact.”
— David (33:00; reiterated at 53:40)
“We're using trillions like they mean nothing. Do you believe those numbers?”
— Peter (28:03)
“There's a widespread expectation that the next major unhobbling will motivate the world to spend a trillion dollars per year on a single vendor.”
— Alex (29:40)
“If you want to do anything big and bold in the world, you need an economic engine to power it. And we flipped [Singularity University] into a for-profit...exact same process OpenAI is doing.”
— Peter (22:24)
“Science capital is becoming compute capital plus data access plus some validation infrastructure.”
— Salim (76:23)
“With the AI, it's so much more compassionate [for learners]. It's downright cruel to kids to try to teach…in any way other than AI now.”
— David (120:52)
| Segment | Topic | Timestamp |
|---------|-------|-----------|
| 1 | Musk vs Altman lawsuit, governance war | 00:00, 16:44 |
| 2 | OpenAI/Anthropic/xAI: strategies, valuations, technical scaling | 04:36, 10:13 |
| 3 | The AI economy: fundraising, VC surge, economic effects, jobs | 28:14, 33:00, 38:03 |
| 4 | AI agents, Claude Managed & OpenClaw | 29:23 |
| 5 | AI in the labor market, policy, and the new social contract | 56:20, 53:14 |
| 6 | AI and biotech/pharma: OpenAI Foundation, Anthropic acquisition | 73:48, 84:56 |
| 7 | Robotics (US vs China); quantum, crypto & AI | 89:54, 99:42 |
| 8 | Scientific/economic proofs of abundance (energy, agri, health, edtech) | 110:29, 117:35, 119:34 |
| 9 | End notes, optimism, community shoutout | 129:39 |
This summary preserves the hosts’ energetic, conversational style, distilled into clear topics and structured takeaways for listeners seeking deep insight into the exponential shifts of 2026.