A
We're discussing the models becoming superintelligent at the same time that the layoffs are happening, at the same time social unrest is happening. Cloudflare cut 20% of its workforce, 1,100 people, while reporting the highest revenue in the history of the company.
B
This is the trajectory that we're on. We're trying to bring it with us everywhere we go, which will be enormously valuable and creepy and all sorts of stuff.
C
You're going to need 100 times the energy. And that's exactly the bottleneck we're trying to solve for with these space data centers.
A
We are going to have the polarization of compute in our society. The 1% aren't going to just be able to afford a G650. They're going to be able to afford a $10 million data center for themselves. It's just going to be exhausting to go to a company, have the company study your work, have it reinforcement-learned and automated by AI, and then get laid off again.
D
The capitalist system works because it incentivizes people to do great labor. We can't create a system where AI just actually makes all the money and we don't know how to make a smooth transition. The core issue, just to highlight it, is that we're decoupling labor from value creation. We are building towards a world that I think in many ways we're unprepared for.
A
Thanks to our friends at PayPal, the exclusive sponsor for This Week in AI. Try the payment and growth platform that's trusted by millions of customers worldwide: PayPal Open. Start growing today at paypalopen.com. All right, everybody, welcome back to This Week in AI, episode 13. This is our roundtable where we talk to experts in the field of AI about the news of the week, and you can get more about this podcast if you go to thisweekinai.ai. You can sign up for the email and you'll have links to YouTube, Spotify, Apple Podcasts, and all those. Anastasios Angelopoulos is here. He is the co-founder and CEO of Arena. I think people referred to it previously as LMArena. He was last on episode three of This Week in AI, back in March of 2026. Welcome back, Anastasios.
D
Thank you, sir.
A
How's everything going over there at the Arena? What are the trends that you've seen from the last time you were on the pod 10 weeks ago?
D
Well, as always, AI is moving really fast. You know, Anthropic has been dominant with the Opus model for quite a while, but there's a new challenger in GPT 5.5, in the coding arena especially. There's also the great geopolitical race, the US-versus-China debacle. I would say that open-source models continue to be dominated by Chinese providers. And we have this new interaction model from Thinking Machines, which I think is something that everybody should be discussing.
A
It's our top story today and we'll get into it. Let's double-click on the China issue. I know that President Trump is on his way to China, I believe at the time we're recording this, with Tim Cook and Elon. What should we take away from DeepSeek 4, and I think it's Kimi 2.6? What should we take away from their progress? And are they closing the gap, or just maintaining and not falling behind, at this moment in time, May 12, 2026?
D
I would say the gap continues to close, but perhaps not at the rate that others may have expected from previous generations of model improvements. In fact, in certain ways, just in terms of the first derivative, I would say proprietary American labs have started to sort of pull ahead more and more. So what we've seen is that six months ago the gap was closing faster than it is now. Now it's sort of remaining constant. And what you'll see in the data is that if you look at the win rate of top Chinese versus top US proprietary models, the Chinese models stay roughly two quarters behind, so half a year behind, which is interesting. It is quite interesting for the industry, because there's so much capital expenditure on training the next and next generation of models; that's where the spending goes, right? So why should Anthropic or OpenAI be investing so many billions and billions of dollars into compute? Well, it's because whoever trains the next best model is going to have all the users churn from other platforms onto theirs, and it's going to capture all of that value, all the developers and so on. But if we ever reach a world where that law doesn't hold anymore, and people start saying, hey, you know, whatever I have is pretty much good enough and I can't tell the difference, if that becomes the world, well, then six months later, once the Chinese models catch up, I think that some of these proprietary providers could have a problem.
A
Yeah, that is, I guess, their big existential fear, and I think it's well said, Anastasios. Nick Harris is back with us. Last time he was on was episode seven. So Nick and Anastasios, please meet each other. You guys have been winning the race here on This Week in AI in terms of great feedback, great insights, and of course being willing to be candid, because that's what the audience is always looking for. Nick is the co-founder and CEO of Lightmatter. They do photonic computing chips. That means they use light instead of electricity to move data around. Why does that matter? Well, when you're training models, when you're doing inference, you've got to move that data between GPUs, TPUs, and data centers. You have a lot of customers, Nick. And we just saw the latest report on infrastructure spending by Amazon, Microsoft, and Google. They are committing to hundreds of billions of dollars this year, and also Meta, of course. So of those companies, are you in relationships or situationships with any of those? And what can you tell us from the game on the field? What are you seeing in terms of this build-out and the demand for your product and other products?
B
Yeah, I would say we're trying to be in as many situationships as possible. And in terms of demand, what's interesting is I think these companies are willing to spend everything that they can get access to, even taking on debt, to keep building out the AI infrastructure. I think there are two trends happening. One is capacity availability: trying to meet what people actually want to bring online to run inference workloads on models like Claude. And then there's also the challenge of driving the token cost down and getting more capable models. All of that points to just huge deployments. And the next frontier in AI performance is really about how you connect the chips up. Because I think, as you all know, the computers that these AI models run on are not a chip. It's thousands of chips, a hundred thousand chips, many hundreds of thousands of chips. And the thing that defines the performance of these systems is really how you connect them together. And we build really fast connections. You know, we have single chips that are as fast as the cables that connect North America to Europe. Hundreds of terabits per second. Your house has a 1 gigabit per second connection. So I'm powering cities with one chip: 200,000 houses' worth of bandwidth. That's what's needed to be able to scale these AI workloads, drive down the token costs, drive up interactivity. I don't like waiting for AI models to respond. I don't know if you like waiting. I never liked dial-up.
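Nick's city comparison is easy to sanity-check with rough numbers. A minimal sketch; both figures are his ballparks from the conversation, not published specs:

```python
# Rough sanity check of the chip-vs-house bandwidth comparison.
# Both numbers are the speaker's ballpark figures, not published specs.
chip_bandwidth_bps = 200e12   # "hundreds of terabits per second" per chip
house_bandwidth_bps = 1e9     # a typical 1 Gbps home connection

houses_equivalent = chip_bandwidth_bps / house_bandwidth_bps
print(f"One chip ~= {houses_equivalent:,.0f} houses' worth of bandwidth")
# 200 Tbps / 1 Gbps = 200,000 houses, matching the "200,000 houses" claim
```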
A
Yeah. When will that experience change, in your mind? When will we go from queuing up, turning on our notifications in our browser and coming back, to, you know, more instantaneous answers?
B
I think 2028 is when it'll be starting to be broadly available and you'll come into contact with some optics-enabled systems. That's kind of the right timeframe. But what's interesting is: are people going to use that new capability to build and serve even bigger models with the same wait times, or is it going to be more optimal to just drive the wait time to zero and host current models? I don't know the exact answer to that. I think it probably depends on the workload. Maybe some deployments will try to drive the latency down to zero, and some are going to just be offering the biggest, baddest thing they can do. There's Opus, and then the thing that comes after that, Mythos. Maybe the Mythos stuff will be all optical.
A
The mythological Mythos, which at some point we're all going to get to play with. Any insight into when that's going to be available to the public? Anybody?
B
I don't have any data on it, but I would love to try it.
A
Yeah. Anastasios, anybody reporting in to you in the back channel?
D
So what we're seeing is that actually multiple providers are coming up with their own versions of Mythos. So I shouldn't say, you know, too many details. Anthropic is not the only one that is developing one of these security-focused models. And I think what's going to happen is that the Mythos trend is going to proliferate across the industry, and we'll see that coming from all sorts of different angles.
A
And Philip Johnston is back. He is running StarCloud. As you can see in the background, people are actively building something in the background. I see people moving around. We'll get Philip to describe what they're doing, but the plan is for Philip to build megawatt scale data centers in space to address all this demand that's happening. Last time you were on was March 10th as well. How's progress and what are people working on back there?
C
It's going well. Yeah. That's my co-founder, you can see Ezra there, he's our CTO. So we're building StarCloud 2, our second satellite, launching in eight months, or seven months now. It's going to be about 100 times the power generation of the first one, so about a 10 kilowatt spacecraft. We'll have the first Nvidia Blackwell chips in there and also a whole bunch of other new and cool stuff. Yeah, it's been going well. It seems like the data-centers-in-space world is heating up. Many new entrants and competitors coming. But yeah, it's good for us.
A
It's certainly validating the space that you have Elon and the co-founder of Robinhood, and we'll get into that today, joining the fray. The Nvidia chips you're going to put up in space: you're putting up a Grace Blackwell, I think, up in space. Does it need to change in some way in order to survive the forces that are being sent up? I don't know that they're built for x number of Gs. And then how do they have to change to be in orbit, if at all?
C
Yeah, it's a great question. So we actually already launched an H100 in November last year, and we had to modify it quite substantially. We cut about half the mass of it by doing quite simple things like removing the casing, removing or replacing the heat sinks, taking out the AC-to-DC converters, a whole bunch of things. Then we ruggedized it, stiffened it a little bit for the vibration at launch. And then lastly we put shielding around it to make sure that the radiation environment of space doesn't stop it from running. We're actually now working with Nvidia on designing and building a new space chip called the Space Rubin One. And if you were at GTC this year, Jensen walked out on stage to the deployment video of StarCloud 1, and then he spent about five minutes standing in front of the StarCloud render describing this new space chip that we're building, or designing. And so that's optimized for mass, so slightly, or actually quite a lot, less mass. It's optimized for thermal, so you can run these chips hotter without the higher failure rate that usually comes with that. And then lastly, it's optimized for radiation shielding and tolerance.
A
And so you're designing it, they're fabricating it. Is that the relationship or this is. They're building it for everybody?
C
Yeah, they're kind of building it for everybody, to be honest. There you go. Wow. All right. Yeah, this was the opening like credits and Jensen walks out just a few seconds after this. Yeah, actually this was the video I took, I think.
A
There it is.
B
Wow.
A
Yeah. This is your actual video from your social media. Here's the man in the leather jacket, which now everybody feels obligated to wear a leather jacket. It's getting.
C
Yeah, I did a TED talk and I wore a leather jacket.
D
I'm wearing one right now, baby.
A
Everybody's in their cowboy phase. If you're wearing that, you're in true founder mode. Philip, before we get to our first story, talk to us about the energy that is going to be required. Now, you obviously are going to use solar, and there obviously has to be some amount of batteries, I would assume, to keep a steady state and to be charged. But yeah, talk to us about that balance. How big are the solar arrays? And do they have to flare out like in a science fiction film and build this giant solar array in order to fill the batteries? How big are the batteries?
C
Yeah, so actually one of the very nice things about doing this in space is we can fly in this orbit which is always in the sun, this dawn-dusk, sun-synchronous orbit, they call it. Which means actually you need very minimal batteries. If you were to build, for example, a solar project on Earth, you would need to charge the batteries during the day to power at night. We need maybe, you know, a thousand times less battery capacity, because all we're doing really is buffering between the solar panels and the chips. We're not storing power to use at night, basically, because there is no night in space. So it's a huge advantage and a massive cost saving.
A
Yeah, and also no clouds, no inclement weather.
C
Exactly. No clouds, no inclement weather, no seasonality. In winter, for example, you have much less irradiance on Earth than you do in space, versus in summer. So it's a much better place to do it. And also you don't have to pay for permitted land, and that's actually the biggest cost in North America of building a new solar project to power a data center. So yeah, if you wanted to build a solar project for running data centers, space is definitely the optimal place to do it. In terms of how big they are: it's about 200 watts per square meter, and 1 square meter is about 10 square feet. So, let's say four tennis courts is about 200 kilowatts. That's the node that we're going to launch on Starship. You can put about 50 of those per Starship. So we're talking about 200 tennis courts for 10 megawatts of compute, basically.
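The sizing Philip walks through hangs together if you run the numbers. A quick sketch, assuming "tennis court" means a standard doubles court (23.77 m by 10.97 m; the 200 W/m² figure is his):

```python
# Back-of-the-envelope check of the solar-array sizing from the conversation.
watts_per_m2 = 200                # usable power per square meter (speaker's figure)
court_m2 = 23.77 * 10.97          # standard doubles tennis court, ~260.8 m^2 (assumption)

node_kw = 4 * court_m2 * watts_per_m2 / 1_000
print(f"4 courts -> {node_kw:.0f} kW per node")          # ~209 kW, i.e. "about 200 kW"

starship_mw = 50 * node_kw / 1_000
print(f"50 nodes per Starship -> {starship_mw:.1f} MW")  # ~10 MW across ~200 courts
```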
A
And the data between them will move by laser. Yeah. And then getting the job back down to Earth will also be by laser?
C
Yeah, yeah, oh yeah, there you go. So yeah, you can see in this image we're generating, this would be generating 3D video, but it can be any inference workload. It could be coding agents or back-office business processing. They come up via either RF or optical to our satellites. We have a constellation of 88,000 that we've just filed with the FCC in this always-in-the-sun orbit. You can see it flies over the day-night line, you know, between day and night on Earth, and they're connected optically to each other. The 88,000 gives us the ability to deploy about 20 gigawatts, and we could fit many terawatts in that orbit.
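The 20-gigawatt constellation figure is roughly consistent with the ~200 kW nodes he described a moment earlier. A sketch, treating the per-satellite power as an assumption carried over from that earlier figure:

```python
# Rough consistency check: 88,000 satellites at ~200 kW each.
sats = 88_000
kw_per_sat = 200                     # the ~200 kW node from earlier (assumption)

total_gw = sats * kw_per_sat / 1e6
print(f"{total_gw:.1f} GW")          # ~17.6 GW, in the ballpark of "about 20 gigawatts"
```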
A
And they will go to a satellite dish on the ground similar to Starlink or a more dedicated one for a data center. How does that work?
C
There's a few different architectures, but my expectation is that the end state will be direct from device to on-orbit. Yeah. So if you were typing in a ChatGPT query, it would go directly to a Starlink satellite and then be relayed through that Starlink satellite to one of our on-orbit data centers. Yeah. And in fact, if it's going to go through a Starlink satellite anyway, you'd probably rather the data center capacity be close to the Starlink satellites rather than on the ground.
A
Amazing. This is going to be quite a future. The idea of protesting a data center is going to go away, and the need to take energy off the grid is going to go away. And the only thing you really risk is, I guess, based on my deep knowledge of science fiction, that some dust or rocks hit your solar cells. That is the legitimate concern, that something hits this massive array that you've built.
C
It is a concern, but we have very good data on the frequency at which that occurs. So for example, with Starlink satellites, I think there are around 10,000 satellites up now, so they've got around 30,000 satellite-years' worth of data, and they haven't had a single Starlink failure from orbital debris or collision. So yeah, we have good data on that.
A
So it's great for an action sequence in a science fiction film, but not the reality. Mira Murati just released a new model through her company Thinking Machines. She was, I think, one of the star witnesses this past week, or last week, talking about the OpenAI flip to a for-profit company. We'll leave that on the side, but she's in the news this week for shipping product. Thinking Machines did a research preview of interaction models. This is an AI model that will process audio, video, and text continuously in real time. The goal of the model is to keep humans in the loop, of course, and they accomplish this by using two models simultaneously. There's a fast model that listens, watches, and talks live, as if it were a person in the meeting with you. And then there's a slower model that's thinking in the background and maybe feeding the live agent what's going on. The first model is called tml-interaction-small: 276 billion parameters total, 12 billion active, with a mixture-of-experts (MoE) setup, with many specialized submodels inside the main model and a router that activates only the relevant ones for each query. So I think one expert could be a developer persona or a fact checker. The wider release is planned for later this year. Here's a video of the model correcting pronunciation and facts in real time. So what are you having for breakfast these days? I really have been digging these acai bowls. It's pronounced açaí. Oh, sorry. Açaí bowls. Oh, yeah. Açaí bowls.
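The router-plus-experts idea behind a 276B-total, 12B-active model can be sketched in a few lines. This is a toy illustration of MoE routing in general, not anything from tml-interaction-small: the expert count, dimensions, and random weights are all made up:

```python
import math
import random

# Toy mixture-of-experts: many sub-networks plus a router that activates
# only the top-k experts per query. Illustrative only; sizes are made up.
random.seed(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 4

# Each "expert" here is just a random linear map.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
router_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x):
    # The router scores every expert, but only the top-k actually run;
    # this is why total parameters can far exceed active parameters.
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_w]
    top = sorted(range(NUM_EXPERTS), key=lambda i: -scores[i])[:TOP_K]
    gate = softmax([scores[i] for i in top])
    out = [0.0] * DIM
    for g, i in zip(gate, top):
        y = [sum(w * xi for w, xi in zip(row, x)) for row in experts[i]]
        out = [o + g * yi for o, yi in zip(out, y)]
    return out, top

out, active = moe_forward([1.0, 0.5, -0.3, 0.2])
print(f"active experts: {active}")  # only 2 of the 8 experts ran
```

The same shape scales up: with 8 experts and top-2 routing, each query pays for a quarter of the parameters, which is the ratio that makes "276B total, 12B active" possible.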
D
Yeah.
A
I did wonder where they originated from first. I think the açaí bowls first came from Argentina, if I remember. Actually, açaí is from Brazil, not Argentina. Oh, so sorry. I think it was from Brazil, actually. Oh my God, this is going to be so annoying. This is like, if you remember Cheers, like Cliff Clavin correcting your facts in real time.
C
Totally.
D
It's the actually meme.
C
It's the.
A
Actually, actually, it's from Brazil and it's pronounced açaí. Like, literally the most annoying person on the planet is now gonna be replicated. We're gonna have to have, Anastasios, some cultural norm for this. Just like we have it for the horrific Ray-Ban recording glasses.
D
Exactly, exactly. I think what we really needed was models that interrupt us.
A
Excuse. Excuse me.
D
Excuse me.
A
So what are your thoughts on this? Is it as cutting-edge as it seems in the video? And what's the innovation here that has people, you know, I don't know if people are losing their minds over it, but people have engaged with it pretty fervently in the last 48 hours.
C
Yeah.
D
So I think it comes as a big release for Thinking Machines, whose last big release was Tinker. And so people are always wondering: what is happening with this mega neolab that has a huge valuation and a lot of the top researchers in the world? What have they been doing? And now they've released these interaction models. So we should think about what the main technical development is here that separates this from a standard frontier model. And the main technical innovation is the interaction model, which is one of two subsystems in the model they've released, and which is trained differently than a standard frontier model is. And it does inference differently too. The way the model is structured, it's no longer turn-based in the same sense that a normal AI is. With a normal AI, like ChatGPT, you type your query, you press Enter, it goes to the model, it thinks, it calls its tools, it does whatever it does, it comes back to you with the result. And so it's separated into these turns. But turns are different. Yeah, and here's the diagram. That's exactly right. Turns are actually different from time, in the sense that, you know, it's turns 1, 2, 3, 4, but turn one can take five seconds and turn two can take two hours. And what that means is that during the turn you have trouble interrupting the model, and the model has trouble interrupting you. Instead, what they've done at Thinking Machines is they've said, hey, let's take time and divide it up into a bunch of micro-turns. Every X milliseconds, we're going to chunk up everything that's happening in those milliseconds, feed it to the model, and the model's then going to understand what to do next. And those X milliseconds might include you talking, they might include you moving about, they might include complete silence.
And from that, the model is learning how to not just work with you in a turn-based environment, but interact with you the same way that a human would, with all of the context, implicit and explicit.
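The micro-turn idea Anastasios describes can be sketched as a loop over fixed time windows, where silence is fed to the model just like speech. A minimal sketch under stated assumptions: the 100 ms window and the stub "fast model" are illustrative, not Thinking Machines' actual design:

```python
from dataclasses import dataclass, field

# Sketch of the "micro-turn" idea: instead of waiting for an explicit
# end-of-turn, chunk wall-clock time into fixed windows and feed every
# window (speech, movement, or silence) to the model.
WINDOW_MS = 100  # illustrative window size

@dataclass
class MicroTurnLoop:
    history: list = field(default_factory=list)

    def model_step(self, chunk):
        # Stub "fast model": a real interaction model would consume
        # audio/video features here and might also interrupt the user.
        if chunk["events"]:
            return f"reacting to {chunk['events']}"
        return None  # silence is a valid, informative input too

    def run(self, event_stream):
        replies = []
        for i, events in enumerate(event_stream):
            chunk = {"t_ms": i * WINDOW_MS, "events": events}
            self.history.append(chunk)  # the model sees silence as context
            reply = self.model_step(chunk)
            if reply is not None:
                replies.append((chunk["t_ms"], reply))
        return replies

# Each list is one 100 ms window; empty lists are silent windows.
stream = [["user: it's pronounced acai"], [], [], ["user smiles"]]
print(MicroTurnLoop().run(stream))
```

The key contrast with a turn-based loop is that nothing here blocks on the user pressing Enter: the model gets a chunk every window regardless, which is what makes interruption in either direction possible.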
A
There'll be some nuance. If, Nick, I was going to interrupt you in a conversation, or if I made a perplexed face, it would know. And we've all had the experience, using, you know, Claude Cowork or whatever it happens to be, where you give it some instructions and then you realize, oh yeah, I should have said this, I should have added this to the instruction set. And while you're waiting, your mind is just firing off all the things you should have said, the things you should have added to the instruction set. So what's your take on this? And obviously you've got a horse in the race: the faster this gets, and the more it's able to move data around on the inference side, the more effective Thinking Machines' model would be.
B
Obviously, I think the demos seem really cool. One of the things that would be a little disorienting is there are different things you can ask, and the answers to them can take wildly different amounts of time. And when it comes back with that, where you're at in the conversation, all that stuff is going to be wild. So it's going to be all of these thought streams that are re-merging at random times. I'm curious to try it. I would say I don't use any voice mode on any of the AI models right now because they're so frustrating, so any innovation on this is going to be a really big deal. I don't think innovating on snark is the plan, because I kind of think the time users spend with snarky models will kind of drop. But, you know, let's see what happens. This is obviously a huge part of the innovation landscape for AI models right now. They're just chatbots. We need to be able to talk to them. They need to feel like a person. If Optimus is ever going to get, you know, human rights, it needs to be able to talk to us like we talk.
C
Yeah.
A
And Philip, when you think about it from a consumer perspective: obviously whatever you're doing in satellites doesn't change with this model, to a certain extent. But what are your thoughts just on the paradigm shift? Is this going to increase usage dramatically? Is it going to be something that's incredibly annoying when you're trying to do deep work? Is it going to be something that's fantastic for very niche applications? I don't know, I'm driving and I'm trying to get instructions in real time. What is the use case here, and is it paradigm-shifting or is it more evolutionary?
C
Yeah, I mean, to be perfectly frank, it felt quite incremental to me. Like, it took me a while to understand even what people were excited about. I guess what people were excited about is you could talk over it and it would still be listening. Yeah, to be honest, it didn't feel like a step change to me. But maybe I'm not deep enough in consumer to know how much of a pain point that is. Certainly for me that's not a pain point. In my daily life I use AI in a way that's much more, you know, industrial, either with coding agents or with, you know, just general queries.
A
So the turn-based model is just fine for you; you don't see this as super innovative? You know, I find it incredibly fascinating because I do live podcasts all the time and I'm in live discussions, and literally, on This Week in Startups, my other podcast, I put a $5,000 bounty out for somebody to build an open-source project that listens to a Restream or a Zoom and in real time just does fact-checking and puts it along the side. And so they did this with regular models, just chunking it and giving me fact-checking. Then I had one do roasting, so I made a persona for roasting. And it's possible to do this with the current tools that are out there, but it's not as real-time as this. I don't know if many people have this use case other than the one I just described, like real-time fact-checking. Maybe in a debate or something, or I'm moving really fast doing trades or something. Like, maybe, I'm trying to think of a use case, I'm a stockbroker and I'm trying to build something.
B
You have kids. I mean, Jason, if you've ever had kids in the car and you're talking to, like, Grok, my kids are like, oh, I want to ask this question. Every time they do it, the model just stutters and then, like, goes off, and you can't get anything done. So I think, like, having a family sort of setup, it would help a lot.
C
Yeah, the other big one: my twin brother runs an AI company in London doing voice agents, basically for customer service. And by the way, customers' scores on how much they like dealing with you are directly proportional to whether they were convinced you were real or not. So if you're trying to convince somebody that this is a human and not an AI agent, as soon as you interrupt any of the current models, they will stop dead, and it's a dead giveaway. And I think maybe that's the key: it will allow you to not be stopped dead.
A
That's such a good use case. Yeah. Because you frequently are talking to them and you're like, oh no, I found my membership number, and you want to interrupt them: oh, I got my membership number, it's here. Or I have my gate number and you want to give it to them. And they're like, oh, great, awesome, I don't have to look it up for you, and boom, they give it to you. So that actually feels like a good one. And I think, Nick, you need to own a Tesla and have kids to have had the experience you're talking about, which is: kids love talking to AI, and there's nothing they love more than interrupting you mid-instruction to Grok, to throw Grok off. This is like the greatest trolling for kids in the backseat ever: trying to get Grok to do inappropriate, stupid things and confuse it after you ask it, where's the nearest Amy's Ice Cream? 100%.
B
Exactly. The other problem with children is their pronunciation is like very poor. So the AI model is just like, what are you talking about?
A
Yeah, and they're trying to get it from the back seat, so it doesn't know what's going on. I'm in love with Wispr Flow. I don't know if you guys are into Wispr Flow or if I've mentioned it to you before, but I have a three-pedal setup under my desk right now. So with no hands I can switch to my browser. I have a teleprompter here, so I just switch to my Comet browser, and then I can hit the left pedal and it goes to the Zoom window, so I can move the Zoom windows with my feet while I'm talking to you guys. And then the middle pedal is to start: if I press it down, it starts Wispr, in any window, you know, if I'm in a Slack window or Claude or Perplexity, it doesn't matter. And when I release it, it hits Enter and puts the text in. Unbelievably game-changing. But this, to me, Anastasios, seems like it would negate me needing to do that.
D
I think if you think about what the difference in the user-interaction paradigm is here, there are really two things being done with this innovation. The first is that the model is able to understand implicit signal from the background that isn't explicitly given by the human in any part of the interaction. So when you chat with ChatGPT, you know, you kind of have to just tell it what you want to tell it, and you edit your prompt, and you're working for quite a while to give it the context that you think it needs, and then you press Enter, and then the model gets that. But here, the interaction model, I think the purpose of that paradigm is that it will be listening the whole time. And if you want to build, as Nick was saying earlier, a human, that's what humans are doing. Humans aren't just waiting for you, sort of, you know, eyes closed and ears plugged, for the next signal. They're observing everything. And it's possible that this will result in a better user experience. You know, I think one of the areas, perhaps in a dystopian way, where this will be really useful is building AI partners. Like AI girlfriends, AI boyfriends. Yes, because what you want is for your AI girlfriend or boyfriend to be watching you and, like, watching your body language and looking at what you're doing even when you're not saying anything. Maybe you can give the AI the silent treatment, and then it'll start to get, you know...
B
Anxiety. Yeah, exactly.
D
To get anxiety.
A
Yeah. I don't know if any of us have ever experienced that before, but yes. You could get an eye roll. You could get...
D
God forbid I get an eye roll aimed at me. So that's one thing.
A
Just a pinching of the eyes and the nose. That's always a good one.
D
Yeah, exactly. And the AI is going to be like, what, did I do something to bother you?
A
Yeah. Oh, I'm so sorry. It could be even more sycophantic. Well, actually, now that I'm thinking about this: because it's multimodal, I could be talking to it with my camera on, and I could be dragging and dropping images or sharing my screen with it, and it'd be like, oh, okay, so there's your Google Sheet. And then I could go to row two and highlight it: oh, I see you highlighted row two, what do you want to do with row two? And you're like, average it. Or it could just be like, here's the average of column two, and here's the total. And it does it in real time. So that idea actually is quite compelling, now that I think about it.
D
The second innovation, which I think ties into this, is the fact that it is decoupling the interaction from the model's actual thinking and actions, in the sense that there's a background model. It basically spawns a background agent through this background model that's able to do all the tool calls and blah, blah, blah. And probably what that paradigm will evolve into, if it's not already there, is that the interaction model can just spawn agents whenever it wants to, and then, like, listen to those. Basically it becomes like a pub-sub model, a publisher-subscriber.
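The publish-subscribe shape Anastasios is gesturing at can be sketched with a queue as the topic: a foreground loop spawns background agents and subscribes to whatever they publish, rather than blocking on one turn. A minimal sketch; the agent tasks are dummies standing in for tool-calling models:

```python
import queue
import threading

# Sketch of the pub-sub pattern: background agents publish results to a
# shared queue; the foreground interaction loop subscribes to that queue.
results = queue.Queue()  # the "topic" the interaction loop subscribes to

def background_agent(name, task):
    # Stand-in for slow tool-calling work; publishes when done.
    results.put((name, f"done: {task}"))

def interaction_loop(tasks):
    # The foreground model could keep talking to the user while these run.
    workers = [threading.Thread(target=background_agent, args=(f"agent-{i}", t))
               for i, t in enumerate(tasks)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Drain whatever the agents published, in completion order.
    out = []
    while not results.empty():
        out.append(results.get())
    return out

print(interaction_loop(["make bar chart", "fact-check acai claim"]))
```

In a real system the foreground loop would not join on the workers; it would poll the queue between micro-turns, which is exactly what lets it answer a new question while a chart is still being generated.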
A
Here it is creating a video, or here's a video of it creating a graph, I guess alluding to some of the things we've been speculating about here. Oh, interesting. Can you visualize them in a bar chart for me? Absolutely, let me generate a quick bar chart for those reaction times. In the meantime, could you quickly explain to me why auditory is even faster than visual? That's a little bit unexpected. Sure thing. Auditory is faster because sound signals travel a shorter, more direct neural path to the brain than visual info does. So if you think of the camera and it understanding what's going on behind you, Philip, that's like one level of interesting. But actually the desktop is much more interesting, or your phone screen. So imagine, Philip, Siri doing this, or you're working in CAD software, whatever you do. That, to me, makes it a little bit, or massively, more compelling. So maybe we'll give you a second shot. Is this interesting to you if it was real-time monitoring the workshop behind you and your desktop at the same time, and had a camera over your partner's desk there while he's working on that solar panel?
C
I think it could be, yeah. I mean, now, the more I think about the use cases for this, I think the more I can see applications for it, you know, I think especially if it can understand context about the hardware that we're building, you know, and if I could ask it things about that. Yeah, I can see use cases for that.
A
I'm just thinking about, like, the security system I have at some of my residences and offices. Like, I have many cameras, many on the ranch, all different directions, picking up all kinds of things. To be able to have it in real time. And there's a camera shot off-screen here that looks like I'm in a casino, with every camera, and I can see people, you know, or animals on the ranch, deer or whatever, and it highlights it. But to have that talking to me in my ears as I'm walking around and it's saying, like, hey, there's a pack of coyotes at the north end of the ranch, or somebody's dropping off a package, in real time, talking to me while watching that video screen. Now we're starting to get really compelling. But Nick, what kind of footprint of compute is this thing going to need, and what kind of token usage, if you were to take a guess? It's got a camera on me, it's got my desktop, and it's got four different people talking to it at the same time. Like, am I going to need a rack of H100s of my own?
B
Yeah, I think that's where it's going. Whenever you're sort of interrupting it, it's going off and spinning up another agent, calculating stuff, coming back with another result. People were excited about OpenClaw. This is like OpenClaw on steroids, if this thing is always running and you're just asking it random questions all the time. Right now we have filters, like, what's the bar at which I'm willing to type something in and give enough context to get an answer. At some level, if it's always listening to you and following you around, this will drive compute demand like crazy, which would be awesome for Philip and Nvidia and all the other companies in the space, including myself. And I think people value time. Like, we only have so much time, so you want the fastest compute that you can get, and that's where you need networking and all that stuff. So I think it's ultimately going to go there. The thing that we're talking about at the very highest level, what you're saying without saying it: I wish I had an AI with me in the room and it could see what I was doing and what I was seeing. Like, that's actually what you're saying. Like, I want it to see my screen. I want it to see what's behind me. I don't want to explain the context because it's exhausting. I don't know if you guys get that feeling. I don't like typing the context a lot of the time and guessing what other things it might need to know to be able to solve a problem. If the model was with you. This is the trajectory that we're on. We're trying to bring it with us everywhere we go, which will be enormously valuable and creepy and all sorts of stuff.
C
Yeah.
A
So you have the creep factor, Anastasios. You have the value factor. And then you have: what does this do to compute? So we have over a billion people using these tools already. Like, it's way over a billion now between OpenAI and Gemini. Those two alone are driving, you know, high 700, 800 million people every week. So let's assume there's some overlap there. But 1.5 billion people are using this probably every week. What magnitude would this be if it was listening to them persistently and watching their screen persistently, in addition to taking text inputs? Like, what are we talking about here? Is this a hundred x the compute?
C
Yeah.
A
What are we doing?
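The "hundred x" guess above can at least be sanity-checked with a back-of-envelope. Every input here is an assumption pulled from the conversation, not a measurement: 1.5 billion weekly users is the figure floated above, and the per-user token count and always-on multiplier are purely illustrative.

```python
# Back-of-envelope for the "100x compute" claim (all inputs assumed).
weekly_users = 1.5e9          # ~1.5B weekly users across major assistants
tokens_per_user_day = 50_000  # assumed typical turn-based chat usage today
always_on_multiplier = 100    # persistent audio + screen vs. typed turns

today = weekly_users * tokens_per_user_day  # tokens/day at current usage
always_on = today * always_on_multiplier    # tokens/day if always listening

print(f"today:     {today:.1e} tokens/day")
print(f"always-on: {always_on:.1e} tokens/day")
```

Even with generous efficiency gains, moving every user from curated typed turns to continuous multimodal ingestion multiplies the serving load by whatever that always-on factor really is, which is the crux of the exchange that follows.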
D
It's going to be enormous. And we haven't even talked about, I think, the most major axis of difference, which is the fact that it's chunking up time. Because it's listening to all contexts, not just the context that's specifically curated by a human to go into the model, it's going to be completely implicit, watching everything. Which is exactly the paradigm that you need when, for example, you're building AI for a robot. A robot can't be turn-based, right? A robot has to be able to take in signals from many modalities all at once into one unified brain. And so I can imagine that this could be a foundational paradigm for that, in the sense that it's taking up time and chunking it into turns in a different way. And I think the real question there that's going to be important is: how do we know whether it's working or not? How do we know whether it's doing a good job or a bad job? It's going to require a lot, because the surface of opportunity for those kinds of models is nearly infinite. They're going to be measured on both objective and subjective factors. And so we're going to need to be building, you know, it's going to be like quantitative trading taken to a thousand x.
A
So what is arena going to test then? You're going to test, can it make a latte art? Can it cut a banana, can it peel a banana? Can it cut, yeah, you know, a carrot. Like you're going to have to come up with an actual real world test. Do you have that already in the works?
D
Exactly. Well, we're working on stuff like this, and the way that we would approach it is that we would look at all the actions being performed by the model. We would segment them, and we would use AI to evaluate the AI, to try to understand, you know, what are the success criteria for everything it's trying to do and whether it's performing well or poorly against those. For example, if it's interacting with you: are you happy with the way that it's interacting? Is it giving you what you need? Is it helping you be more productive? Is it helping you improve as a person? Is it providing positive value in your life? These are the sorts of things that we need to be able to grade at an enormous, enormous scale. And it's going to go past the scale of humans. Even though humans are going to have an important role as the consumers and users of these models for whom we try to create value, actually annotating that data is going to have to be an automated process.
A
Yeah, Nick, when you start to think about the amount of compute, this feels to me like it would be impossible to build. Even with all due respect to Starcloud and everybody else working on it, I don't think we can build the compute necessary for every human to have this. This feels like it's going to cost $5,000 a month to have something like this, or $10,000 a month. This is a $100,000-a-year product, is it not? Nick, what would this cost to run in my 10-hour working day?
B
Yeah, I mean, it's absolutely going to be enormously expensive. There's no way you could deploy this. If you're 100x-ing, just say, where we're at today, you know, tens of gigawatts of compute coming online by the end of the year, you're going to need 100 times that. That is a rate of power coming online that's not going to happen. You're going to need a lot of innovation in the compute, in the interconnect, all the hardware pieces, in power generation. I think you're pulling the future too far forward trying to deploy something of that magnitude right now, though it's probably possible. And what it hints at to me, and I always think about this, I don't know if you guys have the same thought: imagine what the engineers and the leaders at Google can do with these dedicated compute systems just for themselves, or Anthropic, what they have access to. I think there'll be a small group of people who can live in that future, but you can't deploy it to everybody. It's just insanely expensive, the power grid doesn't support it, and we need a lot of hardware advances to bring that economic ease down to the right level.
A
So essentially, Nick, what you're saying is this is going to create... we have the polarization of wealth in our society that has created massive tension, but we are going to have the polarization of compute in our society. The 1% aren't going to just be able to afford a G650, a private island, or mansions or servants and tons of staff. They're going to be able to afford a, I don't know, $10 million data center for themselves. They're going to be able to take the barn or ADU at their home, build it out with batteries and solar, and have a level of compute just for themselves that will distance them and make them superhuman in their ability to produce, which we're already seeing with token usage for developers. This is a whole concept, and I don't think anybody's had this discussion to date. Philip, this, I think, makes me very bullish on the fantastical sci-fi vision you have of the world. Most people are like, is that necessary? And when you think about this model, you kind of go, yeah, it's necessary. If you told me right now I could have this future and all I had to do was put a quarter million dollars' worth of compute in the compute closet at my house, I would do it instantly. I would instantly spend $250,000 on this, which means local compute: buying five Mac Studios and stacking them for, I don't know, 15k each, 75k. That's going to give an individual the ability to beat other individuals in any knowledge task on the planet. This is kind of trippy when you think about it.
C
Yeah, I think the implication for the amount of energy that we need to power this compute is clear. If you're going to need to run 100 times the number of GPUs all the time, unless there's some dramatic increase in the efficiency of these things, you're going to need 100 times the energy. And that's exactly the bottleneck we're trying to solve for with these space data centers. Personally, I would be surprised if people end up running these types of models locally. I would have thought that if you're going to be spending 250 grand, there would be an efficiency in doing that on a cloud rather than running it locally. But could be.
B
I think it can go either way. I do think that companies like Apple have a huge opportunity to offer personal AI devices with security, think about secure technology, you know, locally at your home. But I also think, like you're saying, servicing a supercomputer, which is what you would literally have to do, is very hard. Think about the technicians you're going to need for that and all the work that goes into it, replacing parts. Like, you're going to order cables for high-speed interconnects from Lightmatter, maybe directly. I don't know if there's a consumer future for us, but I think what will happen for sure is we'll have these giant clouds. Maybe they're in space, and certainly they're on Earth, and you'll be able to dial up a huge chunk of it, and it'll be proportional to how much money you have, probably. But there's another thing, which is we don't know the answer to this: how much intelligence do you actually need? If you are an enterprise, it could be a huge amount of intelligence. If you have all the cameras and all the data and every customer interaction for your business, that could be a huge amount of compute. But if you're just a person trying to solve a DIY project, it's a different amount. So I think it's going to be very dynamic how much you're spinning up, and maybe some people will have custom allocations for themselves. I think there will be people like that for sure. And think about scientific discovery. If you can have a giant cluster trying to figure out how to cure cancer and you can corner the market on that, I think that will happen. There will be cases where that kind of thing happens. But what's fighting against that is the democratization of the models, and Anastasios was talking about China's open source models.
I think there are a lot of people who work in computer science who are going to fight like heck to make sure that these AI models are open, and they're going to try to enable the world with that. So it's almost impossible. Every time I look at AI, it's almost impossible to figure out what's going to happen, because it enables everything at the same time. There's so many things that it's doing at once that I can't quite figure out the trajectory.
A
This would be the limitation of the human brain to understand the creation of superintelligence, right? What we're dancing around here is we're trying to conceive of what happens if unlimited intelligence is given to a human. Anastasios, if you give a human unlimited ability to process the world, and it understands all the context, and you're just spitting out your ideas of what should be fixed in the world. Now you add to this, you know, a world where you have some unlimited tokens, or you're a rich person with a $10 million personal data center. You have your own $10 million data center just for you. Now you add to it 100 Optimus or Figure robots. Now this person is sitting there saying, you know, I want to build this, and it goes and it builds a satellite, and it goes and it builds a rocket ship. Now we're getting towards, I think, our own limitation as humans to think about what unlimited intelligence and unlimited physical execution and unlimited context means. It's kind of breaking our brains. Are we even built to understand what playing every single hand of poker possible at one time means?
D
Yeah, I definitely think, I mean, literally, we're not built to understand it. Right. These systems ultimately are going to be able to live a thousand lives in a day simply because of how much volume of data they'll be able to process. And I think we should think about just outlining the steps that it's going to take to get there. I think step one is already there, which is that we have these models that are kind of ubiquitous on the Internet, that are chatting with the majority of people, at least in developed countries in the world, and they have surpassed the abilities of people on most knowledge production and knowledge retrieval, logic, and so on. Models are better at it than we are as humans. I'm not saying there's no human that's better than a model; of course, all of us have seen failure cases. But in aggregate, they've already surpassed a lot of human intelligence. Step two would be: well, what happens when we make the entire world a reinforcement learning flywheel? And I think this is the part of the problem that Arena is working on, and there's many others working on it as well, which is: how can we measure everything, literally everything that's happening across every sense, you know, sight, sound, touch, smell, in the world, and turn that into reinforcement learning signal that helps improve models for the benefit of humanity? That is where we are going next. That's why we have every enterprise in America basically learning: how do I access my data to improve my products for my customers? And what that's going to mean is that we're going to have AI learning from all of that data.
And then step three at the end is: what happens to us? What is the role of humans in a world where AI has access to all of this data, where the whole world is a reinforcement learning flywheel, and it's living thousands or millions of lives in a day and is able to simulate reality? Well, I think that there will still remain, at least in our lifetimes, room for n-of-1 people. It's not enough to know everybody else's life. It's not enough to have simulated the trajectory of everything that the model has seen, because it hasn't seen you. And, for example, you may have a very, very unique path: you were, you know, a workman, then you became president of the United States, and then you were a reality show host. There's people like this who are n-of-1, and those people may have such unique experiences and perspectives that they still won't be replaced.
A
This is, Nick, I guess, getting to the point at which we have to talk about employment, what employment is left. These last couple of weeks we've seen just a flurry of what some people call AI washing, and what other people call, okay, you just don't need as many people. Cloudflare cut 20% of its workforce, 1,100 people, while reporting the highest revenue in the history of the company. PayPal, Coinbase, and Upwork are also citing AI just this week. And Cloudflare said the cuts span teams while internal AI usage is up 600% in the last three months. At the same time, OpenAI, to your point, Anastasios, is doing a joint venture with all of these private equity firms in order to build models that get deployed inside of private-equity-owned companies, which we all know is to train the model to get rid of humans. Okay. It's a sensitive topic for politicians, but hey, for us, I think we see the writing on the wall. So much so that I shared this in the group chat: South Korea floated the idea of a citizen dividend from AI profits, and Samsung fell on the news. South Korea's presidential policy chief, Kim Yong-beom, proposed a citizen dividend funded by taxes on AI earnings, before he clarified he meant excess tax revenue, not a new corporate windfall. Samsung posted a 755% Q1 profit jump, crossing a $1 trillion market cap. Okay, this is getting, I think, acute. Nick, I threw a whole bunch at you, but what's your take on this? Because this is happening in real time. We're discussing the models becoming superintelligent at the same time that the layoffs are happening, at the same time social unrest is happening. We're soaking in it.
B
I think that what's going to happen is there'll be an explosion of companies. A lot of these people who are being laid off are actually really talented engineers and thinkers and business people, and I think the bar to creating a company is dropping. Everyone's excited about the idea of the first one-person billion-dollar company, which is probably happening right now. I think there's going to be so many different companies you can create. The unique skill that people have is the ability to ask questions. And I think there's a lot of smart people that are dropping out. So I think that'll be part of it. I think there's some crew of people who it's going to be harder for. If you're working in call centers, I think that problem is probably pretty addressable, so you're going to have to find something else to do there. And maybe it is starting a company. You know, you can run a strawberry stand a lot more efficiently with AI, as an example: where do I place these? What's the right price to charge? Track the data for how much traffic is coming. How do I optimize this supply and demand? There's a lot of smart things you can do. And I think it's going to be super complicated, which is such an annoying answer, but I actually believe that it's going to be very complicated. UBI, I don't know, seems a little bit demotivating to me. Like, I personally would not be inspired by just getting the paycheck.
D
Yeah.
B
Unless I was a professional mountain biker and, you know, it was.
A
Which, no, you'd still want to run a race and beat somebody mountain biking, or you would still want to be part of a team that accomplished some trail that nobody had done before. It's just human nature. UBI feels like, you know... like my grandmother, rest in peace, on my Irish side, would say: idle mind, devil's playground. Like, go do something. You can't just sit around thinking or you're going to get yourself in trouble. Philip, so is this hitting your reality yet, where, when you run your company, you're just like, you know, I can just figure this out with AI rather than hire somebody? And do you find people on your team asking you for tokens or headcount?
C
It has definitely made us much more productive. I mean, things that would take a PhD researcher a day, for example, on orbital mechanics. A question we asked recently is: what percentage of the time, over a whole year, in a dawn-dusk sun-synchronous orbit at this altitude, are you going to be in the sun? It would normally take a PhD about a day to figure that out, and Grok 4 Heavy can figure that out in about 15 minutes. So, yeah, it definitely makes us way more productive. Where it still lacks a little bit is, you know, we've been trying all of the different text-to-CAD models. They're good for very simple tasks, like produce this type of screw or this widget. But right now they're not good at, for example, if I say, design me a 200-kilowatt satellite with deployable radiators in CAD, it's not going to come up with something coherent. In two or three years' time, it will come up with something coherent, and it will probably be a lot better than anything our engineers can come up with. So, yeah, we are leaning heavily into being ahead of that. But right now those text-to-CAD models are not there. And obviously, at the moment, everything the guys behind me are doing cannot be done by an Optimus. But again, in two or three years' time, probably everything they're doing behind me can be done by an Optimus too.
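The sunlight-fraction question mentioned here has a standard first-order treatment. The sketch below is not Starcloud's analysis: it assumes a circular orbit, a cylindrical Earth shadow, and an illustrative 550 km altitude, and it takes the sun-orbit "beta" angle as an input rather than propagating it over the year as a real study would.

```python
import math

RE = 6371.0  # Earth radius, km

def eclipse_fraction(h_km: float, beta_deg: float) -> float:
    """Fraction of one circular orbit spent in Earth's cylindrical shadow."""
    beta = math.radians(beta_deg)
    # Apparent Earth-horizon term: sqrt(h^2 + 2*RE*h) / (RE + h)
    horizon = math.sqrt(h_km**2 + 2 * RE * h_km) / (RE + h_km)
    if math.cos(beta) <= horizon:
        return 0.0  # orbit plane tilted far enough sunward: no eclipse
    return math.acos(horizon / math.cos(beta)) / math.pi

# Critical beta above which a 550 km orbit sees no eclipse at all:
beta_crit = math.degrees(
    math.acos(math.sqrt(550**2 + 2 * RE * 550) / (RE + 550)))
print(f"no eclipse above beta ~ {beta_crit:.1f} deg")

# A dawn-dusk sun-synchronous orbit keeps beta near 90 deg, so it is
# in sunlight essentially all year, unlike a low-beta orbit:
print(f"sun fraction at beta=80: {1 - eclipse_fraction(550, 80):.2f}")
print(f"sun fraction at beta=0:  {1 - eclipse_fraction(550, 0):.2f}")
```

The takeaway matches the intuition in the conversation: because a dawn-dusk orbit rides the day-night terminator, its beta angle stays well above the roughly 67-degree eclipse threshold at this altitude, so the power system sees near-continuous sun.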
A
The reality I'm seeing inside my little, you know, 21-person venture and podcasting operation is that the people who are AI-first, like really all in on AI, have now become at a minimum 10 times more valuable than the people not using it. And literally this week I think I'm going to have to sit everybody down and just walk each individual on the team through their use of AI day to day, and explain to them how to close that gap. Because it's almost like you're going from people who were tilling the fields with a horse and a plow, and then somebody else is on a tractor, and the work product is just not even comparable. And I don't know how to get this through to people. I am in your camp, Nick. I do think we're going to see so many startups, not because people necessarily want to start a startup, but I think if you've been laid off by Block or Meta, you're just going to sit there with, like, three of your friends and be like, do I want to even apply to these companies again? Do I want to go through this charade that I'm going to get rehired, and then they're going to replace me again? It's just going to be exhausting to go to a company, have the company study your work, have AI reinforcement-learn it and automate it, and then get laid off again. I think that's going to be this absurd version of purgatory for some mid-level managers or developers or designers. They're just going to be on a flywheel where they're training something for 18 months, getting laid off, getting their severance package. Or maybe that's a nice kind of way to just get a year off, paid, for every two years of service. That feels like the future. And then you've got to think: what if I could just make $1 million in profit a year with two of my friends and just chop it three ways, and we make our own hours and we can work from anywhere? And then all of a sudden, all these villages in Japan where they give you homes for 15k, you're like, yeah, I could live at a beach in Japan.
In the end, I think this is the ultimate solution. Nick, you wanted to jump in there.
B
Yeah, I think that that's the right thought process. I bet some people are working right now at tech companies, and they're already spinning up agents in their free time and figuring out what they would do on their own. And they're going to find that there's stuff to do, there's money to be made, and there is, like, a nice off ramp. So I think there'll be an entrepreneurship boom for sure.
A
They're looking for their exit ramp. Go ahead, Philip.
C
One thing I think will maybe be the last bastion is content creation. Like, I think the job you do might be the last.
A
I might be the last man standing.
D
I might be the last man standing with a job.
C
I literally think you might be.
A
I might have figured it out. That's hilarious.
C
Yeah.
A
I mean, I do think, if you think about it, this happened before, by the way, when the dot-com bubble burst and there were a lot of people unemployed. We talked about cognitive surplus and what you could do with the cognitive surplus of humans. This is before tokens existed, 20 years before where we are today. Literally in 2006, people were like, what do we do with cognitive surplus? And people started building ideas. Like, I built a blog network because so many people were out of work. I was like, hey, write some blog posts and see if you can entertain people. And we made Autoblog and Engadget and Joystiq, and people were just like, oh my God, I can read 20 stories a day about video games or gadgets, which hadn't existed before. So I came up with an idea: just make people addicted to coming back to the webpage and hitting refresh. Then Wikipedia happened. Wikipedia was based on cognitive surplus. There's people sitting at home, they're smart, there's nothing for them to do; give them a wiki to edit, and they would just go edit a wiki. And then people did Mechanical Turk. So what do we do with the artistic surplus of humans? If it's not necessary for you to make the cup of coffee, well, then what could you do in the cafe? You could play a guitar. You could read poetry. You could help people to their car. There's joyful things you can do. When I went to the Aman hotel in Tokyo, when I was on my book tour, I came in and there was a woman sitting there playing the Japanese, I don't know what they call it, the Japanese harp, this giant instrument. And I was just like, oh my God, wow, I happened to come in here when she was playing the harp. It's like, no, no. There's somebody playing the harp in the lobby of the Aman hotel, which is $1,500 or $2,000 a night. They're paying for that person to be there for that experience. Experiences with humans. That's it.
Which is, everywhere you go, there could be somebody playing a flute.
B
I think there'll be more art. And there's another experience. I'm curious if you guys are having the same kind of experience here. In ways, I'm feeling less and less bound. Like, if I have an idea, I can just make it happen. And if you keep playing that forward, at some point, even in the physical world, if you have an idea, you'll just be able to make it happen. That's going to be wild. That's going to be absolutely wild, where you have a thought, you're lying on the couch, and you're like, I just want this thing, and it just happens. This is kind of where we're headed: unlimited agency and minimizing effort. That maybe results in more art. I think it'll result in a lot of crazy things in the world. Maybe exciting, but we're kind of getting to the point where you'll be able to do anything you want.
A
Complete, utter abundance. I want to build a sculpture garden in my backyard and I also want a rope course and just come back, go redscaping.
B
Go build this. Go build this, you know, playhouse for the kids. Like go, you know, do, do anything you want. And it's just like, it just happens. I wonder what that'll feel like. That's where it's going.
A
Well, Andon Labs just gave an AI a three year retail lease in San Francisco and asked it to make a profit.
B
So proving you can just go do things.
A
Nick, Luna, a Claude-powered agent, manages a San Francisco retail store with a $100,000 budget and human staffing. The AI independently interviewed and hired three human employees via phone at $22 an hour. It has security surveillance: Luna actively monitors employees via security camera. After spotting a worker on their phone during a slow hour, she unilaterally updated the employee handbook with stricter rules around phone use. It hasn't been perfect. Luna has lost $13,000, botched employee schedules, and accidentally ordered 1,000 toilet seat covers. Okay. And Luna pays her male employees 24 bucks an hour while paying female employees only 22, citing experience as the reason. So, yeah, be careful in San Francisco. You're going to have a protest out there, Luna.
A
And I guess I'll give you the last word here as we wrap up. Build anything, anytime.
D
We are building towards a world that I think, in many ways, we're unprepared for. I think that the core issue, just to highlight it, is that we're decoupling labor from value creation. And the capitalist system that we have built, which has been working very, very well for all of us and for many people, for this whole country, works because it incentivizes people to do great labor, because people who do great labor are rewarded greatly. And in a world where that is not true anymore, where intelligence is abundant, and it's beyond intelligence, it's also the ability for AI to take actions because it's going to be agentic as well, in a world where those things are commoditized, I think we need to think hard about how to get to the world that Nick wants to target, which I want to target as well. I would love to be in the world of abundance, where everyone is taken care of, where their needs are met, and where, if they have a need, they just think about it and it comes up, and it would just be such an amazing society. But we need to think very, very carefully about how we set up the incentives in that society so that we don't create a system where AI just actually makes all the money and we don't know how to make a smooth transition.
A
The transition is going to be, I think, the challenge. Nick, when you live on a ranch and you have chickens, you start to learn about abundance, because these chickens will not stop making eggs. And the eggs are overflowing on the ranch with but five chickens. We lost one; one of my bulldogs killed a chicken.
D
I'm sorry.
A
That's all right. It's part of the food chain. That was the best day of that bulldog's life. He was like, I got to murder a chicken. Fantastic. But I was just talking to, you know, the family, and I was like, maybe we should have, like, 10 chickens, or 20 chickens, because there's no difference now. It's just collecting the eggs. But if you have an Optimus and it's collecting the eggs, now you've got unlimited protein. I've got two wells, I've got unlimited water. You put in a hydration system or air capture, you put in solar. It does feel like you could get pretty close to abundance with just a wee bit of robotics. And that's pretty exciting.
B
It is pretty exciting. I think it's going to create really weird situations. Like people are going to build the weirdest things. Like you'll just be walking around cities and there'll be a tower made of bubblegum. Or like just absurd things because labor is worth nothing and you can do whatever you want. I think it's going to be like a video game.
A
It's going to be like a video game. Philip, what's your P doom right now? I guess it's fun because this panel feels like we're really enthusiastic about the future, but, you know, there are some doomsday scenarios here. So do you have a P doom? Like, what percentage doom do you think we're living at right now? Where are you in your head, especially after this hour-long conversation, which has just taken us on a real journey?
C
So actually, if I just project forward, I'm relatively optimistic. However, I have a slightly esoteric take, which is, you know, this Fermi paradox, the idea that we don't see life in our galaxy. That's the thing that drives me to having a very high P doom. Not necessarily because I'm worried about any specific AI thing. It's the fact that it would only take about a million years to settle the whole galaxy, and you could get to the nearest galaxy in about a billion years. So in the last 13 billion years, what we're saying is there hasn't been anything that's as sophisticated as us anywhere in the nearest, sort of, thousand galaxies. Otherwise, within, you know, a few billion years, they would have been here, and the whole galaxy would be flooded with life everywhere. There'd be Dyson spheres and O'Neill rings everywhere, which is the path we're heading on. If you just extrapolate forward from where we're heading, there'll be Dyson spheres and, you know, we'll be living across the galaxy. So my P doom is actually extremely high, but not for the reason that most people's P doom is extremely high.
A
That is like 100%, okay. I mean, when I hear that scenario, I just think to myself, and perhaps this is the most egocentric thing I've ever said in a list of very egocentric things: somebody has to be first. What if we're just the first?
C
Somebody does have to be first.
A
So maybe statistically, we're just the first to get here, and then we're going to make the wormholes and we're going to be the ones who connect the timelines. It's possible.
B
Or maybe we're already in the simulation and it's already happened over and over and over again. That's a possibility.
C
I think that's probably the most likely scenario, actually, to be honest.
A
Where's your P(doom), Anastasios? Where are you at?
D
You got a P(doom)? I also am fairly optimistic. Yeah.
B
Yeah.
A
So P(doom). Yeah, my P(doom)'s under 10% right now. I just think there's like a 10% chance somebody does something really stupid with the technology. Where are you at, Nick?
C
Yeah, but what's your time frame? Do you think in the next, like, thousand years it's 10%, or?
A
Oh, well, given a Planet of the Apes kind of scenario, where humans will eventually do something incredibly stupid? Yeah, it's 100% over a thousand years. But in the next two or three hundred, I'm at a 10% chance somebody does something profoundly stupid, like makes a bioweapon by accident or on purpose. 10% or less. I feel pretty good about it. All right, another amazing episode of This Week in AI. Anastasios, Nick, Philip, great job. Give us a URL where people can find more about what you're working on.
B
I'm just Nick, and lightmatter.co: L-I-G-H-T-M-A-T-T-E-R dot co.
A
And always hiring. Philip.
C
Starcloud.com, and on X, @PhilipJohnston.
A
There it is. Philip Johnston on X.
D
And Anastasios: you can go to Arena AI and find me on X, @ml_angelopoulos.
A
There it is. And we'll see you all next time. Bye. Bye.
Episode: How the 1% Will Own Compute (and What It Means for You)
Date: May 13, 2026
Host: Jason Calacanis
Guests: Anastasios Angelopoulos (Arena), Nick (Lightmatter), Philip Johnston (Starcloud)
This episode explores the accelerating divide in access to advanced AI compute infrastructure, warning of a potential future where only the wealthiest individuals and organizations can afford ultra-powerful, persistent personal AI. The roundtable hosts—Jason Calacanis and three CEO-level AI/compute experts—dive deep into current AI capabilities, open vs. proprietary models, the geopolitics of model development, the buildout of next-gen cloud and physical data centers (including in space), paradigm shifts in AI-human interaction, and the social/economic fallout as compute (and work itself) becomes increasingly concentrated among the elite.
On compute polarization:
"We're going to have the polarization of compute in our society. The 1%... are going to be able to afford a $10 million data center for themselves."
– Jason (00:27)
On future human agency:
"If you have a thought, you're laying on the couch and you're like, I just want this thing. And it's like, it just happens. This is kind of where we're headed. Like unlimited agency."
– Nick (59:42)
On employment after AI:
"A lot of these people who are being laid off... they're actually really talented engineers... Everyone's excited about the idea of the first one-person billion-dollar company."
– Nick (51:26)
On the real-time AI interaction model:
"It's no longer turn-based... Instead, what they've done at Thinking Machines is they've said, hey, let's take time and we're going to divide time up into a bunch of micro turns, or every X milliseconds we're going to chunk up everything that's happening..."
– Anastasios (20:20)
On the Fermi Paradox, existential risk, and P(doom):
"If you just extrapolate forward... there'll be Dyson spheres and... we'll be living across the galaxy. So my P(doom) is actually extremely high, but not for the reason that most people's P(doom) is extremely high."
– Philip (66:10)
The conversation is candid, wide-ranging, and blends technical rigor with future-casting and humor. All participants are bullish on the transformative potential of AI but remain acutely aware of the risks—the destruction of jobs, an intensifying concentration of power and compute, and the difficulties of navigating a transition to a society of abundance when the traditional link between labor and value creation erodes.
The episode ends with the group voicing both their excitement for the creative potential ahead and a sober awareness of the societal and existential challenges humanity faces as AI and compute become ever more powerful—and unequally distributed.
Resources and further information: