
Latitude Media Announcer
Latitude Media, covering the new frontiers of the energy transition.
Shayle Khan
I'm Shayle Khan and this is Catalyst.
Jake Elder
But if you just do the simple math on a single gigawatt scale space based data center, you end up with a radiator the size of a small town, right? Like between the radiator and the solar panels required, I think you end up with a four square kilometer orbiting asset. And that's obviously complex to manage, but it's also a target.
Shayle Khan
Coming up, a unified framework for all the crazy data center stories you've already heard and will inevitably keep hearing.
Latitude Media Announcer
What if the next big source of grid reliability is already sitting in your home? EnergyHub software coordinates thermostats, EVs, batteries and other devices so they operate as a flexible resource when the grid needs support. These virtual power plants, or VPPs, help keep costs down, strengthen grid reliability and support a cleaner energy system, all while reducing the need for new infrastructure. More than 160 utilities trust EnergyHub to manage over 2.5 million devices. Learn more at energyhub.com.
Antenna Group Announcer
Catalyst is brought to you by Antenna Group, the communications and marketing partner for mission-driven organizations developing and adopting climate, energy and infrastructure solutions. Their team of experts helps businesses like yours identify, refine and amplify your authentic climate story. With over three decades of experience as a growth partner to the most consequential brands in the industry, their team is ready to make an impact on day one. Get started today at antennagroup.com.
Latitude Media Announcer
What if utilities could meet surging electricity demand with energy assets already in homes and businesses? Uplight is making this possible by turning customers and their smart energy devices into predictable grid capacity. Through an integrated demand stack, Uplight's AI-driven platform activates smart thermostats, batteries, EVs and customers to generate, shift and save energy when the grid needs it most. Learn how Uplight is helping utilities unlock flexible load at scale, reduce costs and accelerate decarbonization at uplight.com.
Shayle Khan
I'm Shayle Khan. I lead the early stage venture strategy at Energy Impact Partners. Welcome. So: massive data centers on the grid, massive data centers off grid, small data centers on the edge, huge data center clusters in space. Each of these might get built. Actually, each of them probably will get built. But how much, and when? When you boil it down, I think there are actually two basic questions at play here. The first is the amount of demand for compute in the future, and the second is how to deliver the energy required to meet that demand. I don't really personally have anything insightful to say about the first one. But boy, do I spend a lot of time thinking about the second one. And it has occurred to me that I've never really seen anyone attempt a cohesive framework to think through all of these different pathways. You see proponents of one or another adopt kind of a maximalist approach to any given one, but I haven't seen anybody try to think through how they weigh against each other. So I've been trying to organize this in my head, and in doing so I've realized that each of these different configurations, sitings and types of data centers has a core constraint or two, but each also has its strengths. So to talk through all of them with me, I brought on my colleague Jake Elder. Jake works with me at EIP, and he leads our research practice focused on the built environment, which these days increasingly means a lot of data centers. One more thing: coming up on April 13th in San Francisco, we're going to do a live episode of this podcast at the Transition AI conference. It should actually be a really interesting conversation. My guest is going to be Amin Vahdat, who's Google's chief technologist for AI infrastructure, so obviously relevant to this conversation as well. I rarely do these in person, so if you're in San Francisco, or want to be on April 13th, go sign up for Transition AI and register at latitudemedia.com/events. Here's Jake. Jake, welcome.
Jake Elder
Thanks. Excited to be here.
Shayle Khan
All right, so the premise here is, in the interest of not adjudicating this question, let's just assume compute demand continues to scale. Let's assume superintelligence, AGI, maybe neither of those things, but that the demand for compute, and actually the demand for watts to deliver that compute, keeps growing. Right. Let's also assume that there is no massive energy efficiency gain that comes along and totally changes the paradigm. So let's assume that's true for the next, I don't know, five years, 10 years, whatever we want to talk about. I think the thing that we want to talk about here is: what are the various options to deliver as much of that demand as possible? What are the options on the supply side? And so we're going to talk about the incumbent solution, which is large hyperscale grid-connected data centers. And then we're going to talk about each of the alternatives that I think are currently being proposed, some of them already being developed, some of them being talked about on X a lot. And I think we'll compare and contrast, right? We're going to talk about the constraints on each of them. But why don't we start with the incumbent thing, the thing we are doing right now where all the data centers are, which is large hyperscale data centers connected to the grid. What do you think of as the core constraint to just delivering 10x the compute in that way?
Jake Elder
Yeah, no, great, great question. I think it's going to make for a great, great conversation as we look across the different options here. I think the constraints on the grid side.
Shayle Khan
Right.
Jake Elder
Are fairly well known at this point. It's a speed issue, in particular on the transmission side: how much time will it take to build out the transmission capacity necessary to interconnect these mega sites, gigawatt-scale sites, to new power supply, ideally, you know, carbon-free power supply. And in many cases that's running five to seven years now, which is a pretty massive timeline for data centers given the speed of power and speed of deployment on the AI buildout that we're trying to drive. Maybe the other couple of issues that we should at least be mindful of here are power quality, right. These large data centers, especially as they cluster in certain locations, can have bigger impacts on the grid writ large. And the extent to which society, regulators and utilities are willing to serve those customers if they have bigger grid impacts I think is still a bit to be determined, and a space I'm watching pretty closely. And then maybe the third vector from my side would be, let's call it, social license to operate. We're seeing in many states just blanket bans on new data center developments. We're seeing some developments get pulled years after announcement because of community pushback. And if you listen to Elon, for example, make his best case for going off planet with compute infrastructure, that's really his argument: that at the end of the day society is not going to move at the pace that the AI buildout requires, and therefore at some point we're going to have to abandon the planet and go to the stars.
Shayle Khan
Yeah, we're going to get to the orbital compute thing a little bit later. I think that is a good point. So you mentioned three things. There's the capacity, actual physical capacity on the grid and deliverability. There's the power quality thing, which, of the three, I think is probably the most manageable. Honestly, it's an engineering problem. And then there's the social license to operate, which we're already seeing kind of burst at the seams in some locations, despite being kind of in the early days of this trend. So I think that third one is underappreciated in the question of: are we going to be able to deliver all of the compute capacity that we need via the current paradigm? On the first one, I would say I do think people conflate the transmission problem and the generation capacity problem. And the thing is, they're both problems. I mean, you said five to seven years. The five to seven years is the timeline to get new gas turbines, if you're ordering them. And it's close to the timeline to get new transformers and other switchgear and stuff like that. We're all in the three to five or maybe seven year timeline for that kind of thing at this point. And maybe the timeline also to get a substation upgraded, which is part of the deliverability thing. But I want to say the timeline to get a new transmission line built, especially if it's interregional or across state lines or whatever, is not five to seven years. It is essentially infinite years. In the United States, at least in recent history, we just aren't doing it. So there's a limitation there that might be even more intractable than just the generation thing.
Jake Elder
Totally. And I think it's the unique constraint to the grid connected pathway.
Shayle Khan
Right.
Jake Elder
If you wanted to go towards some of the other options we'll explore down the road and go off grid, for example, you're still stuck with the timeframes for transformers, the timeframes for generation assets, et cetera. And I know we'll talk about some ways to shortcut that. But the transmission side is really unique to this first scenario, and it certainly makes the case that if you want to run around that, you need to think about some amount of on-site power as the only way to avoid having to build more poles and wires to route power from elsewhere to a new site. And on the capacity side, one thing that I think gets lost a little bit in this grid-connected conversation is that it tends to be an all-or-nothing conversation around how we power these data centers. I do think there's a hybrid option here where you're still grid-connected, but the data center brings some of its own power for a few hours in the day specifically to overcome that transmission bottleneck.
Shayle Khan
And to be clear, that's happening now, a lot of this concept. In fact, I've seen people get confused about this, because there was some report that came out, I don't remember who put it out, that said there's something like 50 gigawatts of behind-the-meter generation in development at data centers. Right. And some people have interpreted that to be: oh, 50 gigawatts of off-grid data centers getting developed. Of those, it's actually close to zero that are true off-grid data centers. They are all either grid-connected but with some behind-the-meter generation, or the behind-the-meter generation is a bridge and they ultimately intend to be grid-connected. So it is true that there's a hybrid there. But okay, this is the least interesting one, because this is the way we do things now, and it's going to be the way that we do things as much as we possibly can. I think you and I agree that the first thing that's going to happen, and this is already happening, is that developers are going to find as many sites as possible that can handle hundreds of megawatts or gigawatts of load. They're going to develop those into data centers. So we just assume that happens, and we should just assume it's not enough. Or maybe it doesn't happen because of community pushback, but either way we'll assume it's not enough. Now let's talk about the other, I think, three categories of configurations to get a lot of new compute online. The first one is maybe the least distant, which is: you still grid-connect data centers, but they're smaller and you put them at the edge. I've talked a little bit about edge compute on the podcast before. You and I have spent a lot of time thinking about it separately. First of all, define what you think of as edge compute, because it is sort of malleable. And then, what is your latest thinking on what role that plays in the market?
Jake Elder
Yeah, so this is a really tricky question, right? Edge computing has been around for a while. Historically it evolved to serve certain use cases like telecommunications, and more recently video streaming, for example, is something that happens much closer to the edge than other hyperscale data center activities. But moving forward, I think there's a school of thought that says that AI inference in particular might move to the edge. And I think the first-principles argument that folks tend to make is that latency is going to matter more, and so siting compute infrastructure closer to demand just has a performance benefit that can't be met via large central sites in West Texas, for example. As we've dug in a little more, I think that's a little bit of a red herring, so let's come back to that in a second and talk about why you would actually pursue edge data centers and edge computing. But latency has certainly been one of the reasons historically. That said, edge computing can mean a few different things, to your point. So in the extreme scenario, I think as you move out 10-plus years, more and more is going to happen on device. We already know that Waymo cars, for example, have most or all of their day-to-day navigation and driving decisions made in the car directly. And increasingly, as we have models that can operate on a phone, you might have a version of ChatGPT or Gemini that just operates natively on your phone and doesn't need to go out in the world at all to get access to basic inference results. On the other end of the spectrum, we've seen a few folks announce larger-scale projects, think 20 megawatt data centers, maybe 15 to 30 megawatts. And those folks are basically building mini hyperscale sites, but they're trying to build them in locations where they think they can get power sooner, and perhaps in a regional node where they could serve some more latency-sensitive applications.
But from a design perspective and a deployment perspective, they kind of look like much of what we're building today, just small-scale relative to the gigawatt-scale assets. My suspicion is that's probably the most economic piece here, and so if this becomes a cost play, that's the space that becomes most interesting. But again, let's come back to that. And then I think there's this third category, which is really more of what we might have traditionally thought of as edge computing, where you've got 100 kilowatts at a given site, or a couple of megawatts at a given site. You could think about these being located at utility substations or in a commercial real estate office basement. And the reason to pursue that, at the end of the day, is probably speed: we know, across the folks that we know well, that there are a number of individual parcels of land that were provisioned for 5 megawatts of power and are only using 2. So the theory is that you could suck up a bunch of those assets relatively quickly and start to build out a network. But if you end up in a cost game and you're trying to be the cheapest form of inference, it strikes me that that probably struggles, because you're subscale relative to bigger sites.
Shayle Khan
Yeah, you said a couple things that resonate with me based on what I've learned. The first is that latency is a bit of a red herring. Not for zero applications, but for very few does it seem that you need such low latency that edge has a big benefit over the sort of regional hyperscale model that we have today. And people use the example of things like autonomous vehicles. That was a classic case people would talk about: well, you need edge computing for autonomous vehicles. But as you said, most of what a Waymo needs is inside the car. And so, as I understand it, they can operate with compute inside the car, and then when they need to go pull something from the cloud, it's generally not so latency-sensitive that they can't handle the hyperscale model. So this concept of edge being necessary for latency purposes, I'm yet to have that proven to me. I'm waiting for it, but it does seem unlikely. Secondly, it's hard to imagine it's cheaper. Now, people do make the argument that you might get free land, right? And that could be true, if you're taking land that's already getting paid for because it's at a commercial property, or it's in a parking lot, or it's at a utility substation where the land could be pretty cheap. But if you look at the fully loaded cost of a data center, land is not a big portion of it. It's a very, very small portion of it. The cost is actually in the GPUs, obviously, in the building, in the labor, all those kinds of things. And as you said, being subscale is tough. Maybe you can make some modularization argument: you have the standardized shipping container, and the shipping container is super cheap and easy to deploy, you just plug it in. But as is true in many other sectors, my guess is your 300 megawatt data center, on a fully loaded, levelized cost per FLOP, is just going to be cheaper.
So it's probably not a cost thing either, which means it's a speed thing, right? And speed is the name of the game right now. But I think what remains to be proven in edge world is that it can actually be faster at the same scale. This is what we need to find out.
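Shayle's "land is a very small portion of it" argument can be sketched with a back-of-envelope levelized cost model. Everything below is a hedged illustration: the capex, opex, power prices, utilization and lifetime figures are assumptions for the sake of the sketch, not numbers from the episode.

```python
# Back-of-envelope levelized cost of delivered compute for a large site vs. an
# edge site. All inputs are illustrative assumptions, not numbers from the show.

def levelized_cost_per_mwh(capex_per_mw, land_per_mw, fixed_opex_per_mw_yr,
                           power_price_per_mwh, utilization,
                           lifetime_yr=10, discount_rate=0.10):
    """Rough $ per MWh of IT load, annualizing capex with a capital recovery factor."""
    crf = discount_rate / (1 - (1 + discount_rate) ** -lifetime_yr)
    annual_fixed = (capex_per_mw + land_per_mw) * crf + fixed_opex_per_mw_yr
    mwh_per_mw_yr = 8760 * utilization
    return annual_fixed / mwh_per_mw_yr + power_price_per_mwh

# Assumed: the 300 MW site gets scale economies on capex/opex and higher
# utilization; the edge site gets its land for free but pays more for the rest.
big = levelized_cost_per_mwh(capex_per_mw=12e6, land_per_mw=0.1e6,
                             fixed_opex_per_mw_yr=0.3e6,
                             power_price_per_mwh=60, utilization=0.90)
edge = levelized_cost_per_mwh(capex_per_mw=15e6, land_per_mw=0.0,
                              fixed_opex_per_mw_yr=0.5e6,
                              power_price_per_mwh=80, utilization=0.80)
print(f"large site: ${big:,.0f}/MWh   edge site: ${edge:,.0f}/MWh")
```

Even with land zeroed out entirely, the edge site comes out more expensive under these assumptions, which is the subscale argument in miniature: land barely moves the answer, while capex, power price and utilization dominate.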
Jake Elder
Yeah, I think that's right. And I do think at some point the speed game is going to slow down and cost is going to matter, especially in the inference world. I don't know exactly when that happens. And in our future scenario where we're in some kind of relatively quick takeoff around AI capabilities, maybe speed matters for longer, because models continue to improve kind of indefinitely. But at some point, when we have agentic employees in most Fortune 500 companies in this kind of future, right, the cost of those workers matters. And so I do think at some point, if there's an edge buildout and you're looking at two or three different edge deployment models, while speed matters, the cheapest of those models might be the one that ends up winning at scale.
Shayle Khan
Yeah, but I think speed remains a question mark. In principle, if you have an existing interconnect, as you said, some commercial site that has a 5 megawatt interconnect and is using 2 megawatts, and you put 3 megawatts on there, that should be much faster than waiting for an upgrade in the system. But of course, to match the speed and scale of delivering one 300 megawatt data center, you then need to go find 100 of those sites and develop 100 of them. And in principle, I can understand how that could be faster, but I'm waiting for somebody to show me that that is true.
Jake Elder
Yeah. And certainly requires a lot more conversations and turning over of rocks and dead leads as you try to build it out.
Shayle Khan
Right.
Jake Elder
You've got to have 100 success outcomes in terms of site evaluation, not just one.
Shayle Khan
Right. Okay, so that's edge. Both of those are grid-connected. Now let's assume the grid becomes the constraint; it just is the constraint. Okay, so now we're going to get into options that are increasingly distant, in a literal and a metaphorical sense. But let's start with the one that I think is maybe least talked about relative to how interesting I find it as an answer to this question, which is just off grid. And again, we're not talking about a hybrid version where you have behind-the-meter generation but you're still grid-connected. Let's just say you can put a data center anywhere. That's an amazing relaxation of constraints. If you remove the grid as a constraint, we have plenty of land available, right? That is not the constraint here. And you can go where there's the cheapest labor; you can go where there's the easiest permitting and siting. It does change the game in that manner, but it does have its own set of challenges and constraints, which is why it hasn't happened a lot historically. So what's your perspective on just straight off grid?
Jake Elder
Yeah, I mean, you make a pretty good case. It should be pretty attractive, right? There was this foundational study that came out about two years ago, co-authored by Stripe, Paces and Scale Microgrids. They found over a terawatt of opportunity in the American Southwest alone, with high levels of renewable development able to support those assets: something like 50% solar plus batteries at cost parity with using all gas, and the ability to get up to, I think, 80 or 90% solar without a meaningful cost increase. So from a land perspective and a resource perspective, it makes a lot of sense. And to your point, it can also move really quickly. You can avoid the places where the public really doesn't want data centers. You've got such geographic flexibility. It should be the opportunity, if you just take a first-principles approach. And we certainly don't need to be thinking about going to space until we think about going to remote parts of the Earth. But to your point, it's not happening at scale yet, and I think there's a couple of reasons for it. There are some projects happening that we can learn from, and we've got some anecdotes to support this. At the end of the day, the grid's a marvel of humanity and it does a lot of really good things, in particular being a giant shock absorber for any one individual asset. If you go off grid and you have to operate on an island, you have to build the whole shock absorber yourself: all of the inertia, the fault response, the ability to black-start the asset. And that's not just expensive, it's really complicated, and there are not a lot of folks out there who know how to run a gigawatt-scale grid at all. And so when you think about the risks that these new data center developers need to take, and the values of these assets, betting on a model where you can't be comfortable, or can't guarantee, that you're going to have 99% uptime is possibly a non-starter in some cases.
And we've heard some of the early data from some of the off-grid projects that have been built so far. The anecdotes suggest they're not able to stay above even 90% uptime yet. Will they get there over time? Probably, right? This is a learning curve, and we know that there are power quality solutions that can manage a lot of these issues. But it's a big risk, if you're going to be a first mover for a $10 billion asset, to design it in a way that you don't know how to manage and operate and keep running.
Shayle Khan
Right. It strikes me as one of these things that clearly should be solvable. It is a real engineering challenge, it appears, and you and I have looked at some of that same data. It does appear that there are actually projects, mostly ones that are bridge power projects, so they're currently off grid intending to be on grid eventually, but as they are operating off grid, they are not operating at the normal five nines of reliability or whatever. Now, interestingly, you may or may not need that. In some ways it's a legacy of the cloud business, where AWS and Azure and Google basically promised in their SLAs to their customers that they would be able to offer really high uptime, and so they have this huge redundancy requirement and so on. In the new world of AI, sometimes you do need that, sometimes you don't. And so there may be a class of data centers that can accept sub-99.999% reliability. There's an economic impact, of course, to lower uptime. But again, in a world where we're so constrained on the grid side, it seems inevitable to me that that is going to happen to some degree, and that the engineering challenge is going to get at least partially solved.
Jake Elder
Yeah, I think that's right. And I think over time we'll figure out better ways to, you know, have more and more checkpoints as you're doing model training runs and whatnot, such that you could tolerate a major outage. I think the key is you could make it work at 90% uptime if you know when that 90% is. I don't know whether you could handle total randomness with that 10% downtime. And if all the downtime happens to come in the middle of big, long, expensive model runs that it takes down, I don't know what that does to the economics of those projects. And I do think we'll learn a lot here. I think it's critical also to acknowledge that those that operate our larger grid don't yet know how to manage these sorts of voltage swings and harmonic distortions that are coming from these data centers. And so if we can't solve the problem when the data centers are a small part of the overall load on the system, then it tells me that's probably going to take us some time to figure out how to solve it when they're the only load and you've got a much more constrained set of tools to manage the impact.
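The "economic impact of lower uptime" point above can be made concrete with a toy amortization model: the same annualized spend is spread over fewer productive hours. The $1.5 billion per year figure below is an assumption (rough amortization plus operating cost for a $10 billion class asset), not a number from the episode.

```python
# Toy model: effective cost per productive compute-hour at different uptimes.
# annualized cost figure is an illustrative assumption, not from the episode.

def cost_per_productive_hour(annualized_cost_usd, uptime):
    """Spread a fixed annual cost over the hours the asset is actually running."""
    return annualized_cost_usd / (8760 * uptime)

annualized = 1.5e9
five_nines = cost_per_productive_hour(annualized, 0.99999)
ninety = cost_per_productive_hour(annualized, 0.90)
premium = ninety / five_nines - 1
print(f"90% uptime costs {premium:.1%} more per productive hour than five nines")
```

The simple premium is roughly 11%, but as Jake notes, that understates the real risk: random downtime that kills a long, expensive training run costs far more than the same downtime taken on a known schedule.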
Latitude Media Announcer
What if everyday devices had the potential to strengthen the grid when it's needed most? EnergyHub helps utilities turn that potential into dependable capacity by coordinating thermostats, EVs, batteries and other devices into virtual power plants that respond to grid needs in near real time. EnergyHub's latest white paper lays out a maturity model for VPPs that can be planned and dispatched with the same confidence as conventional plants, while being 40 to 60% less expensive to build. That's why more than 160 utilities across North America partner with EnergyHub to manage over 2.5 million devices that provide 3.4 gigawatts of flexible capacity. Read the white paper and discover what VPPs can do for your grid at energyhub.com.
Antenna Group Announcer
Catalyst is brought to you by Antenna Group, the OGs of PR and marketing for climate tech. Is your brand a leader or challenger? Are you looking to win the hearts and minds of customers, partners or investors? Are you ramping up your new biz pipeline? Are you looking to influence policy conversations? Antenna works with leading brands across the energy, climate and infrastructure space to do all of this and more. If you're a startup investor, enterprise or innovation ecosystem that's helping drive climate's age of adoption, Antenna Group is ready to power your impact. Visit antennagroup.com to learn more.
Latitude Media Announcer
The grid is changing fast. Data centers, electrification and extreme weather are driving a surge in energy demand. Utilities can meet the moment with existing resources by activating energy customers and their distributed energy resources to create predictable and flexible capacity with Uplight's integrated demand stack. Uplight coordinates energy efficiency, rates, demand response and virtual power plant programs into one cohesive strategy to strengthen grid resilience, improve energy affordability and make progress toward clean energy goals. Learn how Uplight is helping leading utilities harness over 8 gigawatts of flexible load at uplight.com.
Shayle Khan
Yeah. And though cost is not the determining factor in this stuff right now, it's not nothing. The way to engineer yourself into five nines reliability off grid right now is to overinvest in both generation capacity and storage. And you can do that, but it comes at a significant cost. It starts to actually matter for your economics. And back to your point: who finances your $10 billion asset if it is at the top end of the cost curve?
Jake Elder
Essentially, yeah. And then you start getting into, you know, do you need two different fuels?
Shayle Khan
Right.
Jake Elder
If you're going to use some kind of baseload resource, or if it's just gas, you need two separate gas pipelines, and that constrains sites and adds costs. And oh, by the way, we've just jumped in assuming that location doesn't matter in this world, right, and that you can do everything in remote parts of the country, for example. I'm curious for your take there. My suspicion is that, at least to date, folks are still generally sensitive to where they're being sited, not for all projects, but for most projects. And if there were lots of off-grid opportunities in Virginia, for example, I think we'd see them being pursued more quickly than we're seeing some of the stuff move forward in West Texas, New Mexico, et cetera.
Shayle Khan
I think that's changing in real time. Historically there were these tier one markets, like Northern Virginia or Chicago or Phoenix or Atlanta, and they were where 90% of the demand for new data centers was going to be. And there's still that, but it is broadening out quickly. You see all this development in West Texas, for example, so many data centers going into Texas, and I think that's just because of speed to power and availability and scale. So the constraint of needing to be in certain locations still matters from the standpoint of: is there a workforce? Can you get enough labor, electricians and construction workers, and water, and all that kind of stuff? But apart from that, my sense is that it is not the most important thing. The one thing I do want to say, though, about the off-grid thing, and you mentioned this before, but let's reiterate it: assuming you can solve, with sufficiently advanced engineering, for whatever reliability you need, your constraints on scaling then predominantly become power generation and delivery. Because you're probably going to need some gas, you still need turbines; or if you're doing a lot of solar and storage, you need solar and you need batteries; you need transformers, you need switchgear, all that kind of stuff. So you're now still in that supply chain problem. And I want to mention that because, if that is the constraint on really massive scale off grid, in a minute we're going to talk about orbital, and we can compare and contrast which is the more challenging constraint between those two.
Jake Elder
Yeah, no, that's a great reminder. You're still stuck with all the generation and supply chain issues, maybe with one possible exception, which is that your gas infrastructure is likely going to be smaller and more modular. You're not going to have a 500 megawatt combined cycle turbine as your sole generation asset for a massive data center, just because of the redundancy issues. And so you can get a lot of 1 megawatt reciprocating engines today; I know you can find some smaller aeroderivative turbines, or old repurposed jet engines if you want to get a little bit crazy. But I do think the off-grid option in some ways shortcuts the actual supply chain bottlenecks on the generation equipment side, at least to some extent relative to the other options. But I agree with you, there's still a bunch of other pieces of equipment, transformers, et cetera, that you're stuck waiting for.
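The redundancy argument for small modular gensets can be made concrete with a simple availability calculation. The per-unit availability figure and the fleet sizing below are illustrative assumptions, and units are treated as failing independently, which real fleets only approximate.

```python
# Sketch: why a fleet of small engines with spares can beat one big turbine on
# availability, even when every individual unit is equally (un)reliable.
from math import comb

def fleet_availability(n_units, n_needed, unit_avail):
    """P(at least n_needed of n_units are running), assuming independent failures."""
    return sum(comb(n_units, k) * unit_avail**k * (1 - unit_avail)**(n_units - k)
               for k in range(n_needed, n_units + 1))

UNIT_AVAIL = 0.97  # assumed availability of any single generator

# One big turbine serving the whole load: the site is down whenever it is.
single_unit = UNIT_AVAIL

# 110 x 1 MW reciprocating engines serving a 100 MW load (N+10 redundancy).
fleet = fleet_availability(n_units=110, n_needed=100, unit_avail=UNIT_AVAIL)

print(f"single large unit: {single_unit:.2%}")
print(f"110-for-100 recip fleet: {fleet:.4%}")
```

With identical per-unit availability, carrying spare modular units buys several extra nines at the system level; the trade-off is more paralleling switchgear, controls and maintenance complexity across the fleet.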
Shayle Khan
Okay, so let's shift to the most fun one. We talked about off grid; let's go off world and talk about orbital data centers. There's such a long conversation to be had about orbital data centers, but I want to frame it in the context of these other things. I think the premise here, and certainly the way that Elon, the most prominent proponent of orbital data centers, talks about it, is that this is going to be the only way. It's a scalability thing. But okay, let's dispense with one premise first: he says he thinks orbital data centers are going to be the cheapest way to get compute in three to four years. Correct? I do not believe that. Do you believe that?
Jake Elder
I do not believe that. I think we need to start this conversation with an acknowledgment that moving off planet, for lots of reasons, is a crazy proposition. Right. And if you listen to Elon talk through it, it starts to sound like a logical endgame in a world where we're building hundreds of gigawatts of compute infrastructure a year, and Elon asserts that that's going to start happening in three or four years. Right. I don't think it is going to be the cheapest source of new compute capacity in three or four years. Nor do I think that we're going to be building hundreds of gigawatts of compute infrastructure per year in the US alone in three or four years. But in a world where we're assuming that we're somewhere between AGI and some more superintelligent computing infrastructure, it's kind of the endgame, right? It's kind of the only place you could go to build infinite amounts of compute capacity, whether that's in five years or 500 years, I'm not quite sure. But I agree it's not before 2030.
Shayle Kann
As this has become a bigger conversation, people have talked about lots of things that they think are going to be the killer of the idea of orbital data centers. I think we should dispense with them, because despite what you and I just said, which is both fairly skeptical on the cost side, I think we both think it's not totally insane, and it doesn't seem like the technical challenges are insurmountable. So people talk about heat transfer as one of the big problems. It's not nothing, but it doesn't seem likely to be the thing that kills orbital data centers.
Jake Elder
Agreed. I think the heat transfer conundrum, right, is that space is a vacuum and it's very, very hard to dissipate heat in a vacuum. I think the whole International Space Station, for example, rejects less than 100 kilowatts of heat in total. And they have a radiator the size of a soccer field.
Shayle Kann
Right.
Jake Elder
And when you think about the compute infrastructure we're building out, a single Nvidia high density rack could soon be more than 100 kilowatts. It may already be, in some cases, more than 100 kilowatts. On the flip side, of course, radiated heat scales with the fourth power of temperature. And so it turns out that the hotter you run chips, and the denser you pack them, the better your heat rejection gets on its own. And so it does seem like as we move to a world of denser and denser computing infrastructure, it gets easier and easier to reject heat. But if you just do the simple math on a single gigawatt scale space based data center, you end up with a radiator the size of a small town.
Shayle Kann
Right?
Jake Elder
Like between the radiator and the solar panels required, I think you end up with a four square kilometer orbiting asset. And that's obviously complex to manage, but it's also a target. I saw this really great piece of analysis this morning from Thunder Said Energy. And I think the stats on the odds that a Starlink satellite gets hit today by a piece of space debris are like a couple percent per year. If you scale that up to a single floating thing that's 4 square kilometers large, you can basically expect to have a piece of space debris hitting that data center every hour. And I don't know how you operate something that's going to get knocked off orbit or destroyed by a piece of space debris every single hour. That sounds really complicated.
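The sizing Jake describes can be roughed out with the Stefan-Boltzmann law. A minimal sketch in Python, where the operating temperature, emissivity, panel efficiency, and per-satellite debris figures are all illustrative assumptions rather than numbers from the episode:

```python
# Back-of-envelope sizing for a 1 GW orbital data center.
# All inputs are illustrative assumptions, not engineering specs.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(power_w, temp_k, emissivity=0.9, sides=2):
    """Area needed to radiate `power_w` at temperature `temp_k`.
    Ignores absorbed sunlight and Earthshine, so this is a lower bound."""
    return power_w / (sides * emissivity * SIGMA * temp_k**4)

def solar_array_area_m2(power_w, irradiance=1361.0, efficiency=0.30):
    """Array area to generate `power_w` in continuous full sun (AM0)."""
    return power_w / (irradiance * efficiency)

P = 1e9                                  # 1 GW of IT load
rad = radiator_area_m2(P, temp_k=320.0)  # warm liquid-cooling loop, assumed
sol = solar_array_area_m2(P)
print(f"radiator: {rad/1e6:.2f} km^2, solar: {sol/1e6:.2f} km^2")

# Debris exposure scales roughly with cross-sectional area. Assume one
# Starlink-class satellite (~25 m^2) sees a ~2%/yr hit risk, then scale
# that rate up to a 4 km^2 structure.
sat_area_m2, sat_hits_per_yr = 25.0, 0.02
hits_per_yr = sat_hits_per_yr * (4e6 / sat_area_m2)
print(f"expected debris hits: {hits_per_yr:.0f}/yr "
      f"(~{hits_per_yr/8760:.2f}/hour)")
```

Under these assumptions the radiator plus array total lands in the few-square-kilometer range and the debris hit rate approaches one per hour, consistent with the figures quoted in the conversation.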
Shayle Kann
Yeah, I mean, to me, and this is sort of related to it, the thing that seems like the hardest to solve with orbital data centers is O&M, because data centers on land actually require a lot of maintenance, and you can't really do a lot of complicated maintenance to a satellite. Right. It's all hard, but that's the hardest. And so either we solve that with some robotics that are going to be very clever, which seems difficult for me to imagine, or it's an economic thing: you just have some loss rate and you have to account for that.
Jake Elder
Yeah, I mean, in a hyperscale data center today, right, there's a Meta engineer or a Google engineer that is going to replace every CPU or GPU as it breaks, more or less in real time. And in space, if it breaks, at least today, you're kind of stuck with it broken. And to your point, maybe in 20 or 30 years, if we're really in some superintelligent future, there's robotic replacement and ways to update chips in real time and whatnot. But until then, it just adds economic drag on the overall project. And we kind of skipped over costs, but it's not clear that there's a real economic advantage here. I mean, the economic reason to do this, right, is free power. You can effectively get a 95% capacity factor on the solar panels at a space based data center because you put it in kind of permanent sun, right, from an orbital perspective. And then there's much better solar irradiance. So you get somewhere between 5 and 10x the energy output per panel over the life of the panel than you would from an earthbound panel, and so power is really cheap. But as you mentioned earlier, total cost wise, energy is only 5 to 15% of an AI focused data center, and chips and maintenance are the rest. And you're stuck with the same chip cost whether you put the thing in space or on Earth, and the maintenance piece gets much more expensive. So I have a hard time seeing it being a cost play even in a world where launch costs go way down and you buy Elon's view that Starship's going to get super reusable and be able to launch at 100 bucks a kilogram. And so I kind of come back to: it just has to be the sort of thing that we pursue from a physics perspective, because we can't build at the pace needed for AGI on Earth.
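Jake's cost argument can be sketched with toy numbers. Every share and multiplier below is an assumption chosen purely to illustrate the structure of the argument, that zeroing out a 5-15% energy slice buys little if maintenance and launch costs grow:

```python
# Illustrative relative TCO shares for an AI data center (not real data).
# Earthbound: chips dominate, energy and maintenance are small slices.
earth = {"chips": 0.80, "energy": 0.10, "maintenance": 0.10}

# Orbital: same silicon, near-free power, but pricier upkeep plus launch
# and structure. Multipliers are hypothetical.
space = {
    "chips": earth["chips"],                    # same chip cost either way
    "energy": 0.01,                             # ~95% CF solar, near free
    "maintenance": earth["maintenance"] * 3.0,  # no hands-on hardware swaps
    "launch_and_structure": 0.15,               # lift mass, radiators, etc.
}

earth_tco = sum(earth.values())
space_tco = sum(space.values())
print(f"earth relative TCO: {earth_tco:.2f}")
print(f"space relative TCO: {space_tco:.2f}")
```

With these assumed shares the orbital option comes out more expensive overall (1.26 vs. 1.00) despite nearly free energy, which is the shape of the conclusion Jake reaches.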
Shayle Kann
I think that's right. Okay, but that gets us then, maybe to close it out, into what I think is the interesting comparison that I don't hear people making very much, which is orbital data centers versus off grid data centers. Let's just compare those two. As we said, the rate limiter... We have plenty of land. I mean, in the long arc of history, to build many terawatts, sure, we're going to run out of land, but to a first order, for the next decade, I don't think we're running out of land. So we've got land. And then the rate limiter is all of the other stuff we talked about: turbines, power grid infrastructure and so on, and we certainly don't have enough of that today to go build hundreds of gigawatts a year of off grid data centers. The rate limiter on orbital data centers: sure, there's going to be some solar for space, right? Elon is saying that xAI, or I guess now SpaceX, is going to develop 100 gigawatts of solar manufacturing, presumably for space. Tesla's also going to do it for land. But let's say that that's the lesser constraint. The bigger constraint is Starship. Starship has to launch a lot, like a lot a lot, to get that kind of capacity into space, and they've got a ways to go. So as I think about it, I'm like, okay, if your binding constraint is capacity of Starship launch on one side versus ability to scale up the supply chain for power generation and delivery on land, it's not clear to me that space is eminently more scalable on that measure of the problem. Like, can we not as a planet go develop, you know, 200 gigawatts a year of new turbine manufacturing capacity? Seems possible.
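The "Starship has to launch a lot" claim can be made concrete with a rough mass budget. Both inputs are assumptions for illustration: the payload a reusable Starship delivers to LEO and the all-in orbital system mass (panels, radiators, structure, racks) per megawatt:

```python
# Rough launch-cadence math for orbital data center build-out.
# Inputs are illustrative guesses, not published figures.

PAYLOAD_T_PER_LAUNCH = 100.0  # tonnes to LEO per flight, assumed
SYSTEM_T_PER_MW = 2.0         # assumed tonnes of orbital hardware per MW

def launches_per_gw(payload_t=PAYLOAD_T_PER_LAUNCH,
                    tonnes_per_mw=SYSTEM_T_PER_MW):
    """Number of launches needed to lift one gigawatt of capacity."""
    return 1000.0 * tonnes_per_mw / payload_t

per_gw = launches_per_gw()
daily = per_gw * 100 / 365  # cadence implied by a 100 GW/yr build rate
print(f"{per_gw:.0f} launches per GW; "
      f"~{daily:.1f} launches/day at 100 GW/yr")
```

Under these assumptions a 100 GW/yr orbital build rate implies roughly five to six launches every day, in the neighborhood of the cadence the conversation treats as the binding constraint.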
Jake Elder
Yeah. I think on that piece, maybe the question back to you is: do you think that society over time is supportive of us building, you know, 200-plus gigawatts of incremental gas infrastructure year over year for the next 20 years? And I know that's one of the other concerns that Elon raises, right, is that at some point the conversation around carbon free energy will shift back in a different direction. And do we get stuck in a world where we can't build that?
Shayle Kann
Right. But then be a maximalist on solar and storage, be a maximalist on geothermal, be a maximalist on new nuclear. Are those things all so much crazier than five Starship launches a day?
Jake Elder
When you hear him talk through it, and it's basically the ship lands and then takes off again within a few minutes, that certainly does sound pretty crazy. And solving fusion might even be easier than cracking that code.
Shayle Kann
Yeah. Again, I think for me it's not that it's totally insane to do orbital data centers, that's not my takeaway here. It's just, I think if we're going straight to space, I'm surprised that we're not stopping at a waypoint along the way of doing a lot of off grid. I'm surprised that hasn't happened.
Jake Elder
Agreed. And I think the other constraint that obviously exists across both scenarios, and that we've kind of glossed over in the decision to talk about a world where we continue to see massive AI progress, is just the chip supply chain. Right. And in a world where we're building a couple hundred gigawatts a year, I don't know how many chips that actually turns into, but I know that we don't have the semiconductor fabrication capacity today to build at that level. And so we probably end up bottlenecked by chips before we're really in a world where we can't build everything on the ground, for example, and probably before we're in a world where Starship launch costs are so cheap that space becomes the cheapest option. So if you take that as a fundamental constraint, then I think you probably do bet on the off grid stuff moving materially faster. But, same as you, I think we shouldn't dismiss the orbital option, and in a world where compute build out does rapidly accelerate, in 20 or 30 years there are going to be a lot of AI models, training runs in particular, in space. And that's maybe just the one last topic we didn't quite hit on: latency in space. Right. If you've got latency concerns building in West Texas, then you're certainly going to have latency concerns building a few hundred miles above the South Pole. And so I do still think in that world we're still going to have to build a lot of our infrastructure here, even if we're training the brain that is a thousand times as smart as a human above the atmosphere.
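The latency point has a hard physical floor: light travel time over the slant range to the satellite. A quick sketch, with the orbital altitudes below chosen as illustrative assumptions:

```python
# Minimum round-trip latency to an orbital data center, set by the
# speed of light. Real paths add slant angles, ground routing, and
# switching delay on top of this floor.

C = 299_792_458.0  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_km):
    """Round-trip light time to a satellite directly overhead, in ms."""
    return 2 * altitude_km * 1000 / C * 1000

for alt in (550, 1200, 35_786):  # low LEO, upper LEO, GEO (assumed cases)
    print(f"{alt:>6} km: >= {min_rtt_ms(alt):.1f} ms RTT")
```

Even the best LEO case adds a few milliseconds before any routing overhead, which is why latency-sensitive serving likely stays on the ground even if bulk training moves to orbit.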
Shayle Kann
All right, so I'm going to put you on the spot to wrap up here. Ten years from now, you've got a fixed pie of all the global compute that exists. We have four categories here: hyperscale grid-connected; edge, let's define it as sub-50 megawatts or something like that, so a broad definition of edge; off grid; and off world.
Jake Elder
Ten years out, all compute infrastructure that's operating?
Shayle Kann
What is your best guess?
Jake Elder
You know, if I were to look forward about 10 years and assume we're talking about all compute infrastructure that's operating, I still think the majority of it's going to be in hyperscale data centers, and that's probably, you know, 50 to 60% of the total. Let's assume that on top of that there's another 10 to 15% that gets built off grid in a similar hyperscale-like format, but never connects. And so that puts us at 65 or 70% that's built in more of a traditional way, whether grid-tied or not. I suspect that the bulk of the rest comes in the edge markets for certain use cases or applications, call it 15% or so there. And I do think we'll see a couple of efforts to really build out some infrastructure in space, and we know SpaceX and Google in particular are going to take their shot there. And so I wouldn't be surprised if we're training some models and we've got 5 to 10% of our overall compute capacity out there over time. I'm curious, which of those are you buying or selling?
Shayle Kann
That's interesting. It's so hard. Okay, so again it comes down to this: how bullish are you on compute demand? If you told me that the total size of the pie in 10 years is 10 terawatts, I have a very different answer from if the size of the pie is 300 gigawatts. Nuts, right? Agreed. That dictates the shares to me. So it's really hard to know. I would say I generally agree with you, and to be clear, that's actually a fairly bullish statement. What you're saying is bullish on off grid and bullish on orbital, just because you're starting from zero in both of those. And so getting to 5%, even 5% of hundreds of gigawatts, is going to be a big number to do in 10 years for orbital. So it's actually a fairly bullish statement on all of them, again, depending on how big the size of the pie is. I'm filibustering because I'm trying to figure out which one of these I disagree with the most. I guess maybe where I currently sit, I'm a little bit even more bullish on off grid. It has the scalability, I think it can have the cost. There are challenges, engineering challenges, but if we're really going to be in this world where we're that heavily constrained, it just seems inevitable to me.
Jake Elder
Do you think that comes from the grid tied large sites or where do you think that that comes from?
Shayle Kann
Where the.
Jake Elder
Like from my view of the world, which of those categories do you see losing market share? Let's call it if more is going to go off grid.
Shayle Kann
Oh, I see. I mean, you didn't put a lot into the edge category in the first place, but where I currently sit, I don't know why we're going to have a lot of edge in the grand scheme. We'll have some, but as a portion of overall compute, I don't know why that's going to be a lot.
Jake Elder
Which is frustrating because in many ways it's the most obvious and theoretically fastest way to deploy compute. Right. This is why you and I have spent a lot of time thinking about this over the last three or four months.
Shayle Kann
Totally.
Jake Elder
It should be the right answer, but I agree with you.
Shayle Kann
Yeah. And I reserve the right to change my mind. Right. I think you and I have spent a few months trying to convince ourselves of edge, and I think we haven't done so yet. But maybe that's a matter of time. In fact, if a listener wants to convince us of edge, I would welcome it, Jake and I both. But yeah, we're struggling to find the "it's going to happen, and here's why, for all these reasons." Anyway, I would maybe take a little bit away from edge, and I guess I'd take a little bit away from grid connected hyperscale. But I agree with you that most of what we're going to do is just build more grid connected hyperscale. All right, Jake, that's all the time we've got. Thank you so much. This was fun. As always, this was a pleasure.
Jake Elder
Thanks for having me.
Shayle Kann
Jake Elder is a Senior Vice President of Research and Innovation at Energy Impact Partners. This show is a production of Latitude Media. You can head over to latitudemedia.com for links to today's topics. Latitude is supported by Prelude Ventures. This episode was produced by Max Savage Levinson, Ann Bailey and Sean Marquand. Mixing and theme song by Sean Marquand. Stephen Lacey is our Executive Editor. I'm Shayle Kann, and this is Catalyst.
Podcast Summary: Catalyst with Shayle Kann
Episode: “AI scaling pathways: on grid, on edge, off grid, off planet”
Date: March 12, 2026
Host: Shayle Kann (Latitude Media)
Guest: Jake Elder (SVP, Research & Innovation, Energy Impact Partners)
This episode explores the various pathways for scaling AI data centers to meet surging compute demands, with a view toward energy and decarbonization constraints. Shayle Kann and Jake Elder break down four main configurations: hyperscale grid-connected data centers, edge (smaller, distributed) centers, off-grid (standalone power), and orbital (space-based) centers. The discussion navigates technical, social, regulatory, and economic barriers as well as theoretical scalability and future potential.
“The timeline to get a new transmission line built … is essentially infinite years. In the United States, at least in recent history, we just aren’t doing it.”
— Shayle Kann (08:00)
“Latency is a bit of a red herring … not for zero applications, but for very few does it seem that you need such low latency …”
— Shayle Kann (14:12)
“If you end up in a cost game … you’re subscale relative to bigger sites.”
— Jake Elder (13:37)
“If you go off grid … you have to build the whole shock absorber yourself … and that’s not just expensive, it’s really complicated.”
— Jake Elder (19:34)
“It should be solvable. It is a real engineering challenge.”
— Shayle Kann (21:06)
“If you scale that up to a single floating thing that’s 4 sq kilometers … you can basically expect a piece of space debris hitting that data center every hour.”
— Jake Elder (32:29)
“If it breaks at least today, you’re kind of stuck with it broken … until then, it just adds economic drag on the overall project.”
— Jake Elder (33:19)
“If we’re going straight to space, I’m surprised that we’re not stopping at a waypoint along the way of doing a lot of off grid. I’m surprised that hasn’t happened.”
— Shayle Kann (37:24)
Jake Elder's Prediction (10-year mix of operating compute): roughly 50–60% hyperscale grid-connected, 10–15% off-grid hyperscale, ~15% edge, and 5–10% orbital.
Shayle generally agrees, but is even more bullish on off-grid relative to edge.
“If a listener wants to convince us of edge, I would welcome it. Jake and I both. But yeah, we’re struggling to find the … reason why [edge] ….”
— Shayle Kann (42:16)
Shayle and Jake situate the excitement over grid limitations, off-grid strategies, and "crazy" ideas like orbital computing in a sober engineering and economic context. Their consensus: the build-out will remain mostly centralized and earthbound for now, with off-grid as the next frontier before orbit, while edge, in theory the quickest fix, remains unproven at the scales required.
Produced by Latitude Media, supported by Prelude Ventures. Host: Shayle Kann, Guest: Jake Elder.