
William Layden is Co-founder and CEO at Rune, a company building modular, behind-the-meter micro data centers that plug directly into solar and wind plants. These units operate on a fully electric, DC-to-DC architecture—bypassing the traditional grid and unlocking new economics for compute at renewable energy sites. In this episode of Inevitable, Layden explains how solar clipping and curtailment leave vast amounts of clean power stranded—and how Rune’s “RELIC” units turn that waste into usable compute. The conversation dives into DC architecture, Bitcoin as a beachhead market, and why traditional data centers are ill-suited to an era of distributed energy. Layden also unpacks why modular infrastructure may be the fastest path to deploying AI-scale compute at the edge of the energy transition.
A
Today on Inevitable, our guest is William Layden, co-founder and CEO of Rune. Rune builds modular, behind-the-meter micro data centers that sit directly at solar and wind sites, operating on a DC architecture that bypasses the grid entirely and turns otherwise stranded renewable energy into compute. In order to understand why this matters, I decided to unpack a few things with William in this conversation. First, I wanted to look at how electricity actually flows from renewable generation into the grid today. And second, how data centers are designed to receive and consume electricity today. Both of those systems have largely been bolted onto a legacy power grid rather than designed together. And part of what I wanted to explore with William is where Rune has identified real inefficiencies in that setup, how they're trying to address them, and whether the future of compute could start to look more like energy: modular and distributed, rather than continuing to concentrate into ever larger, monolithic, construction-heavy data centers. From MCJ, I'm Cody Simms, and this is Inevitable. Climate change is inevitable. It's already here. But so are the solutions shaping our future. Join us every week to learn from experts and entrepreneurs about the transition of energy and industry. William, welcome to the show.
B
Hey Cody, thanks for having me.
A
Well, I'm really excited to learn from you today, all about the power electronics that come into play when it comes to renewables and data centers, and the solution that you're trying to bring in the middle of that, which I think quite dramatically changes some of the flow of electricity and power in this whole world that is so important to where the economy is heading. Maybe just give us the high-level description of Rune and then we'll dive in from there.
B
Yeah, absolutely. I love that you said power electronics too. Like, I'm so glad we're just jumping right into power electronics. Rune, at its simplest level, you know, we're building solar-powered data centers, solar and wind powered data centers. We have a vision to transform the world's most abundant energy resources into the world's most flexible and scalable platform for compute. And we do that through our data center product, which is called a RELIC. RELIC stands for Renewable Energy Linked Interruptible Compute. It's a mouthful; that's why we say RELIC. And you know, the RELIC is a highly modular data center product. It's different from anything else that's out there. We think it has the potential to dramatically accelerate time to compute and also scale compute in a way that otherwise you might have to go to outer space to do.
A
We recently had an outer space focused data center company on the show, so that is one extreme example. But I think we're trying to solve the problem terrestrially. Maybe describe for a minute what the physical Rune product looks like, and then we'll get into where it sits in the stack.
B
You know, maybe I'll start by saying what it's not. So it is not a building, it is not a 40-foot shipping container, and it is not grid connected. We are building these modular data center products, or compute clusters. The power rating is 100 kilowatts per RELIC, and the dimensions are roughly 8 feet long, 2 feet wide, 5 feet high. And you know, that 100 kilowatt power amount and those really small dimensions, those are deliberate design choices. Right? So we are plugging into solar power plants and wind power plants, and solar especially tends to be not very energy dense. I think it's the least energy dense generation source we have. And so we need to make our load equally as not energy dense to take advantage of all the energy we have. So think of us as distributed load designed for distributed energy resources.
A
How much do these things weigh, roughly? Like, how big are they?
B
They're 2,000 pounds fully loaded. So they're 2,000 pounds fully loaded with compute, cooling, comms, the enclosure, everything like that. Maybe to walk you through the actual aspects of it, we'll call them modules, the different modules that make up a RELIC. We've got a PLC, which is an onboard computer that turns the RELIC on and off. You know, these RELICs turn on based on the availability of power and respond to the prices that the power would otherwise get. We've got the stainless steel enclosure, so that's what actually houses the compute elements, and it's highly ruggedized. And then we've got the DC-DC converter, which, you know, back to power electronics, that's actually how we tap the high voltage DC coming out of the power plant. And just in terms of how we deploy, they're rolled to the site on trucks, dropped off with a lift gate, put into place with a forklift, then we take two wires and plug them in. And that whole process takes about 45 minutes. So we're energized and cash flowing in 45 minutes.
A
So these are data centers that can essentially be constructed by being carried off a truck onto a forklift and then plugged in with, you said, two wires, into whatever DC power source you're getting access to.
B
That's right. We have a philosophy: products, not construction projects. And that culminates in the design choices of the RELIC.
A
And from a permitting perspective, there's gotta be more to the story than just, oh, you can plug it in with two wires.
B
Yeah, it's pretty light permitting, honestly. I mean, the main permitting we have to go through is with AHJs, which are authorities having jurisdiction, so that's like your fire marshal and folks like that. Oftentimes this permitting is voluntary. You know, the beauty of these things is they're not permanent structures, so the permitting is very, very light.
A
And then from a data connectivity perspective, is it Starlink-connected, 5G? Like, how are you actually running loads on these things and sending data up and down?
B
Right, Starlink or 5G. We like to use Starlink, and yeah, I think that's a great product. So we're primarily Starlink.
A
Okay, so you've got this 100-kilowatt, super-micro, modular data center. I assume you're bringing them on site in rows that essentially sit underneath solar panels, or sit in a cluster around a windmill. Is that the way to think about it? A wind turbine?
B
Yeah, yeah, that's right. So there's a concept in load, or in renewable energy, called behind the meter. And behind the meter refers to the substation meter: you're tapping that substation, so you're not really grid connected, you're behind the meter. Very exciting. We're further behind the meter than that. So if you think of a solar power plant, you've got the substation, you've got the step-up transformer, you've got the inverter, you've got the combiner box, and then you've got the modules themselves, the panels. And we're tapping the combiner box. So we are as integrated with the actual generation source as possible. Yeah, so we tap in at the combiner boxes.
A
So pretend you didn't exist. Maybe we can use solar or wind, whichever example is easier to help people get their heads around. Help me get my head around it. What does power look like today coming from a solar panel or a wind turbine, ultimately getting to that substation where you can have a behind-the-meter solution? Which is not the typical data center setup today. In fact, the typical setup goes even further: the power goes onto the grid, and then as a data center, you're buying power from the grid. So maybe walk us through: what does that electricity flow, that power flow, look like in the normal world today, if Rune didn't exist?
B
Let's use solar as an example. So you've got so many solar modules. Like, the power plants we work with, they've got like 500,000 to 750,000 solar panels. These things are just of a massive scale. So you've got a collection of solar panels that feed into a combiner box, which feeds into an inverter, which feeds into a step-up transformer, which feeds into a substation. And so you're collecting electricity throughout that entire process, transforming it from direct current into alternating current and stepping it up in voltage so it gets ready for the grid, or the bulk transmission system.
A
And then today that grid is using AC, alternating current, and is going out wherever it may go. And then a data center is buying power off that grid, pulling it in as alternating current, and then ultimately needing to convert it back to DC, in fact sometimes multiple times, in order to operate the data center. Is that right?
B
Yeah. You can think of that whole schematic of the solar facility that I laid out; it's the exact opposite. Right? So you're going from the grid to the substation, stepping down in voltage, and then rectifying that power from AC to DC. So this is very interesting: solar produces direct current, computers run on direct current, but every electron of direct current flows through a multi-stage conversion system that was designed for alternating current, and designed for, really, the 20th century, the factories of the past, which did mechanical work, not for creating intelligence like the factories of today, the AI factories, do. So that's really the insight that Rune has. Why are you transforming DC all the way to AC, going through an incredibly complex process to power compute, when you can do DC to DC and view electricity as a native input? That's our philosophy.
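[Editor's note: to make the conversion-chain argument concrete, here is a minimal sketch. All per-stage efficiency numbers are illustrative assumptions, not figures from Rune or any equipment vendor; the point is only that end-to-end efficiency is the product of every stage, so fewer stages means less loss.]

```python
# Illustrative comparison of the legacy AC round trip (solar DC -> inverter ->
# step-up transformer -> transmission -> step-down transformer -> rectifier/UPS
# -> server power supply) versus a single DC-DC conversion.
# Every efficiency value below is an assumption for illustration.

def chain_efficiency(stage_efficiencies):
    """End-to-end efficiency is the product of each conversion stage."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Assumed stage efficiencies for the legacy AC path.
ac_path = [0.98, 0.995, 0.97, 0.995, 0.96, 0.95]
# Assumed efficiency of one DC-DC converter.
dc_path = [0.98]

print(f"AC round trip: {chain_efficiency(ac_path):.1%}")
print(f"DC-to-DC:      {chain_efficiency(dc_path):.1%}")
```

Even with generous assumptions for each stage, the multiplied losses of the AC path are several times larger than a single DC-DC hop.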
A
Even if you're using, quote unquote, behind-the-meter power and pulling it off of a substation, is that power still going through DC-to-AC, AC-to-DC conversion?
B
Yeah. You as a data center, if you are tapping the substation as a behind-the-meter developer of the data center, you are just skipping the grid part. But you've still got to transform that high voltage alternating current back down to low voltage direct current that your computers can use.
A
So I'm going to introduce two concepts that I've learned about in my prep with you, one of which I think most of our listeners have probably heard of, which is the idea of curtailment. I'm going to come back to that one, because that one's actually a little easier for me to understand. You've also talked a lot about this concept of clipping, which has to do with this DC-to-AC conversion, and I think it's a fundamental part of the business you're building. Can you describe what clipping is, and then describe how you see that as a fundamental driver of economics for the business you're trying to build?
B
Clipping goes back to the architecture of these solar power plants. So the solar panels themselves operate in direct current. They produce direct current, and it goes into an inverter that turns DC to AC. But the solar panels themselves are oversized relative to that inverter. So you might have 1.3 units of DC, or sometimes even 1.5 units of DC, going into an inverter that can take one unit of DC and transform it into one unit of AC. So I think of it like this: imagine you've got a one-and-a-half-liter bottle and you're pouring it into a one-liter glass. You're going to have spillage, and that water is not recoverable. And that's exactly what's happening in the solar industry. And it's not that big of a deal in terms of wasting the power, because solar, after all, is a zero fuel cost, zero carbon cost energy source. However, if I were to go to you and say, hey, you are wasting 5 to 10% of your product and I'm willing to buy it, that's an interesting value proposition. And that's what we can offer by being DC connected, and connected at the combiner box level.
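[Editor's note: the bottle-and-glass analogy maps directly onto a simple calculation. This sketch uses a made-up production profile for a hypothetical 140 kW-DC array behind a 100 kW-AC inverter, roughly a 1.4 DC/AC ratio; the numbers are invented for illustration.]

```python
# DC output above the inverter's AC rating is "clipped" and lost.

def clipped_power(dc_output_kw, inverter_limit_kw):
    """Per-interval DC power that exceeds the inverter rating and is wasted."""
    return [max(0.0, p - inverter_limit_kw) for p in dc_output_kw]

# Hypothetical midday hours for a 140 kW-DC array, 100 kW-AC inverter.
dc_profile = [60.0, 110.0, 135.0, 140.0, 125.0, 80.0]
lost = clipped_power(dc_profile, 100.0)

print(lost)                          # clipped kW in each hour
print(sum(lost) / sum(dc_profile))   # fraction of DC production wasted
```

Because the plant's revenue meter sits on the AC side, this clipped DC never shows up in the owner's production data, which is why, as Layden notes below, asset owners are often surprised by how much of it there is.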
A
And do you have to prove to these solar plant operators that they are indeed clipping X percent of the power they generate? Or do they typically already have this insight?
B
To me, this... Okay, I might sound like a nerd. I'm like so interested in all this.
A
William, I'm going to warn you, you've already sounded a lot like a nerd through this whole conversation, which is why we love you.
B
Okay, great, great. So to me, there's basically no solar asset owner that can tell you reliably and continuously their DC output. Because what the asset owner of the solar power plant looks at is their inverter availability and the amount of power they're putting onto the grid, and that's a meter that is AC connected. So everything behind the inverter, all the DC stuff we're talking about, is basically blind. I don't want to say it's totally blind, because there are ways to figure it out, but they're cumbersome, costly, slow. They don't actually know with accuracy the amount of DC power they're producing. They've got models. You know, solar is physics, so we can tell you the amount of clipping you'd have, but reality often differs from these models. And that's been so interesting for us, because we'll go to these solar power plants and say, hey, we think you've got X amount of clipping. And they're like, yeah, yeah, whatever, we don't think so. And it's a lot more than you'd expect. And it's a lot more for a variety of reasons. So right now we've got data centers operating in the Atacama Desert in Chile exclusively off clipping, and we're running seven hours a day on just clipped power.
A
I mean, if you said it's upwards of 10%, and you've got a 300 megawatt solar farm, that's 30 megawatts of power sitting there. That's a lot of power.
B
Yeah. Basically, there's a hidden power plant behind every single solar power plant. And with Rune, every single solar power plant is a latent data center. That's what this technology is really enabling.
A
So clipping is this idea that between the power generation and the inversion there's this lossiness. And it sounds like most solar farms aren't measuring it. You're not really seeing the power come through until it's inverted. And that's when you start to measure the amount of power that you have.
B
Yeah.
A
And so A, you kind of have to prove to them that it exists. But then B, once you do prove to them it exists, it sounds like it actually can be fairly substantial. Then there's also curtailment, which is a separate problem that renewable power has. Maybe describe curtailment.
B
Yeah, curtailment is basically when the grid tells you, hey, we don't want your electricity. And it tells you that in two ways. One, it might give you a reliability signal, where it says, hey, there's too much electricity trying to be exported from your area, so we're just going to shut you down because the lines can't handle it. And the other way it signals curtailment is through price: economics-based curtailment. The price of electricity in real-time markets is often set every five minutes, and you know, when the sun comes up in these solar-rich areas like Texas or California, you can just see the price collapse. And the price will collapse to negative five bucks, let's say, because oftentimes these power plants will bid into the market assuming they're going to be able to generate a renewable energy credit, or it'll be even lower because of the production tax credit. So it's a really interesting mix of economics and policy that drives curtailment. It's wasted power. The moment you are likely to produce the most power is the exact moment the grid says, turn off, we don't want your power.
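[Editor's note: the "bidding negative" behavior Layden describes can be sketched as a one-line revenue test. The $27.50/MWh production tax credit and $5/MWh renewable energy credit values below are assumed round numbers for illustration; actual credit values vary by project and vintage.]

```python
# A merchant solar plant earns market price + PTC + REC per MWh, so it
# rationally keeps generating until price falls below -(PTC + REC).
# Credit values are illustrative assumptions.

def keeps_running(lmp_usd_per_mwh, ptc=27.5, rec=5.0):
    """Generate while total per-MWh revenue (price + credits) is positive."""
    return lmp_usd_per_mwh + ptc + rec > 0

print(keeps_running(-5.0))    # negative price, but credits still cover it
print(keeps_running(-40.0))   # price below -(PTC + REC): economic curtailment
```

This is why prices in solar-heavy markets can sit a few dollars negative for hours: the plants are still net-positive after credits, right up until they aren't.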
A
And when power is curtailed, at some point it's basically, as I understand it, just shot into the ground. Right? Like, it's actually grounded power. Where in the value chain that we just laid out is that typically happening?
B
So with a solar facility, the inverter that was saying, hey, give me one unit of DC, I'll push out one unit of AC, it goes down. The inverter set point will say, I don't want one unit, I want none. And so you've got all this infrastructure behind the inverter that's perfectly capable of producing power. It might have been producing at 100% capacity five minutes ago. And yeah, now it just goes nowhere.
A
Once again, it's the inverter that actually controls that decision to curtail today.
B
Yeah, the inverter set point will drive that decision. Yeah.
A
So with all of that, that helps me understand clipping and curtailment. Now, if I'm a solar farm owner, batteries seem like they also solve these problems, or no?
B
Yeah, I think batteries can solve these problems. It's funny, because, you know, we're deployed at a 200 megawatt solar power plant in Texas, and they've got batteries, and they're equally happy to work with us. Batteries are often AC connected, so batteries are tapping that substation, and there's a good reason for that. With batteries, you want to be able to buy energy when it's cheap and sell it when it's expensive. And sometimes you can do that by buying directly from the power plant, but sometimes you can do that by buying directly from the market. So for batteries, it's really important to be AC connected, actually. Previously they thought batteries would be DC connected, but now they're doing all AC, even though energy is stored in direct current in batteries. So the difference between us and batteries, I think fundamentally, is that, number one, there is no capital expenditure for the solar power plant or the wind power plant that we're working with. When we go to a solar power plant owner, we say, hey, let us buy the power you can't sell or don't otherwise want to sell, and it's not going to cost you a capital expenditure outlay. And so the value equation is dramatically different for that proposition compared to a battery. The other thing that I think is very different between us and batteries is that I think batteries are actually anti-network-effect technologies. Meaning, we all know the telephone is the classic network effect technology. One telephone is not valuable; who are you going to call? But a million telephones, or a billion telephones, that's really valuable. We've connected the whole world. And I think the opposite is true for batteries, and you see this play out in the economics. So there are some batteries, the first movers, that are going to capture so much value.
But every incremental battery you install, particularly in power markets that trade nodally like we have in the United States, is going to cannibalize value. Because what did we just say was the main way batteries make money? It's through arbitrage, buying low and selling high. Well, if enough people buy low and sell high, that spread collapses. And I think that's what you've seen in battery revenues over the past three years in markets like Texas and California. We don't do that. We're not an anti-network-effect technology; we're a network effect technology. Meaning, we're taking this locally constrained electricity that's subject to the whims of the power market, and the weather, frankly, and we're transforming it into a higher value commodity that's part of a global market. And if you string together a bunch of computers, I don't think you're actually going to destroy value. I think you're going to drive material and scientific progress. And by the way, that global market for compute is way, way, way harder to saturate than the nodal market in, you know, West Texas, let's say.
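[Editor's note: the "anti-network-effect" claim can be sketched with a toy model. The linear price-impact assumption, the $50/MWh base spread, and the impact coefficient are all invented for illustration; real markets are nonlinear and node-specific.]

```python
# Toy model: each incremental MW of storage buying the overnight trough and
# selling the evening peak narrows the very spread the fleet lives on.
# Base spread and price-impact coefficient are illustrative assumptions.

def arbitrage_spread(fleet_mw, base_spread=50.0, impact_per_mw=0.05):
    """Remaining $/MWh buy-low/sell-high spread after fleet_mw of storage."""
    return max(0.0, base_spread - impact_per_mw * fleet_mw)

for mw in (0, 200, 600, 1000):
    print(mw, arbitrage_spread(mw))
```

The first movers capture a wide spread; by the time the fleet is large, the per-MW margin has collapsed, which is the dynamic Layden says showed up in Texas and California battery revenues.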
A
So I heard kind of three big points. One, we're happy to work with batteries; many of the plants that work with Rune also have batteries. Two, batteries sit way downstream of Rune. They're at the substations, meaning they're operating in AC, because by definition, unless they are wholly there just to back up the local facility, which is probably more of a commercial, industrial, or residential use case than a solar farm use case, they need to be grid connected, because their intent is to buy low, sell high, and sell power back to the grid. So they're a grid connected architecture, sitting in that AC stack, which is downstream of the clipping and curtailment decisions that we talked about. And then three, you know, a bit of a theoretical view that the more batteries you add, the worse any one battery can be at arbitraging the buy-low, sell-high economics. And so ultimately, batteries may have degrading returns as they become a scaled technology, whereas your argument is that data centers and computers only increase in value as you build the footprint.
B
Yeah, that's exactly right.
A
So with that, talking about that network effect of compute, you've taken a similar approach to a company we've had on the pod a few times, Crusoe. Which is: you may have aspirations of serving AI workloads and things like that, but you've started with a compute business that has essentially an anonymous, permissionless buyer on the other end, which is Bitcoin. Maybe describe a bit about that decision, what that initial footprint looks like, and how and when you decide to also add AI and other compute workloads into your stack.
B
You know, just from the jump, Crusoe has been hugely inspirational to us. To see where they took stranded power in the form of flared natural gas and turned it into Bitcoin, and then have driven way up market to build Stargate and all the awesome things they're doing. We have deliberately selected bitcoin mining as our beachhead market because it's a bit of a chicken-or-egg problem. You need to convince power producers that they should work with you. And then if you want to serve those non-bitcoin workloads, traditionally you need some kind of customer or offtake agreement. So which are you going to get first? Are you going to get the offtake agreement, or are you going to get the power producer to agree to work with you? And we felt that getting an enterprise customer to purchase compute, or selling into one of these channel partners that sell compute, that's a fairly established sales motion and business model. Not saying it's easy, but it's fairly established. And we took the opposite approach. We said, working with solar and wind producers to let us plug into their extremely expensive infrastructure in a novel way, to purchase energy that they wouldn't otherwise sell, that's the more challenging aspect of our business model. How do we validate the assumption that they want to do that? So we started with that angle, and bitcoin was a great way to do that, because it basically said, we've got an offtaker, they always want that compute, and let's work on the power stuff.
A
One of the things that's interesting to me about bitcoin is the idea that you're basically generating a commodity you can sell into a global market, and you don't have to have a business development partner on the other end. You can essentially sell it on an exchange, or spot-sell it to a large buyer, or whatever you want to do, and you have someone to transact with on the other end.
B
That's right. You've got a guaranteed buyer of your compute power. That's really valuable for us. In terms of where we're going over the next, let's say, 18 months: we're rolling out upgraded versions of our RELICs on a frequent basis, we are going to be integrating energy storage onboard the RELIC, and we're going to be leveraging orchestration software across the network of RELICs. And you know, we think that's going to allow us to move up market to tap high performance compute workloads, including AI inference.
A
Now, bitcoin is, I think by definition, an interruptible load, meaning you don't have to be mining it all the time. You can decide to mine when you have the power at the right price, and you can turn off your miners when you don't. As you move from ASICs and bitcoin mining to GPUs and AI, is the same true? Are AI companies willing to shut off a training load in the middle of training, or willing to say, you know, we can have higher latency for this inference request, and allow you to navigate the usage of your data center according to your ability to run it at a price-competitive rate?
B
That's a big, important question, and it could be a very, very valuable answer depending on how we decide to answer it. I'll start with the technical aspects and then move to the more commercial aspects. From a technical standpoint, all of these workloads can basically be checkpointed. The first thing you're going to do when you're running any of these expensive workloads is institute some kind of checkpointing, because, you know, even if you've got six nines, it's not 100%. So you always need checkpointing. The degree to which folks want to checkpoint or not, or I should say, will tolerate interruptions or not, will be determined by the master of all signals, which is price. And candidly, I don't think the training of frontier models will ever be interruptible. An OpenAI or any of these other frontier labs will never say, I'm willing to make the trade-off to interrupt based on price. I think the value is just so immense there that that's not who we're going to target with interruptible workloads. However, when you look at the market for compute today, this product that we're discussing already exists. You have preemptible instances, or spot instances. Those are virtual machines offered by Azure, Google Cloud, AWS, and basically they say, hey, we'll give you a 90% discount in exchange for being able to kick you off the instance with 30 seconds' notice. So there is already a market for this kind of interruptible compute, and it is determined by price. And I see that as an area we will contribute to. The things that are different now than they were five years ago are that it's incredibly difficult to bring new load onto the grid and to do these giant construction projects. So how do we allow these hyperscalers or neoclouds to continue to serve those interruptible customers without sacrificing the extremely high margin workloads that they'd like to run all the time?
And I think we can actually expand that interruptible instance offering through our product.
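[Editor's note: the checkpointing pattern Layden describes is worth making concrete. This is a minimal sketch, not Rune's implementation; the step granularity, the in-memory checkpoint, and the power-availability signal are all hypothetical.]

```python
# An interruptible workload saves progress after every unit of work, so
# losing power (or a spot instance) costs at most one step of recomputation.

def run_with_checkpoints(total_steps, power_ok, ckpt):
    """Resume from ckpt['step'] and advance while power is available."""
    step = ckpt.get("step", 0)
    while step < total_steps and power_ok(step):
        step += 1            # one unit of work (e.g. a training batch)
        ckpt["step"] = step  # checkpoint immediately after the step
    return step

ckpt = {}
# First session: power disappears after three steps.
run_with_checkpoints(10, lambda s: s < 3, ckpt)
print(ckpt["step"])  # progress survives the interruption
# Second session: power is back; resume from the checkpoint.
run_with_checkpoints(10, lambda s: True, ckpt)
print(ckpt["step"])
```

This is the same contract a cloud spot instance offers, just driven by solar availability instead of a 30-second preemption notice: the workload is stateless between sessions except for the checkpoint.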
A
So maybe just to unpack that a little bit more, describe some of the loads that you think are likely to become interruptible over time.
B
Yeah. So even technology companies today. Like, if you're designing, let's say, a wind turbine, for example, you might not have the willingness to pay for reserved compute capacity with a hyperscaler. You might say, I'm going to schedule my workloads in a queue, and those workloads are going to be executed from the queue based on the availability of interruptible instances. So when the interruptible instance is available, we'll continue to chop down that queue. When it's not available, we're going to pause, because the work we're doing is not as lucrative as a frontier model. So these are climate simulations, hardware simulations, just, you know, physics-based simulations that require compute. These kinds of workloads are typically what's run on interruptible instances.
A
Yeah, interesting. So the theory there would be that large-scale batch processes that are not directly market-, transaction-, or real-time-data oriented could, in theory, run in an on-again, off-again mode based on when there's availability.
B
Yeah, that's right. And a lot of organizations do that today. Right? They're queuing up their workloads to get executed as the compute becomes available.
A
One thing we haven't really dug into is who's your actual customer. So on the one hand you're needing to work with the renewable power plants themselves and actually get your product put on site. But are you selling hardware today? Are you operating AI clouds? Are you partnering with data centers? What does that look like for you in terms of a business model and a customer set?
B
Yeah, we view our customer as the power plant. If you view your customer as not necessarily who you sell to, but how you make money, we view our customer as the power plant. And so our value offering needs to be sufficiently compelling for the power plant. The way we make money is, you know, simple: we buy electricity at what we think are very attractive prices, and we convert that into a higher value product. In this case, it's bitcoin. And we live off of our ability to manage that spread and deploy RELICs in an inexpensive way. And being able to deploy RELICs in an inexpensive way goes back to our design choices: we're a 100% electric tech stack, we're 100% direct current, and we are 0% construction project. There is no EPC budget or spend here.
A
So today you have to be good at actually building the physical micro data center that you've developed, the RELIC. You have to be good at understanding the price signals coming off of these power plants: is there curtailment happening, is there clipping happening, how can you price access to that with the power plant providers? And then you separately have to be good at actually running essentially a bitcoin arbitrage business, a bitcoin mining business, which it sounds like is part of your vision, but maybe not the long-term vision of what the company needs to be excellent at over time. Am I following that correctly?
B
Yeah. We want to transform the world's most abundant energy resources into the world's most scalable and flexible compute platform. And those adjectives are very deliberately selected. So when we talk about scalable, well, there's more energy that hits the earth in one hour of sunshine than all of humanity consumes in one year. So we are focused on tapping the most underutilized energy resource we have, and we think that can provide us with immense scale. And then the flexibility angle goes back to this entire framework around the direct current, electric tech stack and modularity. If you give me a 10 megawatt solar facility in South America, I'll be able to work with you. If you give me a 400 megawatt solar facility in the United States, I'll be able to work with you with the same product. And as our ability to purchase energy changes, meaning if we are just buying wasted power, we can do bitcoin mining; if we are able to buy wasted power plus power that would otherwise be exported to the grid, we can do higher value workloads. So it's a very flexible, scalable platform for compute.
A
But you're not selling the hardware. So in the price that you are offering to these power plants, you have to absorb and account for profitability that includes the hardware, installation of the hardware, servicing and ongoing maintenance of the hardware, and then ultimately your ability to transact on the other end of your compute load in a profitable way.
B
Yeah, that's exactly right. We are developers and operators of the data center. Correct.
A
As you move into AI workloads, are you then running cloud instances and selling those instances to neoclouds or other folks like that, who then resell to an end customer? How do you see that evolving?
B
That's right, yeah. I think the vision for the AI business, or the high performance computing business, is to sell into those neoclouds or hyperscalers and contribute to their compute capacity, rather than trying to build a neocloud from scratch.
A
And William, we jumped right into this and got into the nitty gritty details from the start.
B
We didn't even let you introduce yourself.
A
But you've got quite a background in this space. This is not your first rodeo when it comes to building distributed compute, bitcoin mining, power, et cetera. Maybe describe a bit about your background and how ultimately you came to this thesis for building Rune.
B
I started my career working for President Obama in the White House. At the end of the administration, I moved over to a hydropower company, where we were buying and operating hydropower assets. And one thing that always stuck out to me is that, you know, hydropower was used to make things. We would buy hydropower plants connected to pulp and paper mills, connected to aluminum smelters, and all those things were defunct. And I felt that the power market just wasn't valuing hydropower's ability to make things. So my question was, what can we make in America today that is energy intensive? This was back in 2017, and I landed on bitcoin mining. We ended up spinning up one of the first vertically integrated clean bitcoin mining facilities, sold that business in 2019, and then moved over to SoftBank Energy. That's really where I got exposed to solar power. I wanted to run the same play that I ran with the hydro: how do we use solar to make things? But solar has a very different shape. It's a very different technology. Solar is the only way we make energy without spinning things, so you need a new way to consume that power and a new load to be optimal with that energy source. And that's what Rune is. It's the company I wish I had when I was operating solar power plants. And, you know, it's been about two years building this company with an amazing team and an amazing collection of investors and partners.
A
Can you describe a bit about where Rune is today: where you are from a rollout perspective, how you've capitalized the company? Anything to give us a sense of where you are on the path from white paper and theory to physically using electricity to actually do compute.
B
We're a small team based out of Mountain View, California. Despite that small team, we're highly leveraged and highly efficient. We've got three projects operating on three different continents: RELICs operating at solar facilities in Texas, RELICs operating at solar facilities in Chile, and RELICs operating at wind facilities in Sweden. We're a seed stage company, and we are backed by great investors like Union Square Ventures, Lowercarbon Capital, and Vestas, which is the largest wind turbine producer globally.
A
So if you follow the recent news, there's all this discussion, with everything going on with PJM, about how the Trump administration now has this idea that essentially they can create auctions and allow hyperscalers and data centers to show up at PJM and offer to bring their own power, sort of reserving capacity on the grid. The Wall Street Journal's podcast, The Journal, was calling it BYOP, bring your own power, which I think is such a fascinating concept. And we've talked about Crusoe; they've obviously been a big pioneer of that: we're going to commission a large data center, we're going to bring the power to the conversation so that the hyperscaler doesn't even have to fully think about it, and we're going to go get the deal done with the grid somehow. Where do you think that goes? I asked Chase this question on an episode a few months ago, and he basically said he thinks that in the future the tech companies will be the primary power producers in the world, and the grid will essentially buy residual power from whatever the tech companies maybe aren't directly producing but are essentially commissioning. Do you think that vision is true as well?
B
That's a really interesting vision of where things could go. I think fundamentally the demand for compute is going to drive a lot of changes; there's an immense demand for this compute and an immense demand to have it come quickly. And that's really what this is all about: new strategies to deliver conditioned power to compute quickly. The bring-your-own-power aspect, the way that Crusoe or xAI or any of these other players are doing it, that's very interesting. I think that we have a different way of doing it, and it's a very exciting way. Even if you've somehow obviated the grid by bringing your own power, there still remains the traditional power system supply chain. And that's something that we avoid. So I think that we actually have the fastest way to deliver conditioned power for compute.
A
That's a fascinating way to think about it, which is you don't have to wait for the entire power dynamics in the energy markets to change. You're basically saying, hey, there's already this large deployed resource out there in terms of renewables, in terms of solar and wind: double-digit percentages of power generation in the United States and growing substantially, 80-plus percent of new power generation added in the US last year, 2024 I guess. And you can take advantage of the inefficiencies of that existing system, which is still trying to power the grid. Those plants weren't built for data centers; data centers can use them, but they were built to power our lives. And there's this inefficiency in them that you can take advantage of, without having to wait for the world around you to change how power is bought and sold.
B
That's exactly right. Every solar power plant, with Rune, is a latent data center, so we need to convert those underutilized resources into AI factories. The way we think is best to do that is through the electric tech stack, because we don't wait for transformers. We think direct current to direct current is the best way to do it because it's much faster, much cheaper, much easier to implement. I don't even have to do any conduit, right? I don't even have to put cables underground to do this. And the modular approach allows us to be so much faster. Today it's the end of January. We just had a major snowstorm. Good luck pouring concrete in 10 degree weather; it is not happening. We don't use any concrete. You get a forklift, you plop the RELIC down, and it's up and running in 30 minutes. I don't want to say all of these AI workloads can be run on the RELIC in its current form, but a substantial amount of compute can be run on the RELIC today in our current form factor, and we'll continue to expand the workloads we can capture using these three design choices: the electric tech stack, direct current, and modularity.
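The DC-to-DC argument has an efficiency analog as well as a speed one: every conversion stage sheds a bit of energy, and end-to-end retention is the product of the per-stage efficiencies. A minimal sketch, using illustrative per-stage figures that are assumptions rather than measured values from Rune or any real plant:

```python
# Compare energy retained from a solar array's DC bus to a server's DC bus
# along two paths. All stage efficiencies below are illustrative
# assumptions, not measured figures from the episode.

def path_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end retention."""
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Conventional route: invert to AC, step up, transmit, step down, rectify.
grid_path = path_efficiency([0.98, 0.99, 0.97, 0.99, 0.96])

# Behind-the-meter route: a single DC-DC converter stage.
dc_path = path_efficiency([0.98])

print(f"grid path:  {grid_path:.1%} of DC energy reaches the servers")
print(f"DC-DC path: {dc_path:.1%} of DC energy reaches the servers")
# grid path:  89.4% ... DC-DC path: 98.0%
```

Even with generous stage efficiencies, the multi-stage chain loses several times more energy than a single DC-DC hop, which is the intuition behind skipping the AC conversion chain entirely.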
A
And even if things like the 800-volt DC data center future show up, which is what I hear Nvidia and everybody talking about, you're still upstream of all of that.
B
I certainly welcome that 800-volt DC busbar. We're tapping 800 to 1500 volt DC today and delivering it to compute. Right now our product is Bitcoin mining, but we've also delivered that 800-volt DC power to GPUs. So we're already living in the high voltage DC era now, and I'm excited to see where it all goes.
A
William, anything else we should have covered? Anywhere you need help, anything you want to put in the minds of people who are listening and are interested in what you're doing?
B
We're always hiring. We're looking for talented engineers: power electronics folks, mechanical folks, AI/ML engineers. So check out our website, rune.energy, for more information.
A
Thanks for your time today.
B
Thank you.
A
Inevitable is an MCJ podcast. At MCJ, we back founders driving the transition of energy and industry and solving the inevitable impacts of climate change. If you'd like to learn more about MCJ, visit us at mcj.vc and subscribe to our weekly newsletter at newsletter.mcj.vc. Thanks, and see you next episode.
Podcast: Inevitable (an MCJ podcast)
Host: Cody Simms
Guest: William Layden, Co-founder and CEO of Rune
Release Date: February 17, 2026
This episode explores the intersection of renewable energy generation and scalable compute infrastructure. Cody Simms interviews William Layden of Rune—a company building modular, plug-and-play micro data centers that tap directly into unused renewable power at solar and wind sites. The conversation unpacks inefficiencies in the current energy-to-data-center pipeline, Rune's "direct DC" approach, the economic dynamics of renewable generation (including "clipping" and "curtailment"), and how Rune is leveraging these inefficiencies starting with Bitcoin mining, with aspirations to scale into broader compute workloads like AI.
On Clipping and Hidden Opportunity:
"Basically there's a hidden power plant behind every single solar power plant. And with Rune, every single solar power plant is a latent data center." – William Layden (13:24)
On the Inefficiency of Current Energy Flows:
"...every electron of direct current flows through a multi chain conversion system that was designed for alternating current and designed for really the 20th century..." – William Layden (08:42)
On Batteries vs. Compute Scaling:
"...batteries are actually anti-network effect technologies ... every incremental battery you install ... you're going to cannibalize value. ... If you string together a bunch of computers ... you're going to drive material and scientific progress." – William Layden (17:27)
On Future Market Fit:
"Today it’s end of January. We just had a major snowstorm. Good luck pouring concrete in 10 degree weather ... We don’t use any concrete. You get a forklift, you plop the RELIC down. It’s up and running in 30 minutes." – William Layden (36:38)
| Topic | Timestamp |
|-------|-----------|
| Episode Introduction & Rune Overview | 00:00–03:15 |
| Physical Product Description | 03:15–05:09 |
| Power Flow Today vs. Rune | 07:09–09:42 |
| Clipping Explained | 10:09–13:24 |
| Curtailment Explained | 14:09–15:57 |
| Batteries vs. Rune | 16:13–19:59 |
| Bitcoin as Beachhead | 19:59–22:51 |
| Interruptible Compute & AI Vision | 22:51–26:28 |
| Business Model & Customer | 26:54–28:05 |
| Company Backstory & Deployments | 30:44–32:51 |
| BYOP & Future of Data+Power | 33:26–38:20 |
For more details or to join the Rune team, visit: rune.energy