
Big Technology Podcast Host
Is AI a bubble or the biggest boom of our lifetimes? The fate of one company, CoreWeave, may tell us everything we need to know. We'll be back with the company's founders right after this.
Ad Voiceover
Fiscally responsible. Financial geniuses. Monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it, so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states or situations.
Ad Voiceover
Are you interested in effortlessly growing your Bitcoin portfolio? I sure am. The Bitcoin credit card by Gemini earns you Bitcoin back on every purchase. Use it like any credit card, buy lunch, gas, or your weekly groceries, and you'll earn up to 4% back instantly in Bitcoin, or one of over 50 other cryptos, straight to your account. All that with no annual fee. And right now you can grab a $200 Bitcoin welcome bonus. It's the easiest way to start building your Bitcoin stack. Go to gemini.com/card to learn more. Terms apply. See the link in the description for more information regarding rates and fees. Issued by WebBank. To qualify for the $200 crypto intro bonus, you must spend $3,000 in your first 90 days. Some exclusions to instant rewards apply. This is not investment advice, and trading crypto involves risk. Check Gemini's website for details on rates and fees.
Big Technology Podcast Host
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversations about the tech world and beyond. We have a great show for you today because in studio with us are the founders of CoreWeave. CoreWeave CEO Michael Intrator is here with us. Michael, welcome.
Michael Intrator
Thank you very much. Great to be here.
Big Technology Podcast Host
And CoreWeave's Chief Strategy Officer Brian Venturo is also here. Brian, welcome.

Brian Venturo
Thanks for having me.

Big Technology Podcast Host
Great to see you. You both are running one of the most fascinating companies in the AI boom. Everyone has used you effectively as a Rorschach test to read in their beliefs or insecurities about what's going to happen in this AI moment. Some people think that you're the poster child for the AI bubble. Others think that you're perfectly positioned to take advantage of the boom in building that is occurring as demand goes through the roof. A couple stats about you: as of today, the company is worth $42 billion after an IPO earlier this year. You built eight new data centers across the US in the third quarter alone. And the latest reported numbers have you in possession of something like 250,000 of Nvidia's GPUs, which are the chips that companies use to run AI models and grow them, or train them, as they like to say. Let's just start off with this, because it's been a heck of a ride for you over the past couple years. What has it been like being on the front lines of this AI build-out? Help people feel it: the speed at which it's boomed, and what it's taken to do something like build eight data centers in a quarter.
Brian Venturo
It's exhausting. All right, so let's start with that. It's been exhausting.
Michael Intrator
Yeah, he had it dead on. It has been incredibly exciting. It has been an unbelievable year. I mean, we IPO'd just eight months ago, and it feels like it's been two lifetimes. The company is moving at an incredible speed. We are building a massive percentage of the global AI infrastructure that's required to allow artificial intelligence to be what it is. And when I say massive, it's a meaningful percentage.
Big Technology Podcast Host
What's your estimate about the percentage?
Michael Intrator
Ooh, that's tough. Look, we don't know. We think of ourselves as providing enough of the compute that we have the ability to be relevant in the debate of how AI is going to be built and how it's going to run into the future. And so we don't know what the numbers are. There are lots of different providers of the technology being used, and there's no real good way to put your fingers on the data. But it's meaningful, right? And that's an exciting place to be. And honestly, we talk about this in the company all the time: it's a privilege to come into work and focus your energy and your creativity every day on building a component of artificial intelligence, which is the issue of our time in many ways. We get to sit there every day and pit ourselves against those issues, which is great. I mean, I have a ball with it.
Brian Venturo
I'm going to take a shot at this. Hold on. Before we move on, I think that's really the practical side of it, right? And when you're a company growing as fast as we have, where we had maybe 100 employees three years ago and now we have 2,500 employees or so, there's an emotional side of this too, right? Since the IPO, we've been under this spotlight in the world: what are they doing, how are they doing it, are they executing? And internally, we always set the highest bar for how fast we can do something and how high a quality we can do it at. As this industry has expanded so rapidly, there are things that happen, right? You have weather that impacts construction at a project. You have a truck that hits a bridge. You have all of these random exogenous or idiosyncratic things that happen in a supply chain. And then it comes back to us, and the world is like, wow, you failed. Right? And inside the company, from a culture perspective, it's been so important for us to manage that. Listen, we're doing something at a scale no one's ever done before, at a speed no one's ever seen before. Of course things are going to go wrong, but take perspective; see how much we've done right. And for our employees, if you're moving at a million miles an hour and you hit a speed bump, it's okay. It doesn't change the trajectory of what you're doing. It just provides the battle scar so it doesn't happen next time.
Big Technology Podcast Host
Yeah, I can imagine it's a rough-and-tumble world, trying to build this with very demanding customers and very important technology that you're deploying. And the speed is crazy. It is interesting, looking at your founding story: you really started working on providing infrastructure for crypto. Was it Ethereum mining or something like that? And then you pivoted in a very smart way to this AI moment, establishing a relationship with Nvidia; we'll talk about that. That's proven to be very useful and helpful for you, and probably for Nvidia as well. And now you're again in hyperdrive, building data centers. And the data centers are, if I have it right, largely leased out, or the capacity is rented out, mostly to the tech giants. I mean, the core customer is Microsoft. Something like two thirds of the demand, according to your public filings, is Microsoft. But there are others as well.
Michael Intrator
So we actually spoke to customer concentration in our last earnings call. No customer represents more than 30% of our backlog. And so we've done an incredible job. It's been a focus of the company, everything from sales all the way through the build cycle, to really broaden the reach with which our solution touches artificial intelligence. So Microsoft is an important customer and a large, creditworthy, and formidable part of the AI ecosystem at large. But we've done a really good job bringing on other wonderful clients, wonderful customers, that are going to continue to use our solution as they build their products and deliver them to market.
Big Technology Podcast Host
Okay. And I definitely want to get into customer concentration in a little bit, so that's a good preface to what we'll touch on, and already some new data to me, so good to hear that. But I wanted to get into what it takes to build these things, these data centers. You're assembling them with incredible speed. So I just want to hear a little bit about, on the ground, what does it take to put together these data centers?
Brian Venturo
So historically, let's say two years ago, we were able to go out and buy or lease capacity that was much further through the development cycle. The shell basically already existed. It was a fit-out construction process, which means going in and installing the last pieces of the cooling infrastructure, the cabinets, and the conveyance for all the cabling, all the hundreds of miles of cabling we have in these things. But what's shifted over the past year is that now we're doing much more bespoke, in-house design, to make sure that we're meeting the needs of what our customer's deployment is going to be. So it's everything now: how is the cooling and electrical distribution designed? How are we ensuring electrical redundancy and reliability? How are we cooling the air-cooled side of these things? Because even with liquid cooling, there's still a component that has to be cooled with air.
Big Technology Podcast Host
Can we pause on that?
Michael Intrator
Sure.
Big Technology Podcast Host
These chips run extremely hot, right?
Brian Venturo
Extremely hot.
Big Technology Podcast Host
Cooling. People talk about cooling. For those who are coming to this for the first time: to be able to run an AI data center, you've got to be able to cool the chips if you want to be successful long term.
Brian Venturo
This is one of the things that I think the market misunderstands: everybody believes that there's some differentiation in the plumbing of the liquid-cooled data center. That's not where the differentiation lies. It's all the same pipes and valves and fittings; everyone's using the same things there. The differentiation comes after you turn it on, in how you control those systems.
Big Technology Podcast Host
Okay.
Brian Venturo
Right. And that's what we've done incredibly well as a company, and what we've very consciously not spoken about externally for the past couple years, because it is our secret sauce: how we provision, validate, and manage those data centers, all the way from the power and cooling infrastructure up through the GPUs and the servers. And it's why the most valuable companies in the world, the biggest AI labs, actually use us to run their most critical training jobs.
Big Technology Podcast Host
Right? I mean, it's a Herculean task, right?
Michael Intrator
It's important to understand, when you're thinking about the ecosystem, right, that you're thinking about the different neoclouds that populate it.

Big Technology Podcast Host
And what's a neocloud?
Brian Venturo
Neocloud. The worst term ever. I hate it.
Michael Intrator
Think of it like this: in the common vernacular, everybody knows who AWS is, Amazon. They know who Microsoft is, they know who Google is. Those are the hyperscalers; you can throw Oracle in there if you'd like. But then there's a class of providers that can deliver this infrastructure, and we are the leader among them. And what is important to understand is that if you took all of the other neoclouds and added their GPU fleets up, we would still be a multiple of all of them combined in terms of the number of GPUs that are up and running and delivered to clients. A large multiple. When Brian talks about things the market is struggling to understand, it's important to understand that what differentiates us, what has allowed us to be as successful as we have been, is that the software suite we have built allows us to take the commodity GPU and deliver a decommoditized, premium service that allows people to extract as much value from this infrastructure as can possibly be extracted. And that's really what CoreWeave is doing. And when Brian says, hey, the leading companies in the world and the leading labs in the world are relying upon us to deliver our service, that is why: it's because the product they ultimately receive is the product that gives them the greatest probability of being successful at using the GPUs to deliver the products their company is building.
Big Technology Podcast Host
Right? So just to put it in plain English, always helpful for me: when a company like Microsoft works with you on building infrastructure for artificial intelligence, you've built some proprietary pieces of the puzzle, like your cooling system, like the software that runs the data center, and that allows them to get more out of the chips than they would have typically.
Brian Venturo
Yeah. And the nuance here is that when you build one of these data centers, and it has 3,000 miles of fiber optic cabling and a million optics that connect into the switches, these things all fail. And the way training jobs are run today, when one component fails or one component limits the performance, the balance of the training run is going to be governed by the worst-performing component. And our entire job is to build the automation, the predictive analytics, the machine learning models around saying, okay, we're seeing a problem here; how do we gracefully handle these things so it has the least impact on our customers' jobs?
Michael Intrator
Right.
Brian Venturo
And that's the CoreWeave secret sauce: we have the world's largest data set of how these things run and how they fail, and we've built all the recovery mechanisms and the software intelligence to help our customers run these things.
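The failure mode Brian describes has a simple mechanical cause: large training jobs are synchronous, so every worker waits for the slowest one at each step, and aggregate throughput is governed by the worst-performing component. A minimal sketch, with entirely hypothetical numbers:

```python
# Synchronous data-parallel training: each step completes only when the
# slowest worker finishes, so one degraded GPU drags down the whole job.
# Numbers below are illustrative, not real cluster measurements.

num_workers = 1024
healthy_step_time = 1.0  # seconds per step for a healthy worker

def cluster_step_time(worker_times):
    # All-reduce style synchronization: the global step is governed
    # by the maximum (i.e. slowest) per-worker step time.
    return max(worker_times)

# All workers healthy: the cluster runs at full speed.
all_healthy = [healthy_step_time] * num_workers
print(cluster_step_time(all_healthy))    # 1.0

# One worker throttled to half speed (a flaky optic, a hot GPU):
# all 1024 GPUs now effectively run at half speed.
one_straggler = [healthy_step_time] * (num_workers - 1) + [2.0]
print(cluster_step_time(one_straggler))  # 2.0
```

This is why the fleet-wide monitoring and graceful failure handling Brian mentions matter: detecting and replacing a single straggler can double the effective throughput of the entire job.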
Big Technology Podcast Host
Is the demand that you're getting from your customers, and you mentioned training, mostly training the AI models? Because that's what a lot of the infrastructure has been used for: building and scaling these models, throwing more compute at them, throwing more data at them, making the models bigger, and then the idea is that the models get better. So are you seeing most of your demand on the training side of things, or has it gone to inference, where companies are actually using the models and deploying them into production?
Brian Venturo
It's a great question, and I think it speaks to the split, or this kind of delineation, of where the market's been for the last three years and where it's going. Our customer base for the last three years has primarily been the largest AI labs and enterprises that are building the capabilities of AI.
Michael Intrator
Right.
Brian Venturo
And it's now shifted from the people building those capabilities to the people that want to use those capabilities to change business outcomes. And this is where all the enterprise adoption is coming from. One of my favorite services out there is Lovable. You go to Lovable, you can build any app you want; there's a chatbot that helps you go through it. We're finally starting to see people chain together these capabilities to build real products that solve problems. And our business for the last three years has really been around the creation of those capabilities, and it has very quickly shifted to include not just the creation of them, but their deployment and use in business practices. One of the things that I didn't expect was that what training looked like two years ago is how inference looks today: you're still dependent upon highly connected storage, and your back-end networks become critical because the models are so large. So there's really no difference between the training infrastructure we deployed to build those capabilities and what our customers are ultimately using to serve them.
Big Technology Podcast Host
So has inference overtaken training for you?
Brian Venturo
We serve a tremendous amount of inference. But, you know, I actually don't know the answer to that. Six months ago I would have said it was two-thirds training and one-third inference. It's probably close to 50/50 now.

Big Technology Podcast Host
Okay.

Brian Venturo
But there are also some of our big customers that go back and forth: they'll use a campus for training, they'll launch a new product, they'll have to spill over into inference. A lot of this is very dynamic, and it's been built to be.
Michael Intrator
So, yeah, this may provide a segue to some of the other subjects that you'll ultimately get to in this podcast. But for me, watching inference, understanding that inference is the monetization of the investment in artificial intelligence, is one of the most exciting trends within AI. And we have a front-row seat across the entire cross-section of almost every large, important lab that's building this stuff. And watching them increasingly move from, let's say, one third inference climbing towards 50%, and at times even over 50%, of the fleet being used for inference is just an amazing indication of the scale of the demand to use artificial intelligence to serve customer inquiries. And that means everything.
Big Technology Podcast Host
All right, one more question about this.
Michael Intrator
Yep.
Big Technology Podcast Host
Why does CoreWeave need to exist? I mean, we're talking about these big companies like Microsoft. Why wouldn't they just build their own data centers? Why are they licensing it from a third party?
Brian Venturo
So it's a great question. There was a void in this market, and there are a couple pieces here. The biggest clouds in the world today were built off the cash engines of peripheral businesses. Google's built on search. Amazon's built on retail. Microsoft was built on enterprise software. We came pretty much out of nowhere. And the moment in time for us to be able to get ourselves into this position was driven by crypto. You mentioned earlier that we came out of Ethereum mining. We were able to leverage the revenue from Ethereum mining to go out and build and deploy additional scale, so that when crypto went away, we had the infrastructure in place and, hopefully, enough clients that we were at escape velocity.
Big Technology Podcast Host
Right.
Brian Venturo
So we recognized that compute was going to be valuable. We didn't necessarily know at the time what it was going to be valuable for. I don't think Mike and I ever had this idea that there was going to be hundreds of billions of dollars a year in capex for AI, but we had the thesis that compute was going to be incredibly valuable, and we wanted to own a lot of it. And we looked at that compute resource as an option and said, okay, what are the best things we can do with this? And that's how we've always approached different business problems: what is our asset? How do we monetize it most effectively? What's the most valuable way to use it?
Michael Intrator
So I'm going to jump in here, but I want to go back to something that we kind of talked through as we started this, right? We've built a software stack from the ground up to optimize for the use cases associated with parallelized computing. We do it better than anyone else. The reason we exist is because we deliver a fantastic product that is highly...
Brian Venturo
In demand and incredibly differentiated.
Michael Intrator
And incredibly differentiated. And so, you know, we serve the largest players, but we also serve a ton of other AI companies that are building applications, where they have the choice to use us or one of the hyperscalers. And many, many of them choose to use our solution because it allows them to more effectively deliver compute. One of the things that's really lost here is that there's not an understanding of how fundamental the change from Cloud 1.0 to Cloud 2.0 was, as you moved from sequential computing into parallelized computing. When you made that leap from hosting websites and data lakes into driving parallelized computing for artificial intelligence, it stands to reason that a fundamental change in how compute is used would also require a fundamental change in how you build the cloud to serve it. And we took advantage of that transition to build best-in-class solutions.
Big Technology Podcast Host
Right.
Michael Intrator
And that's why we exist.
Big Technology Podcast Host
So I've heard an argument made that, basically, for the big tech companies to build these data centers, they have to forecast demand out years in advance. It's a massive capital commitment. They are not sure whether it will pay off. And CoreWeave is useful to them because you're taking the risk, and then they can use your capacity and sort of rent it out, as opposed to having to make these big investments on their own, and it's their ass if things go wrong.
Michael Intrator
Yeah, look, that is a narrative. I don't think it actually tracks with the reality of the situation. I think the reality of the situation is that the large hyperscalers are building as fast as they can. Google just released a press release saying they're building $50 billion worth of infrastructure while they're still buying from everyone else they can. Microsoft is building internally, and they're buying from lots of other players. I feel like that argument is model fitting, right? Somebody's got a preconceived notion of what this is going to look like, and now they're reconstructing the facts on the ground to fit that model so they can say, look, I'm right. But I look at it very differently. I look at the way that we built our competitive advantage over the hyperscalers, and the way that we built our competitive advantage over other neoclouds. And the way we did that is we understood that this type of computing was going to be important, and we built the infrastructure and the software to be able to serve it when the demand emerged. And we did it in a very risk-managed way. When I look at the future, when I think about the investments that go into building an AI factory, and I think about how much money is being put into the data center versus how much is being put into the compute that goes inside of it, I think about the data centers as basically an option on being able to provide, and be relevant for, the delivery of compute into the future. We take our risk dollars as a company and we invest in the long poles. And the long poles are really twofold: one is building the best software in the world, and the second is having access to the data center capacity to be able to deliver compute. When a wave of demand hits this market that requires you to deliver, you can't just wake up and say, hey, I want to deliver a gigawatt worth of infrastructure.
What you'd have to do is you have to start years in advance building that gigawatt of infrastructure so that you're in a position that when your customers say, hey, I just produced a new way of using AI that's going to require a gigawatt worth of infrastructure, you're able to serve it. We're going to have a tremendous portfolio of infrastructure that is going to be able to be deployed into the future. And we're really excited about that. We think it's a wonderful way to go about building our business.
Big Technology Podcast Host
Right. And that's the question about the bet, right? You're betting that AI is going to continue to be adopted at a wild rate.

Michael Intrator
That's not entirely accurate.
Big Technology Podcast Host
Okay, let's hear it.
Michael Intrator
What we are doing is making the majority of our investments by taking long-term contracts from creditworthy entities and using those contracts as a way of raising money to build the infrastructure, where the demand and the credit and the capital have already been secured. So let's say 85% of our exposure is to deliver compute to investment-grade entities, AI labs, or other large consumers of compute. The other 15% is exposure we've taken ahead of long-term contracts, to be able to do that exact thing in the future. And that's the way I look at it. I think it's a much better way to think about how we're taking on risk, how we're dealing with leverage, and how we're positioning ourselves. If the market continues to grow, we're in a great position. If the market stabilizes around this level, we're fine. If the market contracts, if there's some new technology, then we will be left with some portion of that 15% that may have to wait for a few years before the market grows back into it. And we are fine with that. People have talked about how the founders of this company look at the world with a different lens, because we don't come from Silicon Valley; we come from the commodity space, we come from Wall Street. We think about option value. When we think about compute, we think about the option value associated with it. When we think about the data centers, we think about the option value of being able to build, to be relevant in the future. And that's the way we go about allocating our risks and securing the contracts that we have in place right now.
Brian Venturo
Yeah, and to speak to one thing here: you talked about if the market contracts. I think we would almost welcome that, because it presents tremendous opportunity for us.
Big Technology Podcast Host
How so?
Brian Venturo
Right. I mean, you're in a position where there are going to be distressed assets, there are going to be consolidation possibilities. That's when opportunity really comes in. There are a lot of times where we sit there and say, okay, we're looking for M&A, we're looking to invest in things, but the valuations don't make sense. And Mike and I have made our careers on waiting for those opportunities and saying, okay, these are the things that I want to buy when things don't necessarily go right for them. And that's really what excites us. One of our other founders got on the phone with me last week and said, "I love this, Brian." I said, "What?" He said, "This is the one where you thrive. You're so focused on where the opportunities are, how do I go take things over." As I say to people every once in a while, I feel like when there are headwinds in the market, it's actually easier to do this job than when the tailwinds are blowing at 1,000 miles an hour.
Big Technology Podcast Host
But can I ask, how have you set up the company to make sure that you're not the distressed asset?
Brian Venturo
Look at the construction of our customer contract portfolio. Everybody last year talked about how customer concentration and exposure to Microsoft was a bad thing, but they have a better balance sheet than the US government, right? I'm not worried about them performing on their long-term obligations to us. That's basically the best possible position we can be in. And we've been super thoughtful about the way we choose which customers to work with and how we manage the credit exposure, so that we're certain the investments we make will be paid back. And look at the people that are providing us the debt to do those projects, like Blackstone. They're some of the most sophisticated people in the world. And their underwriting committees have come in and said, yes, I want to do this, and I want to scale it up as aggressively as possible. You're telling me you're going to pit some financial analyst against John Gray? I'm going to go with John Gray, yeah.
Michael Intrator
I mean, maybe a second on one of the fundamental building blocks of how we have expanded the way we have, and how we use debt, because I think that's one of the misunderstood components of how we have built this company. It is really important to understand the way we build the components. We go into the market; let's use Microsoft, because we've used them, but there are lots of other clients you could use, and they're totally interchangeable; the structure is still the same. We go to them, and they say, we need compute. We say, okay, we're going to sign a contract. They sign a contract for five years. We structure that contract in a way that we can go back out to the Blackstones of the world and borrow money from them to go ahead and build the infrastructure to deliver to Microsoft. Within the five years of the contracted period with Microsoft, we pay for the infrastructure, we pay for the opex, we pay for the interest, and we earn an enormous margin on the infrastructure. So yes, there is debt. We're not arguing that. We believe fundamentally that when you build any type of infrastructure at this scale, debt is the correct way to go about doing it. The examples run through history: whether you're talking about building a power plant, building a distribution grid for electricity, the telephone, or the steam engine and railroads, this is the tool that you use. We didn't invent anything new here. We just took a tried-and-true method and applied it to the specifics of the depreciation associated with this asset, of the obsolescence curve associated with this asset, and shaped the contours so that it worked in an airtight manner.
So that guys like John Gray, or Blackstone, or BlackRock, or any of the big lenders could look at it and say, I understand how they're going to underwrite this. I understand the risk in this. I understand that these guys are going to deliver compute to that balance sheet; they're going to get paid back, and when they get paid back, we're going to get paid back. So let's lend them the money. And that's lost on the market. They think we're running around with this incredible capacity to take on risk. But it's a really low-risk approach. Matter of fact, it's far lower risk than saying, hey, we're going to do it on equity, because we're saving our equity for the long poles that you've got to invest in. That's where you want to put your bullets. You want to use the debt markets to deal with a depreciating asset. It's the way it's done. It's the way it's been done throughout history.
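The structure Michael lays out, where a fixed-term contract secures the debt that funds the buildout, can be sketched with made-up numbers: contract revenue has to cover opex, interest, and full debt amortization inside the contract window, with the remainder falling through as margin. Every figure below is hypothetical, not CoreWeave's actual economics:

```python
# Hypothetical contract-backed financing of one data center buildout.
# A 5-year take-or-pay contract backs the debt; revenue must retire the
# debt within the term, with the remainder earned as margin.

contract_years = 5
annual_revenue = 400.0    # $M/year committed by the customer
debt_principal = 1_000.0  # $M borrowed to build the infrastructure
interest_rate = 0.10      # annual interest on the outstanding balance
annual_opex = 60.0        # $M/year to operate the facility

balance = debt_principal
total_margin = 0.0
for year in range(contract_years):
    interest = balance * interest_rate
    # Revenue covers opex and interest first; what's left amortizes the
    # debt until it's retired, then accrues as margin.
    available = annual_revenue - annual_opex - interest
    principal_payment = min(balance, available)
    balance -= principal_payment
    total_margin += available - principal_payment

print(f"debt remaining after term: ${balance:.1f}M")
print(f"margin earned over term:   ${total_margin:.1f}M")
```

Under these toy numbers the debt is fully retired within the five-year contract, which is the property lenders underwrite against: repayment depends on the customer's contractual commitment rather than on speculative future demand.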
Big Technology Podcast Host
Yeah. By the way, it's great that we're able to have this conversation. This is what we want to do on the show: take this complex stuff, talk about what the reactions have been in public, speak with the principals, and actually get the story. So thank you for talking it through with me. And on that note, let's continue. The argument, I think, that would be made is not that Microsoft isn't good for the money. The argument would be that generative AI is still a developing category. It hasn't really shown the ability to turn a consistent profit. And so the companies that are investing in it in a big way may one day wake up and say, we don't really want to do that build-out. OpenAI, for instance, let's just use them as an example: they have something like $1.4 trillion committed to spend on infrastructure. I think OpenAI might be the only ones that believe they'll actually spend that $1.4 trillion, and maybe their investors. So what do you think about that risk: that because AI is new and not as predictable as what you'd have in a different category financed by debt, it is therefore riskier, even if the credit rating of a company like Microsoft is golden?
Michael Intrator
A couple of things on OpenAI, because they are the tip of the spear in many ways for artificial intelligence. They have a franchise that has 800 million monthly users of their product, which is fully one tenth: one out of every ten human beings on the planet logs on to OpenAI.
Big Technology Podcast Host
One of the fastest-growing tech products in history.
Brian Venturo
I use it all the time for everything.
Big Technology Podcast Host
I am addicted to it, and I don't even mean that in a bad-addiction way. It's an amazing product; I won't argue with that.
Michael Intrator
So you've got this product that's out there, and then you have this $1.4 trillion, which I believe has been confirmed by everybody but OpenAI, who would actually probably have issues with that number in terms of how much they're spending, when they're going to spend it, what are options, what are firm, all those kinds of things. And so I just think it's, you know, narrative shaping. There's an incredible number of people out there talking through how this is going to be done, when it's going to be done, and I don't think that they necessarily have all the correct information. That's number one. Number two is that you've listened to both Brian and me talk about how we think about credit. We're pretty sophisticated in how we think about credit. We built our entire careers, long before we started this company, thinking about risk management and credit. OpenAI will be a percentage of our credit exposure, just like Microsoft will be a percentage of our credit exposure. And the way that you manage credit against an unbelievable potential company, but a company that may not have a credit rating strong enough to support their aspirations, or that may have to tone it down, is you just make them a limited percentage of your overarching business, and you accept the risk on that while you mitigate the risk using credit from other companies, like Meta, which we signed a $14 billion contract with, like Microsoft. I mean, just incredible companies. And so you just think of it as: how much investment-grade exposure am I going to take, how much non-investment-grade exposure am I going to take, what's the correct ratio, and how am I going to mitigate that over time? And that's the way we look at it.
Big Technology Podcast Host
And what happens if one of these companies over time wants to walk away? Let's say Meta says, yeah, actually artificial intelligence, we can develop it much more efficiently. Or Microsoft says, yeah, AGI is actually a decade away, not three years away.
Michael Intrator
Yeah. So AGI being a decade away, six decades away, it doesn't matter. You were asking about how you run a company in this dynamic environment, how you run a company that's going through this type of scaling, and I talk about this internally at the company all the time. We need to be directionally correct. The world is incredibly fluid. The world is incredibly dynamic. We are at the absolute bleeding edge of a new technology that's redefining the world. You're not going to get everything right, but directionally you have to build a company that's moving in the correct ways to be able to take advantage of this super cycle that's going on. What do I think if Meta says, hey, we're not going to continue to invest? That is their prerogative as a company. But that doesn't in any way mitigate their contractual obligation to us through the term of the agreement, the one we went to Blackstone with and said, we're going to borrow money because we have a firm contract with Meta that's not open to renegotiation. And you know, there was a wave of this that took place about a year ago: Microsoft is walking away. Like, what are you talking about? This is a AAA company. They don't walk away from anything. If they make a contractual obligation, that's a contractual obligation. Even the idea that they would walk away from it is deeply misleading to the market.
Big Technology Podcast Host
Okay, one more thing on debt, then we'll move on. Some analysts have talked about CoreWeave borrowing more money because it structurally spends more money than it brings in, so it borrows to pay the interest on the last loan.
Brian Venturo
Why don't we talk about how these actual debt instruments are structured, from the box perspective, and what the controls around these things are. That'll put this to bed. Let's just be done with this.
Michael Intrator
There are a lot of analysts with a lot of opinions based on a deeply incomplete understanding of how these are built. So maybe two seconds on it. And Brian, you can kind of keep me on the rails here.
Brian Venturo
I'm pushing you off the bus as much as I can.
Michael Intrator
Once again, going back to the contract: we did a contract with Meta, right? When we do a contract with Meta, we sign the deal, we borrow the money from a syndicate of lenders, and then we go and buy the infrastructure to build that facility. We run the facility. As we're delivering GPU capacity to Meta, Meta sends money, but it doesn't come to us. It goes into what's called a box. Money flows into the box, and then it goes through a waterfall. The first thing it does is pay off the opex associated with the power and the data center. The second thing it does is pay the interest to the lenders. The third thing it does, after it's paid all of the expenses, is pay down principal, and then it releases what's left back up to our company, so that principal and interest completely amortize within the five-year term of the contract with Meta.
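The waterfall Michael describes can be sketched in a few lines of code. This is a minimal illustration only: the function name, tier structure, and dollar figures are assumptions for the example, not CoreWeave's actual deal terms, and real facilities have more tiers and covenants.

```python
# Minimal sketch of the "box" waterfall described above.
# All numbers are hypothetical, in $M per period.

def run_waterfall(revenue, opex, interest_due, principal_due):
    """Push one period's contract revenue through the priority stack:
    1) opex (power, data center), 2) lender interest,
    3) principal amortization, 4) residual released to the company."""
    cash = revenue
    paid_opex = min(cash, opex)
    cash -= paid_opex
    paid_interest = min(cash, interest_due)
    cash -= paid_interest
    paid_principal = min(cash, principal_due)
    cash -= paid_principal
    return {"opex": paid_opex, "interest": paid_interest,
            "principal": paid_principal, "released": cash}

# Example period: $100M of contract revenue flows into the box.
period = run_waterfall(revenue=100.0, opex=25.0,
                       interest_due=10.0, principal_due=40.0)
print(period["released"])  # 25.0 left over for the operating company
```

Because the residual comes last in the stack, lenders are made whole before any cash reaches the operating company, which is why control of the box matters.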
Brian Venturo
It's controlled by somebody else.
Michael Intrator
And the important piece of this is, it's not that we just barely pay off the interest. The coverage ratio in that box is excellent, and it can be underwritten at a very narrow spread based on the risk analysis of the most sophisticated lenders in the world. Right? They're not lending us this at 22%; they're lending this at 250 basis points over SOFR. Right? Which means basically they're looking at it as a low-risk transaction to get their money back. It's not some crazy YOLO structure. It's an unbelievably risk-mitigated structure that's built to simply allow us to build the infrastructure, deliver it, and then take the revenue. Now, when you're scaling a company at the rate we're scaling, it tends to make sense that you're going to be investing all over the place. And we are: we're investing in data centers, we're investing in software, we're investing in people, we're investing in the companies that we're buying to help us reach up the software stack and provide more value. We're doing all of those things, which is exactly what we should be doing right now. As this space opens up, whenever we see an opportunity, we look at it against all the other opportunities that are out there and say, that one makes sense for us, it drives the company forward. The idea that you're at risk from the debt? I mean, anytime you have debt there is risk; I'm not going to argue that point, because you have to generate the revenue. But what are you talking about? You're talking about operational risk on the GPUs that are in the box. Right?
Brian Venturo
One of the things for us, and why our spread on that interest rate has compressed over the last two years, is that we've demonstrated incredible capacity and capability in delivering that infrastructure. The first time we did one of these debt syndicates, I got paraded around the whole world and had to sit with every single underwriter asking me questions about what are the doors to get into the data center, what is the floor made out of. I'm like, okay, guys. There was so much risk around our ability to operationalize it. That has been put to bed now, where everyone knows that we can do this and we can do it at scale.
Big Technology Podcast Host
Right.
Brian Venturo
So our cost of capital has significantly compressed.
Michael Intrator
I mean, it went from, you know.
Brian Venturo
What was it, SOFR +800 to.
Michael Intrator
No, it was SOFR +1350 down to SOFR +400. Right? Once again, for those who don't understand what that means: the higher the interest rate, the higher the risk. And what you're seeing is the lending market understanding that we have the capacity to deliver this infrastructure, and they are willing to lend us money at increasingly lower rates because they look at it as a lower-risk transaction.
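For readers unfamiliar with the jargon: a basis point is one hundredth of a percentage point, so "SOFR +1350" means SOFR plus 13.50 percentage points. The toy calculation below shows how large that compression is in dollar terms; the SOFR level and facility size are assumed for illustration and are not from the conversation.

```python
# Illustrative cost of the two spreads mentioned above.
# SOFR level (4%) and facility size ($1B) are assumptions.

def annual_interest(principal_mm, sofr_pct, spread_bps):
    """Annual interest in $M at SOFR plus a spread quoted in basis points."""
    all_in_rate_pct = sofr_pct + spread_bps / 100.0
    return principal_mm * all_in_rate_pct / 100.0

SOFR = 4.0  # assumed SOFR for the example
early = annual_interest(1000.0, SOFR, 1350)  # SOFR +1350 -> 17.5% all-in
today = annual_interest(1000.0, SOFR, 400)   # SOFR +400  ->  8.0% all-in
print(early, today)  # 175.0 vs 80.0 ($M/year on a $1B facility)
```

On these assumed numbers, the spread compression alone cuts the annual interest bill by more than half, which is the market's pricing of the delivery risk Brian describes.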
Big Technology Podcast Host
Okay, I have so many more questions and we have only 15 or 20 minutes left. So let's take a quick break and come back and talk about a few things that I find really fascinating: the depreciation on these AI chips, maybe a little bit about the financing structures, and then power. I think we need to talk about power. So let's do that when we're back, right after this. You want to eat better, but you have zero time and zero energy to make it happen. Factor doesn't ask you to meal prep or follow recipes. It just removes the entire problem. Two minutes, real food, done. Remember that time where you wanted to cook healthy but ordered pizza? You're not failing at healthy eating. You're failing at having three extra hours every night. Factor is already made by chefs, designed by dietitians, and delivered to your door. You heat it for two minutes and eat. Inside there are lean proteins, colorful vegetables, whole food ingredients, healthy fats: the stuff you'd make if you had the time. Head to factormeals.com/bigtech50off and use code BIGTECH50OFF to get 50% off your first Factor box plus free breakfast for one year. The offer is only valid for new Factor customers with the code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor.
Big Technology Podcast Host
And we're back here on Big Technology Podcast with two thirds of the founding team of CoreWeave. Michael Intrator is here; he's the CoreWeave CEO. And Brian Venturo is here; he's the CoreWeave CSO, Chief Strategy Officer. We talked in the first half about how these chips run hot. So let's talk a little bit about the life cycle of these chips. I'm trying to figure this out. There are two differing opinions. One is that a GPU like the Nvidia H100 or the GB200 will burn as hot as it possibly can for two or three years and then effectively be useless. Like meltdown. It's like the life cycle of a car compressed into a couple of years. The other side of it is that, no, the GPUs can last, but they get less valuable over time because more powerful GPUs come out that are multiples better in terms of their ability to do AI calculations compared to previous generations. So can we just start with the basic physics of this? How long do these things last?
Brian Venturo
So I'm taking this one. You're out.
Michael Intrator
You take the physics, I'll take the other side.
Brian Venturo
So last year is when we saw, let's call it the hyperscalers that were around in the 2010s, so Amazon, Microsoft and Google, finally retire their Nvidia K80 fleets. The K80 was a GPU that was introduced in 2014. So it was active in their clouds, almost fully utilized, for 10 years. The number of changes in architecture and efficiency advancement and performance advancement over those 10 years was massive. You know, just last week we entered a multi-year contract to renew Nvidia A100s, which are GPUs that were introduced in 2021. Right? So we're already going beyond the five-year contract life for GPUs that came out, you know, four years ago. The idea that these things burn out in two or three years is kind of bunk, right? And from a physical perspective, within three years these things are all still under warranty. So if they break, they get replaced.
Big Technology Podcast Host
Right.
Brian Venturo
These things are designed to run hot. GPUs that we had deployed in 2019 are still running, and still have customers on them. Some of it is customers that are deploying Grace Blackwell with us today. They're going to use Grace Blackwell, Nvidia's latest chip, for their most frontier or bleeding-edge use cases. They're going to train their biggest models. They're going to do the things that they need the most firepower to do. And they're going to run their inference on Hoppers, or they're going to run their inference on Ampere, the A100s. Right? Or they're going to run different steps of their pipeline on A100s, or they're going to run parts of their pipeline on CPU compute. Right? There's always going to be a use for these different levels of compute infrastructure. It's just, where's the economic value there, right? It's not a useful-life question. It's where's the economic value in that time?
Big Technology Podcast Host
And this is where the questions start to build up. So, the chips run. We agree on that one. Now I've been taught, so thank you.
Michael Intrator
The chips run that one off the tip.
Big Technology Podcast Host
And so now the question is when it comes to power, right?
Michael Intrator
Hold on, hold on.
Brian Venturo
I just want to finish.
Big Technology Podcast Host
Let me finish this question and you can answer the last one. But I just want to finish this one. Right? So the question.
Brian Venturo
Shut up.
Big Technology Podcast Host
No, no, no, no, no. I really do want to hear. But let me just put this out there and then you can answer whichever way you want. Okay. The old generations of Nvidia GPUs are much less powerful than the newest generations. There's the Grace Blackwell that's out now; there's Vera Rubin that's coming out. And the argument is that these newer chips, even if the H100, the Hopper, can continue running, are so much more powerful that the value, right, because those H100s are being sold at $20,000 or $30,000 a pop, the value of those chips is going to be much less because of the power of the newer generations. And then if you think about it again, if these companies move from training to inference, if, for instance, let's say hypothetically, there's a diminishing return to training a bigger model, then those more powerful chips can be used to run inference. And then a company like CoreWeave, which has hundreds of thousands of the older generation of chips, is faced with a depreciation problem compared to the most powerful ones.
Brian Venturo
You got it.
Michael Intrator
So let's, so let's go through this a couple different ways.
Big Technology Podcast Host
Okay.
Michael Intrator
All right. I feel like the depreciation narrative is being spun up by folks.
Big Technology Podcast Host
Michael Burry.
Michael Intrator
Yeah.
Brian Venturo
People that don't understand the space.
Michael Intrator
Never been in a data center that I know of.
Big Technology Podcast Host
Sorry.
Michael Intrator
My theory here is it's being spun up by a bunch of folks who couldn't spell GPU two years ago, and now they're out there as experts on how it actually works. So let's actually go through the different pieces of it. The most important tool that I have for understanding what the depreciation curve or the obsolescence curve of compute is, is not what I think, right? It's not what, you know, some historic short seller thinks. It's what are the buyers, the most sophisticated companies in the world, willing to pay for today? And when they come to me and they put in a contract for a five-year deal or a six-year deal, in what world do I not think that they, who are the consumers of this, understand that there are new, more powerful chips coming out? Of course they do. They understand it. But they also understand what their various use cases are. And they are saying to themselves, I'm going to buy this because I'm going to need it today, I'm going to need it in three years, and I'm going to need it in five years. And what the use is within my system will change, but it didn't become useless; it hasn't become obsolete. Right? And they know the new stuff's coming, yet they're still buying it, because they know better than someone who doesn't know anything about how compute is used. My opinions around depreciation are informed by the only entities that get to vote in my world, which are the folks that are paying for the compute over time. Those are the guys that get to vote. Everybody else is just looking and guessing. Right? That's number one. Number two is, Brian kind of made the point that we just had somebody come back and recontract for a term deal. The H100s.
Brian Venturo
A100s?
Michael Intrator
No, the H100s, at 95% of the value of what they were originally sold for. Once again, not showing this catastrophic depreciation curve that has been voiced out there. Once again, for me it's about the data, because I need to make the decision to buy this infrastructure or not to buy this infrastructure. And so I've got to look through the noise and decide: are the big hyperscalers, the big labs, the big buyers of this infrastructure looking at this and saying, this stuff will be useful for us for the next five years, let's go out and buy it? Or should I turn to somebody who's never really understood how the cloud works, what a GPU is, what the different uses are as it moves from the most cutting-edge models to other uses within training, all the way down through inference to simpler, smaller models? And I think that's the way you've got to look at this thing. Like, what are you talking about, man? If Microsoft and Meta and the other big buyers are coming in and buying for five and six years, I don't really think that anybody else really should, or gets to, have what I would consider to be an informed opinion on depreciation. And since I'm selling on term contracts specifically to insulate my company from the depreciation curve, I know how much I'm going to make, because I've sold it to Meta for five years, every hour of every day, and they're going to pay for it every hour of every day. What the curve looks like inside of that five years has already been priced into the deal I did with them. Sorry, go ahead.
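To make the disagreement concrete, here is a toy comparison of what different depreciation narratives imply at year three, set against the roughly 95%-of-original recontract pricing cited in the conversation. The launch price and schedules below are hypothetical, and treating a contract price as a proxy for asset value is a simplifying assumption of this sketch, not a claim from the episode.

```python
# Toy comparison: a "burn out in 3 years" story vs. a 6-year schedule
# vs. the ~95% recontract data point quoted above. Figures hypothetical.

def straight_line_value(original, useful_life_years, age_years):
    """Remaining value if the asset loses worth evenly over its life."""
    remaining = max(useful_life_years - age_years, 0)
    return original * remaining / useful_life_years

original_price = 30_000.0  # hypothetical launch price per GPU, $

pessimistic = straight_line_value(original_price, 3, 3)  # burn-out story
moderate = straight_line_value(original_price, 6, 3)     # 6-year schedule
observed = 0.95 * original_price                         # recontract claim
print(pessimistic, moderate, observed)  # 0.0 15000.0 28500.0
```

The gap between the schedules and the observed recontract price is the substance of the dispute: term contracts lock in revenue regardless of which curve turns out to be right.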
Brian Venturo
Sorry, well, I was trying to interrupt you there, because I think that, in addition to the H100s, which came out in 2023.
Big Technology Podcast Host
Right.
Brian Venturo
We signed a term contract for the A100s, within like 95% of the original price range, on term, last week or two weeks ago.
Michael Intrator
Yeah.
Brian Venturo
Like, that's crazy. Those GPUs are already five years old. And that useful life is there. And everyone is saying, oh, it's not useful. They have no idea. They don't actually have the data. We're sitting on all this data. We talk to every single one of these customers. And you know, one of the interesting things that's happened over the past year is everyone was saying, well, where are all the enterprises?
Ad Voiceover
Ever see an idea so clearly in your head, but struggle to find the time to get it all done? Wix helps you go from "I'll get to it" to done. Build a full site just by describing your idea. Let an AI agent handle daily tasks, plan your next marketing campaign, or help out customers, so you can grow your business the way you want without it taking over your life. Try it out at wix.com.
Episode: Coreweave: AI Bubble Poster Child Or The Next Tech Giant?
Date: January 7, 2026
Host: Alex Kantrowitz
Guests: Michael Intrator (CEO, CoreWeave), Brian Venturo (Chief Strategy Officer, CoreWeave)
This episode delves into CoreWeave's meteoric rise in the AI infrastructure world, examining whether the company is the quintessential symbol of an AI bubble or a genuine future tech giant. Host Alex Kantrowitz interviews CoreWeave CEO Michael Intrator and Chief Strategy Officer Brian Venturo, exploring their business evolution, operational challenges, market positioning, and the underlying realities regarding AI infrastructure, demand, risk, and financial structure.
On the Emotional Toll of Growth:
"Sometimes since the IPO, we've been under this spotlight... and inside the company... we always set the highest bar for how fast can we do something, how high of a quality can we do it at?"
— Brian Venturo [05:07]
On CoreWeave’s Real Differentiator:
"The differentiation comes after you turn it on and how you control those systems..."
— Brian Venturo [10:23]
On Skepticism and Market Analysis:
"I feel like the depreciation narrative is being spun up by folks... who couldn't spell GPU two years ago and now they are out there as experts..."
— Michael Intrator [49:07]
On Real-World Value of Old GPUs:
"We signed a term contract for the A100s... within 95% of an original price range... Those GPUs are already five years old."
— Brian Venturo [53:04]
On Managing Risk & Contracts:
"When we think about compute, we think about what is the option value associated with it. When we think about the data centers, we think about what is the option value to be able to build, to be relevant in the future."
— Michael Intrator [24:44]
CoreWeave is not simply an AI boom beneficiary but has built a highly differentiated—and resilient—business at the intersection of AI hardware, software, and finance. Their approach combines infrastructural expertise, a deep understanding of parallel compute, and conservative risk management—all the while navigating narratives about bubbles and skepticism with real-world data and customer demand as their north star.
For those wondering if AI infrastructure is a bubble or a sustainable sector, CoreWeave’s founders make a strong argument for the latter, not least because of their flexibility, rigorous financial discipline, and ongoing value-add beyond just possessing hardware.