No Priors Podcast: How Capital is Powering the AI Infrastructure Buildout
Guest: Neil Tiwari, Managing Director, Magnetar Capital
Hosts: Sarah Guo, Elad Gil
Release Date: February 26, 2026
Episode Overview
In this episode, Sarah Guo of Conviction and Neil Tiwari of Magnetar Capital dive deep into how capital deployment and financial engineering are enabling the rapid buildout of AI infrastructure. They explore the evolution of compute financing, shifts in demand from training to inference, supply chain constraints, the role of sovereigns, physical AI, and more. With insights spanning capital strategy, infrastructure challenges, and macro trends, this is an inside look into the business mechanics fueling the AI revolution.
Key Discussion Points & Insights
1. Magnetar Capital’s Role in the AI Compute Frontier
[00:39]
- Background: Magnetar Capital is a $22B alternative asset manager with strategies across private credit, venture, and systematic public markets.
- Unique Positioning: Their history in energy, real estate, and core infrastructure set up a natural extension into compute infrastructure investment.
- Early Entry: Magnetar moved into AI infrastructure by investing in CoreWeave in 2021, well before the AI boom, capitalizing on CoreWeave's transition from Ethereum mining to high-performance computing (and eventually AI workloads).
- Notable Quote:
"We just happened to be at the right place at the right time... we could envision a world where the GPU could be used for a lot of different high performance kind of computing applications."
— Neil Tiwari, [01:51]
2. How AI Compute Financing Works
[06:34]
- Massive CapEx Requirements: AI infrastructure capital expenditure for 2026 by hyperscalers is projected at $660–$690 billion, scaling to trillions in the future.
- Debt vs. Equity: Relying solely on equity is inefficient and dilutes founders; creative structuring is required to fund such capital-heavy buildouts.
- Innovative Structures: Use of delayed-draw term loan (DDTL) structures or SPV-level debt, with contracted cash flows (e.g., from Microsoft, Meta) as primary collateral and the actual GPUs as secondary or tertiary collateral.
- Amortization: Debt is structured to be fully repaid within the contract term (often in 2–3 years on contracts running 4–5 years), before the assets become too depreciated to serve as collateral.
- Notable Quote:
"The primary collateral was the contracted cash flows from investment grade counterparties... GPUs themselves were actually like the second or tertiary level of collateral."
— Neil Tiwari, [09:47]
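The amortization logic described above can be sketched with a standard level-payment schedule. The loan size, rate, and term below are hypothetical placeholders, not figures from the episode; the point is simply that contracted cash flows retire the balance to zero by the end of the term.

```python
# Illustrative sketch of contract-backed debt amortization.
# All figures are hypothetical, not from the episode.

def amortization_schedule(principal, annual_rate, years, payments_per_year=4):
    """Level-payment schedule: contracted cash flows service the debt
    so the balance reaches zero by the end of the term."""
    r = annual_rate / payments_per_year
    n = years * payments_per_year
    payment = principal * r / (1 - (1 + r) ** -n)  # standard annuity formula
    balance = principal
    schedule = []
    for period in range(1, n + 1):
        interest = balance * r
        balance -= payment - interest          # principal portion grows over time
        schedule.append((period, payment, max(balance, 0.0)))
    return schedule

# Hypothetical $100M loan at 8%, quarterly payments, 4-year contract term
sched = amortization_schedule(100e6, 0.08, 4)
print(f"balance after final payment: ${sched[-1][2]:,.0f}")
```

Because the schedule fully amortizes within the contract term, the lender's exposure shrinks faster than the GPUs depreciate, which is why the hardware can sit second or third in the collateral stack.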
3. Evolution of Instruments and Opening Up to Non-Hyperscalers
[11:44]
- Early Days: Only investment-grade counterparties could participate due to risk.
- Today: Structures now blend counterparties, with both established corporates and newer AI-native companies. This risk balancing lets newer model labs and software startups access significant debt financing as they prove reliability and market demand.
- Optimizing for Fungibility: Deals increasingly treat compute as a fungible asset, allowing capacity to be reallocated across counterparties and workloads.
4. Supply Chain Constraints: Chips, People, and Power
[13:01]
- 2023–2024: Market was constrained by limited chip supply.
- By 2026: Chips are more available, but the pressing bottlenecks are now infrastructure-related: power availability, land, equipment, and skilled electricians.
- Demand Is Relentless: Every obtainable GPU is in use; there is no equivalent of "dark fiber," no "dark GPUs."
- Memorable Exchange:
Sarah: "No, any GPU is used."
Neil: "Exactly. Any GPU is used."
— [16:11]
- Efficiency Drive: New chip generations (e.g., Nvidia's Blackwell series) offer much higher inference efficiency—up to 90–100x over previous models ([14:48]).
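A generational efficiency gain of that size can be framed as simple unit economics. The GPU rental prices and throughput numbers below are invented for illustration; only the shape of the calculation matters.

```python
# Hypothetical unit economics for a generational efficiency jump.
# Prices and throughputs are invented placeholders, not from the episode.

def cost_per_million_tokens(gpu_hourly_cost, tokens_per_second):
    """Compute cost of generating 1M tokens on one GPU."""
    seconds = 1_000_000 / tokens_per_second
    return gpu_hourly_cost * seconds / 3600

old = cost_per_million_tokens(gpu_hourly_cost=2.0, tokens_per_second=1_000)
# A 90x throughput gain, even at a higher hypothetical rental price:
new = cost_per_million_tokens(gpu_hourly_cost=5.0, tokens_per_second=90_000)
print(f"old: ${old:.4f}/M tokens, new: ${new:.4f}/M tokens, "
      f"cost ratio: {old / new:.0f}x")
```

Even when the newer chip rents at a premium, the per-token cost collapses, which is part of why every available GPU keeps finding demand.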
5. Responding to Critiques: “Circular Financing”
[15:26]
- Circularity Debate: Some critics suggest a speculative bubble with debt financing riding on assets that quickly depreciate.
- Rebuttal: Because financing is backed by committed contracts with well-capitalized, investment-grade buyers, and demand is proven and growing, lenders like Magnetar see the risk profile as fundamentally different from the dark-fiber oversupply of the 2000s.
6. Shift from Training to Inference
[17:59]
- Training vs. Inference: There’s now a pronounced shift with large volumes of workloads moving to inference, which is more distributed.
- Inference Complexity:
- Challenges of variable demand, latency sensitivity, memory bandwidth requirements.
- Distributed inference is creating need for smaller, geographically spread clusters, shifting the role of software in orchestration and reliability.
- Ownership Trends: Application-layer and inference cloud companies are recognizing the financial and strategic value in owning their own infrastructure rather than reselling or brokering.
- Notable Quote:
"For every application-layer company out there, the highest line item in COGS is compute."
— Neil Tiwari, [20:54]
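When compute dominates COGS, small swings in compute spend move gross margin directly, which is the financial logic behind owning rather than reselling infrastructure. The revenue and cost figures below are invented for illustration.

```python
# Hypothetical illustration of compute as the dominant COGS line
# for an application-layer company. All figures invented.

def gross_margin(revenue, compute_cost, other_cogs):
    """Gross margin as a fraction of revenue."""
    return (revenue - compute_cost - other_cogs) / revenue

revenue, other_cogs = 10e6, 0.5e6
for compute in (2e6, 4e6, 6e6):
    gm = gross_margin(revenue, compute, other_cogs)
    print(f"compute ${compute / 1e6:.0f}M -> gross margin {gm:.0%}")
```

Every dollar shaved off compute flows straight to gross margin, so vertical ownership of infrastructure (vs. paying a reseller's markup) compounds quickly at scale.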
7. Power and Energy: The New Limiting Factor
[25:02]
- Nuances of Power Shortages: The US grid isn't simply undersupplied; much available capacity is "stranded" by peak-oriented utility design and distribution constraints.
- Short-Term Bottlenecks:
- Construction materials (e.g., structural steel), specialized labor (electricians), transformers, substations.
- “Bring your own capacity” trend—combining solar, natural gas, etc., to bootstrap additional supply.
- Energy Storage and Distribution: Investment in companies (like Taurus) that optimize the delivery and storage of surplus or peak power, making more of the existing grid usable.
- Quote:
"The true bottleneck, at least in the short term... is things like structural steel, finding electricians, substations, transformers, air chillers."
— Neil Tiwari, [27:06]
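The "stranded capacity" point can be made with back-of-the-envelope arithmetic: a grid built to serve its peak sits idle much of the day. The hourly load profile below is invented for illustration.

```python
# Back-of-the-envelope sketch of "stranded" grid capacity:
# a grid sized for peak load has idle headroom off-peak.
# The 24-hour load profile is invented for illustration.

peak_capacity_mw = 1000                                  # grid built for this peak
hourly_load_mw = [600] * 8 + [800] * 8 + [1000] * 4 + [700] * 4  # 24 hours

headroom_mwh = sum(peak_capacity_mw - load for load in hourly_load_mw)
utilization = sum(hourly_load_mw) / (peak_capacity_mw * 24)
print(f"daily idle headroom: {headroom_mwh} MWh, utilization: {utilization:.0%}")
```

In this toy profile a quarter of the grid's daily energy capacity goes unused, which is what storage and load-shifting investments (like the Taurus example above) aim to unlock.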
8. Sovereign Build-Outs and Security
[28:35]
- Sovereigns as New Players: Countries (India, Middle East, etc.) are now rapidly funding their own compute clusters, seeing AI as national security.
- Challenges:
- Sourcing partners capable of building and scaling advanced GPU infrastructure.
- Ensuring cybersecurity and safe ecosystem operations, especially as these clusters often require American or allied expertise.
9. Physical AI: Infrastructure Beyond Digital
[30:00]
- Physical AI: Applies “AI-native” capex financing models to hardware companies—robotics, drones, manufacturing—traditionally considered capital sinks.
- Rationale: AI’s software advances can now unlock profitable scale in physical hardware.
- Analogy:
“Everything we saw starting in 2021 is asset heavy. That’s where you started hearing a lot more about us. And I think physical AI is actually an extension of that.”
— Neil Tiwari, [30:00]
- Financing Implication: Project finance and debt-backed contracts, like in compute, will be key for the physical AI buildout.
10. Capital Rotation and The End of “Software Eats the World”?
[33:00]
- Market Observation: There’s been rapid public market rotation out of traditional software, consulting, and real estate firms, with increasing capital diverted to AI infrastructure and native AI companies.
- Host’s Take: The market is likely overreacting with sector-wide pessimism; individual companies’ ability to integrate AI will matter more than sector classifications.
- SaaS Paradox: While AI is disrupting categories, SaaS businesses are now at their highest free cash flow margins and lowest multiples in years.
- Notable Quote:
"What’s happening right now is there’s a hammer being hit across all names and not specific individual names that might not be using [AI] as well."
— Neil Tiwari, [35:11]
Notable Quotes & Memorable Moments
- On early investment foresight:
"We made our first investment before the AI trade started. But we added a lot of optionality…"
— Neil Tiwari, [01:51]
- On depreciation risk:
"In these kind of debt structures it doesn't really matter because the debt's fully paid off by the end of the debt term against committed contractual contracts…"
— Neil Tiwari, [10:33]
- On inference cloud future:
"Can the inference clouds like Base10 deliver reliability you would expect from a traditional cloud?... the distributed data center operations that they consume today do not offer that reliability."
— Sarah Guo, [21:59]
Important Segment Timestamps
- [00:39] — Introduction to Magnetar's business and AI position
- [06:34] — The scale/capital problem in AI infrastructure
- [08:25] — Innovations in debt structure for compute financing
- [13:01] — Supply constraints and evolution of the bottleneck
- [15:26] — Addressing concerns about “circular financing”
- [17:59] — The rise and complexity of inference workloads
- [25:02] — Deep dive: power and energy markets, grid bottlenecks
- [28:35] — Sovereigns' entry into AI infrastructure
- [30:00] — Financing the physical AI/robotics boom
- [33:00] — Capital rotation out of SaaS and sector-wide market reactions
Conclusion & Takeaways
This episode pulls back the curtain on the “financial rails” powering the immense growth of AI infrastructure—demystifying how creative capital structuring fuels both digital and physical AI advancements. Neil Tiwari provides a rare investor’s lens on supply chain realities, power grid challenges, the shift to distributed inference, global competition, and how deeply financial engineering now shapes AI’s trajectory.
For anyone navigating AI, tech, or infrastructure—this conversation is essential listening.
