Azeem Azhar’s Exponential View
Episode: Inside the Economics of OpenAI (Exclusive Research)
Date: February 13, 2026
Host: Azeem Azhar
Guests: Jaime Sevilla (Founder, Epoch AI), Hannah Petrovic (Exponential View), Matt Robinson (AI Street, moderator)
Episode Overview
This episode explores an increasingly pressing question: Do the economics of OpenAI and other frontier AI companies actually make sense? As AI firms command massive valuations and capital continues to pour in, Azeem Azhar presents exclusive collaborative research between Exponential View and Epoch AI to determine whether the high costs of developing and maintaining cutting-edge AI models can be justified by current and future revenues. The conversation, moderated by financial journalist Matt Robinson, delves into the margins, business models, infrastructure constraints, and long-term sustainability of companies like OpenAI.
Key Discussion Points & Insights
1. Why Examine AI Economics Now?
- AI companies such as OpenAI and Anthropic are now valued in the hundreds of billions of dollars, but it's unclear if these valuations are sustainable given immense infrastructure and model development costs (00:00).
- The episode aims to answer: Are these companies like Uber in its early years (protracted losses, eventual profits), or is a sustainable business model still out of reach?
- Research was conducted in partnership with Epoch AI to piece together public data and create a robust financial picture of frontier AI firms (00:39).
2. The Financial Framework: Margins & Model Lifecycle
Jaime Sevilla (Epoch AI):
- Gross Margins: During GPT-5’s peak, OpenAI likely earned more in revenue than the raw cost of compute, but only a small margin (or an overall loss) remains after accounting for all expenses, including staff, sales, administrative costs, and Microsoft’s revenue share (03:00).
- Model Lifespan Matters: A 50% gross margin looks healthy until you account for high R&D costs and the brief window of peak model performance (a short "shelf life"), which together quickly consume those margins (04:30).
"If you look at how much [OpenAI] spent in R&D in the four months before releasing GPT5, that quantity was likely larger than what they made in gross profits during the whole tenure of GPT5 and GPT5.2." — Jaime Sevilla (05:42)
- Building a clearer framework for assessing AI business models means tracking not just current operating profit but the full lifecycle of model investments and replacements (03:54, 06:41).
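The lifecycle framing above can be sketched as a back-of-envelope calculation. All figures below are hypothetical placeholders chosen to illustrate the mechanism, not numbers from the episode or the research:

```python
# Back-of-envelope model-lifecycle economics, per the framing in this section.
# Every number here is a hypothetical placeholder, NOT a figure from the episode.

def lifecycle_profit(revenue: float, gross_margin: float, rd_cost: float) -> float:
    """Gross profit over a model's whole tenure, minus the R&D spent to build it."""
    return revenue * gross_margin - rd_cost

# A 50% gross margin looks healthy in isolation...
gross_profit = 10_000_000_000 * 0.50   # $5B gross profit over the model's life

# ...but if R&D for the model (and its replacement) exceeds that gross profit,
# the full lifecycle is net-negative despite the "healthy" margin.
net = lifecycle_profit(revenue=10_000_000_000, gross_margin=0.50,
                       rd_cost=6_000_000_000)
print(net)  # -1000000000.0
```

The point of the sketch is that a per-token or per-query margin says little on its own; the relevant unit of analysis is the whole model generation, R&D included.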
3. Methodology: Piecing Together the Puzzle
Hannah Petrovic (Exponential View):
- Used public historical numbers and projected forward (e.g., sales and marketing expenditures: $1B in 2024, $2B in H1 2025) to estimate overall costs and margins (06:41).
- Disaggregated different cost categories as much as possible for realism, given much of the data is not public (07:10).
4. The Problem of Ephemeral Dominance
Azeem Azhar:
- Even market-leading models are only at the top for a few months before being superseded.
- Enterprises lag in switching models, but consumer preferences can shift overnight with API and front-end changes (07:33).
- The short life of each model complicates both cost recoupment and strategic planning.
5. Rising and Shifting Compute Costs
Jaime Sevilla:
- Contradicting common wisdom, compute costs are not declining for current frontier models; in fact, they are rising as model scale increases (10:11).
"Pre-training is not dead at all. People are building hundred billion dollar data centers for a reason." — Jaime Sevilla (10:31)
- The business model right now isn’t immediate profitability, but demonstrating scalability and future growth to investors (10:53).
6. OpenAI: A Different Type of Tech Company
- OpenAI’s investment requirements are unlike conventional software businesses—high upfront R&D and infrastructure, fast depreciation of models (11:46).
- Gross profit margins are lower than classical software businesses.
- “Foundation Labs don’t look like software businesses, they look like something different.” — Azeem Azhar (12:34)
7. Ad-Based Monetization: Path or Red Herring?
Discussion led by Matt and Jaime:
- OpenAI considering in-product ads: With 800M+ weekly users, ads could yield billions but won't cover hundred-billion-dollar infrastructure ambitions (13:56).
- Ads serve more as a proof of potential profitability to investors rather than a core, long-term revenue model (15:00).
- Meta’s experience: AI-driven ads generated ~$60B ARR by keeping users engaged, but chatbot-based ad models are largely untested (15:31, 15:54).
8. Infrastructure: The Real Endgame?
- Infrastructure (data centers, power) may be the most durable asset in a space where software models rapidly depreciate (17:36).
"If you think that the software part is rapidly depreciating, you might want to get in on this part of building and serving infrastructure at a scale." — Jaime Sevilla (18:15)
- OpenAI’s ambition to build gigawatt-scale power supply is noted as a strategic move.
9. Consumer vs. Enterprise AI—Where’s the Stickiness?
Hannah Petrovic:
- OpenAI is split between consumer (60%) and enterprise (40%) revenue streams—enterprise offerings may provide more stability and predictable returns (18:38).
- Consumer space is highly competitive and “frictionless,” e.g., with devices now offering multiple model options instantly.
Azeem Azhar:
- User “inertia” could rise as models become capable of running for long, complex agentic tasks—current high compute costs might eventually be justified by such use cases (21:24).
10. Bottlenecks: GPUs or Energy?
Unanimous among guests:
- Despite the ‘energy crisis’ narrative, compute capacity (GPUs and data center hardware) is a tighter bottleneck than power supply; chip supply chains are fragile and slow to expand (22:53).
“The GPU part... is being choked on production in a few factories in Taiwan. That’s probably where the real bottleneck is.” — Jaime Sevilla (23:26)
Azeem Azhar:
- Many energy constraints are regulatory and supply-chain issues rather than fundamental physical limits.
- The market's recent overreaction was due to missing just how supply-constrained companies are on compute (24:16).
11. Will Edge Devices Matter?
Hannah Petrovic:
- Edge model performance is improving, but the relentless drive for higher capability means most significant inference will continue in powerful data centers (26:39).
- For the foreseeable future, the best models will be out of reach for consumer devices due to compute requirements (27:11).
12. Sticky Demand and Unused Capacity
Matt Robinson:
- Enterprises are reluctant to relinquish compute allocations, analogous to airlines preserving inactive flight slots during COVID-19 (29:14).
- High “reserved but idle” compute demand further intensifies infrastructure constraints and CapEx requirements.
13. Value-for-Money: Agents vs. Humans
Azeem Azhar & Matt Robinson:
- As model quality improves, AI agent labor becomes dramatically cheaper than comparable human labor: “$20 an hour for a junior analyst, 50 cents for an AI agent. That gap is narrowing fast.” (32:52-32:57)
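The quoted hourly rates imply a striking cost multiple, worth making explicit (a simple calculation using only the two figures quoted in the episode):

```python
# Agent-vs-human unit economics using the hourly rates quoted in the episode.
human_rate = 20.00   # $/hour, junior analyst (quoted)
agent_rate = 0.50    # $/hour, AI agent (quoted)

cost_multiple = human_rate / agent_rate
print(cost_multiple)  # 40.0
```

At these rates the agent is roughly 40x cheaper per hour, which is why even a partial closing of the quality gap shifts the value-for-money comparison so quickly.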
14. Most Surprising Research Result
- Hannah Petrovic: Surprised that actual gross margins are as high as 50%, given narrative of perpetual losses (33:16).
- Jaime Sevilla: More pessimistic about overall profitability—margins are closer than expected, but not enough to fully amortize R&D and model replacement costs (33:46).
"They seem to be great at inference...but after you account for the cost of development, the thing looks much closer than what I expected." — Jaime Sevilla (34:27)
15. Business Strategy Divergences: OpenAI vs. Anthropic
Azeem Azhar:
- OpenAI’s broad “capture-every-market” strategy may have spread resources thin, compared to Anthropic’s narrower focus (36:57).
- Early deals with Microsoft (20% revenue cut) are a drag on operating profits.
16. Model Swap and Future Interoperability
- Increasing interest in model routers and “model agnostic” application layers, especially in finance; larger companies are less nimble to swap models due to governance and integration costs (40:00, 42:49).
- Agent “identity” may persist even as underlying models change, making model upgrades in enterprise less disruptive than anticipated (43:35).
17. The Near Future: Personal Agents & Unleashed Demand
- The panel is bullish that everyone will soon have a capable AI assistant handling a range of tasks—but mass market “trillion-dollar” value will require business adoption (47:21-47:55).
- Genuine agentic workflows will mean a step change in compute consumption as personal agents transact on users’ behalf.
18. Final Thoughts & Future Research
- Models should be seen as “rapidly depreciating assets,” not enduring platforms—strategic focus may need to shift to infrastructure and sustainable gross margins (48:14).
- The team is continuing their research, anticipating further insights with any OpenAI IPO (49:19).
Notable Quotes & Memorable Moments
| Timestamp | Quote | Speaker |
|-----------|-------|---------|
| 05:42 | "If you look at how much [OpenAI] spent in R&D in the four months before releasing GPT5, that quantity was likely larger than what they made in gross profits during the whole tenure of GPT5 and GPT5.2." | Jaime Sevilla |
| 10:31 | "Pre-training is not dead at all. People are building hundred billion dollar data centers for a reason." | Jaime Sevilla |
| 12:34 | "Perhaps Foundation Labs don't look like software businesses, they look like something different." | Azeem Azhar |
| 17:36 | "[If software is rapidly depreciating]… you might want to get in on this part of building and serving infrastructure at a scale." | Jaime Sevilla |
| 23:26 | "The GPU part...is being choked on production in a few factories in Taiwan. That's probably where the real bottleneck is." | Jaime Sevilla |
| 33:16 | "I was actually quite surprised the margins were where they were, as in the gross margins, because I wasn't expecting them to be around 50%. That seems pretty good…" | Hannah Petrovic |
| 34:27 | "…after you account for the cost of development, the thing looks much closer than what I expected. I still remain bullish, but I'm now much more temperate…" | Jaime Sevilla |
Segment Timestamps
- 00:00–02:27: Framing the trillion-dollar AI valuation question and introducing the panel
- 02:27–06:41: Research scope, methodology, key takeaways on margins and costs
- 06:41–08:30: Building the financial framework—dealing with incomplete data
- 08:30–11:11: Short model lifespans & implications for strategy
- 11:11–13:24: Compute costs, scaling ambitions, and strategic models
- 13:24–16:50: Ad-based monetization and its pros, cons, and precedents
- 17:27–22:20: Infrastructure as an enduring asset and future enterprise focus
- 22:20–26:39: GPU vs. energy bottlenecks and supply chain challenges
- 26:39–29:29: Prospects for edge inference and competing CapEx pressures
- 29:29–33:16: Value of agentic compute, increasing user willingness to pay
- 33:16–36:57: Surprises in margins, business model uncertainties, and strategic tradeoffs
- 36:57–40:36: Business focus: OpenAI vs. Anthropic, consumer/enterprise split
- 40:36–43:35: Model-agnostic enterprise adoption, model routers, and governance challenges
- 43:35–47:21: Agent memory, model swapping, future of AI assistants
- 47:21–49:19: Recap, models as depreciating assets, research directions
Tone & Takeaways
The episode is in-depth, analytical, and refreshingly candid, filled with data-driven speculation and grounded assessments. There’s an air of both excitement (“this is where the future is built”) and caution (“the margins are much closer than expected”). The researchers and panelists are collectively bullish on AI’s long-term societal and business impact—but clear-eyed about the enormous challenges, structural constraints, and business model unknowns that must be navigated.
Summary prepared for listeners who want a detailed, nuanced understanding of the current economic realities and future prospects for OpenAI and the broader frontier AI sector.
