Podcast Summary: Latent Space – Bitter Lessons in Venture vs Growth: Anthropic vs OpenAI, Noam Shazeer, World Labs, Thinking Machines, Cursor, ASIC Economics
Guests: Martin Casado & Sarah Wang of a16z
Date: February 19, 2026
Hosts: Alessio (Kernel Labs), swyx (Latent.Space)
Episode link: Latent Space Show Notes
Overview
This episode of Latent Space explores the shifts reshaping AI venture and growth investing: foundation model company strategy, investment theses, talent wars, infrastructure versus applications, and the future of AI hardware and model economics. Martin Casado and Sarah Wang of a16z, major figures in tech investing, offer candid insider commentary on raising capital, the industry's blurring lines, AI's capital flywheel, the changing calculus for founders and investors, and areas of underinvestment, all framed by bitter lessons drawn from frontier labs like Anthropic and OpenAI.
Key Discussion Points & Insights
1. Blurring Lines: Venture vs. Growth, Infra vs. App
- Venture and Growth Are Colliding
- Capital rounds in AI are now enormous, even at so-called "seed" or "Series A" stages, forcing venture funds to team up with growth investors far earlier than in previous eras.
- Quote (Martin): "Most of these deals ... are like this hybrid between venture and growth." [02:12]
- Companies now need intense business development (BD) and compute negotiations early on, tasks previously reserved for much later stages. [02:32]
- Infrastructure vs. Applications
- AI model companies are simultaneously infrastructure providers and direct-to-user apps; the classical distinction is eroding.
- Quote (Martin): "What is a model company? ... It's also an app because it touches the users directly." [05:24]
- The new capital flywheel: raise for compute → achieve a breakthrough → ship it in an integrated app → gain users and share → raise again and repeat. [06:24]
- Quote (Sarah): "You raise money for Compute, you pour that, ... you get some sort of breakthrough, you funnel the breakthrough into your ... application ... you raise money at the peak momentum and then you repeat." [06:24]
2. Bitter Lessons: Foundation Model Capital Flywheel
- The Flywheel Effect & Market Structure
- If a foundation model company (e.g., Anthropic, OpenAI) can repeatedly raise 3x as much as the aggregate of companies built atop its platform, it can "consume" the ecosystem, an unprecedented scenario in tech.
- Quote (Martin): "They can outspend the aggregate of companies on top of them and therefore you'll necessarily take their share, which is crazy." [10:45]
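The out-spend claim above is simple compounding arithmetic. As a hypothetical sketch (only the 3x multiple comes from the episode; the dollar figures and round count are assumptions for illustration):

```python
# Hypothetical sketch of the "out-raise" dynamic Martin describes.
# Only the 3x multiple is from the discussion; all figures below are made up.

def cumulative_spend(per_round_billions: float, rounds: int) -> float:
    """Total capital deployed across a number of funding rounds."""
    return per_round_billions * rounds

ecosystem_per_round = 10.0               # $B raised per round by all apps combined (assumed)
lab_per_round = 3 * ecosystem_per_round  # the model lab out-raises the aggregate 3x

print(cumulative_spend(lab_per_round, 4))        # lab: 120.0 ($B over 4 rounds)
print(cumulative_spend(ecosystem_per_round, 4))  # ecosystem: 40.0 ($B over 4 rounds)
```

Under these assumed numbers, the lab deploys three times what its entire ecosystem can, which is why Martin calls the dynamic unprecedented.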
- Tension: AGI Aspirations vs. Product Focus
- AI founders face a unique trade-off: pour resources (especially compute) into pursuing AGI, or allocate them to productization for current revenue.
- Quote (Sarah): "The best researchers in the world have this dilemma of, okay, I want to go all in on AGI, but it's the product usage revenue flywheel that keeps the revenue in the house to power all the GPUs to get to AGI." [12:23]
- Founder Dynamics
- More movement than ever among AI founders, amplified by stratospheric talent valuations and media attention (the "fishbowl effect").
- Quote (Sarah): "If you're a founder in AI, you could fart and it would be on the front page of, you know, the Information these days." [14:39]
- Talent wars hit peak frenzy in 2025, but high compensation cascades and acquisition mania continue. [16:17]
- Strategic money and acqui-hires are at historic highs. [16:44]
3. Areas of Underinvestment & Hype Cycles
- Boring Software Is Out of Fashion
- Traditional, solid software companies in huge markets ("boring enterprise") are currently underfunded, even those with attractive growth trajectories.
- Quote (Martin): "It's almost become a meme, right? Which is like if you're not basically growing from 0 to 100 in a year, you're not interesting. Which is just the silliest thing to say." [18:06]
- Robotics & Hardware: The Hard Slog
- Hardware and robotics attract capital, but there are few horizontal plays; most successful bets are vertical (agriculture, mining).
- Quote (Sarah): "We just haven't seen the ChatGPT moment happen on the hardware side. And the funding going into it feels like it's already taking that for granted." [19:52]
- The American Dynamism (AD) team at a16z handles most such deals because of their complexity and vertical nature. [21:28]
4. ASICs and Chip Economics
- Custom AI Chips (ASICs) Now Economically Sensible
- For billion-dollar AI training runs, custom silicon becomes cost-effective even at single-model scale.
- Quote (Martin): "If it's a billion dollar training run, then the inference for that model has to be over a billion, otherwise it won't be solvent ... You can tape out a chip for $200 million." [23:19]
- The main challenge is time-to-market, not cost. [23:12-23:19]
- Oligopolistic hardware trends may shape geopolitics (TSMC, 'America First', supply-chain concentration) and where investors focus. [24:32]
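Martin's break-even argument can be sketched as back-of-the-envelope arithmetic. The ~$1B training-run and ~$200M tape-out figures come from his quote; the GPU-vendor gross-margin assumption below is mine and purely illustrative:

```python
# Back-of-the-envelope sketch of Martin's ASIC break-even argument [23:19].
# The $1B training-run and $200M tape-out figures are from the quote; the
# vendor-margin assumption is illustrative, not from the episode.

def asic_pays_off(tapeout_cost: float, accelerator_spend: float,
                  vendor_margin: float = 0.5) -> bool:
    """A custom chip is worth it if the vendor margin you avoid paying on
    your accelerator spend exceeds the one-time tape-out cost."""
    savings = accelerator_spend * vendor_margin
    return savings > tapeout_cost

# $1B of accelerator spend vs. a $200M tape-out: even at an assumed 50%
# vendor margin, the avoided cost ($500M) dwarfs the tape-out.
print(asic_pays_off(tapeout_cost=200e6, accelerator_spend=1e9))  # True
```

The same arithmetic shows why the calculus only works at frontier scale: at, say, $300M of accelerator spend, the assumed savings no longer cover the tape-out, which supports Martin's point that time-to-market, not cost, is the binding constraint for billion-dollar runs.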
5. Geographic and Market Focus
- Bay Area Resurgence
- The Bay Area has regained its status as the AI epicenter, with other regions (NYC, Miami, LA) being secondary.
- Quote (Martin): "The Bay Area is very much back." [25:03]
- Later-stage AI companies are more globally distributed, but early networks and support are still anchored in the Bay Area. [26:05]
6. AI in Investing Practice
- AI as a Co-Worker for Investors
- Tools like Anthropic's Claude Coworker now directly automate classic data-analysis chores, saving substantial analyst time.
- Quote (Sarah): "We gave it a raw file. Boom. Perfectly accurate. We checked the numbers. It was amazing. That was my aha moment." [27:13]
7. Speculative Futures: Consolidation vs. Fragmentation
- Industry Bifurcation Scenarios
- One path: rapid model convergence → oligopoly (a few companies dominate all tasks).
- Other path: endless specialization, fragmentation, and a flourishing open-source ecosystem.
- Quote (Martin): "All the entire industry kind of hinges on like two potential futures ... oligopoly ... or ... normal software, the universe is complicated." [29:12]
- Revenue and capital flows currently "borrow from the future," with companies surviving on repeated funding rounds; a reckoning will come when growth slows or capital tightens. [32:09]
- The value of highly specialized models and agent applications (e.g., legal, code) may persist even as general models improve, especially for tasks where marginal improvements have diminishing value. [33:36]
8. AI Coding Models & AGI Completeness
- Specialist vs. Generalist Models in Coding
- Discussion of whether purpose-built code LLMs are really differentiated from "AGI-complete" general models with excellent bedside manner.
- Quote (Martin): "There's no such thing as a coding model ... it's good at coding, but it's got to be good at everything." [35:55]
- The host and Martin disagree here; the host expects at least two "axes" to persist: coding-specialist and generalist.
- Personal Coding Projects: World Labs & Gaussian Splats
- Martin personally writes code for Spark JS, a renderer for 3D Gaussian splats, as part of World Labs (Fei-Fei Li's company building foundation models for 3D scenes).
- Quote (Martin): "I work on an open source library called Spark JS, ... JavaScript rendering layer ready for Gaussian splats." [37:29]
- Spark JS is co-developed with Andreas Sundquist; it is used to solve tough scaling problems for new 3D content types. [38:51]
9. Foundation Model Companies: Thesis & Evaluation
- How a16z Picks Model Teams
- Investment starts with "n of 1" founders and teams with provable talent and focus, and a demonstrated ability to move the needle in AI (e.g., Ilya Sutskever, the team at Thinking Machines, Noam Shazeer).
- Quote (Sarah): "At this point in time, we actually believe that there are n of 1 founders for their particular craft and they have to be demonstrated in their prior careers." [44:28]
- Specialization can create lasting value even in a world of powerful general models; see ElevenLabs still leading in audio.
- Revenue growth after capability breakthroughs is unlike anything seen in SaaS: a company can go from zero to tens of millions in revenue essentially overnight. [47:23]
- Debunking Rumors and Narratives
- There is a vast gap between industry rumor on social media and actual operational reality inside leading AI startups.
- Quote (Martin): "I've never seen the perception of the truth be further from the truth industry wide ever." [49:34]
- Founders are encouraged to "tune out the noise and focus on building." [51:34]
10. Application Layer Success Story: Cursor
- Cursor: Bottom-Up Model Building
- Cursor built a leading coding model with dramatically lower spend, exemplifying an app-first, vertical-to-model approach.
- Quote (Martin): "For a small fraction of the cost ... developed an almost SOTA model, which for a period of time was the most popular coding model in the world." [52:24]
- The margin question for applications on "the token path" is existential; Cursor's focus and success show that domain-specific, product-led companies can win even as foundation models dominate.
- Outlook on Agent Labs
- Agent-like applications that price against end-user value (time saved) may earn better margins than those selling at per-token rates.
- Quote (Host swyx): "Agent Labs ... will probably have a better time with the margins because they price against the end user hours spent or like human labor, whereas models get commodity price per token." [54:32]
- However, models going "first party" (vertical integration) can undercut or consume the application layer, a delicate dance seen throughout tech history.
Notable Quotes & Timestamps
- "[Sarah] has done the most kind of aggressive investment thesis around AI models ... the broadest investor." – Martin Casado [00:55]
- "Raise capital, turn that directly into growth, use that to raise three times more ... you literally can outspend ... the aggregate of companies on top of you." – Martin [10:45]
- "If you're a founder in AI, you could fart and it would be on the front page of, you know, the Information these days." – Sarah Wang [14:39]
- "We just haven't seen the ChatGPT moment happen on the hardware side." – Sarah [19:52]
- "If it's a billion dollar training run, then the inference for that model has to be over a billion, otherwise it won't be solvent ... You can tape out a chip for $200 million." – Martin [23:19]
- "We're hiring ops people ... it's still very manual as far as I can tell." – Host swyx [26:53]
- "Claude Coworker ... done in a few seconds. That was my aha moment." – Sarah [27:13]
- "There's no such thing as a coding model ... it's got to be good at everything." – Martin [35:55]
- "At this point in time, we actually believe that there are n of 1 founders for their particular craft and they have to be demonstrated in their prior careers." – Sarah [44:28]
- "Agent Labs ... will probably have a better time with the margins because they price against the end user hours spent or like human labor, whereas models get commodity price per token." – Host swyx [54:32]
Segment Timestamps
- Blurring Venture and Growth: 01:46–06:24
- AI Capital Flywheel & Model Company Power: 06:24–11:02
- Founder Dynamics & Talent Wars: 13:32–17:33
- Underinvested Areas & Robotics: 18:06–22:46
- ASIC Economics: 22:46–24:32
- Geography & Market Structure: 25:03–26:49
- AI Automation in Investing: 26:49–28:03
- Specialist vs. Generalist Models: 29:12–36:45
- World Labs, Spark JS, Coding: 37:06–41:55
- Valuing Foundation Model Startups: 42:04–47:57
- Media Gossip vs. Reality: 49:34–51:44
- Cursor & Agent Labs: 51:50–55:04
Memorable Moments
- Sarah recalling how an LLM did hours of growth-team analysis in seconds (27:13)
- Martin detailing the existential industry risk of model-layer consolidation (29:20)
- Both guests lamenting the current social media rumor mill’s distortion of reality (49:34–51:44)
- Deep discussion on what makes a specialty model viable and worth venture investment (44:28–47:57)
Conclusion
This episode offered a candid, insider's look at the current state of AI investing, model economics, and strategic maneuvering at the bleeding edge. Martin and Sarah’s tag-team dynamic highlighted both the hard, capital-driven realities and the human/technical nuances of the industry. Their perspectives are invaluable for engineers, founders, and anyone trying to understand where the next decade in AI will be won, lost, and funded.
