This Week in Startups — Episode E2207 Summary
Inside Harvey AI’s $8 Billion AI Lawyer App, PLUS How OpenRouter Unites the LLMs
Date: November 11, 2025
Host: Jason Calacanis (with guest interviewer Alex)
Episode Overview
In this episode, the show spotlights two of the fastest-moving startups in the AI world: Harvey AI, the much-discussed “AI lawyer” for the professional legal sector, and OpenRouter, which unifies access to dozens of large language models (LLMs) through a single API. The episode breaks down how AI is revolutionizing legal work, the infrastructure and security challenges of enterprise AI, the rapidly evolving landscape of LLMs, new business models emerging in AI infrastructure, and what all this means for startups. The episode closes with classic advice from Jason to founders about building sustainable moats in hyper-competitive tech markets.
Harvey AI: Building the AI Platform for Law
Guest: Gabe Pereyra, Co-founder & President, Harvey AI (interviewed by Alex)
Harvey AI at a Glance ([01:58]-[03:16])
- What Harvey Does:
Harvey offers law firms a unified workspace with two major components:
  - Vault: lets firms bring their data and case history onto the platform
  - Assistant: uses AI to assist with drafting, research, and legal workflows
- Enterprise Collaboration:
Focus not just on helping lawyers, but on building admin, governance, and collaboration tools for both lawyers and their clients ("client matters" are legal's version of projects).
Quote ([03:16]):
"The big challenge for legal was there didn't exist this kind of single workspace where you could work on these. In legal, they're called client matters." — Gabe
Growth, Distribution & The ‘Network Effect’ in Client Acquisition ([05:11]-[06:43])
- Network Effect Emerging:
Enterprises and law firms are prompting each other to adopt Harvey, leading to organic expansion: "Law firms going to their clients and saying, hey, we can work better with you if you buy Harvey, and the reverse, too."
- Result:
Meaningful reduction in customer acquisition cost and friction for Harvey.
Why AI is a Natural Fit for Legal ([06:43]-[08:27])
- Similarity to Coding:
Law's highly structured, rule-based language and vast corpus of case law parallel the abundance of code available to train coding AIs.
- Core Workflow:
Junior associates must synthesize massive sets of information, and AI is now integrating across tools (web search, Lexis, discovery corpora).
Quote ([07:14]):
"There's a ton of analogies between code and legal ... these feel like the two domains where you've really seen these types of companies be able to be very successful." — Gabe
Challenges: Public vs. Private Data, Model Limitations ([08:41]-[13:39])
- Early Days:
GPT-3 could answer consumer legal questions surprisingly well. In corporate law, however, most data is private and unavailable for general AI training.
- Biggest Problem:
Enterprise legal work involves unique, highly confidential client data. General models struggle to extrapolate expertise in specialized, private matters.
Quote ([11:27]):
"Most of the data you're working there is very sensitive client data that actually never goes into the public models. And so a lot of that is missing." — Gabe
Architecture: Security & Multi-Model, Multi-Client Challenges ([13:08]-[17:25])
- Integration Complexity:
Simply connecting all the necessary legal data is hard, before even considering model fine-tuning or RAG (retrieval-augmented generation).
- Partition & Governance:
Systems must partition client data securely; neither Harvey nor law firms can freely train on cross-client information.
- Regulatory Parallels:
Building "eyes-off" solutions and rigorous security, similar to requirements in finance and healthcare.
Expanding Beyond Legal: Other Regulated Advisor Sectors ([17:25]-[18:25])
- Model for Expansion:
Harvey’s architecture is already moving into “adjacent industries” where client-advisor data is sensitive (tax, investment banking, consulting, compliance).
Business Model, Customer Mix, and Growth ([18:25]-[20:41])
- Scale:
$100M ARR (annual recurring revenue) by August; 500+ law firm clients, now moving into mid-market and Fortune 500 companies.
- Marketplace Tension:
Tension between law firms (which may fear margin erosion from efficiency gains) and clients (who want lower bills), with potential for a "win-win" by changing delivery and pricing models.
Quote ([22:01]):
"There is some tension of the efficiency." — Gabe (on the billable hour and platform efficiency)
- Vision for Law Firms:
"My gut is ... you will see law firms that are 10 times larger than the current law firms and they're able to operate at a much larger scale because I think Gen [AI] will let them decouple revenue from headcount." — Gabe ([23:16])
Competitive Landscape & Platform Play ([23:44]-[27:04])
- Day-to-Day:
Competes more with other legal tech startups (e.g., Spellbook, Nexle) than with general-purpose model vendors.
- Strategic Differentiator:
Harvey sells an entire AI-first law firm platform (governance, process management, and transformation tools) rather than just another LLM-powered legal chatbot.
- Sales/Adoption Tactics:
Minimal forward deployment; focus on scalable product-building over bespoke integrations.
Quote ([27:28]):
"Pretty much 100% of our platform is things that we have built that we are selling to every customer." — Gabe
OpenRouter: Routing the AI Revolution
Guest: Alex Atallah, Co-founder & CEO, OpenRouter (interviewed by Alex)
OpenRouter in a Nutshell ([31:03]-[33:36])
- What It Is:
OpenRouter provides a single API to access, compare, and use a wide range of LLMs (both closed and open source) from labs around the world.
- Key Value:
Drastically lowers switching costs, enables instant experimentation and centralized billing, and "deletes" the complexity of working with each LLM vendor separately.
Quote ([33:27]):
"There are all kinds of paper cuts and ways to shoot yourself in the foot when you switch between providers... We delete all of that and just make it so you can instantly test out new models and then use them in production, too." — Alex
Global Model Marketplace, Vendor Competition ([35:36]-[37:44])
- AI Market Dynamics:
OpenRouter boosts discovery and distribution for upstart and international models, making the overall LLM market more competitive and fluid and reducing vendor lock-in.
- Vendor Relationships:
Once a source of tension (big LLM labs worried about losing direct customer ties), these relationships are now recognized as valuable distribution and data channels.
Business Model Details ([41:45]-[44:07])
- Revenue Streams:
- Platform fee (5.5% default, negotiable down for enterprise volume)
- Margin from volume discounts negotiated with LLM providers
- Paid value-added features: observability, file management, team controls, etc.
Quote ([43:10]):
"That's like part of how we have a margin. And then the last component is ... inference-adjacent software that people really want." — Alex
Usage Growth & The Future of LLMs ([44:39]-[46:51])
- Scale:
Tracking 5.7 trillion tokens processed in a week, and growing rapidly as more tasks become LLM-automatable.
- Evals & Benchmarks:
OpenRouter collects rich data on model performance and usage, and plans to expand into benchmarking and community-based, empirical model recommendations for users.
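For a sense of scale, the 5.7 trillion tokens per week quoted above works out to roughly 9.4 million tokens every second:

```python
tokens_per_week = 5.7e12
seconds_per_week = 7 * 24 * 3600  # 604,800 seconds in a week
print(round(tokens_per_week / seconds_per_week))  # about 9.4 million tokens/second
```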
Discovery, Benchmarks & Model Differentiation ([47:43]-[53:07])
- Autorouter:
OpenRouter routes user requests to the "best" model for a given task, based on neutral, real-world performance (no paid placement).
- Dynamic Benchmarks:
Moving beyond static metrics, benchmarks become products: tools for discovering, measuring, and recommending models/functions as task-specific LLM capabilities become more important.
Quote ([52:07]):
"We believe really heavily in dynamic benchmarks that are constantly updating and ... become products as a result." — Alex
The Coming Shift: Inference Spend vs. Salaries ([53:07]-[55:34])
- Predictions:
Co-founder Chris Clark’s thesis: inference (LLM compute) will eventually overtake salaries as the top expense for knowledge-based companies. Alex is cautiously optimistic, noting new human roles (AI orchestration, infra) will remain while many jobs will disappear.
Founder Q&A: Building a Defensible Startup in a Red Ocean
Host: Jason Calacanis ([57:00]-[62:57])
Main Points
- Reality Check:
Rapid development and "vibe-coded" competitors don't prevent real businesses from being built; most copycats will fall away.
- Endurance Wins:
Tech markets are marathons, not sprints; teams with discipline, shipping cadence, fundraising ability, and strategic distribution win out.
- Moats Beyond Product:
True moats are built on product, team execution, sales power, distribution, network effects, and internal capability, not just code.
- Case in Point:
Winners in ride sharing and delivery didn't win by being first or having the best tech, but through scale, distribution, network liquidity, and sales.
Quote ([59:58]):
"Moats then are almost like concentric circles of not only product, but also internal capability, team capacity, and market." — Alex
Classic Jason Advice ([62:10]):
"If it was easy, everybody would be doing it. And it's extremely hard ... a death march in some ways ... the last person standing gets the prize."
Notable Quotes
- On AI for Law:
"We can't train on any of [the client] data ... the law firm can't do that because ... it's actually client data from a bunch of different clients." — Gabe ([14:44]) - On LLM Model Marketplace:
"If AI had zero competition ... there isn't a point to OpenRouter, but the more competition there is ... the more ridiculous it is for a developer to try to take advantage of all those features. And that's where we come into play." — Alex ([35:36]) - On Early Founder Fear:
"A lot of people start the race ... eventually you get down to a two or three horse race in every one of these businesses." — Jason ([57:00]) - On the Future of Knowledge Work:
"There are going to be some domains where AI models are just kind of not scaling super quickly ... and jobs that humans mostly do today ... are probably going to be mostly replaced by AI." — Alex ([53:39])
Key Timestamps
- 01:58 — Introduction to Harvey AI, company background and product
- 05:11 — Network effects and collaborative client-law firm use
- 07:14 — Why law is fertile ground for AI innovation
- 08:41 — Regulatory and data-silo challenges in legal AI
- 11:27 — What public models can't do and the path forward
- 13:39 — Architectural and security complexities for legal SaaS
- 18:25 — Harvey's growth metrics and customer base expansion
- 22:01 — Market tension: efficiency gains vs. billable hours
- 23:44 — Harvey’s competitive landscape
- 31:03 — Introduction to OpenRouter, business and API overview
- 33:27 — How OpenRouter simplifies access to a diverse LLM ecosystem
- 41:45 — Business model and margins at OpenRouter
- 44:39 — Token volume processed and growth in AI usage
- 47:43 — Autorouter and the importance of unbiased model benchmarking
- 53:07 — Macroeconomic shift: inference cost vs. salaries
- 57:00 — Jason’s founder Q&A: defensibility in fast-moving tech markets
Conclusion
This episode delivers a deep, candid look at how AI is disrupting both infrastructure (OpenRouter) and highly regulated professional services (Harvey AI). It covers real-world challenges of vertical SaaS, the shape of future business models in AI/Law, and strategic insights for founders coping with hyper-competitive ecosystems. The tone is technical but practical, full of actionable wisdom for founders, CTOs, operators, and anyone tracking the future of work and enterprise software.
