Podcast Summary
Episode Overview
Podcast: The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
Host: Harry Stebbings
Guest: Nick Frosst, Co-founder of Cohere
Episode Date: September 1, 2025
Main Theme:
In this episode, Harry Stebbings sits down with Nick Frosst, Canadian AI researcher and co-founder of Cohere, to unpack Cohere’s strategy in competing with OpenAI and Anthropic amidst the billion-dollar AI wars. The conversation dives deep into Cohere’s focus on enterprise, the real bottlenecks for AI progress, skepticism around AGI rhetoric, the need for model and technology sovereignty, and how hype distorts the real promise (and pitfalls) of modern AI. Nick gives candid and contrarian takes on benchmarks, talent wars, funding, societal impact, and the responsibilities of industry leaders.
Key Discussion Points & Insights
1. Lessons from Geoff Hinton and Early AI Days ([05:03])
- Nick’s Experience with Hinton:
Nick shares how working under Geoff Hinton at Google Brain shaped his playful, intuitive approach to research. Hinton encouraged creative thinking, using physical metaphors rather than relying solely on equations.
- "A lot of it was descriptions in the natural physical world... a lot of it's based on intuition." ([05:12])
2. Why Google Lost the Race to Consumer AI ([06:07])
- Slowness to Commercialize:
Despite inventing the transformer, Google failed to quickly commercialize and scale it, allowing others to leapfrog with applications like ChatGPT.
- "It's interesting why that is... what systems are in place to make that be the case?" ([06:18])
3. What Is Cohere and How Is It Different? ([07:02])
- Enterprise-First Focus:
Cohere is building foundational language models, but distinguishes itself by unapologetically focusing on enterprise use cases, fine-tuning models for business tool integration and workflow augmentation.
- "We're unique in our singular focus on bringing this technology to enterprise." ([07:13])
4. Data, Compute, and Algorithm Bottlenecks ([09:04], [09:32])
- Data Remains Scarce:
Even with synthetic data, real-world, high-quality data is still a bottleneck.
- Algorithms Plateau:
Fundamental algorithms haven't changed much. Progress mostly depends on better data and compute rather than radical algorithmic breakthroughs.
- Compute Isn't Everything:
More GPUs don't automatically translate to better models; product and application fit matters more.
5. The Hype and Limitations of Scaling Laws ([10:21], [10:47])
- Skepticism of "Scaling Laws":
Nick questions the industry’s obsession with scaling (i.e., just adding more compute), using GPT-4 vs. GPT-5 as an example of diminishing returns.
- "How much better do you think GPT-5 was than GPT-4?"
"I actually think it was worse." (Harry, [10:50], [10:52])
6. AGI: Definitions, Hype & Industry Responsibility ([12:52], [13:00], [23:22], [56:34])
- No Shared Definition:
Nick defines AGI as when a computer is treated as a person, but claims we are nowhere close.
- Damaging Rhetoric:
Hype about near-term AGI (especially existential threat rhetoric) harms sober policy discussions and technological trust.
- "I don't think Sam Altman has done a service to the world by talking about how close AGI is." ([56:34])
- "The hype around AGI is the most damaging and confusing." ([23:22])
7. Model Specialization & The Myth of Benchmarks ([14:23], [15:08], [17:19])
- Value in Specialization:
The best LLMs are tailored, via fine-tuning, to their context and application. Application fit outperforms generic benchmarks.
- Benchmarks Are Overrated:
Industry, media, and consumers overemphasize benchmarks; Nick thinks they're often "gamed" and not reflective of real-world utility.
- "They are a reflection of how much the model has been trained on those benchmarks." ([19:06])
- "Oh, you can definitely gamify them. Yeah." ([19:12])
8. The War for AI Talent ([21:14])
- Nick Downplays Hype:
While acknowledging some "crazy" talent headlines, Nick suggests value comes from proven impact, not sky-high bidding wars.
- "It's really hard work. It requires a lot of experience, a lot of ingenuity, and a lot of dedication." ([21:46])
9. AI in the Workforce: Displacement, Augmentation & Inequality ([24:11], [27:21], [28:51])
- Human + Machine, Not Human vs. Machine:
LLMs are tools to augment work, not replace core human creativity and culture (especially in enterprise).
- "There has been no independent breakthrough that an LLM has made. The breakthroughs are still people." ([25:12])
- Long-Term Societal Impact:
Nick expects workforce change akin to the Industrial Revolution, with the size and shape of teams evolving, not disappearing.
- "It will fundamentally change the way we all do work soon." ([22:40])
- Role of Policy:
Whether AI exacerbates or alleviates inequality depends on policy, echoing the lessons of past revolutions.
- "If there's good labor policy, I think it could help. If there's bad labor policy, it could hurt." ([28:54])
10. Open vs Closed Models & Sovereignty ([30:23], [31:27], [49:23], [52:06])
- Cohere’s Hybrid Model:
Cohere releases weights for non-commercial research use, balancing community trust with business.
- "That's a good sweet spot for us as a business that allows us to build credibility." ([30:35])
- Sovereignty and Geopolitics:
Nick sees growing demand for sovereign AI infrastructure, akin to owning power plants: foundational models tailored to local language, law, and culture.
- "Having a language model that speaks the language of your country is like building infrastructure for the people of your country." ([49:45])
11. Competing with AI Giants — Cohere’s Strategic Edge ([37:39], [38:23])
- Efficiency Over Opulence:
Cohere's models are optimized to run on just two GPUs, saving customers on deployment costs and sidestepping compute shortages.
- "We have spent orders of magnitude less on creating foundational models than some of the other foundational model companies out there." ([36:02])
- Niche Focus Outmaneuvers Big Spend:
By focusing exclusively on enterprise, Cohere doesn’t have to compete with billion-dollar consumer marketing budgets.
12. The Future of AI Interfaces ([32:49], [52:24])
- Prompting Will Fade:
Prompt engineering as a discipline will become less relevant as models become more intuitive.
- "I think the idea of prompting as a skill will become less relevant." ([33:12])
- Language as the Main Interface:
Nick expects language-driven UI (voice/text) to become the norm for most business computing tasks.
Notable Quotes & Memorable Moments
- On Sam Altman's AGI Hype:
"AI is going to kill the whole world in two years... that was academically disingenuous and I think did a disservice to the technology he loves." — Nick Frosst ([00:00], [56:34], [56:47])
- On Specialization vs. General Models:
"LLMs generalize really well, but they don't generalize as well as you might think. If you want to make the best model for a given interface, it's best to be training the model on that interface." — Nick Frosst ([14:23])
- On Benchmark Culture:
"They're a reflection of how much the model has been trained on those benchmarks." — Nick Frosst ([19:06])
- On the Human Element in Work:
"Most of it is talking to people, understanding the culture, what's going to hit, what's relevant. That's not in the data set of text from the Internet." — Nick Frosst ([25:44])
- On Why Cohere Is Efficient:
"Our model... trained to fit on two GPUs. That's a really important part of our business strategy." — Nick Frosst ([36:02])
- On Building for Legacy:
"The idea of building something or participating in the construction of something that is bigger than you is rewarding and is fundamentally human." — Nick Frosst ([46:43])
- On AI and National Infrastructure:
"Building a language model that speaks the language of your country is like building infrastructure for the people of your country." — Nick Frosst ([49:45])
Important Segments & Timestamps
| Timestamp | Topic/Quote |
|-----------|-------------|
| 05:03 | Lessons from Geoff Hinton on playful research |
| 06:07 | Why Google missed consumer LLMs |
| 07:13 | Cohere's enterprise focus & model differentiation |
| 09:04 | Data bottlenecks even in era of synthetic data |
| 10:47 | Scaling laws skepticism: GPT-5 vs. GPT-4 |
| 12:52 | Defining AGI — "a computer you treat like a person" |
| 19:06 | "Reflection of how much the model has been trained on benchmarks" |
| 21:46 | Talent wars, compensation, and "crazy" industry headlines |
| 23:22 | AGI hype as harmful — the existential threat rhetoric |
| 28:54 | Policy's role in AI-driven inequality |
| 30:35 | Cohere's open-for-research weight release policy |
| 31:52 | Competitive awareness & founder focus |
| 36:02 | Two-GPU model efficiency as strategic advantage |
| 37:39 | Competing with bigger-funded rivals: focus pays off |
| 49:45 | AI as infrastructure; argument for model sovereignty |
| 56:34 | Critique of Sam Altman's AGI predictions |
Flow and Tone
The episode is refreshingly candid, with Nick's contrarian yet measured tone creating a dialogue that both challenges AI industry orthodoxy (especially around AGI, benchmarks, and value capture) and foregrounds practical, philosophical, and ethical questions. Harry's probing style and willingness to debate add dynamism, particularly in segments about job disruption, the value of work, and the societal consequences of technological change.
Conclusion
For listeners who missed the episode:
This conversation offers an insider’s critique of the most hyped narratives in AI, grounded in the lived realities of building — and selling — foundational models to enterprises. Nick Frosst, through examples and argument, emphasizes the importance of pragmatic, evidence-based progress, societal responsibility, and the nuanced tradeoffs involved in model development, openness, policy, and talent.
Best for:
Anyone following AI industry dynamics, model development, enterprise adoption, the future of work, or seeking a no-spin analysis of the challenges and opportunities of modern large language models.
