Eye on A.I. – Episode #321
Nick Frosst: Why Cohere Is Betting on Enterprise AI, Not AGI
Host: Craig S. Smith | Guest: Nick Frosst, Cofounder of Cohere
Date: February 17, 2026
Overview
In this episode, host Craig S. Smith is joined by Nick Frosst, cofounder of Cohere, to unpack why Cohere has chosen to focus on pragmatic, enterprise-focused AI rather than pursuing artificial general intelligence (AGI). Frosst draws on his experience as a foundational AI researcher (including his tenure with Geoffrey Hinton at Google Brain) to outline Cohere’s trajectory, innovations, and philosophy. The conversation traverses technical distinctions between neural architectures, the practical constraints of deploying large language models (LLMs), the realities of enterprise adoption, and the broader industry misalignments around AGI versus targeted utility.
Key Discussion Points
1. Nick Frosst’s Background and Genesis of Cohere
[03:54]
- Nick introduces himself as a cofounder of Cohere, with a background in cognitive science and neural networks from the University of Toronto.
- Discusses working with Geoff Hinton at Google Brain, contributing to foundational neural network and capsule network research.
- Quote: “I remember learning about [neural networks] in 2012… and thought they made a whole lot of sense and were really exciting.” — Nick Frosst ([03:54])
- Met other Cohere cofounders including Aidan Gomez (coauthor of the original Transformer paper) through stints at Google and University of Toronto.
- Founding moment: realization that transformers could be tailored for enterprise applications, not just consumer novelty.
2. Cohere’s Approach: Enterprise AI, Not AGI
[09:43]–[14:34]
- Cohere’s vision is not to pursue AGI, but to create practically useful, efficient, and trustworthy AI tools for enterprises.
- Frosst critiques the shifting and ambiguous definitions of AGI and ASI (artificial superintelligence).
- He uses the “flight analogy” (from Feynman): planes and birds both fly, but in fundamentally different ways—similarly, today’s AIs are “intelligent,” but not in the human sense.
- Quote: “I don't think we've built AGI and I don't think we're going to anytime soon. I don't think the transformer behaves like a person, and I don't think making better transformers makes them more like people.” — Nick Frosst ([10:53])
- Emphasis is on augmenting, not replacing, human capability.
3. Technical Strategy: Efficient, Customizable Language Models
[15:37]–[23:18]
- Key tenets: capital efficiency (models should deliver ROI, not just burn compute), trustworthiness, regulatory compliance, and private deployment.
- Model Distillation:
- Explains the history: “dark knowledge” and distillation as introduced by Hinton (transferring knowledge from large models to smaller models).
- Notes that classic distillation is less practical at the scale of today’s transformer models, given the cost of generating and handling full output probability distributions.
- Cohere instead achieves efficiency through data curation, task focus, and custom architectures.
- Quote: “We train our own models from scratch… there’s about 10 companies in the world that make foundational models from scratch. We are one of them and we are unique in that group in our singular focus on the enterprise.” — Nick Frosst ([22:02])
- Models designed for deployment on limited hardware (e.g., two GPUs), oriented towards enterprise needs (e.g., enterprise search, tool integration).
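The “dark knowledge” distillation idea referenced above can be sketched in a few lines: a student model is trained to match a teacher’s temperature-softened output distribution rather than hard labels. This is a pure-Python toy of the Hinton-era technique, not Cohere’s training code; the logits and temperature are illustrative values.

```python
# Minimal sketch of Hinton-style knowledge distillation ("dark knowledge"):
# the student matches the teacher's temperature-softened output distribution.
# Toy example only -- not Cohere's actual training pipeline.
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature exposes the teacher's "dark knowledge": the
    relative probabilities it assigns to the wrong classes.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# The softened teacher distribution carries more signal than a hard label:
teacher = [6.0, 2.0, 1.0]   # teacher confidently predicts class 0
student = [3.0, 2.5, 0.5]   # student is less certain
loss = distillation_loss(teacher, student)
```

The loss is zero when the student exactly matches the teacher and grows as their softened distributions diverge, which is what makes it usable as a training signal for a smaller model.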
4. Deployment Philosophy: On-Premise and Private by Default
[25:17]–[26:59]
- Security and data privacy at the forefront: rather than bringing customer data to Cohere, Cohere brings its models to the customer’s environment (on-prem, virtual private cloud, etc.).
- Especially critical for regulated industries (finance, healthcare).
5. Real-World Use Cases and ROI
[26:23], [51:11]
- Example clients: Royal Bank of Canada (RBC) uses Cohere’s models for document analysis, earnings reports, internal efficiency.
- Quote: “A large customer of ours… is the Royal Bank of Canada. They’re using our models across the company for things like doing analysis on quarterly earnings reports…” — Nick Frosst ([26:26])
- Healthcare: administrative support (summarizing doctor notes, processing insurance).
- ROI measured by tangible metrics (company-specific): e.g., analysts can now track more companies, or process information faster.
6. Research and Open Science Commitment
[28:56]
- Cohere maintains a significant research output (100+ papers), open models (weights available on HuggingFace), and collaborative ethos via Cohere Labs.
- Balances grounded, product-driven research with broader scientific collaboration.
7. Fine-Tuning, Customization, and the Role of RAG (Retrieval Augmented Generation)
[41:45]
- Revisits RAG (retrieval augmented generation) as the term was originally coined, distinguishing it from looser current usage.
- Many enterprise solutions now “agentic”—combining tool use, search, and iterative workflows.
- Models are further tuned for client-specific requirements (languages, industries, etc.).
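The RAG pattern discussed in this section can be sketched as a retrieve-then-ground pipeline: fetch the most relevant documents, then prepend them to the prompt so the model answers from them. The word-overlap scorer and the prompt template below are toy placeholders, not Cohere’s retrieval stack or API.

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve
# relevant documents, then ground the model's answer in them.
# The scoring function and prompt template are toy placeholders.
def score(query, doc):
    """Toy relevance score: word overlap between query and document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def retrieve(query, corpus, k=2):
    """Return the top-k documents by the toy relevance score."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_rag_prompt(query, corpus):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria menu changes on Mondays.",
    "Operating margin in Q3 was 18%.",
]
prompt = build_rag_prompt("What was revenue growth in Q3?", corpus)
```

In production the overlap scorer would be replaced by embedding or hybrid search, but the shape of the pipeline, retrieve then ground then generate, is the same.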
8. Agents and Multi-Agent Systems: Buzzword or Breakthrough?
[38:22]–[45:16]
- Frosst defines “agentic” LLMs as those able to call tools, iterate, and loop, not just reply to prompts.
- Notes that much of the talk around “AGI via agents/society of agents” is speculative, often substituting one buzzword for another without solving core issues of reliability and autonomy.
- Quote: “I think it's much more augmentative than it is fully replacing things… the idea of a society of agents I don't think is really [likely].” — Nick Frosst ([44:54])
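Frosst’s definition of “agentic” (call tools, iterate, loop, rather than reply once) reduces to a simple control loop. Below is a minimal sketch of that loop; `fake_model` stands in for a real LLM and the `calculator` tool is a made-up example, not any vendor’s API.

```python
# Minimal sketch of an "agentic" LLM loop as defined above: the model can
# request a tool call, see the result, and iterate until it gives a final
# answer. `fake_model` is a stand-in for a real LLM; the tool is made up.
def calculator(expression):
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(history):
    """Stand-in policy: request the calculator once, then answer."""
    if not any(step[0] == "tool_result" for step in history):
        return ("tool_call", "calculator", "17 * 3")
    result = history[-1][1]
    return ("final", f"The answer is {result}.")

def run_agent(max_steps=5):
    """Agent loop: ask the model, execute requested tools, feed results back."""
    history = []
    for _ in range(max_steps):
        action = fake_model(history)
        if action[0] == "final":
            return action[1]
        _, tool_name, arg = action
        history.append(("tool_result", TOOLS[tool_name](arg)))
    return "Gave up after max_steps."

answer = run_agent()  # -> "The answer is 51."
```

The `max_steps` cap is the unglamorous part that matters in practice: without it, the reliability and autonomy issues Frosst flags show up as unbounded loops.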
9. Evaluation, Benchmarks, and Industry Realities
[32:21]
- Evaluation of LLMs is fraught—benchmarks often align poorly to real business needs.
- Cohere’s advice: “figure out what you want the LLM to do… that's your evaluation, and that's going to be a better eval set.”
- Capital efficiency is crucial—many AI pilot projects fail to reach production due to running costs and lack of ROI (referencing the MIT “95% in demo” statistic).
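Cohere’s eval advice (write 10–20 examples of your own problem and check the answers) is itself a tiny harness. A minimal sketch follows; `ask_llm` is a hypothetical placeholder returning canned answers, so accuracy can be computed without any model call.

```python
# Minimal sketch of the task-specific eval Frosst recommends: hand-write
# a small set of (question, expected) pairs for *your* problem and score
# the model on them. `ask_llm` is a placeholder, not a real model call.
def ask_llm(question):
    """Placeholder for a real model call; here, a canned lookup."""
    canned = {
        "What currency does Japan use?": "yen",
        "2 + 2 = ?": "4",
        "Capital of France?": "Berlin",  # deliberately wrong answer
    }
    return canned.get(question, "unknown")

def run_eval(examples):
    """Score the model on a hand-written eval set of (question, expected) pairs."""
    correct = sum(
        expected.lower() in ask_llm(question).lower()
        for question, expected in examples
    )
    return correct / len(examples)

eval_set = [
    ("What currency does Japan use?", "yen"),
    ("2 + 2 = ?", "4"),
    ("Capital of France?", "Paris"),
]
accuracy = run_eval(eval_set)  # 2 of 3 correct
```

A bespoke set like this maps directly to the business task, which is the point: it measures what the deployment needs, not what a public leaderboard rewards.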
10. Enterprise Adoption and the Road Ahead
[36:39]–[54:42]
- Most consumers have tried LLMs, but enterprise adoption lags due to data privacy, model quality, and cost.
- The next phase: LLMs becoming a mundane, deeply integrated part of enterprise workflows—“AI is just going to be part of your work life.”
- Frosst cautions against hype cycles: “I hope we don't have a new buzzword next year. I hope it's just useful.”
- No plans for consumer-facing LLM products; Cohere is firmly enterprise-focused.
Notable Quotes & Memorable Moments
- On AGI and AI Utility:
  "I don't think we've built AGI and I don't think we're going to anytime soon. I don't think the transformer behaves like a person, and I don't think making better transformers makes them more like people."
  — Nick Frosst ([10:53])
- On the 'Flight Analogy' for AI:
  "Planes can carry insane amounts of weight, they're way larger, can go way faster than birds... We've made artificial intelligence. It's just not the way humans do intelligence."
  — Nick Frosst ([11:53])
- On Efficient Models:
  "The model that we released recently is called Command... It requires two GPUs to run; DeepSeek models take about 16 GPUs... so there's models that we outperform that were eight times as hard to deploy from a hardware perspective."
  — Nick Frosst ([21:42])
- On Evaluation:
  "Our suggestion to companies is to say don't look at whatever is the latest eval that's exciting... just figure out what you want the LLMs to do and write 10, 20 examples of that problem and then ask the LLM—see if it gets the right answer."
  — Nick Frosst ([33:12])
- On Market Focus:
  "There's a lot of weird stuff going on and I don't claim to understand or endorse a lot of it, but one of those consumer companies is going to be using AI for consumer applications in an effective way... there's not a lot of people using LLMs anywhere near as much as they could be if they were deployed correctly."
  — Nick Frosst ([57:06])
Important Timestamps
- [03:54] – Nick Frosst's background, collaboration with Hinton, meeting Aidan Gomez, and Cohere's founding moment.
- [10:53] – Why Cohere rejects the AGI narrative; flight analogy for intelligence.
- [15:37] – Technical strategy: efficient models, private deployment, data curation.
- [21:42] – Hardware requirements: Command model and deployment efficiency.
- [22:02] – Cohere as a foundational model builder.
- [26:26] – RBC as a flagship enterprise client.
- [32:21] – Evaluation and the 'eval du jour' problem.
- [38:22] – Defining and deploying “agents” in enterprise AI.
- [41:45] – Role of RAG (retrieval augmented generation) and agentic systems.
- [44:54] – Distinguishing between society of agents and AGI hyperbole.
- [49:16] – Cohere’s current growth and what’s next.
- [51:11] – Measuring ROI for enterprise deployments.
- [53:22] – Predictions for 2026: boring, pervasively useful enterprise AI.
- [57:06] – Rationale for a pure enterprise, not consumer, market focus.
Final Takeaways
- Cohere’s mission is firmly grounded in solving real business problems, not chasing speculative AGI.
- Their approach to AI is pragmatic: building smaller, efficient, customizable LLMs that respect enterprise privacy and can be deployed flexibly.
- The AI industry must break from the hype cycle, ground evaluations in real-world use, and focus on trustworthy, scalable solutions.
- The next 1–2 years will be defined less by flashy breakthroughs or buzzwords, and more by consolidation, deployment, and integration into the fabric of business technology. Cohere aims to be at the center of that transformation.
