The MAD Podcast with Matt Turck
Episode: Ex-DeepMind Researcher Misha Laskin on Enterprise Super-Intelligence | Reflection AI
Date: July 17, 2025
Guest: Misha Laskin (Co-founder & CEO, Reflection AI, former DeepMind researcher)
Host: Matt Turck
Episode Overview
This episode features an in-depth conversation with Misha Laskin, an accomplished AI researcher who left Google DeepMind, after helping ship Gemini 1.5, to found Reflection AI. Laskin explains why coding is the most tractable path toward building "enterprise superintelligence," introduces the team's new product, Asimov, and discusses the research, product co-design, and talent dynamics shaping Reflection AI. The conversation also traces the history of AI breakthroughs, examines current bottlenecks in coding agents, and reflects on the practical challenges, both technical and organizational, of bringing superintelligent autonomous systems from labs into enterprises.
Key Discussion Points & Insights
Defining Superintelligence in an Enterprise Context
- Superintelligence vs. AGI: The industry increasingly uses "superintelligence" where "AGI" (artificial general intelligence) was used before. According to Misha, the goalposts have simply moved, and definitions remain vague. (04:56)
- Quote: "I would actually say that superintelligence in these contexts of the large lab context is actually just being used synonymously with what AGI used to be used for." — Misha, [04:56]
- Organizational Superintelligence: Reflection AI's vision is to create “an oracle that understands the entire organization extremely deeply,” answering questions and acting with the expertise of the most experienced members on each team. (02:10, 06:23)
- Quote: "It's going to be a system that really deeply comprehends the organization." — Misha, [03:33]
Why Focus on Coding?
- Coding as a Path to Superintelligence: Coding has become the “superintelligence complete” problem — if one builds an agent that can truly understand, generate, and maintain code at an organizational scale, all capabilities for superintelligence are in place. (00:42, 16:37)
- Quote: "If you really solve this oracle for organizations, just for coding, you’ve basically built all the capabilities you need to have a superintelligence." — Misha, [14:55]
- Intuitive for LLMs: Coding is “intuitive” for language models since their pretraining heavily involves code, which is more accessible to them than GUIs or raw mouse/keyboard interactions. (08:48)
- Quote: "For language models, it’s the opposite. They’ve never seen any GUI data… the thing that is native to them is the exact opposite of what’s native to us." — Misha, [08:48]
- Comprehension > Generation: Most time spent by engineers is not in writing code but in understanding code bases and organizational context; Reflection AI aims to solve the "context engine" problem. (10:53)
- Quote: "…if you actually hover over a shoulder of an engineer at any large organization…you’ll find that a minority of their time is actually spent coding." — Misha, [11:26]
- Current Limitations: Existing tools produce "L9 engineers with amnesia": impressive generative capacity, but no real memory or context, making them less useful in real-world engineering. (10:50–12:05)
The State of Retrieval & Memory in AI Agents
- Limitations of RAG and Agentic Search: Retrieval-Augmented Generation is “primitive” as it’s sparse and often faulty in context-building; newer agentic search methods are better, but still like “exploring a jungle with a flashlight.” (12:20–14:24)
- Quote: "Imagine you are dropped into a large dark jungle and all you have is a tiny flashlight. That’s basically what agentic search is today…" — Misha, [13:48]
- Need for Real Organizational Memory: Next-generation agents must aggregate knowledge from project management, chat, documentation, and even the “tribal” knowledge in people’s heads, building a true memory core for teams. (14:24–16:00)
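The contrast drawn above, between one-shot RAG retrieval and iterative agentic search, can be sketched in a few lines. This is a minimal toy illustration, not Reflection AI's implementation: the corpus, file names, and keyword-overlap scoring are hypothetical stand-ins for a real embedding model and LLM-driven search.

```python
# Toy contrast between one-shot RAG retrieval and iterative "agentic" search.
# All names and the scoring heuristic are hypothetical stand-ins for an
# embedding model / LLM; a real system would score semantically.

from typing import Dict, List

# Toy corpus standing in for code, docs, and chat logs.
CORPUS: Dict[str, str] = {
    "auth/service.py": "Handles login tokens; see docs/auth.md for key rotation.",
    "docs/auth.md": "Key rotation happens nightly via the rotate_keys cron job.",
    "chat/2024-05.txt": "Reminder: rotate_keys was moved to ops/cron.py last sprint.",
    "ops/cron.py": "Schedules rotate_keys nightly at 02:00 UTC.",
}

def score(query: str, text: str) -> int:
    """Keyword overlap as a crude stand-in for embedding similarity."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def rag_retrieve(query: str, k: int = 1) -> List[str]:
    """One-shot retrieval: rank everything once, return top-k, stop."""
    ranked = sorted(CORPUS, key=lambda doc: score(query, CORPUS[doc]), reverse=True)
    return ranked[:k]

def agentic_search(query: str, hops: int = 3) -> List[str]:
    """Iterative retrieval: fold each retrieved document back into the
    query so later hops can follow references the first hop surfaced."""
    context, seen = query, []
    for _ in range(hops):
        best = max((d for d in CORPUS if d not in seen),
                   key=lambda doc: score(context, CORPUS[doc]))
        seen.append(best)
        context += " " + CORPUS[best]  # accumulate context for the next hop
    return seen

print(rag_retrieve("where does key rotation run"))
print(agentic_search("where does key rotation run"))
```

Note how the agentic loop follows the cross-reference trail (docs point at a cron job, chat points at where it moved) that a single-shot lookup never sees; this is the "flashlight in a jungle" exploration Misha describes, still bounded by how many hops the agent can afford.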
The Building Blocks of Modern Superintelligence
- Enumerated “Ingredients” (17:39–19:41):
- Deep Neural Networks (ImageNet moment)
- Reinforcement Learning (AlphaGo, AlphaStar)
- Transformer architectures & Internet-scale data
- RLHF (“aligning” models), then RL-based reasoning agents
- Quote: "Neural networks, deep neural networks, transformers and Internet scale data, and reinforcement learning… these are the ingredients that are required to build a superintelligence." — Misha, [19:36]
Notable Segment: Launching Asimov
(21:14–25:11)
- Asimov Overview:
- Asimov is described as a "deep research" code agent, built to act as a team's collective memory, retrieving and synthesizing context from code, chats, project management, and more.
- Key Feature: Introduction of “team-wide memories,” which capture and organize institutional knowledge including direct input from senior engineers and organic collection from chat interactions.
- Quote: "It's kind of the first time I've actually ever seen engineers excited about documentation effectively." — Misha, [24:02]
- Architecture: Uses a “multi-agent system” with a "big reasoning agent" dispatching “scout” retriever agents for long-context reasoning and codebase exploration. (25:11)
- Tech Stack & Security:
- Cloud-based, but designed to be deployable in enterprise VPCs — a critical requirement for serious buyers. (32:38)
- "VPC is extremely important to an organization. This is definitely... a deal breaker." — Misha, [33:08]
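The "big reasoning agent dispatching scouts" architecture described above can be sketched as follows. This is a hypothetical illustration of the general pattern, not Asimov's internals (which are not public): the knowledge sources, the per-source scout, and the string-matching "synthesis" are all toy stand-ins for LLM-driven components, and the "memory" source mimics the team-wide memories feature.

```python
# Hypothetical sketch of a multi-agent retrieval pattern: a "reasoning"
# agent fans a question out to lightweight "scout" retrievers, one per
# knowledge source, then synthesizes their findings. All data and logic
# here are toy stand-ins for LLM-driven components.

from dataclasses import dataclass
from typing import Dict, List

# Toy knowledge sources a scout can be pointed at, including a
# team-memory store of captured institutional knowledge.
SOURCES: Dict[str, Dict[str, str]] = {
    "code": {"billing/invoice.py": "def issue_invoice(): # issues invoices nightly"},
    "chat": {"2024-06-02": "invoices are issued by the nightly billing job"},
    "memory": {"billing": "Senior-eng note: invoice retries are capped at 3"},
}

@dataclass
class Finding:
    source: str
    key: str
    text: str

def scout(source: str, query: str) -> List[Finding]:
    """A scout scans a single source and returns matching snippets."""
    hits = []
    for key, text in SOURCES[source].items():
        if any(tok in text.lower() for tok in query.lower().split()):
            hits.append(Finding(source, key, text))
    return hits

def reasoning_agent(question: str) -> str:
    """Dispatch one scout per source, then synthesize the findings.
    A real system would use an LLM for dispatch and synthesis."""
    findings: List[Finding] = []
    for source in SOURCES:
        findings.extend(scout(source, question))
    lines = [f"[{f.source}:{f.key}] {f.text}" for f in findings]
    return "\n".join(lines) if lines else "no findings"

print(reasoning_agent("how are invoices issued"))
```

The design point is the division of labor: cheap scouts handle broad, parallel exploration of each source, so the expensive reasoning step only sees a condensed, cross-source view, which is what lets answers draw on code, chat, and captured tribal knowledge at once.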
Product vs. Research: Co-Design Philosophy
(26:46, 34:50, 62:46)
- Modern AI agent research is no longer just about model architecture, but product-scoped agent design, interfaces, and environment design — a parallel to DeepMind challenges like AlphaStar or OpenAI’s Dota bots.
- Quote: “A lot of the project was... figuring out how the agents should actually interface with the environment..." — Misha, [28:46]
- Reflection AI splits its team between research and product, hiring those motivated by building useful real-world systems rather than chasing academic benchmarks. (62:46)
RL in Asimov
- RL is being incrementally implemented to sharpen Asimov’s adaptive improvements based on real-world feedback, rather than as a monolithic end-to-end system out of the gate. (33:25–34:50)
Technical Strategy & Market Position
Horizontalization Potential
- Coding is just the start — the foundational “context engine” and institutional memory architecture can be extended to product, marketing, HR, and more. (57:01–57:31)
Talent & Competitive Dynamics
- Reflection AI attracts top talent from big labs by offering a rare chance to build the next “frontier lab” and significant equity — the comparative scarcity of high-upside, high-impact startup opportunities is a key recruitment lever. (59:32)
- Quote: "People end up in a sense kind of self-selecting...if you have a very strong initial team and people see that potential for breakthroughs, then you’ve become... a scarce option." — Misha, [60:22]
Capital Efficiency
- Startups cannot and need not outspend hyperscalers, but can achieve similar innovation progress at 10x lower capital due to focus and leaner GPU scaling. (65:01)
- Quote: "You can’t operate at 100x less capital than a frontier lab, but you can operate at say 10x... when you’re really focused." — Misha, [65:01]
The Future of Superintelligence
- Plurality, Not Monolith: Misha foresees not a single lab building general superintelligence, but a “plurality” — a garden where different superintelligent verticals (medical, organizational, scientific) flourish and collectively yield superintelligence. (51:28)
- Quote: "It won’t be one lab that has built it, but it’ll be kind of the plurality, like the collection of all intelligences will be a general superintelligence." — Misha, [52:37]
- Benchmarks vs. Reality: AI benchmarks (like SWE-bench) dramatically overstate real-world agent competence. The true breakthrough will be contextual understanding — moving from “L4 with code review” to “L9 with memory.” (53:32–56:20)
Memorable Quotes & Moments
- "If you really solve this oracle for organizations, just for coding, you’ve basically built all the capabilities you need to have a superintelligence." — Misha, [14:55]
- "Coding models are getting fairly aggressive timelines on progress... and I would say things have moved faster even than I would have expected." — Misha, [53:55]
- "I do think there will be a general superintelligence, but I think that it won't be one lab that has built it, but it'll be... the collection of all intelligences." — Misha, [52:37]
Timestamps for Important Segments
| Timestamp | Topic |
|-----------|-------|
| 00:00 | Opening thoughts on superintelligence vs. AGI |
| 06:23 | Defining organizational superintelligence |
| 08:48 | Why coding is intuitive for LLMs; coding models as brain/hands/legs metaphor |
| 10:50 | Limitations of current AI agents: the "L9 engineer with amnesia" |
| 12:20 | The limitations of RAG and the move toward agentic/neural retrieval |
| 17:39 | Key breakthroughs and AI "ingredients" needed for superintelligence |
| 21:14 | Launch and deep dive into Asimov: the code research agent for teams |
| 25:11 | Multi-agent design in Asimov; product and research co-evolve |
| 32:38 | Security and deployment: need for VPC/on-prem options in enterprises |
| 34:50 | How Reflection applies RL pragmatically today |
| 41:07 | Nobel Prize discussion: AI's cross-pollination with the sciences |
| 47:28 | Laskin's path from DeepMind to starting Reflection AI |
| 53:32 | Autonomy of coding agents today: limitations and progress toward senior (L9-level) agents |
| 57:01 | Generalizing institutional memory from coding to other company domains |
| 59:32 | Recruiting top talent against Big Tech lab poaching |
| 62:46 | Product/research team makeup and the value of startup focus |
| 65:01 | Venture capital needs and capital efficiency vs. frontier labs |
| 66:07 | Closing remarks |
Tone and Language
The discussion balances the technical—sometimes almost academic—language of cutting-edge AI research with frank, pragmatic insights about product development, team building, and the messy reality of deploying AI in organizations. Both Matt and Misha show humor, humility, and candidness throughout, lending an approachable tone to otherwise advanced topics.
Summary Takeaways
- AI is at a threshold where, with the known technological ingredients, the challenge is less about a missing breakthrough and more about deep product-research integration.
- Coding agents, when endowed with lasting organizational context memory, are a stepping stone to broader superintelligence—and Reflection AI’s Asimov aims to be that catalyst.
- The future of superintelligence is likely distributed, plural, and deeply shaped by product integration—not just model size or generalization.
- Startups can remain competitive against hyperscalers through focus, talent-driven culture, and pragmatic capital deployment.
For anyone looking to understand the current state, challenges, and high-stakes ambitions at the intersection of AI research and enterprise productization, this is a must-listen conversation—with lessons for researchers, builders, and business leaders alike.
