Podcast Summary: This Week in AI
Episode: What's Left for Humans When AI Builds Everything?
Date: April 8, 2026
Host: Jason Calacanis
Guests:
- Kanjun Qiu, CEO, Imbue
- Karina Hong, CEO, Axiom (AI math & verification)
- Jonathan Siddharth, CEO, Turing
Episode Overview
This episode is a “CEO roundtable” on the rising dominance of agentic AI, open vs. closed AI ecosystems, how AI is changing software development, and what might be left for human purpose and agency when machines increasingly build everything. The panel debates the explosion in coding agents, the data economy that powers model improvement, the future of open source AI, and what humans will do in a world where “agents build agents.” Notably, it tackles the profound, Black Mirror-ish implications for digital identity, work, creativity, and economic structure.
Key Discussion Points
1. Why AI Agents Must Remain Open (00:00–04:37)
- Digital Sovereignty Concerns:
- Kanjun and Jason stress that as agents become custodians of our memories, workflows, and life infrastructure, it’s critical they not be locked in by any one provider.
- Kanjun Qiu [B, 03:29]:
"We're giving our memories, we're giving our workflows... building our business infrastructure on top of agents... And so with agents, it gets a lot more intimate. If Anthropic or OpenAI... have our data, all our memories, our whole life's work, they can convince us of anything... That's a pretty bad world to be in as a human."
- Open source and open hardware are repeatedly called “essential” for user independence and competition.
2. Math Superintelligence and Code Verification (05:33–09:16)
- Why Verify Code?
- Karina highlights a surge in machine-generated code; merely relying on manual testing or LLM-based code review is brittle.
- Intent: Ensure mission-critical software is “mathematically verified,” referencing historical examples like the Paris subway and ESA rocket control systems.
- Karina [D, 07:27]:
"Superintelligence is meaningless if it's not verified. I don't want Schrödinger's super intelligence. So that's what we're doing. We're building an AI mathematician that always gives you the proof of everything."
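The episode doesn't show Axiom's actual system; as a flavor of what "always gives you the proof" means in practice, here is a toy machine-checked statement in Lean 4 (an illustrative stand-in, not Axiom's stack). The proof checker accepts the theorem only if the proof term is valid, so nothing unverified gets through:

```lean
-- A toy machine-checkable guarantee: the Lean kernel rejects any
-- claim whose proof does not actually establish the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This is the core idea behind formally verified software such as the historical examples mentioned above: correctness is established by proof, not by testing alone.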
3. Data as the New Moat: Training AI (10:22–12:51)
- Jonathan Siddharth on the Data Flywheel:
- Turing provides frontier labs with highly specialized data to improve models, especially for coding, enterprise workflows, and STEM reasoning.
- The “superintelligence accelerator” approach iterates between data collection, deployment, and error analysis to improve AI systems.
- Recurring Demand:
“There's unlimited demand for high quality data... The scaling laws continue to hold... The floor of human intelligence that's needed to advance the models also becomes even smarter.” [C, 12:14]
4. Anthropic’s Meteoric Revenue and the Coding Agent Arms Race (15:24–22:31)
- Anthropic Surges Past OpenAI:
- Recent reports show Anthropic achieving a $30B annual run rate, leapfrogging OpenAI.
- The panel attributes this to deep adoption of Claude’s coding agents and better reasoning, with coding now seen as the key AI battleground.
- Karina [D, 17:21]:
“Coding is everything... math is code and code is math... coding and the reasoning capability, Anthropic has really differentiated itself there... Everyone is using Claude Code or Cursor.”
- Poetic aside: Karina finds Claude beats GPT at poetry, revealing agents’ divergent creative outputs.
5. Why Does Coding Improve General AI Reasoning? (20:05–20:58)
- Coding as Reasoning Glue:
- Coding forces models to construct step-by-step logic and build verifiable abstractions, which transfers to broader reasoning.
- Kanjun [B, 20:05]:
“When you're training these models, they learn embeddings... When you are trying to learn how to code, it kind of learns these good abstractions... you get really good fast training data. In the real world, it's hard to get good verifier data.”
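The "verifier data" point can be made concrete with a short sketch (all names here are illustrative, not from the episode): code is unusually cheap training data because a candidate program can simply be executed against input/output pairs, yielding a reliable reward signal that is hard to obtain for open-ended real-world tasks.

```python
# Sketch of why code yields good "verifier data": unlike free-form text,
# a model-generated program can be checked automatically by running it
# against test cases, giving a cheap, reliable correctness signal.

def verify(candidate_fn, test_cases):
    """Return True only if the candidate passes every (input, expected) pair."""
    return all(candidate_fn(x) == expected for x, expected in test_cases)

# A hypothetical model-generated candidate for "reverse a list":
candidate = lambda xs: xs[::-1]

tests = [([1, 2, 3], [3, 2, 1]), ([], []), (["a"], ["a"])]
print(verify(candidate, tests))  # True
```

A natural-language answer has no equivalent of `verify`: judging it requires another model or a human, which is exactly the "hard to get good verifier data" problem Kanjun describes.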
6. The Future of Enterprise, Fine-tuning, and Open Source Models (64:52–68:21)
- Jonathan:
- Enterprise adoption is bifurcating into “fine-tuning” and “no fine-tuning” camps—big models with contextual memory versus smaller, task-specific models.
- The shift: increasingly, smart context/memory means less need for changing model weights.
- Open models might catch up—if they secure domain-specific data and workflows.
7. Risks of Lock-in, Black Mirror, and Open Alternatives (59:16–62:36)
- Digital Self as Rentable Asset:
- Kanjun warns that companies like OpenAI/Anthropic are driving vertical lock-in—users may “rent themselves back” rather than own their data/identity.
- Kanjun [B, 59:16]:
"We're going to give our digital lives and our digital identities to these companies and they're going to rent them back to us... our digital selves being locked up and rented back to us."
- Open source is positioned as the “organic food” of digital life, versus the easier but riskier “processed” AI offerings.
8. Economic and Work Implications: Productivity Explosion & Value of Human Agency (32:38–45:03, 76:09–85:19)
- Rising Productivity:
- Coding teams are exploding in output, with agents generating code and pull requests far beyond human capacity—but with risk of unnecessary complexity (“feature creep”).
- Kanjun [B, 35:44]:
“You can make your product really complex... and then you can just redo the entire thing based on what you learn pretty quickly.”
- Role for Human Engineers:
- Senior engineers' design knowledge remains crucial; junior engineers benefit, but agentic speed may create “code bloat.”
- Shift in Talent Needs:
- Two valuable profiles: infra “deep nerds” and research scientists/mathematicians who can guide agentic systems and verification.
- Change in Job Structure:
- Jonathan predicts humans moving into verification, running multiple agentic “companies” simultaneously, and working fractionally.
- Jonathan [C, 81:07]:
"A human might run five to eight companies, right. And maybe what the human does is... verifying the work of agents doing the work."
9. Endgame Scenarios: What Happens If AI Can Build Everything? (59:03–62:36, 72:04–73:19)
- Panel Fears and Hopes:
- If fully realized, agentic AI could generate any software, product, or workflow “instantly” based on user prompts.
- Default path: humans are “locked-in” renters of their own digitized minds.
- Alternative path: abundant open agents that users own/control (but economic sustainability for open AI remains tough).
- Jason [A, 72:04]:
“No, definitely, that is my personal Black Mirror... me having to go to Sam saying, can I have myself back?”
- Economic Sustainability for Open AI:
- Open source can follow the “WordPress” model—vast total use, a small slice monetized with pro features/support.
- Hardware abundance (powerful local machines) may revive “personal computing” as enterprises provide ultra-premium hardware for in-house AI agents.
10. Math Superintelligence: Impact on Science and Human Purpose (82:13–85:19)
- Karina’s Vision:
- Dream: AI mathematicians accelerate human scientific discovery, eliminating "reasoning bottlenecks," producing breakthroughs in days instead of centuries.
- Agents could collaboratively pursue scientific quests; humans shift from being “reasoning bound” to guiding/focusing new AI “gods.”
- Karina [D, 84:17]:
"I mean the fact that transformers are not understood and we don't know the output, but we know it's doing an incredible job reasoning... makes one wonder about our own brain."
Notable Quotes & Moments
On Digital Sovereignty:
- "I don't want to rent myself back from Sam Altman. That is my personal Black Mirror." — Jason Calacanis [A, 72:04]
On Superintelligence Verification:
- "Superintelligence is meaningless if it's not verified. I don't want Schrödinger's super intelligence." — Karina Hong [D, 07:27]
On the Economic Future:
- "Maybe what the human does is... verifying the work of agents doing the work." — Jonathan Siddharth [C, 81:07]
On Commoditization of Software Work:
- "We're not meant to be factors of production in an economy. You know, we're meant to be living creatures..." — Kanjun Qiu [B, 41:36]
On Team Structure & Token Usage:
- “For every three developers, there's essentially a fourth in tokens.” — Kanjun Qiu [B, 33:28]
On Human Agency:
- “Humanity is reasoning bound.” — Karina Hong [D, 85:02]
Timestamps for Key Segments
- 00:00–04:37 — The importance of open AI agents for autonomous personal infrastructure
- 05:33–09:16 — Why code verification matters, and Axiom’s formal math AI
- 10:22–12:51 — Data as the engine of AI progress (Turing’s approach)
- 15:24–22:31 — Anthropic’s surge, how coding drives AI model quality
- 32:38–36:44 — How agentic coding has changed engineering teams
- 59:03–62:36 — Fork in the road: renting your digital self vs. open, user-owned agents
- 72:04–73:19 — Black Mirror nightmare: digital self as corporate property
- 81:07–85:19 — Math superintelligence and the unshackling of human reasoning
Final Takeaways
- AI is rapidly making the means of production agentic and ultra-scalable, but creates existential questions about digital control, human purpose, and the risk of “renting your digital self.”
- Open source, open hardware, and formal verification are seen as vital counterweights to closed, lock-in ecosystems.
- Humans will likely move more into roles of verification, judgment, curation, and orchestrating fleets of their own agentic clones—not just coding.
- The endpoint for work may be abundant “personal AI gods” that expand human reasoning and creativity, perhaps shifting the very definition of purposeful labor or agency.
[Episode Guests' Projects]
- Imbue (Kanjun Qiu): Open-source agent orchestration for persistent, user-owned AI agents (imbue.com)
- Axiom (Karina Hong): AI mathematicians, formal verification (axiommath.ai)
- Turing (Jonathan Siddharth): Specialized data pipelines for improving frontier AI, agent-driven enterprise AI (turing.com)
For listeners seeking a deeper, CEO-level pulse on AI's development and social ramifications, this episode is a must-listen window into how the industry's leaders are grappling with AI's transformative but fraught impact.
