Latent Space: Inside Google Labs — Building The Gemini Coding Agent (Jed Borovik, Jules)
Podcast: Latent Space: The AI Engineer Podcast
Date: November 10, 2025
Host: Latent Space (A)
Guest: Jed Borovik (B), Product Lead at Google Labs, Jules Coding Agent
Episode Overview
This episode explores the genesis and technical philosophy behind Jules, Google's autonomous coding agent, built on the Gemini foundation model and developed within Google Labs. Recorded at GitHub Universe, the conversation covers the evolution of AI coding tools, the organizational relationship between Google, DeepMind, and Labs, product strategy for AI agents, and the future of software engineering as coding AI grows more capable. The episode also previews the upcoming AIE Code Summit in New York, focused on the agentic coding movement and how emerging AI tools are reshaping both productivity and the engineering experience.
Key Discussion Points & Insights
1. Google Labs vs. Google/DeepMind — Organizational Clarity
- Google Labs Mission: To "build kind of new, innovative products that the rest of Google isn't well-positioned for" (04:28–04:38).
- Collaboration: Labs works closely with DeepMind, particularly for model access, but focuses on end-to-end product building (04:47–05:22).
- Labs is seen as a "true AI product org," able to drive new user-facing initiatives that tap into Google's full research, data, and infra stack.
2. Jed’s Journey into AI Coding Agents
- The release of Stable Diffusion sparked Jed’s personal AI inflection point, seeing AI either as a threat or as a "tool to create better art" (03:16).
- Transitioned from work on Google Search to finding a mission-driven role in AI engineering, landing on the Jules team to build the next generation of coding agents (03:17–04:17).
- Context: Early skepticism around the future value of software engineering careers in the age of generative AI.
3. Genesis and Philosophy of Jules (Gemini Coding Agent)
- Autonomous Coding Agents: Jules runs as an independent process/environment (“its own computer”) to handle long-lived, complex coding tasks—unlike ephemeral or tightly coupled in-editor agents (06:58–07:48).
- Ambient and Integrable: Goal is for Jules to be "ambient"—always available and easily triggered (API, CLI, integrations), with workflows ranging from developer command-line to automated, pull-request-driven updates (07:54–08:53).
- CLI/Integration: Jules CLI supports direct local dev; Gemini CLI integration broadens access points (08:31–08:53).
4. Evolving Agent Harnesses & The Power of the Model
- As foundation models improve (from early Gemini to current versions), "scaffolding" becomes less necessary. Initial complex multi-agent orchestration (code review, subagent personas, etc.) gave way to lighter-touch harnesses as base models grew more capable (09:39–11:30).
- Jed’s takeaway: "Less is more"—better models require simpler harnessing, reducing engineering complexity (10:25–11:56).
- Notable pattern: embedding/retrieval approaches to code search remain in use, but their limitations have become clear ("it will never be good" as a general solution—comment from Host at 12:05), spawning hybrid approaches that combine semantic search with classic tools like grep (11:31–13:42).
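The hybrid pattern discussed above can be sketched minimally. This is an illustrative toy, not Jules' implementation: a bag-of-words similarity stands in for a real embedding model, and a regex pass stands in for grep; all names (`hybrid_search`, `embed`) are hypothetical.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for a learned embedding: bag-of-words token counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, chunks, k=3):
    # Semantic pass: rank chunks by embedding similarity to the query.
    q = embed(query)
    semantic = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
    # Lexical pass: grep-style exact matching catches identifiers that
    # arbitrary chunk boundaries can make embedding retrieval miss.
    lexical = [c for c in chunks if re.search(re.escape(query.split()[-1]), c)]
    # Merge the two result lists, preserving order and deduplicating.
    seen, merged = set(), []
    for c in semantic + lexical:
        if c not in seen:
            seen.add(c)
            merged.append(c)
    return merged
```

The lexical pass is what rescues queries like an exact function name, where an embedding chunk boundary may have split the definition from its usage.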
5. Transitioning from Experiment to Real Product
- Jules’ shift from prototype to product solidified with the I/O 2025 announcement—a turning point for real-world adoption and serious investment (14:04–15:04).
- Labs' intent is not merely to run experiments but to establish enduring products (14:04–14:16).
6. Community & The “Year of Agents” at the AIE Code Summit
- Discussion on why AI engineering summits matter: fostering a neutral, industry-wide, high-signal forum for practitioners, with emphasis on unstructured "hallway" interactions over main-stage talks (17:22–18:09).
- The summit is highly curated, with roughly a 23:1 ratio of applicants to invited spots (18:09).
- Borovik’s perspective: coding agents are a rapidly maturing field, where conviction and verticalization (“just build agents yourself, bro!”—23:54) outpace generic infra/tooling plays.
7. Technical Deep Dive: Context Windows & Long-term Agents
- Unique technical challenges for coding agents:
- Long-running sessions (users with 30-day continuous sessions—25:04–25:29)
- Managing and compressing huge context windows (25:33–26:24)
- Techniques: auto-compaction, subagent handoff, contextual summarization (patterns like Amp’s handoff mechanic—26:24–27:04)
- Coding agents are a “special spot for super interesting product impact/research” due to codebase scale, high-context, and duration (25:57).
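The compaction techniques listed above can be sketched as follows. This is a minimal illustration under stated assumptions, not how Jules actually works: `summarize` is a hypothetical stand-in for an LLM summarization call, and the token counter is deliberately crude.

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: whitespace word count.
    return len(text.split())

def summarize(messages):
    # Hypothetical stand-in for an LLM summarization call: keep the
    # first sentence of each message as a compressed digest.
    digest = " / ".join(m.split(".")[0] for m in messages)
    return f"[summary of {len(messages)} earlier turns] {digest}"

def compact(history, budget, keep_recent=2):
    # Auto-compaction: when the transcript exceeds the token budget,
    # fold everything but the most recent turns into one summary turn.
    if sum(count_tokens(m) for m in history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

A subagent handoff follows the same shape: the summary turn becomes the opening context for a fresh agent session instead of staying in the current one.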
8. Model/Product Feedback Loops & Collaboration with DeepMind
- Close collaboration with model development teams; coding agents are spotlighted as a "top priority" use case at every leading lab (27:32–28:33).
- Borovik: “We’re inventing a new way to do our art... How do you want to interact with your model?”
- Gemini’s multimodality and plans for richer interaction (video, images as input) surfaced as next frontiers (40:26–41:15).
9. Industry Impact: The Future of Software Engineering With Agents
- Ongoing debate: Will AI decrease the number of software engineering jobs? Jed is optimistic, invoking Jevons paradox—greater productivity increases overall demand (34:00–35:39).
- The real impact: a shift from commoditized work (“delegate to agents”) to high-value, strategic, deeply creative engineering—plus a need to define a new, aspirational “good” version of agentic/vibe coding (35:39–38:59).
- Specify vs. Verify: As agents get better, “how do you specify what you want, and then how do you verify what you got is what you intended?” (38:59–39:28).
- How verification, testing, and code review will (or won’t) be enforced or automated in this new workflow (39:28–40:18).
10. Open Product/Research Questions
- Multimodal specification: Beyond text—images, video, pointer-based bug reporting (40:22–40:53).
- Computer use: Emerging capabilities for agents to interact with rendered UIs and browsers as “users” (41:17–41:59).
- Patterns for cross-tool/agent interoperability: should the shared artifact be code, PRs, tickets, or session history? (30:51–31:57)
- Need for best practice consensus (papers, benchmarks) on context management, agent architecture, and verification (27:04–27:15).
Notable Quotes & Memorable Moments
- On Choosing to Build, Not Fear:
  "This is either—it’s going to take my art, my craft, or it’s a tool to create better art. And I definitely know which path I’m taking."
  — Jed Borovik (03:16–03:17)
- On the Philosophy of Simpler Agent Harnesses:
  “As the models get better, like less is more. Especially as it comes to improving through whether it’s machine learning or just, you know, regular maintenance.”
  — Jed Borovik (10:25)
- On the Limits of RAG/Embeddings in Code Search:
  “A chunk that happens to capture the thing you're looking for will fail to capture something else. And so if you only retrieve based on your embeddings of a chunk, it uses very arbitrary boundaries... But you could just throw attention at it and you can scale probably much better using grep.”
  — Host (12:03)
- On Agent Company vs. Agent Infra Company:
  “I’ve gone so agent pill to the point where people come to me with startup ideas for infra companies... And I’m like, why don’t you just build agents yourself, bro?”
  — Host (23:54)
- On Context Accumulation in Long-lived Agents:
  “We store some data for a session, but it only lasts... for 30 days. When the first user started hitting that, they were upset. We were like, there’s no way anyone’s going to be using a single session for 30 days... just like how powerful that could be.”
  — Jed Borovik (25:04)
- On Coding Agents as Research Frontiers:
  “Coding agents are, I think, kind of a special spot of like super interesting product impact research.”
  — Jed Borovik (25:57)
- On the Beautiful Future for Developers:
  "We're inventing a new way to do our art. What does that look like and how does that feel?... If we can't articulate it and think about it, it's less likely we'll get there.”
  — Jed Borovik (29:03)
- On Job Impacts and Jevons Paradox:
  “As an engineer being able to be more productive encourages more investment in people building software... I’m bullish on this idea that it’s actually going to be great for software engineers.”
  — Jed Borovik (34:15–35:39)
- On the Workflow of the Future:
  “You kick off a thing, you get some feedback, then you’re like, oh, that’s not what I meant... So you work with the machine to discover what you wanted, and the machine works with you to either get you what you wanted or show you the errors of your ways...”
  — Host (38:59)
Timestamps for Important Segments
- 00:04–01:49: Intro, Jed’s New York tech roots, hackathon scene
- 02:18–04:19: Jed’s path from Google Search to GenAI & Jules
- 04:19–05:22: Explanation of Google Labs' mission and its synergy with DeepMind
- 05:22–06:41: Internal Google coding tools, data advantages, transition to Gemini era
- 06:41–08:53: Jules product philosophy, unique positioning in the agentic coding landscape
- 09:12–13:42: Technical evolution—agent "scaffold" simplification, the diminishing need for embeddings/RAG for code search
- 14:04–15:04: Launch as a real product; Google I/O turning point
- 15:04–18:09: AIE Code Summit—community, goals, hallway track value
- 23:54–24:47: Rise of vertical agent product startups vs. generic infra/tooling businesses
- 25:04–27:04: Long-lived agent sessions, context management, subagent handoff patterns
- 27:32–28:33: Feedback loops: coding in product as a driving use case for Gemini and others
- 29:03–32:33: Broader vision & possibilities for agent interoperability
- 33:02–35:39: Economic impacts on software engineering careers; analogy to Jevons paradox
- 35:39–38:59: The spectrum from commoditized “vibe coding” to new forms of “agentic” coding
- 40:22–41:38: Multimodal, video- and UI-driven next-gen bug reporting/specification; computer use as an AI agent capability
- 42:34–43:38: What to talk to Jed about at AIE—Jules, agent workflows, user pain points, recruiting
Final Takeaways
- Jules, built on Gemini, exemplifies the move from experimental AI coding tools to enterprise-grade, ambient, autonomous coding agents.
- As foundation models mature, the focus has shifted from intricate agent orchestration to building seamless, developer-friendly product experiences.
- There is a powerful, collaborative feedback loop forming between model/infra labs and agent product teams, driving both research and user experience forward.
- Upcoming technical/research challenges orbit around context management, agent verification/testability, multimodal input, and new patterns for human-computer collaboration.
- The "agentic coding" era is transforming what it means to be a software engineer—removing grunt work, demanding new skills, and opening massive productivity frontiers.
For detailed show notes and more, visit: latent.space