The Vergecast – How Claude Code Claude Codes (Feb 24, 2026)
Episode Overview
In this episode, host David Pierce dives deep into two main topics:
- The evolution and impact of Anthropic’s Claude Code (and its user-friendly variant, Cowork), featuring an in-depth interview with its creator, Boris Cherny.
- A practical, sometimes existential discussion on AI tools and data privacy with Verge senior AI reporter Hayden Field.
The show also explores how “vibe coding”—using AI tools to automate busywork and write software—has reshaped the landscape, and answers a listener hotline question about phone buying amidst a looming RAM crisis.
1. The Rise of Vibe Coding & Claude Code
[04:14 – 06:59]
- Vibe coding refers to the new, more fluid way of using AI-powered tools like Claude Code to write code, automate tasks, and solve everyday problems, often by simply prompting.
- David shares a personal anecdote using Claude Code to consolidate notes from multiple apps into Obsidian, “without any manual labor or messy copying and pasting.”
- The consensus: while chatbots haven’t taken over the world, coding assistants like Claude Code now have genuine product-market fit and are transforming workflows.
Quote
“The idea that you can use AI to write good code is just true. That thing has found product-market fit.” – David Pierce [04:35]
2. Interview with Boris Cherny (Anthropic) – The Claude Code Evolution
[06:59 – 38:19]
A. How Coding with Claude Has Changed Everything
- Boris shares that with the release of the Opus 4.5 model in late 2025, Claude Code went from writing 10% of his code to 100%:
“I don’t write any code anymore. Claude Code does 100% of my coding.” – Boris Cherny [07:01]
- The pivotal upgrade: Opus 4.5 enabled the AI to run, check, and fix its own code, making manual intervention almost obsolete.
Quote
“Opus is now testing my code... it’s able to open the browser and verify that the website works correctly... The code is just really good.” – Boris Cherny [09:27]
B. Who’s Using Claude Code? Not Just Developers
[12:00 – 15:31]
- Initially targeted at developers, the product quickly found traction with non-technical users, including data scientists, sales teams, and product managers.
- Anecdotes include coworkers using Claude Code for data analysis and even sales pipelines, prompting the development of Cowork, the more approachable tool built atop Claude Code.
Quote
“All sorts of non-developers started using it. And this was just the craziest surprise. But also the best possible thing you can see in product.” – Boris Cherny [12:21]
C. UI/UX Challenges and the Path Forward
[15:31 – 19:42]
- Claude Code is available as a terminal app, IDE plugin, mobile app, and desktop app—adapted for different user skill levels.
- Cowork offers a more protected environment (with deletion protection, VMs, etc.), while developer surfaces remain highly customizable.
“For people that aren’t engineers, we want it to be a little less foot gunny. We don’t want people to mess up their system.” – Boris Cherny [16:23]
D. The Orchestra Metaphor: Coding as Conducting
[18:15 – 18:44]
- David and Boris discuss the fundamental shift from “playing the violin” (writing code) to “conducting the orchestra” (directing AI to do the coding).
E. Claude Code’s Real-World Impact and Unexpected Use Cases
[20:00 – 23:14]
- Claude Code (and Cowork) now writes an increasing percentage of the world’s code, powers “all the biggest companies,” and is used for surprising things like recovering wedding photos, genome analysis, and even buying clamming licenses.
“All the toil, all the stuff you didn’t want to do by hand, it can just do.” – Boris Cherny [23:20]
F. AI Assistants: Power and Limitations
[25:14 – 27:37]
- For high-stakes tasks (e.g. doing taxes), Boris recommends double-checking and having the AI triple-check itself, but notes rapid improvements.
- Early product milestones, such as the model organizing project updates over Slack on its own, were moments that left Cherny “kind of taken aback.”
G. Agent/Tool Use vs. Computer Use
[30:06 – 34:18]
- Boris distinguishes AI “agents” as models that can use external tools, and discusses the roadmap from coding → tool use → full-on computer use (e.g., browser automation for sites without APIs).
Quote
"If you think about what can you actually do with a tool on a computer... to use the website, you want the model to be able to use a browser, you want it to be able to use a computer." – Boris Cherny [32:32]
H. Data Access, Privacy, and Safeguards
[34:18 – 37:16]
- Anthropic prioritizes privacy and security, building virtual machine sandboxes and access restrictions, aligning largely with the requirements of enterprise clients.
- Emerging vectors: Prompt injection is a chief concern, with evolving defenses.
“Cowork can only see the folders that you give it access to.” – Boris Cherny [36:07]
I. Everyday AI: Practical Examples
[37:16 – 38:19]
- Examples: AI drafting email responses, unsubscribing from newsletters, paying parking tickets, and other life admin tasks.
3. Data Privacy & AI Agents: How Should We Think About Our Own Data?
[41:40 – 68:13]
Guest: Hayden Field (Verge Senior AI Reporter)
A. Existential Doubt & Risk Tolerance
[41:44 – 45:00]
- David recounts the “teenage mode” of installing AI agents on a computer full of critical personal data and the trepidation of giving access to AI tools.
“Giving this unknowable AI agent access to this is insane.” – David Pierce [42:45]
B. Rapidly Changing Privacy Landscape
[45:00 – 47:16]
- Hayden notes that AI startups’ terms are voluntary and mutable, unlike those of longer-established platforms.
- A company’s attitude or market could shift any time—“these documents are living documents.”
C. Reasonable Caution: Anonymization Is Not Bulletproof
[47:16 – 51:20]
- Experts warn that anonymized data can often be deanonymized, and that what a system considers “private” is imperfect.
- Use caution, especially with free products where “you are the product.”
D. Terms of Service: A Masterclass in Confusion
[53:51 – 54:47]
- David shares a real, confusingly worded clause ("double parenthesis") from Anthropic’s terms about how email data is used, showing the opacity and fluidity of real privacy protections.
E. Centralization vs. Compartmentalization
[58:00 – 60:41]
- Gemini’s privacy pitch: because your data already lives at Google, it doesn’t cross a new barrier. But putting everything with one provider brings its own risk—should we centralize or spread our data?
“If it’s a free product, you are the product... If you’re an enterprise user, you’re way safer.” – Hayden Field [52:36]
F. Practical Advice
[62:20 – 67:00]
- Hayden advises David against connecting all of his Google data to Claude, except perhaps mundane information. Each user should judge their own risk profile, given the massive convenience-vs-privacy tradeoff.
- The more specialized assistants (“many AIs”) might be more secure and contextually appropriate than one all-knowing, all-access assistant.
G. The Takeaway
- Make informed decisions: companies should be transparent about the tradeoffs, and users need to understand what they're giving up in return for AI convenience.
Quote
“You just need to be able to accurately understand what you’re giving up.” – Hayden Field [68:06]
4. Hotline: Should You Buy a New Phone Now? (The RAM Crisis)
[71:18 – 82:46]
Listener Lucas asks: Should I upgrade my iPhone 15 Pro Max now due to rising RAM costs, or wait?
Answers from David, Allison Johnson, and friends:
- Allison: Apple and other manufacturers will do everything to avoid directly raising prices, but may make sneaky moves like removing cheaper SKUs.
- “If you need a phone now or soon, upgrade. If your current device is good for a couple more years, you're probably safe to wait.”
- The “RAM crisis” is real, but not likely to make existing devices obsolete in a short timeframe.
Quote
“All phones are good now. Just go buy the thing.” – David Pierce [81:44]
5. Notable Quotes by Timestamp
David Pierce:
- “The idea that you can use AI to write good code is just true.” [04:35]
- “You used to play the violin, and now you’re conducting the orchestra.” [18:21]
Boris Cherny:
- “I don’t write any code anymore. Claude Code does 100% of my coding.” [07:01]
- “All the toil, all the stuff you didn’t want to do by hand, it can just do.” [23:20]
- “Cowork can only see the folders that you give it access to.” [36:07]
Hayden Field:
- “If it’s a free product, you are the product... If you’re an enterprise user of something, you’re way safer.” [52:36]
- “You just need to be able to accurately understand what you’re giving up.” [68:06]
6. Timestamps for Key Segments
| Topic/Segment                      | Timestamps      |
|------------------------------------|-----------------|
| Intro & Episode Themes             | 00:02 – 04:14   |
| Vibe Coding, Personal Use          | 04:14 – 06:59   |
| Boris Cherny Interview             | 06:59 – 38:19   |
| Data Privacy & AI (Hayden Field)   | 41:40 – 68:13   |
| Phone Upgrade Hotline (RAM crisis) | 71:18 – 82:46   |
7. Memorable Moments
- Claude Code writing 4% (or likely more) of all code commits worldwide. [20:00]
- AI recovering wedding photos and buying clamming licenses (Cowork). [20:53, 23:19]
- The confusion of privacy terms (“double parenthesis” clause). [53:51]
- David’s honest confession: He connected Gmail to Claude and now regrets it. [68:13]
Conclusion: Why This Episode Matters
This Vergecast episode offers a candid, practical look at how powerful and accessible AI coding assistants have become—and simultaneously the complex, unsettled terrain of data privacy these new tools bring. The central theme is empowerment with a note of caution: Claude Code and Cowork can handle a vast range of tasks, but as users invite them deeper into our lives, the tradeoff between convenience and privacy intensifies. In the shifting tech landscape of 2026, clarity, transparency, and user agency are as important as ever.
