Practical AI Podcast - Episode Summary
Episode Title: Humility in the Age of Agentic Coding
Date: March 17, 2026
Host(s): Daniel Whitenack (CEO, Prediction Guard), Chris Benson (Principal AI & Autonomy Research Engineer)
Guest: Steve Klabnik (software engineer, author, and Rust community leader)
Episode Overview
This episode explores the rapidly evolving impact of AI, particularly agentic coding, on the practice of software development. Centered on humility, openness, and the reevaluation of core principles, the discussion features Steve Klabnik, a well-known software engineer and educator who shares his personal journey from AI skeptic to advocate. The conversation covers the turning point in his mindset, the realities and potential of human-AI collaboration, shifts in engineering culture, and Steve's hands-on experiments with AI-driven programming language development (notably the Roux project).
Key Discussion Points & Insights
1. Steve Klabnik's Background and Path to AI
- Steve’s unique background: Blends deep software engineering (notably with Rust and Ruby on Rails), computer science, and a strong humanities/language focus. (02:56)
- Early AI skepticism: Despite being deeply immersed in tech, Steve considered himself an "AI hater" until recently, due to both technical shortcomings and social echo chambers. (04:17)
- Mindset shift: Triggered by both personal hands-on experimentation (with agentic tools like code agents in VS Code) and discussions with “non-programmer” relatives who found real value in the tools. (08:09)
"I was willing to criticize this as useless when it was useless, but now it is not useless. And so that's a big shift in me."
— Steve Klabnik [14:25]
2. The Importance of Humility in Software & AI
- Humility as a survival skill: Both hosts and Steve discuss the need for “epistemic humility” in a profession previously defined by certainty and static facts. (06:58, 17:29)
- Persistent change and fact fluidity: The AI landscape keeps shifting, making fixed mindsets risky.
- Normal people vs. programmers: Non-developers often adapt better to tools that “hallucinate” because they expect less determinism from computers, a lesson for programmers accustomed to strict correctness. (12:16)
"We could all deal with a little more humility... That's, I think, a lot of where the turnaround for me came out of."
— Steve Klabnik [14:19]
3. Redefining What Matters in Coding Practices
- Reevaluating industry dogmas: With agentic AI, many previously sacrosanct practices (such as strict code quality standards, DRY, Agile frameworks, or the “mythical man-month” rule) may lose relevance or require reinvention. (20:27, 22:59)
- Different needs for AI/human collaboration: What makes “good code” for language models is not always the same as for humans. (20:27)
- Experimentation phase: The entire industry is in a “trying wings and falling off buildings” phase; mistakes are part of finding new truths. (20:25)
"This is not a time for shutting off ideas. This is a time for reexamining the beliefs that we have—and I say beliefs, because they are beliefs."
— Steve Klabnik [20:33]
4. Agentic Coding, Human-AI Teaming & Project "Roux"
- Project Roux’s origin: Combining a lifelong desire to work on a compiler/language with the realization that AI agents (like Claude) can handle much of the tedious or mechanical work, freeing Steve to focus on architecture and design. (29:22)
- The experiment: Seeing if it's possible to create a brand-new language, in a way “training” the LLM as the language is invented, and pushing agentic AI tools well beyond blog app/CRUD code. (39:07)
- Iterative learning: Early versions floundered; success required better project structure, validation pipelines, and developing skill at "programming with AI."
- Design philosophy: Roux aims to sit between Go and Rust, balancing type safety and expressiveness with speed and rapid development, and takes inspiration from several modern languages. (41:15, 44:24)
- Persistence vs. perfection: Steve candidly says the project is on hold (“my '69 Chevy in the garage”), and its value lies as much in exploring and demonstrating concepts as in producing a deliverable language. (51:19)
"I wanted to invent a new language and make Claude write a ton of it because I wanted to see how good it would be at using a language that literally did not exist before this project happened."
— Steve Klabnik [38:07]
5. Practical Lessons from AI-Human Pairing
- AI usage as a skill: Effective agentic coding demands learning a new set of skills and work patterns, akin to mastering a new editor like Vim. (33:30)
- Shifting attitudes on code cleanliness: Willingness to tolerate temporary code duplication or mess if tools can later aid in cleanup; rethinking when and how to optimize. (45:25)
"I am more willing to commit minor heresies into my code base and then fix them later."
— Steve Klabnik [47:02]
6. Broader Effects on Team Collaboration and Culture
- Role of process and structure: Many current software development practices exist to compensate for human limitations; their role in a world of AI agents is up for debate. (22:59, 24:21)
- Agent quality and velocity: The biggest future challenge is striking the balance between maximizing AI-driven productivity (“letting Claude merge PRs”) and maintaining quality.
"There is so much velocity to be gained by letting Claude merge PRs… But how do we maintain quality in that universe?"
— Steve Klabnik [53:36]
Notable Quotes & Memorable Moments
- "We have been talking about these tools... there’s a disagreement about the facts, and you can’t, I feel like, get to a higher level of discussion or consensus without understanding what the facts are." — Steve Klabnik [05:03]
- "A lot of software developers are like, ‘No, this tool is wrong and is bad, and if you get value out of it that means you’re stupid. Not that you know something I don’t.’" — Steve Klabnik [13:46]
- (On code quality): “What is good code to an LLM may not be good code to a human.” — Steve Klabnik [20:27]
- “Adding more people to a project makes it slow down—that’s a sacred cow… but if that’s not true anymore, that changes a lot of other things.” — Steve Klabnik [25:33]
Important Timestamps & Segment Highlights
- [02:56 – 06:58] Steve’s background and change in perspective
- [08:09 – 15:47] How family and real-world use cases challenged his skepticism & sparked a mindset overhaul
- [17:29 – 22:53] Humility, transferable skills, and reevaluating software beliefs
- [22:59 – 28:49] The future of developer processes, impact of AI on practices like Agile, pull requests, code reviews
- [29:22 – 41:28] The birth, purpose, and design philosophy of Roux; lessons from AI-driven compiler design
- [45:25 – 50:11] How human-AI teaming experiments inform workplace practices and thinking on code "mess" and refactoring
- [51:19 – 54:22] Speculations on the future: velocity vs. quality, open questions for sustaining high-performance, high-trust agentic teams
Wrap-up: Lessons, Predictions, and the Road Ahead
- Steve hopes Roux will remain a playground for ideas and that its concepts will spread, even if it isn't a mainstream language.
- The central open question for software: how do we "let go" enough to benefit from the massive speedup AI offers, while ensuring safety, quality, and trust?
- The entire field must approach current changes with humility and a willingness to experiment—and discard—longstanding dogmas.
"The software engineering question of right now is: how do we maintain quality in [an agentic] universe?"
— Steve Klabnik [54:04]
For listeners:
This episode is a must-listen for software developers, tech leaders, and anyone interested in navigating the shift to AI-augmented workflows with open eyes, critical thinking, and a sense of possibility. The combination of personal narrative, technical experimentation, and cultural critique makes the discussion both practical and forward-looking.
