Better Offline – CZM Rewind: The Truth About Software Development with Carl Brown
Date: November 26, 2025
Host: Ed Zitron (CZ Media)
Guest: Carl Brown (Veteran Software Engineer, YouTuber @ InternetOfBugs)
Overview
In this episode, Ed Zitron speaks with veteran software engineer Carl Brown to dig into the realities of software development, the myths perpetuated by tech industry hype, and the current and future impacts of Large Language Models (LLMs) and generative AI on software engineering. The conversation unpacks the actual work of software developers, misconceptions about AI "replacing" programmers, the practical limitations of AI-generated code, and deeper industry and management issues that are eroding long-term software quality and professional culture.
Key Discussion Points & Insights
What Does a Software Developer Actually Do?
-
Definition and Core Tasks
- Software developers primarily solve problems by writing code that instructs computers to perform tasks ([02:46]).
- However, coding is a minority of the job—often between 10% and 25% of the actual work for experienced engineers ([03:25]).
- Quote:
“As a general rule, I would say maybe between 10% and 25%.” – Carl Brown [03:25]
- Senior devs spend much more time communicating, understanding requirements, and planning than just coding.
-
Early Career Differences
- New graduates spend more time coding because they're handed specific, smaller tasks ([03:51]).
The Hype and Reality of AI and LLMs in Software Development
-
Can LLMs Replace Coders?
- At the junior level, LLMs can approximate the work by breaking tasks into small chunks, but LLMs don’t improve over time, while humans do ([04:29]).
- LLMs lack the long-term thinking and context crucial to real software development, which is about building systems incrementally and maintaining context over many days and iterations ([04:46]).
- Quote:
“But past that, it doesn’t do a good job of being able to do any kind of long-term thinking. And that’s largely the job.” – Carl Brown [04:46]
-
Measuring AI’s Real Contribution
- Claims like "30% of code at Meta/Google is written by AI" are misleading; most "AI-written" code is accepted autocomplete suggestions (typos, standard code blocks), not creative work ([05:50]).
- Quote:
“If all that counts as AI, then what percentage of your stuff is written by AI?” – Carl Brown [05:50]
-
Problems With AI-Generated Code
- LLMs tend to create redundant, inconsistent code, making maintenance difficult ([07:07]).
- Example: multiple ways to fetch data from a server are generated instead of one reusable block.
- This leads to security headaches and debugging nightmares: finding which duplicated code block is the problem ([07:56]).
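The duplication problem can be sketched in a few lines (a hypothetical Python illustration, not code from the episode; `api.example.com` and all function names are invented):

```python
import json
import urllib.request

# Anti-pattern: each AI-completed call site gets its own one-off fetch logic,
# so timeouts, auth, and error handling drift apart over time.
def get_user(user_id):
    with urllib.request.urlopen(f"https://api.example.com/users/{user_id}") as r:
        return json.loads(r.read())

def get_orders(user_id):
    resp = urllib.request.urlopen("https://api.example.com/orders?user=" + str(user_id))
    data = resp.read().decode("utf-8")
    resp.close()
    return json.loads(data)

# Refactor: one reusable helper, so cross-cutting concerns live in exactly
# one place instead of being duplicated (slightly differently) per call site.
def fetch_json(path, timeout=10):
    with urllib.request.urlopen(f"https://api.example.com/{path}", timeout=timeout) as r:
        return json.loads(r.read())

def get_user_via_helper(user_id):
    return fetch_json(f"users/{user_id}")
```

When a security fix (say, a TLS or header change) is needed, the second shape requires editing one function; the first requires finding every duplicated variant.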
-
Security Concerns
- LLMs don’t make all code look alike; their randomness produces diverse implementations of the same task, which can make behavior harder to predict and audit ([08:11]).
-
What Are LLMs Actually Good For?
- Great for small, well-defined, self-contained tasks or when the developer needs a quick syntax reminder in an unfamiliar language ([08:52]–[09:43]).
- Quote:
“So they are decent at when you know what you want and what you want is a fairly simple self-contained thing... it can type it faster than you can.” – Carl Brown [08:52]
-
Not Good For:
- Building complex systems, where context, design, maintainability, and security matter ([09:47]).
- Quote:
“Oh, not at all.” – Carl Brown [09:47] (on trusting LLMs to build a full package)
-
Executives vs. Engineers
- AI tools are often being pushed by executives, not engineers, as cost-saving or “innovation” measures, regardless of actual engineering value ([10:38]).
- Quote:
“The executives are all really excited about it and none of us are.” – Carl Brown [10:58]
-
Software Engineering: Misconceptions and Challenges
-
Misunderstood Job
- Most of the job is understanding and defining the problem, not just coding ([12:08]).
- Debugging and long-term thinking are core, and LLMs/GPTs cannot replicate this process.
- Quote:
“I talk about it like LLMs or generative AI is good at solving riddles, but actual software development is more like solving a murder.” – Carl Brown [13:08]
-
The Calamity of AI-Generated Code for the Next Generation
- Younger engineers may grow up not learning the underlying "why" and the infrastructure side of software, instead just “filling in blocks” ([13:19]–[14:48]).
- Quote:
“...a generation pumping the Internet and organizations with sloppier code.” – Ed Zitron [13:19]
- Code is buggier and weirder, making debugging harder for future engineers ([15:16]).
-
Hiring Crisis & Broken On-Ramp
- Entry-level hiring is stagnating as management assumes AI can replace junior devs ([14:03]), worsening the long-term talent pipeline.
Empirical Evidence: Declining Code Quality
-
Code Churn Up, Quality Down
- Studies show “code churn” (lines that are quickly rewritten after being introduced) has increased since AI coding tools like GitHub Copilot proliferated ([16:32]–[17:44]).
- Indicates more code is being written that needs immediate revision—sign of poorer quality.
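The churn metric those studies use can be sketched as follows (a toy Python illustration with invented data; real measurements would be derived from `git log`/`git blame` history):

```python
from datetime import datetime, timedelta

# Each record: when a line was first added and, if ever, when it was next
# rewritten or deleted. These dates are made up for illustration.
lines = [
    {"added": datetime(2024, 1, 1), "reworked": datetime(2024, 1, 5)},
    {"added": datetime(2024, 1, 1), "reworked": None},
    {"added": datetime(2024, 1, 2), "reworked": datetime(2024, 3, 1)},
    {"added": datetime(2024, 1, 3), "reworked": datetime(2024, 1, 10)},
]

def churn_rate(lines, window_days=14):
    """Fraction of added lines rewritten or deleted within the window."""
    churned = sum(
        1 for ln in lines
        if ln["reworked"] is not None
        and ln["reworked"] - ln["added"] <= timedelta(days=window_days)
    )
    return churned / len(lines)

print(churn_rate(lines))  # 2 of 4 lines reworked within two weeks -> 0.5
```

A rising value of this ratio is the "more code needing immediate revision" signal the episode describes.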
-
Maintenance Nightmares
- AI-generated code encourages proliferation of inconsistent approaches to the same problems rather than refactoring and code reuse ([19:07]).
Inside GitHub and Modern Developer Workflow
- How developers use GitHub ([23:20])
- Developers create and test code locally, then “push” changes to GitHub, open pull requests, and go through code review and automated checks.
- The LLM industry obfuscates these terms and workflows, contributing to hype and confusion.
The Devaluation of Software Engineering
-
AI hype minimizes the entire discipline
- Management treats coding as “just typing,” ignoring the critical work of design, requirements clarification, architecture, maintenance, and long-term thinking ([25:03]).
- Quote:
“They're devaluing all of the stuff that's not just hacking code.” – Carl Brown [25:03]
-
Rise of “Vibe Coding”
- The trend of prompting an LLM for code instead of understanding what is being built or the consequences of changes ([25:40]–[27:08]).
- Good for prototypes or one-off tools, bad for production or code that will be maintained.
- Quote:
“Vibe coding is great for a thing that you’re going to do once and then throw away.” – Carl Brown [26:49]
Security and Quality: A Growing Disaster
-
Adversarial Internet, Security Risks
- Unlike a generated story, code placed online is attacked by malicious actors ([27:49]–[28:17]).
- Quality assurance (QA) has been deprioritized or defunded in some large companies—creating risk ([28:33]).
-
Accumulating “Scar Tissue”
- As AI-generated code proliferates with bugs and vulnerabilities, a massive remediation effort looms when the cracks start showing ([29:40]).
-
Known Risks and Secure Coding
- Secure code requires careful control of user inputs, file access, and data permissions. AI code is likely to miss these checks, leading to exploits and data breaches ([30:55]–[31:54]).
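One concrete instance of the input-control checks described above (a minimal hypothetical sketch; the `uploads` directory and `safe_open` are invented names): confining user-supplied filenames to a known base directory defeats path traversal, a check that naively generated code often omits.

```python
from pathlib import Path

# Base directory that user-supplied names must stay inside (hypothetical).
BASE_DIR = Path("uploads").resolve()

def safe_open(user_supplied_name):
    """Resolve a user-supplied filename, refusing anything that escapes BASE_DIR."""
    candidate = (BASE_DIR / user_supplied_name).resolve()
    # The check generated code tends to skip: a name like "../../etc/passwd"
    # resolves to a path outside the allowed directory.
    if not candidate.is_relative_to(BASE_DIR):
        raise PermissionError(f"path escapes upload dir: {user_supplied_name}")
    return candidate

safe_open("report.txt")        # ok: stays inside BASE_DIR
# safe_open("../../etc/passwd")  # raises PermissionError
```

`Path.is_relative_to` requires Python 3.9+; the same permission idea applies to database access and other user-controlled inputs.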
-
Workflow and Culture: Agile, Churn and Management
-
The “Agile” Shift and Short-Termism
- Agile software management tries to manage uncertainty but encourages a focus on short-term deliverables ([32:32]–[34:55]).
- Leads to loss of long-term vision, planning, and architecture.
-
Scaling Problems
- As organizations grow, layers of management increase, diluting technical leadership and cross-team coordination ([36:15]–[38:06]).
- This is as much a problem at Google/Amazon as elsewhere.
- Recent “enshittification” (management over product/engineering) is degrading engineering culture ([37:21]).
-
The AI “FOMO” Effect
- Orgs adopt AI en masse out of fear of missing out, not because AI delivers real value ([39:12]).
- Quote:
“Might as well throw spaghetti at the wall and see if it sticks, because it might kind of mentality is pervasive at the moment.” – Carl Brown [39:12]
Management Delusions and Workplace Stress
-
Layoffs, AI Productivity Promises, and Stress
- AI is being used to justify layoffs and hiring freezes; existing engineers are expected to increase output ([45:18]–[55:32]).
- Hype justifies executive decisions from layoffs (Duolingo, Klarna, BP) to mandated AI tool use.
-
Impact on Working Engineers
- Anxiety is rampant; even seasoned engineers wonder if their teams will be cut next ([54:34]).
- More “tickets” (units of work) are being demanded per developer because of supposed AI efficiency ([55:32]).
- Quote:
“We expect more tickets closed per two week period... because we're giving you this AI now.” – Carl Brown [55:32]
- Measurement systems (velocity graphs) now act as pressure tools.
-
Executives vs. Reality
- The people making strategic decisions about software and workforce are often not technical and don’t understand developer realities ([52:40]).
AI Agents Won’t Fix It — Or Replace Engineers
-
Reasoning Models Haven’t Changed the Game
- Newer “reasoning” models are mildly better (“don’t make as many stupid mistakes”) but not transformative ([49:57]).
- AI does not possess the contextual or organizational intelligence for robust software ([46:12]).
-
The Human Factor Is Irreplaceable For Now
- AI coding tools can multiply productivity in narrow, specific cases, but building and maintaining real-world software, especially secure systems, fundamentally requires context, experience, and judgment ([57:17]–[58:19]).
- Quote:
“The best analogy I've always found to writing code is actually just writing, right?” – Carl Brown [57:17]
- LLM-based “AI agents” (like Devin) introduce new risks by generating code that is not robust or secure; they could expose more of the Internet to bugs and security attacks ([58:27]).
Advice for New Engineers
-
What Should New Developers Focus On?
- Learn to use AI-driven code editors: it’s an expected skill. Understand their strengths and limitations ([60:12]).
- Learn to rigorously test and evaluate the software output — don’t trust autogenerated code blindly.
- The hiring process is currently broken, dominated by bots and irrelevant interview gates.
- Focus on skill areas AI cannot replace: long-term design, debugging, security, and context-sensitive problem-solving.
- Use AI for disposable code or prototypes (“vibe coding”), but real, maintainable projects need intentional architecture.
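The "rigorously test the output" advice can be made concrete (a hypothetical sketch; `slugify` is an invented stand-in for an AI-suggested function): treat generated code as untrusted until it survives tests you wrote yourself, including edge cases.

```python
# Imagine this body was pasted in from an AI suggestion.
def slugify(title):
    return "-".join(title.lower().split())

# Tests written by the developer, not the model: the happy path plus the
# edge cases (empty input, irregular whitespace) a suggestion might mishandle.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("") == ""
    assert slugify("  spaced   out ") == "spaced-out"

test_slugify()
```

If a generated function fails a case like these, that is the moment to read and understand it rather than accept the suggestion.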
-
Where Can AI Actually Be Useful?
- For tasks with little consequence or clear, testable goals (e.g., website layout, content variants, A/B testing).
- Not for critical infrastructure or long-lived systems.
- Quote:
“The only people that are really going to be replaced anytime soon are people that either weren't doing a great job to start with or people whose bosses don't understand what they were doing to the point that the boss thought that what they were doing mattered.” – Carl Brown [65:20]
-
Notable Quotes & Memorable Moments
- "Being able to understand what the problem actually is and how it needs to work… all that work is basically not important [to those pushing AI tools]." – Carl Brown [25:03]
- "Actual software development is more like solving a murder." – Carl Brown [13:08]
- “A lot of this seems to be really, you know, we don't like dealing with the prima donna programmer.” – Carl Brown [48:54]
- “There's a lot of artificial productivity requirement increases... because we're giving you this AI now so you ought to be more productive.” – Carl Brown [55:32]
- "You can get ChatGPT to spit out a few paragraphs for you, right? But you end up with the legal briefs that have the story that's made up..." – Carl Brown [57:17]
Selected Timestamps for Key Segments
- [02:46] — What is software development, really?
- [03:25] — Only 10–25% of job is writing code
- [04:29] — Can LLMs replace coders? Why not?
- [05:50] — Myths about “30% of code being written by AI”
- [07:07] — AI causes copy/paste maintenance and debugging nightmares
- [13:08] — “Solving riddles vs. solving a murder” analogy
- [14:03] — Hiring crisis for junior devs, exacerbated by AI assumptions
- [16:32] — Code churn as a sign of declining code quality
- [25:03] — Devaluation of software engineering work by AI hype
- [26:49] — What is “vibe coding” and its limits?
- [28:33] — Companies ditching QA, adding risks
- [32:32] — How Agile shapes short-term thinking, for better or worse
- [39:12] — LLM “FOMO” and org chaos
- [45:18] — How management now plans headcount and projects with AI “savings” in mind
- [55:32] — The pressure from above: AI must increase output
- [57:17] — The analogy: code writing ≈ writing, AI ≈ basic autocomplete
- [60:12] — Advice to new engineers: learn AI but focus on depth, not just output
Tone & Language
The discussion maintains an accessible, frank, often skeptical tone—edged with sarcasm and dry humor. Both Ed and Carl are incisive, sometimes exasperated with hype and managerial ignorance, but rigorous about the limits and purposes of tools like LLMs in the real, messy world of software engineering.
Final Thoughts
Ed Zitron and Carl Brown’s deep dive is a reality check for anyone believing in the AI coding hype. While LLMs and generative tools are here to stay—and can help with specific, low-stakes tasks—the true skills of software engineers extend far beyond writing code. Context, critical thinking, design, maintenance, and especially security are irreplaceable. The episode warns that without a reckoning, management's tech-utopian dreams are likely to leave tech organizations, software culture, and ultimately all users, much worse off.
Guest links:
- Carl Brown: InternetOfBugs - YouTube
Host:
- Ed Zitron: Better Offline
