Intelligent Machines 856: "SecretlyBriti.sh"
Podcast: All TWiT.tv Shows (Audio)
Host: Leo Laporte
Co-hosts: Paris Martineau, Jeff Jarvis
Guest: Steve Yegge (ex-Google/Amazon, creator of Gastown)
Date: February 5, 2026
Overview
This episode is a deep dive into the evolution of AI coding agents, centered on Gastown, a Claude Code add-on by Steve Yegge, and on the rapidly changing world of AI-driven software development. The hosts and guest explore what orchestrated coding agents mean for developer workflow, memory, safety, and productivity, with insight, humor, and a dash of skepticism about both the bleeding edge of AI tech and its broader industry impact.
Main Discussion Segments & Timestamps
1. Introductions & Recent AI Developments (00:00–05:46)
- Leo Laporte recaps the rapid-fire developments in AI over the last month, especially around "Claude Code" models and the Vibecoding movement.
- Co-hosts Paris Martineau (Consumer Reports) and Jeff Jarvis (journalism professor, CUNY) join.
- Leo introduces Steve Yegge, an ex-Amazon/Google engineer known for a leaked 2011 internal Google memo and for creating Gastown.
Notable Quote
"This is not for everyone... don't touch it or you'll die."
– Steve Yegge on Gastown's accessibility [05:23]
2. What is Gastown? – Yegge's Vision for AI Code Orchestration (05:46–16:27)
- Steve Yegge explains Gastown as "Claude Code running Claude Code—agents running agents, turning it into a factory." [05:54]
- It operates on the concept of team orchestration: an agent that runs other agents for delegated tasks.
- Gastown is already being used in Fortune 100 companies, but Yegge cautions it's research-grade and risky for now.
- Discussion of the accelerating pace of model releases from Anthropic.
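The "agents running agents" idea can be pictured as a small orchestration loop: one agent splits a task and delegates the pieces to worker agents. The sketch below is purely illustrative; the `Agent` and `Orchestrator` classes and the "mayor"/"crew" names are invented here and are not Gastown's actual API.

```python
# Hypothetical sketch of "agents running agents": an orchestrator agent
# splits work into subtasks and delegates each to a worker agent.
# None of these class or method names come from Gastown itself.

class Agent:
    def __init__(self, name):
        self.name = name

    def run(self, task):
        # A real agent would call an LLM here; we just report completion.
        return f"{self.name} finished: {task}"

class Orchestrator(Agent):
    def __init__(self, name, crew):
        super().__init__(name)
        self.crew = crew  # worker agents this "mayor" delegates to

    def run(self, task):
        # Fan the task out across the crew, one subtask per worker,
        # and collect the results.
        subtasks = [f"{task} (part {i + 1})" for i in range(len(self.crew))]
        return [w.run(s) for w, s in zip(self.crew, subtasks)]

mayor = Orchestrator("mayor", [Agent("crew-1"), Agent("crew-2")])
results = mayor.run("refactor auth module")
```

In Yegge's description the "factory" aspect comes from this delegation being recursive: any worker could itself be an orchestrator running further agents.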
Notable Quotes
"You’ve gone from using a rake to rake leaves to using a leaf blower and stuff blows around a little bit more until it converges on being correct."
– Yegge on AI code agents [10:06]
"Gastown is a bit of a swamp thing right now. It sort of oozes rather than whirs... it requires a lot of manual steering."
– Leo, quoting Yegge's blog [10:31]
3. Gastown Architecture – Roles, Orchestration, and Beads (13:25–16:52)
- Gastown models a town with hierarchical roles: the human as overseer, Claude as "mayor," plus "deacons" and "polecats" (a Mad Max reference), all managing delegated coding tasks.
- Many roles in Gastown are workarounds for current LLM model limitations, which may disappear as models improve.
- Key innovation: Beads, an issue tracker and memory system integrated with the coding process. It stores issues, plans, and knowledge graphs, giving agentic workflows a persistent memory.
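The core idea of Beads, an issue tracker that doubles as durable memory an agent can reload after its context window is lost, can be sketched minimally. This is a hypothetical illustration; the field names and `BeadStore` class are invented here and do not reflect the real Beads tool.

```python
# Hypothetical sketch of the "beads" idea: an issue tracker that doubles
# as persistent memory for agents. All names here are invented for
# illustration; the real Beads tool is not documented in this summary.
import json

class BeadStore:
    def __init__(self):
        self.beads = {}   # id -> issue/memory record
        self.next_id = 1

    def add(self, title, notes="", depends_on=None):
        bead_id = self.next_id
        self.next_id += 1
        self.beads[bead_id] = {
            "title": title,
            "notes": notes,
            "depends_on": depends_on or [],  # edges of a small knowledge graph
            "status": "open",
        }
        return bead_id

    def close(self, bead_id):
        self.beads[bead_id]["status"] = "closed"

    def dump(self):
        # Serialize so a fresh agent session can reload its "memory"
        # after the LLM's context window has been lost.
        return json.dumps(self.beads)

store = BeadStore()
plan = store.add("plan auth refactor")
tests = store.add("write tests", depends_on=[plan])
store.close(plan)
```

The point of the `dump` step is the one Yegge emphasizes: unlike the model's context window, the tracker survives between sessions, so continuity comes from the tool rather than the LLM.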
Notable Quotes
"Not only are [Anthropic] a real hive mind, it operates very differently from all other companies... a template for how I think most companies will become."
– Yegge on Anthropic [13:52]
"The only important part of Gastown is beads... a tool for agents... it gives them a memory."
– Yegge [15:40]
4. Coding with Claude Agents: Productivity and Cautions (16:52–31:28)
- Yegge likens agent coding to a "junior teammate" working incredibly fast, but still requiring lots of human review and higher-level thinking.
- Two phases of software: "construction" vs. "engineering"—AI offloads the construction but not the hard engineering/design.
- Role of memory/context: beads help overcome LLM context loss by providing continuity.
- Extensive discussion of safety, workplace productivity, and the risk of burnout with hyper-productive AI agents.
Notable Quotes
"If you're 10 times as productive with AI... you can work eight hours a day and give all that to your employer... or you can work for a half hour a day and be as productive as your peers and you've captured 100% of the value."
– Yegge [31:02]
5. Gastown in Practice: Workflows, tmux, and Real-World Tips (22:04–29:53)
- Practical tip: use tmux to multiplex terminal sessions when running several agents at once.
- Yegge’s workflow runs several Claude agents (a "mayor" plus "crew"), using beads to coordinate reviews and PRs, all automated.
- Warns of agent "vampiric" productivity pressures: “people are getting drained... creating the appearance of hyper-productivity that becomes a bar.”
6. State of AI Models – Why Claude Code Dominates, Model Comparison (25:59–28:26)
- The group marvels at the "event horizon" leap from earlier models to Claude Opus 4.5.
- Why Claude Opus models dominate the CLI and coding-agent use case; possible causes include a stronger focus on coding, better training data, and a self-propagating advantage.
Notable Quote
"The confidence comes from me trying to do stuff that was just too hard... 4.5 was the first one to finally do it."
– Yegge [27:34]
7. Industry and Security: AI Market Sell-off, LegalTech Disruption (51:01–56:56)
- Legal-tech and software stocks sell off as investors react to fears that coding AIs will displace key revenue streams (LexisNexis, Westlaw, LegalZoom, etc.).
- Panel debates whether AI is destroying or transforming established software.
8. Elon Musk’s K2 Civilization—AI Data Centers in Space (57:01–63:42)
- Discussion of Musk’s “fever dream” of deploying 1 million space-based data-center satellites to reach a “Kardashev Type II” civilization, i.e., one harnessing its star system’s entire energy output.
- The panel questions the engineering and environmental feasibility but notes the idea is prime material for investor hype.
9. AI Safety, Regulation, and New Media Frontiers (109:54–114:13)
- French prosecutors investigate Elon Musk's X (Twitter) and Grok for non-consensual AI-generated imagery.
- Wired exposes an AI toy chat system leaking kids’ conversations; general skepticism about poorly secured consumer AI.
10. AI and Society: Impacts, Hype, and Disillusionment (81:40–93:33)
- Lively argument: Is AI “human-level intelligence” or specialized, impressive autocomplete?
- Leo is thrilled by the “partner” experience, while Paris enumerates frustrations and limitations.
- AI models seen as stochastic, still not trustworthy for all domains, but useful as powerful tools.
Exchange
“If you don’t see this as a remarkable breakthrough, you’re kind of missing what’s happening.”
– Leo [83:15]
“There’s still a lot of kinks to work out and that's fine.”
– Paris [92:33]
11. Kids’ Online Safety, Social Media Bans (133:42–138:53)
- The panel reflects on experiments worldwide banning social media for under-16s.
- Raises the question: Are these bans helping, or are they merely an emotional, not data-driven, response?
Memorable Quotes
- “Remember, inference time is the most expensive time to do it because of context... so they will always be delegating to tools, to offloaded CPUs.” — Yegge [14:49]
- "You should really not use that." — Yegge (on OpenClaw) [18:33]
- “Beads is like an issue tracker. Me and Claude arguing for a long time... finally I said, I want issues and git, and it said, Well, I want SQL queries… we wrestled... and came up with beads." — Yegge [15:44]
- “It is madness. I actually think there's a vampiric effect happening. Something bad happening... we're creating this appearance of super hyper productivity." — Yegge [31:31]
- "It feels to me like it must be bad [for kids], so let's ban it." — Leo [135:02]
- “It’s the story of technology, isn't it?” — Leo [93:33]
Humor and Personality
- Ongoing joke about “I gave OpenClaw my Google credentials... and my credit card... and then woke up and deleted it all.” – Leo [19:09]
- Leo and Yegge bond over love of Lisp and Emacs. “Emacs is just perfect software.” — Leo [23:29]
- The origins/internal drama behind “Clawbot,” “OpenClaw,” and “Moltbot.”
- Recurring banter on generational/cultural divides in AI enthusiasm.
- Paris’ search for the domain secretlybritish.com (episode’s namesake). [146:52]
Key Takeaways & Insights
- Orchestrated AI agent coding (Gastown) is powerful but research-grade and risky.
- Model quality and specialization (e.g., Claude Opus 4.5) matter enormously in practical applications.
- Adding memory to agents (beads) is core to practical AI workflows; it counters the forgetting imposed by limited context windows.
- AI is changing the balance of construction vs. engineering in software, raising productivity but risks creating new pressures and burnout.
- Legal and knowledge work fields face disruption as LLMs eat into traditional software (LegalZoom, LexisNexis, etc.), causing market shakeups.
- Unbridled end-user agentic AI (OpenClaw) carries major privacy and safety risks.
- A growing cultural/experiential gap is opening between people actively using and reshaping their workflows with advanced AI vs. those left behind or frustrated by limitations.
- Society, media, and business must respond rapidly or risk obsolescence—not just from the tech, but from how quickly behaviors and markets shift.
Further Reading & Resources
- Steve Yegge on Medium: "Gastown: The Future of Coding Agents"
- Gastown [GitHub/discussion forum link TBD]
- “Vibecoding: Building Production-Grade Software with GenAI Agents” — Yegge & Gene Kim
- Martin Alderson, "Two kinds of AI users are emerging..."
- Nature: "Does AI already have human-level intelligence? The evidence is clear." (Eddie Kerning Chen) [Referenced at 86:22]
For more lively banter, in-jokes about Emacs, and a snapshot of the ongoing AI revolution, listen to the full episode or check out TWiT.tv/IM.