Summary – Scrum Master Toolbox Podcast: Pachinko Coding—What They Don't Tell You About Building Apps with Large Language Models
Guest: Alan Cyment
Host: Vasco Duarte
Date: October 8, 2025
Episode Overview
This episode dives deep into the practical realities—both thrilling and frustrating—of coding with Large Language Models (LLMs), as experienced by veteran agile consultant and certified Scrum trainer Alan Cyment. The conversation explores the promise and peril of "AI-assisted coding," the addictive loops developers face, and how abstraction levels are shifting in software creation. The episode is candid, quirky, and insightful, rich with analogies (from Thermomix kitchen gadgets to Japanese pachinko machines) and tips for mastering this emerging workflow.
Key Discussion Points & Insights
1. Defining AI-Assisted/Vibe Coding
- AI-assisted coding vs. traditional coding: Alan describes "vibe coding" (sometimes called AI-assisted coding) as largely relying on an LLM to output code, bypassing deep rational engagement with the software’s details (02:30).
- He likens its promise to a kitchen Thermomix: you just add ingredients, follow instructions, and supposedly get a gourmet meal—no real skills needed. But early experiences left him “disappointed,” with code that often didn’t even compile (03:20–04:14).
“Thermomix coding is here...And I kept becoming disappointed.”
— Alan, [04:06]
2. The "Pachinko Coding" Cycle: Addictive but Risky
The Gambling/Slot Machine Analogy
- Alan coins the term “pachinko coding,” after the addictive pinball-style gambling machines that fill Japan’s pachinko parlors, to describe the endless loop of trying, copying and pasting errors, modifying prompts, and hoping that “this time it will work” (10:10).
- The cycle induced a rush of dopamine—followed by frequent disappointment and wasted money and time.
“It was like a TDD cycle, but more of an addiction cycle...This time it's going to work. I know this time it's going to work.”
— Alan, [07:03]
The Price of Hope
- Sometimes he’d spend up to $20 a day chasing solutions; for small tasks, hiring a human developer could actually have been cheaper (13:19–15:09).
3. Hits and Misses: When LLMs Work and When They Don’t
Breakthrough Moments
- On rare occasions, LLMs surprised him with dramatic productivity gains. Alan recounts successfully building an ecommerce integration in a single day, far faster than he could have learned from scratch (16:19).
- He echoes Martin Fowler and the “English as the new programming language” abstraction, suggesting LLMs might herald a new modeling tier: moving from punchcards → assembler → high-level code → English prompts as the next abstraction (17:47–19:05).
“English is the new level of abstraction. So first we had punching cards...And now we could think in terms of real variables and then object orientation came...So each one of those was a new level of abstraction that allowed us to forget about thinking so much at that low level allowed us to think about bigger things.”
— Alan, [17:47]
Failures and Frustration
- LLMs frequently overcomplicate things, especially with UI-related code (CSS, events). Alan joked it was like letting a “Roomba try to paint your room”—often a mess (11:42).
- When using more expensive models, there was no guarantee of better results (11:37).
Adapting Practice: From Rage to “Mecha” Coding
- Alan introduces “Mecha coding”—where the developer is like a pilot in an exoskeleton: more powerful than ever, but still responsible for direction and quality (19:05).
- Viewing LLMs as a “function,” not a person, helps manage frustration; expecting brilliance leads to "rage coding". Instead, he treats the LLM as an extrapolation machine—good at what it’s seen before, unreliable otherwise (22:03).
4. Strategies That Work with LLMs
Agile-Inspired Micro-Steps
- Small, well-defined steps yield the best results—embracing TDD and YAGNI (“You Aren’t Gonna Need It”) principles (24:30, 37:47).
- Prompt LLMs to suggest options and clarify pros and cons before coding, using the model as a conversational consultant (28:11).
Purpose-Built Prompts and Process Hygiene
- Provide context with “conventions” Markdown files, set clear process standards (like always running integration tests, checking for errors, and refactoring for readability).
- Explicitly ask the LLM to “take two steps back and rethink,” especially when stuck. This can reset overcomplicated solutions (30:14–33:42).
“I wrote some, the prompt was something like take two steps back and rethink the whole way in which you are trying to develop this and try to develop this the simplest possible, most native basic YAGNI way of turning on checkboxes...And it worked.”
— Alan, [31:42]
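The episode doesn’t show Alan’s actual conventions file, but as a hypothetical sketch, a Markdown file combining the process standards mentioned here with the small-step and consultant-style prompting from the surrounding sections might look like:

```markdown
# conventions.md — project context for the LLM (hypothetical example)

## Process
- Work in the smallest possible steps; propose one change at a time.
- Write or update a test before changing behavior (TDD).
- After every change, run the integration tests and report any errors.
- Prefer the simplest solution that works (YAGNI); refactor for readability afterwards.

## Before coding
- List two or three options with their pros and cons, then wait for my choice.

## When stuck
- Take two steps back and rethink the whole approach from scratch.
```

Such a file would be attached to (or referenced from) each prompt so the standards don’t have to be restated every session.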
Be Realistic About LLM Weaknesses
- LLMs are better on some tech stacks than others (React Native/Flutter = “brittle”; Svelte = “solid” in his experience, [37:47–40:14]).
- Success correlates with the LLM’s exposure to the relevant tech stack in its training data.
5. Case Study: Building a Custom Birthday Reminder App
Alan describes building a “birthday and gift reminder” app:
- Past attempts (pre-LLM) failed due to complex integration/setup.
- With the LLM (“mecha coding”), he delivered a working version in under a day—without knowing Swift—by letting the AI explain options, and iterating incrementally (24:55–29:05).
- He overcame major bugs by prompting the LLM to reset and rethink, eventually achieving a robust result.
6. Notable Quotes and Memorable Moments
- On Addictive Loops:
“I felt like a servant…I was sort of like serving the machine by telling it, okay, this is wrong.”
— Alan, [06:24]
- On Finding Joy in Success:
“The moment the pachinko machine said I won and I had won...when it did work…it was, I really felt that something…perhaps this could be a new level of abstraction.”
— Alan, [16:19]
- On Realistic Use:
“I don't think this approach would ever work for enterprise grade software...For that kind of things, I think it can work.”
— Alan, [34:35]
- On LLMs as Bad Programmers:
“If you start to treat it like a person, it drives you crazy because it's so wise and so stupid at the same time.”
— Alan, [22:03]
7. Practical Tips for Effective LLM Coding
- Go small: Embrace micro-commits and smallest-viable requests.
- TDD/Integration: Incorporate continuous testing as part of the conversational prompt.
- Prompt discipline: Provide the LLM with comprehensive prompts outlining context and process standards.
- Refactor in steps: Always ask AI to suggest refactorings or simplifications.
- Leverage LLM as a consultant, not just a code generator: Query for tradeoffs, options, and explanations.
- Acknowledge tech-stack limitations: Know the LLM will perform unevenly depending on training data for your stack.
- Don’t expect LLMs to replace human craftsmanship: Use them to accelerate prototyping and tackle accidental complexity, not to architect enterprise-grade solutions blindly.
8. Recommended Resources
- Paul Hammond (LinkedIn): Inspirational for crafting comprehensive LLM prompts (40:56).
“I learned…the step that mostly helped me was reading and watching a guy called Paul Hammond on LinkedIn…his style and his insistence…on doing small steps that helped me.”
- Kent Beck: Mentioned for “augmented coding,” but Alan found Paul Hammond’s approaches more practical (40:56).
Timestamps for Important Segments
- [02:30] – Definitions: “Vibe coding”/Thermomix coding
- [07:03] – Addictive “Pachinko” coding loop
- [10:10] – Coining “Pachinko coding”
- [13:19] – Spending $20/day; rivaling human outsourcing cost
- [16:19] – First breakthrough: the pachinko “win” moment
- [17:47] – LLMs as a new level of abstraction
- [24:30] – “Mecha” coding and process hygiene
- [28:11] – Using LLM as consultant for options/tradeoffs
- [33:42] – Resetting and getting unstuck via prompts
- [34:35] – LLMs not (yet) for enterprise-grade software
- [37:47] – Small steps, TDD, integration, and CI
- [40:56] – Resource: Paul Hammond’s prompting strategies
Conclusion
Alan’s honest, witty account of “pachinko coding” serves as both a warning and a roadmap for developers plunging into LLM-based workflows. The magic happens when practitioners combine agile thinking (micro-steps, constant feedback, refactoring) with emerging prompting practices—treating the LLM as a powerful but unpredictable exoskeleton. While the full promise of “Thermomix coding” remains elusive, Alan’s experiments illuminate a path where English truly could become the next programming language—when paired with discipline, agility, and a pragmatic eye for tradeoffs.
Connect with Alan:
- LinkedIn: Alan Cyment
- Web: simmon.com (most content in Spanish)
End of summary.
