Embracing Digital Transformation – Episode #333 AI Game Jam 2026: AI Augmented Game Development
Host: Dr. Darren Pulsipher
Guests: Matthew Pulsipher, Jacob Pulsipher, David Pulsipher
Date: March 13, 2026
Episode Overview
In this episode, Dr. Darren Pulsipher invites his three sons, Matthew, Jacob, and David, to discuss their experience participating in a family AI Game Jam over President’s Day weekend. The challenge: use generative AI tools to design and build a video game within roughly 36 hours on a tight AI budget. The discussion covers their approaches, tools, and lessons learned, as well as the broader implications of AI-augmented development, showing how AI is not replacing human creativity and orchestration but enabling new forms of rapid, innovative, and accessible software creation.
Key Discussion Points & Insights
1. Origins and Structure of the AI Game Jam
- Genesis: The idea sprouted from the family gaming chat, leading to a well-coordinated President’s Day hackathon focused on the theme “Everything is Connected,” chosen with help from the Claude and ChatGPT AI models.
“We picked Claude’s [theme] because the ChatGPT one would have been very, very hard to do.” (Matthew, 02:02)
- Rules:
- Start at 9 pm; finish by midnight the next day (~30–36 hours).
- Any AI tool allowed; spending over $20 resulted in score penalties.
- Judged by AI usage, originality, fun, connection to the theme, and cost efficiency.
2. Diverse Backgrounds, Varied Approaches
Dr. Pulsipher emphasizes the unique mix of skills and experiences represented by his three sons:
- Matthew: Product Manager, relies heavily on AI for implementation, using a “Team of Agents” feature.
- Jacob: Hardware/Electrical Engineer, uses GitHub Copilot Pro and focuses on maximizing browser-based engineering with minimal manual coding.
- David: Freshman Mechanical Engineer, methodical and prescriptive, builds a bespoke agent framework for task orchestration.
3. Tool Stacks and AI Choices
Each participant chose their own combination of AIs based on budget, prior subscriptions, and the technical requirements of their games:
- David: Maximum frugality—$20 for Claude Code (Sonnet), plus Nano Banana for sprite art and minor Photoshop tweaks. (04:32)
- Jacob: GitHub Copilot Pro ($39/month), mainly Opus model, some Nano Banana (free tier).
- Matthew: Leaned in—Claude Max 100, Nano Banana (for rapid image generation), some ElevenLabs for voice clips, and Sonnet for MCP calls.
4. Development Process: How AI Was Used
- Matthew:
- Used dictation to rapidly translate ideas to AI.
- Asked Claude Opus to produce a full game design doc, specifying only real-world constraints.
- Enabled Claude’s “Team of Agents” for implementation—no manual code written.
- Used agentic features for rapid prototyping, iterating UI/UX, and handling complexity.
- Let AI handle subsystem logic and state management, e.g., via React and Vite.
“On the game itself, I didn't write a single line of code myself.” (Matthew, 10:09)
“It felt like I was working with a thinking partner.” (Matthew, 25:30)
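The episode’s recurring framing of game subsystems as plain functions over shared data objects (Matthew’s “all these subsystems are just functions affecting data objects”) can be sketched minimally; the state fields and mechanics below are invented for illustration, not taken from his actual game:

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    """Single data object that every subsystem reads and writes."""
    energy: int = 100
    resources: dict = field(default_factory=lambda: {"ore": 0})
    tick: int = 0

def harvest(state: GameState) -> GameState:
    """Resource subsystem: a plain function from state to state."""
    if state.energy >= 10:
        state.energy -= 10
        state.resources["ore"] += 1
    return state

def advance_tick(state: GameState) -> GameState:
    """Time subsystem: regenerate a little energy each tick."""
    state.tick += 1
    state.energy = min(100, state.energy + 2)
    return state

# The game loop is just the subsystem functions applied in order.
state = GameState()
for _ in range(3):
    state = advance_tick(state)
    state = harvest(state)

print(state.tick, state.energy, state.resources["ore"])
```

A design like this plays to AI strengths because each subsystem can be specified, generated, and tested in isolation.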
- David:
- Built a suite of custom agents: overseer (testing), architect (skeleton/UI), mechanic (details), integrator (assets).
- Provided exact algorithms, probabilities, and design via markdown docs.
“I primarily had four different agents, all with different tasks and tools… The overseer would test the code every single time and delegate tasks…” (David, 10:29)
- Used Sonnet for consistency across agents.
- Focused on a civilization simulator sandbox game (“Playing God”).
- Jacob:
- Chose Phaser 3 engine for browser compatibility.
- Used Copilot to generate all foundational structure from prompts.
- Iteratively designed puzzles on paper, then had AI translate photos/designs into functioning digital levels.
- Experimented with AI’s ability to generate art, maps, and MIDI music, but found limits in its spatial reasoning.
“AI alone could not make me a good game. I had to be the architect of that.” (Jacob, 20:56)
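Jacob’s paper-to-digital workflow amounts to mapping a drawn grid onto tile data. A hypothetical sketch follows (the tile legend and level layout are invented, and the Phaser 3 loading step is omitted):

```python
# Parse an ASCII sketch of a level into tile coordinates, the kind of
# structure a browser engine such as Phaser 3 could load as a tilemap.
LEGEND = {"#": "wall", ".": "floor", "P": "player", "X": "exit"}

SKETCH = """
####
#P.#
#.X#
####
""".strip()

def parse_level(sketch: str) -> dict:
    """Map (x, y) grid coordinates to tile types."""
    tiles = {}
    for y, row in enumerate(sketch.splitlines()):
        for x, ch in enumerate(row):
            tiles[(x, y)] = LEGEND[ch]
    return tiles

level = parse_level(SKETCH)
print(level[(1, 1)], level[(2, 2)])
```

The human still designs the puzzle; the AI’s job is the mechanical translation from sketch to data.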
5. Comparing AI Models & Results
- Opus (Claude): “Leagues beyond everything else” for artistic vision, design, and fun factor.
- GLM5 and Codex: Decent technical results, but less creative. Codex “exactly to spec with no artistic flair.” (Matthew, 09:25)
- Gemini Flash/Pro: Inferior tool call capabilities; unimplemented/broken features.
- Nano Banana, ElevenLabs: Used mainly for rapid media asset creation; results passable, not always polished.
6. Lessons Learned & Human Role in AI Development
- AI as Accelerator, not Replacement:
- All agreed AI enabled far more complex and rapid development than possible alone.
- However, “AI alone could not make me a good game… I had to be the architect… AI is here to augment us, to execute our vision.” (Jacob, 20:56)
- Dr. Pulsipher: “AI is not going to replace us anytime soon… It sure can get rid of some of the grunt work… but it doesn’t know what to code unless I can tell it.” (28:42, 29:32)
- Importance of Orchestration and Vision:
- Core game systems, algorithms, and design vision still required deliberate human intervention.
- Markdown docs and step-by-step breakdowns proved vital for precise control and quality.
- Iterative playtesting and human feedback loops improved AI results.
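A markdown spec in this spirit might look like the following; the subsystem, numbers, and acceptance checks are invented examples, not taken from any of the actual games:

```markdown
## Combat subsystem

1. Each unit attacks once per tick.
2. Hit chance: 70% base, +5% per level difference (capped at 95%).
3. On hit, damage = attack - defense (minimum 1).

### Acceptance checks
- A level-5 unit attacking a level-1 unit hits 90% of the time.
- Damage is never negative.
```

Exact numbers and testable checks like these give the agents little room to improvise incorrectly.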
- Subject Matter Expertise Matters:
- Leveraging real-world knowledge and constraints (“domain modeling”) gave AI a strong baseline and reduced confusion.
“If you're modeling stuff it already knows about, it doesn't have to go and find that stuff out…” (Darren, 18:17)
- Games reflected creators’ backgrounds and strengths (product management, engineering, etc.).
- Challenges Remain:
- Spatial reasoning remains weak in current AI (“still cannot do 3D CAD modeling for me”). (Jacob, 27:31)
- AI-generated music/art: technically feasible, but not always up to human-published standard.
7. Broader Implications for Digital Transformation
- Skill Development: Effective use of AI in software development is a learnable, iterative skill, not magic.
“There is some skill there, and I don't know how to develop it other than just using it obsessively and learning, at least for now.” (Matthew, 33:11)
- Accessibility: Anyone with a vision can now build software—democratization of creation.
“If anyone has a vision for a game, they can do it… Anyone can do this kind of stuff.” (Jacob, 33:01)
- Future Game Jams: Potential to involve more participants, including other siblings, further democratizing the event.
Notable Quotes & Memorable Moments
- On Human–AI Collaboration:
“AI alone could not make me a good game... it didn’t understand the puzzles or the rules. I had to create those.” (Jacob, 20:56)
- On Creative Flow with AI:
“It felt like I was working with a thinking partner... I’m bouncing ideas off of it and it’s bouncing ideas back.” (Matthew, 25:30)
- On Subject Matter Expertise:
“All these subsystems are just functions affecting data objects... I feel like I designed mine around AI's strengths.” (Matthew, 24:41)
- On Looping in Beginners:
“If anyone has a vision for a game, they can do it. It’s just a matter of helping them realize it's available to everyone now.” (Jacob, 33:01)
- On AI as Augmentation, Not Replacement:
“When people say that AI is here to replace us, to take our jobs, I think we just need to look at it from the approach that this is to augment us... to execute our vision.” (Jacob, 21:33)
Important Timestamps
- [01:46] Game Jam structure and theme (“Everything is Connected”)
- [04:32] Discussion of each participant’s AI tool budget and choices
- [09:24] Comparison of Claude Opus, GLM5, Codex, Gemini
- [10:09] Matthew: Letting AI write the whole codebase with “Team of Agents”
- [10:29] David: Deployment of a four-agent AI framework
- [14:12] Jacob: Paper prototypes and AI translation into digital assets
- [17:42] Recognizing complexity: “No way I could have done this game in 24 hours without AI.”
- [18:37] David describes his civilization simulator and agent orchestration
- [20:49] Major learnings: AI cannot replace human vision and direction
- [24:41] Aligning game design with AI strengths yields best results
- [26:33] Prescriptive vs. creative approaches with AI agents
- [27:31] AI's spatial reasoning limitations (“It cannot handle 3D CAD modeling.”)
- [29:32] High-level human orchestration essential for quality results
- [33:01] Reflections on skill, accessibility, and democratization
Summary Takeaways
- Generative AI is a powerful accelerator for creative and technical work—when paired with clear vision, orchestration, and iterative human guidance.
- Each person’s approach to leveraging AI reflects their own personality, background, and domain expertise—even among siblings.
- AI will not replace creative, orchestrative, or architectural roles (yet), but dramatically lowers the barriers to entry, making complex software creation accessible to many.
- Skillful use of AI is itself becoming a core competency, requiring practice and self-directed learning.
- As the barrier to software creation drops, so does the intimidation factor—anyone with imagination and a willingness to learn can participate in digital transformation.
For more on AI-augmented development and digital transformation, check out Dr. Darren Pulsipher’s upcoming book “Becoming AI Augmented,” Q3 2026.
