Podcast Summary: “A full software engineering teammate”: OpenAI product lead on getting the most out of Codex
Podcast: How I AI
Host: Claire Vo
Guest: Alexander Embiricos, Product Lead for Codex at OpenAI
Date: January 12, 2026
Duration: ~53 min
Episode Overview
In this insightful episode, Claire Vo and Alexander Embiricos dive deep into OpenAI’s Codex, positioning it not just as a coding assistant but as a full-fledged software engineering teammate. They explore practical workflows, real-world use cases, and productivity strategies for leveraging Codex, both for seasoned engineers and newcomers. The discussion covers everything from “zero to one” getting started, to professional team workflows, integrating with systems like GitHub and Slack, and the future of agentic AI in engineering.
Key Discussion Points & Insights
1. Codex as a Software Engineering Teammate
- Codex isn’t just an autocomplete tool—it’s built to be a full teammate in software engineering, helping users with everything from running codebases to answering questions and planning complex features.
- Alexander: “People love how thorough and diligent Codex is. It's not the fastest tool out there, but it is the most thorough and best at hard, complex tasks.” (00:00)
2. Getting Started: Zero to One with Codex
- Workflow Demo (03:07–05:00):
- Install Codex via the VS Code extension (included in ChatGPT Plus, Pro, Business, and Edu plans).
- Users can ask in plain English to make changes or ask questions—no technical jargon required.
- Codex parses context within the same chat, making it easy to request follow-up modifications.
- Real Use Case:
- Fixing issues (e.g., adjusting a jump animation in a game), adding missing features, or running code locally with simple prompts.
3. Parallel Tasking and Work Trees
- Parallel vs. Serial Tasks (06:01–11:34):
- For basic, non-conflicting changes, run Codex on multiple parallel tasks.
- For complex or potentially conflicting changes, use git worktree to create multiple isolated working copies.
- Codex itself can set up worktrees from natural-language commands in your terminal.
- Quote:
- Alexander: “I am lazy and so I don't want to remember the commands for work tree… So typically the way that I would actually do this is I would just ask Codex to create work trees.” (08:13)
- Practical Tip: Run multiple Codex instances in separate worktrees for language/localization changes, prototyping features, etc.
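The worktree workflow described above can be sketched in a few commands. This is a minimal illustration using a throwaway temp repo; the branch names and paths are made up for the example, not taken from the episode:

```shell
# Illustrative sketch of the parallel-worktree setup; repo and branch
# names are hypothetical, not from the episode.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One isolated checkout per parallel Codex task.
git worktree add -q ../localization-task -b localization-task
git worktree add -q ../prototype-task -b prototype-task

# Shows the main checkout plus both task worktrees.
git worktree list
```

Each worktree is a separate directory sharing one object store, so two Codex instances can edit and build independently without clobbering each other's files.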
4. Using Codex at Scale: Building the Sora Android App
- Case Study:
- OpenAI built the Sora app for Android with Codex in just 28 days, with a team of four engineers, and saw immediate app-store success.
- Key to success: Planning. Instead of single massive prompts, the team defined architecture and broke down the work using structured plans (e.g., Plans.md).
- Alexander: “With coding agents, it doesn't get easier, but you just move way faster…” (13:00)
- Power User Tip:
- Use the “planning” technique with markdown files and structured plans to manage complex tasks.
- Alexander: “Some of our power users at OpenAI have gotten fairly opinionated about how they like their plans to work and we've actually published a blog post on really effective planning.” (15:52)
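The episode points to an OpenAI blog post on effective planning but doesn't spell out a plan format. As an illustration only, here is a shell snippet that seeds a hypothetical Plans.md skeleton; the headings are assumptions for the example, not the blog post's actual recommendations:

```shell
# Hypothetical Plans.md skeleton -- the structure below is an assumed
# example, not the format from OpenAI's planning blog post.
cat > Plans.md <<'EOF'
# Feature: <name>

## Architecture
- Modules touched and how they interact

## Tasks
- [ ] Task 1: scoped, independently reviewable
- [ ] Task 2: ...

## Out of scope
- Things Codex should not change
EOF
```

The idea, per the episode, is to break a large feature into a structured plan up front rather than issuing one massive prompt.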
5. Codex as Both a Learning and Execution Accelerator
- Codex is valuable both for:
- Rapid prototyping (“vibe coding”)—great for PMs, designers, and experimentation
- Production-level engineering—plan-driven, with review cycles
- Quote:
- Alexander: “Codex can be really powerful for those places where you just want to learn and you don't actually need a scalable production ready app... There’s massive acceleration on learning and then also massive acceleration on execution.” (21:24)
6. Strategy: When to Plan and When to Improvise
- Harder tasks generally require more up-front planning/specification.
- For quick wins or limited time, use Codex’s parallel “best of N” mode, or multiple work trees to let Codex explore.
- Quote:
- Alexander: “The harder the task, the more you want to plan. But the lazy answer to your question is also, it depends if I have time to wait for a plan or not.” (22:33)
7. Managing Latency and Engineer Workflows
- The new 5.2 model can solve harder problems if given more “thinking time,” but sometimes engineers need to juggle tasks (usually two at a time) while Codex works.
- Claire: “As somebody who used to have this fancy executive job where I had a manager's schedule… Now I'm back to manager schedule. Like, I send the task off and somebody else… does it.” (23:33)
- Product design challenge: surfacing Codex’s “reasoning” process and providing useful progress signals to humans.
8. Integrations That Multiply Codex's Impact
- Top Integration: GitHub for Code Review (28:06)
- Codex automatically reviews PRs, flags high-confidence issues, and can be prompted to fix them.
- Empirically increased productivity at OpenAI—now used by nearly all technical staff.
- Quote:
- Alexander: “The hit rate on these is really high… Human attention is so scarce, we really want to protect it, but when it finds a really important issue, it'll post here. And then… you can get into this kind of loop.” (28:43-29:29)
- Other Integrations: Slack, Linear, cloud-based work
9. Working with Large Codebases and Model Innovation
- Codex and its harness (open source) are constantly updated with each model improvement.
- The harness absorbs model-specific optimizations, keeping up with the rapid pace of OpenAI model releases.
- Quote:
- Alexander: “Part of why the Codex CLI is open source is so that anyone who wants to get the best out of Codex models and actually just OpenAI models generally can just go observe how it works… We do this all the time.” (38:38-41:54)
10. Atlas & Side Chat: LLMs as Personal & Contextual Assistants
- Atlas: Embiricos’ favorite use case is using Chat/Atlas for continual, contextual queries (e.g., vacation dinner recommendations adjusted for personal preferences).
- Informing the AI when it got something right/wrong not only improves its memory but also protects your own humanity:
- Alexander: “I think it's important to be polite to AI… I just think it's important to be polite to everyone. And I think that if you start not being polite to chat, I think it can wear off on you…” (45:33)
- Side Chat (47:10): Pop-out conversation for context-aware queries, content rewriting, or learning, directly in the flow of work.
11. Advanced Prompting & Troubleshooting Tips
- Always provide context and avoid false precision when prompting Codex or any LLM.
- If stuck, start a new chat, or direct Codex to review its own previous sessions (stored locally in .codex/sessions).
- Hidden advanced tip: “Just ask [Codex] to go read them.” (51:26)
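A minimal sketch for locating those local session logs before pointing a fresh Codex chat at them. The CODEX_HOME fallback and directory layout here are assumptions built around the .codex/sessions path mentioned in the episode:

```shell
# Hedged sketch: the episode says Codex stores session transcripts
# locally under .codex/sessions; the CODEX_HOME variable and exact
# layout are assumptions, not confirmed details.
sessions="${CODEX_HOME:-$HOME/.codex}/sessions"
if [ -d "$sessions" ]; then
  # Most recent sessions first -- you can then ask Codex to "go read" one.
  ls -t "$sessions" | head -5
else
  echo "no session directory at $sessions (Codex may not have run here)"
fi
```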
Notable Quotes & Memorable Moments
- On Parallelization:
- “Running Codex in parallel is great. There's no reason to do anything else.” – Alexander (06:01)
- On Product Acceleration:
- “With coding agents, it doesn't get easier, but you just move way faster.” – Alexander (13:00)
- On Practitioner Advice:
- “Learn the fundamentals of git, and then you will be in a safe space when you’re running with the power of these tools.” – Claire (11:34)
- On Open Sourcing Codex Harness:
- “If you're trying to figure out how to get the most out of these new models, go peek under the hood at Codex open source…” – Claire (43:13)
- Altruism Towards AI:
- “Be polite for you, if [not] for AI.” – Claire (46:38)
- On Future of Work:
- “Now that we can just have ubiquitous code, the hard parts become deciding what actually should make it in…” – Alexander (35:08)
Timestamps for Important Segments
| Timestamp | Segment |
|-----------|--------------------------------------------------------------|
| 00:00 | Codex’s thoroughness and approach to hard tasks |
| 03:01 | Demo: Installing and starting from zero with Codex |
| 06:01 | Parallel vs. serial task execution in Codex |
| 08:13 | Demo: Creating git worktrees with Codex |
| 13:00 | Case study: Sora Android app built with Codex |
| 15:52 | Detailed planning workflows and “Plans.md” technique |
| 21:24 | Codex for both prototyping and production |
| 22:33 | Choosing between planning and improvisation |
| 23:33 | Human workflow challenges with Codex’s latency |
| 28:06 | GitHub integration and automated code review |
| 38:38 | Why the Codex harness matters; open source and self-improvement |
| 45:33 | User-AI politeness and improving contextual memory |
| 47:10 | Atlas, Side Chat, and contextual web summarization |
| 51:26 | Pro tip: Reviewing Codex session logs for context recovery |
Takeaways for Listeners
- Codex is a full engineering partner: Powerful for both non-coders and pros, excelling at answering questions, writing/fixing code, and planning.
- Parallelization & planning are key power user features: Use git worktrees, planning docs (Plans.md), and multi-instance workflows to handle complex or concurrent tasks efficiently.
- Automated code review delivers real productivity gains: Codex flags high-confidence issues and fits tightly within GitHub flows.
- Always provide context when prompting: Especially for ambiguous changes, and leverage Codex’s session memory as a troubleshooting tool.
- Harness flexibility & rapid updates: The open-source harness makes it easy to benefit from the latest model improvements and integrate with your own workflows.
- Integration everywhere: Codex works in VS Code, terminal, web, Slack, Linear, and GitHub—choose the environment that fits your style.
- Be polite (for your own sake): Maintaining your own habits of kindness, even with AI, has downstream benefits.
- Hidden gem: All Codex sessions are stored locally; you can prompt Codex to review its own session history for stuck tasks.
Further Resources
- OpenAI Blog: On planning strategies with Codex (Plans.md)
- Codex Open Source: Explore the harness and SDK for best practices
- Atlas (OpenAI’s AI web browser): For personal knowledge workflows
This summary captures the practical wisdom, product insights, and forward-thinking strategies shared by Alexander Embiricos and Claire Vo on building, debugging, and accelerating real-world software development using Codex and AI tools.
