
Alexander Ibirikos
People love how thorough and diligent Codex is. It's not the fastest tool out there, but it is the most thorough and best at hard, complex tasks.
Claire Vo
If you're a software engineer or somebody who's even just new to using some of these AI tools, where would you get started with Codex?
Alexander Ibirikos
We're building it into a full software engineering teammate. One of the things that Codex is great at is simply answering questions. If you have a chat where Codex is producing these plans and you want to change something, it's actually really nice for the model if you just use the same chat to ask for changes to the plan and that way it has all this context in its head when it's ready to get going.
Claire Vo
This is a great starter flow that shows how flexible this platform is and how it can meet a bunch of people at a variety of levels of tasks. How is OpenAI using this for bigger features and bigger products?
Alexander Ibirikos
We used Codex to build the Sora app for Android in 28 days and it immediately became the number one app in the App Store.
Claire Vo
Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today we have Alexander Ibirikos, product lead for Codex at OpenAI, and he's going to show us how you get the most out of Codex. Whether you're a non-technical user trying to make changes to an existing code base or want the power tips and tricks for getting the most out of it in the terminal, let's get to it. This episode is brought to you by Brex. If you're listening to this show, you already know AI is changing how we work in real, practical ways. Brex is bringing that same power to finance. Brex is the intelligent finance platform built for founders, with autonomous agents running in the background. Your finance stack basically runs itself. Cards are issued, expenses are filed, and fraud is stopped in real time without you having to think about it. Add Brex's banking solution with a high-yield treasury account and you've got a system that helps you spend smarter, move faster, and scale with confidence. One in three startups in the US already runs on Brex. You can too at brex.com/howiai. Alex, thanks for joining How I AI. I'm excited about today's episode because we actually haven't seen a deep dive into Codex yet, and we are going to get the expert take on how to get the most out of this tool. And I love that we're just going to dive in and do a zero-to-one hello world with Codex. So if you're a software engineer or somebody who's even just new to using some of these AI tools, where would you get started with Codex?
Alexander Ibirikos
Codex is a coding agent. We're building it into a full software engineering teammate. But to get started, let's just talk about where most people use it, which is in their IDE. I happen to use VS Code, so I'll show you Codex in VS Code. You can also use the Codex extension in any VS Code fork, like Cursor, et cetera. So let's say that I just installed Codex from the VS Code extension marketplace. Do you want me to show that, by the way?
Claire Vo
Yeah, let's do it. Let's go truly from zero.
Alexander Ibirikos
All right. All right. Okay. Truly zero to one. I mean, I'm not going to uninstall and log out, but we can pretend I did that.
Claire Vo
Yes, I love it.
Alexander Ibirikos
Pretend I clicked on install and then I clicked install. Right. So what would happen then is I'm going to get this glyph here, which is the Codex extension. I have to click through some steps and log in. So in case you didn't know, Codex is included in your ChatGPT plan. So you need a paid plan. If you have a Plus, Pro, Business, Team, or Edu plan, you can use Codex. And the limits are really generous. Okay, so let's say I have this thing up, and truly zero to one, let's pretend that, actually, I just heard that this is a game, but I don't even know how to play this game. One of the things that Codex is great at is simply answering questions. I'm the product lead for Codex, so I actually use Codex a lot for asking questions, probably more than most engineers do, because I don't want to bother engineers with silly questions. So I might ask, how do I play this game? We just launched a new model today, so I'm actually curious what model that used. 5.2. Cool, we're going to talk about that, I guess. So I'm just going to run npm run dev here, as it's saying, boot up the server, and let's take a look at the game. Okay, so what I have here is a simple commander-type game. I can move my character around, I can recruit troops. It looks like planting windmills is not implemented yet. And I've heard there's something wrong with the jump. Okay, that's way too high. So let's get to work fixing some of these issues. So what I can do here is I can just go and ask. Let's say: that jump is way too big. Lower, please. And so for those of you who are new to coding agents, I mean, this is pretty basic. I just wrote in plain, natural language, plain English, the change that I wanted. And we can see Codex getting to work, thinking up a plan. Okay, I need to figure out how the jump works. I need to then reduce it, and then I need to make sure this whole thing works. So let's do that. And while we're at it, let's make some more changes as well.
How about we implement the windmill planting? I'm just doing these in new chats so they can go in parallel.
Claire Vo
Yeah, I want to call out some stuff for folks listening or not watching. So what you're basically showing is the process of starting with an existing code base. And let's just pretend you're a semi-technical user, like a product manager on this, and they shipped something, but not exactly what you want. What you're using Codex for is, one, how do I even run this thing locally? People forget these basic use cases, because I know there are a lot of software engineers that listen to this podcast, but not everybody knows how to run every repo locally. So one little thing you can do is just: how do I get this code base running so I can test it? And then two, you're setting up little parallel tasks, which I think is really nice. And I'm curious how many of these you find yourself doing on any one code base to just fix little things. So I guess my question for you is, on these parallel tasks, which in this example are very small, do you feel like it's a better approach to set up parallel tasks and just have individual ones running, or to do them on a serial basis? Like, why one or the other?
Alexander Ibirikos
This totally depends. So this is a bit of a toy project, but realistically, the way that I typically work, if I'm running around... This is very tactical for, I guess, PMs. I'm looking at a terminal here, and I often just have some question that I want answered. So literally this morning I was like, okay, I'm going to do a demo. I know we just launched a new feature that makes it easier to pick models. Can I disable that? So I ran Codex, which popped me in (this is just some internal auto-update logic), and then I asked: hey, we have this new feature. By the way, I don't mind telling you guys about it, because Codex is open source, so a lot of new things are just out there in public. We have this new feature that offers balanced reasoning settings. I'm going to quote it, actually, to give the model a clue that I want it to search for that string: "reasoning settings". How do I disable that for the demo? I might do this kind of thing super frequently, or I might be like, hey, I heard a customer report about this behavior, is that real? Or I might ask a question like, hey, did we ship this feature? I lost track of whether or not we shipped something. So I ask these kinds of questions a lot. And when I do this, running them in parallel is great. There's no reason to do anything else. On the other hand, if I'm making changes, then I'm more likely to think about, okay, how likely is this change to conflict with another change? And typically I'll either do one at a time or I'll use something called a worktree, which is, I guess, a bit more of an advanced concept. We can get into that if you're interested. I'll use a worktree and I'll just send Codex off to do its work on a separate worktree.
Claire Vo
No, let's take a minute to look at worktrees, because I think this is something that most folks that are new to these tools aren't really using particularly well. I see the two paths that you showed, which is: one, I'm just going to do one big branch, do these things in serial, and then commit them in; or two, I'm just going to kick off a bunch of different tasks, but they're all going in the same conflicting space and creating issues. So maybe we take a minute and talk about worktrees and how those work within Codex, or how you set them up and use them to make sure that you can run parallel changes that don't conflict with each other and can be reviewed separately.
Alexander Ibirikos
Basically, if we're going to have Codex make changes, maybe I can come up with an example on the fly. Let's say that we want to change the language of this input to French or German. Obviously those both can't be true at the same time. This is a very contrived example, but maybe I want to try both. Maybe I'm prototyping something. What I need then is two different copies of the code base. So I could just copy the code base twice, just Command-C Command-V in Finder. Or I could git clone the repo twice. But git has this really nice affordance called a worktree, which basically lets one git instance track multiple copies of the code base. So as sort of a classic mammal, I am lazy, and so I don't want to remember the commands for worktree, even though they're very simple. So typically the way that I would actually do this is I would just ask Codex to create the worktrees. I could launch Codex like I just did and type the prompt, but a shorthand that's kind of nice in Codex is you can just put your prompt right in. So I might type codex and then, in quotes: create two new worktrees off the main branch, one called french and one called german. (I might not be this explicit if I was actually doing it.) And what you can see happened just here is that Codex launched and went straight into this prompt. I do this all the time. In fact, I've gotten so used to running this that sometimes I will forget to write codex, dash dash, quote. I will literally just be in my terminal and type: in here, do this.
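For reference, the worktree setup Codex is being asked to do here boils down to two git commands. A minimal sketch, assuming you're inside a repo whose default branch is `main` (the sibling paths and branch names are just the ones from this demo):

```shell
# Create two sibling checkouts of main, each on its own new branch.
git worktree add ../french -b french main   # copy of the code base on a new "french" branch
git worktree add ../german -b german main   # same, on a new "german" branch
git worktree list                           # shows the main checkout plus both worktrees
```

Both copies share one `.git` store, so branches and fetches stay in sync, and `git worktree remove ../french` cleans up a copy when you're done with it.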
Claire Vo
Yeah, I was just looking at this, because we're in a very meta repository right now, in a folder called codex, running Codex, talking to Codex. It's Codex all the way down, as they would say. But I think this is an important one to call out for folks that are not watching on YouTube and maybe are listening: as you open a new Codex instance in the CLI, you can type this, dash dash, and your first prompt right in one line. And this is classic developer productivity. It's like, I cannot be expected to press Enter, wait, and then type my words. So I love this. Okay, so then what you've done here is you've used Codex to do what we all use Codex for, which is not have to memorize git CLI commands. And you're creating two new git worktrees off main. And then I'm presuming, as you work inside Codex, what you're saying is: okay, in the french worktree do ABC and in the german one do XYZ.
Alexander Ibirikos
Yeah, so let's actually show working in those worktrees. So if you see here, I now have two folders, one called french and one called german. So I might just cd into french and then run Codex, and I'll just say: translate the input field placeholder strings to French. Again, a very contrived example. Now I can open a new tab, and in that one I will cd into the german worktree instead, and I'll run Codex. I'll use my nice shortcut to just immediately give it the command, and I'll say: translate the input placeholder strings to German. And so now Codex can go work on both of these changes at the same time. Here's the French one going. It's figuring out where to do this. Here's the German one going. And so that's awesome.
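Put together, the two-tab flow described above looks roughly like this. A hedged sketch: `codex "<prompt>"` is the one-line shorthand from earlier, and the paths and prompt wording are illustrative:

```shell
# Tab 1: work in the french worktree.
cd ../french
codex "Translate the input field placeholder strings to French"

# Tab 2: meanwhile, work in the german worktree.
cd ../german
codex "Translate the input field placeholder strings to German"
```

Because each tab sits in its own worktree, the two agents edit separate checkouts and their changes land on separate branches, so they can't clobber each other.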
Claire Vo
You know, we have a lot of people that go on social media and they're like, I'm running 15 instances of Codex across my terminal, and they show all these tabs, but they're not sharing practically how they're creating separation of concerns across this code. And I love that we're showing AI tools, but what these coding tools are also allowing people to do is come to software engineering, a lot of times, without the basics of things like git. So if I could tell anybody: in addition to learning some of these AI tools, learn the fundamentals of git, and then you will be in a safe space when you're running with the power of these tools. I think that's really important. Okay, so what we've shown is you can spin up Codex in one of two ways, which I also want to call out: one, in your IDE through an extension plugin; two, if you just want to go straight into the terminal experience, great, you can. You can ask it to do either explanatory tasks, which I use a lot, even on code I have written myself (what did I do here? remind me how this works), as well as discrete tasks. And then you can parallelize these, especially by using worktrees. So I think this is a great foundational look at how you would use the basics of this. But how is OpenAI using this for bigger features and bigger products?
Alexander Ibirikos
Totally. So actually, we just published a blog post about this that I think could be cool for people to know about: how we used Codex to build the Sora app for Android in 28 days, and it immediately became the number one app in the App Store. So four engineers, 28 days, number one app in the App Store, and it's not a trivial app. I was super impressed by the speed as I was watching this team go. And this article has a bunch of really practical advice for how to do it. I think this is really written for professional software engineers building big production apps, working in complex code bases. A really cool headline takeaway here is that with coding agents it doesn't get easier, but you just move way faster. The idea is that we didn't have four engineers just purely vibe code this app in 28 days. They didn't go in and just say, hey Codex, build the Sora Android app, and have it work. Actually, slight correction, they did try that and it didn't work. It didn't go, in one prompt, and build the entire Sora Android app. Instead, what they did is they thought really hard about the architecture that they wanted the app to have, and they used a technique called planning, which I would say is a super practical thing that you can do. So let me see, I'm going to pull up the Codex code base here, which is a slightly bigger code base, and what I might do is start a task here. Oh yes, sorry, one thing that I actually wanted to share: when you first install the Codex extension, it will appear here on the left, and I highly recommend dragging it over to here. It's just a nice place for it to live. So there you go. In VS Code that's easy; in IDEs like Cursor it is hard to find, so I will let you explore where it is, because I feel like it even changes. So I might say something like: hey, we want to make this non-trivial change.
Like, for instance, we have a TypeScript SDK and maybe we want to write a Python SDK. Right. I don't necessarily want to one-shot Codex on that, although I could; it might work. So I might say something like this: make a plan to build a Python SDK based off our TypeScript SDK. And this is a reasonable prompt. I could just send this, and it would be fine. But some of our power users at OpenAI have gotten fairly opinionated about how they like their plans to work, and we've actually published a blog post on really effective planning. So Aaron posted a blog post about using plans.md, and it's super easy to use this technique. Basically, you go to this blog post and you just copy this description. It's kind of like a meta-plan. It's like: hey, when you plan, this is what a good plan looks like.
Claire Vo
Yep.
Alexander Ibirikos
So for instance, a good plan is self-contained, has milestones, and, you know, the agent should update the plan as it goes. So I have done so. I have copied that into a markdown file, plans.md. There you go, I just copy-pasted that from the website. And so what I might actually do instead here is say: using plans.md, make a plan.
Claire Vo
Yep.
Alexander Ibirikos
Right. And so I might just send this prompt. This will take a while, because if you look at the spec for plans.md, it's very thorough. And this is actually something that Codex is really good at. People love how thorough and diligent Codex is. It's not the fastest tool out there, but it is the most thorough and the best at hard, complex tasks. And I can say, let's say, put it in temp.md. I'm asking it to put it in a random file mostly because I did this ahead of time. So here is the plan that it came up with. You can see it's about 120 lines that we could read through together. We see these todos that it wanted. We see that it's identified the TypeScript naming conventions. This is great: Codex, Thread, et cetera. We actually are really intentional about how we name our SDK parameters, so it's really important for me to read these and verify them, make sure that it didn't get that wrong. It's making various decisions in here that I might be happy with. Okay, great. And so now, if I'm happy with it, I could start a new chat and say: implement the plan in sdk-plan.md.
Claire Vo
Yep.
Alexander Ibirikos
And it would just go. This is probably a 30-minute to one-hour task, but I would be pretty confident in the results. And that's how they built the Sora Android app as well. One very concrete recommendation: if you have a chat where Codex is producing these plans and you want to change something, it's actually really nice for the model if you just use the same chat to ask for changes to the plan. That way it has all this context in its head when it's ready to get going.
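The whole plan-then-implement loop described above can be sketched as two CLI invocations. This is a hedged sketch, not an official workflow: `plans.md` is the meta-plan copied from the blog post and `temp.md` is the scratch file from this demo; both names are illustrative.

```shell
# Step 1: have Codex write the plan (thorough, so this takes a while).
codex "Using plans.md, make a plan to build a Python SDK based off our TypeScript SDK. Put it in temp.md"

# Step 2: read temp.md yourself, verify naming conventions and key decisions,
# and request any plan changes in the SAME chat so the context carries over.
# Then, in a fresh chat, hand over the finished plan:
codex "Implement the plan in temp.md"
```

The review step in the middle is the point: the plan file is the artifact you can correct cheaply before the long implementation run starts.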
Claire Vo
First of all, I love that you said that it has a head. The model has its brain. Yeah, I mean, we see this a lot, this sort of build-a-plan approach. Obviously I love a spec. I love a PRD. I love a technical design document. If we take the Sora app example, I'm presuming that you had a plan of plans, which is essentially: you look across the architecture of an app and then you do what we've always done in software engineering, which is spec out the full thing you want to do and break it down into components or initiatives that you can execute on. And then where I think you're suggesting the velocity comes from is that any one engineer can do a detailed technical spec and plan in partnership with Codex, and then have Codex execute the V1 of that plan for review very quickly. So you don't get to bypass the architectural thinking of how the app should be set up and what capabilities we want it to have, although you can use AI as a brainstorming partner for that. But once you have the right-size chunks of work, and they can be pretty meaty (I mean, building an entire SDK is not a small initiative; it's not like adding a method to something), then you can use this kind of planning strategy to get what you're going to do all laid out, and then have Codex execute it.
Alexander Ibirikos
Yeah, I think that's similar to how I think about it. So I kind of like the terms vibe coding and vibe engineering, to be honest. My sense is that right now you have a lot of agency in how you spend your time as a developer or as a product manager. I think when you're going to do something like Sora, build a production app that you know you have to scale, maybe you have a bunch of Codex "senior engineers", but you want that architect or staff engineer to think about the shape of the app. So that's critically important. You're going to have to think a lot about the shape of the app, and you're going to want to be really careful with review. And we can actually talk about how we've accelerated review at OpenAI, because now that we can write so much code, the bottlenecks are becoming thinking about what code to write, and then making sure that code is good, reviewing it and landing it. At the same time, though, Codex can be really powerful for those places where you just want to learn, and you don't actually need a scalable, production-ready app. For instance, we use Codex a lot for prototyping. The designers on Codex actually have a mostly vibe-coded full prototype of all the Codex surfaces that they can just design into with code. We use that to play around and see if we like things, and if we do, then we'll often vibe code a branch in the actual product. So a lot of things are just tried by designers there, and sometimes the vibe-coded prototype is pretty close to what we want, so they'll just land it with the help of an engineer, or by themselves even. And sometimes we're like, okay, this direction was good, or we learned some stuff, we iterated on the vibe-coded prototype, now we know what we want to build. And then we can go and give that really well-defined spec to an engineer, who might rethink some of the fundamental assumptions and end up using Codex to rebuild a lot of it from scratch. So I think there are two flavors of acceleration: massive acceleration on learning, and massive acceleration on execution.
Claire Vo
Yeah. This episode is brought to you by Graphite, the AI-powered code review platform helping engineering teams ship higher quality software faster. As developers adopt AI tools, code generation is accelerating, but code review hasn't caught up. PRs are getting larger and noisier, and teams are spending more time blocked on review than building. Graphite fixes this. Graphite brings all your code review essentials into one streamlined workflow: stacked diffs, a cleaner, more intuitive PR page, AI-powered reviews, and an automated merge queue, all designed to help you move through review cycles faster. Thousands of developers rely on Graphite to move through review faster, so you can focus on building, not waiting. Check it out at graphite.dev/howiai to get started. That's graphite.dev/howiai. So I have to ask one question, and then I do want to go to code review. It's sort of this, you know, when you know, you know. But how do you decide between what needs a plan and what doesn't?
Alexander Ibirikos
To some extent, it depends more on me than it depends on the task. Obviously, the harder the task, the more likely you want to have a spec. But I also think it depends on what you're up to at that time. For instance, if I just want to get something done quickly, I might not have time to wait for a plan and then go back and forth. So I might kind of throw Codex at it, but I might just do it four times in parallel instead. This is actually a thing we do. You can also use Codex in the cloud, on web, where it'll run on its own computer, and that has a feature called best of N, where it'll just do the same task four times. So often, instead of having Codex explore to make a plan and then collaborating on it, you just have it try four different attempts and find out what works best. And I also do that with worktrees locally as well. So I guess the better answer to your question is: the harder the task, the more you want to plan. But the lazy answer is also: it depends on whether I have time to wait for a plan or not.
Claire Vo
I like that. One of the things that I have found myself doing, which I think is really funny, is that as these longer-running coding models come out, 5.2 being among them, I'm waiting a lot more. I'm trying to find ways to fill my time. And as somebody who used to have this fancy executive job where I really had a manager's schedule, and then over the past two years has been living the builder life, now I'm like, damn, I'm back to the manager's schedule. I send the task off and somebody else, quote unquote, does it, and then I gotta find something to do with my time. And I refuse to add more meetings to my list. So I am with you: do I have the time and patience for a plan?
Alexander Ibirikos
Right? I mean, a lot of the engineers on the Codex team will basically run two things that they're building in parallel. Not more than two; it's usually two. So they'll kind of be thinking about what to do on one side, and, by the way, this might just be two different worktrees and two different instances of their IDE open. It could be something like that. And they might be thinking and collaborating with Codex in one while it's working in the other. Juggling two seems to be manageable for sort of normal people. Juggling more than two seems quite hard for normal people. But my view on this from a product direction perspective is we don't really want to ask humans to juggle. That's not fun for many people. Some people like StarCraft-in-code, but.
Claire Vo
Quick pause. I love StarCraft, which is why I feel like I'm really good at all this right now.
Alexander Ibirikos
Yeah, yeah, yeah. I think it's actually kind of an apt analogy, but I didn't come up with it; I forget who did. What we're trying to do is just make Codex faster and faster, and we are also trying to set it up so that you don't have to do the waiting. As the models get smarter and smarter, they can take on harder and harder tasks. I just heard from Naveen at Every this morning, who was sharing a demo of a bug that no model could fix. Then 5.2 came out yesterday, and he threw 5.2 at it, and it thought for 37 minutes and was like, this is the bug. And in fact that was the bug, and he got it fixed. Right. So as we have smarter and smarter models, there are going to be more instances where you want to wait. But I think that's our job as the product builders around the model: to make it so that even when the model is thinking, you're not waiting for it to think.
Claire Vo
Yeah. Or you know that you're waiting and you feel good about it. I think that's one of the challenges I've had with some of these where the thinking time is long. I find myself, and bless it, it's like when I worked with human software engineers, I find myself being like, how's it going? You still on it? You still good? So I do think it's a really interesting product problem, because there is useful latency in these models. But as a product person and designer, being able to expose that latency and that reasoning and the progress in a way that makes people not feel antsy about it, I think, is still a challenge out there for you all and for everybody else building these kinds of tools. This has been super helpful on the basics of Codex. I would love to hear one or two integrations with other systems or tools that you've found really multiply the impact you can get out of Codex.
Alexander Ibirikos
I think the biggest one by far is going to be GitHub and code review, and then there are some others as well. While I'm here, this is just a nifty graph about 5.2, the model. Let's take a quick digression; I'll show you because I just think it's super cool. Basically, what this graph shows is that 5.2, when you give it as long as it wants to think, is an amazingly intelligent model at SWE-bench Pro, which is an eval of software engineering tasks. But the X axis is pretty interesting. It shows the number of output tokens that the model took to perform these tasks. So it's kind of like, how long did the model have to think? And when we say, hey, you can think as long as you want, ish, it's really smart. But the other cool thing is we're able to say, hey, we actually don't want you to think that hard, or we want you to answer quickly. And it's performing even higher than, say, this previous model here, 5.1 thinking, but in significantly less time. And this is what we're trying to build, going back to what we were saying about waiting. Right. We want to get you the same result much faster, and then get you more intelligent results when you give the model time.
Claire Vo
Yeah, get the right result in the appropriate amount of time.
Alexander Ibirikos
Right, yeah, exactly. So one thing you can do with Codex is ask it for code review. This is actually super easy to do, even without integrating with GitHub. We could just be in here; let's say that it's written some code. I'm going to kind of ignore what happened there and just pretend that I wanted to review this code, so I could type review and basically ask it to start reviewing this code. And this is something that people really love. Right now it does feel like when you put the model in a certain mindset, like, hey, you are a reviewer, and you give it a different conversation context than the model that wrote the code, you'll get even better critiques than you might get from a human engineer, partially because this model has a lot of time to read all the code and maybe even execute code to validate those changes. So this is just something super useful that I recommend doing. Many engineers on Codex will have Codex do work and then, multiple times, ask it to review its code, or critique its code, or just make the code more elegant. And that's a massive accelerant. Now, let's say that you like this and your team has a practice of doing reviews. Something you can go do is enable automated code review in GitHub. And so here, when this PR was pushed, Codex automatically, without anyone having to prompt it and without anyone having to have a computer running (this is just in the cloud), went, took a look, and found an issue with the code. And the hit rate on these is really high. We built this feature so that it only points out issues that it's very confident are issues, because the principle here is that human attention is so scarce, we really want to protect it. But when it finds a really important issue, it'll post here. And this is where you start to feel the AGI a little bit. It found this issue, and then Roman basically replied, hey, Codex, can you fix it? And then Codex went and fixed it. Right. And so we can get into a loop like that. So that would be my number one integration. Number two might be Slack and Linear.
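Even without the GitHub integration, the reviewer-mindset trick described here is just a fresh Codex session with a review prompt, separate from the chat that wrote the code. A minimal sketch; the prompt wording is illustrative, not a built-in command:

```shell
# Start a clean session so the "reviewer" has no memory of writing the code,
# then ask it to critique the branch's changes with a high confidence bar.
codex "Review the uncommitted changes on this branch and only flag issues you are confident are real bugs"
```

Running this after each chunk of agent-written work approximates the write/review loop the Codex team uses, at the cost of one extra session per pass.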
Claire Vo
Well, I love this flow, and I think, again, it's about optimizing. I'm actually running a 5.2 branch review right now. I'm not running it in Codex, but I will do a comparison of the two experiences. I'm doing very much a "compare this to base, tell me what we did here, and flag anything I need to be aware of" review. I do like these in-GitHub code reviews. I do feel like where I have found the highest quality is in reducing noise, so it's really great that you have focused on confidence, focused on what bugs really are going to matter, and then this loop of "can you fix it?" And so, are you running this on all your repos? Has this become how code gets reviewed?
Alexander Embiricos
Yeah, I mean, Codex is just everywhere at OpenAI, which has been really cool to see. You know, when I hear a story of some tool being used everywhere, I'm always a little skeptical; these people are biased. So the thing I can tell the audience here is that earlier this summer, Codex was used by around half the company, which is like, I don't know, a pretty low number, right? Half. We're now at the point where basically all of technical staff, nearly all of technical staff, is using Codex constantly. And so it's funny, because we don't even have this comparison point anymore: since everyone's using Codex, it's hard to compare the people using Codex with the people not. But there was a period of time where we were seeing that the people using Codex were like 70% more productive, if you looked at PR volume. Obviously PR volume isn't the best thing to measure, but it's a thing you can look at. And now that metric doesn't mean anything anymore because everyone uses Codex. Codex code review itself is enabled on pretty much every single repo at the company and reviews pretty much all PRs. And it's one of those features where we were a little bit nervous when we shipped it about how people would feel, but it was an immediate hit and people really like it. Maybe this is a bit of a segue, but speaking product person to product person: something that we've been thinking about is, if our mission is to deliver the benefits of AGI to all of humanity, I believe one of the biggest limiting factors is whether people want to type the prompt or not. I don't like typing; I would be too lazy to do this. We were thinking a lot about, okay, what are things we can do for teams that are just useful without anyone having to do any work? And so we tried a few things. Code review, as I showed you here, is one of the things we tried. Big hit. We tried some other things that were pretty interesting.
Like, we built a feature that would automatically attempt to revise the PR when you got code review feedback from someone else. Maybe I'd be interested to try that again, but interestingly enough, that feature was not super popular. The hit rate was low: with PR feedback, I mean, sometimes it's nits, but often the meaning is kind of in there, and you need a lot of human context to understand that PR feedback. And so Codex wasn't acing it, and the hit rate wasn't high enough to be worth the email you get every time an event happens on GitHub. Whereas with code review, we were really careful with how often it did things. We made sure, for instance, that in a case like this one, where Codex didn't find any issues, it doesn't even notify you: just thumbs up and you're done.
Claire Vo
Yeah, I mean, I've had this experience too, where it's interesting. I have also used automated code review and I have attempted that full closed loop, and I have also been dissatisfied, and not just with the bug fixes. Sometimes they're fine, sometimes they're not. But I often feel like putting a human in the loop that says, "I'm pretty sure this is fine because XYZ," or "just remember, we did this because A, B, C." And sometimes there's a little context lost. But the other thing I think is interesting from a product perspective, on these proactive versus reactive agentic experiences, is that if you have a full agent loop, the human bar for quality is extremely high. And so dissatisfaction and frustration can bubble up very quickly. It's one thing if your code review bot says this is broken, you go fix it, and then you get it reviewed again and it nits you. You're like, okay, I didn't do exactly what you wanted. But if you have the experience where your code review bot raises something, fixes something, and then nits itself, that gets really annoying.
Alexander Embiricos
Totally.
Claire Vo
And so, again, this is a little bit of an agentic product design challenge that we're now going to have, and that I think people need to pay attention to: how do you design for latency? How do you design for perceived quality, and where do you set the bar when humans are involved versus when they're not? And then one last thing I want to say to your comment, which is: if our human fingers are no longer valuable and used, what role are we going to play? Like, protect the typers. But I get what you're saying, which is that almost all the friction right now in my product development and software development flow is literally writing the first prompt, like sitting down and just going, now, let's do the thing. And it's such a funny shift from where we've been before.
Alexander Embiricos
Yeah, I mean, it's interesting to think about, right? Obviously our mission is to massively accelerate every single developer, and more broadly anyone doing anything where an agent using a computer can help. And so, yeah, the question of what our role is, is interesting. I like to joke that the limiting factor is typing speed. I think that's half true. For things like "review this code, please," the limiting factor is typing speed. There are a lot of micro-places where an agent could help you. But then I think the other half is actually thinking. Now that we can have ubiquitous code and can basically prototype things trivially, the hard parts, I think, become deciding what actually should make it in, thinking about what a product should do, actually knowing a customer, and then, if you're building a complex system, being really thoughtful about the architecture of that system and curating the agent work. So I talk to developers who are really motivated by seeing people use what they build, and I think those developers are increasingly just thrilled. And then I also talk to developers who just love the feeling of writing code. I don't know, I experience joy from both. And I think that's a place where we think constantly about, okay, how do we make this feel as fun as possible? Like, setting up dependencies kind of sucks, but you have to do that for your agent, so how do we make that easy? Reviewing kind of sucks, so how do we make that easy?
Claire Vo
Great. Well, just to recap, because we've done a lot here so far before we get to the lightning round: we have shown installing Codex as an extension, setting off tasks, setting up worktrees, using plans (and in particular the "how to plan" post on the OpenAI blog) to generate plans for more complex implementations, especially when you need longer-running tasks, and then automations around code review and bug fixes. The thing we didn't call out, but I think is really important, is that you can basically use Codex wherever you want. We showed it in VS Code, we showed it in the terminal; you can get it on the web, you can get it in Slack, you can get it in Linear, and it can kick off in GitHub. I do love this idea of Codex anywhere. So again, if you're intimidated by or don't understand VS Code, great, kick it off on the web. If you love your command line tool, great, use a couple of those keyboard shortcuts that we showed. And so I just think this is a great starter flow that shows how flexible this platform is and how it can meet a bunch of people at a variety of levels of tasks. So I'm actually going to start there with our lightning round questions, which is: you just released 5.2, you showed SWE-bench, and we're clearly in model wars every week. I am really curious about harness wars: why does the interface to the model, like Codex, matter so much? We've seen a couple of things that you've built into the platform, like PR review or code review, and little UX fixes that make it easy to use. But where do you feel like the harness differentiates, in your experience using these coding tools?
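The worktree setup mentioned in the recap can be made concrete. This is a minimal, self-contained sketch (the repo and branch names are made up for the demo): each worktree is an isolated checkout on its own branch, so parallel Codex runs don't step on each other's edits.

```shell
# Create a throwaway repo so the example runs anywhere:
demo=$(mktemp -d)
cd "$demo"
git init -q -b main myrepo && cd myrepo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# One worktree per task: each gets its own directory and branch,
# so one agent can do localization while another prototypes a feature.
git worktree add ../myrepo-localization -b localization
git worktree add ../myrepo-prototype -b prototype

git worktree list   # the main checkout plus the two task trees
# Cleanup once a branch is merged:
#   git worktree remove ../myrepo-localization
```

From there, you would open one Codex session per worktree directory and let each run independently.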
Alexander Embiricos
I think there are two places. One is just the quality of model work, and the other is the user experience. So, taking the quality of model work: I presume many people listening to this podcast tinker with models or have friends who are building models. Some people are just model whisperers and know how to use a model. And some people are model whisperers for a specific model, but maybe not for another model. One of the things that's, I think, very true is that these models are changing all the time. Right? I've lost track; I think we're shipping a new model every two weeks recently at OpenAI. And each model we ship is better than the last. It's super exciting. But they're also evolving. They have new capabilities that are kind of hard to keep track of unless you spend all day on Twitter/X. And so I think there are two things here. The first is you need to know how to get the most out of the model. You need to know, for example, that with OpenAI models, we have this very, again, AGI-pilled way that we train models. We kind of just give them access to a shell and we say, go and do whatever you think you should do in the shell. And so we see these really interesting emergent behaviors, where sometimes the model will decide to write a Python script to make many, many edits to code. And then we have debates about, well, is that a good idea or not? Would we prefer it if the model were plodding through the edits one by one, or do we like it running a script? But either way, that's a thing you should know. And so one of the cool things about Codex is that we're building our harness in open source, so you get to see the updates we're making for each model that we ship to make the most out of it. And I can say that every time we ship a model, engineers from the Codex team will go test it, think about it, talk to the research team.
I mean, they're just working super closely together to figure out how to make the most out of the model. And sometimes we'll ship a new capability, like models getting parallel tool calling: let's see how that works. Or recently we shipped something called compaction, where we can basically have the model start a new conversation with itself with a fresh context. What it'll do is give itself just the right information so that, to the user, it feels like one conversation rather than two. And so when we build features like that, by building both the harness and the model together, we can be much more opinionated about what to do with the model, and then we can make the actual outcomes way better for users. And so part of why the Codex CLI is open source is so that anyone who wants to get the best out of Codex models, and actually OpenAI models generally, can just go observe how it works. They can use the Codex SDK if they want and not even touch the harness, just delegate that to us. Or if you want to build your own harness, you can just go copy-paste parts of the code. We do this all the time. I'm in a bunch of Slack DMs with customers, and we'll just send them pointers to the code, like, oh yeah, this is how we do this, just copy this code, please. So that's on the side of having higher-quality model outputs and keeping pace with the innovation from our research team. That would be why I think the Codex harness is awesome there. The other side is the product experience. For instance, earlier this year, most of the really powerful agentic flows that people were using, Codex and others, were in the CLI. And this is super basic, but I spoke to many people who don't really like spending all day in the terminal. I love using the terminal, but I spoke to many people who don't. And I spoke to many people who really like seeing the code that is being edited at the same time.
Claire Vo
That's me. I'm a code reader. I like to read my code, right?
Alexander Embiricos
And so, you know, that's a very, very basic thing, but it's a place where just building the right product experience unlocked a ton of growth for us. Empirically, I can say that we see many, many more people who like looking at the code that's being written at the same time as the agent works than people just running it in the terminal on the side. So I think that's a very basic example. And then, for a more advanced example, we were touching on this point of latency: I think if we harness the model right, we can make it so that the model helps you with hundreds of thousands of things a day, without you having to type, without it being annoying to filter its outputs, and without you having to wait, because whenever it's helping you with something proactively, it's doing so on its own computer and only letting you know when it has something great. So my view is actually that even as all these models keep progressing, and let's just say that stops, which it won't, there are many years of product building to do to get the harness right and useful for people.
Claire Vo
Well, that's great. And I do want to make sure people did not miss this tip, which is: if you are trying to figure out how to get the most out of these new models, go peek under the hood at the Codex open-source repo and just see. I think that's the other thing: what kinds of changes do you have to make when a new model comes out? Especially if you're a builder out there. Maybe you're not building a coding tool, but you're building a SaaS product that uses these models when they come out. Being able to observe how the creators of the models actually maximize their unique strengths is a really valuable thing that I think people underestimate. Okay, my second question: I spied with my little eye Atlas. Tell me your favorite Atlas use cases that you think people underappreciate.
Alexander Embiricos
Ooh. The first one is kind of boring, but it's really, really true, which is that I have started just asking Chat for everything, Chat being ChatGPT. I just ask Chat for everything, because I get answers that are really catered to me, because I talk to Chat about everything. You know, I'm a weird person. If I ask a question and then make a decision based off it, I tell it what decision I made, because then it remembers, and the next answer I get is even better. And so my workflow for anything that's not code is simply: I go to Atlas, I Command-T to open a new tab, and then I just type whatever I want and get an answer, and I often follow up. For me, that's the sort of magic thing about using an LLM: being able to just ask your question, then follow up, then maybe navigate to links. Super boring, but I love that. My other favorite feature is side chat. So basically, any page you open... should I show? Maybe I can show this?
Claire Vo
Yeah, sure.
Alexander Embiricos
Yeah.
Claire Vo
And while you're doing that, I have to call out that you have settled a debate that I saw today on X, which is: should you tell your AI when it's done something right or wrong, after it's done it? Like, once it's fixed a bug, I'm the person that's like, great, it looks awesome, thanks, that fixed it. And I do think that in my mind I've convinced myself that the closed loop creates some context, like, yes, I did this particular thing right or wrong, the user has accepted it, and that in some future world it's going to make my life better.
Alexander Embiricos
Yeah, I mean, I think there are two reasons to do that. The first is just memory, right? If you have memory enabled, which most people do, then you'll get a better answer. A very concrete example: I was on holiday and some plans changed, so we were deciding where to get dinner, and we just asked Chat for dinner recommendations. To be clear, we like food a lot, so we also searched. But it's interesting to ask Chat, because it knew where we were staying and it knew what food we'd had the day before or whatever. And so it gave a really bespoke recommendation, and that was cool. I think my other, sort of hot-take, reason to do this is that I think it's important to be polite to AI. I know this is not an official company stance, just to be clear, but my meta reason here is that I just think it's important to be polite to everyone. And I think that if you start not being polite to Chat, it can wear off on you, and you just start not being polite to other people in your life. And we're adults; imagine kids, right? If they hear us talking to the AI in some way, they're going to go treat someone they don't know in some, like, not polite way. So that's kind of my hot-take thing.
Claire Vo
I could not agree more. You know, I am a "please, thank you, good job" person. And honestly, it's not because of the AI's humanity; it's to protect my own humanity. If I get used to being a jerk to anything humanlike, there is no way that does not bleed into how I think about people and speak to people. Clip this, pin it to the top of the YouTube channel: be polite to your AI.
Alexander Embiricos
That resonates a lot. It's like our humanity is defined by how we treat others, not how they treat us.
Claire Vo
Right, Exactly.
Alexander Embiricos
So, side chat. Basically, I can go in here, I can click this button, and I can ask questions about the page, right? So I could be like, what's great about GPT-5.2? You might joke, like, why are you asking AI to summarize this article? But oftentimes I'll be at work and someone will send me a thing and be like, "thoughts?" And I just don't have time. What is this, right? And then I can have a conversation, though. So then I can be like, oh, interesting. Like, you know, hey, Chat.
Claire Vo
What do I think about this?
Alexander Embiricos
I mean, when I ask that question, it often grounds itself in what I've talked to it about before. It's like, well, since you are a person who likes this, you probably would be interested in this detail. This is maybe not the best example, because I'm asking for a summary, but oftentimes, if I'm looking at numbers or math or I need to learn more about a concept, I'll use this. You can also use it to rewrite something. So if I was in a Google Doc, I could select some text and be like, hey, how else might you rephrase this? So I think for me, side chat is a really cool feature, but I might be a bit nerd-sniped by it, to be honest, in the sense that this is what I joined OpenAI to help build. So I'm very interested in it. For me, the idea of an agent that understands me, where I don't have to map myself to its world, is really powerful. Side chat is that. Codex is that too, right? You launch it in the code base and it's in your environment; you don't have to go to it. I think this is the future, and I'm really excited for how we can take all the best ideas from Atlas and Codex and bring them together into basically an AGI super assistant.
Claire Vo
AGI Super Assistant. Maybe we'll see it next year, it sounds like. It sounds like the name of a product you would really...
Alexander Ibirikos
AGI super assistant. 5.1, 5.2.
Claire Vo
Thinking high. Okay, last question, and we maybe already covered this. We've established you're polite to AI, but when it is not replying, not doing what you want, not remembering, what is your prompting technique to get it back on track?
Alexander Embiricos
Yeah. So first off, I have a bit of a weird job, in that if I notice the AI not replying, I probably have to go file a bug or start a sev, sev being a word for an incident. So yeah, I have to go do those things. But I think context is everything. So if I see the agent not doing what I want... I guess one really tactical tip is that I don't usually ask for things from the agent without giving context. I'll say, hey, I want you to change this UI from this to this so that users do this, or: because we don't want people to be confused about XYZ. It's funny, and this is another hot take: I think PMs are the best prompters, because we're used to not being the expert in what we're doing and we're used to not being the most intelligent person in the room, right? And so usually we can maybe suggest, but we don't even know if that's right. So I work with Codex in that way. I'll be like, hey, can you make this more elegant? And I won't say what I want, because it'll look at the code and it'll know better than me. So tip number one is: give a lot of context, and get really good at describing the level of ambiguity of your request. Do not create false precision in your prompt if you don't actually care exactly what the outcome is. And then the second thing is, if that doesn't work, and you explain why again and it still doesn't work, then I just start a new chat. And you can do things... this is a very advanced-user thing that I don't think anyone listening to this will ever do, but Codex is a very open product. It stores its conversation logs in your home directory, in a .codex folder, in a subfolder called sessions. So, ~/.codex/sessions. So you could just go say, hey, I started a new session because you got confused. I wanted you to do this because of this. Go read your previous session to understand what's going on.
And then like, you know, continue from there.
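The session-log tip can be sketched in a couple of lines. The real logs live under `~/.codex/sessions` per the episode; the directory and file names below are faked so the sketch is self-contained, and the log file naming and format are assumptions.

```shell
# Fake a sessions directory (the real one would be ~/.codex/sessions):
SESSIONS=$(mktemp -d)
touch -t 202401010000 "$SESSIONS/rollout-older.jsonl"   # hypothetical names
touch "$SESSIONS/rollout-newest.jsonl"

# List by modification time, newest first; this is the file you would
# point a fresh session at ("go read your previous session"):
latest=$(ls -t "$SESSIONS" | head -n 1)
echo "Most recent session log: $SESSIONS/$latest"
```

In practice you would then prompt the new session with something like: "Go read `~/.codex/sessions/<that file>` to understand what the previous session was doing, then continue."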
Claire Vo
I love it. Ending this episode with a hidden hot tip, which we didn't get to in our Codex walkthrough: all your sessions are stored locally, so just ask it to go read them. This has been really fun, Alex. Where can we find you? And other than reporting bugs, how can we be helpful?
Alexander Embiricos
I am hiring PMs, so if you're interested, please apply on the job site, and also hit me up on socials. We are hiring a lot on Codex generally. We do love bug reports and we do love feedback. And actually, it's already in open source, so I don't mind talking about it: we are also releasing a bunch of new configuration abilities for Codex, like the ability to allowlist commands, or skills. So if you wanted to help build Codex skills or tell us what configuration you want, that would be very helpful. And lastly, just check it out. Codex is awesome. You can find me on Twitter; I'm @embirico. And at the r/Codex subreddit, we are there all the time and love chatting there too.
Claire Vo
Amazing. Well, thank you for joining How I AI.
Alexander Embiricos
Thanks for having me.
Claire Vo
Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or, even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time.
Podcast: How I AI
Host: Claire Vo
Guest: Alexander Embiricos, Product Lead for Codex at OpenAI
Date: January 12, 2026
Duration: ~53 min
In this insightful episode, Claire Vo and Alexander Embiricos dive deep into OpenAI’s Codex, positioning it not just as a coding assistant but as a full-fledged software engineering teammate. They explore practical workflows, real-world use cases, and productivity strategies for leveraging Codex, both for seasoned engineers and newcomers. The discussion covers everything from “zero to one” getting started, to professional team workflows, integrating with systems like GitHub and Slack, and the future of agentic AI in engineering.
- Workflow demo (03:07–05:00)
- Parallel vs. serial tasks (06:01–11:34): use `git worktree` to create multiple isolated environments
- Practical tip: run multiple Codex instances in separate worktrees for language/localization changes, prototyping features, etc.
- Case study: the Sora app for Android, built with Codex in 28 days
- Codex is valuable both for prototyping and for production work
- Top integration: GitHub for code review (28:06); others: Slack, Linear, and cloud-based work
- Atlas: Embiricos' favorite use case is using Chat/Atlas for continual, contextual queries (e.g., vacation dinner recommendations adjusted for personal preferences)
- Telling the AI when it got something right or wrong not only improves its memory but also protects your own humanity
- Side chat (47:10): an in-page conversation for context-aware queries, content rewriting, or learning, directly in the flow of work
- Session recovery (51:26): Codex stores conversation logs locally (`~/.codex/sessions`); point a fresh session at them to recover lost context
| Timestamp | Segment |
|-----------|---------|
| 00:00 | Codex's thoroughness and approach to hard tasks |
| 03:01 | Demo: installing and starting from zero with Codex |
| 06:01 | Parallel vs. serial task execution in Codex |
| 08:13 | Demo: creating git worktrees with Codex |
| 13:00 | Case study: Sora Android app built with Codex |
| 15:52 | Detailed planning workflows and "Plans.md" technique |
| 21:24 | Codex for both prototyping and production |
| 22:33 | Choosing between planning and improvisation |
| 23:33 | Human workflow challenges with Codex's latency |
| 28:06 | GitHub integration and automated code review |
| 38:38 | Why the Codex harness matters; open source and self-improvement |
| 45:33 | User-AI politeness and improving contextual memory |
| 47:10 | Atlas, side chat, and contextual web summarization |
| 51:26 | Pro tip: reviewing Codex session logs for context recovery |
This summary captures the practical wisdom, product insights, and forward-thinking strategies shared by Alexander Embiricos and Claire Vo on building, debugging, and accelerating real-world software development using Codex and AI tools.