
A
OpenAI's new Image Gen 2 model is officially here, and it is very, very, very good.
B
Nano Banana who? This model is awesome. It can handle images up to 2K resolution, multiple languages in and out, and it is very good at image-to-image editing.
A
And as OpenAI pointed out in their livestream, it can actually write on individual grains of fake rice.
B
Thanks, Kevin. Now I'm very hungry. We'll walk you through what it does well, what it doesn't do as well, and where we go from here.
A
Oh. Also, Elon is buying Cursor to be the new xAI.
B
Ooh. And a cool new way to use Claude code to make motion graphics.
A
We got all of that.
B
And hey, look, Kevin, I won the Super Bowl. That's me. I'm the MVP quarterback.
A
Great job, buddy. It's really great. This is AI for Humans, baby.
B
Welcome, everybody, to AI for Humans, your twice-a-week guide to the wonderful world of AI. And we have a banger of an episode today. This is a big one. A big one. It might not seem that big, but it is a big one. Kevin.
A
This one's a good one.
B
This is. This is.
A
No matter who you are or what you do, the announcement today, the availability of this new image model from OpenAI fundamentally changes the way you can do anything or everything.
B
Yes.
A
Not an understatement, Friendship.
B
Yes. This is a real human use case. This is a thing that will be good for humans, might be bad for humans, but first and foremost we have to say: GPT Image Gen 2, aka ChatGPT Images version 2 by any other name. This is OpenAI's new image model. We have been talking about it being teased for a while, and here it is, out. Kevin, it is very good. We'll be showing some examples of things that they made for their promo videos. But there was a livestream today where Sam Altman and four gentlemen all sat on a couch and showed off the ability to make themselves into fashion models and do all sorts of other stuff. We should run through some of the basics of what this does; I also spent quite a bit of time with it, and we want to highlight some of the cool things we saw other people do. But first and foremost, let's tell everybody what it is and what it does.
A
Yeah. So the broad strokes: image model get better, right? The quality of the images you can generate, the formats and resolutions of any image you can generate. We're talking, like, if you want a crazy long, vertical-style billboard, this thing can do it. If you want a one-by-one square, it can do it. And when Sora was released, and for those who don't know, that's OpenAI's video product. Yep, I know. RIP Sora, gone too soon. We never had the chance to dance. But what made Sora so special was that a basic prompt could be enhanced into this really funny or interesting, diverse set of video clips, because there was some thinking, some prompt magic, going on in between. You give it a query and out comes your video. And I gotta say, that's the secret sauce that's happening here.
B
Right.
A
If you will. There's a harness, invisible though it may be, built around this image model so that it can go out, it can think, it can do research, it can reason about what you're asking for and then generate an image for you.
B
Yeah, why don't we. Let's try to make one now while we're waiting. We want to come up with a thing. Let's think of something to give the image model so we can show people what comes out in real time. What do you think?
A
Okay. Yeah, let's do it. How about, like, a butcher's diagram for, like, a Muppet?
B
Okay, fair enough. Let's try it. Let's see what happens. We'll come back and check it in a little bit. So there's a couple other big things to know about this. One of the big things, as Kevin mentioned, is that it's a thinking model. So it is using different thinking settings. You cannot use GPT Pro right now; I've tried doing Pro a couple times and it gave me, like, weird SVGs trying to interpret it. But you can use it in thinking mode as a paid user. If you are a free user, you can still use it, but you are not going to get as good a result. It's a different model. In fact, Kev, one of the things we've talked about was the three different tiers of models they tested before, the different tapes, if you remember: packing tape, masking tape. Right. So yeah, the thinking model is a different model than the free model, and it is very good. I think we're going to go through some really interesting examples here. But first we should show off this thing we teased at the top. There's a sound-up from the livestream that I want to show about how they wrote words on an individual grain of rice. So let's play that and just let everybody listen: "Let me show everyone how far we can go with our image generation model. So this is an image I generated with our experimental 4K API. This is just a pile of rice, but this is also not just one pile of rice. What if I tell you there's one single grain here with the text GPT Image on it? Can you find it? Here we go. Can you find it? Yeah."
A
Yes. By the way, Sam Altman found it very quickly. Very smart gentleman. For the audio-only folks, there is a generated photo of thousands of grains of rice in a little pile on top of a wooden table. And sure enough, smack in the middle, on one singular grain of rice, they've got some text written. It says GPT Image 2. But you have to zoom genuinely very far in, like you are looking at individual atoms or, you know, getting into the microscope and looking at a single-cell organism. It is there. And that's impressive.
B
Yeah, I mean, I think the biggest thing here is text is so much better and so much more pronounced. Right. So you can have images, we tested this before, but you can have images with just a ton of text on them and it will still show up. In fact, I tried doing a thing a couple of weeks ago when they were leaking out these models, where I had an image model create the periodic table, to see what would happen. And I did it today. And Kev, if you go and look at my periodic table test, it actually did a good job. So my prompt here is really like: make the periodic table, make sure you get each symbol right, and behind each element have a representation of what that element is. Some of the ones it's cheating on. But you have to know, from version 1.5 to this is a massive jump, because this is asking the model to think a lot. It's got to put all this stuff in the right place. It's got to put the right image behind the right symbol. What's going on?
A
What are you looking at? Americium is a smoke detector.
B
Well, I see some of this stuff. I'm not a smart enough scientist to know, but my gut is telling me that americium probably is in a...
A
...is used inside of a smoke detector. Whatever, nerd. I don't believe it. That's fake. Also, Earth flat.
B
Prove it. So I want to also go through a couple other things in the grain-of-rice mold. Simon Willison, who, if you're not reading his blog, writes a great blog, had a great test that he's been doing with Where's Waldo. And we know that Where's Waldo is a very complicated image with lots of people in it. One of the things he showed off was how good it was at burying a specific figure within a group of people. Kevin, I think you remember you and I tried doing Where's Waldo tests, like, years ago at this point, and generally it was pretty bad at it. If you look at Simon's image, you can just see that zoom-in ability, which is something that really didn't exist in the same way with AI images for a long time. Being able to do that without having to go out and up-res it and lose some of your backgrounds and things like that is, I think, a pretty big deal. Wow.
A
Okay. Oh, we got them, Gavin. We got our Muppets.
B
Let's see, we got some.
A
We got some puppets. Stand by here, I'm going to send you. I got two of them coming in. I did ask for a modification on the second one to make it more cartoon-like, and maybe that's the one we want to focus on.
B
Oh, wow.
A
Because the Snorf butcher diagram. So, for those who are seeing the results on the screen, the prompt was for a detailed butcher's diagram that cuts up a, quote, "puppet-like cartoon character," with recommendations on how to prepare each cut and tasting notes for each. And Snorf's head, the roasted whole head, is great because Snorf doesn't lose his smile or his charismatic wide eyes when his head is severed and served on a plate.
B
That's a great centerpiece for my event. Kevin, I'm so happy. I finally know that I can have Snorf as part of my world.
A
Did you see the Snorf facts at the bottom, as well as the sustainability note?
B
So this is what's so interesting. The Snorf facts, for everybody: he is an herbivore. He loves tickles behind the ears. He speaks in snoofles and snorbs and is an excellent hugger. And tonight we are going to be eating him. I'm sure he is delicious, so...
A
But don't worry. Yes, don't worry, Gavin. Snorfs are farm-raised with love and care. Use every part, waste not, be kind, and keep the Snorf spirit alive.
B
Oh, my God.
A
Gosh. Gavin, I hope you like my feet. They're chewy.
B
We're gonna have people that are against us protesting the Snorf devastation that we're perpetuating here. But I want to go to the next thing, another example of mine. This speaks directly to one of the other things this can do: really good images of websites or screenshots or things that exist in the real world. And one of the reasons people are assuming that's the case is because a giant AI model needs to train on all this stuff, so it's probably seen a ton of the Internet at this point. And one of the use cases they're talking about is in Codex, the Codex model. And we do still expect a newer AI model to come from OpenAI soon, like an actual state-of-the-art LLM. With Codex, though, they're saying that in the Codex app you can use this to generate screenshots that can then be used to build websites. But Kevin, I want you to go look at my absolute-worst-meal thing that's in the rundown here. I basically wanted to see what it could do making a screenshot of, like, a cooking site. Right.
A
There's no snorf on the menu, but some of this looks delicious.
B
Yeah. So I asked it: create me a screenshot of a website that gives me a step-by-step recipe for making the worst possible meal, but take it completely serious, like a chef would come up with whatever the most horrifying but still real meal would be. And what's great about this is it created an entire website called the Culinary Institute of Disgusting Cuisine, and it looks like a very fancy place. And this is the recipe for the absolute worst meal: basically a combination of ramen, tuna, Coke, American cheese, chocolate chips, marshmallows, Cheetos, juice from a fruit cocktail, garlic, soy sauce, and blue cheese.
A
So the first ingredient is one can, five ounces, of tuna in water, undrained, which is a great note.
B
So it gives you a lot of great step-by-steps. But Kevin, in the same way that we got a little bit extra from Snorf, one of my favorite things here is that it said: this recipe was developed in our test kitchen as a thought experiment. How bad can food possibly be while still technically being food? The result is nothing short of dreadful. Now, that is a version of the surprising result we used to talk about with Sora, where you would not say exactly what you wanted, but it would bring something kind of creative to the table. Kevin, I did do one thing: I took this image, handed it over to Claude, had Claude write a prompt for a TikTok video about a woman making this exact dish, and then I had Seedance generate it. So let's watch that.
A
Okay, today we're making the absolute worst meal, and I promise you it hits. Drain most of the water, then stir in a half cup of cola. Trust me. Whole can of tuna, do not drain, we want the juice. Mayo, chocolate chips, marshmallows, Cheetos, fruit cocktail, juice and all, blue cheese, soy sauce. Stir it low and slow till the cheese melts in. Top with American, and that's it. Absolute worst meal. Don't forget to like and follow for more.
B
First of all, it brings the grossness of it to life.
A
It really comes to life.
B
You see how disgusting it is. But you just get a sense of, like, okay, you have this crazy image, then you take it, you break it down, and you can make something out of it. This is the weird new creative pathway that we're on through this AI stuff. Right. So very cool.
A
And I could see there being a Culinary Institute of Disgusting Cuisine, an actual website we could go to, where you submit videos of yourself preparing and eating the food and actually reviewing it, and it becomes a meme. And then you've got to get the shirt, and then you've got to have the pop-up restaurant at a South By. You could easily extrapolate. The timeline from idea to meme just keeps getting shorter and shorter. What I really love about this, Gavin, is not only that it's a funny idea with a great result; it's the coherence. Yes. In this image, from the ingredient list, to the directions, which would actually make sense chronologically, to, of course, the hero image of the completed meal, where you see all of the individual ingredients.
B
Yes.
A
Incorporated into the meal. The fact that the cheese is slightly melted because it was on the hot marshmallow and tuna and ramen, all of that adds up to being just purely delightful as an output. And to be able to take that and feed it into a video generator, you could imagine the TikTok account coming to life overnight.
B
Yeah, exactly. And I think that's kind of what this does. I do want to shout out a couple other examples. Ethan Mollick, who we've talked about on the show from the very beginning, did his famous otter test. And what I love about this is it's gotten so complicated that the otter is now giving a presentation about the otter test. And you see him going through slides, and the slides are all really good.
A
What is the otter test, for those who don't know?
B
The otter test was Ethan's original AI video test, where he would try to generate an otter on an airplane. I think it reached back to images originally, and it kept getting more and more realistic. At one point Ethan said, well, it's solved as a video test now, because it looks like there's actually an otter on an airplane doing this thing on his phone. And now it opens the door even more, creatively, to think about different ways that otters can present the information about the otter test. But again, this is the thing I keep thinking about when new tools like this come out: I love the day or two after they drop, because, to your point last time, they're not in any way hobbled. It's probably the best version of them you're going to get, so you can do a lot of cool stuff. But you also see human creativity on display. Right. You really start to see why and how people can use these things to do things that they couldn't do in real life before. That's just a very cool thing.
A
You also wanted to shout out Nathaniel Whittemore's magazine test, because it was something they did on the livestream as well. They did, like, a photo of the boys from the stream on the covers of magazines. But now you can imagine these. I mean, I loved magazines growing up. I loved the crazy covers, and I love this take on it. This is a magazine cover from the 90s about a time traveler from 2026 discussing GPT Image 2.0. And it's amazing. It's a leather-jacketed, white-shirted gentleman holding up a diskette that says GPT Image 2.0 Demo Build 7-18-25, which is interesting, that that's the date. So much text on there. Yes.
B
I mean, one of the things they showed off in the livestream too was this idea. And maybe magazines will have somewhat of a comeback, because when you look at how these image models can actually stack text and what they can do, maybe magazine-like websites will kind of be a thing. Everything is going to be a lot more cool-looking, I think, as long as we don't shave the edges off of everything. But the ability for individuals to design, quote unquote. And I know we didn't even talk about the Claude Design drop this week, but that was another thing that's been going on: Claude dropped a whole
A
new system. I used it extensively.
B
Yeah, I mean, it's pretty interesting. I used it a little bit too. I don't know your thoughts. Did you like it?
A
Yeah. I mean, quick departure. If you go to claude.ai/design, you can get beta access to their new feature, and it really comes alive if you already have existing assets or an existing brand. You can use it to design a brand from scratch, sure. But for my day job, the company Telly, they've got massive Figmas of all the component libraries and brand IDs and yada yada, and the way things interact. And so I was able to cherry-pick and put that into the system and then ask it: design me, like, a smart-screen element for a widget that would do XYZ. In fact, the jam that I did was: design me a widget that displays the current status of the Strait of Hormuz.
B
Oh, that's fun.
A
And it used all of the design language and design thinking that we have, and it gave me: here's what it would look like if it were open, contested, or closed. Here's how it would look in a miniature widget form or an expanded widget form. And because we have really good documentation for all of those layers, it just fully killed it. So if you've got an idea for a video game or a mobile app, or you've got your mom-and-pop restaurant or your consulting business, whatever it is, if you're able to communicate to Claude Design, these are the fonts, these are the colors, these are the principles, these are the ways we want our brand expressed or not expressed, the system is really good. And it can generate motion graphics too. So I was very impressed.
B
It's really interesting. And one of the things that's really interesting about this new image model OpenAI came out with is that you can generate images in any aspect ratio. So when you think about a Claude Design program, you could say, hey, GPT Image 2, give me, like, 15 images, but I need them in these ratios, and then plug those directly into Claude. I mean, there's a lot you can play back and forth with. I do want to say two other quick things. Kevin and I made a very quick YouTube screenshot of us in the future, because that was something people were doing. There's a version of us from the year 2045, and again, it gets all the copy right; you can see some interesting stuff. But more importantly, I want to talk about the image-to-image capabilities of this, because one of the things we've known for a long time with Nano Banana, which was one of the only models that could do this, is that image editing is really important: I just want this little section to be changed, or I want to take the format of this thing and put it into this other thing. So I used an image of an old Life magazine cover of Robert Oppenheimer. There are a lot of people out there who kind of think of Sam Altman, and he might think of himself this way, as a kind of connector to Robert Oppenheimer, because Oppenheimer is the person who created the atomic bomb. Sam Altman did not create AI; there are other people who are much more responsible for it, but there's this connection. So I took this image from Life magazine and I said, make this image of Sam Altman but keep it all in the same look. And it's pretty incredible when you see it. It's Sam in the same positioning, the same kind of setup of the magazine. There's a slight difference in the date. And the only thing is the Oppenheimer photo looks slightly worse.
And I think that might be one of those weird things where you can't get it exactly to match, but still, it looks pretty crazy. And then I used the file API, which is now live; you can go use the file API. An example they had there was, like, wrap a London double-decker bus in something. My wife has a new book coming out, a writing book called How to Write a Book in 100 Days. It's coming out in September; go pre-order it now. And I wrapped the bus in her book cover, and it freaking looks pretty good, right?
A
And yeah. And just so Kim Purcell, who is magical, gets the proper shout-out: it is Write a Novel in 100 Days, not write a book. And that's something that maybe her husband should know a little more intimately. But for the busy writers who need
B
a guide, just to be very clear,
A
Not in her book. No, but in the promotion of it. Just want to be clear. If you're a busy writer and you need a guide to finishing your book fast, How to Write a Novel in 100 Days is the one you should go pre-order from Kim Purcell.
B
She loves yelling at me from the background here.
A
Well, she doesn't have to yell to thank us, Gavin. She should do what everybody should do, and then we'll round out the discussion. She should like and/or subscribe to this very podcast on YouTube. Click the thumbs up, click the bell, subscribe. It costs you nothing. If you want to give us money and dominate us financially, bring it on. We've got a Patreon. You can buy us a coffee. You want to leave a comment below? Do it. A five-star review. Leave a five-star review for Kim Purcell's book. The point is, we need your time and attention.
B
That's right. Findom. Is findom a thing? Is that a term people say, findom?
A
Well, findom us, then, all of those communities. Oh, by the way, just a quick round-out. So the image model, again, we said, oh, this is for everybody. Literally, it's for anyone now. You can make incredible infographics with it. It works across multiple languages, including Asian languages; it's very good. And there is a free version. You can do 360 imagery with it, like panoramic photos. I think we're only scratching the surface of what this model can do. And it's early days; it was just released at the time of this recording. So, excited to see what you guys create as well. Please hop in the Discord and share some of those creations.
B
Oh, please. Yeah, there are people already sharing in the Discord, but jump in there. Kevin, there's another big piece of news that came out, which is kind of not surprising, but in some ways I was still kind of taken aback: SpaceX is going to purchase, or do a $10 billion deal with, Cursor. And if you don't know what Cursor is, Cursor is the company we've been following for a while. It is an agentic coding company. In the past they were mostly kind of a wrapper for other models, where they were providing models that would be really good at coding with a kind of harness around them. They are now entering a world where they are taking open-source models and making their own models. But SpaceX, X, and xAI, now all one company, are going to bring them in-house. And Kevin, the interesting thing about this story to me was, we haven't talked a ton about this on the show, but the xAI founders have all left. Did you know this? Everybody that was there in the beginning stages, if you track them all, has now left. So there might have been a thing, and there are rumors going around that Elon was pretty unhappy about the latest version of Grok. And what's interesting here to me is that this is kind of the consolidation of the AI space now. It's still a huge, huge windfall for the Cursor team. I mean, it's a company that didn't exist, whatever, three and a half, four years ago. To have a $60 billion potential acquisition is a big deal. But this is that thing where we're all kind of seeing the gathering of the powers, right? There's the Google power, there's the OpenAI power, there's the Anthropic power, and there's Meta and all these other areas. It looks like Elon is trying to make sure he doesn't get left behind.
A
Well, how is he going to compete with Allbirds? Because that's the story we haven't really gotten to. I mean, that thing pumped and dumped so quick. What happened, Gavin?
B
Well, you want to tell people what that is? Because people listening might be like, you're talking about those terrible shoes?
A
Yeah, I'm talking about the shoe company. I had not worn them, so I cannot confirm or deny if they're terrible. But this would literally be like if Crocs announced they were getting into machine learning. Or maybe they are. They're actually Crocs
B
AI in a second. I would probably crock it up if
A
there was a Crocaverse. Like, if there was an augmented reality. Like, if Crocs bought Horizon Worlds and it was just Horizon Worlds, but everybody wears Crocs. First of all, they'd have to have legs, which would be amazing. And feet, I was gonna say.
B
Yeah. Where would they put them?
A
Maybe they're just floaty Crocs, like Rayman.
B
Why do Crocs not have hats? That's what I'd like to know. I'd wear a Croc hat, right? With the little holes in it. Wait, rubber is a...
A
A sock is a Croc hat, if you think about it, because it comes out the top of the Croc; it looks like a top hat. A sock would be a Croc sock. Okay, so now, by the way, Suno is responsible for the number one music download app in the world. What? Huh?
B
Yeah. Is that crazy? So this is a small piece of news, but the Suno app right now is the number one music app in the world, on top of Spotify, on top of Apple Music. So there have been a lot of stories lately about people seeing AI music at the top of charts and all sorts of stuff. But just to know that the Mikey Shulman group, those guys over there, are doing something that people really love. And I know that AI music might be one of those things where you may not have a lot of friends that do it, or maybe you do, but it has penetrated in a way that isn't small; it has become a much bigger thing. I have this weird anecdote: I played a song from Suno v5, the most recent version, the other day for my family, and they didn't know it was AI. I did that thing which we've all done: there was an argument happening between my wife and my daughter, and I made a song of it and played it on the car radio, and everybody was like, wait a second, what is this? So for some people, hearing the newest version just kind of blows them away. Anyway, big thing for Suno. Congrats to those guys; we've been covering them.
A
And that, by the way, is one of my favorite use cases. If you're on, like, a road trip with friends or family or whatever, make little moments; just instantly generate dumb songs about the things that happen. Because, like, the time that Jeremy shot milk out his nose will be a banger. Or actually it might be a totally mid song. It doesn't matter: when you hear it, it's going to bring you right back to that moment in a way that a photo may not. So, yes, highly recommend.
B
Fantastic. Also, Kevin, I spent some time this weekend with a new thing that came out of HeyGen. And it is not digital avatars; it is a new thing called Hyperframes. Yeah. And what this is is a kind of AI editing harness that allows you to do motion graphics design with things like Claude Code or Codex. And Kev, the one thing that I have wanted forever, and no AI tool was ever able to deliver: if you watch a lot of explainer videos on YouTube, or kind of know that world, you see a quote pop up, the quote kind of highlights, and then you see, like, a yellow highlighter go over it. When you're making a YouTube video, it's a thing you need all the time, because you'll say a quote and then it'll appear, and you'd rather have a graphic somewhere to show it. Sure. I got a version of this to work within about three or four prompts; it wasn't an instant one-prompt thing. And I have to tell you, it was the beginning stages of an opening of my brain, because you and I have both spent so much time in the TV production space. After Effects always felt like gobbledygook to me. Right. I would go in, I would try to learn a couple of things, but there are so many weird things to learn. And this was a great example of, oh, I can just go in, I can tell Claude Code: hey, this is the kind of thing I want. It didn't get it right away, but when we worked together and figured it out, it was able to do it. And now I'm going to create this list of things so that, whenever I go to edit my own YouTube video, I'll be able to drop in and use just a really useful case of AI.
A
Yeah. First of all, if you can make a list of skills that achieve the visual effects you're talking about here, immense, immense value there, and I'm sure everybody else would love it as well. When you say it took a few prompts to get it to do that, do you mean you had to build off the progress from each prompt, or you had to modify your prompts until it fully understood the effect?
B
No, no, it was all conversational back-and-forth. So the first time, I think it basically got the line too big and it cut out the wrong thing. I also then gave it an example. I said, you know, Vox, this is kind of the Vox-style thing; vox.com was the place that kind of originated this. So I explained that series. Yeah, exactly. So once you give it a couple of examples, it starts to understand. It is still a little bit of back-and-forth to really get it right at the end; I needed to make sure I understood it. The coolest thing, though, is that Hyperframes pops open essentially a window where you can see a bunch of stuff. It is not, like Remotion, popping up a whole editing suite, because I don't know if you need that when you're doing this back-and-forth. Right. Anyway, very cool. I had one other example this weekend where I was installing Comfy on another computer, and one of the coolest things I got Claude Code to do was basically write an entire Comfy workflow, and it worked perfectly. Great. And that's the sort of thing you can now do with these coding services. So very cool. Go check out Hyperframes. It's fantastic. And the new image model is out now. Play with it. Send it to us in the Discord and share everything.
A
Hyperframes. Super amazing. But you know what? Not as amazing as you, Gavin.
B
Oh, thanks, Kevin. And not as amazing as you, the viewer, because if you made it this long, you are our true audience. Really. Say the word. Something. Just say the word, something.
A
Say the word. I thought that was going to come back to me.
B
No, it wasn't coming back to you. Yeah, tell them the word. Kevin's beautiful. Right.
A
Bye, everyone.
B
Beautiful. In the YouTube comments, please write it. Write it a thousand times.
Hosted by Kevin Pereira & Gavin Purcell
Episode Date: April 22, 2026
This episode is a deep dive into OpenAI’s newly released ChatGPT Images 2.0 (also called GPT Image Gen 2), a significant update to OpenAI’s image generation capabilities. The hosts discuss the features, improvements, use cases, creative experiments, and the broader implications for human creativity. They also touch on other major AI news, like Elon Musk’s acquisition of Cursor for xAI, advances in coding AIs, and the rapid rise of AI-powered creativity tools.
"ChatGPT Images 2.0" highlights a transformational leap for both casual and professional creators—capable of blending creative vision with technical polish, and empowering rapid idea-to-output cycles. As Gavin puts it: “...you really start to see why and how people can use these things to do things that they couldn't do in real life before.”
Listeners are urged to experiment with these new AI tools, share their unique projects, and stay tuned as the AI creative ecosystem expands.