
A
Will AI models have to be approved by the White House before they're released to you? That is what a New York Times blockbuster story just said. But more importantly, Zach Galifianakis is an AI doomer.
B
Where humans are going with AI. I mean, I guess I don't know if I'm old fashioned, or maybe it's because I'm 56 now, but I think this whole AI thing, and I don't mean for medicine, it's got a lot of great things there. Otherwise, though, I think it's another, in the biblical term, biting the apple again.
A
We'll dive into how the government might soon be up in our own AI business and how it all could be the fault of Anthropic's Mythos model. And AI microdramas are getting more popular in China, and over here we'll show you how AI production is both improving things and, maybe, not so much. And if you're wondering why I'm doing this whole intro today, it's because Kevin is moving across the world, literally. But thankfully we've got friend of the pod Ben Relles here today.
C
Hi, I'm Ben.
A
That's Ben. Hey everyone, I'm Gavin. This is a picture of Kevin in a beret with a baguette in an undisclosed location. And this is AI for Humans. Welcome everybody to AI for Humans, your twice-a-week guide to the wonderful world of AI. Normally Kevin is sitting here across from me, but he is traveling this week, so I have a special guest. Ben Relles is a good friend of mine. He has co-hosted the show before. Welcome, Ben.
C
Hey, good to be here, Gavin.
A
Ben, tell us a little bit about yourself.
C
Uh, yeah, started as a YouTube creator way back in 2007. Then I spent 10 years working at YouTube and I spent the last three years working with Reid Hoffman, mostly in the AI space.
A
Yeah. So Ben and I have been doing fun stuff and projects together for a long time. Later on we're going to talk about a way we tried to use GPT Image 2, the new model from OpenAI, to do something interesting with Zoom backgrounds, which was Ben's idea. But we will get into that in a second. We're also going to talk a little bit about AI media, because Ben has spent quite a bit of time thinking about and understanding not only AI media but, in his time at YouTube, the creator and creator-economy world as well.
C
Yeah, for sure. A lot of similarities, I think between early days of YouTube and early days of AI video.
A
Okay. But we do now have to start with one of the crazier headlines, a story we've been tracking for a while. We all remember Anthropic's Mythos, which was announced, not released, maybe four to six weeks ago now: Anthropic's state-of-the-art model that was deemed too dangerous to release. At the time, Anthropic was in a little bit of a tiff with the US government about whether or not they were going to allow Anthropic models to be used for government projects. Well, today there is a giant story in the New York Times, which we can only assume is driven in part by this Anthropic Mythos story, saying that the White House and the US government may now be asking to approve AI models before they're released. And this is a big deal, capital B, capital D, when you talk about the future of AI at large. Ben, I assume you have been tracking this story and it's something that has come across your radar.
C
Definitely been tracking it. I don't know if I have such a strong, hot take on it. It does seem like sometimes these things flip when you're, like, in the White House. Maybe you're more for regulation than when you're out of the White House, but, yeah, it's pretty fascinating to see it develop.
A
One of the most interesting things about this story today is that there's a backstory going on here: there might be a shifting of power around how AI is dealt with at the White House. If you remember, David Sacks was the guy who was in charge. He was the AI czar, and he recently left his position, and there are some other people in the Trump administration who might be taking charge of this. But I think the bigger story here is actually a much larger one. It has to do with this idea of AI growing up in some ways, right? We're entering a new world where, whether or not the Mythos model is as strong as promised, and there's no reason not to take Anthropic at its word, and plenty of other people have said the same, government is going to have to get involved in these conversations, in part because if they don't, they're left holding the bag if something bad happens. In fact, I want to quote from this story real quick, because I think it's an interesting quote: the White House wants to avoid any political repercussions if a devastating AI-enabled cyberattack were to occur, people in the tech industry and the administration said. So in a lot of ways, this is about getting ahead of super-powerful AI landing in the hands of people who might do something bad with it. But I also think, Ben, this is the starting point, I don't want to say we're there yet, but the starting point of regulating this industry in a more significant way, which people have very strong thoughts about, one way or the other.
C
Yeah. And probably a few years ago would have been premature to try to put some of this regulation in place, and now some of this response is, like, specific to the way that the AI is being used.
A
One of the other things I found fascinating this week is that Sam Altman has been much more active on Twitter than he's been in a while, and he's been talking a lot about their new model, 5.5. There's also the piece we covered briefly last week about the goblins in GPT5. Did you see the goblins piece and learn about all the goblins and stuff? So, to me, they're actively trying to turn their narrative and be more visible. And Codex is a very exciting thing right now; people are very excited about it. I do think the Anthropic narrative is an interesting, slightly different one. Anthropic is much more concerned, at least on paper, about AI safety than OpenAI has been in the past. And it's fascinating to me that Dario Amodei, the CEO of Anthropic, went on stage, I think last year, and proactively asked for regulation, asked for this stuff. So I'm curious to hear what you think, Ben. Again, I don't doubt the Mythos model is as powerful as they say, but when the story came out that it was too powerful to release, there was a conversation around whether the Anthropic team was doing this to get the world to take it seriously. Does it feel like the world is sitting up and listening now in a way that it wasn't, say, six months to a year ago?
C
I think so, yeah. Like, Mythos didn't create the risk, but it made more people pay attention. And that happens with a lot of these things. Sometimes it's a model release; sometimes it's when something actually happens, like somebody uses AI to do something nefarious and it makes headlines everywhere. The technology was already over there, but now people have something very specific to point to, and very often public perception is shaped by stories like that. Actually, not a lot of this hits, you know, the mainstream, and when it does, that can impact policy and impact the way people feel.
A
Yeah. You know, it's funny, when you were saying that and thinking about your background, it made me think about when you were at YouTube, watching everything YouTube went through. Not necessarily the crazy things, just the growth, seeing something go from people not taking it too seriously to becoming this dominant force in media. You felt that when you were there. Maybe talk a little bit about what that was like, when you started and everybody was kind of putting on a show.
C
I mean, yeah, in 2006, 2007, it really was dismissed. There'd be these news reports, like: it's dogs on skateboards and babies laughing, it's YouTube. A lot of the early stuff you could almost equate to AI slop, where it was just people putting everything up, and what was going on beneath the surface was that people were starting to experiment with formats and things that would eventually really change culture. But very early YouTube was dismissed. Then I feel like YouTube had a honeymoon period for a few years where the press was really positive. It would be on Good Morning America once a week: she was a kid in Idaho, and now she's got a beauty channel, let's hear from so-and-so. And that was all positive, just, this is unbelievable, people are making money and building fans on what they love. And then there was a certain point where suddenly there started to be a lot more pushback, I think, on how does the kids app work and how does regulation work, all that kind of stuff. But yeah, I would say the early, early days of YouTube do remind me of the way people think about AI video content now in some ways. That was dismissed too, as just a bunch of people screwing around, this nonsense.
A
Yeah, I mean, I think that's also how people thought about early AI. When ChatGPT first came out, it was powerful and people understood that, but they didn't think it was world-changing. Right? There's this whole thing about those of us who grew up as sci-fi nerds, and I don't know if this was you, but I was a super sci-fi nerd growing up, so all of my life I've heard these narratives about utopian societies. I read the Culture books before AI was a thing; that's Iain Banks, for those of you out there who like the science fiction Culture books. So in my mind it was always like, oh, this is kind of the endgame of this. And there's always been this small circle of people in San Francisco, and in other places in the world, focused on the singularity, which we've talked about on the show; that Ray Kurzweil book came out in the 2000s. But the mainstream didn't really think of it as a thing that would affect them. Right? So what's interesting about this Mythos story, and now the follow-up, is that I wonder how many people opened this, because it was on the homepage of the New York Times today. For somebody who's tangentially aware of AI, I wonder how much more this has come home to roost because of stuff like this. It does feel to me like it will be the dominant story of at least the next five years, if not longer.
C
Yeah, yeah. And I'm sure you get these emails too. My aunt, who's probably the smartest person I know, sends me a New York Times article, like, every few days. Did you see this one? Did you see this one? And a lot of times, these things which are hitting the mainstream, which are as important as, like, what's going to happen to jobs in the next few years, end up being something where the narrative, at least at this point, seems very negative and very hard. And if you're in our position, I think, I don't want to be on defense.
A
Yeah.
C
Like, every time somebody emails me or sends me an article, it's: no, don't worry, you're fine, here's why we're good. But I think because I've been working in AI a lot more, there's this expectation that I have answers to things that are really complicated.
A
Well, I think you're absolutely right. There's this idea that if you pay attention to AI, suddenly you're also having to defend it in some ways. It's a complicated conversation; we've talked about that. I do want to come back to the narrative, which we've talked about on the show, because it's really tricky with these narratives. I want to play a clip here. This was Sam Altman on the podcast of Nicholas Thompson, the CEO of The Atlantic. And there's this little moment where even Sam, I think, is finally starting to realize that the companies themselves have to get better at this narrative. When we come out of this, we'll talk a little bit about how that's different if you're Anthropic.
D
I don't think people. We talk about AI, the way even you and I hear this, as this kind of technological marvel, how amazing this is and all this cool stuff we're doing, and that's fine. But I think what people really want is prosperity, agency, the ability to have an interesting life and to be fulfilled and have some impact. And I don't think that's how the whole world has been talking about AI. And I think we should do more
A
of that. So you get a sense that he's aware of it. There's this promise that people inside of the AI world, and it's funny, I don't use the term AI bubble because that's a whole different thing, but people who are using AI all the time believe is going to happen.
C
Echo chamber.
A
Yeah, but there are other people, like your aunt who sends you those links. Is she aware of the positive things that we think might come from this? Where do you see normal people landing on this conversation?
C
I think she is aware of the positive things, for sure. And a lot of times we end up going back and forth a little bit about the implications of it. There was one article a few weeks ago, and the basic idea was this notion of super agency, that high-agency people are the most annoying people, and then AI got lumped into that. In my mind there are over-generalizations there; I think being high-agency is generally a good thing. But I would say that for the most part, if you're somebody who is skeptical of this space, there's always going to be a new headline every day to reinforce that skepticism.
A
Yeah. You know, this is a small thing, but I saw this: the Minnesota Timberwolves are in the NBA playoffs right now, and they tweeted out a very simple line that said something like, none of our graphics are made with AI, and it got like 16,000 likes in just a few hours. It made me realize, and we've talked about this on the show too, just how much negative sentiment there is around this stuff. So it is really important for the AI industry at large to change these narratives. This conversation has gone further than just your average AI CEO, or the CEO of a magazine or a website. Zach Galifianakis has hot takes on AI as well. Let's take a listen.
B
I mean, this is where humans are going with AI. I guess I don't know if I'm old fashioned, or maybe it's because I'm 56 now, but I think this whole AI thing, and I don't mean for medicine, it's got a lot of great things there. Otherwise, though, I think it's another, in the biblical term, biting the apple again. I just am very afraid of it.
A
Yeah.
B
And not even for show business. I'm not even talking. I'm just saying in general, I don't trust it at all. And especially the dudes that are designing it.
A
So I mean, you know, you even.
C
Okay, I pause.
A
Yeah, even Zach, I hadn't seen that one. Even Zach has a take on AI, and it just reminds me of how big this conversation has gotten. I think there's a real sense that it could get out of the control of these companies, and maybe some would argue it already is. But to circle back to the government thing: I know a lot of people in our audience are very concerned about the idea that government, especially this government in America, which has had a lot of problems so far, to put it bluntly, would have control over something that could actually improve lives in some ways, without allowing people the choice. It's a tricky thing, right? It feels to me like in a lot of ways we're going to be figuring out this balance as we go, because there are people in the world who would say, well, you wouldn't give the ability to make a nuke to just anybody, and there are other people who say, you're crazy to compare AI to nukes, because it's a completely different technology. It just feels like we're entering a period of time where every single one of these conversations is going to feel bigger than you can expect.
C
Yeah, right. And then the other analogy would be like, well, we wouldn't want to take away cars even though they're dangerous. We wouldn't want to take away the Internet even though there's dangers to it. And you know, I don't know if any analogy is quite perfect for something that has so many implications like this. But yeah, it's interesting. Also there's like the societal stuff day to day. I found these tools to be so wildly valuable that it's hard to not get excited about it in general just because just on a daily basis now I know you're the same way, but it helps us in so many ways. And I got two kids looking at colleges now and I feel like I just talk to Claude for hours about all these details that were really valuable thinking through some of these decisions with them. And then, yeah, then there's the broader societal implications which don't hit you on a day to day basis.
A
I think that's absolutely right. And part of it is that people who are never-AI-ers, who don't want to try these things, either don't know what it can do, or, the other thing that sometimes happens, they may not separate the power structure of the AI company from the AI tool itself, which is also not that useful. To wrap up this section, there's a really good palate-cleanser story about what AI can do for good. There was a story in the Guardian that got into the idea that AIs, not AI doctors, just AIs, have now started to outperform doctors' diagnoses in emergency situations. And I don't think anybody is saying we want doctors to go away; I don't think the world at large would say we need fewer doctors or we don't want human doctors. But it's a good example, because it's really hard to make stressful decisions. I don't know, have you watched The Pitt, Ben? Are you a watcher of The Pitt?
C
I'm kind of squeamish. No, never.
A
So I watch The Pitt, and I can tell you I am squeamish also. There are so many moments in The Pitt where I'd be like, oh, I can't make this decision right now because I can't look at it. But if you had an AI that was there and could at least guide you in a direction that might help, that is the best version of this kind of world going forward. I know there are people out there who'll say, give me all the AI doctors, I'd rather have an AI do it. It doesn't have to be for everybody, but there are positive things happening as well.
C
So, yeah, I mean, you know that I lost sight in my right eye for the last three months.
A
Yeah.
C
And, yeah, I basically ended up getting surgery on it thanks to AI that was like, take this really seriously, you need to get to an optometrist now. And then I've been back and forth with it on how to treat this and how to make sure I get my eyesight back. Certainly that's the example people usually point to when you're having this debate: this is going to solve cancer, it's going to solve all of these different types of diseases. And I definitely love the upside in that. And then there are a lot of the little things. This is a tangent, I guess, but I've always kind of wanted a personal website and just never made one. And then you spun one up for me in, like, two hours, and I love it. I don't know if we can do a little preview of what that looks like with a graphic or something. But I kind of go back and forth between how much I love it and also trying to be responsible about the broader implications beyond just feeling like my life is better, you know?
A
Yeah. I mean, I think this is, again, a thing we're going to figure out over time. And something else we're going to figure out over time is how to get more subscribers to this YouTube channel. That's right, Ben, because you worked at YouTube forever, you know we still have to do this bit on our YouTube show every week. Thank you so much for instituting this. Like and subscribe, everybody. Like and subscribe to our YouTube channel. It is how we grow. Also, we have a Patreon now.
C
Gavin, they said only say one thing. If you say like, comment, subscribe, people don't do any of it, but if you ask for just one thing, they do it.
A
See, this is what we needed, Ben. We needed you, like, three years ago. So all I gotta say is: just like. Just "like," this time.
C
Actually, I think comment is better, because then they spend time thinking of the comment, and then you answer the comment, and it feels like a conversation. So I'm going to say: leave a comment.
A
We are now transitioning into one of the weirdest stories that I don't think nearly enough people have been talking about. And Ben, I know you have things to say about this; I do too. We are going to talk about microdramas, and we're going to talk about AI. Specifically, this is coming out of a big story, again from the New York Times, about the rise of AI in Chinese media production. And this is something we don't get that close a look into. China and the US were getting closer in some ways for a while and have started to pull apart again. But if you're not familiar with this, if you're not in the media business: China is a massive, massive media market. One of the largest movies in the world last year was a Chinese film, I think it was called Ne Zha or something like that, and it grossed a billion dollars in China alone. American movies make money there too, though not as much as they did. But the Chinese production world is very different from the American one in a lot of ways. There's a lot less regulation on what can be used; actor regulation is much different; IP protection is much different. And they have found that the rise of AI video is starting to eat into a lot of their actual production, and there are also a lot of AI video productions coming out of China. Ben, I know you are an AI video aficionado; you've spent a lot of time looking at and working around AI video. First of all, as somebody who's from YouTube, what are your thoughts on the microdrama space in general? Because a lot of these are not just one-off videos; they are microdrama series created with AI. And then we should talk a little bit about where you see these kinds of AI videos popping up.
C
Yeah, I mean, I've definitely been fascinated by it because, you know, YouTube, to me, a lot of the idea was always like, how do you create a format that repeats? But there was always this idea that that format, whether it was like the kind of stuff you did with Jimmy Fallon or the kind of stuff that like comedy channels would do because somebody would stumble into it out of nowhere. It was very hard to do serialized content on YouTube. It was almost always like a repeatable format, Eating hot wings, whatever it is. But you can go straight into episode 36 and know what's going on and like it. And then I thought the same was true mainly of TikTok initially, and shorts and reels and all that. If you're scrolling through, you can't expect the audience to be able to follow what's happening. You need to hit them in the first two seconds. And this seems like it's very counter to that, where it's actually you want somebody to be hooked and then go into the series and eventually pay for it. And so the idea, which I like, is that I just think serialized content has been very hard on social media and on short form for so long. And it seems like this has kind of cracked it. And then, I guess, you know, what I'm less excited about is so far I haven't seen a lot of really great writing, and there probably are some out there, so I don't want to dismiss the category. But just, you know, when I've seen some of these things, it feels like, you know, maybe older version of, you know, soap operas or something, which have been around forever. And not to knock soap operas, I used to watch them. But I just think that it'd be nice if there was both, like micro dramas. But somebody can also crack how to make these things, like, addictive. And everybody's talking about them also.
A
Yeah, I mean, I think microdramas actually are a really interesting format at large. But to your point, ReelShort, which is the most famous microdrama platform to date, is mostly filled with kind of salacious, Lifetime-movie-like stories: my husband left me for a younger woman, and now I'm X, Y and Z. But what's interesting about the format at large, to your point, is that it draws people through a linear story on their phone in a short-form format, so they keep watching. And yeah, they've found that it sometimes monetizes better for creators. I do think everybody should understand how mature this industry is at large. One thing, if you're in this audience and you're not familiar: ByteDance, the company behind TikTok, has their own app that I don't know has gotten enough attention here, but it's called Pine Drama. You can go download it right now. What they've done, I think, is work with creators to bring over full series to the Pine Drama app. But the thing to me that's so funny, and I mentioned this on the show maybe two months ago, is that I am a heavy TikTok user. I don't know, are you a big TikTok person? Do you use TikTok very often?
C
I would say yeah, more than other apps.
A
Yeah. You were the one who told me you have multiple accounts on TikTok to get different things in the algorithm, Right?
C
Yeah, very briefly: I was finding myself getting lost in TikTok, so I have a work account for TikTok where I just follow my, sorry, not my For You page, the people I follow. Then I have a separate one for comedy, and I try to not go to the For You page and just go to the people I follow. And I keep these two accounts.
A
Yeah. So you're curating your algorithm. The fascinating thing about this to me, speaking of the algorithm, is that something showed up on my TikTok feed and I was like, what is this? And it was a bunch of pregnant AI mermen. It was the weirdest story, a scripted AI video tale. So I went to their channel, and what I realized, this was actually maybe three or four months ago, was that they were telling serialized stories: they had, like, eight in a row, then there was a break, and then eight more stories in a row.
C
Right.
A
And it's these companies, or it might be, by the way, individuals or very small companies, that are using these AI video tools and churning out the strangest things ever. Right now, if you go to Pine Drama, the first thing that came up for me was a series called Knocked Up: Husband Edition. And hopefully Will, our editor, is able to show people this. In fact, if we can, let's show people a little sound-up of this clip right here.
C
What the.
B
What is happening to me?
A
David, are you pregnant? Yeah. So what you're seeing here is an AI video output, and it's, like, a trucker guy whose belly is swelling. It's big, and he's got a baby, and then it's him dealing with this. But Ben, the fascinating thing to me is that a lot of other microdramas are real productions with real people, just shot very cheaply across a bunch of episodes, while this AI video stuff, to go back to that New York Times story, is starting to overtake the microdrama industry. What I'd love to get your take on, as somebody who's spent a lot of time in the AI video world, is this: there's the conversation around AI slop, and we've talked a lot about AI slop on the show, but there's also the conversation about infinite content, right? This idea that everybody will have something that comes to them, and it'll be exactly the thing they want. Where do you think we as consumers of this stuff go from here? And then, where does the entertainment industry go from here? That's a big question too.
C
Yeah. I could see it going either way. Part of me feels like the idea of personalization, infinite content, in some ways could be good. The way that collectively we've decided to watch content is very bizarre; after all of these years, this is what we decided: we're just going to go from sports to war to comedy. I think people are universally aligned that this is not an ideal way to watch content, and maybe personalization could be a really good thing, and they can create algorithms that we find more meaningful and all that. And then on the other hand, and we talked about this a little bit the last few days, I'll see something happen and I'll think, that's really creative. There was this one thing, and maybe you can put it in the video, that would take old footage from the 80s, freeze it, turn the frame into a high-def photo, and kind of bring that moment to life. When I first saw it, I was like, this is so cool. And then my feed showed me a couple more, the WWF and Saturday Night Live and Michael Jordan and different stuff, and within like a day I was like, all right, I've seen enough of those. There was so much of it so fast that this thing, which was a really creative use of, I think, OpenAI's new image model, to take old footage and turn it into something that looked like it was happening now, only took a couple of days before there were thousands of these. And that is partly what's exciting about remix culture: somebody does something and other people can build on it. And it's also, as a creative and somewhat as a viewer, kind of frustrating that the minute something comes out, it can be replicated 10,000 different ways.
A
Yeah. It's funny you say that, because I hadn't thought about this until just now. There was a huge moment like four days ago where GPT Image 2 was able to make these kind of childlike drawings, and everybody did one. Right. And it's so funny that that was a huge thing, but it somehow missed its window; it wasn't big enough for us to talk about. It's a very cool thing to do. Somebody was talking about how Justin Bieber's streams shot up after his Coachella performance. Right. And sure. What that made me think about was just how powerful novelty is now. And this idea that, maybe in the future, yes, I do think there will be deep storytelling experiences. I hope so. I still like reading books, I still like watching long-form series. But novelty is just going to continue to get more and more powerful. The reason I mention the Bieber thing is, the reason I think it might have been the case is that Bieber did this whole set at Coachella where, speaking of YouTube, he was in front of his computer, just showing people YouTube videos. And a lot of people thought, this sucks, this isn't what a performance should be. I kind of was like, this is really interesting, and it made me care about it. So I do think novelty feels like it's going to be the thing people aim for. And it makes me wonder: maybe we will just have more stuff that gets more popular but then dies faster.
C
Yeah, but I mean, the tricky. Yeah, I agree. I mean, the interesting thing about the novelty is that if it's a novelty and there's no reason to come back to it, it's really hard to build a business on it, you know, and.
A
Or tell a bigger story, tell a
C
longer story, or to tell a bigger story. Or it's just that the first time you see it, it's cool. And this relates to what you said at the top of the podcast. If I could just tell the quick story, and you can decide whether you want to use it later: I sent you a photo of my dorm room in 1993, when I was a freshman in college, and I was like, how hard would it be to recreate this as a photo now, something I could use as a backdrop on a Zoom? And you're so good at this stuff. I love how quickly you spin things up. Five minutes later, you sent me something that I thought was really cool, and it looked exactly like my dorm room. And then we decided, let's make a dorm room generator, where you can put in the college posters you'd have, the school you went to, the things you love. And it was really cool, but it was a bit of a novelty. So if I sent somebody my college dorm, that's cool, but then I can't keep sending it, because you only need to see it once, and that kind of does the trick. Now, it'd be way better if there was some way that, you know, through nostalgia, I wanted to revisit what my college dorm looked like for some reason, but it's just a one-off. Yeah, it is interesting how quickly, with a novelty like that, the first time you see it, it's wild, and then you don't really want to go see all your friends' college dorms.
A
Yeah, but it's so interesting you say that, because it does make you wonder. This is a much bigger question, but is it going to change our brains so that we don't want to try to take on bigger things? Because it's like, oh, there was this moment where everybody cared about this thing, and I have an idea for that, and then before you know it, everybody's onto something else. I will say, the one thing I want to shout out: there's a very famous YouTube video that I loved and I know you are familiar with, which is the Grape Lady. If you're not familiar with it, pause the podcast. I don't care if it hurts our algorithm, go watch it right now. It's two women on an old television show in the 90s, and the footage of it is very crappy. I used the technique we were talking about earlier to up-res a couple of photos from it, and it made me feel joy. Right. So maybe part of it is how we differentiate this idea of tools we can use to make ourselves feel joy versus this idea of, gosh, what does the larger storytelling experience feel like? And I hope, the one thing about that China story in the New York Times, and kind of AI short form in general, is I'm kind of waiting for one of those stories to be written and produced, like you said, by somebody who is really compelling as a storyteller. And you and I both know there are people out there like this, and I hope some of them think about this kind of short-form world. And if you've seen any of these short-form microdramas that are AI-made, I'd love to hear what people in the audience think, or send them in and let us know. But I think that will feel like a change.
C
I was thinking afterwards, I don't want to sound dismissive of microdramas, because I'm sure there's great stuff out there. I'm just curious if there'll be one that breaks through into pop culture, where you can go up to somebody on the street and everybody's talking about this microdrama. That would be really cool. I don't think there's ever been a moment like that; I'd have to think about it, for any kind of serialized short-form content.
A
Yeah.
C
So I think that could be interesting. I will say also, because you mentioned the photo of the old YouTube video with the grapes: I did take an old video, and I could put it in here, of me in sixth grade at a slow dance party in my basement. And I turned that into a high-res 8K video. And it did give me a pit in my stomach. It brought me right back to sixth grade, and Courtney Ginsburg is dancing with Adam Pirro instead of me, and I'm just, you know, very caught up in how realistic it looked. But also, for me, it wouldn't be something that I would publish.
A
Yeah.
C
And I will say also, when you link out to other videos, Gavin, it helps the algorithm. It doesn't hurt. That's, like, what Phil's whole thing was. He'd be like, check out all these links. And then people would spend, like, an hour on his video because he was sending them to all these other YouTube videos.
A
I want everybody to know, every week we put a bunch of links down below here, so make sure you're spending all the time there. Thank you, everybody, for hopping on here with us today. Thank you, Ben, for joining us.
C
Yeah, thanks for having me, Gavin.
A
All right, we'll see you all next time. Bye.
C
Bye.
Date: May 5, 2026
Hosts: Gavin Purcell (in for Kevin Pereira) & Guest Ben Rellis
Special Mention: Kevin Pereira absent; Ben Rellis steps in
This episode centers on the explosive New York Times story reporting that the White House may soon require approval for AI models before public release—a potential seismic shift in AI policy, possibly fueled by the controversy around Anthropic's "Mythos" model. The conversation explores government oversight, the impact of media narratives on AI (with hot takes from Zach Galifianakis), and how China's rapid AI-driven microdrama production is reshaping the boundaries of entertainment globally. The hosts also reminisce about the parallels between the early days of YouTube and today's fast-evolving AI landscape, discuss practical uses versus public fears, and reflect on AI's transformative potential.
Gavin, on government intervention:
“We’re entering in this period of time where every single one of these conversations is going to feel bigger than you can expect.” (15:08)
Zach Galifianakis (clip):
“I just am very afraid of it. And not even for show business. I’m not even talking. I’m just saying in general, I don’t trust it at all. And especially the dudes that are designing it.” (14:13)
Ben, on shifting media landscapes:
“The early, early days of YouTube does remind me of the way people think about, like, AI video content now in some ways.” (07:35)
Sam Altman (clip from The Atlantic podcast):
“What people really want is like, prosperity, agency, the ability to have an interesting life and to be fulfilled and have some impact… I don’t think that’s how the whole world has been talking about AI. And I think we should do more of that.” (11:34)
Ben, on the novelty factor:
“The minute something comes out, you can replicate it 10,000 different ways...as a creative and somewhat as a viewer is kind of frustrating.” (28:36)
| Segment & Topic | Timestamps | Notable Moments |
|---|---|---|
| White House regulation, Mythos origins | 02:13 – 07:08 | “Capital B, capital D” (03:10) |
| Anthropic vs. OpenAI, narratives, YouTube parallel | 05:11 – 08:48 | |
| Shaping public AI narratives, Sam Altman clip | 10:07 – 12:01 | “People want... prosperity, agency...” (Sam Altman, 11:34) |
| Public fear, skepticism (Zach Galifianakis) | 13:48 – 14:25 | “It’s another like biblical, in the biblical term, biting the apple again…” (13:48) |
| Government overreach vs. necessity debate | 15:40 – 16:28 | Analogies to cars, nukes, the Internet |
| AI as force for good (medicine, Ben’s story) | 16:28 – 18:58 | “I lost sight of my right eye… thanks to AI…” (18:02) |
| The rise of AI microdramas in China | 19:41 – 25:41 | Pregnant AI mermen & Pine Drama |
| Infinite content, challenge of novelty | 26:50 – 31:27 | GPT Image 2, dorm room nostalgia |
| Lasting storytelling vs. viral trends | 31:27 – 33:22 | “Is it going to change our brains...?” |
| Microdrama cultural moment hopes | 33:01 – 33:22 | Will there be a “breakthrough” moment? |
Friendly, conversational, and, as always, irreverent—but rooted in real expertise. There’s a healthy skepticism (and some eye-rolling) of both AI hype and AI doom, with a steady call for nuance and good-faith dialogue. The hosts, especially Ben, ground tech talk in practical experiences and cultural context. The humor is dry and self-aware, especially evident in the playful sidebar about YouTube subscriber lingo and algorithm-gaming.
This episode is a recap of a landmark moment in AI policy, blending thoughtful commentary with good-natured banter and real-world examples. It’s about what happens when a technology hits the “regulation event horizon”; how AI firms, government officials, and the public are shaping (and resisting) dominant narratives; why China’s approach to AI-content creation looks so different from America’s; and what all of this means for the future of both creativity and practical progress in AI. If you want to understand where the AI discourse—and regulatory landscape—is headed, this one’s a must-listen.
Next Week: Expect more deep dives as AI regulation, creative tools, public perception, and the technology itself continue to evolve at breakneck speed.