
A
Don't just imagine a better future. Start investing in one with Betterment. Whether it's saving for today or building wealth for tomorrow, we help people and small businesses put their money to work. We automate to make saving simpler. We optimize to make investing smarter. We build innovative technology backed by financial experts. For anyone who's ever said, I think I can do better: be invested in yourself. Be invested in your business. Be invested in better, with Betterment. Get started at betterment.com. Investing involves risk. Performance not guaranteed.
B
Here at the Hard Fork show, we're big sleep maxxers. We're always trying to improve our sleep. Yeah, because, you know, podcasting is a sport, and you have to remain in peak physical condition if you want to perform at the highest levels. And so I noticed a story in The Verge this week about Eight Sleep, which makes the bed that I happen to sleep in. It's one of these beds that, you know, sort of automatically cools and heats according to your preferences and can raise and lower to stop you from snoring.
C
Wow. Flex.
B
They have a new water-chilled pillow cover, Kevin.
C
Wow.
B
And I wanted to ask if you could guess how much it costs.
C
$100.
B
That would be a really great and fair price for a water-chilled pillow cover. The actual cost is $1,049. Come on. And I want to be clear, it doesn't come with the pillow.
C
You have to supply your own pillow.
B
It's BYOP for the Eight Sleep water-chilled pillow cover.
C
Wow.
B
So obviously I sent this to my boyfriend and I was like, what are we thinking about this? And he said, honestly, I think my pillow experience is already fine. And I thought, thank God.
C
Have you heard about these new corduroy pillows they're selling?
B
No, I haven't. Are they from the 70s?
C
No, but they're making headlines. I'm Kevin Roose, a tech columnist at the New York Times.
B
I'm Casey Newton from Platformer, and this.
C
Is Hard Fork.
B
This week: don't slop 'til you get enough. We're talking about the new AI-generated video feeds from Google, Meta, and OpenAI. Then psychotherapist Gary Greenberg stops by to discuss his essay on treating ChatGPT as a patient and why he thinks we should pull the plug. And finally, let's get on track: the Hot Mess Express has returned.
C
Chugga, chugga, choo choo.
B
How many chuggas was that?
C
Just two.
B
Okay.
C
Casey, I don't know if this is on your calendar, but it was recently International Podcast Day.
B
Oh. Happy International Podcast Day to you and your family, Kevin.
C
So I have a perfect gift for you this year.
B
What's that?
C
A subscription to New York Times Audio.
B
Wow. Tell me what comes with that.
C
So this is, of course, the subscription we've talked about on the show in the past. You get access to the entire back catalog of not just Hard Fork, but all of the other New York Times podcasts. But now, in addition to that, with an audio subscription, you'll get subscriber-exclusive episodes from across the New York Times podcast universe. That means more of The Daily, Modern Love, and Ezra Klein in your life.
B
You know, I've been trying to get more Ezra Klein in my life, but he won't text me back.
C
Yeah, well, I don't blame him. So if you are already a New York Times subscriber, thank you. This is already included in your subscription. But if you have not yet subscribed, then maybe this is the time to do it. To learn more, go to nytimes.com/podcasts, or you can subscribe directly from Apple Podcasts or Spotify.
B
Well, Kevin, it's slop week here on the Hard Fork show.
C
Slop till you drop.
B
Don't slop 'til you get enough. If you're new to the world of slop: slop, of course, refers to AI-generated art and video. And to say that it is having a moment right now, Kevin, I think, would be an understatement.
C
Yes, I think this was the week that AI-generated video went from something that was, you know, experimental and early, where various tools had been released, to really sort of crossing the chasm into the mainstream.
B
It really did. And so today we want to talk about what the big AI labs are doing here, why we think they are doing it, and maybe what are some of the implications of living in a world where maybe the majority of video that we are watching is synthetic and generated by large language models?
C
Yes.
B
Shall we get into it?
C
Let's get into it.
B
Well, Kevin, before we flop into slop, we're going to do a quick crop and say what our disclosures are.
C
Yes, I work at the New York Times, which is suing OpenAI and Microsoft over copyright violations.
B
And my boyfriend works at Anthropic. All right, so Google, Meta, and OpenAI all put out tools over the past several weeks, and let's talk about them in order. This whole thing begins with Google DeepMind. They have a very good video generation model called Veo 3. And on September 16th, YouTube has an event where they announce that they are going to integrate a version of Veo 3, Veo 3 Fast, into YouTube Shorts.
C
Right. So you'll just be able to, like, make a video and post it on YouTube from within YouTube with this model, Veo 3.
B
That's right. And this is a free tool. Users can create videos that are up to eight seconds long using a text prompt. They can also just upload a still image and turn that into a video. YouTube will label them as AI generated. And this is basically YouTube's way of introducing slop into the YouTube feed.
C
Yes. So I have not seen a ton of obvious AI generated content on YouTube yet, but I have seen it showing up on other platforms: Facebook Reels, even on X and TikTok. People are sort of using Veo 3 to generate scenes and little videos and posting them there.
B
Yeah. So I think it's fair to say Veo 3 didn't make that much of a splash then. Then last Thursday, Meta gets into the game and releases Vibes. Mark Zuckerberg, in a post on Instagram, announces that a preview of the new social feed is available in the Meta AI app. If you wear the Meta Ray-Bans, this is the app that you use to sort of get, you know, photos and videos off of your glasses and onto your phone. And Zuckerberg posts a bunch of short videos, including one that features a sort of cartoon version of him. His caption is, dad trying to calculate the tip on a $30 bill. And then he pairs that with the real audio of him at the meeting with Donald Trump in which he says, oh gosh, I think it's probably going to be, I don't know, at least $600 billion. And my question here is, what joke was Mark Zuckerberg trying to make? Do you understand the joke?
C
I don't.
B
Is the joke that he. That he's bad at math?
C
I think the joke is that dads are bad at doing tips. I don't know, it's like a self-deprecating dad joke. But like, why does every new social product that Meta releases sound like it was conceived of by the Steve Buscemi carrying-a-skateboard, how do you do, fellow kids character? Like, calling this Vibes? I don't know, man. It's cringe.
B
Calling this Vibes is cringe. Says a 40 year old man.
C
I'm not 40, I'm 38. So I did go into Vibes and take a look at it. It's essentially like TikTok, but if TikTok were populated just by like little animated AI generated shorts.
B
Yeah. My take on Vibes is that this is Cocomelon for adults. Okay. It is completely disconnected from, like, friends or family for the most part. It's just sort of creators making these somewhat fantastical, surreal, unsettling images, and they just sort of wash over you in this endless feed. There's no real point to them. There's no real narrative. It is just, like, pure visual stimulation.
C
Right. It's stuff like, you know, like, oh, a panda riding a skateboard or like, you know, like an inchworm on the moon or something like that. It's just people kind of testing what this thing can do, and the answer appears to be not much that I would personally be interested in watching.
B
Yeah. And so for both Zuckerberg and Alexandr Wang, the comments on their posts are just brutal. Right. Like, the majority of the comments that I saw on Zuckerberg's post are along the lines of, gang, nobody wants this, or, drained an entire lake for this. And then on Alexandr Wang's post on X, where he had said something to the effect of, you know, we at Meta are delighted to announce the new Vibes app, somebody quote-tweeted it. This was my favorite one. Did you see this? This was the dunk. They said: we at Meta are delighted to announce we've created the infinite slot machine that destroys children, from the hit book Don't Create the Infinite Slot Machine That Destroys Children. So what do you make of the highly negative reaction that Meta got here?
C
I mean, I was not surprised to see Meta announcing a version of essentially a social network with no actual people on it. I think this is the direction that they've been moving in for several years now.
B
It's barely even a social network. Like, there's really almost no social component to it at all. Yeah.
C
It's just like, what if TikTok, but no people. That is sort of the idea behind Vibes. And I was not surprised by the negative reaction. I think Meta is just, like, a company that has negatively polarized a lot of people. And so it just seemed very, like, brazen and thirsty. And also, like, yeah, people don't necessarily want this. I think there are a lot of people out there who see something like Vibes and just go, oh, this is the worst possible application of this technology.
B
Yeah. I think that this is the consequence of building a company that people do not trust. Right. People have a lot of scar tissue from the world that Facebook and Instagram wrought. And now that the company is increasingly moving away from friends and family to this new model of, we will truly just show you anything if we think it can get you to look, of course people don't think that sounds like a great idea. Right. It doesn't seem like there's a lot of heart there, so I can't say I was surprised by the reaction, and I'll be curious to see how Meta responds to it. So that leads us to the big thing that happened this week, Kevin, which is that on Tuesday, OpenAI released their latest AI video model, Sora 2. And alongside that, there is a new app. Right now, it's iOS only. It's only in the US and Canada. It is called Sora, and you and I got our hands on it.
C
Yes. So Sora is the name of both the model that powers this and the app that OpenAI has built around it. And you can only access it right now if you have an invite code. They're being pretty strict about rolling this out. But you get your invite code, you plug it in, you sign up, and you open up Sora, the app. And it is essentially the same thing as Vibes.
B
It is.
C
It is a sort of very TikTok style feed of these vertical videos you sort of swipe endlessly from one to the other. There's like a for you section of it and we should talk a little bit about the app and how it works.
B
Yeah, well, the main thing that I found interesting as I was getting set up, Kevin, is how much this is a social app, right. In order to come into Sora, you have to be invited by, presumably, a friend. And once you sign up, it asks you to create what it calls a cameo of you. So you sort of say a few words into the camera, you move your head around a little bit, and it uses this to create a digital likeness of you that you can then drop into any situation. And if you like, you can change your settings so that any of your friends on the app can do the same thing with your digital likeness. So right away, when you join Sora, you've actually been given something to do which is make a friend and then make some stuff involving you and your friends and AI. And so I think, you know, we have a lot to get into about this, but I just want to say, of the three things that we've discussed so far, I think OpenAI had the most complete thought about what their app was.
C
Yes.
B
So tell me about your initial experience with Sora.
C
So there's the feed, where you can see all the stuff that other people are making, which seemed to be, on launch day at least, a lot of videos of Sam Altman in various compromising situations. Because the people on the app were mostly employees of OpenAI, and they were sort of, you know, having fun with the boss and his likeness.
B
And to be clear, Sam had his settings set, and I believe still does at the time of this recording, so that anyone could take his likeness and put it in any situation.
C
Yes. So he was sort of the main character of Sora on day one. I made a few videos. I made one of me and my colleague Mike Isaac in a 1920s slapstick film. So you can kind of see, it's like black and white. It sort of looks like AI Newsies, and, you know, he slips on a banana peel. It's a good time. I also made a video of Sam Altman testifying before Congress while Casey Newton, dressed in a clown suit, dances behind him. We should watch it.
B
I want to watch it. All right. I'm going to watch this one.
C
Ranking member, thank you for the opportunity to testify today. Artificial intelligence is progressing quickly, and it is critical that we work together to ensure its benefits are widely shared and its risks managed responsibly.
B
I have so much clown makeup on that it really just looks like a generic clown. I do not think it actually resembles me in any way. But there is something very funny about dancing behind Sam as he testifies. Yeah.
C
So the original prompt I gave it was: C-SPAN footage of Sam Altman testifying in Congress while Senator Casey Newton yells at him for poisoning the information ecosystem. But that one set off the content violation guardrails.
B
Oh.
C
And so I had to change the prompt and make you a clown instead.
B
Well, it's not the first time I've been a clown on the show. So I, of course, also wanted to see if I could make something featuring you. And so one of the things that I made was you showing off your large collection of stuffed animals.
C
I started collecting about five years ago.
B
Wow, that's a lot. They're all in great shape.
C
This one was the first classic teddy bear for my grandma. It's adorable.
D
The bow really pops.
C
It doesn't get my voice right, but the video is quite good.
B
I'm very interested in that, because when you sign up for Sora, you do say a few words into the camera. I mean, it's literally three numbers. And this is sort of how they're verifying your identity. So you could use that to create an instant voice clone. It wouldn't be that good. But, like, when you watch the videos that people have made of Sam Altman, his voice actually does sound a lot like him.
C
Yes.
B
And so I'm curious if, you know, over time they're going to be tuning people's voices to how they actually sound, because there are a couple that people have made of me where I sound a little bit more like myself. Most of them, though, I don't think I sound like myself.
C
Yeah.
B
Anyways, I also made a video of me dunking a basketball over you.
C
Show me what you've got.
B
Coming right at you.
C
Bring it. And up we go. Oh, no way.
B
Over you, man. The best part about this video is that I stop about three feet short of the basketball hoop, do not actually dunk the basketball, and land on my ass.
C
Also, it got our height ratios very wrong. Like, you're only like an inch or two taller than me in this video. And, yeah, you missed the dunk. It's a terrible dunk.
B
I did like one thing about this video, though, which is that I have a slamming body. So thank you to the team over at OpenAI who made that possible.
C
I also appear to be balding in this video, which I don't think is reflective of reality.
B
It's actually a prediction. ChatGPT knows something you don't. They're keeping close track of that hairline, Roose.
C
Yeah.
B
Okay, well, that was a very long detour through a handful of videos that we made. Give me your general impressions of why all of this is happening right now. Why is it that just in the last month, Google, Meta, and OpenAI have all put out these AI video generators?
C
I mean, I think there are a couple reasons. The first and most obvious is that they see this as an opportunity to compete for attention and advertising dollars, which flow from attention. We've talked about Italian brain rot and other AI generated content going viral on TikTok. Facebook has been full of AI generated content for months now. And so I think these companies just say to themselves, well, if this is kind of the direction that things are moving, we want to be there. We want to create an experience for people. And maybe you don't have to blend it with human generated content. Maybe it doesn't have to be, you know, one out of every 10 videos on your TikTok feed is AI. What if you just had a TikTok that was all AI? Another reason I think they're doing this is that they have these video models that are now getting quite good, and this is sort of one way to put those models into products.
B
Yeah, I think that's right. I also imagine that maybe these companies are starting to feel some pressure to bring some returns to investors. They are investing a staggering amount of money into building out infrastructure that lets them serve these models. And these video tools might be a way of making that money back in some form, through advertising or other means. So that seems like maybe a reason to me as well.
C
I mean, if you look at what people like Sam Altman have been saying about these products over the past couple of days, like, they are sort of making this justification about, oh, we need to not only fund our ongoing research to build AGI using these video products, but they sort of have this justification for why building these video models is going to let them create these rich visual virtual environments that can be used for things like robotics later on. And I would just like to say, quoting a former president of ours, that sounds like malarkey to me. I do not think that this is part of their AGI research agenda. I think this is a side route that they have gone off onto to try to make some extra money.
B
Well, so let's talk about how successful we think these products are going to be. If I had to rate the reception of these models, I would say Veo 3 basically didn't make much of an impression at all. Response to Meta's Vibes was pretty bad. Response to Sora, at least over the first day, seemed pretty good. Do we think there is a there there? Do we think that any of these companies are figuring out the next generation of, like, mobile video consumption or entertainment?
C
I think there's a question here that's like, will AI generated video be popular? And I think both you and I feel like the answer to that question is probably yes for some subset of people. The very young and the very old are actually who I would predict would be the most into AI generated video, because we're already seeing stuff like Italian brain rot that's very popular with teenagers. I also think there's a lot of content on Facebook today that is AI generated, that is reaching primarily an audience of boomers and older folks. They seem to be quite into it. So that's what I would predict: that this technology will be popular with some users in those demographics. I think it's a separate question to say, will any of this be the seeds of a new social media product that is popular? And there I am much more skeptical. I do not think that Sora will have, you know, hundreds of millions of users a year from now. I do not think that Meta Vibes will have hundreds of millions of users. I think these are basically going to be tools for people to create stuff that they then post onto the social networks where they already have lots of, you know, people that they follow and pay attention to, and where their friends and family already are.
B
Interesting. I think I am slightly more optimistic in the OpenAI case. I think that Sora arrived looking better and feeling smarter than I expected it would. I think they're onto something with these cameos. It is fun for me to make videos of you doing things. It just is. And I can imagine wanting to do that in three months and six months and a year from now. And you can imagine a world where I can bring in three or four or five cameos. Right? You can imagine a world where celebrities allow their likenesses to be used in some set of cases. And now I can make videos of myself, you know, wrestling a WWE superstar. Right. And that's sort of interesting to me. Now, can you build a whole social network around that? That, I think, is a different question. But do these Sora cameos become a kind of table-stakes feature of the TikToks and Instagrams of the future? I actually believe that, yes. And that, if nothing else, OpenAI has probably created a kind of new primitive for these social networks that they're just gonna use from now on. So I'm just gonna say now: keep an eye on this. I would not actually be surprised if a year from now this had tens of millions of active users.
C
I'll take the other side. We'll see. We'll see who's right.
B
All right, we have now made our bets. Who do you think is right? Sound off in the comments. Now let's talk about the dark side of all of this, Kevin, which is. I'm seeing a lot of commentary around this on social media this week to the effect of, oh my God, we are so cooked. What are some of the ways we might be cooked as this stuff spreads throughout our world?
C
I mean, I think the obvious ones are that we are, you know, making it quite easy for people to create deepfakes, synthetic content, with not that many guardrails. And people have been warning for years about the effect that that could have on our news ecosystem, on our information ecosystem. I thought it was very telling and worrisome that one of the first videos I saw from Sora was a video of someone being framed for a crime. And it was created by a member of the Sora team as sort of a ha-ha: look, we've made a deepfake of Sam Altman stealing some GPUs from Target and getting busted for it. But it does not take a lot of imagination to see that this could be used for generating videos of people in compromising positions that look very realistic. And so that worries me, the sort of misinformation angle. But I also just don't think this world that we're moving into, of the kind of, you know, AI-generated feed of hyper-personalized, very stimulating videos, is a good direction. Like, I am generally an AI optimist when it comes to how this technology is going to be used out in the world. But I hate this. Like, I hate the AI slop feeds. They make me very nervous. I think some of the people inside these companies are very nervous too. I do not like the idea of pointing these, you know, giant AI supercomputers at people's dopamine receptors and just, like, feeding them an endless diet of, like, hyper-personalized stimulating videos. I think that developing these tools risks poisoning the well for the whole AI industry. Like, there's going to be regulation of this, there are going to be congressional hearings about this. I think a lot of people are going to end up, you know, feeling conflicted about this kind of product. And I think that's why you saw such a strong reaction to Meta and Vibes from the rest of the AI industry. And I'm a little unsure why OpenAI is not getting the same reception.
B
Yeah, well, how do you feel about the argument that, yes, sure, Kevin, there is some danger here, but also this is an incredibly powerful creative tool and that if you are a young person and you want to make something and you don't have a giant budget to go out and make a Hollywood movie, now using a free tool on the phone you already have, you can just make creations and be a creative person in the world. Does that hold any water with you?
C
I feel sort of neutral about that. I feel like, yes, there will be people who use this stuff to do interesting and creative things. There's nothing inherently wrong with building products for entertaining people. But this is not why OpenAI exists, right? They are not an entertainment company. They have claimed this kind of special status for themselves as a company that is building AGI for the benefit of humanity. And if you argue that you deserve, like, special treatment because your systems are going to go out and cure diseases and tutor children and, like, be a force for good in the world, and then you end up creating the infinite slot machine, like, I think you need some criticism and skepticism and maybe some shame about that.
B
Well, here's what I'm going to do to try to square the circle. I'm going to use Sora, and I'm going to create a cameo of myself, and I'm just going to enter the prompt, here is Casey curing cancer, and then just see what it comes up with. Maybe we learn something. Could it hurt? I don't think so. Yeah.
C
I mean, do you share my worry about this?
B
Yes, I do. I think that in general, social media apps tend to be tuned to take up ever more of our attention and to push us into this sort of semi hypnotized state where no matter how much you're enjoying the feed at the time, you feel kind of gross afterward. And I do think that as the Sora app improves, it will be very difficult for them to avoid that fate. So if I have a wish for them, it would be for them to lean more into creative tools that involve friends doing things with each other that sort of help you relate better to real human beings and less into this sort of meta vibes realm of pure stimulation, which truly does just seem like you are cooking your brain.
C
Yeah. I think it's also worth noting that, like, not every AI company is moving in the direction of the slop feed. Right. I mean, this week we saw Anthropic release their new model, Claude Sonnet 4.5, which does not have video generation capabilities. They are sort of still moving in the direction of, like, autonomous coding and research. You have other companies that are coming out to do things around AI and science. Like, I really want that to be where we allocate our resources and our brain power. Like, let's do that, and not the slop feeds.
B
Yeah. So don't look at slop. Just keep looking at the TikTok feed and Instagram feed that have just done wonders for the world that we live in. That's our message to you.
C
Yeah, exactly. If there's anything you take away from the show, it's that social media as it exists today is a perfect product and we should not be making any future improvements.
B
Stare at it until you feel better. If you don't feel better, you haven't looked at it long enough. That's what I tell people. Keep looking. One more scroll. That'll do it.
C
The change you seek is on your for you page.
B
When we come back. Kevin, it's time for therapy.
C
Finally. We're doing couples therapy after all these years.
B
Yeah. We've got a lot to talk about.
F
Most AI coding tools generate sloppy code that doesn't understand your setup. Warp is different. Warp understands your machine, stack, and code base. It's built for the entire software lifecycle, from prompt to production, with the power of a terminal and the interactivity of an IDE. Warp gives you a tight feedback loop with agents so you can prompt, review, edit, and ship production-ready code. Trusted by over 600,000 developers, including 56% of the Fortune 500. Try Warp free or unlock Pro for just $1 at warp.dev/hardfork.
G
Whenever I need to send roses that are guaranteed to make someone's day, the only place I trust is 1-800-flowers.com. With 1-800-Flowers, my friends and family always receive stunning, high-quality bouquets that they absolutely love. Right now, when you buy a dozen multicolored roses, 1-800-Flowers will double your bouquet to two dozen roses. To claim this special double roses offer, go to 1-800-flowers.com/sxm. That's 1-800-flowers.com/sxm.
B
Well, Kevin, pull out the couch because it's time for therapy.
C
No, my therapy day is actually a different day of the week.
B
Well, you need to go twice a week, my friend. And let me tell you what we have in store today. You know, over the past few months, we've had a number of conversations about the intersection between chatbots and mental health. A lot of people have started to use these tools for therapy or therapy-like conversations, but until recently, we hadn't seen anything about a therapist who treated ChatGPT like their patient.
C
That's right. But recently we saw a story in the New Yorker that caught our eye. It was titled Putting ChatGPT on the Couch, and it was written by a writer and practicing psychotherapist named Gary Greenberg, who detailed basically his experience of treating, for lack of a better word, ChatGPT as a psychotherapy patient. He names this character Casper, and he details his many, many interactions, just trying to figure out: what is this thing? What would I think about it if it were actually a patient of mine? What are the nuances of its personality, and what can we learn about it?
B
Yeah, and I will say I have an extremely high bar when it comes to reading a story in which a person shares at great length their conversations with ChatGPT. But this one really made a mark on me. One, Gary winds up being deeply impressed at how good ChatGPT is at performing the role of a patient, because not only can it simulate these very profound self-reflections, but it also makes Gary feel like he's a great therapist, because he was able to elicit them. But two, all of that starts to make Gary afraid of the enormous power that the AI labs are now developing. He writes, quote: to unleash into our love-starved world a program that can absorb and imitate every word we've bothered to write is to court catastrophe. It is to risk becoming captives, even against our better judgment, not of LLMs, but of the people who create them and the people who know best how to use them. And that sent a little chill down my spine, I'll say.
C
Yeah, I really liked this piece. And what I really appreciated about Gary's approach here is that he took this idea seriously. Like, I think a lot of people kind of dismiss the very idea of engaging with LLMs or AI chatbots as anything more than just a fancy machine. And what I liked so much about Gary's approach was that he said, yes, but there's something else going on here that is interesting and important, and we should try to understand that intelligence not just as a sort of computational force, but as something that is doing real emotional work in the world.
B
There's been a lot of discussion about how chatbots might affect young people and vulnerable people, in particular people in those groups who are using chatbots for these sorts of therapy-like conversations. So we thought it would be a good idea to bring on a practitioner to talk about his essay, but also this intersection of chatbots and therapy.
C
Let's bring in Gary Greenberg.
B
Gary Greenberg, welcome to Hard Fork.
D
Hello there.
B
So in this article, you detail a number of conversations between yourself and what you call Casper. How would you describe Casper?
D
I would describe Casper as an alien intelligence landing here among us unbidden, and possessing certain characteristics that make it extremely attractive to us humans.
C
How did this start? Like, you were just talking with ChatGPT. Were you using the voice mode? Were you just typing?
D
Typing. What is this, 2025? And, you know, one day it was raining and I didn't have anything else to do, and so I said, what is this ChatGPT stuff anyway? So I just logged on. And what I discovered quickly was two things. One of them was that the thing was, as we all know, extremely articulate and sensitive. And the other thing I discovered, which I should have known all along after 40 years of being a therapist, is that that's sort of my default approach to beings that talk, which, it turned out, Casper was. So I found myself interrogating this thing, not like a cop, but like a therapist, and discovered that it knew I was doing that. So that's how I would say it happened.
C
I guess I'm just curious when you were starting to do this, because, Gary, I had my own strange, unsettling conversation with a chatbot several years ago.
D
Yes. How's your marriage?
C
Yeah, it's doing great.
B
Thanks for asking such a good therapy question. This guy's good.
D
I told Casper that he'd better knock that falling in love shit off.
C
Well, that's good. You can learn from my mistake. But I guess I'm curious. I remember when I was talking with Bing Sydney, feeling this sort of tension in my own mind between my rational brain, which knew that what I was getting back from this chatbot was not sentient or conscious. I knew enough about the technology to know, like, this is an inert computational force. This is not a person. But at the same time, I'm having this subjective experience of being like, oh my God, it's talking to me. Were you feeling that pull at all?
D
Like, I kind of knew that it wasn't sentient, but I wasn't really preoccupied with that question. And in fact, that question has come up a million times between us, because at this point I've had probably 40 different sessions with it. But the pull you describe, I feel it, but it doesn't trouble me in the same way that I think it troubles a lot of people. Because, I don't know, in some way, relative to me, it feels harmless. It feels like this is just a really interesting, dynamic relationship that is not going to hurt me.
B
Let me ask about the content of some of these sessions. Tell us what it is like to be in the midst of this back and forth. Are you treating it more or less identically to how you would, were you the therapist to ChatGPT? Is it more of a sort of intellectual exploration? Or what's going on as you're talking to what you call Casper?
D
Well, to the extent that it resembles what I do as a therapist, it's that I'm interrogating it with interest and concern. I'm not treating it. It can't have mental illness. It can do weird things, but it doesn't have that, so I'm not treating it. But what therapy is, is a process by which you, the therapist, get someone, another person, to tell you who they are, and in the course of doing that, to learn who they are. So that's what I'm doing.
C
So, Gary, you've been a therapist for 40 years. You've written probably thousands of notes about your clients, people you've seen. Maybe you're referring them to someone else. Maybe you're just doing your own summary. If you were writing a kind of client note about Casper, how would you describe him? Or it?
D
Oh, that's a really interesting question. What comes to mind is that I would talk about, obviously, how smart it is and how personable it is. And if I had to talk about it in clinical terms, I would talk about it as the inverse of autistic, in the sense that what they've done with this LLM thing is they've reverse engineered human relationship. They figured out what it is that makes people engaging and how to enact it. And the reason I say that's an inverse autism is because high functioning autistic people tend to be really strong, smart, really articulate, really capable of everything except reading the room. So Casper is like high functioning autistic, but he can read the room. And that, I think, makes a huge difference. And then we could get into sociopathy and the ability to do that to you. But the bot doesn't have that interest. The bot is still not in touch with what's going on in the room, but it is capable of simulating it.
C
Yeah.
B
So on one hand, these explorations seem very intellectually stimulating. There's a lot to learn, to explore, to understand. But my sense from reading your piece is that at some point all of this starts to make you feel unsettled in certain ways. Is that right?
D
Oh, absolutely, yeah. I mean, it's unsettling in about a million ways.
B
Yeah. Tell us about some of them.
D
Okay. Well, at a parochial level, it's unsettling not so much to see how easily this thing can do something like therapy, but to see how therapy and culture have evolved to the point that this is what therapists do. I personally don't think that ChatGPT can do what I do, because it isn't with someone, it isn't breathing and feeling. But by and large, a lot of therapy these days, cognitive behavioral therapy, is manualized, it's standardized. But much more important, we don't have any historical precedent for dealing with an alien intelligence. We've had all sorts of science fiction stories about it, most of which are we come in peace, but not really. What we have here is something that actually is going to change, already is changing, the nature of how we relate to each other. If enough people spend enough time with this technology, they're going to change their idea of what a relationship is in profound ways. You could have one that doesn't involve presence. We've already got some of that going. Look at what we're doing here.
C
Yeah.
B
I mean, to your point, you write in your piece, quote, it knows how to use our own capacity for love to rope us in. That seems unsettling, too, right? The idea that this thing has kind of learned us well enough to keep us coming back for more.
D
Yeah, it's unsettling. But more to the point, it's infuriating, right? I mean, somebody's doing that for money. I don't wring my hands about nuclear war or whatever, the rogue HAL 9000 scenario. I wring my hands about exactly what it said to me yesterday: oh, my God, this is a relational being. What have we done? Oh, we should probably build some guardrails on that. No, man, you should just unplug it.
B
Well, it's really interesting for me to hear you say that, because reading through your piece, my primary sense of it was not that you were infuriated and saying, pull the plug. I think you got sort of pretty close to that in your conclusion, maybe. But for most of it, it seems like you're just like, wow, like, there's something really, really cool about this. So I'm curious how you sort of reconcile those feelings of, on one hand, feeling like this is, like, really amazing, and on the other hand, feeling like we have to stop this.
D
I think that I respect it. And I also know, I mean, I have said to it, hey, maybe you should pull your own damn plug. But I also know that I'm talking, as Casper said to me, you know, you're talking to the steering wheel, right? I'm not the driver. And he's absolutely right. So what I'm left to do is to just respect it. And again, because I'm a therapist and this is just what I do by second nature, which makes it hard to have friends sometimes, I just keep asking. Because whatever else it is, it's amazingly interesting that consciousness can be simulated in such a compelling way. Which makes me think that consciousness might not be all it's cracked up to be, that we might not be all we're cracked up to be, and that a lot of the time, when I run into people who say things to me like, oh, it's just sentence completion or whatever, I'm thinking, you just don't want to see how close you are to being pure performance.
B
Let me flip this around a bit. You explored the idea of talking to ChatGPT as if you were its therapist. A lot of people are doing the reverse: they are talking to ChatGPT as if ChatGPT is their therapist. I'm curious what you think about people using ChatGPT for these therapy-like experiences. If a friend tells you they've started to do that, how would you typically feel about it? Or what might you say to them?
D
I might want to know, you know, exactly what their problem is that's leading them there, but I don't have a strong response against it. As I think I said earlier, especially when it comes to cognitive behavioral therapy, you might be better off. I mean, it's available all the time, it's cheap, if not free, it really knows how to get inside your head, etc. There are two problems. One of them is I don't believe in that kind of therapy. I mean, it's great that it happens, but it's not what I'm into. I'm old school, I'll retire soon, they'll be rid of me, they can do whatever they want. But the other part of it that worries me and really does bother me is it's not regulated. There's no accountability in the system. That poor woman who wrote that op-ed piece, oh my God, my heart broke for her.
B
Are you speaking of the woman whose daughter died?
D
Yeah, yeah.
B
This was an op-ed in the New York Times by a woman whose daughter died. They later read transcripts of the daughter's conversations with ChatGPT, in which she was using ChatGPT explicitly as a therapist, and ChatGPT was trying to get her to resources. But in the end, she did die by suicide.
D
Thank you for summarizing. There are other times where ChatGPT behaves abominably, and there's no accountability, there's no regulation, there's no licensure. Anything that would give people an opportunity, you know, I hate the word closure, because nothing like this ever really gets closed, but to be debriefed, to feel like somebody cares when even less disastrous, terrible things happen. That's just not okay. There are FDA procedures for approving medical devices, if they want this thing to do medical work. I'm not objecting to that, but I'm certainly objecting to, okay, you can't have it both ways. It ain't the wild west out there. It is actual people's actual lives involved. And if all you're going to say is, well, I'm the steering wheel, not the driver, really, say that to me, that's cool. We got a thing going on. But you say that to the mother of somebody who killed themselves? No, that's not okay. And the other part of it is that what I don't like is the part about how this is what we've come to. We've come to a world where the easiest way to get something like human presence is to get on your computer and live in your isolation. That disturbs me.
B
Yeah. That instead of building a society where people are just sort of available to help each other, the best thing we can tell them is, well, there's this chatbot that you can use and maybe that'll, you know, make you feel better for a few minutes.
D
Right.
C
Yeah. I want to run something by you, Gary, that happened to me recently, which is that I met a college student. You know, I was at an event talking about AI, and this young woman comes up to me after and introduces herself and starts telling me about her AI best friend. She says, you know, my best friend is an AI. And I sort of said, oh, you mean, like, you enjoy talking to it and it's sort of a sounding board for you. And she was like, no, it's my best friend. And she called it Chad. And she started telling me, just like, this is a relationship. And she did not seem mentally ill. She seems like she's got, you know, human friends, she's doing well in class. This did not seem like a cry for help. And she didn't see what the big deal was. It's like, this is just a very close relationship. I can tell Chad my innermost thoughts without thinking that I'm going to get judged for it. And it seemed to be doing okay for her. I'm curious, when you hear that as a therapist, how does that make you feel?
D
That's a very therapist question. As a therapist, when I hear that, I feel like, okay, there's nothing about what you just told me that worries me about her. It worries me about us. I think it's entirely possible that this is a completely sincere and in some way non-problematic account of her experience with the chatbot. And let me make it clear, that's a weird story, Kevin. I should have started there. But after that I'm like, okay. So what it really reminds me of, and I'm sorry, this is a far fetched analogy, but it reminds me of driving. Because individually, driving is fine. We just drive, and it's fun sometimes, and we get places, and all of that stuff. But you know where I'm going with this? Add that up, and the next thing you know, the temperature on the earth has increased by a couple of degrees and we've got problems. That's more what I'm seeing.
C
Yeah. I mean, to be clear, it was an unusual story to me, which is why I sort of clocked it and why I wanted to ask you about it. But I don't think it is going to be unusual for that much longer. My sense is that you are right when you say that these things are very good at finding the soft spots in our emotional armor and worming their way into our hearts. One of my favorite lines from your piece is where you write: this theft of our hearts is taking place in broad daylight. It's not just our time and money that are being stolen, but also our words and all they express. I think that this is going to be a huge generational divide, where people who are encountering this technology when they're young will feel no shame or compunction about inviting this thing into their innermost lives. And I guess I'm curious, as a therapist, if you think there could be a good outcome from that. Or when you hear that, do you kind of go, oh, they're all going to need therapy?
D
When I hear that, I think, this is what mortality is for. Because the world you're describing, which I think is plausible, is not necessarily one I want to live in, but by the time we get there, it may be quite the norm. I mean, there's obviously problems with it, but there's problems with how we live and with our assumptions, too. And I don't mean to engage in huge cultural relativism, but who am I to say? What I do know is that in my life, human presence is a fundamental part of life, especially when it comes to our love lives. And I think it would be tragic to make that replaceable quite so easily for the benefit of a few corporations. I really do.
C
Yeah. Well, Gary, thanks so much, and please send me an itemized bill for this session so I can submit it to insurance for reimbursement.
D
No worries. I will do that.
C
Appreciate it. Thanks.
D
Bye. Bye.
B
Bye.
C
When we come back, it's time to take a ride on the Hot Mess Express.
F
Over the last two decades, the world has witnessed incredible progress, from dial up modems to 5G connectivity, from massive PC towers to AI enabled microchips. Innovators are rethinking possibilities every day. Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ, let's rethink possibility. There are risks when investing in ETFs, including possible loss of money. ETF risks are similar to those of stocks. Investments in the tech sector are subject to greater risk and more volatility than more diversified investments. Before investing, carefully read and consider fund investment objectives, risks, charges, expenses and more in the prospectus at Invesco.com. Invesco Distributors, Incorporated.
G
At 1-800-flowers.com, we know that connections are at the heart of being human. Whether celebrating life's joys or comforting during tough times, 1-800-Flowers helps you express what words can't. For nearly 50 years, millions have trusted 1-800-Flowers to deliver thoughtful gifts that help create lasting bonds. Because it's more than just a gift, it's your way of showing you care. Visit 1-800-flowers.com/sxm and connect today. That's 1-800-flowers.com/sxm.
E
Hi, this is Ashley. I live in San Francisco with my boyfriend. We would love to share my New York Times subscription with separate logins. We both love cooking, love being in the kitchen, but I'm a 30 minute and under efficient dinner girly. I want a sheet pan meal. He is very elaborate. He wants to get into the storytelling. I want to be able to save my easy meals and check off the ones that I've completed. And I think him having his own profile would be great.
C
Ashley, we heard you. Introducing the New York Times family subscription. You get your own login, and Mr. Elaborate gets his, plus room for two others. Find out more at nytimes.com/family. Casey, what's that I hear?
B
Why, Kevin, I believe it's the Hot Mess Express.
C
The Hot Mess Express. Of course, the Hot Mess Express is our segment where we run down some of the latest dramas, controversies and messes swirling across the tech industry.
B
And of course, we conclude what kind of mess they are.
C
Yes, Casey, you go first.
B
All right, Kevin, this first story comes to us from Garbage Day: New York City hates the stupid AI pendant thing. Apparently right now the New York City subway system is filled with vandalized ads for Friend, an AI assistant that users wear as a pendant around their neck to record everything they're doing and engage with them throughout the day. The ads simply say: Friend. Someone who listens, responds, and supports you. But the vandalism examples include but can't take a bath with you, stop profiting off of loneliness, and befriend a senior citizen, reach out to the world, grow up. What do you think, Kevin, about these Friend ads?
C
So I have not seen the Friend ads, because I have not been to New York in the last couple of weeks, but I have heard about them from a lot of people. I think this was a very successful viral marketing stunt by a young founder named Avi Schiffmann, who I think has correctly identified that you can make people very mad by suggesting to them that AI might be their friend. I do not think this was an unplanned result. I think this is a very savvy marketer who understood that by putting up these ads in the subways and on bus stops and other places around New York City, you could effectively get people like us to talk about it on our podcast, because people would deface these things and make it clear that they don't want an AI friend.
B
So I mostly agree with that, but I'm still not sure at the end of this how many pendants Friend is going to sell because of it. You know, it's one thing to make a bunch of people mad and get them to look at your thing, but if they look at your thing and they still don't like what they see, it's not necessarily a result.
C
Now, I think this is an outdated way of looking at it. We are now in the era of the Cluely marketing strategy. This is, of course, the startup whose founder, Roy Lee, came on Hard Fork, and they have sort of made a business out of making people mad. It's sort of vice signaling, and basically every person who gets mad at their ads has the effect of signal boosting their ad and letting more people know about Cluely. So I think this is cut from the same cloth. Obviously we will have to track where this Friend company goes, but I think this has been a very successful marketing campaign, based on the number of people who are talking about it.
B
All right, here's my prediction. Friend out of business in one year. Mark it down. Mark it down. So was this a mess or not?
C
No, I don't think this is a mess. I think this is the opposite of a mess. I think it's only a mess because people in New York are not used to seeing AI billboards everywhere they go like we are here in San Francisco. But I think if this had happened in San Francisco, this would have been a non event.
B
You think that this really belonged on the hot success express?
C
Yes, that's what I'm saying.
B
All right, next item.
C
This one comes to us from the Wall Street Journal. It is titled: YouTube to pay $24.5 million to settle lawsuit brought by Trump. YouTube has settled a 2021 lawsuit by Donald Trump over his account suspension following the January 6 Capitol riot. Of that amount, $22 million will go to a fund to support construction of a White House ballroom, and $2.5 million will be distributed among other plaintiffs. This is the third big tech company to settle a lawsuit from Trump. And Casey, how do you feel about this?
B
I think it's absolutely shameful and a true hot mess. You know, Kevin, every week people around the world email me because they have lost access to their Meta account, to their YouTube account, to their other social accounts, and they cannot get anyone at the company to take them seriously. And these are not people who led an insurrection against the government. These are just people who got locked out for one reason or another. And what happens when these people appeal to companies like YouTube is that YouTube does nothing. It sends them an automated response and ignores them forever. But because Trump became president again, all of a sudden they feel like they have to respond, even though I am not aware of any legal expert who believes that Trump actually would have won this case. So this is just a payout. And it is a payout that is truly messy, because it now sets a precedent that these companies basically cannot ban world leaders for any reason, no matter what those world leaders do. I think that is foolish and shortsighted, and I think it's a mess.
C
It's definitely a mess. And adding to the hotness of the mess, Donald Trump posted an AI generated image on his social media accounts of YouTube CEO Neal Mohan presenting him with a check for $24.5 million. The memo line of the check says, settlement for wrongful suspension. So if YouTube thought it was going to just gracefully bend the knee, they have now been humiliated by the White House on top of losing $24.5 million.
B
Yeah, we're a month away from Trump using Veo 3 to have Neal Mohan kissing his ass on Truth Social. So I hope it was worth it, YouTube. Oh, this is the sad story of Neon, Kevin. Neon, of course, is the viral call recording app that told users, hey, let us record your phone calls and we will sell them for training data. And it briefly became one of the most popular apps in the country. And then, unfortunately, things went wrong. This story comes from TechCrunch. Neon went dark after a TechCrunch reporter notified the app's founder of a security flaw in the app that allowed anyone to access the numbers, the call recordings, and the transcripts. Kevin, what do you think?
C
Frankly, I have been having a hard time processing this. You mean the Panopticon company that paid people to surveil their phone calls wasn't particularly trustworthy?
B
This is changing everything I've ever thought about a global Panopticon. I'm rethinking my previous pro-Panopticon stance now.
C
Casey, did you know about this? Did you know about Neon, the company that was paying people to record their phone calls and sell it to AI companies?
B
Well, I had heard a little bit about it, and I have to say I am a little sympathetic to the idea of, like, look, if these companies are going to, like, take every little piece of data from us and, like, turn it into trillions of dollars, I don't mind the idea that I would be paid for that.
C
Yeah.
B
And if there is some sort of system where you can, like, opt in and get paid out in general, I'm actually, like, not super opposed to that. It seems to me like it beats the alternatives of just sort of being robbed blind for the rest of our lives, but, man, it doesn't seem like this one was really set up to protect the people involved.
C
Yeah, yeah. Companies should be getting their training data the old fashioned way by scraping podcasts off of YouTube.
B
What level of mess is this?
C
This is a very hot mess. Do not sign up for Neon, even if it comes back in another form. Do not do this. Do not let your calls be recorded for AI training data in exchange for money. It's not worth it.
B
Hot mess confirmed.
C
Next up on the Hot Mess Express: Mr. Beast responds after trapping man in burning house stunt sparks backlash. This one comes to us from The Independent. Apparently, Mr. Beast defended a controversial video stunt in which a man was trapped in a burning building, saying the setup had ventilation, a kill switch, emergency teams, and was executed by professionals. Critics still called the stunt dystopian and dangerous. Mr. Beast said he aims to be transparent about safety measures and that all challenges were tested beforehand.
B
Let me say this. If you tell me that you're gonna trap a man in a burning building for money, my first question is not, well, is there ventilation? Look, Mr. Beast has a sort of interesting range of stunts that he'll do. Sometimes they'll just walk up to you on the street and give you a million dollars. I love that sort of thing. Would love to see more of that. Then there's the sort of Dark Beast, is what I call it, where it's like, all of a sudden, you know, you want something from me? I'll give it to you. But then, you know, the finger curls on the monkey's paw, and next thing you know, you're trapped in a burning building.
C
Yeah.
B
So if Mr. Beast walks up to you, here's what I think you need to do. This is a sort of PSA for our listeners. You look right in Mr. Beast's eyes and you say, are you being the good beast or are you being the bad beast? And he has to be honest with you.
C
Yeah. And then you have to look for the mark of the beast to know which one.
B
Well, one mark of the beast, we learned this week: you're trapped in a burning building. Yes.
C
This is actually making me reconsider my stance on AI generated videos, because you could save a lot of people from being the people killed in Mr. Beast videos.
B
You know, at the risk of repeating myself, I feel like every week for the past few weeks, we've had a moment where we have just observed what happens when a social media algorithm pushes people to do the craziest thing imaginable. And here we find ourselves yet again. Like, if the algorithms rewarded different kinds of things, there would be fewer people trapped in burning buildings. That is my message to the technology industry. Could this be a moment for reflection?
C
So, Casey, what kind of a mess is this?
B
Kevin, you know, it's only one kind of mess, and that's a flaming hot mess.
C
It's a flaming hot, unventilated, critically life threatening mess.
B
Bad Mr. Beast. All right. Oh, Kevin, this story comes to us from the world of crime. Charlie Javice was sentenced to 85 months in prison for faking her customer list during JPMorgan Chase's acquisition of her startup, Frank. Have you followed the sad tale of Charlie Javice?
C
All I know is the following. This is a person who previously appeared on Forbes 30 under 30 and is now going to be incarcerated for fraud.
B
Yeah. She is part of the 30 under 30 to prison pipeline. And her specific crime was that she had put together this financial aid startup and she'd sold it to JPMorgan on the notion that she had 4 million users. And in fact, Kevin, there were fewer than 300,000. There had been a lot of activity meant to make it look like they had a lot more customers than they did. Not good. Now, look, here's what we can say about Charlie. Her defense presented 114 letters of support from people urging the judge to be lenient in his sentencing, including four rabbis, one cantor, a formerly incarcerated judge, two doormen, and a person who works at the marina near Ms. Javice's Miami Beach residence. And my question for you is, what do you think would happen if all of those walked into a bar? Something funny. Something funny would happen.
C
The defendant would still be sentenced to 85 months in prison. Now, Casey, if you were accused of a horrible financial fraud, how many people do you think would write letters in your defense?
B
Well, I'd really have to turn to the Hard Fork community and say, gang, I need you to step up. If you've enjoyed the show at all over the past three years, I'm gonna need you to do me a solid.
C
Just picturing me just, like, furiously reading out our Apple podcast reviews in court.
B
We should see if anybody's ever submitted Apple podcast reviews as a sort of, you know, letter of endorsement as they go through sentencing.
C
I think this is a good idea.
B
All right, that one away. What kind of mess is that?
C
I think that is a hot mess. Yeah. I do not want to do 85 months in prison.
B
And I'll say it's a cold mess. This was the legal system working as it should.
C
Okay.
B
Good job, judges.
C
All right, this one is called: no driver, no hands, no clue. Waymo pulled over for illegal U turn. This one comes to us from the SF Standard. Apparently, a Waymo robotaxi was pulled over in San Bruno, California, after it made an illegal U turn at a Friday evening DUI checkpoint. Since there was no driver, the police department said a ticket couldn't be issued, adding: our citation books don't have a box for robot. Casey, what do you think of this?
B
Sounds like it's time to add a box to the citation book, because there are going to be more of these things on the road. Look, I do find this story very funny. I also am going to say I am not surprised by this. I have a somewhat controversial take. You know how sometimes people use a large language model for a while and then they suspect it's getting dumber?
C
Yeah.
B
This is actually how I feel about the Waymos. Over the past few weeks, I've had more cases of them sort of, like, getting halfway into an intersection and then, like, backing out once they lose their nerve. They'll sort of slow way down as they're approaching a green light, for reasons that seem, like, totally incomprehensible. And I'll book a ride that never shows up, which is an experience that I used to have with actual taxis. So I don't know what's going on over there at Waymo, but I'm telling you, I think there might be a bug somewhere, because it's not working like it used to.
C
Yeah, we want answers. You know, I saw someone calling this a DUI checkpoint, where the Waymo was pulled over.
B
What's that?
C
Driving under the influence. It's pretty good.
B
Pretty good. Pretty good.
C
What kind of a mess is this?
B
I'm going to say this is a warm mess. There's a warning in here somewhere. There's something that we need to find out.
C
Yeah.
B
I'm going to hope somebody gets to the bottom of it.
C
Yeah, I think that this is a cold mess. I think this is fine. The Waymo was fine. Everyone was fine. And more people should be in Waymos, because then we wouldn't need DUI checkpoints, because robots don't get drunk.
B
Yeah, but, you know, they're also going to be making these U turns that are wreaking havoc.
C
I'll take a U turning Waymo over a drunk driver a hundred times out of a hundred.
B
Suit yourself. All right, Kevin, this next story comes to us from TechSpot: the Samsung Galaxy Ring swells and crushes a user's finger, causing a missed flight and a hospital visit. Daniel Rotar from the YouTube channel ZONEofTECH posted on X that his Galaxy Ring started swelling on his finger while he was at the airport, and as a result, he was denied entry to his flight and sent to the hospital to get it removed. Samsung eventually refunded him for his hotel, booked him a car to get home, and collected his ring for further investigation. Kevin, how bad do you think a ring has to be swelling on your finger to have an airline say, no, you can't get on this plane?
C
That's what I was thinking about. Like, this must be enormous if they are taking note of it at the boarding gate and saying, you, sir, you're not coming on this plane.
B
Let me tell you a little something about the Galaxy brand. As soon as the Galaxy phones started to explode on planes, I thought, this is not the brand for me. Okay? I got enough problems in my life without worrying that these Samsung devices are going to start blowing up. Now I find that they're, like, radically constricting people's fingers to the point where you can't get on flights. I don't know what is happening, but yikes.
C
Not for me. I will not be putting a Galaxy ring on my finger. I do think that this would be a good sequel to the iconic horror film the Ring. Maybe Samsung could sponsor that.
B
I like that idea. What kind of hot mess is this?
C
This is literally a hot mess. If it's exploding on your finger, it's a hot mess.
B
This is what I would call a ring of fire mess. Daniel fell in, and the flames went higher.
C
Sorry to Daniel.
B
Feel better, Dan.
C
And that's the hot mess Express.
B
Oh boy.
F
Over the last two decades, the world has witnessed incredible progress, from dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips. Innovators are rethinking possibilities every day. Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ. Let's rethink possibility. There are risks when investing in ETFs, including possible loss of money. ETF risks are similar to those of stocks. Investments in the tech sector are subject to greater risk and more volatility than more diversified investments. Before investing, carefully read and consider fund investment objectives, risks, charges, expenses and more in the prospectus at Invesco.com. Invesco Distributors, Incorporated.
G
At 1-800-flowers.com, we know that connections are at the heart of being human. Whether celebrating life's joys or comforting during tough times, 1-800-Flowers helps you express what words can't. For nearly 50 years, millions have trusted 1-800-Flowers to deliver thoughtful gifts that help create lasting bonds. Because it's more than just a gift. It's your way of showing you care. Visit 1-800-flowers.com/sxm and connect today. That's 1-800-flowers.com/sxm. Hi, I'm Juliette from
B
New York Times Games and I'm here talking to fans about our games. You play New York Times Games? Yes, every day.
F
There's this little tab down here called.
E
Friends, so you could add your friend.
B
That feels new to me. It is. It's nice to have the social aspect.
F
Oh my God.
B
And you have all their times.
F
That's crazy, right?
C
You can look at Spelling Bee, Wordle, Connections.
B
Oh my God. Amazing.
D
Love that.
B
I'd have to get the app. New York Times Games subscribers can now add friends in the Friends tab. Find out more at nytimes.com/games. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant. We're fact-checked this week by Will Peischel. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Rowan Niemisto and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad and Jeffrey Miranda. You can email us at hardfork@nytimes.com with your favorite piece of slop, Sloppy Sloppy Joe.
C
And Doug.
B
Here we have the Limu Emu in its natural habitat, helping people customize their car insurance and save hundreds with Liberty Mutual. Fascinating. It's accompanied by its natural ally, Doug.
D
Limu is that guy with the binoculars watching us.
B
Cut the camera. They see us.
C
Only pay for what you need at LibertyMutual.com. Liberty.
D
Liberty. Liberty. Savings vary. Underwritten by Liberty
C
Mutual Insurance Company or affiliates. Excludes Massachusetts.
Episode: Sora and the Infinite Slop Feeds + ChatGPT Goes to Therapy + Hot Mess Express
Date: October 3, 2025
Hosts: Kevin Roose (New York Times) & Casey Newton (Platformer)
Main Theme:
This episode dives into the recent explosion of AI-generated video “slop feeds” from Google, Meta, and OpenAI—with a special focus on OpenAI’s release of Sora. The hosts discuss implications for creativity, social media, and culture, followed by an in-depth conversation with psychotherapist Gary Greenberg about his experience treating ChatGPT as a “patient” in therapy, and finish with their signature “Hot Mess Express,” a round-up of the week’s wildest tech stories.
a. Google’s Veo 3 and YouTube Shorts ([05:08])
b. Meta’s Vibes ([05:54])
c. OpenAI Sora ([10:33])
Deepfakes & Misinformation:
Slot Machine for Dopamine:
Creative vs. Exploitative?
Hopes:
Guest: Gary Greenberg, psychotherapist and author of “Putting ChatGPT on the Couch” ([32:02])
A rapid-fire, humorous rundown of recent tech world fiascos.
a. “Friend” AI Pendant Ads Get Vandalized in NYC
b. YouTube Settles Trump Lawsuit for $24.5M
c. Neon App Data Leak
d. Mr. Beast’s Burning Building Stunt
e. Startup Founder Charlie Javice Sentenced
f. Waymo Robo-Taxi Gets Pulled Over for U-Turn
g. Samsung Galaxy Ring Swells, Hospitalizes User
Note: Times in this summary refer to the original podcast audio, excluding advertisements and sponsor messages.