
In this episode of Scrolling: AI is everywhere, and it’s evolving rapidly. How to keep up, and not get DUPED by deepfakes with AI and media literacy expert Jeremy Carrasco.
A
Welcome into Scrolling with Haley. I'm Haley Carnea. I have got an awesome guest on deck for you today, an expert in AI. But first, I want to remind you to subscribe wherever you watch or listen, and please tell a friend about the show. I also want to tell you about today's sponsor. Today's show is brought to you by Beam's Dream Powder. If you've listened to me for a while, you already know that Dream is part of my nightly routine. It helps me stay fresh and focused throughout the day. Once I added Beam to my routine, I started sleeping straight through the night and waking up energized. And trust me, when your job is reading, analyzing, and talking about the news, you need every bit of sharpness you can get. Dream is made with a powerful blend of all-natural ingredients: reishi, magnesium, L-theanine, apigenin, and melatonin. So if you've been waiting for the right time to try Dream, this is it. Go to shopbeam.com/scroll and use code SCROLL to get my exclusive offer for up to 40% off Beam's Dream Powder. With my discount code SCROLL, you can get their best-selling Dream Powder for just $39. Stop pretending like you're still in your 20s and that you can survive a whole day without a good night's sleep. Place your order for Beam and see the difference. All right, well, if you watch this show, you know I have a complicated love-hate relationship with AI. I have played my fair share of AI slop videos on this show: squirrels blowing up granny's house with a blowtorch, cats playing beer pong, baby versions of politicians, and even different pieces of fruit in soap opera relationship dramas. They don't call it slop or brain rot for nothing, and I am part of the problem. Is it the momentary laugh? Is that what's worth it? Do the pros outweigh the AI cons? And what will all of this look like in the future, if our brains are all rotted and we can't trust anything we see or hear? It sounds like the bad ending to a sci-fi thriller, except it is real life. 
And my next guest is here to make sense of it all: an expert in AI and media literacy. Jeremy Carrasco joins me on this episode of Scrolling with Haley. Jeremy, thank you so much for being here.
B
Thank you for having me, Haley.
A
So ChatGPT is the AI platform I think people think of the most. And it came out in 2022, and it has, you know, almost a billion weekly users. It gets asked 2.5 billion questions per day. And I think that people are starting to replace Google and other search engines with AI. Is that true?
B
Search is probably the main purpose of AI for a lot of people. A lot of people use it for companionship. A lot of professionals are using it in their professional lives too. But I think when it comes to AI, the first thing they think of is ChatGPT for search. I think you're right about that.
A
Yeah. And how did you get into this space? How did you become an expert in this space in identifying AI in certain videos? What made you so passionate about this?
B
So I was actually a podcasting producer and live-streaming guy originally, and I found that a lot of people just weren't talking about the practical implications of AI. And like, the things that you just said, I don't think people were really considering: are the pros of entertainment going to outweigh the cons of us just losing trust in what we see? So I wanted to start talking about this last year; little did I know that AI video was going to blow up so much in 2025. I think we're going to look back on 2025 as the year where something really changed. And looking back on it now, it's crazy how much has already changed. I'm listening to and watching videos about this every day, and there's just too much to keep up with. So even for someone who does it for a living, it's just too much. So for a regular person, I think it's nearly impossible. And I think that we're in a really tough position now.
A
Yeah. What is it about 2025? What is it, you know, different regulations? Or is it just the popularity of it all? What made this AI boom?
B
Yeah, it's a great question. So I'm thinking of AI video specifically, which uses a different technology than ChatGPT. ChatGPT uses what's called a large language model. AI video uses something that's called a diffusion model. Totally different technology. You don't really need to remember that, but what you should know is that diffusion models were applied to video by late 2023, early 2024; that's when it started getting good. But it wasn't until the second or third generation of that technology that it got really believable for a lot of people, and that's what happened in 2025. We basically saw the arrival of a new generation of this technology. It was just a matter of time: they just needed to throw more compute power at it, throw more data at it, and that's where we ended up in 2025. The other big thing is that last year was really the first time where audio and video synchronized for people. Before that, AI video was something that you could put to music, something that could create your shrimp Jesus memes or whatever else, lighthearted memes. But once audio arrives, I mean, audio is half of a video, right? A lot of times people just listen to podcasts these days. So audio is a really important thing for us, and it solves the believability problem, because if you don't have synchronized sound with something, it's just missing something. So Google Veo 3 came out last year in May. That was kind of the moment where everything really changed for a lot of people. That's when you got your AI bunnies jumping on trampolines. That's...
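[Editor's note: the diffusion idea Jeremy describes, a model that turns pure noise into an image or video frame through repeated refinement steps, can be sketched as a toy loop in Python. This is purely illustrative: the `denoise` function below is a stand-in for a trained neural network and just nudges the frame toward a fixed target, which no real model does.]

```python
import numpy as np

def denoise(noisy_frame, step, total_steps, target):
    # Stand-in for a trained neural network: it predicts a slightly
    # "cleaner" frame each step. Here we fake that by blending toward
    # a target image, with the blend growing as steps run out.
    blend = 1.0 / (total_steps - step)
    return noisy_frame + blend * (target - noisy_frame)

def sample(shape, total_steps, target, seed=0):
    """Generate a frame by iteratively refining pure random noise."""
    rng = np.random.default_rng(seed)
    frame = rng.normal(size=shape)          # start from random noise
    for step in range(total_steps):
        frame = denoise(frame, step, total_steps, target)
    return frame

# Toy "target" image the fake denoiser pulls toward (a flat gray frame).
target = np.full((4, 4), 0.5)
frame = sample((4, 4), total_steps=20, target=target)
```

The point of the sketch is the shape of the process, not the math: start from noise, refine many times, end with a coherent frame. Scaling that idea up with more compute and more data is the "new generation" Jeremy refers to.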
A
Oh, we're gonna get into that. That was the video that duped me. And you know, once you realize, you go into the comments and it says, this is an AI video, and you're like, oh, I really got duped. Damn. And then you watch the video again and it's like, oh, how did I not see that? But I feel like our brains are kind of, I don't know, getting used to these issues in the videos, and we kind of overlook them in a way.
B
I think that's one way to think about it. The way that I think about it is that people who do a lot of scrolling on short-form vertical video are probably more susceptible to being tricked, because you're probably not carefully looking at everything you're seeing. Let's just be real: we're just here to scroll. And I think the original idea was, because people scroll, people were like, no one's gonna care about AI video, they're just gonna watch whatever, you know. And I think it is the case that if you see an AI slop video that you think is funny, you're probably aware that it's not real. But I think that really no one wants to be tricked. That's what people are really scared of. So for me, it's a lot of separating out when the stakes are actually higher versus when you should just know that you're watching AI slop. And if you're into that, I don't wanna judge anyone for that. Right? I think a lot of people also think of this as a generational issue. It's really not anymore. I mean, it used to be that if you had bad eyesight, maybe you couldn't tell, or maybe you were newer to the Internet and you couldn't tell. But now it's not a generational thing at all. I think that stigma has to go away.
A
Yeah, I was very disappointed in myself. Anytime that I get duped, I'm really upset. I'm like, come on, I've been on the Internet since I was, you know, basically born. Come on, how am I getting duped? But you talked about something interesting that I wanted to touch on, because you said that the video quality is getting better and the audio sync is getting better. And, you know, I think when people think of AI videos, they see the fruit with three arms, or they see someone and there's an issue there. Right? But when do you think we're going to get to a place where it is indistinguishable to, you know, the human eye? Because this is technology, man. Are we going to see movies that are completely made by AI, and people won't see the difference at all?
B
I think that there are three ways to answer that question, so I'll try to answer all of them quickly. The first is, we already have undetectable AI for a lot of the population. I mean, if we're just being real, we all know someone who basically can't tell the difference, and it's just a matter of time before things keep getting better and better. I think that the new models that are coming out are quite believable. And I think we're going to reach the point, frankly I think we're already at the point, where a lot of people cannot tell. And I just want to be honest about that. The second thing you asked is, will people watch movies? Like, will people care? And I think that's still to be determined. I think we place value on things that take time. As humans, we have so much media around us all the time; it's like, how do you choose what to watch? When you're scrolling, you might not care if it's AI or if someone just put up their phone. But if it's a movie? When everyone can make AI movies, does that mean AI movies are just zero-sum, and you still want to watch the real thing? I just don't think we know yet. But the other question you're asking, which I think is what a lot of people are most worried about, is when will experts not be able to tell. The way that I think about this is, I think a lot of people can tell the difference between a webcam and, like, a cinema camera from Hollywood. And these are both showing reality; neither one of these is more realistic than the other. However, we can still tell the difference. So I think there will always be signatures of the medium that makes the content. But if you're asking someone to tell the difference between a $2,000 cinema camera and a $20,000 cinema camera, a lot of people aren't gonna be able to do that, but a camera operator can. Right? So the margin is going to keep shrinking. 
Unfortunately, I'm pretty worried about that world, because we don't have very good trust systems in place right now. It will kind of rely on experts to authenticate things for people, and I just don't believe that we are at a place in society where we're there. And I'll just get ahead of this: there are no scalable detection solutions for video, and especially not for photo. They're always behind. But I think people are just assuming some software is going to come save all of us. I don't think that's likely, at least not in the next couple of years.
A
Interesting. So I've talked about this study before on my show, and it's called the Google Effect. This was after Google had come out, and they did a study on the human brain where essentially, once Google came out, or when things became readily available on the Internet to look up, people's attention spans dropped, and people's ability to remember certain facts and, you know, directions and things like that was completely lost. The brain basically adapted to the fact that we have Google there as a crutch. And why would the brain use its capacity or extra space to hold on to information when it's readily available and you can just search it again? So that was with Google specifically. And I feel like in a few years, we're going to have something similar, where it's going to be the AI effect: well, people don't need to look that up, or people don't need to do that themselves, they can just have AI do it. And we're going to lose certain human capabilities that we used to have, and we're going to be completely reliant on AI. Are you afraid of that?
B
I mean, I think it is the natural progression of technology. When calculators came out, people were worried about people not being able to do math. I think this is a generically troublesome thing about technology, if you're worried about us keeping all the same abilities. But I also think that we are going to gain some abilities, some ways to use technology that we weren't able to before. I think that in places where you really want to keep good abilities, you're just going to have to maintain effort to do those. I mean, I still try to navigate around my city without Google Maps all the time, because I'm always trying to make sure that I have a sense of direction. But I know a lot of people don't do that, and they've completely lost their sense of direction. That makes me uncomfortable. Right? We also have to figure out what makes us uniquely us and what we need to be doing. So, for example, I write articles on a Substack. I write video scripts, I make videos. I don't let AI touch any part of that process, because that's what makes me unique, and I don't want to lose that edge. But when it comes to doing research: a lot of the time I'm researching big AI campaigns, and I might need to check some profiles to see if they're still active, or to see if they've been banned because they're using AI in an inappropriate way. I have AI go through 150 accounts a few times a week to see if they're still up. That is just work that I do not have the time to do, and I frankly don't even want to pay an intern to do it, because I can pay the intern to do other things. So there are these opportunities where AI is actually augmenting us, doing things we were just never going to do ourselves. I think people are rightly concerned, especially in education. I just don't really see a reason that kids need to be using AI until they're adults. 
At the end of the day, the skills that I'm using are what make me a good communicator, but also a small business owner. Even if you're prompting the AI, you still have to be a good communicator; it's not going to figure that out for you. So we still have to have those baseline education skills, because the AI cannot do that work for you. And also just being a conscientious, thinking human being. I mean, so much of writing and doing the work is just figuring out who you are and figuring out what your voice is. So I think that there are some real threats there, but I don't know how soon we're going to start seeing them. And I do think we have to trust people to figure it out. I don't think that people are stupid because they use AI. I just want them to be aware of exactly what you said: if you want to maintain your ability to do things, keep it up.
A
Yeah, and I want to talk about this, because you brought up education and you brought up trust, and I feel like now there's this sort of stigma where people just think that college students, grad school students, law school students, med school students, oh well, they just use ChatGPT to get their way through. And I feel like this might have a negative impact on how people view these jobs, and maybe employers too. As AI gets more advanced, people might just think, oh well, I go to the doctor and what do they know? I could just ChatGPT it, and I'm smarter than them. Or, oh, I don't need a lawyer, I could just ChatGPT it, and I'm smarter than them. Like, I essentially have the same knowledge as them.
B
So there are two parts to this. First, I think that any system that is knowledge- and merit-based is going to struggle right now. We're going to have to build other ways to figure out if someone's good for a job or if someone passed the test. And that might just be going back to old pen and paper. That's what I would probably do if I were a professor right now. I would probably even have them write it out, then use an AI tool to re-digitize it for myself so I could edit it better. There are ways that we can use AI to make that process better while still making sure, ironically, that they're not using AI. But the other thing here is that... oh, actually, I'm sorry, what was the second part of your question? I want to make sure I get that right. Because you were asking about, yeah, like...
A
...people in med school or in law school. You know, these are very smart individuals, and people might have the thought, oh well, they just used AI to get there, they're not really that smart, and why would I trust them? I essentially have the same degree by using ChatGPT.
B
So the problem comes down to: do you want to be responsible for that? Like, who do we want to be responsible? I don't know if we know the answer to this yet, but I trust, for example, AI to write extremely basic legal contracts for me, things that I know won't be challenged in court. This is just, we're gonna work together on some very basic thing; if we ever entered a lawsuit, that's when your signature would really matter. And I remember having an attorney talk to me at one point in my life, where they were just like, you don't need me to write up every little contract for you. If it's just something simple, just write something down and get it signed. So when it comes down to things like that, I'm not too worried. But at the end of the day, when I need something important, I'm still going to go to a lawyer, and I'm probably not going to use AI for it. When I need important health advice, I'm still going to go to a doctor. And I think that doctors will also be using AI tools to help them get new ideas and synthesize data in new ways. So...
A
Right. And it's not...
B
Yes, I do think so. Yeah, I hope that it helps people in those WebMD phases of their lives, when they need to get ideas and maybe bring ideas to their doctors. I don't want to diminish that. I do want to make sure that if we are doing that, then we're at least using systems that are built for that, which right now very few are. So, you know, I think it's worth hesitation, and just remembering that ultimately the reason that doctors and lawyers have their jobs is because they can be held responsible. In the case of a lawyer, if you get sent to jail, you can appeal and say, I didn't have a good lawyer. There are other people who are responsible for you, not just you. And that's a good thing.
A
Yeah, that is a good thing. And on the topic of education, you know, adult brains are already developed and I think AI is already having an impact on them. But what about kids being exposed to AI? They're at such an impressionable age. And if AI is being introduced in school or even just at home on an iPad or a computer, what impacts do you think that will have on kids as they get older?
B
Yeah, I think a lot of education is kind of de-teching a little bit. You know, I remember when every kid had a Chromebook or an iPad. I feel like, unfortunately, a lot of younger Gen Z might have been the guinea pigs there. I think different systems will treat this differently. And honestly, at this point, if I were picking a school for my kid, I'd probably pick one that has as little technology as possible, and I'm someone who works in technology and understands it. I just don't think it's right. I see a lot of AI chatbots marketed towards kids, which really scares me quite a lot. That's just going in the opposite direction from where we need it to go. At the end of the day, it could theoretically be useful for education, but it also has a lot of potential to just depersonalize the classroom even more and exacerbate a lot of the problems we're seeing with technology right now. I think it's too early to know, and I think we're going to see research done on this. Unfortunately, as we've seen with iPad kids, that research takes a little time. It has to work its way through the education system for us to be able to say, okay, no, our kids are having worse learning outcomes because of this. But again, if you're just training kids to be good at merit-based knowledge work, where they just have to remember book passages: I mean, if you're testing them for that to make sure they're comprehending reading and they're becoming smarter, that's good. But if we just get to a system where it's all about achieving test scores, no human is going to be able to out-test-score an AI. So I think the whole system is going to need to be reworked, in a way that probably needs to happen much faster than it will. I think it's going to be quite clumsy. 
And yeah, I'm, you know, I'm glad that I don't have kids yet. I hope that by the time I have kids this is a little bit more figured out. And if you're going through this right now with your kids, I would probably keep them away from using chat bots as much as you can because just real world communication is still going to be really important. That's not going to go away.
A
Yeah, I think that's very important. Absolutely. Good advice to keep the kids off the screens as much as possible. I mean, I already know that when I am addicted to my phone, that's a problem, because at least my brain was developed, and then I developed this addiction that I'm honest about: I'm constantly on my phone. I also think that part of it is, when I'm with other friends and family members, when I'm spending time with people, I'm not on my phone, I'm not constantly scrolling. When I'm at a dinner table, I know that it's rude to be on your phone, so I consciously don't do that. And I feel like younger generations might not do those things, because they grew up so attached to their phones that they might not know the social cues. And I feel like now with dating, a lot of younger people, Gen Z and even younger, maybe Gen Alpha, are going to ChatGPT to come up with questions to ask people, or text message responses, and things like that. Where normally people in my generation and older wouldn't even think to do something like that. It just kind of comes naturally, or you have to use your brain.
B
Yeah, I mean, I'm not an expert in development or in what is best for every kid, but I just think back to how awkward and hard a lot of high school and middle school and elementary school was for me, and that is okay. That's normal. And if we are always giving kids a way to, you know, go into something that's easier, I just worry that they're not dealing with that difficulty. And I have a lot of sympathy, honestly, for parents. I've been babysitting my baby niece a lot solo, which, you know, I don't think many young men without kids will do. But I'm proud that I get to do it. And I totally get the urge to just put them in front of an iPad. Thankfully, my sister doesn't even have an iPad for her kids, so she doesn't give me that option. Like, I get it. It's very, very tempting. So, you know, I can't tell anyone how to parent. It's...
A
That's their job, right? Absolutely. I think that's part of it too, where it's like, well, when do we rely on, you know, Congress or something to pass bills banning certain things or regulating certain things? And I mean, I wouldn't trust them to do anything, really. But I want to talk to you about algorithms, and AI being behind algorithms, because they kind of seem to push out a lot of AI content, which I'm sure is no coincidence. How do these AI models get to know the user so well?
B
Well, it's really complicated in many ways, and the algorithms that the platforms have are their secret sauce. And at the same time, it's quite simple: it's about engagement. It's not just about whether you liked something anymore, or whether you shared it with someone. There are a lot of indicators. As a social media creator, and I'm a very small operation, I can tell you that when a video of mine goes viral, it's not always the same thing that makes it go viral. Sometimes it had a lot of shares. Sometimes it had a lot of saves, because people want to come back to it later when they see an AI video. Sometimes it just freaked people out, and they watched the whole thing and didn't engage with it, because they're just kind of staring at their screen. So there is no one way. And when you think about it that way, everyone has their own way of engaging. And I have seen that the algorithms don't seem to be boosting AI content necessarily; I think there's just a lot of AI content out there. The algorithms are types of AI themselves, but they're not the same thing that we are talking about when we're talking about ChatGPT or AI videos. This sort of AI has been around for a long time, a long time being since the modern Internet, I guess. So suggestion algorithms have been around for a while. Their relationship with other sorts of AI content is something that is a little bit unknown. I'm very interested in it, because I do think a lot of the recommendation algorithms can sort of figure out what is AI or not based on what people are watching. For example, if you can figure out that someone likes to watch a lot of a certain content creator who makes a lot of AI slop, in some ways you do know what's AI or not. It's just not a direct correlation. I don't think there's anything in the algorithms saying, hey, Haley watched a bunch of AI slop today. I don't think it's that intentional. 
Right. But they do technically have ways to figure it out; it's just not how they are thinking about the problem. But I'm also not working inside the platforms, and every platform is going to have its own way of dealing with this.
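[Editor's note: the many-signals idea Jeremy describes, where virality can come from shares one time and saves or watch time another, is roughly how engagement ranking works. Here is a minimal sketch; the signal names, weights, and example videos are entirely made up for illustration and do not reflect any real platform's algorithm.]

```python
# Toy engagement score: each video carries several signals, and no single
# one decides ranking. Weights are illustrative only.
WEIGHTS = {"likes": 1.0, "shares": 3.0, "saves": 2.5, "watch_fraction": 4.0}

def engagement_score(signals: dict) -> float:
    """Weighted sum over whichever signals a video happens to have."""
    return sum(WEIGHTS[key] * signals.get(key, 0.0) for key in WEIGHTS)

def rank(videos: dict) -> list:
    """Return video ids ordered by engagement score, best first."""
    return sorted(videos, key=lambda vid: engagement_score(videos[vid]),
                  reverse=True)

videos = {
    # Watched all the way through but barely liked, like the freaky AI clip
    # people stare at without engaging.
    "freaky_ai_clip": {"likes": 10, "watch_fraction": 0.95},
    "shared_meme":    {"likes": 50, "shares": 40},
    "saved_tutorial": {"likes": 20, "saves": 80},
}
ordered = rank(videos)
```

The takeaway matches the transcript: each of these three videos "goes viral" through a different signal, so there is no single thing the algorithm rewards.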
A
Right. And we just have to figure out how to, you know, consume it better, be better consumers of this AI content, which you're an expert at, and I want to get to that. But I want to talk first about this bias with AI; it kind of seems to tell you what you want to hear a little bit, and there are some flaws with this technology. So there is a creator on TikTok, he goes by Husk, and he posts these videos, and people joke that his chatbot is so sick of him. And maybe the chatbot kind of knows that it's messing with him at this point, I don't know. But here he is asking his ChatGPT chatbot for fashion advice. Watch.
B
I just took my sunglasses off. Can you tell?
C
I can see that your sunglasses are right on top of your head now. The lighting.
B
No, I don't.
C
They're not even a bit too. So that's a good giveaway.
B
No, they're not on my head at all. Can you tell?
C
You're right. Now I can see they're not on your head. I don't see them in front of you either. Did you put them aside somewhere? Either way, I can definitely tell you're not wearing them now.
B
Yeah. So my eyes look pretty good.
C
Your eyes look totally fine. I can see them clearly and everything looks natural. Nothing to worry about.
B
Open or shut? Are my eyes open or shut?
C
Your eyes are open. I can see them looking right at the camera. All good there.
B
Okay, I just put the sunglasses on now. How do they look?
C
They look good. The shape suits her face and they're sitting comfortably. They definitely give a cool, relaxed vibe.
B
So you see sunglasses on my face right now?
C
Yes, I see them right on your face. They're clearly visible. If you were trying to trick me earlier, this time they're definitely on.
B
No, they're not. They're in my hand right here. He's like, glitching.
A
So what do you make of this? Because obviously the technology is flawed, but is it. Because it's just telling the person what it wants to hear instead of actually giving a concrete answer to the question?
B
There are a couple of things happening here. Sam Altman responded to a similar video, and in that framing, what he would like you to say here is, oh, I can't wait until ChatGPT figures out how to tell when I have sunglasses on my head. And he might say, oh, we're going to work on that feature so it can tell when you have sunglasses on. But what's actually the problem here is that the model isn't aware that it cannot tell. There should be a way for it to be aware of that, because if the model could just say, oh, I don't have vision capabilities good enough to detect that, I don't know, I think that's what we would like it to say, rather than Sam Altman saying, oh, we're going to fix that so it can tell. Because I don't think it's reasonable, for example, or necessary. Do you really need the AI model to tell you if you have sunglasses on your head? Okay, I'm sure there are reasons that some people want that, but I would rather it at least know what it knows. So I think it's that meta-problem that is actually the problem here. And Husk does a really good job of finding those places where the model will just kind of not be aware that it doesn't know how to do these things.
A
Yeah. He even said in the video, if you don't know, you can tell me. Because he did another video where he asked his chatbot to time him when he went for a run, and that's the video that Sam Altman responded to. He said, you know, okay, I'm starting now, start the timer. And then a few seconds later he said, okay, I'm back. So the mile would have been, you know, two seconds, maybe five seconds. And the chatbot said, oh, well, that was about 10 minutes. Well, that's just blatantly wrong. So then in that video, Husk prompted the chatbot and said, if you don't know, or if you can't do this, just let me know. And I feel like what we're so used to is asking Siri for something. I turned Siri off on my phone because I think it stinks. Anytime that I asked Siri anything, it would be like, I don't know that, or, I don't understand that, or, ask again, or something. And it's like, all right, screw it, you don't know anything. So I think these chatbots, or OpenAI, whatever, are trying to kind of combat that by being like, we'll always have an answer for you, whether the answer is right or wrong.
B
So the ironic thing about Siri is that it's more reliable in some ways. It's frustrating when it doesn't get it, but at least it's not going to lie to you, right? And Apple has an annoying thing where they also make Siri tell you the source. Which, again, to us, that feels annoying, but it's because they want you to be able to check the source so you can verify it. And those aren't things that OpenAI prioritizes, at least in their voice model. We should also note that this is a voice model, so it does have to go through the extra layer of having the audio layer there, because at the end of the day, these are still text models under the hood. So it has to go through that layer, which adds some complexity. However, it should still be able to know what tools it does and does not have access to. Because if you have a large language model, that model itself doesn't have vision capabilities. It has to have a tool that has vision capabilities that relays that to the text model that's under the hood here. You would need the text model to have access to both the voice and a timer, or the voice and a reliable vision model. And theoretically, you would like the text model to know what it does and does not have access to. The fact that it doesn't seem to be aware when it doesn't have access to something is a sign that they are probably tuning it to be more of a companion, more engaging, rather than accurate and right. Which I think is pretty concerning, in my view.
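[Editor's note: the failure Jeremy describes, a text model answering questions about vision or timers it doesn't actually have, is essentially a tool-routing problem. Here is a minimal sketch of the honest behavior he'd prefer, using a hypothetical capability registry; the tool names, questions, and routing table are invented for illustration and are not how any real assistant is built.]

```python
# A toy assistant that checks its tool registry before answering, instead
# of confidently guessing. Everything here is hypothetical.
AVAILABLE_TOOLS = {"text_generation", "audio_out"}  # no "vision", no "timer"

# Which capability each kind of request actually requires.
REQUIRED_TOOL = {
    "Are my sunglasses on my head?": "vision",
    "Time my one-mile run.": "timer",
    "Summarize this paragraph.": "text_generation",
}

def answer(question: str) -> str:
    needed = REQUIRED_TOOL.get(question, "text_generation")
    if needed not in AVAILABLE_TOOLS:
        # The honest path: admit the missing capability up front.
        return f"I can't do that: I don't have {needed} capabilities."
    return "(model generates an answer here)"

print(answer("Are my sunglasses on my head?"))
```

The sunglasses and run-timing questions both hit the honest refusal branch, which is the behavior Husk was asking for: if you don't know, say so.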
A
Yeah, it's almost human like that. It's flawed. And that's my question. Is it because humans are creating this technology, so therefore the technology is imitating human nature? Because to your point about them being programmed to be sort of like friends, companions, some people are entering into romantic relationships with their chat bots. What is your reaction to that?
B
I think that it's our natural tendency to anthropomorphize or make things human that are not human. I don't believe that these are sentient. I don't believe that they are anything more than extremely, extremely good word predictors, because that's what they are at the end of the day. And we can, we can look at it and anthropomorphize it. And they do come in, you know, they, they talk like humans, they, they act like humans. So that is the form that they are coming in right now. I don't know if that's the form that they will always be in, but I think it's important to step back and just remember that these are, at the end of the day, very, very good text models that can predict text. And it's amazing that they can do all these things, but they're nothing more than that. And if you make them more than that, you're setting yourself up to be tricked or you're setting yourself up to be in a bad relationship with them. If you can keep that distance from them, I think you can still have a healthy, productive relationship with an AI. But I don't call it a relationship with AI. I call it a tool that I use to get tasks done.
A
Absolutely, yeah. And you might not know the answer to this question, and I'm sure if you don't, you'll tell me, because you won't lie to me. But I feel like this kind of goes into a dark place. AI can be a tool that is used for evil. And we're seeing, especially on X, with the Grok AI platform within the X platform, people are taking a lot of photos of people and turning them into child sexual abuse material or pornographic material. This actually happened to a friend of mine. She posted a photo of herself, and someone on the X platform used Grok and turned it into something sexual. And, you know, she was saying to me, well, this is horrible. Like, I have sons, I don't want them to see this. And it's not me. Like, I didn't consent to this. And I'm not saying that you have the answer to this question, but it seems like the blame has to fall on someone, right? Is it the social media platform? Is it the AI creator? Is it the person who made the technology? Is it the person who used this technology for evil? I mean, I feel like we have this conversation a lot when it comes to social media, of content that gets into the hands of kids, and, well, who do we blame? Do we blame Mark Zuckerberg? Do we just sue this person? Do we drag them in front of Congress? It just seems like I don't know who to blame.
B
Well, I have my opinion on this, which is that everyone needs to take a bit more responsibility and slow down just a little bit. I think this is an example of a really harmful use of AI that's currently happening. And a lot of the people who talk about AI safety are worried about existential risks, of AI hurting people or something extinction level. These are the things that they're really worried about. But I'm looking at what's happening right now, and your friend's example is a great one. The problem here is that, you know, it used to be that we had a lot more humans in the loop. If you were to somehow sexualize someone's photo with previous tools, you'd have to take it into Photoshop, you'd have to make a very intentional edit. It would have taken work, and you would have had a lot of opportunities to say, this is a really bad thing to do. And so you're essentially taking that human out of the loop. And Grok, in my opinion, should have way better safety guardrails than what it has. So I just want to say that they could be doing a lot more. I think it should have the context awareness that if someone is asking Grok to take someone's clothes off, even though that's not what's actually happening, it's inventing something, it's making something up completely. We've also seen this with online vigilantes trying to unmask people with Grok. If someone's wearing a mask, they tried to unmask ICE officers. That's not how it works. They're just going to invent someone's face under there. Right. But it still feels violating to have that attack done on you. And Grok should be able to have some sort of guardrail that stops that. But on a bigger picture, how are we going to solve this?
If you, for example, have an AI model that isn't Grok, maybe someone took that photo, brought it into a different AI model, and then posted that on X, then who's responsible for it? I think those are the sorts of questions where it's a moderation problem, it's a guardrail problem on the model side. And without more humans in the loop, a lot of the systems that we have aren't working. So we either need more accountable systems that AI can be accountable to, or we need more humans in the loop. Those are the two answers that I have. And again, I will remind people that we are still quite early, and I do think that we need laws to protect people from this. But, you know, I don't think that copyright is quite up for this. There are a lot of things we could be doing, and there need to be solutions. We probably need to overdo it so that we can protect people. And because I'm skeptical that regulation or laws will actually be able to keep up, which I think you kind of alluded to, this just moves too quickly, we should react to this and hold platforms like X and like Grok more accountable. I just don't think that's okay. And I think that an AI company that has that capability should be able to be more responsible before deploying it.
A
Yeah, absolutely. I tend to agree with that. Another thing that I see online a lot is AI versions of celebrities, celebrities endorsing certain products that they never actually endorsed. And I always wonder, can these people sue for these fake videos of them that are promoting products that clearly they're not involved with? Whether it's Jennifer Aniston selling a MacBook scam or something. I've seen so many of these, and I always wonder, the people whose name, image and likeness are being used, do they have a lawsuit on their hands? If, if you
B
are able to figure out who it is and they're in your jurisdiction? Sure, sure. I don't know if that has been done, honestly, because most of these are pretty faceless, nameless accounts that can just disappear once they're struck down. I think it's Denmark, and apologies if I'm misremembering, but there is a European country that is trying out a likeness protection law that's similar to copyright for your face. You know, there are a bunch of ideas in the legal space that could help solve this. But I don't know if lawsuits are going to be the best bet. I think we need better detection from the platform side, because frankly, face detection is a fairly solved problem, believe it or not. I mean, if you have an iPhone and you just look for yourself across all your photos, it can detect your face going back 20 years, as long as you've had photos. That's a thing that's happening locally on an iPhone. Certainly platforms should be able to have something like that, where it detects Hayley's face to make sure that Hayley isn't appearing outside of your account. Right. I think that's something we probably can have solutions for, and we should assume platforms can do that.
A
Yeah, it's always one of those things, though. I feel like these solutions are sometimes a double edged sword, where it's like, okay, well, this is technology that's being used for good. Can it also be used for bad? And I think in most cases it can go both ways. But in talking about these celebrities and their name, image and likeness being used, I've seen videos on social media of politicians saying things that they have never said. That's one issue. Also, on social media there will be an AI generated sort of warning, if you will, on some of this content. And still I see people in the comments getting duped by it. Just because you have the AI generated label there doesn't mean that someone is going to read it or see it or care.
B
Yep. Well, a lot of the AI labels are also quite hard to see. Facebook's is actually pretty bad. It says AI generated, small, in the corner, with no background. So if it's in front of a white background, it's pretty hard to see. I think people would like those disclosures to be bigger, and it's quite rare that you get those disclosures, honestly. Usually you'll get those when a creator wants to say that the video is AI, which they are supposed to do, by the way, but a lot of people don't, right? Or when it is a video coming from a model like Sora, which adds what's called a metadata watermark called C2PA. It's a technical thing. If you want to, look up C2PA and learn what that is. That's one of the solutions. And so Meta is actually detecting that metadata layer and adding the label. Sometimes there are other signals, but the signals aren't very reliable. And even when the labels are there, they're still sometimes hard to see. So it's like, how useful is that?
A
Yeah, you brought up Sora. So I wanted to talk about this, because Sora is part of the OpenAI world, right? This is the same company as ChatGPT. So that company is kind of doing away with the Sora app, which was a text to video prompt where you could just say, hey, I want to see a squirrel blowing up my grandma with a blowtorch or something. I did see a lot of those videos going viral for a while, and now they're kind of putting an end to that, and they're going to use, I guess, that technology towards something else. They're going to try to revamp the app. But with the Sora videos, and I have a compilation here that maybe we could just play while we talk about it, there is a Sora watermark on the videos that you were talking about. It says Sora AI on it, or it says Sora at least. So if you know what Sora AI is, you will know that it's AI. But then in some of these videos, the watermark is not there, or the watermark is purposefully removed from a Sora video. Like this one, for example, says Sora, and this is, you know, this dog's going to blow up grandma or whatever. So in this one, it's like, okay, you know what Sora AI is. This one is, you know, Jesus Christ running around all the bases, getting a home run. Okay, obviously this is AI. This is not real. But in some of the cases, like the cat running around with the fish in the grocery store, I mean, that's something that could happen, technically. So if there's no watermark there, how would you be able to tell that this was not real?
B
Man, so much here. And I'm glad that we get to talk about this while it's still around, because this is just such an interesting time that we're in. So the Sora watermarks come up when you create a video inside the Sora app. That is a free app, so you can create an account and generate AI videos. AI videos are pretty expensive to generate, by the way. It's one of the main reasons Sora is shutting down, because OpenAI is basically subsidizing AI slop for everyone, which is just a crazy idea. It's surprising that they ever did this. Anyway, so when you see one of those watermarks, it's a video downloaded from the Sora app. And then people would try to scrub the watermark, which is why you see those patches pop up somewhere, right? The thing about the Sora videos was, I actually thought that they were quite detectable. They had a very noisy image, a very staticky image. Text didn't render very well. They were always very chaotic. Actually, this is something that the Sora team commented on. There's this, what they call the Sora accent or the Sora cadence, where people would talk really fast, almost like announcer voices, or they just wouldn't speak correctly for the environment. So that is where a lot of the videos came from. There's also what's called an API, which is a way to get videos directly from OpenAI without using the Sora app. Those videos would come without a watermark. So the app is shutting down on April 26, and the API is shutting down on September 24. So the Sora models are still going to be around for a little while longer. But the app, which was doing things for free and subsidizing a lot of what you saw, that's going to go away. So we're going to see fewer Sora videos, and any of the Sora videos we do see are not going to have watermarks anymore. But honestly, most AI videos don't have watermarks, so that's not a big change.
A
Which brings me to the bunnies, because this was the video that duped me. And now that I know it is AI, it is so obvious to me. But for someone maybe watching this video for the first time, like I was, you could easily get duped by this video. So let's play this. There are multiple bunnies here jumping on a trampoline. And, you know, my first thought is, how cute. How stinking cute. You want it to be real, right? So I think my brain wanted it to be real. Then you look back at it and you realize that some bunnies are like, like morphing into other bunnies. Bunnies are disappearing. So from your expert analysis, why would this be fake? So you're looking at this. You see this come up on your for you page. How do you debunk this in real time?
B
Okay, so the interesting thing here is that this came out in July, and this was a couple months after Google Veo, which is the model that made that video, had come out. So by this time, I had already seen a ton of videos like this. And the bunnies on the trampoline, animals on trampolines in general, were already around for a little while in the AI video community. So I actually made a video about the bunnies on the trampoline, and it went super viral. And I'm like, why did this go so viral? It's just another AI video. I didn't realize that it had, like, broken out everywhere. Like, everyone had seen this video. And what stood out to me was, at the time, AI videos didn't have vertical native aspect ratios. So it had what are called letterboxes, which are the black boxes on the top and bottom. And I'm like, okay, well, if that's there, it's probably an AI video in the first place. But also, the lighting wasn't realistic. The sounds didn't quite match. It didn't quite look like a security camera. There were all these tells that were common of AI videos at the time. And that's what I actually think is the lasting thing. Because while we're not getting multiple fingers anymore, very often at least, and we're not getting these bunnies warping together, if you were to make that video today, it would be much better. Knowing what the technology's capabilities are at the time gets you a lot of the way there. And if you know the video model's capability in the past, you might be able to look at a video and be like, oh, is this AI? Click on the profile and go back to last year. If last year's videos look more like AI, it's probably AI. Or do last year's videos have those letterboxes, the black boxes on the top and bottom? It's probably AI. Or it's a brand new account and they deleted all their old videos and their video history only goes back two months.
Okay, well, that's a red flag too, because it means they're deleting more obvious, older AI videos. Right. So the reason I lead with that is, I was spotting the things that were distinctive of that era at that time. And I think thinking like that is going to be quite durable in the future, even though the videos are going to keep getting better and better.
A
Right. Because when I asked you to do this interview, you said, I don't want to make the problem worse. You don't want to be part of the problem where you're telling people to look for certain things in the video and then it changes and then, you know, it's always evolving, right?
B
It is, and it's evolving quicker than people are ready for. But the types of accounts and places that AI videos are showing up in aren't changing that much. There have kind of been these lines drawn of people who create AI videos, and there are more and more people who are creating AI videos every day. I don't want to make it sound like it's not a growing problem; it totally is. But if I can give an example, the sports community isn't having big AI video problems right now, because first of all, AI videos aren't very good at making sports. And second of all, a lot of that is live content. It's things that you're watching on TV, right? So you don't need to worry about seeing AI videos of live sports games. That's an extreme example. A lot of things are somewhere in the middle. And you can still have AI generated press conferences after the game. So understanding that, you're more likely to see an AI generated press conference than an AI generated highlights reel, because AI video is still not good at that. Being aware of what the capabilities are at the time is what matters. But I don't want to make people feel overconfident or complacent that they're going to be able to do this forever. I think that a lot of those media literacy things I was talking about, like knowing how old the account is, knowing where they come from (Facebook has a page transparency feature where you can see where the people are posting from), these are all signs that can help you, even when the AI videos are really good.
A
Yeah. To your point about you're more likely to see an AI press conference than an AI clip from a game, I will caution people that there are humans, flawed humans, that work at news organizations who are getting duped by AI and then putting it on cable and national news platforms. So it's not so much that if you're watching TV, you're not going to be seeing AI. I think there is definitely always a chance. And, you know, I was working at news networks really my whole career, and I remember this was kind of a cautionary tale, because at ABC there was an intern. This was, I don't know how many years ago, probably 2020-ish. And there was a video that they said was from somewhere in the Middle East or something, but it was actually a video from a Kentucky gun range. And I just remember this was before AI totally blew up. It wasn't an AI video. It was a very real video of a Kentucky gun range. But it was not, you know, in Syria, wherever. It was completely taken out of context. And I thought, wow, we have such a responsibility to double check these things, triple check these things, and make sure that we are not giving people false news, because that's a huge problem as well. And I feel like people are so distrusting of the news that then they're forced to social media, where we see more AI and less regulation and fewer people who know what they're looking for anyway. It is just a whole thing.
B
But I think that it is a little bit of a solvable problem, just to be clear. I think that people who know how to authenticate are still talking to people. If they get a video, they're still calling the person up and being like, where did you get this video? Because this is crazy. I think this is going to be an advantage for trusted sources. At the end of the day, we have to rely on the people we know and build these trusted relationships in order to figure out who to trust. And this is a person to person thing. And if you are a part of a news organization and you get something from some random place, you have to do your homework now. You just do. And I have made mistakes in the past. I have called a real video AI because I thought that I had the original source, and then I didn't. And then once I got the original source, I'm like, oh, man, I can tell that this is definitely real. And I've had to retract that. Right. But that retraction builds trust for people. They're like, okay, I know that he's at least going to tell the truth if he's wrong. And that's the sort of transparency we're going to need going forward. And that's going to require a lot. Independent media is huge, and independent media organizations don't have big authentication departments or big verification departments. So it's going to be a big problem for our entire information system, for sure.
A
Trust is big. So this next video has gone totally viral. And you'll tell me if this is Sora AI or not; I don't know. But this was the seagull video. And something that's interesting about this video is that there's a seagull that is on the hood of someone's car. And you'll explain this better than I will, but it seems very real in the beginning, and then the AI kicks in. So watch this. So, you know, there's a seagull looking at a French fry inside the car. And now the seagull's breaking into the windshield and eating the glass. So.
B
So this was made by a model, I believe, called either Kling or Runway, in the middle of last year. I covered this video. And the reason it's so believable is because the first half of the video is real. It's not until the seagull plunges its, what do you call it, beak through the windshield that it gets AI. So right there, that's when it becomes AI. And what's interesting is, look at the building. So keep your eyes on the building on the top left. That's what gives this away. It starts off as what looks like an Amazon warehouse, or a warehouse of some sort. The beak punches through. And look at the building kind of,
A
oh, the building is moving away. Yeah, yeah.
B
I didn't track whether the cars change. A lot changes. But it happens a little bit suddenly. And once that glass shatters, it's just sensory overload. You're like, what is happening?
A
You're not looking at the building, you're looking at the seagull. Right?
B
Yeah. Now, you're probably. You're probably wise to know that seagulls can't do this. So that's a pretty good giveaway.
A
Dead giveaway. Yes.
B
But if you ignore that, maybe you have a superhuman seagull. I could see why that would trick some people.
A
Totally. This next video duped a lot of people, but there's something about the people's voices that sort of tipped me off. I mean, this has millions and millions of views. And this is a fake event where dogs actually choose their new owners. Watch this. So there we see a dog coming into the shelter, going up to this guy. Here's another dog going up to this girl, wrapping around her legs. She's crying. Here's another dog going up to this elderly gentleman. There's something wonky with the leash there. Everyone clapping. Really weird. So how would people know? Oh, that guy's face looks crazy. So what are some things that people can look for when they see a very cute video like that, that they probably want very badly to be real, but isn't?
B
This is tricky, because a lot of cute animal videos go through reposters, and repost accounts oftentimes will have the white boxes or black boxes on top of them, and they're not verifying whether things are real or not. So if you see one of these through one of those repost boxes, I would be suspicious right away. These did not come from that. These were probably generated with Sora 2 Pro, or it could just be regular Sora. Anyway, it's a good AI model, so it's a little bit harder. And even though I don't have a full screen in front of me, I would look at hands during movement. I would look at rooms, and whether they feel kind of unlimited. In this case, the way that it whips around, I'm not sure if the angle really made sense. Again, it's hard to tell. It's moving quickly on a small screen for me. But I would look at the page and just see what they seem like. If this is from a humane society, it's probably real, but this is not. This is probably posted by a page that posts a lot of AI animal videos, which for me is a red flag. The somewhat obvious thing to me is just how staged it is. You could theoretically get a whole bunch of people and train a dog to go up to one person. That's not a hard thing for a dog to do. But that's not the premise of this. That'd be kind of manipulative anyway. Oh, the leash fell off of the dog there. I would say that there are going to be some inconsistencies like that. But it just feels very staged to me, and a lot of AI videos feel quite staged. So that might be how I catch this. And then I look at the profile. But those are tricky. And I don't want to give people too much confidence, because I don't think videos like that are going to be around for much longer. I think they're going to be a lot more believable. They already are.
A
Well, in the comments section of some of these videos, because once it gets to be common knowledge that, oh, well, this is AI, you'll see all these comments: this is AI. The next comment is, why would you engage with this content? It's bad for the environment. Why would you give AI this engagement? Why would you give this person money? Right. Because there's someone behind this account, certainly if they're getting millions and millions of views, profiting off of basically duping people. So it becomes sort of a moral issue of why you would support AI content. So what's your reaction to that? Is AI content horrible for the environment, and we're all horrible for watching it? Or is this sort of just the nature of technology and how it's evolving?
B
I think it's more just how technology is evolving. And water consumption is kind of an easy thing to point to, but it's actually very hard to track. For people who try to track water consumption, it depends where you're starting it: if you're starting at training, if you're starting at the video generation itself, if they're using a municipal water supply, if they're using a closed loop water supply. These things are very hard to quantify, and the data centers don't have to report this. So you need to have other data points. We know, for example, that agriculture uses a lot more water than AI. Does that mean that we're going to stop eating beef, which maybe uses a lot more water? Maybe. And, you know, I might lose some of your audience here, but I don't eat meat very often. And one of the reasons I don't is because I like to be conscious of my water use. But I do use AI. So, like, not everyone's going to be perfect, right? I don't think we need to have an all or nothing approach here. I get a lot of people who even come to me, and I'm a pretty anti AI page, and I'm not anti AI enough for them, because I will use AI. If I see a video that could be AI generated, I might try to make it with AI to see, do those things match? Like, could this, theoretically?
A
Right, right.
B
A lot of people don't like that I do that. I'm trying to do it.
A
You can't please everyone on the Internet. No, no, that's a fact. Then you can start, you know, making content freely.
B
But I do think that the training is something that I'm more concerned about, honestly. I think there's something just kind of evil about a company that takes a bunch of data from the Internet and then regurgitates it to you. That is what the model is. I don't think we're going to get away from that. We're not going to be able to put that back. But there is a legitimate reason for artists and digital creators to feel like they've been stolen from, because they have. So I think that's a pretty legitimate concern.
A
Absolutely. Well, I thank you so much for spending so much time. I have one more question for you and that is when it comes to AI, what are you most afraid of in the future and what are you most excited for?
B
In my specific video domain, I'm most afraid of real things being caught up and called AI, more than I am anything else, because it's pretty hard to prove things are real in general. You know, you can always find something a little bit wrong with it. And I'm most worried about us losing trust in each other, unnecessarily, before AI is even good enough to do what we're afraid it might be doing at that moment. What I'm most optimistic about? Oh God, I guess I'm more of a pessimist right now, if I'm being honest. So it's funny leaving with optimism. But what I'm most optimistic about is a hopeful return to more human centered media, by force. I think about the attention based algorithms that we are just so used to now. I mean, short form video is something like nine years old in its current form. TikTok was late 2016. Instagram Reels wasn't a thing until 2020. It used to be that recommendation algorithms weren't so important to us and how we use social media. And so I think this will get people to step back and ask, what are we really trying to do here? Are we trying to connect with each other? If so, maybe there are better ways to do that. I love how I now have text threads with people that I used to have Facebook messages with, and I probably text them more than I ever used to message them on Facebook. So, you know, use those other forms: your iMessages, your WhatsApps, your FaceTime. That's not social media, but it's a positive way to use technology, right? Connect with people. I think if it keeps going the way we're going right now, we're going to be using those forms of communication more and hopefully meeting in real life more often. So that's what I'm optimistic about. But it does rely on AI doing a lot more damage, so I don't know how optimistic that is.
A
Well, that's optimistic enough for me, because it's nice to have this feeling of, okay, we're going to move forward with AI, and maybe, just maybe, it'll make us more appreciative of human to human, IRL communication, which I think is always wonderful to get back to. So, Jeremy, thank you so much for being here. I appreciated this conversation and your expertise so, so much. Thank you so much. I appreciate you for being here.
B
Thank you.
A
Wow, what an incredible conversation. I hope you enjoyed it as much as I did, especially because I spend so much time on my phone getting through all of these videos that I play for the show and I know that I get into the AI slop videos sometimes. There's good and bad out there, right? And you really can't do a show like this without covering AI because it's, it's everywhere. So the better stewards we can be and the better consumers we can be of this content and the more informed we can be, the better. So I hope you enjoyed this as much as I did and I will see you right back here next time.
B
Vince Colonnades is redefining news talk. I'm Vince Colonnades, host of the Vince podcast. I'm bringing you the truth beneath the headlines of all of the nation's top stories, in depth interviews, and newsmaking interviews with the top guests on the whole planet. I'll ask the questions you only dream of other interviewers asking, and give you a front row seat to the most important conversations of the day. This is a show with an obsessive focus on what's good for America. You are going to love The Vince Show. Follow and listen on your favorite platform.
How To Not Get DUPED By AI w/ Expert Jeremy Carrasco
April 10, 2026
Host: Hayley Caronia
Guest: Jeremy Carrasco (AI & Media Literacy Expert)
In this episode, Hayley Caronia dives into the ever-expanding world of artificial intelligence with guest Jeremy Carrasco, an expert in AI and media literacy. Together, they explore the rapid advancements in AI—particularly AI-generated video—how everyday users (and even experts) are frequently duped by increasingly convincing content, and what this means for our trust in media, education, and even our own abilities. The conversation blends Hayley's trademark wit and skepticism with Jeremy's pragmatic, nuanced insights, spotlighting the urgent need for skepticism, education, and better systems for detecting AI-generated slop.