Family IT Guy Podcast
Episode: AI Videos vs Deepfakes: How to Tell What’s Real Online
Host: Family IT Guy (Ben Gillenwater)
Guest: Jeremy Carrasco
Date: February 3, 2026
Episode Overview
This episode dives deep into the challenges of distinguishing real videos and images from AI-generated content and deepfakes on social media. Featuring media producer and AI video expert Jeremy Carrasco, the discussion explores the nuances of detection, practical tips for families and parents, the risks of algorithmic social feeds (especially for children), the dangers of AI slop and propaganda, and the crucial role of skepticism and critical thinking in the digital age.
Guest Introduction & Background
[00:45 - 02:43]
- Jeremy Carrasco shares his extensive experience in media and video production, including live streaming for major events and a side gig as a music teacher to young children.
- His hands-on technical background helps him spot subtle flaws in AI-generated media that most viewers miss.
- Jeremy expresses a strong personal motivation to make the internet safer for kids, as an uncle rather than a parent himself.
Theme 1: How AI Video Generation Works
The Core Process
[05:46 - 11:01]
- AI-generated video is created via massive datasets, using diffusion models that iteratively reduce noise to form images—Jeremy likens this to turning static into a dog running in a video.
- The training data can't encode every human detail (like breathing or other subtle physical moments), so those details are what the models most often get wrong.
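The "reduce noise step by step" idea can be sketched in a few lines. This is a toy illustration only, not a real diffusion model: a real generator uses a trained neural network to predict the noise at each step, whereas here the "clean" target is simply given so that only the iterative loop structure is visible.

```python
import numpy as np

def toy_denoise(target, steps=50, seed=0):
    """Toy sketch of the diffusion idea: start from pure static and
    iteratively remove a little noise each step, nudging toward a clean
    signal. Real models *predict* the noise with a neural network; here
    the clean signal is given, so this shows only the loop, not a
    generator."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)      # step 0: pure static
    for t in range(steps):
        alpha = (t + 1) / steps                # trust the estimate more each step
        x = (1 - alpha) * x + alpha * target   # remove a bit of the noise
    return x

target = np.linspace(0.0, 1.0, 8)  # stand-in for a "real" image
result = toy_denoise(target)
print(np.allclose(result, target))  # noise fully removed by the last step
```

The point Jeremy makes follows directly: the output is only ever a statistical blend pulled out of noise, so details the training data never captured (breath sounds, micro-pauses) never appear.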
Memorable Quote:
"The thing to understand here is that in order to get something meaningful on the output, you have to do a couple things on the input... There's only so much human-like detail as of now that's being encoded."
— Jeremy Carrasco [07:15]
Durable Indicator – The 'No Breathing' Test
- AI-generated voice/video often misses human subtleties in speech, like natural breathing, pausing, and tone.
- Jeremy notes many AI videos feel 'off' because the physics aren't real; they're just patterns, not physical processes.
Notable Takeaway:
- [09:30] Audio-visual mismatches are a persistent tell for AI fakes — the way audio aligns with movement is "the hardest thing for them to figure out right now," making it a tipoff even as visuals improve.
Theme 2: AI Videos vs. Deepfakes — What's the Difference?
[12:44 - 15:23]
- Deepfakes are targeted manipulations, often just swapping or modifying a face or mouth movement.
- AI Videos (from diffusion models) synthesize entire frames, which creates holistic but sometimes weird or wobbly backgrounds.
- AI failures in video:
- AI Video: "People in the background... will kind of merge together."
- Deepfake: "Edge artifacts... if they turn their head, it might not track well."
Memorable Quote:
"There’s no silver bullet anymore... Average people can’t always tell, is this a deepfake or is this an AI video, without context."
— Jeremy Carrasco [14:18]
Tactic: Different kinds of fakes require looking in different places—focus on backgrounds for AI video, lips/face for deepfakes.
Theme 3: Propaganda, Viral 'AI Slop', and the Challenge of Context
[15:23 - 18:37]
- Political events (Venezuela, etc.) often include both recontextualized real videos and AI-generated fakes.
- Much of what goes viral (so-called AI slop) is low-quality sensationalism made for attention—either AI video or deepfakes.
- Most AI video seen by average users falls into this "AI slop" category, making detection important for everyday browsing.
Theme 4: Personal Practices – Building Healthy Skepticism
[18:47 - 22:44]
- Jeremy recommends:
- Prioritizing content from trusted creators over algorithmic feeds.
- Being skeptical about social media posts, especially if they challenge your worldview.
- Watching out for "sudden sensation" accounts: AI influencers with no history before late 2025 are suspect, especially if they're already brand ambassadors.
- Critical Thinking is vital; don’t accept emotionally-charged or mind-changing content at face value without deeper vetting.
Memorable Quote:
"The first step is understand who you trust... you can't reasonably vet every little thing. Spend more time watching the people you trust rather than relying on the 'for you' page."
— Jeremy Carrasco [19:38]
Theme 5: Impact on Social Media Consumption (& Parenting Implications)
The Hidden Dangers of 'Good' AI Content
[22:51 - 28:56]
- "Only the bad AI videos are easy to spot; the good ones go unnoticed."
- This poses a serious risk: the unnoticed AI content can subtly influence beliefs or push inauthentic stories.
Choosing Real Human Connections Over Passive Consumption
- Encourage kids (and adults) to seek out creators, not just aesthetic "slop."
- Mindless scrolling is especially risky in the AI age; algorithmic feeds are not equipped to distinguish valuable from manipulative content.
- Example: The Russian snowstorm event showed both real and AI content—AI versions exaggerated the spectacle, distracting from genuine human stories of resilience.
Theme 6: Social Media Algorithms and Kids — The Critical Risk
[49:21 - 55:40]
- Jeremy’s top advice for parents:
- "No unmonitored algorithmic use for kids on social media."
- AI-generated shorts and content can quickly go from benign (Bluey, Peppa Pig) to violent or inappropriate through the recommendation algorithms.
- Harmful videos (violent, sexualized, disturbing) can and do slip into kids’ feeds—algorithms can’t reliably distinguish harmful from innocuous content.
- Experiment Example:
- Watching a single "Italian Brainrot" AI children’s video led the YouTube Shorts algorithm to recommend a sexualized AI cat video immediately after, even though the account was seeded only with wholesome kids’ content.
- YouTube Kids offers better moderation but is much less popular and less enforced than regular YouTube, which most kids access.
Memorable Quote:
"Once you understand the algorithm doesn't know the difference, that's when it becomes clear why just unmonitored use should not be a thing."
— Jeremy Carrasco [52:20]
Theme 7: AI, Privacy, and Digital Footprint (Especially for Kids)
[57:38 - 67:43]
- CSAM (Child Sexual Abuse Material) & AI:
- AI-generated images (especially via tools like Grok) can circumvent traditional CSAM detection because they’re “new” material.
- Tech companies struggle to detect/ban these, and the algorithms sometimes surface real children’s content in conjunction with AI sexualized material, creating new pathways for predators.
- Never Post Kids' Photos Publicly:
- Even "private" Instagram accounts or school accounts with many followers are risky—algorithms surface these photos to accounts already engaging with risky or predatory material.
- Instead, use encrypted sharing (Proton Drive, Signal) for family photo sharing.
Memorable Quotes:
"I've seen these rabbit holes turn very dark and very real, very quickly... I've seen firsthand why you should NOT post photos of your kids on social media."
— Jeremy Carrasco [63:13]
"We are stewards of their privacy... Even if it’s innocent now, kids may look back and wish those photos weren't posted."
— Family IT Guy [66:49]
Theme 8: Technical Solutions to Privacy
[67:54 - 73:54]
- Privacy Tiers:
- Most to least private: end-to-end encrypted sharing with user-held keys (Proton Drive, Signal) > Apple/Google Photos and iMessage > social media.
- Apple’s “Advanced Data Protection” lets you hold your own encryption key; Google and Apple are relatively secure even without it, but in that default mode the companies still technically control access to the data.
- Losing your private key (with Proton or Apple’s advanced mode) means losing your data forever; safe storage is critical.
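The trade-off behind these tiers can be shown with a toy cipher. This is a sketch of the *key-ownership principle* only — real services like Proton Drive and Signal use vetted ciphers such as AES-GCM, not the XOR pad below:

```python
import secrets

def xor_bytes(data, key):
    """XOR each byte with a key byte (a one-time pad when the key is
    random and used once). Toy illustration of user-held keys only --
    real services use vetted authenticated ciphers, not this."""
    return bytes(b ^ k for b, k in zip(data, key))

photo = b"family-photo-bytes"
key = secrets.token_bytes(len(photo))       # only you hold this

ciphertext = xor_bytes(photo, key)          # what the service stores
assert xor_bytes(ciphertext, key) == photo  # key holder can recover it
# Lose `key` and the photo is unrecoverable -- which is exactly the
# "losing your private key means losing your data" warning above.
```

The design choice is the whole story: when the service never sees the key, it cannot read your photos — and it also cannot rescue them if the key is lost.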
Theme 9: The Role of AI Detection & Trust
[43:41 - 49:07]
- Jeremy will not use AI to generate or deepfake his own face for public content.
- Doing so destroys trust.
- Once followers suspect or detect you’re faking yourself, you can’t regain that trust.
- Practical Parenting Lesson:
- Stress honesty, transparency, and long-term thinking regarding reputation and digital presence.
Practical Parent & Family Guidance
- No unsupervised algorithmic media for kids.
- Use YouTube Kids' “Approved Content Only” mode when possible; restrict regular YouTube access.
- Do not post kids' faces/photos on social media, even in 'private' groups.
- Teach children and adults critical thinking and media literacy—skepticism, fact-checking, and awareness of algorithms are essential.
- Favor real human connection content—engage with creators, not just viral slop.
- Share family photos via encrypted services, not 'broadcast' platforms.
Notable Quotes & Moments
- "AI doesn't breathe. The words just keep coming… Even if I close my eyes and just listen, I can tell if it's AI." — Family IT Guy summarizing a principle [04:45]
- "Only the bad AI videos are easy to spot, the good ones go unnoticed." — Family IT Guy [22:51]
- "If your kid had to choose between being on Roblox with friends or doomscrolling YouTube Shorts—there’s no comparison, they should be with friends." — Jeremy Carrasco [28:55]
- "You can't out-content AI. Other AI-generated people are going to out-content you. So you need to just… lean into who you are if you want to be a content creator." — Jeremy Carrasco [47:10]
Where to Find Jeremy Carrasco
[75:52]
- Find Jeremy by searching his name on your preferred social platforms.
- Upcoming platform: Riddance AI ("helping people figure out what’s real in the age of AI").
- Former/current handle: @ShowToolsAI
Key Takeaway
"No unmonitored, algorithmic use for kids on social media is a really big thing. They won’t be able to tell what is real or not. It is too advanced. Even without AI, it was never a good idea. The algorithm doesn’t know the difference."
— Jeremy Carrasco [49:21]
This episode is essential listening for any parent, teacher, or concerned adult trying to make sense of the runaway advances in AI media, and offers both detailed technical insight and practical, emotionally resonant advice for keeping children (and families) safe online.
