Settle In with PBS News
Episode: How to fight AI slop, according to Hany Farid
Date: March 3, 2026
Host: Amna Nawaz
Guest: Hany Farid, Professor at UC Berkeley, Chief Science Officer at Get Real Security
Episode Overview
This episode explores the rapid advancement and growing challenges of AI-generated media—deepfakes and generative AI—and their impact on truth, trust, and everyday life. Digital forensics pioneer Hany Farid joins to discuss what “AI slop” means, the dangers and benefits of generative technologies, how the threat landscape has evolved, and what individuals and society can do to defend reality.
Key Discussion Points and Insights
1. Hany Farid’s Journey into Digital Forensics
- Origins of Concern ([01:23]–[04:51])
- Farid began his work in 1999, when digital manipulation was primitive compared to today.
- Initially, his work focused on authenticating evidence for courts of law, not anticipating the explosion of AI fakes and democratized manipulation.
- “What started 25 years ago as a bespoke narrow field has really changed dramatically.” — Hany Farid [02:08]
- Shift in Landscape
- The problem expanded from legal evidence to mass disinformation and state-sponsored campaigns, accelerated by the rise of social media.
2. Speed & Scale of the Problem
- Technological Acceleration ([05:08]–[06:59])
- Technology upgrade cycles have shrunk from 12–18 months to 12–18 weeks, sometimes as little as 12–18 days.
- Information spreads within minutes; the half-life of a typical social media post is about 90–120 seconds.
- Algorithmic Incentives
- “The algorithms have learned how to spread the most salacious, outrageous, conspiratorial content, because that's what the billions of people online click on.” — Hany Farid [06:46]
- Algorithms prefer misinformation because it drives engagement and profit.
3. Deepfakes & Generative AI: Definitions and Mechanisms
- What is a deepfake? ([08:14]–[11:58])
- Farid questions the term “deepfake,” preferring “generative AI.”
- Capabilities include text-to-image, text-to-video, talking-head video synthesis, and voice cloning—with minimal input required.
- “We have gone from needing hours of audio and hundreds of images to 10 to 15 seconds of audio and one image.” — Hany Farid [11:19]
- Threat Vector Expansion
- Anyone with a digital presence is now at risk, not just celebrities or public figures.
4. The Double-Edged Sword: Positive and Negative Uses
- Potential for Good ([12:18]–[14:33])
- A teenager could use these tools to create blockbuster-level creative works.
- But real-world harm—non-consensual pornography, fraud, child exploitation, and democracy-threatening disinformation—is outpacing the benefits.
- “Our job is to start putting in guardrails... Let's not repeat the mistake of the last 25 years with how we allow technology to be weaponized.” — Hany Farid [14:04]
- Guardrails Needed
- Technological, policy, regulatory, and liability frameworks are needed to balance innovation and harm.
5. Real-World Impacts & Everyday Threats
- Individual Targeting ([15:20]–[17:40])
- Examples: scam calls using cloned family members’ voices, AI-generated explicit blackmail images, and supercharged phishing schemes.
- Disinformation can affect how people vote and perceive reality.
- Personal Story
- Amna Nawaz recounts being targeted by an AI-generated video that reached over 17 million views before she could respond.
6. Can You Spot a Fake?
- Detection is Difficult ([18:06]–[20:23])
- Farid’s research shows people are “not much better than flipping a coin” at distinguishing fakes from real images, audio, or video—even in controlled settings.
- “Using your senses, you have to understand, is at best limited.” — Hany Farid [18:51]
- Practical Advice
- Awareness is vital: know that sophisticated fakery happens.
- Analog solutions: Farid suggests using family “code words” to verify urgent or suspicious communication ([20:23]).
- Always “hang up and call back” to verify a suspicious phone request.
7. The Lab Perspective: Digital Forensics at Scale
- Defensive Challenges ([21:14]–[23:41])
- Defenders are always behind attackers.
- Current focus: using biometrics (face, voice) and contextual cues to authenticate people in images and video.
- Detection is improving but never foolproof: “We don't eliminate threats, we mitigate them.” — Hany Farid [22:36]
8. AI Misuse: The Hallucination Problem
- False Confidence in AI-Enhanced Media ([23:41]–[28:16])
- Cases where AI-enhanced images have been mistakenly used to identify people (e.g., masked law enforcement or suspects).
- AI “hallucinates”: it confidently creates plausible-looking but completely false images or text responses.
- “This is the mother of all hallucinations. AI doesn't know what somebody under a ski mask looks like.” — Hany Farid [25:15]
- The public and press often trust AI’s output more than they should.
9. Government & Institutional Failures
- United States’ New Participation in Disinformation ([29:31]–[31:41])
- Farid notes it is new and alarming to see the US government distributing manipulated political images.
- “It is wholly inappropriate for the President of the United States and for the administration to be posting videos of the Obamas the way they did... it demeans the White House.” — Hany Farid [29:49]
- Long-term effect: erosion of trust in institutions.
10. Restoring Trust & Cultural Change
- Trusted Information Sources ([32:04]–[34:30])
- Farid urges: “Stop, for the love of God, trying to get news and information from social media... It is not a place to become an informed citizen.” — Hany Farid [32:26]
- Speed comes at the cost of accuracy.
- Transitioning back to fact-based, slower, professional journalism is essential, but will require a massive cultural shift akin to smoking cessation campaigns.
11. Accountability and Policy Solutions
- Corporate Responsibility ([36:30]–[38:41])
- Real change happens when penalties outweigh profits.
- “When did companies start making better products that were safer? ... When the liability laws said you create a product that you knew or should have known was going to create harm, we are going to sue you back to the dark ages.” — Hany Farid [37:34]
- Major lawsuits (e.g., against Meta) could create an “awakening.”
12. Generational Perspectives & Policy Tools
- Young People’s Skepticism ([38:41]–[40:47])
- Today’s young people are more skeptical by default and show analog nostalgia (e.g., Polaroids, flip phones).
- Policy suggestions: ban social media for under-16s, hold tech giants liable, and break up monopolies to foster safer tech.
- Empowering Parents
- Legislation can give parents the tools and authority to protect children from harmful platforms.
13. Building Defensive Capacity
- Shortage of Digital Forensics Experts ([43:20]–[45:13])
- Not enough training or funding for defenders; most energy, talent, and investment go into generative AI and other offensive technologies.
- Defense is less lucrative but desperately needed; young people’s idealism offers some hope.
14. What Can Individuals Do?
- Personal Agency ([45:52]–[47:51])
- Learn about AI and digital manipulation; ignorance is not an option.
- Vote for politicians championing tech responsibility and support responsible companies.
- Consumer protest works: mass subscriber cancellations can reverse corporate decisions.
- “We have power even when it doesn't seem like that. And so let's, let's exercise it. Let's demand more of our corporate overlords. Let's demand more of our elected officials.” — Hany Farid [46:37]
15. Staying Motivated in a Tough Battle
- Farid’s Motivation ([47:51]–[49:26])
- Some days are tough, but giving in to defeatism is not an option.
- Everyone doing their part is the only viable path forward.
Memorable Quotes
- On Tech’s Fast Pace: “We used to measure these things in 12 to 18 months... now it's 12 to 18 weeks, sometimes 12 to 18 days.” — Hany Farid [05:30]
- Algorithmic Incentives: “The algorithms... prefer algorithmically the spread of mis- and disinformation because that's what leads to user engagement.” — Hany Farid [06:41]
- Human Ability to Spot Fakes: “The average person is not much better than flipping a coin.” — Hany Farid [18:51]
- On Using Social Media for News: “Stop, for the love of God, trying to get news and information from social media.” — Hany Farid [32:26]
- Systemic Solutions: “We don't eliminate threats, we mitigate them.” — Hany Farid [22:36]
- Policy Analogy: “The parallels to tobacco are not far off.” — Hany Farid [34:59]
- Individual Power: “We have power even when it doesn't seem like that.” — Hany Farid [46:37]
Key Timestamps
- 01:23—Farid’s origin story and the evolution of the threat
- 05:30—Speed of change in technology and information spread
- 08:14—What is a deepfake and how are they made?
- 11:19—The expanding threat to everyday people
- 14:33—Positive creative potential of generative AI vs. harms
- 17:40—Real-life scams and threats facing ordinary people today
- 18:51—Studies show people can’t really spot fakes
- 20:23—Analog defenses: code words and callbacks
- 22:36—Forensic challenges and the never-ending defense game
- 25:15—AI hallucinations: making up faces and events
- 29:49—US government participation in manipulative media
- 32:26—Advice: Don’t trust news from social media
- 34:59—Parallels to tobacco, need for lawsuits and regulation
- 37:34—Product safety lessons for tech accountability
- 39:23—Generational shifts in skepticism
- 43:41—Need for more digital forensics experts
- 46:37—Grassroots power and consumer activism
- 48:10—Why Farid keeps faith in progress
Actionable Takeaways
For Individuals:
- Learn basic digital literacy about AI and media authenticity
- Use code words and callbacks before acting on urgent, emotionally charged requests
- Limit reliance on social media for news—consult trusted sources
- Voice opinions and vote for leaders prioritizing tech responsibility
- Consumer actions such as cancellations and feedback make a difference
For Society:
- Pursue policy reform: regulation, liability, youth protections, and antitrust
- Invest in digital forensics education and defenses
- Hold institutions and officials accountable for transparency and accuracy
Tone of the Conversation
Insightful, urgent, occasionally humorous, and ultimately hopeful—with both speakers emphasizing practicality, personal responsibility, and the collective power to drive positive change even in the face of rapid technological upheaval.
