The FAIK Files – "The Trough of Disillusionment"
Host: Perry Carpenter (with co-host Mason Amadeus)
Date: October 24, 2025
Podcast Theme: An inquisitive, humorous, and sometimes unsettling exploration into the impacts of AI on technology, society, and truth—with real-world stories, research, and the “wacky” frontier where fake meets fact.
Overview: The Episode at a Glance
This episode, "The Trough of Disillusionment," is a whirlwind tour through the latest developments and anxieties in AI, focusing on Google's new Gemini models, AI reliability (and pitfalls), the evolving realities of AI-driven job loss, and sophisticated deepfake threats. Perry and Mason blend technical depth with approachable humor as they dissect current events, research, and “out in the wild” examples—illustrating how both the promise and peril of AI are colliding at an accelerating pace.
Key Segments and Insights
1. News Roundup: Google Gemini 3.0, Multimodal Models, and AI as an Agent
[02:57] - [16:19]
- Google Gemini 3.0 Rollout
- Google begins a soft launch of Gemini 3.0 Pro—their most advanced multimodal large language model—enhancing text, image, and potential audio capabilities.
- Quote:
"Google has quietly begun rolling out Gemini 3.0 Pro, the latest and most advanced version of its multimodal large language model." – Mason [13:23]
- New Computer Use Capability
- Gemini 2.5 introduced AI that can interact with web and mobile interfaces like a human (e.g., clicking, form completion).
- The hosts discuss the challenge: "visual processing" for bots is hard, despite a legacy of “screen scraping” and accessibility tech.
- Industry Insight (see the prompt-injection sketch at the end of this section):
"All these are very, very susceptible to prompt injection … you can do bad things." – Perry [04:21]
- OpenAI’s Atlas Browser
- OpenAI similarly released Atlas, an AI-powered browser/agent that can act independently based on prompts, expanding the attack surface.
- Pace of Advancement
- The AI field moves so fast that early iterations “suck,” but rapid, iterative improvement makes it dangerous to dismiss nascent features.
- Quote:
"The thing that we can't do is dismiss its future potential." – Perry [07:01]
- Google’s Market Position
- Both hosts agree that, despite stumbles, Google’s infrastructure, data, and ecosystem make it a major long-term contender.
- Perry recalls Google’s foundational role:
"That’s the innovation … the attention is all you need paper (2017) that gave birth to the transformer model. They got us to where we are now." [13:54]
2. AI Messes Up: Reliability, Overtrust, and "The Trough of Disillusionment"
[17:16] - [33:17]
- Why Trusting AI Is Risky
- Users become complacent after repeated AI successes but are often “bit” by errors.
- Quote:
"You see it work well enough times that you start to become complacent, and then you get bit and then you get embarrassed." – Perry [18:03]
- The Gartner Hype Cycle
- Perry introduces the “hype cycle” with its "Peak of Inflated Expectations," "Trough of Disillusionment," and eventual "Plateau of Productivity."
- Memorable Exchange:
"It looks like an elastic curve…" – Mason [21:44] "…then you get to the plateau of productivity where it’s actually useful, it’s predictable, it’s semi-transparent..." – Perry [21:31]
- Major AI Study:
- The largest study of its kind finds AI assistants misrepresent news content 45% of the time.
- 31% of responses showed serious sourcing problems; 20% had major accuracy issues (e.g., hallucination, outdated info).
- Gemini performed the worst: significant issues in 76% of its responses, especially with sourcing.
- Quote:
"45% of all AI answers had at least one significant issue … 31% had serious sourcing problems..." – Perry [25:13] "Gemini performed worse… significant issues in 76% of responses, more than double the other assistants…" – Perry [26:54] - The study comes from the European Broadcasting Union/BBC and includes tools for assessing AI accuracy for journalists and the public.
- AI-Generated Code Vulnerabilities
- 1 in 5 organizations suffered serious incidents due to AI-generated code.
- Cautionary note on over-reliance and the need for human code review (see the sketch below).
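As a concrete illustration of the "one in five organizations" point, here is a minimal Python sketch of the kind of flaw that slips through when AI-generated code ships without review: SQL built by string formatting versus the reviewed, parameterized version. The table and data are hypothetical; only the standard-library sqlite3 module is used.

```python
# Minimal sketch of a classic vulnerability that unreviewed AI-generated code
# can introduce: user input concatenated into SQL. Table and data are
# hypothetical; sqlite3 ships with Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_email_unsafe(name: str):
    # Pattern code assistants often emit: input interpolated straight into SQL.
    query = f"SELECT email FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_email_safe(name: str):
    # Reviewed version: placeholder binding keeps the input as data, not SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_email_unsafe(payload))  # injection dumps every row: [('alice@example.com',)]
print(find_email_safe(payload))    # parameterized query returns nothing: []
```

The point is not this specific bug but the habit it argues for: AI-assisted code needs the same, or stricter, review as human-written code before it reaches production.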
3. AI and Jobs: Reality Check on Automation, Layoffs, and Labor Market Trends
[34:52] - [49:27]
- AI-Driven Job Loss: Myth vs. Reality
- Anecdotal reports from people who were “told straight up” that their layoffs were driven by AI adoption; Amazon’s leaked plans to avoid hiring 600,000+ US workers by 2033 through automation.
- Quote:
"Amazon is reportedly leaning into automation … to avoid hiring more than half a million US workers." – Mason [35:55]
- PR Spin vs. Impact
- Amazon reportedly considers “community projects” and softer language (‘advanced technology’ instead of ‘automation’ or ‘AI’) to offset the expected backlash.
- What the Data Says
- The Brookings Institution finds no “AI jobs apocalypse… for now”: the labor market is stable overall, though entry-level “administrative” and early-career postings are down.
- Entry-level job postings declined ~35% since Jan 2023.
- Trend: immediate pain is concentrated among new entrants, with major restructuring expected in the years ahead.
- Systemic Hiring Challenges
- AI now dominates resume screening and even conducts initial interviews, compounding the difficulty for job-seekers.
- Quote:
"How do those people become valuable mid and senior career people?" – Perry [44:56]
4. Deepfakes in the Wild: Scams, Disinformation, and the New “Believability” Threshold
[50:52] - [72:35]
- Sophisticated Deepfake Scams
- Voicemail Attack on Darktrace CEO
- Attackers used a cloned voice of the CEO to leave ringless voicemails requesting sensitive business info.
- Showcases how deepfake tech is bridging from entertainment to targeted social engineering/fraud.
- Perry explains:
"If there’s any technology that’s useful for marketing or PR, it's useful for a scammer." [54:56]
- UK Politics: Deepfake Video Falsely Announcing Politician’s Defection
- Lip-sync deepfake made an MP appear to “announce” joining a rival party.
- Example reviewed and explained with help from a TikTok explainer ([56:39]). The fabrication included convincing verbal and visual manipulation.
- Commentary on quality:
"I don’t know that I fully agree that his voice sounded unnatural... I think the audio was really good on that fake." – Perry [58:05]
- Irish Election: Deepfake Announces Major Candidate Withdrawal, Claims Election Cancelled
- The deepfake included a full “news package” with legitimate-looking network branding, crowds, and environmental audio, raising the bar for believability.
- Perry and Mason verify the scene’s real location, revealing how deepfakes will increasingly blend with actual context.
- Key Insight:
"There’s the deception—the person saying the thing. Then there’s the packaging … And then you have to figure out, how do I push that out to the world?" – Perry [59:14] "If we didn’t know we were looking at a deepfake, we’d just be like, ‘oh yeah’..." – Mason [63:06]
- Broader Implications
- Deepfakes now require critical thinking not just about content, but about production, context, and distribution—a “script kiddie versus APT” distinction in social engineering.
- Quote:
"When you take … multiple tools together, you’re connecting broader themes … you’re putting the hacker hoodie on for a second and saying, 'if I wanted to do this, how would I do it?'" – Perry [70:40]
Notable Quotes & Memorable Moments
- "This stuff is moving crazy fast. So when we look at that initial thing and see how bad it is, the thing that we can’t do is dismiss its future potential." – Perry [07:01]
- "Google basically has all the data. All the data. Just leave it there." – Perry [14:14]
- "You see it work well enough times that you start to become complacent, and then you get bit and then you get embarrassed." – Perry [18:03]
- "45% of all AI answers had at least one significant issue … 31% had serious sourcing problems..." – Perry [25:13]
- "Gemini performed worse… significant issues in 76% of responses…" – Perry [26:54]
- "It is to the point now where if you post a job opening on LinkedIn or Indeed … you’re immediately flooded with thousands of applicants. So no human can deal with that. So companies are having to turn to AI to deal with that." – Perry [48:34]
- "If there’s any technology that’s useful for marketing or PR, it's useful for a scammer." – Perry [54:56]
- "We have shifted past ‘seeing is believing’ when it comes to video of something happening." – Mason [67:48]
Timestamps for Key Topics
- 02:57 – News round-up: Google Gemini, OpenAI Atlas, and AI multimodality
- 07:01 – Discussing the pace of AI advancement and the “don’t dismiss” message
- 13:23 – The soft launch and new features of Gemini 3.0
- 17:16 – AI’s unreliability and overtrust: users’ complacency
- 18:34 – The Gartner Hype Cycle: “Trough of Disillusionment”
- 25:13 – Major study on AI assistants: misrepresentation and accuracy issues
- 32:31 – One in five organizations had AI-generated code–linked incidents
- 34:52 – AI job loss in the news: Confirmed layoffs, Amazon automation
- 41:14 – Public skepticism: Can companies actually be “good” or ethical?
- 44:49 – Decline in entry-level jobs; summary of market data
- 48:34 – The applicant’s experience: AI-dominant hiring pipelines
- 50:52 – Deepfakes in the wild: New attacks, scam voicemails, and political fakes
- 59:14 – “Packaging” a believable deepfake (audio/environment/branding)
- 67:48 – “We have shifted past ‘seeing is believing’...”
- 70:40 – Synthesizing deepfake creation: “APT” level production
Tone and Style
Perry and Mason maintain a conversational, approachable tone; they are irreverent and wry but balance this with technical rigor and critical insight. Self-awareness, pop culture references (e.g., Rickrolling OpenAI), and a dash of meta-commentary (e.g., on PR spin or hype cycles) keep the show lively and relatable, even as they deliver serious, sometimes unsettling content.
Final Thoughts
This episode illustrates—through technical news, empirical research, and real-world examples—how the “Trough of Disillusionment” is both a warning and a call to critical thinking:
- AI capabilities (and risks) are advancing rapidly.
- Human complacency and overtrust are dangerous.
- Jobs, news, and reality itself are all being remixed by artificial intelligence.
- Disinformation and scam capabilities are already “in the wild,” requiring new skills of skepticism, verification, and (sometimes) humility.
Next Week: Expect more hands-on testing of Google Gemini features, additional field reports from AI’s front lines, and, as always, deep dives into the “weird, exciting, and scary” future of AI.
Resources/References
- News Integrity and AI Assistance report (PDF)
- [Deepfakes Ops Class – discount code: FAKE150 / SHUTDOWN50]
- [Discord & voicemail info in show notes]
Listener callout:
"If you have Gemini 3, email us: helloithlayermedia.com" – Mason [15:04]
End of summary.
