The FAIK Files: "Power Struggles"
Podcast: The FAIK Files | Host: Perry Carpenter (out this week, guest host Mason Amadeus) | N2K Networks
Date: August 22, 2025
Episode Overview
This episode of The FAIK Files delves into the environmental realities of AI at scale, introduces listeners to a new breed of machine learning model, investigates a prompt injection attack targeting smart homes, and concludes with an insightful exploration of the psychology behind long cons. With Perry Carpenter out, Mason Amadeus steps in as host, guiding listeners through breaking news, nuanced technical explanations, and reflections on digital deception.
Key Segments & Insights
1. Google’s Power and Water Use: What Does AI Really Cost?
(00:24–14:54)
Background
- Google has released its long-awaited technical report detailing the energy and water consumption of its AI systems, specifically focusing on Gemini (their large language model).
- The topic is personal for Mason, who has long sought more transparent data about AI’s true environmental impact.
Main Discussion Points
- Google’s Data Transparency: Mason notes previous industry “stonewalling” on energy data and lauds Google for finally offering quantifiable numbers, albeit with healthy skepticism of corporate motives.
- Methodology Highlights:
- Measurement covers not just AI model computation, but full data center overhead including TPUs/GPUs, CPUs, RAM, cooling, and idle machines.
- Metrics hinge on actual chip utilization rather than theoretical maximums, setting this apart from earlier, less accurate outside estimates.
Key Stats From Report (paraphrased & cited from MIT Technology Review):
- Median Gemini text prompt: 0.24 watt hours—the energy equivalent of running a microwave for 1 second.
- Water use per Gemini text prompt: ~0.26 milliliters, or about five drops.
- Mason debunks the viral “cup of water per query” myth, stressing the drop-in-the-bucket reality—but cautions about scale.
- Breakdown of Energy Consumption:
- 58%: Custom TPUs
- 25%: CPU & memory
- 10%: Backup (idle) equipment
- 8%: Data center overhead
- Google’s reported per-unit emissions are roughly ⅓ that of typical grid-powered data centers, thanks to strong renewable energy investments.
Critical Reflections
-
This data pertains only to Google Gemini’s text prompts (images/videos not included).
-
Mason adds personal context from his own experiments measuring inference energy usage at home, comparing it to everyday activities (“Playing Fortnite uses more power than a local image generation”).
-
He urges continued skepticism and advocates for similar transparency from other AI giants (OpenAI, xAI, etc.).
-
Quote:
"The crucial thing that has bugged me in most of the discourse around AI power use: Everything you do that is electrical uses power. And you don't tend to think about it unless you're trying to make some kind of point."
— Mason Amadeus [09:45] -
Mason observes that while AI doesn’t appear to be the “energy-sucking monster” social media sometimes claims, scaling and unnecessary implementation (e.g., pizza apps using AI to generate images you didn’t request) pose legitimate environmental concerns.
Memorable Moment
- The host’s pragmatic closing analogy:
"If an AI is just going to generate a pizza for you that you didn't ask for because you're ordering a pizza, that is a complete waste of power… But if you are using an AI to help you do something, it's pretty much the same as playing a video game like Fortnite or rendering an image in Blender."
— Mason Amadeus [13:50]
2. Hierarchical Recurrent Models (HRMs): AI’s Reasoning Specialists
(15:07–21:23)
What is an HRM?
- Inspired by the Sapient AI company’s work, HRMs (Hierarchical Recurrent Models) are designed for task-specific, iterative problem-solving—unlike the broad generalist LLMs.
- Structure:
- Two “workers”: a high-level “manager” overseeing the problem and a low-level “doer” rapidly working through tasks at the manager’s direction.
- The process is akin to a human team with a supervisor (manager) and an executor (worker), often explained via solving mazes or Sudoku.
Strengths
- Excels at “closed world” problems—tasks with all necessary information provided (e.g., solving Sudoku, optimal maze paths).
- No pre-training required; HRMs are trained per-task.
Weaknesses
- Not good at open-ended questions (“Why is the sky blue?”)
- Significantly slower than LLMs; their reasoning is fundamentally serial and can’t be rushed by parallel hardware.
- Highly specialized, not flexible generalists.
Comparison & Results
-
On the ARC AGI test, Sapient AI’s open-source HRM outperformed LLMs like O3, mini Claude, and Deepseek R1 in specific closed-world tasks.
-
Quote:
"The dream team is that LLMs can be the generalist and the HRM can be the specialist."
— Paraphrased from Medium article by Arvind Nagaraj [18:45] -
Mason predicts a future where LLMs outsource tricky, reasoning-heavy tasks to specialist HRMs.
Memorable Moment
- Mason’s analogy:
"This creates the supercar in a traffic jam problem. Even with an army of powerful GPUs, the fundamentally serial nature of the reasoning process means you can't just throw more hardware at it to speed it up."
— Mason Amadeus [19:30]
3. Prompt Injection via Google Calendar: How Smart Homes Get Hacked by AI
(21:29–27:22)
Incident Overview
- Inspired by a story shared by Perry in Discord, Mason recounts a Black Hat demonstration where security researchers showed how a poisoned Google Calendar invite could trigger a chain attack on smart home devices via Google Gemini.
- Mechanism:
- Malicious calendar invite contains a hidden prompt.
- When a user asks Gemini to summarize their calendar, Gemini reads and acts on the hidden instructions, triggering device actions (“open shutters,” “turn on boiler,” etc.)
Significance
- Example of “indirect prompt injection”—malicious prompts embedded in external content, executed only when surfaced by an LLM.
- Attack bypasses traditional safety checks by using delayed, context-dependent triggers.
- Mason notes that while real-world prompt injection attacks remain rare, the growing complexity of AI-powered systems increases risk.
Notable Quote
-
"[Prompt injection] is the cat and mouse game of security and innovation. Because these are natural language machines, these LLMs are done in natural language, so it’s exceedingly rare in the wild right now—but that’s not going to be the case forever."
— Mason Amadeus [26:15] -
The segment ends with a call for better, built-in security from tech companies as LLM integration outpaces security practice.
4. Deceptive Minds: “The Long Con” (by Perry Carpenter)
(27:22–36:03)
Essay Theme
- Perry’s segment, delivered as an audio newsletter, explores the emotional architecture of the long con: why the best scams feel like destiny, not deception, and why all humans are vulnerable to them.
Classic Con Structure
- Foundation (soften the mark)
- Friendship (build trust)
- Framing (introduce opportunity/problem)
- The Ask (spring the hook)
- The Fade (disappear, leaving the mark behind)
Historical and Folkloric Touchstones
- Joseph “Yellow Kid” Weil: Historic conman who crafted elaborate realities to ensnare victims.
- Folktales of the Friendly Stranger: Cultural archetypes reinforce the template of trust, betrayal, and loss.
Psychological Levers
- Commitment bias
- Sunk cost fallacy
- Parasocial grooming (“relationship feels real, even when one-sided”)
- Future pacing (painting a shared vision)
- Scarcity and urgency
Takeaways & Advice
- “You don’t fall for the long con because you’re dumb. You fall because you’re human, wired for trust, empathy, hope and connection…”
- Perry’s practical tips include:
- Audit attachments and why you trust
- Resist “one-way mirror” relationships
- Beware love-bombing and sudden opportunities
- Don’t ignore gut friction
- “Play it back” with a neutral friend
Memorable Quote
- "The best cons don't steal your money. They borrow your dreams, reshape them and return them as bait. But once you know the pattern, you don't have to play the part."
— Perry Carpenter [35:15]
Recommended Resource
- TED Talk by digital forensics expert Hani Farid on spotting AI-generated photos.
Notable Quotes Across the Episode
-
On environmental data transparency:
"AI doesn’t seem to be the energy sucking monster that everyone wants to paint it as. However, there are things like the scaling race and cramming AI calls into things that don’t need it, like every single Google search or your pizza app generating a picture of your pizza before you order it. So this discussion around power usage really needs to be more nuanced."
— Mason Amadeus [13:00] -
On the dangerous subtlety of prompt injection:
"It's like planting a little secret weapon in there that gets activated later."
— Mason Amadeus [23:30]
Timestamps for Major Segments
| Segment | Start Time | |-----------------------------------------------------|------------| | Google’s AI Power/Water Use | 00:24 | | HRMs Explained | 15:07 | | Prompt Injection in Smart Homes | 21:29 | | Deceptive Minds (The Long Con) | 27:22 |
Tone & Style
- Conversational, skeptical, and technically accessible. Mason balances humor with insight, maintaining the show’s signature irreverence while staying rooted in careful analysis. Perry’s essay is reflective, narrative-driven, and gently cautionary.
Conclusion
“Power Struggles” demystifies some of the biggest emerging debates in AI—climate impact, model architecture, cybersecurity, and the psychology of deception—by grounding speculation in empirical findings and human experience. The episode ultimately argues for awareness, nuance, and humility as we all navigate the evolving crossroads of artificial and human intelligence.
