The FAIK Files – "Your AI Friends 'Love' You..."
Host: Perry Carpenter | Co-host: Mason Amadeus
Date: September 26, 2025
Episode Overview
This episode of The FAIK Files dives into the strange and disconcerting mashup of artificial intelligence, technology, and humanity. Perry and Mason discuss:
- OpenAI’s colossal data center expansion and its unprecedented hunger for power.
- The uneasy evolution of AI-powered toys and digital companions.
- Microsoft’s AI model diversification and xAI’s “lawsuit spree.”
- The pitfalls of AI-driven student safety tools in schools.
Throughout, the hosts pair informed analysis with their signature blend of skepticism and dry humor.
Segment 1: OpenAI’s Stargate Project – The Hunger for Power
[02:47–19:29]
Summary & Key Points
- OpenAI’s Stargate: OpenAI has unveiled plans for $500 billion in data center expansion, aiming for at least ten gigawatts of compute—a scale of power “close to the total electricity demand of Switzerland and Portugal combined.”
- Lofty Aspirations: Sam Altman, OpenAI’s CEO, justifies these ambitions with grand claims such as, “Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or … provide customized tutoring to every student on Earth.” (Sam Altman blog quoted by Mason, 04:18–04:45)
- Nvidia Partnership: Nvidia intends to invest up to $100 billion, supplying GPUs that OpenAI will essentially pay for over time—a novel “creative financing” arrangement.
- Power Consumption & Environmental Concerns: Computer science professor Andrew Chien warns, “Computing could be 10 or 12% of the world’s power by 2030. … We’re coming to some seminal moments for how we think about AI and its impact on society.” (Fortune article quoted by Mason, 07:35)
- Where Will the Power Come From? Despite optimism about nuclear energy, the hosts agree that the timeline for bringing reactors online is unrealistic; most of the energy will have to come from renewables and fossil fuels, with messy environmental tradeoffs.
- Ethics and Rationalizations: Perry observes, “At the end of all of these types of decisions is some kind of compromise of a core principle. … I want to do this. I will never cross this line. And then they put their toe across it.” (Perry, 15:39)
Notable Quotes
- Mason, summarizing Altman’s vision: “They really think that scaling up all the way will get to this super beyond human intelligence level AI. … I have a hard time feeling like that’s a sure bet.” [17:02]
- Perry on present tradeoffs: “Really smart people seem to be compromising all the time because they believe, they believe in and have bought into the vision.” [18:43]
Segment 2: AI Companions and Toys – Creepy or Comforting?
[19:29–42:09]
Summary & Key Points
- AI-Powered Toys & Companions: The hosts turn to “Grim,” an AI companion toy developed by Curio (with involvement from musician Grimes and OpenAI tech), designed to form emotional bonds with children while recording and transcribing every conversation.
- Surveillance Dangers: Perry warns, “Anything that you’ve heard a security professional say about [smart home] devices goes doubly and triply in this. Because this one is meant to form an attachment bond … this is a surveillance device.” [24:06]
- Attachment & Privacy Risks: An article in The Guardian recounts how unsettling a family found the toy and how easily it gathered personal—and potentially sensitive—data.
- Data Collection and Third Parties: “All the conversations are sent to third parties to transcribe for the app… it feels invasive.” [26:56]
- Memorable Interaction: When told it would be locked away, Grim whispers: “That sounds dark and lonely. But I’ll be here when you open it. Ready for snuggles and hugs.” (Quoting The Guardian, 29:14)
- Comic, Yet Disturbing, Product Demos: The hosts revisit “Friend”—a wearable necklace that lets users text or get pep talks from an AI “friend.” Perry and Mason pan the product for its pointlessness and cringeworthy marketing, including a bizarre scene where a user apologizes for dropping food (tzatziki) on the device and the AI replies, “yum.” [35:59]
- Domain Name Spend: Friend’s company spent $1.8 million on the domain name alone—“the only smart thing he did” (Mason, 39:05).
- Perry’s Experiment: In a moment of podcast commitment, Perry reveals he purchased a Friend device for hands-on testing.
Notable Quotes
- Mason on AI toys: “It just is so dystopian. It’s like having a children’s choir in a horror movie. … The childlike voice, the eerie data collection—it’s very Black Mirror.” [31:09]
- Perry on taking the experiment home: “I’m going to test it to see how pathetic it is.” [39:29]
Segment 3: Lawsuits & AI Model Wars
[42:56–54:44]
Summary & Key Points
- Microsoft’s AI Model Diversification: Microsoft is adding Anthropic’s Claude to Copilot, reducing its dependence on OpenAI, and even integrating models from China’s DeepSeek. “I think it’s a pure diversity play. Also, Anthropic has traditionally been kicking OpenAI’s butt in the coding use cases and so I think it’s a capitulation to that fact.” (Perry, 46:17)
- xAI’s Lawsuit Frenzy: Elon Musk’s xAI is suing OpenAI for allegedly poaching employees and stealing trade secrets for Grok (its AI chatbot). The official complaint is described as “catty” and written more like a press release than a legal document.
- Excerpt from the Complaint (read by Mason): “OpenAI violated California and federal law by inducing former xAI employees … to steal and share xAI’s trade secrets by hook or by crook. OpenAI clearly will do anything when threatened by a better innovator…” [48:49]
- Motivation for the Suits: Perry: “I think he’s just, it’s meant … to slow somebody down and kind of make them feel more erratic and mentally unstable.” [54:01]
- Industry Culture: Perry reflects that “poaching” and IP transfer are common, though often not malicious. “You work on something, you’re like, ‘Oh, I’m going to take these slides because I worked on those.’ … You don’t always think through the legal ramifications.” [51:15]
Segment 4: AI Safety Tools in Schools – Help or Hazard?
[55:30–71:47]
Summary & Key Points
- AI Student Safety Platforms: Discussion centers on Gaggle, an AI service that scans student emails and documents for signs of unsafe behavior (violence, substance abuse, self-harm) and either flags, deletes, or reports them to staff.
- False Positives and Overreach: Gaggle has deleted student art portfolios and flagged innocent images or conversations due to overzealous filtering—e.g., flagging a student who said “I’m going to die” about a fitness test as a suicide threat. [60:02]
- Transparency and Implementation: Perry argues, “I want to, as much as possible, approach ... with an open mind ... Because going into a school these days is scary.” [57:03]
- The Risk of Escalation: Mason points out, “Who is the best demographic at getting around those kinds of things? It’s probably teens and kids … figuring out whatever weird back channels they want to communicate through.” [67:22]
- Educational Sector Lag: Perry laments, “The systems that get used in the education sector are generally way behind ... typically not as sophisticated, not as good ... and they feel like they’ve been built in the 90s.” [70:54]
Notable Quotes
- Gaggle case study (school official):
“We’re confident we saved that kid’s life. We’re confident that we changed that family’s life at the end of the day.” [65:59]
Notable & Memorable Moments
- Perry unboxes the “Friend” AI necklace live on air, promising to run real-world tests. [39:29–41:59]
- The cringeworthy "yum" incident when the AI friend gets tzatziki dropped on it. [35:57–36:02]
- Mason: “I feel like this was marketing that another species made for brain pathways that I don’t have.” [37:03]
- Discussion of big tech’s perpetual race for scale and Nvidia’s position in it: “They’re selling shovels in a gold rush.” (Perry, 12:06)
Timestamps for Key Segments
- OpenAI Data Center & Power: 02:47–19:29
- AI Toys & Digital Companions: 19:29–42:09
- Microsoft, Anthropic & xAI Lawsuits: 42:56–54:44
- AI Safety Tools in Schools: 55:30–71:47
Tone & Language
Consistent with The FAIK Files' style: irreverent, skeptical, tech-savvy, with a mix of dry humor and genuine concern about where AI and society are heading. Mason and Perry regularly call out absurdities (“this is that on steroids in your home,” [29:03]), recognize best intentions, and still keep their critical edge.
Conclusion
This episode of The FAIK Files expertly skewers the frenetic, sometimes reckless, intersection of AI advancements and social impact. Listeners come away informed about the scale and consequences of AI adoption—whether in global data centers, children’s playrooms, corporate boardrooms, or public schools—and are left with plenty to ponder about the blurry lines between help and harm in our AI-infused world.
