The 404 Media Podcast
Episode: Inside an AI-Powered School
Date: February 18, 2026
Hosts: Joseph, Sam Cole, Emanuel Maiberg
Episode Overview
This episode features a deep investigative discussion of Alpha School, a high-profile, AI-powered private school that charges up to $60,000 a year in tuition. The team unpacks Emanuel Maiberg’s investigation into Alpha School’s use of AI to generate lesson plans, its approach to student surveillance, its content scraping, and the reality versus the hype of AI in education. They also discuss the experience of Kylie Brewer, a content creator targeted with AI-generated abuse imagery, reflecting on the wider implications of AI-powered harassment and the legal responses to it.
Key Discussion Points & Insights
1. Alpha School: What Is It and Why Is It Notable?
[06:27 – 11:05]
- Alpha School is an AI-powered private school with both physical locations and a homeschooling product.
- Core curriculum is condensed into two hours daily using AI-driven personalized learning.
- After core academics, students engage in more enriching activities (e.g., entrepreneurship, public speaking).
- Strong backing in Silicon Valley; notable supporters include Bill Ackman and Linda McMahon.
- Heavily incorporates “bossware”—productivity and surveillance software—from its principal’s previous company, Crossover.
- Quote:
“Alpha School has emerged as the leading example of how [AI in education] goes well, how AI is incorporated in a smart, useful way... promoted by a lot of people in tech.”
— Emanuel Maiberg, [09:25]
2. Prevailing Coverage vs. The Reality of Alpha School
[11:39 – 13:36]
- Prior coverage has been supercharged by general AI hype, often lacking specifics.
- Wired's reporting focused on student burnout due to strict productivity monitoring; 404 Media investigates the AI itself, especially regarding privacy and educational quality.
3. Leaked Documents: How The Investigation Was Informed
[13:36 – 15:35]
- Emmanuel accessed Alpha School’s “second brain”—a shared Workflowy document with work logs, strategies, problems, and direct feedback from students, offering unique internal insight.
- Quote:
“Everyone is essentially saying in public, like, here’s what I’m thinking about, the thing that I am currently dealing with, here are the problems, here are the proposed solutions, here’s my strategy.”
— Emanuel Maiberg, [14:33]
4. Main Findings
a) Faulty AI-Generated Lesson Plans
[15:35 – 23:47]
- AI generates lessons, questions, reading comprehension exercises—sometimes with “damning” errors.
- Internal docs reveal a 10%+ hallucination rate, producing unanswerable or illogical questions.
- Students are often confused, frustrated; lack of trust emerges.
- The AI is used to vet its own outputs (“no human in the loop”), an approach widely recognized in AI circles as a critical flaw.
- Quote:
“They talk about how this does ‘more harm than good’ because you’re trying to teach someone... but the thing that they are learning is incorrect.”
— Emanuel Maiberg, [18:28]
- Quote:
“I just don’t understand the thinking. I really, really don’t. Like, maybe I’m missing something. I just don’t get it.”
— Joseph, [23:36]
b) Content Scraping from Other Platforms
[24:01 – 27:08]
- Employees instructed to sign up for competitor educational sites (e.g., Khan Academy) with personal emails to scrape material and avoid detection.
- Pattern of scraping until cease-and-desist letters are issued; multiple platforms have terminated relationships over terms-of-service violations.
- Quote:
“There’s a pattern of them kind of going to an online platform, scraping it as much as they can, the company finding out about it and shutting them down and sending them a cease and desist…”
— Emanuel Maiberg, [26:40]
c) Extensive Surveillance of Children
[27:08 – 29:54]
- Student activity is closely tracked: apps used, websites visited, mouse movements, and even video recordings during tutoring sessions.
- Videos and personal data stored in accessible, open Google Drives.
- Quote:
“Again, Joe Lyman, the principal, has this company called Crossover... It tracks mouse movements, it tracks what websites you’re going to... All of that is also true at Alpha School.”
— Emanuel Maiberg, [27:40]
- Quote:
“I thought was really shocking... Google Docs had the name of each student... and each of those names linked out to a video recording... stored on a completely open Google Drive...”
— Emanuel Maiberg, [28:24]
5. Alpha School: Flawed or Just Overhyped?
[29:54 – 34:30]
- Emanuel reflects: while the AI deployment is deeply flawed and the surveillance excessive, parents are drawn to results (top 2% SAT scores) and the appealing “2-hour learning” philosophy.
- The reporting does not argue that Alpha School is irredeemable, but that its marketing and reality diverge, especially as AI is not the “magic bullet” advertised.
- Quote:
“They pivoted to this idea that AI is a magic bullet for all of these problems. And that, at least so far, is patently not true. And they know it because they’re talking about it internally.”
— Emanuel Maiberg, [33:51]
6. Case Study: AI-Driven Harassment (Kylie Brewer's Story)
[39:55 – 57:58]
a) From Bewildered Victim to Public Advocate
[40:36 – 46:52]
- Kylie Brewer, a content creator, discovers AI-generated nude images of her being sold on OnlyFans, part of the wave of “nudified” imagery enabled by tools like Grok.
- Publicly addresses the situation on TikTok, highlighting her lack of control and the feeling of victimization.
- Quote:
“It is still distressing. And I wanted to let you guys know, this isn’t real, but for any female content creator…this could happen to you. It probably will happen to you… You’re already seeing the profoundly negative impact of AI, and that is why we absolutely need regulation.”
— Kylie Brewer, [45:50]
b) The Psychological Toll of Deepfake Abuse
[47:03 – 51:20]
- Kylie’s experience mirrors others targeted by non-consensual AI imagery: initial confusion, loss of control, and trauma resurfacing, especially for survivors of past sexual violence.
- Quote:
“Before it happens to you, you don’t really realize how it feels. And also the feeling that people assume that because it’s not real, that it’s not damaging…”
— Sam Cole, [49:09]
c) Platform Response & Legal Landscape
[52:19 – 57:58]
- OnlyFans took down the impersonation account after reports, though Sam notes the lack of transparency and proactive response is a common failure.
- Legal recourse for AI/deepfake abuse remains limited: state laws exist but are hard to enforce; recent federal efforts (e.g., the DEFIANCE Act) are moving forward but are new and untested.
- Quote:
“It’s hard. Legally, it’s still incredibly hard.”
— Sam Cole, [57:49]
Notable Quotes & Moments
- “Students are being treated like guinea pigs.” — Story Headline, [05:09]
- “They will digest a bunch of material, feed it to an LLM, and then essentially say, give me a class about this subject and generate multiple choice questions...” — Emanuel Maiberg on AI lesson-generation practices, [16:39]
- “The future that they’re striving towards is... to generate the entire lesson plan with, again, ‘no human in the loop’... removing as many humans from the process as they can.” — Emanuel Maiberg, [22:28]
- “It just doesn’t make sense. I know you said the 10% figure and some people may hear that and be like, well, it’s only wrong 10% of the time. That sounds like quite a lot for a school.” — Joseph, [19:55]
- “It’s a fucked up ecosystem that I expect better from OnlyFans as far as preventing this stuff. But there are other channels that this happens in as well.” — Sam Cole, [54:18]
- “We have the Take It Down Act, which is the first federal level deepfakes law, but it has a lot of its own problems... So that’s where that’s at. We’ll be following that and see where that goes.” — Sam Cole, [56:17]
Timestamps for Key Sections
- Intro & New Podcast Theme — [00:06–02:29]
- Overview of Alpha School — [06:27–11:05]
- General Coverage vs. 404’s Approach — [11:39–13:36]
- How the Investigation Worked — [13:36–15:35]
- AI Lesson Plan Issues — [15:35–23:47]
- Scraping Content — [24:01–27:08]
- Surveillance Practices — [27:08–29:54]
- Main Takeaways — [29:54–34:30]
- Kylie Brewer’s Harassment Experience — [39:55–57:58]
- Kylie’s own words — [45:50]
- Psychological impacts — [49:09]
- Platform/Legal Response — [52:19–57:58]
Summary
This episode delivers a sharp, well-documented critique of Alpha School’s wholesale embrace of AI, exposing deep flaws, ethical grey zones, and direct impacts on students’ learning and privacy. The hosts emphasize the tension between AI hype and on-the-ground reality, calling for scrutiny and transparency. The follow-up segment on AI-driven harassment underscores the unpredictable harm of generative technologies outside the classroom, with accounts that are personal, chilling, and urgent. Both stories speak to a world where technology’s speed outpaces law, oversight, and basic human consideration.
For more, subscribe to 404 Media and support independent investigative technology journalism.
