Intelligent Machines (Audio) — Episode 835: “Glitch Lord – Inside OpenAI’s Secret Struggles and the ‘Empire of AI’ With Karen Hao”
Date: September 4, 2025
Host: Leo Laporte (TWiT network)
Guests: Karen Hao (Author, "Empire of AI"), Harper Reed (AI entrepreneur), Jeff Jarvis (Craig Newmark Graduate School of Journalism), Paris Martineau (Consumer Reports Investigative Journalist)
Main Theme & Purpose
This episode explores the hidden inner workings, conflicts, and industry-wide consequences of OpenAI, as revealed by Karen Hao’s book “Empire of AI.” Through her reporting and the panel’s discussion, the show examines how OpenAI’s original mission changed, the cult-like pursuit of AGI, the industry’s shift towards massive scale, and how these trends impact the broader AI ecosystem and society. A secondary theme covers the evolving landscape of AI programming tools, competition among tech giants, and the global politics of AI research.
Key Discussion Points & Insights
1. Karen Hao’s “Empire of AI” — Reporting OpenAI’s Secret Struggles
- Karen’s Access & Early Impressions
- Hao embedded at OpenAI in 2019 for a landmark profile for MIT Tech Review. At the time, OpenAI sought to project transparency, but was already showing signs of internal secrecy and anxiety about journalists.
- Memorable Moment: Karen later learned her face was given to security guards to monitor during her stay. [05:25]
- Quote: “Apparently, they gave the security guard my face... make sure she does not see journalists poking around.” [05:41]
- Disconnect Between Public Image and Reality
- OpenAI’s stated public mission of openness contrasted sharply with its tightly controlled, sometimes confused internal culture.
- Hao recounts execs (including Sam Altman and Ilya Sutskever) fumbling basic questions about their mission.
- Quote: “They did really fumble with some basic questions. I was pretty surprised... I’m asking the most generic questions here, just articulate why you’re doing what you do and what you’re doing.” [07:02]
- Ambiguity (and Utility) of “AGI”
- The show discusses the vague, contested definitions of Artificial General Intelligence (AGI) both inside and outside OpenAI.
- Quote: “No one agrees on what human intelligence is... The problem isn’t that there isn’t a definition, it’s that the definition is still meaningless.” [09:06], Karen Hao
- This vagueness allows the term “AGI” to serve as a vessel for founders’ aspirations and fundraising narratives.
- Evolution (or Not) of OpenAI’s Values
- While OpenAI pitched itself as a mission-driven nonprofit, ego and winner-takes-all Silicon Valley logic shaped it from the start; money and competitiveness merely amplified these underlying motivations.
- Quote: “Maybe they weren’t so pure in the beginning... there was already a little bit of corruption in the beginning in terms of their conception of why they were doing OpenAI.” [13:03]
- Scale and the “Empire” Metaphor
- Hao critiques the industry’s shift to “scale at all costs”: training models with ever-greater resources, data, and compute, with global consequences.
- Consequences include exploitation of global resources, labor, and knowledge; consolidation of AI research inside for-profit labs; and “imperialist” rhetoric of “good” vs. “evil” empires.
- Quote: “They are now talking about building supercomputers the size of Manhattan... seizing resources that are not their own... That’s what I call imperial-like behavior.” [27:49]
2. OpenAI’s Founders and Company Culture
- Personalities & Management Styles
- Sutskever: “Visionary, highly cerebral, highly emotional... people would pretty universally say Sutskever, if they had to pick, was the best manager.” [19:19]
- Altman: “A politician, very good at telling stories and getting people to move his direction; can’t operationalize things; tells different things to different people.” [17:07]
- Brockman: “Anxious energy of wanting to be remembered in history, relentless coder, solo operator—not a good manager.” [18:38]
- Altman and Brockman are described as “terrible managers,” with Sutskever regarded as the one who could lead and inspire technically—though all are far from “average guys.” [22:36]
3. The Industry-Wide Narrowing of AI Research
- Shift from Deep, Diverse Research to One Track
- Hao laments the reduction of academic and experimental diversity as top researchers are hired by industry labs, funneled into optimizing a single approach (Transformers) instead of pursuing broader AI questions.
- Quote: “They're all just reading one page of a book in an entire library... all of the capital and resources go to one sentence of one book.” [34:45]
- Why Scale? Devil’s Advocacy and Limitations
- The panel examines why OpenAI fixated on scaling Transformers as the magic path. Karen argues there was already evidence of the approach’s limits, but the cultural drive toward “raw progress” took over.
- Quote: “At some point you have to start being critical of their decision to continue... when there was already so much they should have known better.” [37:00]
- The tendency in AI to fixate on technical progress for its own sake, rather than human needs, gets a strong critique.
- Quote: “These aggressive moves... are kind of derivative of this mentality of let’s just keep pushing for pure science rather than actually pushing for innovation for humanity.” [38:27]
4. Global AI Race Rhetoric: US vs. China
- The ‘existential risk’ argument (“we must outpace China”) is exposed as self-serving and largely unfounded; US regulation has not thwarted Chinese progress, and “the only winner is Silicon Valley.”
- Quote: “...the gap has actually shrunk dramatically... Silicon Valley has had an illiberalizing force around the world.” [40:33]
5. Reporting Process & Journalism
- Hao describes her investigative process—cold-calling everyone who ever worked at OpenAI, leveraging her controversial profile to gain sources, and getting access to private notes because insiders “felt they had witnessed history.”
- Quote: “I just made a giant spreadsheet of everyone that ever worked at OpenAI and just started cold contacting as many as possible.” [45:42]
6. GPT-5, OpenAI’s Shifting Narrative, and AGI Skepticism
- Hao is skeptical of “AGI,” viewing it as more useful rhetorically to mobilize resources and attention than as a scientific milestone.
- Quote: “To understand AGI... it should be understood as a rhetorical tool... to justify more and more resources.” [49:36]
- On GPT-5’s muted reception: “I wasn’t surprised... already so much concern within the org... running out of rope in their specific scaling paradigm.” [52:00]
7. Practical AI Coding and the Rise of AI Agents
- Harper Reed on Vibe Coding
- Using Claude Code and other LLM-based tools for code review, debugging, and software engineering.
- Sharing prompts that improve LLM feedback (“careful review,” assigning the LLM a persona/nickname for context tracking).
- Discussion of bouncing between multiple models—Claude, Codex, Gemini—for various tasks.
- Quote: “People have the same path for going down this stuff... bouncing from model to model... It's commodified.” [61:51]
8. Global AI Models and Access
- China’s DeepSeek and the global rise of open-source models; US policies restricting Chinese student visas have led top AI talent to stay in China and build strong domestic models. There’s anxiety (and optimism) about more AI models built outside the Western, Educated, Industrialized, Rich, and Democratic (“WEIRD”) world, and whether this will lead to greater diversity or new risks in the global alignment of “human values.”
- Quote: “We need more than US and China—let’s have a Nigerian model... I would like to see an index.” [75:07, paraphrased]
9. News Roundup: Google Antitrust, Meta AI, AI Copyright Lawsuits
- Google Antitrust Ruling
- Judge imposed mild remedies; AI is now seen as a real search competitor; Google’s dominance (especially in browser share) is challenged but not dethroned.
- Quote: “AI is the competition, and AI is going to benefit from this in ways we can’t yet predict.” [88:32]
- Meta Struggles to Retain Top AI Talent
- Lavish packages for AI scientists lead to quick exits, resentment from veterans, and an impression that Zuckerberg is “just swinging in the dark.”
- Quote: “You can make a decision based on money, then you get inside and realize you don’t respect anyone around you... and if you have $100 million, you have a lot of options.” [114:41]
- AI Copyright Lawsuits
- Anthropic reaches settlement with authors over training data; fair use ruling stands, but industry vulnerability visible.
- Quote: “If it were a huge victory, Anthropic would be going out of business right now... the fair use part stands, which is important.” [134:59]
Notable Quotes & Memorable Moments
- On OpenAI’s Ideological Divisions: “...so, so much infighting because different ideological camps splintered over these definitions, and then they start biting at each other’s heads...” [10:20], Karen Hao
- On the AGI Concept: “It’s a vessel for people’s own projections, systems of belief... the definition is still meaningless.” [09:17], Karen Hao
- On Empire-building: “They monopolized knowledge production... The same way you could imagine climate science would be distorted by oil and gas companies if climate scientists were bankrolled by fossil fuel companies.” [29:00], Karen Hao
- On the Culture of Progress-For-Its-Own-Sake: “These aggressive moves... are kind of derivative of this mentality of let's just keep pushing for pure science rather than actually pushing for innovation for humanity.” [38:27], Karen Hao
- On the Usability of LLMs: “What distinguishes these AI technologies is they are designed to be easy for everyone to use. That takes away the priesthood, the investment—anybody can say, ‘you can use facial recognition, so can I.’” [145:40], Jeff Jarvis
Important Timestamps
- [03:32] Karen Hao describes early access at OpenAI
- [05:41] Security gave Karen’s photo to guards—secrecy at OpenAI
- [09:06] The (lack of) shared meaning of AGI exposed
- [13:03] Did OpenAI’s founders ever have pure motives?
- [27:49] Empire of AI: the “imperialist” scale of today’s AI
- [34:45] Industry focus narrows to a “single sentence in a library”
- [40:33] China as 'the bad empire'—US policy not working as intended
- [45:42] Hao cold-calling all OpenAI alums
- [49:36] AGI as a rhetorical device, not a technical reality
- [52:00] OpenAI’s strategy and GPT-5’s troubled reception
- [61:51] Harper Reed: “Bouncing from model to model... it’s commodified”
- [75:07] Discussion of non-Western LLMs and the problem of “WEIRD” values
- [88:32] Post-Google antitrust: “AI is the competition... will benefit”
- [114:41] On talent churn and meaning at Meta AI
- [134:59] Copyright lawsuits: “Fair use stands…which is important”
Additional Segments
Coding With AI: Nicknames, Workflows, and “Vibe Coding”
- Reed and Laporte discuss using Claude Code and ChatGPT for programming, debugging, and self-prompting—down to having your AI call you by a nickname so you know if its context slips (“Dr. Biz,” “Glitch Lord,” “Mr. Beef”).
[64:32-65:10]
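The nickname trick above boils down to a simple canary check: a standing instruction in the system prompt tells the model to always address you by an agreed name, and if a reply arrives without it, the context (system prompt included) has likely been truncated or lost. A minimal sketch of that check, with made-up stand-in reply strings rather than real model output:

```python
# Standing instruction placed in the system prompt (hypothetical wording).
SYSTEM_PROMPT = (
    "You are a coding assistant. Always address the user as 'Glitch Lord' "
    "at the start of every reply."
)

def context_intact(reply: str, nickname: str = "Glitch Lord") -> bool:
    """Return True if the reply still honors the nickname instruction.

    A reply missing the nickname suggests the system prompt has fallen
    out of the model's context window.
    """
    return nickname in reply

# A reply that kept the instruction vs. one that silently dropped it.
print(context_intact("Glitch Lord, the test suite passes."))  # True
print(context_intact("The test suite passes."))               # False
```

The nickname itself is arbitrary ("Dr. Biz", "Mr. Beef"); what matters is that it is distinctive enough never to appear in a reply by accident.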
Fun & Cultural Picks
- Film: “Perfect Days” by Wim Wenders — a meditative, slow film about a Tokyo toilet cleaner (Leo’s pick) [161:24]
- Paper: “Probing LLM Social Intelligence via Werewolf Parlor Game” — explores LLMs’ ability for social strategizing (Harper’s pick) [157:22]
- Coding Challenge: The “Berghain Bouncer Challenge” — a playful LLM prompt scenario [159:56]
Language & Tone
- The conversation is candid, lightly irreverent, and conversational.
- Sharp analysis is blended with humor and personal anecdotes.
- Both skepticism (especially around “AI empires”) and curiosity (about where AI is going) prevail throughout.
Wrap-up
This episode delivers a dense, energetic exploration of OpenAI’s rise, the motives and shortcomings of the AI industry’s “imperial” turn, the monoculture of “scale-at-all-costs,” and the practical realities and oddities of working with today’s AI tools. Karen Hao’s deep reportage and measured skepticism are a consistent highlight, while later segments bring in technical, ethical, and global context, balancing critique with the sense that AI’s social transformation has only just begun.