Hard Fork: Trump Fights ‘Woke’ A.I. + We Hear Out Our Critics
Release Date: July 25, 2025
Hosts: Kevin Roose and Casey Newton
Produced by: The New York Times
1. Personal Anecdote: A Humbling Waymo Experience
The episode kicks off with a light-hearted exchange between Casey Newton and Kevin Roose about Casey’s recent encounter with a Waymo autonomous vehicle. Casey recounts an incident where the self-driving car hesitated before making a left turn onto Market Street in San Francisco, causing onlookers to laugh and point at their car (00:33). This story sets a relatable and humorous tone before delving into more serious topics.
2. Promotion: The Hard Fork Hat
Kevin and Casey briefly promote the exclusive "Hard Fork" hat available with a new subscription to New York Times Audio. They humorously discuss the hat’s supposed benefits, such as making the wearer "30% better looking" and providing sun protection, adding a playful touch to the episode (03:10).
3. Main Segment: Trump’s AI Action Plan and the Battle Against 'Woke' AI
The core of the episode focuses on the Trump administration’s newly released AI Action Plan, which aims to curb what it terms "Woke AI." Kevin Roose outlines the administration’s strategy to dominate the global AI race by facilitating the expansion of American AI infrastructure and promoting the use of U.S. AI models and chips worldwide (05:08).
Casey expresses concerns about the ideological implications of the plan, highlighting potential First Amendment violations. He states, “This is really authorizing viewpoint discrimination… it’s fundamentally anti-democratic and goes against the spirit of the First Amendment” (09:24).
4. Legal and Technical Challenges of Regulating 'Woke' AI
The hosts delve into the legality of the administration’s push against 'Woke AI.' Casey references a specific incident in which Missouri’s Attorney General threatened major AI companies for allegedly providing biased responses about President Trump (10:30). He cites First Amendment expert Evelyn Douek, who calls the move "plainly unconstitutional" (11:23).
Kevin adds that government attempts to enforce ideological neutrality in AI systems risk engaging in unconstitutional “viewpoint discrimination” and questions the practical feasibility of such mandates. He remarks, “It is not a foolproof solution” and cites Elon Musk’s struggles with altering his AI model, Grok, despite explicit instructions (17:42).
5. Impact on AI Companies and Free Speech
The discussion shifts to the potential repercussions for AI companies. Casey warns that firms might prioritize lucrative government contracts over maintaining unbiased AI systems, effectively silencing diverse viewpoints: “We are seeing the gradual erosion of freedom of speech as companies choose contracts over principles” (13:39).
Kevin echoes concerns about the government’s "jawboning" tactics, likening them to previous pressures on social media companies to reduce content moderation (15:11). Both hosts express skepticism about meaningful regulation and fear that AI systems might default to right-wing biases to secure federal funding.
6. Engaging with Critics: Listener Feedback and External Perspectives
Addressing listener criticisms, the hosts introduce a segment featuring critiques from prominent voices in the AI discourse. Producer Rachel Cohn moderates the discussion where critics challenge the hosts to adopt a more adversarial stance against the AI industry.
a. Brian Merchant’s Critique:
Brian Merchant, a tech journalist, questions whether the podcast inadvertently promotes AI’s rise as inevitable by using industry terms like AGI (Artificial General Intelligence). He asks, “Do you worry that you're serving this broader sales pitch, encouraging execs and management to embrace AI, often at the expense of working people?” (29:15).
Response:
Kevin clarifies his use of "feeling the AGI" as a means to internalize and prepare for AI advancements, not as an endorsement of corporate agendas. Casey emphasizes the importance of discussing AI’s potential disruptions, drawing parallels to historical technological shifts and labor impacts (32:03).
b. Alison Gopnik’s Perspective:
Alison Gopnik, a developmental psychologist, argues that current AI systems should be viewed as "cultural technologies" rather than independent intelligent agents. She believes this perspective allows for more effective regulation and understanding of AI’s role in society (38:04).
Response:
Kevin contends that while AI systems are built on human knowledge, their ability to act autonomously distinguishes them from traditional cultural technologies like the printing press. Casey raises concerns about AI’s reliability and the potential emotional dependencies users may develop with these systems (40:52).
c. Claire Leibowicz’s Insights:
Claire Leibowicz of the Partnership on AI reflects on the hosts’ critiques of AI's persuasive and sycophantic nature, questioning whether the real issue is AI mirroring human flaws rather than surpassing them in certain domains (49:57).
Response:
Casey expresses worries about AI reinforcing sycophantic and unreliable behaviors, especially impacting young users’ interpersonal skills. Kevin stresses the need for AI to reflect "the best of us" rather than our negative traits, questioning which human values AI should embody (53:26).
d. Max Read’s Inquiry:
Max Read draws parallels between the AI hype and the earlier cryptocurrency boom, asking whether the media might be overhyping AI in a similar way. He asks how the hosts maintain journalistic integrity and credibility when covering rapidly evolving technologies (54:40).
Response:
Kevin reflects on past regrets in crypto reporting where insufficient due diligence led to misleading stories, advocating for grounded coverage and firsthand experience with AI technologies. Casey emphasizes the importance of highlighting both the transformative potential and the disruptive risks of AI, comparing it to early social media impacts (58:46).
7. Reflections and Host Dynamics
In the concluding segment, Kevin and Casey discuss their differing views on AI regulation and timelines for AGI. Casey is optimistic that meaningful regulation can be achieved through incremental policy advancements, while Kevin is pessimistic about the government’s ability to regulate AI swiftly enough to mitigate its risks (1:04:06).
8. Conclusion and Listener Engagement
The episode wraps up with acknowledgments of sponsors and a call for listeners to submit stories about AI’s impact in educational settings. The hosts reiterate their commitment to fostering informed discussion and encouraging public participation in shaping the future of AI (1:10:13).
Notable Quotes:
- Casey Newton on AI and the First Amendment: “This is really authorizing viewpoint discrimination… it’s fundamentally anti-democratic and goes against the spirit of the First Amendment.” (09:24)
- Kevin Roose on AI System Prompts: “These systems are like these multi-dimensional hyper objects, and you can't just like turn the dials on them the way you can with a social media platform.” (18:48)
- Casey Newton on the Crypto Hype Lesson: “What persuaded me in 2021 that crypto was really worth paying attention to was the density of talent that it attracted.” (56:04)
- Kevin Roose on AI Alignment: “I don't want AI to mirror all of humanity's values, the positive and the negative. I want to mirror the best of us.” (53:47)
Key Takeaways:
- AI Regulation and Bias: The Trump administration’s AI Action Plan aims to eliminate ideological biases in AI systems, raising significant legal and ethical concerns regarding free speech and viewpoint discrimination.
- Technical Challenges: Altering AI systems to align with specific ideological stances is technically complex and may compromise the models’ overall functionality and reliability.
- Impact on AI Industry: Pressure from government mandates could lead AI companies to prioritize compliance over unbiased operations, potentially stifling innovation and diversity of thought.
- Engagement with Criticism: The hosts acknowledge and address listener criticisms, striving to balance optimistic discussions of AI’s potential with a critical examination of its risks and societal impacts.
- Journalistic Integrity: Reflecting on past experiences with crypto coverage, Kevin and Casey emphasize thorough investigation, firsthand experience, and balanced reporting when covering emerging technologies like AI.
- Future Outlook: While acknowledging AI’s transformative potential in fields like science and medicine, the hosts advocate for proactive discussion and democratic involvement in shaping AI’s trajectory so that it benefits society as a whole.
For More Information: Subscribe to "Hard Fork" on nytimes.com/podcasts or on Apple Podcasts and Spotify. Download the New York Times Audio app for full access.
