Hard Fork Podcast Summary
Episode: A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity
Date: March 13, 2026
Hosts: Kevin Roose & Casey Newton
Episode Overview
This episode delves into three major topics:
- AI’s role in modern warfare, specifically in the recent Iran conflict.
- The phenomenon of “AI Brain Fry”—mental fatigue from overuse of AI at work—with guest researcher Julie Bedard.
- A cautionary tale of identity misuse: How Grammarly used AI to impersonate Casey (and others) in its “Expert Review” feature without their consent.
Throughout, the hosts bring their trademark blend of wit, skepticism, and tech-savvy insight.
I. AI Goes to War
[03:19] Main Theme
Kevin and Casey unpack how AI is actively changing the realities and tactics of the Iran conflict, reflecting a broader, permanent transformation in military operations.
Key Insights and Discussion Points
- AI’s Infiltration into Battlefield Intelligence
- AI is effective at processing "large quantities of information" (06:27), including data from drones, hacked traffic cameras, and intercepted communications.
- Used by Israeli and U.S. militaries for intelligence, mission planning, and logistics—not yet for truly autonomous lethal action (06:58).
- “Shrinking the Haystacks”
- AI helps military analysts sift through massive troves of data ("hundreds of thousands of phone calls or audio recordings...99% of what you’re collecting is totally useless..."—Kevin, 07:40), allowing missions that previously weren’t feasible for lack of human manpower (08:26).
- Human vs. AI Decision-Making
- For now, militaries stress that "humans are in the loop," but as Casey observes, "at some point...it’s probably not going to feel very different from the AI just making the decision for where to shoot a missile" (09:21).
- Moral and Accountability Dilemmas
- High-profile attacks (e.g., the missile strike on an elementary school) have prompted questions about whether AI—especially the Claude model from Anthropic—was responsible. Investigations ongoing (10:01).
Notable Quote
“There will at least be some contingent in the military saying these systems are more trustworthy, they can make decisions faster, and let’s do it. So I think that’s just something that we need to be very much on guard for.”
— Casey Newton [11:23]
Timestamps of Interest
- [03:19] Overview: AI’s role in war
- [06:27] Examples of intelligence applications
- [09:21] Autonomy, human-in-the-loop, and where the line blurs
[11:54] The Claude Factor: Anthropic’s AI in Warfare
- Claude: The Model of Choice
- As of this conflict, "Claude is the only AI model that has actually been deployed inside classified military systems." (Kevin, 11:54)
- The Pentagon now sees Claude/Anthropic as a "supply chain risk," and Anthropic has responded with legal action (12:15).
- Project Maven Integration
- Palantir’s Maven Smart System uses Claude to inform battlefield decisions, from suggesting and prioritizing targets to real-time intelligence (13:33).
- This has turned “weekslong battle planning into real-time operations.” (Washington Post, cited at 13:33)
- Model Customization
- Claude used in the military is closely related to the consumer version, with some adjustments for security and prompt handling (14:46).
Notable Quote
“This is not just like a kind of tool that people in the military are using for handling, like, routine office work. This is actually sort of a core part of their strategic decision making process.”
— Kevin Roose [14:17]
[15:54] Iran’s Countermove: Attacking AI Infrastructure
- Data Centers as Military Targets
- Iran struck Amazon data centers in the UAE and Bahrain, immediately disrupting banks, ride hailing, and regional internet services (16:45).
- Raises questions about the prudence of building key AI infrastructure in volatile regions (17:47).
- Fiber Optic Cables at Risk
- Undersea cables through the Strait of Hormuz, critical for regional internet, are vulnerable—everyone is watching for possible disruptions (19:00).
- Wider Supply Chain Risks
- Conflict already affecting semiconductor material supplies and raising costs globally (18:18).
Notable Quote
“People are starting to question the logic of doing all these multibillion dollar deals in the Middle East...is it just kind of a rough neighborhood and all of the investments...are just going to be kind of perpetually at risk?”
— Casey Newton [17:47]
[19:37] Ethical and Societal Implications
- Military AI Use: A Boiling Frog Situation
- The gradual normalization of AI in war and the tendency for tech “developed abroad” to “come back home” for domestic surveillance (45:45).
- Tech Company Principles in Flux
- The hosts remind listeners that “all of these companies were run by people who at one point thought this was all a bad idea...and then they changed their minds” (Kevin, 20:37–22:41).
Notable Quote
“Next time one of these companies tells you about some unshakable principle...it should make you wonder whether that can hold up to pressure as well.”
— Casey Newton [22:41]
II. Is ‘AI Brain Fry’ Real? (With Julie Bedard)
[25:12] Introduction to AI Fatigue
- Growing Anecdotes and Research
- Surge in blog posts and social media about “AI fatigue” (25:12–25:54).
- Empirical evidence from UC Berkeley: AI tools increase work intensity, drive multitasking, and eliminate small social breaks.
- “AI Brain Fry” Coined
- BCG’s study: Describes mental fatigue from “excessive use or oversight of AI tools beyond one’s cognitive capacity.” (Julie Bedard, 27:30)
- 14% of surveyed AI users report experiencing it; respondents describe symptoms such as “feels like I have 12 browser tabs open in my head” (29:44).
Notable Quote
“It’s almost like you got a new coworker and they’re really, really smart and it’s sucking your life force out of your body.”
— Casey Newton [27:38]
[29:00] Interview with Julie Bedard (BCG, Henderson Institute)
What the Research Shows
- “Brain fry” is a cognitive strain distinct from burnout. (Julie, 29:44)
- Most common in high-iteration, high-change roles (like marketing); less so among management or legal/compliance staff (33:33).
- Factors: Oversight of AI tools and work intensity, not just fear of job loss.
- No direct correlation between “AI brain fry” and burnout—burnout can actually decrease for some with appropriate AI use (32:37).
Key Mechanisms
- Oversight and multitasking: People feel more like they’re managing the tool than doing actual work (31:12).
- “Three-tool cliff”: Productivity and satisfaction drop when workers use more than three different AI tools—multitasking becomes counterproductive (37:16).
Management & Organizational Implications
- Open conversations with managers and teamwork around AI decreased the likelihood of brain fry (40:39).
- Re-architecting workflows: Both individual and organizational interventions needed to reduce strain (43:29).
Notable Quotes
“When teams were using AI together and they had better integrated it into their workflows...we also saw brain fry go down.”
— Julie Bedard [41:48]
“I don’t think AI brain fry is going away unless we tackle it head on...this is about systemic redesign of work.”
— Julie Bedard [43:29]
Timestamps of Interest
- [27:30] Defining AI brain fry
- [29:44] Study results & user experiences
- [32:37] Distinction between brain fry and burnout
- [37:16] The “three-tool cliff”
- [41:48] Team usage reduces brain fry
[44:13] Historical Parallels
- Comparisons drawn to 1970s “Lordstown Syndrome”: at the Lordstown auto plant, automation led to widespread feelings of worker alienation and dehumanization, which were addressed at the time through worker organizing and a redesign of manufacturing processes.
Notable Quote
“It wasn’t until there was actually a re-architecture of the shop floor that we actually saw the productivity gains. And to me, that’s an interesting parallel to what we need to do with redesigning work.”
— Julie Bedard [45:38]
III. How Grammarly Stole Casey’s Identity
[51:31] The Incident: Unconsented Expert “Borrowing”
- Grammarly’s “Expert Review” Feature
- Supposedly offers feedback “from leading professionals, authors, and subject matter experts.”
- But per small print, “references to experts…do not indicate any affiliation with Grammarly or endorsement…” (Casey, 52:14).
- Real-World Testing
- Names including Casey, Timnit Gebru, Julia Angwin, Kara Swisher, and John Carreyrou were used—often people vocally critical of AI practices (53:50).
- Advice attributed to experts is generic, AI-generated word salad disconnected from the person’s actual perspective or style.
- Legal/Consumer Backlash
- Julia Angwin filed a class action complaint for illicitly trading on her name and giving advice she never gave (54:53).
- After Casey’s reporting, Grammarly (now “Superhuman”) disabled the feature, promising retooling and expert opt-outs (60:37).
Notable Quotes
"These people are paying $144 a year...and Grammarly gives them this service. So if you are a paid subscriber...you are paying a subscription to get Grammarly to hallucinate on your behalf."
— Casey Newton [58:14]
“I do think that all of the AI companies just have a huge entitlement problem, in general... if it’s on the Internet, it belongs to us.”
— Casey Newton [59:30]
“I think the ‘why are we paying Grammarly all this money?’ moment is coming.”
— Casey Newton [64:54]
[63:27] The Impending “SaaS Apocalypse”
- AI-integrated products like ChatGPT, Gemini, or Claude now handle grammar and style for free or at low cost, undermining the business case for pricey SaaS tools like Grammarly (or “Superhuman”).
- Many such services will struggle to compete as AI commoditizes their core value props.
Notable Quote
“No, I think it’s going to be part of the SaaS apocalypse, which is for software that absolutely sucks, that there is no reason to be using in the first place.”
— Kevin Roose [63:27]
[65:44] Could There Be an Ethical Version?
- Casey: Some combination of licensing, guidance toward actual expert material, and real compensation might be a way forward, but not in the feature’s current form.
“The key is you have to guide them to the actual expertise, not just what your LLM is hallucinating.”
— Casey Newton [67:22]
Notable and Memorable Moments
- [09:21] Casey on the slow erosion of “human-in-the-loop” guarantees.
- [19:45] Casey compares the growing use of AI in war to a frog being slowly boiled.
- [29:44] Julie Bedard relays colorful feedback from AI "brain fry" sufferers: “Feels like I have 12 browser tabs open in my head.”
- [41:48] Julie’s advice: foster open dialogue and team-based integration to reduce AI strain.
- [58:14] Casey’s blunt irony about Grammarly subscribers paying for “hallucinated” advice.
- [64:54] Casey predicts the demise of overpriced, subpar AI SaaS tools.
Useful Timestamps
- 03:19: AI's role in modern warfare
- 09:21: Shifting military decision-making to AI
- 13:33: Project Maven and Claude integration
- 16:45: Attacks on AI infrastructure in the Iran conflict
- 25:12: AI “brain fry” discussion begins
- 29:00: Julie Bedard interview on “brain fry”
- 37:16: The “three-tool cliff” insight
- 51:31: Grammarly identity theft segment begins
- 58:14: On Grammarly users paying for generic hallucinations
- 60:37: Grammarly disables “Expert Review” after criticism
Episode Tone and Language
- Wry, skeptical, jargon-light.
- Lots of direct, plain English explanations (“word salad,” “sucks,” “brain fry,” “shrink the haystacks”).
- Humor is woven throughout even when discussing serious topics.
- Not afraid to call out hypocrisy or hand-waving in the tech industry.
Bottom Line
This episode presents a vivid, multifaceted look at how AI is rapidly entrenching itself in high-stakes arenas—from war zones to the everyday workplace, and even into our writing apps—raising profound ethical, practical, and human questions for listeners, practitioners, and policymakers alike.
For more episodes and full conversations, find Hard Fork on NYT Audio, Apple, Spotify, and YouTube.
