Podcast Summary: "How Is This Better?"
Episode: The First AI War
Host: Akilah Hughes (COURIER)
Date: March 13, 2026
Episode Overview
This episode of "How Is This Better?" explores the explosive rise of AI-generated disinformation during the ongoing Iran war and its broader implications for the future of war, media, and public trust. Host Akilah Hughes is joined by David Gilbert, a Wired journalist and disinformation expert, for a wide-ranging discussion of the mechanics, scale, dangers, and societal consequences of AI on the battlefield of public perception.
Key Discussion Points & Insights
1. The Scale and Spread of AI Disinformation
Timestamps: 00:30–02:41
- Proliferation of AI Images: The Iran conflict marks a tipping point where AI-generated images and videos have become a predominant part of online disinformation, visible to millions almost instantly.
- Diverse Origins: Disinformation is not limited to traditional state actors; it spans from Iranian government sources to monetized blue check accounts on X (formerly Twitter).
"We've been kind of hearing worries about, you know, the coming AI apocalypse in terms of disinformation... This is the first time that I've noticed AI being a prominent or predominant part of the disinformation landscape."
— Disinformation Expert / Journalist (01:46)
2. Official Use of AI and the Normalization of Propaganda
Timestamps: 02:41–05:36
- State & Political Use: From Iranian officials to the Trump-controlled White House, AI-generated content—sometimes shockingly absurd—has become normalized for both propaganda and ego.
- Mainstreaming Fakeness: When official government sources post AI images and videos, it shapes public expectations of what is “normal”—the line between fact and fiction blurs.
"Because the White House is producing them... it gives it a sense of credibility that AI images, you know, you can use AI for propaganda and promotion."
— Disinformation Expert / Journalist (04:12)
"The most incredible AI images of him, you know, as a, as a warrior, as this jacked kind of all American guy... with great hair."
— Disinformation Expert / Journalist (05:12)
3. The Advancing Technology: Can Anyone Tell What’s Real?
Timestamps: 06:02–07:04
- Detection Struggles: Even experts find it increasingly difficult to identify AI-generated images and videos. Forensic tools, such as Google's AI detector, are imperfect and lag behind the pace of the generation tools they are meant to catch.
- Diminishing Reliability: The once-obvious signs of fakery no longer apply; realistic-looking content fools people at every level.
"It's. It is now getting to the point where [Hani Farid] can't tell anymore, not only in images, but in video."
— Disinformation Expert / Journalist (06:38)
4. False Fact-Checking: When AI Bots Verify Each Other’s Lies
Timestamps: 07:04–10:45
- Grok's Failure: Users employ X's Grok AI chatbot as a fact-checker, but Grok often amplifies errors—sometimes generating its own fake images to 'prove' a lie.
- Feedback Loop of Fakery: AI systems use false AI-generated evidence to validate other false AI content, deepening the public’s confusion.
"A researcher asked Grok, an AI bot, to verify whether a viral missile strike video was real. Grok got the answer wrong, and then it generated its own AI image to try to prove it was right."
— Akilah Hughes (09:32)
"It's not giving a clear and defined answer... that's a really just terrifying escalation of how AI is making the problem of AI disinformation on X so much worse."
— Disinformation Expert / Journalist (09:19)
5. AI Bot Swarms: The Bots Are Posting (and Reacting) Too
Timestamps: 11:26–12:56
- Bot Networks Evolve: AI-powered bot swarms generate content, replies, and engagement so fluidly that human users can't discern real from fake interactions.
- Content Overrun: The volume of AI-generated content on platforms is poised to eclipse human contribution, undermining trust in all online interactions.
"You no longer think or know if you're speaking to a real person online... the amount of content that is created by AI is very soon, if not already, taking over the amount created by humans on these platforms."
— Disinformation Expert / Journalist (11:45)
6. The First AI War?
Timestamps: 12:59–14:21
- Conflict as Tipping Point: Experts agree—the Iran war is likely the first “AI war” due to the scale and believability of AI-generated propaganda.
- Monetization Incentives: Many actors, not just official propagandists, profit by spreading sensational AI content for clicks and engagement.
"This conflict is definitely, seems to be a tipping point in terms of how AI is being used and how successfully AI is being used."
— Disinformation Expert / Journalist (14:21)
7. Who Are the Bad Actors?
Timestamps: 14:30–16:15
- Blurring Definitions: State actors, individual grifters, and profit seekers all exploit AI, muddling lines between organized and opportunistic misinformation.
- Examples: Iranian state media, US administration, Russian operations, and influencers seeking revenue all play roles.
"The accounts who have a blue check mark on X and who are spreading AI purely to get to the top of the feed... Iranian state actors are using it... Russia ... has tried to use disinformation."
— Disinformation Expert / Journalist (14:46)
8. The Information Void and Consequences
Timestamps: 16:15–17:58
- No Official Data: A lack of reliable government information fuels demand for unofficial narratives online—increasing vulnerability to AI-manufactured content.
- Polarization: People gravitate towards echo chambers and influencer-driven misinformation.
"If you have a void of information, people are going to fill it... people are just going to be misinformed on both sides of the divide."
— Disinformation Expert / Journalist (16:47)
9. Platform Responsibility?
Timestamps: 17:58–19:41
- Hands-Off Approach: Social platforms have rolled back moderation and have little incentive to address the problem, despite oversight board criticism.
- Government Inaction: Especially in the US, regulatory bodies are unwilling or unable to enforce meaningful changes.
"No company has put their hand up and said we're going to do better and no one is there to make them do it."
— Disinformation Expert / Journalist (19:33)
“The Meta oversight board... said that the platform was, quote, neither robust nor comprehensive enough to handle the scale and speed of AI generated misinformation, particularly during crises and conflicts.”
— Akilah Hughes (19:41)
10. Are We Cooked? Where Does Society Go from Here?
Timestamps: 19:55–21:40
- Crisis of Authenticity: Many users still crave genuine interaction, but may abandon mainstream platforms for smaller communities if AI content dominates.
- Not Hopeless Yet: We're approaching a boiling point; there's still time for societal pushback, but the stakes are rising.
"We're kind of like the frog in the water and it's getting to boiling point, but it's not quite at boiling point yet. We still can jump out."
— Disinformation Expert / Journalist (21:24)
11. The Future of War Coverage
Timestamps: 21:40–23:06
- Blurring Reality: Soon, it may become impossible to tell real citizen journalism from state-sanctioned or AI-generated fabrications.
- Dangers: Fast-spreading fake content provokes immediate, potentially dangerous reactions—by the time truth catches up, the damage is done.
"It'll be impossible to tell. And that's the real worry, I think... it goes around the world instantly and people get angry instantly... And like a day later someone will have proved that it was AI, but it will be too late."
— Disinformation Expert / Journalist (22:22)
12. Final Thoughts: The Consequences
Timestamps: 23:06–23:50
- Erosion of Trust: Wide-scale AI-fueled deception erodes not just trust in media, but the very fabric of truth in society—especially when lives are on the line.
- Call for Regulation: True change will require real accountability and regulation for tech platforms.
"The burden of proving truth is more pressing than ever. But with real lives on the line, we can expect that there will be grave consequences for this kind of behavior. And it's not really up to us. These platforms need to be regulated, need to care about their impact on the world. And that means people with real power coming together to force their hand. So too long didn't read. It's another instance where things aren't better."
— Akilah Hughes (23:06)
Memorable Quotes
- "We're kind of like the frog in the water and it's getting to boiling point, but it's not quite at boiling point yet." — Disinformation Expert / Journalist (21:24)
- "An AI system used fake evidence to defend a false claim about another piece of fake AI generated content." — Akilah Hughes (09:32)
- "People ultimately want authentic experiences online." — Disinformation Expert / Journalist (20:14)
Key Timestamps
| Time | Segment |
|----------|-------------|
| 00:30 | Difficulty distinguishing AI vs. real images |
| 02:41 | Official and unofficial actors spreading AI |
| 06:10 | Even experts can’t always spot AI fakes |
| 07:04 | Grok AI chatbot’s flaws as fact-checker |
| 11:26 | Rise of AI bot swarms on social platforms |
| 12:59 | Is this the first "AI war"? |
| 16:47 | The information void and echo chambers |
| 17:58 | Weak platform moderation and oversight |
| 19:55 | Are we “cooked”? Possible pushback |
| 21:40 | Impact on war coverage and truth |
| 23:06 | Final takeaways: Trust, regulation, society |
Conclusion
This episode powerfully illustrates how AI is fundamentally altering the landscape of war, media, and public truth. With propaganda now easier to produce and harder to detect than ever, and with both official and profit-driven actors exploiting these tools, the responsibility for protecting truth is shifting from individuals to powerful institutions and governments. For now, things aren’t better, but there’s still a window to act.
