The Lawfare Podcast: Scaling Laws — The GoLaxy Revelations: China's AI-Driven Influence Operations
Original Air Date: October 31, 2025
Host: Alan Rozenshtein, Lawfare and University of Minnesota
Guests:
- Brett Goldstein (Special Advisor, Vanderbilt University)
- Brett Benson (Associate Professor, Vanderbilt University)
- Renée DiResta (Associate Research Professor, Georgetown University)
Episode Overview
This episode of the Scaling Laws series spotlights the recent revelations about China's AI-driven influence operations, as uncovered in leaked documents from the Chinese company GoLaxy. The panel explores how AI technologies have revolutionized information warfare, the specific tactics and architecture revealed in the GoLaxy leak, challenges in measuring the effectiveness of such campaigns, and the preparedness (or lack thereof) of U.S. institutions and allies. The discussion deftly blends technical, political, and national security insights into the current state and looming risks of AI-generated propaganda.
Key Discussion Points and Insights
1. How AI Changes Modern Influence Operations
[04:33 - 13:13] (Renée DiResta)
- The old Russian Internet Research Agency (IRA) model was labor-intensive, using both automated “dumb” bot accounts and human operators crafting granular American personas (e.g., a black woman of a certain age, a young right-winger).
- Early signs of foreign operators sometimes “slipping” with incorrect slang or plagiarizing memes led to their exposure.
- AI fundamentally shifts the landscape: Language models can now generate context-appropriate, original content, reducing these "tells." Advanced AI can mask language and identity more seamlessly and at scale.
- “The supply of disinformation will soon be infinite.” (DiResta, 09:32)
- Multimodality enables fake audio and video, not just text—making whole personas and narratives more immersive and convincing.
2. The Evolution: From Human Operators to Individual Targeting at Scale
[13:13 - 16:32] (Brett Goldstein, Brett Benson)
- The sophistication isn’t just in content generation. AI, fueled by open-source intelligence (OSINT), enables precision targeting—tailoring propaganda to individuals based on vast stolen or public datasets.
- “When everyone has a Persona that's perfectly designed for them and the perfect message is delivered... that’s what's coming and that's what's super complicated.” (Goldstein, 14:22)
- Propaganda is now scalable, persistent, and no longer limited by human resources or narrow audience targeting.
3. The GoLaxy Leak: Origin and Revelations
[16:32 - 24:56] (Brett Goldstein & Brett Benson)
- Goldstein details how he received a mysterious link to a 399-page trove in Mandarin, which he securely translated to uncover not just technical diagrams but data collection on U.S. congressmen.
- "There are US congressmen... and it looks like an IO [information operations] messaging type approach. And I'm like, oh shit, this is something that's... super interesting, super hot." (Goldstein, 19:42)
- The documents reveal GoLaxy’s twofold approach: (1) massive OSINT, and (2) the creation and maintenance of “resilient Personas” that can survive platform crackdowns.
- Evidence of partnerships with Chinese high-tech companies (e.g., Sugon, DeepSeek) marks a shift from classical neural networks to advanced generative AI architectures.
- The documents show links to Chinese state agencies and intelligence services: GoLaxy was established under the Chinese Academy of Sciences and serves explicit national security interests.
4. Effectiveness and Attribution: How Much Should We Worry?
[35:04 - 48:26] (Benson, Goldstein, DiResta)
- Effectiveness is notoriously hard to measure. In Taiwan’s 2024 election, Chinese operations targeted pro-independence parties, but the Democratic Progressive Party still won. Counterfactuals are impossible, especially since Taiwanese society is highly aware and resilient to such meddling.
- “If the objective is to sort of change a mass line... we have to figure out how to measure and test that... But it’s a hard problem.” (Benson, 36:41)
- Measuring effectiveness requires not just detection of fake Personas, but understanding their downstream, real-world impact—a massive methodological and causal inference challenge.
- Small-scale experiments show that AI-generated, tailored messages yield dramatically high engagement (70–80% click rates) with little technical investment—an ominous sign of what’s possible at scale.
5. The U.S. and the Defensive Lag
[48:26 - 60:40]
- The same technical playbook could easily be employed against U.S. targets; evidence from GoLaxy shows collection on prominent Americans.
- Detection capacity has decreased even as the threat grows: academic and private-sector partnerships (e.g., the Stanford Internet Observatory) have been dismantled, and platforms like Meta, Twitter, and OpenAI have scaled back transparency and threat reporting.
- "As the threat model has increased, the capacity for detection has decreased... this is not a partisan issue. This is a national security issue." (DiResta, 54:29)
6. Strategic Implications and the Gray Zone
[57:13 - 61:48] (Benson, Goldstein)
- The U.S. defense and policy posture treats information operations as a secondary, supporting function, while adversaries treat it as a primary battlefield.
- Influence operations are ongoing and exhausting, and, when combined with kinetic, cyber, or economic attacks, they pose a strategic challenge for which the U.S. is not equipped.
- “This type of gray zone conflict is much more problematic because it's ongoing and never stops. ... It creates some strategic exhaustion.” (Benson, 58:50)
- Democracies, while currently vulnerable, may gain resilience over time if they address these challenges in partnership.
7. Looking Forward: Patch-and-Pray or Strategic Vision?
- The U.S. faces a critical inflection point: Will it continue patching holes reactively, as in the cybersecurity saga, or proactively build partnerships to confront the new era of AI-driven information threats?
- “Are we going to do what we did with cybersecurity and be constantly reacting to the next threat? Or ... get ahead of the threat in a different way?” (Goldstein, 61:02)
Notable Quotes & Memorable Moments
- “The supply of disinformation will soon be infinite.” — Renée DiResta, [09:32]
- “When everyone has a Persona that's perfectly designed for them and the perfect message is delivered... that’s what's coming and that's what's super complicated.” — Brett Goldstein, [14:22]
- “If the objective is to sort of change a mass line ... we have to figure out how to measure and test that in Taiwan they claim ... to have been effective. But these are the documents making that claim.” — Brett Benson, [36:41]
- “As the threat model has increased, the capacity for detection has decreased... this is not a partisan issue. This is a national security issue.” — Renée DiResta, [54:29]
- “This type of gray zone conflict is much more problematic because it's ongoing and never stops. ... It creates some strategic exhaustion.” — Brett Benson, [58:50]
- “Are we going to do what we did with cybersecurity and be constantly reacting... or ... get ahead of the threat in a different way?” — Brett Goldstein, [61:02]
Important Segment Timestamps
- AI vs. Human Influence Operations: [04:33 – 13:13]
- GoLaxy Leak Discovery & Analysis: [16:32 – 24:56]
- GoLaxy, Personas, and State Links: [24:56 – 30:22]
- Effectiveness & Measuring Impact: [35:04 – 48:26]
- U.S. Vulnerabilities & Strategic Challenges: [48:26 – 60:40]
- Looking Forward – Proactive vs. Reactive Response: [60:40 – 61:48]
Summary Takeaways
- AI has drastically lowered the cost and raised the plausibility of computational propaganda.
- China’s GoLaxy provides concrete evidence of large-scale, AI-fueled OSINT, persona management, and targeting activities—often with explicit state linkages.
- Measuring “success” in influence operations is profoundly difficult, with platforms and researchers alike grappling with attribution and causality challenges.
- The U.S. is falling behind in both detection and strategic adaptation, while adversaries see information as a battlefield and act accordingly.
- Coordination across sectors and nations is urgently needed to avoid repeating the mistakes of "forever-patching" in cybersecurity.
This episode offers a rare, unvarnished look into the technical, political, and philosophical challenges ahead as AI becomes a central weapon in the information wars. For policymakers, technologists, and citizens alike, the warning is clear: the future of democratic discourse and national security hangs in the balance.
