Risky Bulletin – "Between Two Nerds": How the NSA Will Use AI | Podcast by Risky Business Media | February 23, 2026
Episode Overview
This episode of "Between Two Nerds" features cybersecurity experts Tom Uren and the Grugq discussing how major intelligence agencies—particularly the NSA—will use AI in their cyber operations. Drawing on new threat reports and their own experiences, they reflect on both the current and future value of AI in offensive and defensive cyber roles, how workflows might shift, and what roles AI can (and can't) usefully fill for state-level professionals.
Key Discussion Points and Insights
1. AI in the Cybercrime Lifecycle ([00:14] – [01:41])
- AI Is Pervasive in Adversary Operations
- Tom reviews a Google threat report showing "adversaries or cyber criminals, threat actors are using AI at pretty much every step in the... cyber criminal life cycle." ([00:21])
- Reference to an earlier Anthropic report about a Chinese threat actor using Claude to coordinate campaigns, with the AI handling the “grunt work” and reporting results back for human oversight.
- AI Favors Risk-Tolerant Adversaries
- Tom previously thought: "AI powered cyber espionage will favor China," as its operational style appeared to fit higher-risk, high-volume approaches. ([01:17])
- Professional Agencies Prioritize Correctness
- "An organization like NSA or ASD or the Five Eyes won’t do [AI-driven YOLO ops] because they want to get things right, not screw up and get outed." — Tom ([01:41])
- Grugq: These agencies have a "very high premium on correctness." ([01:41])
2. AI’s Value in “Grunt Work” and Testing ([02:01] – [03:42])
- AI for Process Robustness
- Tom mentions Trail of Bits CEO Dan Guido’s approach: use AI for small, clearly verifiable tasks, notably for writing test cases that humans tend to neglect ("it was just way too much work." — Tom [02:54]).
- AI enables test coverage that wasn't previously practical, with little downside if it "somehow manages to stuff up", since nothing is lost. ([03:03])
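The workflow described above—hand AI small, clearly verifiable tasks like edge-case tests a human would skip—can be illustrated with a minimal sketch. The `parse_port` function and its test cases are hypothetical examples, not anything discussed in the episode; the point is that each generated case is trivial for a human to verify.

```python
# Illustrative only: the kind of exhaustive edge-case tests that are
# "way too much work" by hand but cheap to have an AI draft and a
# human verify. parse_port is a hypothetical function for the sketch.

def parse_port(value: str) -> int:
    """Parse a TCP port number from a string, rejecting invalid input."""
    if not value or not value.strip().isdigit():
        raise ValueError(f"not a number: {value!r}")
    port = int(value.strip())
    if not 1 <= port <= 65535:
        raise ValueError(f"out of range: {port}")
    return port

# Edge cases humans tend to neglect; each one is trivially checkable.
VALID = {"1": 1, "80": 80, "65535": 65535, " 443 ": 443}
INVALID = ["", " ", "0", "65536", "-1", "8o80", "80.0", "None"]

def run_tests() -> None:
    for raw, expected in VALID.items():
        assert parse_port(raw) == expected, raw
    for raw in INVALID:
        try:
            parse_port(raw)
        except ValueError:
            continue
        raise AssertionError(f"accepted invalid input: {raw!r}")

if __name__ == "__main__":
    run_tests()
    print("all edge cases pass")
```

Because every case has an unambiguous expected outcome, a wrong AI-generated test fails loudly and costs nothing, which is exactly the low-risk property Tom highlights.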
- Area of Maximum Value
- Grugq: "That's probably where AI adds the most value right now, given the state [of the technology]. It's that sort of thing." ([03:16])
3. How Intelligence Agencies Can Safely Use AI ([03:42] – [08:17])
- Intelligence Agencies’ Modular Workflows
- NSA and similar agencies divide jobs into many tasks—many of which match the kind of modular, clear-cut problems AI can assist with.
- Tom and Grugq discuss a US military cyber operator job description and how the role breaks down into core knowledge, skills, and abilities. Many "abilities" (e.g., "ability to think critically") aren't suitable for AI today, but knowledge tasks are a good fit.
- AI as a Double-Check
- Grugq: "If I'm doing a risk assessment and I think I've covered all my bases, I would say, you know, Claude, do a risk assessment... And then once that comes back... compare and contrast. Did it come up with an angle I hadn't thought of? ...it's basically free... and if you don't [get anything], you've lost nothing." ([06:53])
- Private Models for Sensitive Use
- Tom notes agencies can use self-hosted, private AI models, so they're not subject to external limits or trust issues. ([08:17])
4. Limits and Politics of Using Commercial AI ([08:17] – [10:09])
- Anthropic vs. US Defense
- Anthropic sought to prohibit surveillance of US citizens and fully autonomous lethal operations via its models, to which the DoD reportedly responded by threatening to treat the company as a supply-chain risk.
- Reports suggest Anthropic’s AI was used in "the raid to capture Maduro," showing real-world high-stakes application. ([09:24])
5. Which Roles Are Good for AI? ([10:09] – [17:39])
- Critical Judgments, Human-Only
- Grugq: "Anything with the word planner in it is just, it's bad for AI... you want a human to be making human judgments. This is very much weighing up the balance of risks, plus everything I know, plus the political pressures..." ([11:08])
- Warning Analyst – Mundane Yet Vital
- AI is well-suited to "warning analyst" style roles: developing cyber indicators, monitoring for changes, and alerting on environmental shifts—tedious but vital work.
- Tom: AI could excel at "what's happening right now... probably do that faster and more reliably than a person would, and could do it 24/7 without a problem." ([14:27])
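The "warning analyst" grunt work described here—watching indicators around the clock and flagging what changed—reduces to a diff over successive indicator snapshots. The sketch below is a toy illustration; the indicator names and values are invented, not drawn from the episode.

```python
# Minimal sketch of warning-analyst grunt work: compare successive
# snapshots of monitored indicators and report what changed.
# All indicator names and values are hypothetical.

def diff_indicators(previous: dict, current: dict) -> list[str]:
    """Return human-readable alerts for new, removed, or changed indicators."""
    alerts = []
    for name in current.keys() - previous.keys():
        alerts.append(f"NEW indicator: {name} = {current[name]}")
    for name in previous.keys() - current.keys():
        alerts.append(f"REMOVED indicator: {name}")
    for name in current.keys() & previous.keys():
        if current[name] != previous[name]:
            alerts.append(f"CHANGED: {name}: {previous[name]} -> {current[name]}")
    return sorted(alerts)

yesterday = {"c2_domains_seen": 12, "new_cves_in_scope": 3, "scan_volume": "low"}
today = {"c2_domains_seen": 17, "new_cves_in_scope": 3, "phishing_kit": "kit-x"}

for alert in diff_indicators(yesterday, today):
    print(alert)
```

Run on a schedule, this is the tedious-but-vital vigilance that a machine sustains indefinitely, leaving the analyst to interpret the alerts rather than generate them.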
- Human-AI Teaming
- Grugq: "The human in that role can now do more and a more refined task because they don't have to waste cycles on the grunt work part. They can have Claude do that and... focus on thinking like the other side." ([14:35])
- Target Network Analyst – Mapping and Pathfinding
- AI could help plan routes through networks ("like a Google Maps, except for getting around a network" — Tom [16:50]), by mapping out paths and matching available exploits to vulnerabilities, reducing rote labor.
- Contrast: Professional vs. YOLO Threat Actors
- Tom: "A YOLO cyber threat actor... would probably go, well, find me a path, press button, let's go. So they will be quick and then do it, but maybe riskier as well."
- In contrast, pros will still "be more cautious about deploying those [AI] benefits or maybe more management oversight..." ([18:12])
6. AI Will Augment, Not Replace, Human Capability ([18:24] – [21:22])
- Augmentation, Not Replacement
- Grugq: "All of the roles that AI is good for have a level of grunt work... [AI brings] a level of vigilance that you might not get normally." ([18:24])
- Some tasks require "political or... contextual" reasoning that AI can’t do—information that’s not formalized in the job itself but comes from human experience and social context.
- What Makes 'The Best' Special?
- Tom: "There were some [analysts] that were just head and shoulders above everyone else. And they did that by... knowing a lot of stuff and knowing how to apply it."
- AI might soon know and apply these, pushing humans to focus on subtle, ineffable qualities that make top performers exceptional—such as how to steer the AI for best results. ([21:06])
7. AI and Malware Evolution ([23:40] – [26:45])
- Attackers Already Use AI for Fast Malware Dev
- Tom: "Other threat actors... basically code up malware really quickly" with AI. ([23:40])
- For Professionals – Longevity Over Speed
- State actors want high stealth and flexibility in malware (“Russia used the same... malware for 20 years”). Constant churn isn’t always valuable.
- Test case generation and coverage are areas where AI can help with long-term malware quality.
- Custom Tools at Scale: An Opportunity for OpSec
- Grugq: "For ideal opsec, every operation would have its own unique tooling... [AI makes it so you] could then feed through 50 different C2s...development work to do in house... But now you could do that, which would mean it’d be much more difficult to link operations together." ([24:43])
- The Death of Attribution?
- "The rise of AI is actually the death of attribution." — Tom ([25:51])
- Grugq: "It absolutely could be. How could you?...It makes technical indicators less useful, perhaps." ([25:56])
Notable Quotes & Memorable Moments
- On How Pros Use AI:
- "If I'm doing a risk assessment... I'd say, Claude, given these two risk assessments, compare and contrast. Did it come up with an angle I hadn't thought of or did I have a comprehensive thing? Because it's basically free." — Grugq ([06:53])
- On AI’s Limits:
- "Anything with the word planner in it is just...bad for AI because...you want a human to be making human judgments." — Grugq ([11:08])
- On AI-Enabled Mass Malware Customization:
- "If what you could do is have 50 iterations of different strains of implants...that would be too much development work...But now you could do that, which would mean it'd be much more difficult to link operations together..." — Grugq ([24:43])
- On Human-AI Teaming and Careerism:
- "An AI is probably not going to be thinking about your career...these people...do want promotions, they do want raises...When they look at it and go like, this is a very, very important thing. I need to make sure it works...you're going to move heaven and earth to make sure it works..." — Grugq ([19:21])
- On Attribution and the Role of AI:
- "The rise of AI is actually the death of attribution." — Tom ([25:51])
- "It makes technical indicators less useful, perhaps." — Grugq ([26:04])
Timestamps for Important Segments
- [00:14] – Current AI use in cybercrime and intelligence contexts
- [03:42] – NSA and intelligence workflow breakdowns: where AI fits
- [06:53] – AI as an automated “second opinion”
- [08:17] – The Anthropic vs. US DoD controversy
- [11:08] – The irreplaceability of human judgment in planning roles
- [14:27] – AI excels at tedious, always-on monitoring and "warning analyst" tasks
- [16:50] – Target network analysis as "Google Maps for networks"
- [17:39] – YOLO cyber actors vs. professional intelligence agencies: approaches to AI
- [24:43] – How AI could enable massive diversification in custom malware and make attribution harder
- [25:51] – Discussion on the future of attribution in the AI era
Conclusion
Tom Uren and the Grugq agree that as AI matures, even cautious, process-driven agencies like the NSA will find opportunities for AI to augment their teams—especially in repetitive, testable, or data-intensive roles. AI will not replace seasoned operators, but it will free up their time, allowing humans to focus on risk, judgment, and higher-level planning. The rise of AI could also undermine attribution and weaken the value of technical indicators in cyber operations, presaging a new phase in both cyber offense and defense.
