Podcast Summary
Podcast: CyberWire Daily (Microsoft Threat Intelligence Podcast Edition)
Episode: AI as Tradecraft: How Threat Actors Are Operationalizing AI
Date: March 12, 2026
Host: Sherrod DeGrippo, N2K Networks / Microsoft
Guests: Greg Schlommer & Vlad (Microsoft Threat Intelligence)
Episode Overview
This episode takes a deep dive into how cyber threat actors—particularly those linked to North Korea (DPRK)—are rapidly integrating AI into every stage of their operations. Sherrod DeGrippo hosts a conversation with Microsoft threat intelligence analysts Greg Schlommer and Vlad, exploring specific examples, operational workflows, and the far-reaching implications of AI-enabled cybercrime, from malware development to social engineering and infrastructure setup.
Key Discussion Points & Insights
1. Who Are the Main Actors? (01:55-04:57)
- Storm-1877:
  - Financially motivated, opportunistic DPRK actor tracked by Microsoft for three years.
  - Recent rapid acceleration in both the scale and variety of attacks.
  - "They’re just scaling operations and trying things that… we’ve never seen them do." (Vlad, 02:32)
  - Notable for rapid experimentation—testing new vectors, iterating quickly.
  - Core takeaway: these changes are attributed to their effective use of AI "in every step of the workflow." (Vlad, 03:00)
- Jasper Sleet & DPRK IT Workers:
  - Jasper Sleet: a large-scale operation with decentralized cells—thousands of operatives using diverse tools.
  - IT workers now use AI to create plausible personas, resumes, and job application materials at scale.
2. Workflow Evolution: AI’s Integration (03:03-04:57)
- AI's adoption is "far beyond an experiment" for DPRK actors (Greg, 03:26).
- North Korean threat actors are known for being "scrappy" and adaptable.
- Comparison to a "startup" mentality: small groups are given experimental freedom, leveraging a variety of tools—the antithesis of traditional, bureaucratic threat operations.
- "...such a variety of tools, approaches, and ways that they do what they do that I think it’s fascinating." (Vlad, 04:29)
3. AI in Vulnerability Research & Malware Development (06:06-14:55)
- Are North Korean actors using AI for vulnerability research?
  - Not yet observed among the most bureaucratic groups (Citrine Sleet/Jade Sleet)—they may face cultural or procedural resistance to rapid AI adoption (Greg, 06:06).
  - Scrappier groups take the “path of least resistance” and are adopting AI more quickly.
  - AI is accelerating the pace and skill level of even unsophisticated actors.
- Malware Authorship & Autonomy:
  - AI-driven malware is already present in the wild—not just in research.
  - “The scary thing from a defender standpoint is...the variety that it can churn out and the pace at which it can churn it out at.” (Vlad, 12:29)
  - Traditional human hallmarks (‘handwriting analysis’ or identifier patterns in malware code) are becoming obsolete.
  - "There’s not going to be any humans authoring this type of code at least...It’s going to be very different in terms of what it is and what it does. That’s going to be a big challenge." (Vlad, 13:34)
Memorable Quotes
- Sherrod: “Using the human sort of element of handwriting analysis of code for attribution is coming to an end.” (13:20)
- Vlad: “It creates like an anonymizing function for code.” (14:25)
4. Social Engineering & Infrastructure Automation (14:55-17:38)
- Phishing and Social Engineering:
  - Groups like Emerald Sleet are using AI to eliminate the usual phishing tells (spelling mistakes, awkward phrasing, etc.).
  - AI can also mimic writing styles, even producing emails similar to a known contact's.
  - “There won’t be spelling mistakes...that’s all gone. So what do you tell people to look for?” (Greg, 16:22)
- Infrastructure Setup:
  - AI can autonomously perform tasks such as domain registration, command-and-control setup, and server initialization.
  - This scale and diversity further complicate attribution and defense.
Memorable Quotes
- Sherrod: “You could even say, make sure it has misspellings. Make sure it doesn’t look too perfect. Cut the perfection down by 20% on grammar and spelling…” (16:22)
- Greg: “Imagine...take some blogs they’ve written, take some emails...say, hey, write it in the style of this person.” (17:16)
5. Attribution and Detection Challenges (13:20-17:49)
- AI-generated content allows near-infinite variation, breaking classic detection methods.
- As traces of human authorship vanish, detection systems and human instincts must adapt.
- “It’s going to be very different in terms of what it is and what it does.” (Vlad, 13:34)
6. The IT Worker Phenomenon (17:49-19:21)
- LLMs lower the effort barrier for DPRK IT operatives impersonating legitimate job-seekers.
- They now mass-generate resumes, cover letters, and LinkedIn profiles with ease.
- “There really is no jailbreaking. They’re just using LLMs to do a thing that they were actually designed to do.” (Greg, 17:49)
- This raises the challenge for employers and defenders globally.
7. Key Implications for Defenders (08:32-19:21)
- The "leveling" effect: AI enables less sophisticated threat actors to perform advanced tasks.
- “The emergence and the widespread availability of AI tooling is going to kind of level that playing field.” (Greg, 12:28)
- Blue teams must proactively hunt for new, AI-driven attack signatures and workflows.
- A marked shift is expected within the next year—defenders must brace for dramatic changes and increased tempo.
Notable Quotes & Memorable Moments
| Timestamp | Speaker | Quote |
|-----------|---------|-------|
| 02:32 | Vlad | “They’re just scaling operations and trying things that… we’ve never seen them do.” |
| 03:26 | Greg | “Yeah, I think it’s pretty far beyond an experiment at this point.” |
| 12:29 | Vlad | “The scary thing from a defender standpoint is... the variety that it can churn out and the pace at which it can churn it out at.” |
| 13:34 | Vlad | “There’s not going to be any humans authoring this type of code at least...It’s going to be very different in terms of what it is and what it does.” |
| 14:25 | Vlad | “It creates like an anonymizing function for code.” |
| 15:14 | Greg | “There won’t be spelling mistakes...that’s all gone. So what do you tell people to look for?” |
| 17:16 | Greg | “Imagine...take some blogs they’ve written...say, hey, write it in the style of this person.” |
| 17:49 | Greg | “There really is no jailbreaking. They’re just using LLMs to do a thing that they were actually designed to do.” |
Important Timestamps
- 00:05-01:18 — Episode intro, guest introduction
- 01:55-03:03 — Storm-1877, operational shifts, AI’s impact
- 03:03-04:57 — Workflow details & start-up mentality
- 06:06-07:53 — Transition and resistance to AI within threat actor communities
- 09:05-11:23 — The rapid advancement of AI autonomy; new attack paths
- 12:21-14:25 — Attribution is breaking down (malware, social engineering)
- 16:22-17:38 — AI-enabled spear phishing, mimicking writing style
- 17:49-19:21 — IT worker scams & abuse, scale of AI persona creation
Conclusion
This episode underscores the profound transformation underway as threat actors, especially from DPRK, operationalize AI at scale. From automating code, infrastructure, and social engineering to masking attribution and blurring the lines between human and machine, defenders are entering a new era of challenge. The conversation urges cyber defenders to adapt rapidly, rethink detection methods, and anticipate a surge of more agile, less predictable cyber threats as the AI ‘leveling effect’ takes hold.
For more resources and threat intelligence, listeners are directed to aka.ms/operationalizingaimisuse.
End of summary.