CyberWire Daily (Microsoft Threat Intelligence Podcast):
Episode: AI as Tradecraft: How Threat Actors Are Operationalizing AI
Date: March 12, 2026
Host: Sherrod DeGrippo | Guests: Greg Schlommer (Threat Intelligence Analyst, Microsoft), Vlad (Threat Intelligence Analyst, Microsoft)
Episode Overview
This episode dives deep into one of the most consequential topics in cybersecurity today: how threat actors are integrating and operationalizing AI into their tradecraft. Sherrod DeGrippo hosts a candid roundtable with Microsoft threat intelligence researchers Greg Schlommer and Vlad. The discussion covers the rapid evolution of AI-enabled attack methods, especially among North Korean (DPRK) groups, the new challenges facing defenders, and what the future may hold as AI continues to accelerate threat actors' capabilities.
Key Discussion Points & Insights
1. The Rise of AI in Threat Actor Workflows
[01:55-03:03] Vlad introduces Storm-1877 & AI Integration
- Storm-1877, a financially motivated group tracked by Microsoft for about three years, has dramatically increased its activity and the diversity of its tactics in recent months.
- The group now iterates attack vectors rapidly, scaling up what works and discarding what doesn't, "like a startup."
- Major shift: AI is now a core component of every step of their workflow, driving this acceleration.
- Quote:
- “We're just seeing them accelerate in a very fast way... iterating, starting with a new form of attack, a new vector, quickly testing it in the wild and moving on if it doesn't work and expanding if it does.” — Vlad [01:55]
2. AI Adoption: Experimentation or Full Integration?
[03:03-04:57] Evolution beyond Experimentation
- Among key DPRK groups (notably Jasper Sleet and Storm-1877), AI adoption has moved well beyond early experimentation.
- DPRK operators are described as "scrappy" and highly adaptable, leveraging AI to:
- Iteratively test and refine TTPs (Tactics, Techniques, Procedures)
- Rapidly deploy new attack methods
- Different groups adopt AI in distinct ways:
- IT worker groups have more latitude to experiment with AI, while bureaucratic, intelligence-focused orgs (e.g., Citrine Sleet) are slower to incorporate new tools due to rigid structures.
- Quote:
- “It's really interesting to watch just how much they operate like you would expect a startup to operate, where... small groups [have] freedom to experiment and do their own thing.” — Vlad [04:29]
3. AI Use Cases: Malware, Infrastructure, and Social Engineering
[09:05-14:55] Rapid Maturation and Pervasiveness
- AI-generated malware is operational ("in the wild"), not just experimental.
- Full, end-to-end malware authored by AI tools is now observed.
- AI increases variety and velocity, undermining traditional detection and attribution strategies:
- Human pattern recognition is disrupted: AI-generated code carries no consistent "handwriting."
- Attribution by code structure and style is much less reliable.
- Quote:
- “If they can change what it is, what it does, what it looks like three times a week... that almost becomes nigh impossible because there's no human hand to leave those traces.” — Vlad [12:29]
- AI Agents are automating infrastructure tasks:
- Registering domains, setting up command and control servers, and more.
- Social Engineering gets a boost:
- AI-assisted phishing removes classic red flags: no more spelling mistakes or awkward grammar.
- Phishing lures can be generated in local languages, matched to victims, even mimicking known styles or individuals.
- Quote:
- “There won't be spelling mistakes... There won't be issues that you may encounter from having a non-native English speaker building the lure. That's all gone. So what do you tell people to look for?” — Greg Schlommer [16:22]
4. The End of "Handwriting Analysis" in Attribution
[13:20-14:25] Attribution Complications
- Analysts historically used idiosyncratic code patterns for attribution ("handwriting" in code).
- With AI, that signal is effectively anonymized: malware can look different in every iteration, leaving no unique human traces.
- Even common infrastructure patterns (e.g., preferred registrars/domains) are being randomized by AI scripts.
- Quote:
- “It creates like an anonymizing function for code.” — Sherrod DeGrippo [14:25]
- “You no longer can spot a pattern of, okay, well, this group uses these guys and so on.” — Vlad [14:28]
5. AI in Building Human Personas for Espionage and Fraud
[15:14-18:53] Real-World Social Engineering & Scale
- AI is used to craft resumes, LinkedIn profiles, cover letters, and other artifacts for fraudulent job-seeking personas (notably by "Jasper Sleet").
- These activities don't require jailbreaking—they're a legitimate use of LLMs, which greatly increases scale and capacity for social engineering and infiltration.
- Quote:
- “They're building resumes, they're populating stuff on a LinkedIn page, they're writing cover letters... There really is no jailbreaking. They're just using LLMs to do a thing that [LLMs] were designed to do.” — Greg Schlommer [17:49]
6. Defensive Implications and the Road Ahead
[08:32, 12:21] Defender Perspective & Future Challenges
- The rapid adoption of AI by less-advanced threat actors "levels the playing field."
- Traditional detection, threat hunting, and attribution methods must adapt quickly.
- Defender teams must prioritize proactive research into new AI-driven techniques.
- AI-generated content and infrastructure remove classic indicators, demanding new security paradigms.
- Quote:
- “I expect that the emergence and the widespread availability of AI tooling is going to kind of level that playing field. I think we're going to start to see the actors that...we've assessed to be less capable start to demonstrate more agility, more ability to carry out highly targeted operations, more advanced tooling, more advanced malware.” — Greg Schlommer [11:28]
Notable Quotes & Memorable Moments
- “We're just seeing them accelerate in a very fast way... iterating, testing, and expanding if it works.” — Vlad [01:55]
- “They operate like a startup... allowed to have this freedom to experiment and do their own thing.” — Vlad [04:29]
- “If they can change what it is, what it does, three times a week... that becomes nigh impossible [to attribute].” — Vlad [12:29]
- “Phishing emails: there won't be spelling mistakes, no more non-native speaker errors... What do you tell people to look for?” — Greg Schlommer [16:22]
- “They're building resumes, LinkedIn pages... just using LLMs to do a thing they were designed to do [for espionage].” — Greg Schlommer [17:49]
- “I think we're just at the start of it. If you ask me again in six months, I would say absolutely [there will be more change].” — Greg Schlommer [08:32]
Timestamps of Key Segments
- Storm-1877 & AI acceleration: 01:55–03:03
- DPRK's scrappiness and startup mentality: 03:26–04:57
- AI-enabled malware & infrastructure discussion: 09:05–14:55
- Attribution and code "anonymization": 13:20–14:25
- AI in social engineering personas: 15:14–18:53
- Obsolescence of traditional indicators: 16:22
- Defensive outlook and leveling effect: 11:28, 08:32
Conclusion
This episode paints a vivid and cautionary picture: threat actors, especially decentralized, financially motivated groups like those from North Korea, are rapidly operationalizing AI to speed up, anonymize, and scale cyberattacks. The lines between sophisticated crime and state operations are blurring, and defenders must rethink detection, attribution, and protection in the face of AI-enabled adversaries.
For more resources: Visit AKA.ms/OperationalizingAIMisuse