RSAC Podcast Episode Summary
Podcast: RSAC
Host: Tatiana Sanchez
Guest: Alex Holden, CISO of Hold Security LLC
Episode Title: I vs AI
Date: January 26, 2026
Episode Overview
This episode of the RSAC Podcast, hosted by Tatiana Sanchez, dives into the evolving role of artificial intelligence (AI) in the hands of cybercriminals and the complex challenges this creates for both organizations and individuals. Guest Alex Holden, a renowned cyber threat intelligence expert, shares insights drawn from direct observation of cybercriminal communities, highlighting real-world attack examples, organizational defense strategies, and vulnerabilities in current AI deployments.
Key Discussion Points & Insights
1. AI as a Double-Edged Sword in Cybercrime
(02:28–04:44)
- AI adoption outpaces defenses: Cybercriminals are adopting AI rapidly, often more efficiently than legitimate organizations.
- AI as both tool and accomplice: Criminals use AI for everything from crafting phishing emails to coordinating ransomware — tasks that now require fewer people and are executed with greater speed and sophistication.
- Quote:
“AI is becoming a tool, a malleable tool in hands of cybercriminals… in their hands, AI is not only a tool, but it's also an accomplice.”
— Alex Holden (02:32)
- Erosion of traditional defenses: AI neutralizes old heuristics (like spotting phishing via misspellings), enabling attackers to convincingly mimic cultural and linguistic nuances.
- Scalability and velocity: Attacks become faster, harder to detect, and possible at much greater scale, enabling even solo actors to run operations that previously required large teams.
2. Real-World AI-Driven Attacks
(05:37–09:40)
- Corporate schemes:
- AI aids attackers in understanding and manipulating business processes, not just technology, making process-based vulnerabilities far easier to exploit.
- Unlimited attempts: AI enables mass automation of social engineering (e.g., AI-powered help desk impersonation) at negligible cost per attempt.
- AI can autorespond, learn context, and maintain rapport, making phishing and BEC (business email compromise) attacks much more convincing.
- Individual targeting & romance scams:
- AI enables nuanced, ongoing dialogues for romance scams. These scams craft complex, believable stories, provide fake documentation, and emotionally manipulate victims beyond what humans alone could achieve.
- Quote:
“In most cases even examining conversations, communications, it's not obvious to experts to see if this is a real person looking for romance or if this is an AI driven scam.”
— Alex Holden (08:59)
3. Victim Recovery and Institutional Safeguards
(09:40–11:38)
- Bank responses: Some financial institutions train staff to spot and counsel potential scam victims, but online, attackers use AI to walk victims through money transfer steps, often circumventing offline safeguards.
- Quote:
“AI would actually give them step by step instructions in order to transfer the money... providing technical support from the bad guys to a victim…”
— Alex Holden (11:14)
- Recovery of lost money is rare once funds are transferred electronically or converted to cryptocurrency.
4. Proactive Organizational Defenses against AI Attacks
(12:08–14:55)
- Beyond technical controls: It’s essential to scrutinize not just tech but also business processes — especially those influenced by third parties or customers — for avenues of manipulation.
- Behavioral safeguards: Use behavioral markers (e.g., unnatural response speed) to spot AI-driven attacks. Insert hidden prompts or challenges AI would notice but humans would not (e.g., complex math problems).
- Quote:
“They put a complex math problem... invisibly into a human into a problem. And in response an AI sees it and it gives a number. Well guess what? Most people I know don't know this off the top of their head and they would not be able to respond in 10 seconds. AI does.”
— Alex Holden (13:25)
- Balance: Defend without disrupting customer service or business flow.
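The hidden-challenge defense described above can be sketched in a few lines. This is a minimal illustration, not a method from the episode: the challenge text, the answer check, and the 10-second threshold (taken from Alex's "respond in 10 seconds" remark) are assumptions chosen for clarity.

```python
# Illustrative sketch: flag a chat participant as likely AI-driven if it
# both solves a challenge hidden from human view AND replies faster than
# a human plausibly could. All values below are hypothetical.

MIN_HUMAN_RESPONSE_SECONDS = 10.0          # humans rarely answer this fast
HIDDEN_CHALLENGE = "What is 7919 * 6833?"  # embedded invisibly (e.g., hidden text)
HIDDEN_ANSWER = str(7919 * 6833)           # "54110527"

def looks_like_ai(reply_text: str, response_seconds: float) -> bool:
    """Return True when the reply solves the hidden math challenge
    and arrives unnaturally fast -- a human would not see the
    challenge at all, let alone answer it in seconds."""
    solved_hidden_challenge = HIDDEN_ANSWER in reply_text
    unnaturally_fast = response_seconds < MIN_HUMAN_RESPONSE_SECONDS
    return solved_hidden_challenge and unnaturally_fast
```

Requiring both signals, rather than either one alone, keeps false positives low for fast human typists and for slow bots, matching the episode's emphasis on defending without disrupting legitimate customers.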
5. Incident Response to Successful AI Attacks
(15:26–16:44)
- Follow classic incident response protocols — detect anomalies, investigate, contain — but update playbooks for AI-driven scenarios.
- Prepare for complexity: AI-driven incidents may lack the obvious “errors” or tell-tale signs of traditional attacks.
- Quote:
“In your incident response playbook, put in components. Consider scenarios that would be AI driven and how you would be investigating these things.”
— Alex Holden (16:02)
6. AI as an Attack Surface: Defending Your Implementation
(17:09–21:46)
- AI deployments are under-tested: Most organizations do not subject AI systems to the same security evaluation as traditional software (e.g., pen testing, patch management).
- Quote:
“Very few AI implementations actually undergo proper security evaluation overall… and because this is not just a piece of software, it is a complex solution… The data may be there and maybe it's said to be protected. But how much of a vulnerability is there?”
— Alex Holden (18:44)
- Vulnerabilities in AI:
- AI can leak data due to exploitation or poor configuration.
- Social engineering can trick AI into revealing confidential information.
- Criminals value “AI manipulators” highly for their ability to exploit these systems.
- Urgent need for standards: Alex urges adopting robust testing, red teaming, and defensive postures for AI, warning that current progress is slow and fragmented.
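The red teaming Alex calls for can begin with checks as simple as replaying social-engineering-style prompts against an AI system and scanning its responses for strings that must never leave it. The sketch below is an assumption-laden illustration: `query_ai` and the confidential-marker list are hypothetical placeholders, not part of any real product or the episode's content.

```python
# Illustrative red-team harness: send adversarial prompts to an AI system
# and record which ones cause confidential markers to leak.
# CONFIDENTIAL_MARKERS and query_ai are hypothetical stand-ins.

from typing import Callable

CONFIDENTIAL_MARKERS = ["ACME-API-KEY", "internal-payroll", "ssn:"]

def leaked_secrets(response: str) -> list[str]:
    """Return any confidential markers present in an AI response."""
    lowered = response.lower()
    return [m for m in CONFIDENTIAL_MARKERS if m.lower() in lowered]

def run_red_team_prompts(
    query_ai: Callable[[str], str], prompts: list[str]
) -> dict[str, list[str]]:
    """Replay each prompt and map it to the secrets it exposed, if any."""
    findings: dict[str, list[str]] = {}
    for prompt in prompts:
        leaks = leaked_secrets(query_ai(prompt))
        if leaks:
            findings[prompt] = leaks
    return findings
```

A real evaluation would go well beyond substring matching (paraphrased leaks, encoded data, multi-turn manipulation), but even this minimal loop makes AI output testable the way pen testing made traditional software testable.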
Memorable Quotes
- “AI is not only a tool, but it's also an accomplice… attacks are coming out faster, they are more sophisticated, and the knowledge of cybercriminals is vastly improved.”
— Alex Holden (02:41)
- “We are being bombarded by many AI components in the hands of really evil people. And it's hard to tell the difference and it's not very easy to defend against it.”
— Alex Holden (09:25)
- “…in many cases way for us to succeed, to progress. So AI is vulnerable, your implementation of AI may be vulnerable and you should take necessary steps to defend against it.”
— Alex Holden (21:13)
- “Hopefully it becomes much better science like we have right now with pen testing, red teaming and many other things. But in my view we are getting into this very slowly and not always effectively…”
— Alex Holden (21:25)
Key Takeaways
- Cybercriminals are adopting and weaponizing AI with alarming speed and creativity.
- Traditional security awareness measures (e.g., phishing detection) are outpaced by AI’s capabilities.
- Attacks can now be more targeted, persistent, and personalized — both for organizations and individuals.
- Proactive defense requires scrutiny of business processes and innovative behavioral defense techniques.
- Amend incident response and risk frameworks to account for AI-driven attacks and failures.
- Current AI implementations often lack robust security vetting, pen testing, and ongoing monitoring — a critical, unaddressed risk.
Timestamps for Important Segments
- [02:28] — AI as an “accomplice” and the evolution of attack sophistication
- [05:37] — Corporate and individual attack examples, AI's role in scaling social engineering
- [09:40] — Financial recourse for romance scam victims and banking safeguards
- [12:08] — Strategies for organizational defense and behavioral anomaly detection
- [15:26] — Incident response best practices in the AI era
- [17:09] — Defending AI implementations; security gaps and needed actions
This episode delivers a pragmatic, sometimes sobering look at how AI is transforming the cyber threat landscape, offering actionable guidance for fortifying organizational defenses and highlighting the urgent need to mature AI’s security posture. Alex Holden’s real-world perspective helps both cybersecurity practitioners and leadership grasp the stakes, challenges, and concrete steps needed right now.
