To The Point Cybersecurity Podcast
Breaking Down the Human Side of Advanced Cyber Attacks and Social Engineering with Dr. Margaret Cunningham – Part 2
Date: September 23, 2025
Host(s): Rachael Lyon, Jonathan Knepher
Guest: Dr. Margaret Cunningham, Technical Director, Security and AI Strategy, Darktrace
Episode Overview
This episode dives deep into the human factors that influence cybersecurity, focusing on the psychology behind cyber attacks, successful defense strategies, and the challenges of measuring success. Dr. Margaret Cunningham brings expertise from applied experimental psychology and behavioral analytics, shedding light on the complexity of human-centered security, the fallibility and resilience of individuals, and how emerging technologies like AI both help and complicate daily operations. The discussion also touches on effective metrics, the realities of AI and machine learning in security, and the future of human expertise alongside automation.
Key Discussion Points & Insights
1. The Simplicity (and Pitfalls) of Human-Focused Security
- Family Safe Words – The hosts joke about how setting up basic family protocols like safe words is simple and effective, but even here, people frequently choose the same, easily guessable words.
- Notable Moment: “[Safe words are] probably like pickle, pizza, hot dog, banana... but the most common is actually ‘pineapple.’” – Dr. Margaret Cunningham [02:13]
- Human Fallibility vs. Hidden Resilience
- Dr. Cunningham shares her obsession with understanding how things break and human error in security, questioning why, given so many failure opportunities, we don't see even more disasters.
- She suggests we undercount "protective factors and positive behaviors" that individuals and organizations enact, often invisibly, to keep things safe.
- Quote: "This has convinced me that we're under counting a lot of the resilient factors and positive behaviors that people are doing... we really should be seeing a lot more failure." – Dr. Margaret Cunningham [04:17]
- Celebrating Success, Not Just Avoiding Failure
- There is a tendency to only focus on breaches or failures, while ignoring the analysis of “why things go right.” Margaret champions root cause analysis of successes.
- Quote: “The ratio of where we spend our effort is identifying problems... instead of identifying successes. We just don't spend time there. And I think that we should.” – Dr. Margaret Cunningham [04:54]
2. Metrics, Mindset, and Measuring Security
- Toxic Metrics and the Wrong Incentives
- Traditional metrics (e.g., number of incidents handled) can actually disincentivize progress. If incident numbers go down because better detection or automation was put in place, it’s often seen negatively.
- Quote: “The way we count things and... message value for team performance... can be very toxic for actually creating a successful security program.” – Dr. Margaret Cunningham [07:19]
- The Need for a Mindset Shift
- The industry lacks easy, communicable metrics for positive outcomes (e.g., "we had fewer breaches because we did our jobs better"), especially ones non-technical stakeholders understand.
- Solutions include mapping to business-centric metrics—like changes in cybersecurity framework scores or overall risk reduction—but this demands “exceptional leadership.”
- Start Small, Experiment, and Build Momentum
- Don’t be discouraged by the scale of potential changes; even minor changes or experiments can shift culture and perception. Momentum is built through “small wins,” even in reporting or metric selection.
- Quote: “Having the courage to start small is kind of harder than it sounds, but definitely worthwhile.” – Dr. Margaret Cunningham [13:07]
3. Adapting Security Approaches for the Modern Threat Landscape
- Consistency and Core Security Principles
- Despite changing attack vectors (deepfakes, social engineering, voice cloning), many threats are “fancier versions of things we’ve seen before.”
- Margaret urges grounding responses in established best practices like account access management and zero trust, rather than chasing the “new and scary.”
- Quote: “A lot of the basic security posture, zero trust mindset… is still so deeply relevant that if you can help people get over the ‘this is brand new, this is very scary’ moment... a lot of them are still the same.” – Dr. Margaret Cunningham [14:23]
- Challenges for Small Businesses
- Scaling enterprise-grade security tools to smaller organizations is a major gap. Many small orgs are left exposed because available tech isn’t tailored or priced for them. Outsourcing via MDRs or consultants is only a limited fix.
- Quote: “We have a significant gap on sophisticated tooling for small businesses that is affordable and maintainable. It’s going to be a problem.” – Dr. Margaret Cunningham [16:47]
- Memorable Moment: “My most out-of-date hardware is like in my body... our brains are not creating new pathways and updating like there’s no new chip... I’m not putting the chip in when it does come.” – Dr. Margaret Cunningham [17:39]
4. The Intersection of Human Psychology and Cybersecurity
- Margaret’s Journey from Psychology to Cybersecurity
- Dr. Cunningham’s obsession with human performance led her from Homeland Security into security R&D. Her path was non-traditional—there are rarely explicit job titles for applied psychologists in cybersecurity, but the need is real.
- Quote: "I have started referring to myself as a 'backend psychologist.'" – Dr. Margaret Cunningham [20:45]
- Psychology Beyond ‘User Awareness’
- Her expertise isn’t in front-end training, but backend architecture and cognitive issues in secure coding and system design.
5. AI, Automation, and Their Cognitive Impacts
- The Limitations of Generative AI in Security
- Generative AI (e.g., LLMs) often promises more than it delivers. While it creates content, it does not reduce cognitive load—it frequently gives humans even more to review and question, especially because of its non-deterministic outputs.
- Quote: “Generative AI tooling does not take work off of people's table... It's one of the more deceptive types of helpfulness because ultimately it's not reducing the cognitive load for people.” – Dr. Margaret Cunningham [23:23]
- Quote: "Sometimes I feel like it's empty calories in a way, what I get out of Gen AI, you know, I'm not satiated..." – Rachael Lyon [25:43]
- AI/ML Definitions and the Psychology of Expectation
- There’s confusion around what’s truly “AI” versus “just ML.” Generative systems exploit human reward anticipation (the pull of the nucleus accumbens, the brain’s reward center), creating an addictive feedback loop even when the answers are suboptimal.
- Quote: “…that idea that you're going to get the magic is so engaging for people that we return to the same patterns over and over and over again. Kind of like hunting for that hot tip from ChatGPT.” – Dr. Margaret Cunningham [26:41]
- Risks of Automation and Loss of Expertise
- Over-reliance on automated tools may erode expertise, hamper critical skill development, and create future training data sets of “bad AI output.” Feedback loops of automation could result in diminishing human skills and creativity.
- Quote: “Just going through the motion is how we learn. Right. Like, and if you're not doing that… you're going to distill and lose the creativity...” – Jonathan Knepher [34:45]
- Calibrating Trust in AI
- Both over-trust (AI natives) and under-trust (experienced pros) pose risks. Striking the right balance is still an open challenge.
6. Human and Machine Collaboration: The Road Ahead
- Agentic Workflows and the Challenge of Transparency
- With multi-agent AI systems and automated workflows, accountability and traceability become murky (“no audit log for cognition”). High reliability demands from traditional engineering clash with the messiness of generative and language-based AI tools.
- Quote: “…when we think about critical dependencies in AI infrastructure, a lot of things we've demanded in the past, our repeatability, the ability to reverse or undo something. And with generative systems and natural language, we strip away a lot of those capabilities and expectations for high reliability systems.” – Dr. Margaret Cunningham [32:15]
- Looking to the Future
- Both hosts and guest are excited yet cautious about how AI and human expertise will coevolve. The field is rapidly changing, full of promise and pitfalls.
- Quote: "I've learned more things and then had to unlearn a lot of things... And to me that makes it really, really fun and engaging." – Dr. Margaret Cunningham [38:56]
Notable Quotes & Memorable Moments
- On Safe Words: “It’s pineapple.” – Dr. Margaret Cunningham [02:13]
- On Failure and Resilience: “Ultimately there are things that people are doing that we might not be aware of that are serving as protective factors both at an individual and organizational level.” – Dr. Margaret Cunningham [04:29]
- On Success Metrics: “The nuance of ‘we saved a lot of money by revamping our detection engineering which reduced our need to respond to incidents by 50%’ is actually a pretty sophisticated conversation...” – Dr. Margaret Cunningham [07:45]
- On AI Cognitive Burden: “Right now, if we're talking about generative AI, I think it's one of the more deceptive types of helpfulness because ultimately it's not reducing the cognitive load for people.” – Dr. Margaret Cunningham [23:23]
- On AI Addiction: “That idea that you're going to get the magic is so engaging for people that we return to the same patterns over and over and over again.” – Dr. Margaret Cunningham [26:41]
- On Skill Loss: “Just going through the motion is how we learn… and if you're not doing that… you're going to distill and lose the creativity...” – Jonathan Knepher [34:38]
Timestamps for Important Segments
- [01:06] – Human-centered security: Safe words & family protocols.
- [04:00] – Hidden resilience and why success is underappreciated.
- [07:00] – Flaws in traditional security metrics and their impact.
- [13:00] – Building momentum with small, meaningful changes.
- [15:00] – SME security needs vs. current tooling gaps.
- [18:00] – Dr. Cunningham’s personal journey into cybersecurity.
- [23:00] – Why Gen AI frequently increases, not decreases, cognitive load.
- [26:36] – The behavioral science behind Gen AI’s feedback loops.
- [33:10] – The auditability crisis in agentic/multi-agent AI workflows.
- [34:45] – The emerging risk of de-skilled professionals in an automated future.
- [38:56] – Final reflections: rapid learning and the fun of adaptation.
Conclusion
This episode emphasizes the complex interplay between human psychology, security culture, and technology—particularly AI. Dr. Cunningham argues that while “doom and gloom” comes naturally in cyber, organizations should also seek out and amplify examples of success. As automation and AI become more prevalent, leaders need to foster balanced, evidence-driven approaches, recognize the limits of so-called intelligence, and remain vigilant about the human skills at the core of resilience.
To stay updated on future episodes and insights from top cybersecurity experts, subscribe on your favorite platform.
