RSAC Podcast Episode Summary: “Bridging Artificial and Emotional Intelligence in Audit”
Date: February 11, 2026
Hosts: Tatiana Sanchez & Casey Zirkis
Guest: Nancy Yuen – Head of SOX Financial Data and Reporting Regulatory Governance, SoFi Technologies
Episode Overview
This episode explores the intersection of Artificial Intelligence (AI) and Emotional Intelligence (EI) in the context of audit and cybersecurity. The hosts and guest Nancy Yuen tackle the potential pitfalls of over-reliance on automation, the irreplaceable role of human nuance and ethical judgment, and practical strategies for embedding emotional intelligence into both organizational culture and technology design.
Key Discussion Points & Insights
1. The Human Factor in Automated Worlds
Timestamp: 02:07 – 04:37
- AI is excellent at flagging anomalies, scoring risks, and automating controls, but human failures often stem from behavioral issues that automation can't solve.
- There's a tendency toward automation bias and over-reliance—trusting machine outputs blindly and seeking the path of least resistance.
- The "black box problem": Most AI models lack transparency, impeding users from understanding or challenging outcomes.
- Quote:
“We humans want to find the quickest pathway, just like an electrical circuit... But we also would accept [AI] outputs blindly.”
— Nancy Yuen, 02:31
2. Automation vs. Human Judgment
Timestamp: 05:29 – 08:56
- Automation is exceptional at routine high-volume, data-heavy tasks.
- Critical human functions remain: handling exceptions, managing ethical dilemmas, and exercising judgment and empathy.
- Over-automation disables “relational controls”—intuition, context awareness, emotional intelligence (EI).
- Heavy reliance on AI risks “mutual atrophy” of human brain functions like self-awareness and empathy.
- Loss of these skills impairs our readiness for real-world challenges where AI isn’t present.
- Quote:
“Over automation can disable the relational controls that we humans need to have. And this would include your human intuition, that gut feeling, the ability to... read the room.”
— Nancy Yuen, 06:39
3. Psychological Safety and Human Risk
Timestamp: 08:56 – 14:07
- AI cannot detect subtle human factors like fear of reporting or team cultures that hide mistakes.
- Psychological safety—ensuring honesty is valued over perfection—is essential for surfacing hidden risks.
- Shifting from blame-based to learning-oriented organizational cultures enables risks to surface earlier and supports growth.
- Admission of weakness and curiosity should be normalized and encouraged.
- Leaders play a critical role in modeling vulnerability and normalizing mistakes.
- Quotes:
“Having a human nature means making mistakes, means being imperfect. And…especially when your workplace and your management leadership...require you to be perfect, you are going to be scared. And that’s where psychological safety…and psychological security is requiring this transformation.”
— Nancy Yuen, 09:43
“The faster we break things, the faster we make mistakes is the faster we find solutions. Please normalize not knowing—the curiosity of a child is how it learns.”
— Nancy Yuen, 12:45
4. Applying Emotional Intelligence in AI Development
Timestamp: 14:07 – 21:23
- Emotional Intelligence (EI) components:
- Self-Awareness: Recognizing one’s own emotions and context.
- Self-Regulation: Managing emotional responses to situations.
- Motivation: Adapting and acting constructively.
- Empathy: Understanding how others perceive or feel about our actions.
- Social Skills: Effectively navigating interpersonal situations—described as the "final exam" of EI.
- Designers and users of AI systems must anticipate misuse, unintended consequences, and continually question system outputs.
- Feedback loops in AI development mirror human learning—regularly informing and correcting the system.
- Human oversight must keep pace with the speed of tool adoption, particularly in sensitive contexts like healthcare and security.
- Quote:
“When we are instructing the machine to perform certain actions, it’s really important…to set boundaries…These are our values…With AI, we need to teach it…”
— Nancy Yuen, 18:01
“The rate at which we use AI is now outpacing the rate of development. And that’s something important, especially when we’re considering it’s humans at the other end.”
— Nancy Yuen, 20:45
Notable Quotes & Memorable Moments
- On blind reliance:
“We trust machines…But we also would accept their outputs blindly.” — Nancy Yuen, 02:36
- On over-automation:
“That empathy of human emotional intelligence…is required to identify those risks that the AI is not programmed to recognize.” — Nancy Yuen, 07:03
- On leadership and safety:
“When leaders start to [admit mistakes and uncertainty], you’re normalizing that it’s okay to make mistakes—and please make mistakes.” — Nancy Yuen, 11:24
- On self-awareness and neurodiversity:
“I myself have high-functioning autism. Social skills were not my strong suit. And this is an area I need to build up…We need to develop that social skills.” — Nancy Yuen, 17:11
Segment Timestamps
- Introduction: 00:05 – 01:36
- Nancy Yuen Introduction & Framing: 01:36 – 02:07
- AI, Automation Bias, and Human Over-Reliance: 02:07 – 04:37
- Automation vs. Human Relational Controls: 04:37 – 08:56
- Psychological Safety in Organizations: 08:56 – 14:07
- Bridging EI and AI in Practice & Design: 14:07 – 21:23
- Closing Remarks: 21:23 – End
Tone & Language
Nancy brings an insightful, thoughtful, and often candid tone—frequently interweaving personal experiences to ground complex concepts. The conversation mixes technical clarity with accessible analogies (e.g., parenting, classroom questions), making the insights highly relatable.
Key Takeaways
- Automation must be balanced with human oversight—AI lacks the nuance, context, and emotional understanding that can surface hidden risks and foster resilient organizations.
- Leaders must champion psychological safety, normalize curiosity and mistake-making, and intentionally foster EI skills.
- As AI adoption accelerates, a parallel investment in EI—at every level from tool design to user training—will be key to sustainable, ethical, and effective technology implementation.
Summary for Non-Listeners: If you haven’t heard the episode, expect a deep and practical exploration of how humans and AI must co-evolve in audit and security roles—a compelling case for stronger emotional intelligence in an increasingly automated world.
