Transcript
A (0:05)
You're listening to the RSA Conference podcast,
B (0:07)
where the world talks security.
A (0:13)
Hello listeners. Welcome to this edition of our RSAC podcast series. Thank you for tuning in. I'm Tatiana Sanchez.
C (0:21)
And I'm Casey Zirkis and we are
A (0:24)
your RSAC podcast hosts. Casey, what are we going to discuss today?
C (0:29)
Well, we often discuss AI in the cybersecurity world as either a defense tool for organizations and users or an attack technique used by cybercriminals. However, we rarely take the time to examine how AI is affecting humans, especially our emotional intelligence. That's why we're excited to be joined today by Nancy Yuen, who will explore the gap between AI and emotional intelligence. We'll talk about what individuals can do to bridge that gap and how to effectively use AI to assist us in our professional and personal lives while keeping our brains exercised. Ready to dive right in, Tatiana?
A (1:12)
Yes, I'm really excited about this topic, but before we get started, we do want to remind our listeners that here at RSAC we host podcasts twice a month, and we encourage you to subscribe, rate, and review us on your preferred podcast app so that you can be notified when new tracks are posted. And now we would like to ask our guest to take a quick moment to introduce herself before we dive in. Nancy?
B (1:36)
Sure. Hi everyone, this is Nancy Yuen, and I am the head of SOX Financial Data and Reporting Regulatory Governance at SoFi Technologies. Thanks so much for having me. I am passionate about both topics, emotional intelligence and artificial intelligence and how they intersect, and about getting that learning out to everybody while learning myself. So I'm looking forward to this.
A (2:04)
Thank you, and thank you for being here with us today.
A (2:07)
Nancy, in a world where AI can flag every anomaly, score every risk, and automate every control, why do our biggest failures still come down to human behavior?
B (2:20)
In my research and even my personal use of AI, the biggest failures still come from the assumption that whatever we do with artificial intelligence will just work itself out without a human involved. That's what I call automation bias and over-reliance. We humans want to find the quickest pathway; just like an electrical circuit, we want the path of least resistance. What we tend to do with AI is trust machines. We trust machines to perform the work that we would typically do manually, but we also accept their outputs blindly. Especially given that path of least resistance, we are always facing time pressures, and that leads us to over-rely on the critical judgment we assume the AI will have and to turn the AI into a point of reliance.

There's also the concept of the black box problem. Many AI models that we all use daily, often in our work and in our personal lives, operate without any transparency. It's really hard to understand how they made a decision or how they produced an output. That opacity really makes it impossible for you and me to double-check their logic, to challenge them, and to double-check how they have integrated ethics into their decision making.

The other thing we want to talk about is contextual understanding. Does the AI understand nuance? Does it understand culture? Does it have a sense of bias? And what is the ultimate why driving the data it's producing? That's where humans like us come in, like the three of us, so that we can understand, interpret, and really direct the AI instead of having it direct us.
