Transcript
A (0:01)
Welcome to the To the Point Cybersecurity Podcast. Each week, join Jonathan Knepher and Rachael Lyon to explore the latest in global cybersecurity news, trending topics, and cyber industry initiatives impacting businesses, governments, and our way of life. Now, let's get to the point. Hello, everyone. Welcome to this week's episode of the To the Point podcast. Hi, I'm Rachael Lyon, here with my co-host Jon Knepher. We're excited to welcome back Ed Gaudet for part two, continuing our conversation. He is the CEO and founder of Censinet, which has developed the first and only collaborative cloud platform and exchange for enterprise and third-party risk management in healthcare. He has more than 25 years of software experience, including serving as CMO at Imprivata and holding senior executive roles across a number of innovative startups and public software companies. He also holds patents for mobile and quorum-based authentication, secure content sharing, and managing data objects in a distributed context. So, without further ado, let's get to the point.
B (1:10)
You know, I'm really intrigued by the risk here, right? I get the traditional security risk assessment, but how do you analyze the output risk, if you will? You've got all the normal things to do, but AI is almost creative; it's non-deterministic. How do you figure out whether the outcomes and outputs of the system are useful? Does the value added exceed the risk, and how does that fit into the risk framework you're talking about?
A (1:50)
Well, and to piggyback on that: who's assessing this, right? I mean, that's the other part, Jon. Yeah, I'm excited for this answer, Ed.
C (1:59)
Yeah, it's a hard answer, because on one hand it starts off with really understanding the use case, right? And understanding the outputs in a way that enables you to validate and verify efficacy. I'll give you an example. If you're leveraging AI to generate content, let's say, and your brand, the reputation and the risk to brand reputation, is important to you, well, you'd better not send out that content without reviewing it.
A (2:47)
Right?
C (2:48)
Right. So there's a guardrail that still has to be in place. And this is why I think you see this adoption spiking, then slowly cresting, and now coming into this realization that, wow, we can adopt quickly, but we'd better make sure that maybe we take a pilot approach to this, so we really understand the risks, not just the cyber risk but the data risk. So, understanding in that example the content: great, you can generate all this content, but if you send out something that's offensive, whether in the written word or images or whatever, you run the risk of reputational damage. Right? So it's a very similar approach: you can generate diagnoses, you can generate care, but if someone's not looking at that with an eye to efficacy and quality, you run the risk of doing the wrong thing. So I think probably one of the largest and most adopted use cases in healthcare right now is ambient listening. Ambient listening, for those of you not aware of it: in a typical session with a doctor, there's a conversation between the doctor and patient, and the doctor is interacting with technology in some way. Sometimes the doctor has their head in the screen, sometimes they're using an iPad, but they're capturing notes, capturing that interaction electronically, so they can go back and review those notes and maybe create a care plan. Ambient listening removes that distraction. Now the doctor, the caregiver, the clinician, can have a face-to-face conversation like we're having, and not look at the keyboard or the screen, because the ambient listening is pulling in the conversation details and then creating summary notes. Now, the doctor is still responsible for those notes. They're his or her notes.
