Transcript
Greg Otto (0:02)
What security professionals can learn about identity from Amazon on this episode of Safe Mode. Welcome to Safe Mode. I'm Greg Otto, editor-in-chief at CyberScoop. Every week we break down the most pressing security issues in technology, providing you the knowledge and the tools to stay ahead of the latest threats, while also taking you behind the scenes of the biggest stories in cybersecurity. An attack is coming.
Stephen Schmidt (0:24)
It's about keeping us safe. He's just a disgruntled hacker.
Greg Otto (0:27)
She's a super hacker. Stay alert, stay safe, stay safe. This is Safe Mode. Welcome to this week's episode of Safe Mode. I am your host, Greg Otto. In our interview segment this week, we're going to be talking to Stephen Schmidt, the chief security officer of Amazon. Stephen put a really interesting blog post out a couple weeks ago that talked about how Amazon handles identity management across AWS, specifically with its Midway tool. Really good conversation about what security professionals can do when thinking about scaling up and how identity fits into that. But first, I'm talking with Derek Johnson, reporter for CyberScoop. Last week it was election security. This week we've swung the pendulum the other way in your beats, talking about AI, specifically the move over the past month by the big AI companies into healthcare. In January, OpenAI announced ChatGPT Health, and Anthropic and Google followed with their own healthcare-focused products. And you know, when we saw these announcements, we said, okay, there's some examination to be done here. And you really focused on the privacy side of things and how privacy factors into these products coming out. So you talked to a bunch of experts. What did you find?
Derek Johnson (1:58)
Yeah, and I think, you know, what we found, or what we wanted to focus on, because if you look at these apps and the large language models that they're based on, the flaws or the security vulnerabilities are going to be largely what you'd expect and what we've covered with previous AI tools, right? There's a propensity for data leakage, there's a propensity for prompt injection and vulnerability to other things like that. But what we really wanted to focus on was the legal protections around this data, because one of the things all of these companies did in the rollout was really emphasize the way they were securing your data. There are large sections on OpenAI's site, for instance, that go into all the things they're doing to protect your data and the partnerships they have with other entities. But the tricky thing is that those protections are not backed by the force of law the way your healthcare data at your doctor's office or at a hospital is, because that data is protected under a law called HIPAA. The HIPAA Security Rule essentially requires regulated entities to take reasonable steps to secure their patient and medical records and data, but the lawyers and healthcare experts that we talked to said that these tech companies are almost certainly not covered under HIPAA.