
In this episode, we sit down with Stephen Schmidt…
A
What are states saying about their relationship with CISA when it comes to election security? We'll talk about it on this episode of Safe Mode. Welcome to Safe Mode. I'm Greg Otto, editor in chief at CyberScoop. Every week we break down the most pressing security issues in technology, providing you the knowledge and the tools to stay ahead of the latest threats, while also taking you behind the scenes of the biggest stories in cybersecurity.
B
An attack is coming.
A
It's about keeping us safe.
B
He's just a disgruntled hacker.
A
She's a super hacker.
B
Stay alert. Stay safe.
A
Stay safe. This is Safe Mode. Welcome to this week's episode of Safe Mode. I am your host, Greg Otto. In our interview segment this week, we're going to be talking to Stephen Schmidt, the chief security officer of Amazon. Stephen put a really interesting blog post out a couple weeks ago that talked about how Amazon handles identity management across AWS, specifically with its Midway tool. Really good conversation about what security professionals can do when thinking about scaling up and how identity fits into that. But first, I'm talking with Derek Johnson, reporter for CyberScoop. Last week it was election security. This week we've swung the pendulum the other way in your beat, talking about AI, specifically this advent over the past month of the big AI companies moving into healthcare. We saw in January, OpenAI announced ChatGPT Health, and Anthropic and Google followed with their own healthcare-focused products. And you know, when we saw these announcements, we said, okay, there's some examination to be done here. And you really focused on the privacy side of things and how privacy works into these products coming out. So you talked to a bunch of experts. What did you find?
C
Yeah, and I think what we found, or what we wanted to focus on, because if you look at these apps and the large language models they're based on, the flaws or the security vulnerabilities are going to be largely what you'd expect and what we've covered with previous AI tools. Right. There's a propensity for data leakage, a propensity for prompt injection, and vulnerability to other things like that. But what we really wanted to focus on was the legal protections around this data, because one of the things all of these companies did in the rollout was really emphasize the way they were securing your data. There are large sections on OpenAI's site that go into all the things they're doing to protect your data and the partnerships they have with other entities. But the tricky thing is that those protections are not backed by the force of law the way your healthcare data is at your doctor's office or at a hospital, because that data is protected under a law called HIPAA. The HIPAA Security Rule essentially requires regulated entities to take reasonable steps to secure their patients' medical records and data. The lawyers and healthcare experts we talked to said these tech companies are almost certainly not covered under HIPAA. The HIPAA Security Rule is designed for healthcare providers and healthcare exchanges that process your data. It does not extend to tech companies who make AI health apps. And so we reached out to Google, we reached out to OpenAI, we reached out to Anthropic, and essentially said, hey, look, this is what we are hearing from legal and health experts, tell me if you think they're wrong. We ultimately did not get a response from any of the three.
You can kind of draw your own conclusions from that. But it gets at something that I think is really important, which is that there's a difference between data protections that are backed by the force of law and data protections that are backed by a terms of service agreement. And it looks like when it comes to your health data and the interactions you're going to have with these health apps, it's going to be the latter.
A
It reminds me of the spin we hear talking to CISOs when they're talking about putting any of these generative AI platforms in their enterprise, and the worry that an employee is going to take some protected data and shove it into the platform, and who knows where that goes. I can't tell you how many CISOs or security experts I've talked to about it who have talked to their employees, and the employees are like, I did this with that. And they're like, you did what with what? Please stop doing that, for various regulatory reasons. This spins it an entirely different way, where it's people taking some of the most private and personal data and maybe throwing it out there in ways where there's no real way for them to reconcile where it is going. And the flip side is that we did see from OpenAI and Anthropic that they are at least taking steps showing they know this is out there. It's not like they're just going, well, what are you talking about? This isn't a problem. We've seen with OpenAI, they're partnering with this company called b.well, which handles individual PHI. And they themselves, on their own page, have said they follow HIPAA at a voluntary level, trying to make sure they are as compliant as possible. But that's not HIPAA compliance. It's not saying, we are HIPAA compliant, period. And that's really the gray area that I think we highlight so well in this article.
B
Yeah.
C
And I think, you know, OpenAI and Anthropic will not say, we are HIPAA compliant, and they will not say, we follow HIPAA, but they do describe certain elements of their healthcare offerings as being able to be configured to support HIPAA. And that's really something that flips the accountability back onto the user. Right. Because now it's on the user to make sure they're doing everything to be HIPAA compliant, whereas if it were a healthcare provider, they would be legally compelled to, and legally accountable if something happened with that data. So it's something we looked into, because looking at the marketing and advertising, it wasn't really clear what those terms meant. Support HIPAA compliance, what does that mean? Does that mean these are entities that have to do this, or are they choosing to do it voluntarily? And it does very much appear that it's voluntary.
A
And just for anybody out there, I want to be clear about what HIPAA does protect, because you talked about the regulated entities earlier. What exactly does HIPAA cover? We're talking about basically the direct handlers of health data, your doctors, your healthcare providers, not everything that supports that.
C
Yeah. So legally, it puts you on the hook to have some kind of reasonable security program in place. It requires you to report a breach to the Department of Health and Human Services, to victims, maybe even to the public if it is serious enough. So there are all these things in the HIPAA Security Rule that essentially force these regulated entities to either do those protections up front or suffer some pretty public and painful consequences on the back end. And that's the part that's really missing here. A couple of folks I talked to compared this to the 23andMe situation, where you had millions of people submit genetic data to this company that made all kinds of promises about how it was going to protect it. But when the company went bankrupt and had to be sold to another buyer, they had to negotiate a separate agreement to get that buyer to agree to treat the data in the same way 23andMe had originally promised. So this is a voluntary thing. It's whether these companies choose to do it or not. And that is very different from what we've seen in the healthcare space, where data security is much more tightly regulated than in other sectors.
A
Derek, really fascinating look, and really something people should think about as OpenAI and Anthropic make their way into health. We've seen it in commercials. This is something ChatGPT advertises on television, and I see it all over the place. So this is definitely something that goes way beyond the enterprise wavelength. This affects individuals writ large.
C
And I think anecdotally, we all know people who Google their symptoms or use ChatGPT to ask about their symptoms. This is something that is fast, it's easy, it's convenient, it's cheap, and it's not going to go away. But readers should be aware of the privacy trade-offs when they go to an AI chatbot for their healthcare needs.
B
Great.
A
Derek, thanks for joining us.
B
Thanks.
A
I want to take a quick second to tell you all about something going on during our Cyber Week coming up in February. Look, if you're in D.C. and you're in the DIB space, you know that the November 26 CMMC deadline is looming, and honestly, there's just a lot of fluff out there. But on February 18, Virtru is partnering with us to hold an event at their HQ called DCMMC. At this event, you're going to hear from actual assessors and C3PAOs talking about what real-world data security looks like when the auditors actually show up. It's going to be a great room to be in. I'm going to be there. We're going to do a deep-dive panel at 4:30, and then stick around for a bourbon tasting at 5:30 to actually talk shop. I would love to see everybody there. If you're in the D.C. area, come join us, and come to all of our events during Cyber Week. I put the registration link in the show notes, or you can just head over to Virtru's website and find the DCMMC event. Snag a spot before it fills up, and we'll see you there. Now to our interview segment with Stephen Schmidt, the Chief Security Officer at Amazon. Look, identity management is the keys to the kingdom. It's super important, whether you are a small business or all the way up to an enterprise the size of Amazon. Stephen put a really interesting blog post on LinkedIn a couple weeks ago that examined how Amazon has wrestled with this, especially through its Midway tool. I wanted to expound upon that with Stephen, so I brought him aboard, and we talked about identity management and the thought process that goes into it, especially when you have an organization as big as AWS and an attack surface as big as AWS. And looking at the threats AWS has deflected through Midway, we get into a story on how AWS turned away a password spraying attack by Midnight Blizzard, also known as APT29 and linked to Russia, with Midway.
So, a really interesting conversation that gets into not just the technical side, but the governance side, and how organizations of all sizes can think about identity management. Check it out. All right, joining us on our interview segment this week is Stephen Schmidt, the Senior Vice President and Chief Security Officer at Amazon. Stephen wrote a really interesting LinkedIn post about the way Amazon has tackled identity and identity management across its systems. And look, as enterprises grow, there's more infrastructure, there's more platforms, there's just more, more, more, and that becomes weak links in the chain. So who better to talk about identity than the chief security officer of one of the biggest cloud service providers out there? Stephen, really appreciate you hopping aboard.
B
Greg, happy to be here.
A
So the blog post that you wrote really dove into this product called Midway, Amazon's internal authentication system. And I think the way we can jump into it is, look, when I talk to security leaders, they know the table stakes in 2026: passwords are dead, MFA is table stakes. But it starts to get fragmented. Like I just said in the lead-in, there's strong authentication on cloud apps, legacy systems might have weaker controls, there might be exceptions for test environments, contractors with different standards, and there's just tech debt that keeps growing. You highlighted this in your blog entry, calling it a fragmentation problem. So to start off, why does the fragmentation problem keep happening, and why is it so hard to fix?
B
Sure. So let's set the stage here. My job is about protecting customers. It's about ensuring that AWS services meet customer expectations, that our shoppers have a secure experience, et cetera. And that's all based on the security of the systems that we operate, of course. One of the things that most people don't realize is that our adversaries understand the fragmentation problem you brought up at the beginning there. They realize that the newer systems we've got tend to be relatively well secured. And so what they start doing is looking for that crack in the armor, the chink, and saying, where are the places I might be able to get a foothold in infrastructure and use that as a way to lever myself elsewhere? And Amazon's a huge company. I've got to deal with protecting about 1.5 million employees. When you look at the size of our operations and the fact that we've been around for 20-odd years, it means we've grown up with a lot of different systems. So many years ago, about 10 at this point, we realized that in order to protect ourselves from the next several evolutions of adversary attacks, we had to build a strong authentication infrastructure. And what I mean by authentication infrastructure is: is the person who is talking to a computer system the person we expect them to be? Are they talking to us on computer systems we believe are appropriately secured? And can we continue to revalidate that, yes, indeed, that is the person I expect over time as they change the way they behave, the systems they use, the places they are in the world?
A
So with that, I'm wondering: Amazon goes to build Midway to solve this, and one of the things you talked about in the blog entry was that Midway made sure there are no exceptions. Whether it's production systems, test environments, legacy apps, there are identical authentication standards across all of it. So I'm wondering, look, everybody talks about that being the goal, but that's very, very hard to achieve. And like you just said, you're protecting one and a half million employees. What made it possible at Amazon to get to a point where there are no exceptions?
B
So two things make that possible. Number one, as you said, Greg, what makes it effective isn't really the technology itself, though that certainly helps. It's that everything at Amazon uses Midway. We designed Midway as a single, simple way for any team to handle authentication. And I emphasize simple intentionally, because one of the biggest hurdles we've had to face as technologists over time is building complexity. When you try to unpeel the onion of previous decisions, old tech, et cetera, you find that in retrofitting it with new stuff, the engineers say, ooh, this is shiny, I want this cool new thing in here, and they make it more and more complicated. And then what happens with your builders is they say, I don't know what to do here. I can't make this all work. So one of the overriding goals for Midway from the beginning was to build the simplest thing we possibly could and make it the easiest thing for our builders to implement, so it's harder to do the wrong thing than it is to do the right thing. And that makes an enormous difference in adoption. As you said before, whether you're building something brand new or integrating a legacy system at Amazon, you use Midway. A personal test account meets the exact same high security bar as production accounts. Everything does. We don't draw lines. We use one standard, one process and one bar. So how did we get here? Well, we realized in the security org a while ago that this was something we were going to have to do, because we knew what our adversaries were doing, we knew where they were heading, and we knew what the problems were. But taking that organizational step to say everybody is going to do this took two things. One is the security team building tools that made it easy for our software development engineers across the company to do this correctly. And two is an institutional desire to do it. And when you look at Amazon, we're not structured like other companies.
And I'm not saying this is the right way to structure a company. It's just what works for Amazon. But I report to the CEO of the company. That's really intentional, because Andy, our CEO, views security as foundational to everything we do, and that gives us opportunities other companies haven't had. So we can go across the entire company, as big as it is, and say we are all going to go in this direction. We vary the speed we get there based on the individual business needs, the risk profiles, the ability to invest, et cetera. But we all get there in the end. If you look at AWS, for example, its security bar was really, really high from the very beginning, because we realized the biggest thing we had going for us was customer trust, and we had to be able to ensure our customers knew they were going to be secure when they were in AWS. Other businesses are different because they have different kinds of content. For example, our ads business is very different from AWS. It's not a lower security bar; it's just a much narrower interface. It's simpler, which means we can secure it more easily. Whereas AWS is very broad, literally all around the world, with tons of different surface area to deal with. So Midway looks at this problem from two perspectives. One is building something that's universally usable by our software development engineers inside the company. We built a set of tech stacks that said, all right, if you're running a web app, here's the Midway shim. Just talk to the Midway shim, and we'll handle the rest for you. The other is the user interface side, where our humans are interacting with Midway. Because quite often, when we looked at existing infrastructure options before we built Midway, we were like, wow, this is really kludgy, or this is hard to use, or man, I hate this thing, where the user's like, why am I constantly having to type numbers in? It's a pain in the neck, and I get them wrong.
So we said, how can we make this smooth and easy? And we chose to do it using a tech stack that's really specific. We chose U2F, universal second factor, security keys, rather than one-time passwords or soft tokens. These are things that are available on the market. You plug one into the side of your computer, and it gives you a strong cryptographic anchor that says you actually have that physical device in your hand. That is a really important defense against a lot of phishing and social engineering attacks.
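To make the phishing-resistance point concrete, here's a small illustrative sketch. This is my own toy model, not Amazon's Midway code, and HMAC stands in for the key's real asymmetric signature: the idea is that a U2F/WebAuthn-style key signs over the origin the browser reports, so a challenge relayed through a look-alike phishing page never verifies against the legitimate site.

```python
# Toy sketch of origin binding in U2F/WebAuthn-style authentication.
# A real security key signs (origin, challenge, counter) with a private key;
# HMAC with a shared secret stands in here just to show the binding.
import hashlib
import hmac

SECRET = b"device-private-key"  # placeholder for the key's private key material

def key_sign(origin: str, challenge: bytes) -> bytes:
    """What the security key produces: a signature bound to the browser-reported origin."""
    return hmac.new(SECRET, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    """The server only accepts signatures bound to its own origin."""
    expected = hmac.new(SECRET, expected_origin.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"nonce-123"

# Legitimate login: the browser reports the real origin, so verification passes.
assert server_verify("https://corp.example.com", challenge,
                     key_sign("https://corp.example.com", challenge))

# Phishing: an interstitial page relays the challenge, but the browser told the
# key the attacker's origin, so the relayed signature never verifies.
assert not server_verify("https://corp.example.com", challenge,
                         key_sign("https://corp-example.evil.com", challenge))
```

This is why a relayed OTP works for an attacker but a relayed security-key assertion does not: the six digits carry no information about where the user typed them, while the key's signature is cryptographically bound to the origin.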
A
I was hoping we'd get into that, because especially on the hardware token side, look, we've seen sophisticated phishing campaigns defeat software-based MFA. We've seen tons of stories about data breaches where attackers just spam push alerts until somebody goes, all right, my phone's going off, leave me alone, I'm sleeping. So I'm wondering, when you look at those soft tokens and the decision to layer it all, the U2F key, the device health check and the PIN, plus the continuous session validation, what is the threat model that drove you to hardware, or really to layer everything into Midway?
B
Yeah, the threat model we've dealt with there is that we have to be able to defend against the most sophisticated attackers on the planet. They're nation-state attackers, they're people who are incredibly well funded, they're super motivated. And more importantly than anything else, they're willing to play the long game. They're willing to invest over time in recruiting human sources in some places, and they have the ability to defeat just about everything else remotely. So for example, if you use a one-time password, an OTP, the six digits you'll get from a variety of sources, there are so many skilled social engineers who will be able to convince you as the victim to give them that OTP that you don't even realize it's happened. Because they put that interstitial page in that looks like the legitimate page you're trying to log into, and then they log you in in the background. Meanwhile, they've got your access token. Text messages as an authentication method? Wow. We've seen nation-state adversaries steal text message OTP codes for a very, very long time. They're simply not viable. It's not only something nation-states can do now; a lot of criminal actors can as well. And passwords, sorry, their time is gone. They're not really useful. In fact, we had some lovely discussions with some of our AWS customers at re:Invent, and they said, wait a minute, why don't you guys require password rotation on your corporate infrastructure anymore? And we said, you're right, we don't. In fact, even the US government has said that rotating passwords doesn't make sense. Choose a good password, and if that password's been compromised, you need to change it. But rotating passwords actually weakens security, because what do people do? They write them down, because they're hard to remember, and if they're written down somewhere, they can be stolen.
A
So going back to what you're talking about with the nation-state side of things: in your blog entry, you talked about Midnight Blizzard. And with all the names, just to make sure our listeners are aware: Midnight Blizzard, APT29, Cozy Bear, we're talking about a group that's been pretty widely linked to Russia. They go after everybody, and they try to compromise organizations with password spraying attacks. You detailed that AWS caught them trying to target some of its infrastructure, and Midway helped block those attacks. So I'll let you go a little deeper into what you discovered there, but I think it's a good example of what needs to be done to stop the most sophisticated threats out there.
B
So Midnight Blizzard, you're right, that particular threat actor is really prolific, focused on infrastructure providers and on companies who have data they're interested in, whether it's defense data or stuff that could be valuable to an intelligence service. And they were using password spraying attacks. Password spraying, of course, is trying common passwords like sports team names or birthdays or pet names against many, many accounts all at the same time, in the hope that they get in. They specifically went after our and other companies' Entra ID environments. Entra is Microsoft's identity system that is behind Office 365, amongst other things. We're a Microsoft customer; we use Office 365 for our email, or M365, I think, as they're calling it now. Entra ID supports administrative accounts that are password enabled, which is not an awesome choice.
A
We just spent the past 10 minutes trashing passwords.
B
Exactly. But it does. So knowing these risks, we actually changed the Microsoft login process for any of our accounts so that any login to our Entra ID environment had to go through Midway. And as a result, when the Midnight Blizzard actor attempted to compromise accounts, they couldn't, because there was no single place where we used password-enabled Entra ID. Even all of our test accounts use Midway. And it's that ubiquity that really helped protect us there. The flip side is that another large tech company unfortunately did get compromised by these same threat actors, who used password spraying to gain access to what that company described as a legacy non-production test account that did not have strong auth. Unfortunately, that account led to other accounts that did matter. It ultimately led to several email accounts of senior leaders in that company being taken over by the actor. The company, I believe, has taken steps to harden its systems since then. But it really demonstrated for everyone why having a single standard everywhere, for all accounts, is super important. Because it is really hard to understand trust relationships between the accounts you think don't matter and the accounts that really do matter. That stuff happens behind the scenes. It happens during test processes. People lose track of it. They don't do it intentionally; they're trying to troubleshoot a problem. They say, I'll just enable this for right now, and it never gets undone. And then that's the chink in the armor that the adversary needs.
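The spraying pattern Schmidt describes also has a distinctive shape in authentication logs. As a hedged sketch, with a hypothetical log format and thresholds picked purely for illustration, spraying looks like one source touching many distinct accounts with only a failure or two each, the inverse of classic brute force against a single account:

```python
# Illustrative password-spray detector over failed-auth events.
# Spraying: one source, many accounts, few attempts per account.
# Brute force: one source, one account, many attempts.
from collections import defaultdict

def spray_suspects(failed_logins, min_accounts=5, max_failures_per_account=2):
    """failed_logins: iterable of (source_ip, account) failed-auth events.
    Returns source IPs whose failure pattern matches spraying."""
    per_ip = defaultdict(lambda: defaultdict(int))
    for ip, account in failed_logins:
        per_ip[ip][account] += 1
    suspects = []
    for ip, accounts in per_ip.items():
        if (len(accounts) >= min_accounts and
                max(accounts.values()) <= max_failures_per_account):
            suspects.append(ip)
    return suspects

# Spray: 20 distinct accounts, one failure each, from one source.
events = [("203.0.113.9", f"user{i}") for i in range(20)]
# Brute force: one account hammered 30 times; a different detector's problem.
events += [("198.51.100.7", "admin")] * 30

print(spray_suspects(events))  # -> ['203.0.113.9']
```

Real detections would add time windows and lockout-aware pacing (sprayers deliberately stay under per-account lockout thresholds), but the many-accounts, few-attempts signature is the core of it.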
A
Right. The organizational challenge was something I was going to ask about, because like you said, every security leader has faced this. The business unit needs to ship something tomorrow, or a contractor needs access, or somebody goes, look, this is just a test environment. I mean, the example you just brought up: a legacy test environment, and next thing you know, it's a huge compromise. So how do you, as a security leader, prevent exception creep? And what was the governance model to make sure that, not just from a technical standpoint, Midway was able to do what it needed to do, but everybody was on board with what needed to be done?
B
There's a really, really Amazonian answer to this. It's called mechanisms. A mechanism is really just a fancy word for saying we've got tooling that prevents things or tooling that enforces things. Along with building the correct path, the easy path, the Midway path, we also built tooling to identify any situation where that path was not being followed. That tooling is applied universally to all of our accounts, so we can see, for example, if someone has enabled password authentication and exposed it to the outside world. We have a set of tools that run against every single AWS account we use as a company, not on our customers' stuff, on our stuff, and it detects any deviation from our expectation for security. That allows us to alarm on situations where things don't meet our expectations. But more importantly, the security team has a role in each one of those accounts where we can revert the change. So we can say, you opened this up to the Internet; I'm sorry, that's not okay, we're closing it. We built a mechanism which looks for the state we expect. If the state is not present, it alarms and it reverts, so we automatically protect ourselves. A lot of folks we talked to were like, yeah, I've got this CSPM thing, and it looks for misconfigurations, and then it raises an alarm. Awesome. What do you do next? Well, my SOC operator triages the alarms, and they go through things, and then they cut tickets to people, and blah, blah, blah. And I said, and how many minutes does that take? Oh, our SLA from alarm to acknowledgement is 15 minutes. And when you build it all up, it's, ouch, hours. And you know how fast these actors work. Hours is forever. You're done. It's got to be a reaction that occurs automatically within a couple of minutes to really protect you appropriately.
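The detect-alarm-revert loop can be sketched in a few lines. This is a toy model under assumed config keys, not AWS's actual tooling; the point it illustrates is that remediation happens in the same automated pass as detection, rather than waiting on a SOC ticket queue:

```python
# Toy "mechanism": diff actual account config against an expected baseline,
# alarm on any drift, and revert it in the same pass.
EXPECTED = {"password_auth_enabled": False, "public_access": False}

def enforce(account_config: dict, expected: dict = EXPECTED):
    """Return (alarms, fixed_config). Drifted keys are alarmed and reset;
    keys outside the baseline (e.g. ownership tags) are left alone."""
    alarms = []
    fixed = dict(account_config)
    for key, want in expected.items():
        if fixed.get(key) != want:
            alarms.append(f"{key} drifted to {fixed.get(key)!r}; reverting to {want!r}")
            fixed[key] = want  # auto-revert in minutes, not ticket-queue hours
    return alarms, fixed

drifted = {"password_auth_enabled": True, "public_access": False, "owner": "team-a"}
alarms, fixed = enforce(drifted)
print(alarms)                            # one alarm, for password_auth_enabled
print(fixed["password_auth_enabled"])    # -> False
```

A production version would read real account state via provider APIs and need guardrails of its own (the reverting role is a powerful credential), but the shape of the loop is the same: expected state, detected state, automatic convergence.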
A
Right. Dive into that a little deeper, because like you just said, the hours build up. That's friction. And I think in the blog entry, you put it really well: engineering teams appreciate Midway because the secure path is the fast path. And fast, like we just said, is generally what people are striving for on the security side. So dive deeper. How did you design for developer velocity, making sure things can still get done and security does not create that friction?
B
So we actually have goals set for my team as part of that process on reducing builder friction. We measure the amount of time a builder has to take to do the things we're asking them to do, and we have goals to reduce that time every single year. So, for example, if Midway took an hour for a developer to implement originally, it's now down to less than 15 minutes, because we've constantly filed off the sharp edges on our installation procedure, on our operations procedure. We've built new SDKs aimed at the different kinds of languages people are using across the company, and we've automated more of the back-end work by investing on my team to do the development work, so the builders out there don't have to. And that's a multiplier. My team does it once; the many thousands of builder teams across the company don't have to do it. Now, that takes centralized investment, and it takes an organization which says, yeah, these guys are kind of overhead, they're not really on the profit margin side of things, and investing there is just an expense. But when you actually do the math, and when you look at what a 15-minute change means for every software development team across the company, that money adds up fast at our size. So it makes financial sense to do the centralized investment as well. And that's a piece most people don't realize when they're doing the negotiation work about how do we invest in this, how do we do this improvement process for our identity infrastructure. The security team comes in with a bill and says it's going to cost this many zeros, of course, and then the financial folks in your company say, that's a lot of zeros, I don't know if we need all those zeros. And you say, well, we've got to do it, because important things. True, they are important. But take the rest of the argument: how much time can you save for all the development teams in your company?
And that's been big leverage for us.
A
Yeah, I was going to say, is the time being saved part of the playbook? Because look, for security leaders who are listening to this and realize, oh, you know what, maybe I do have gaps, and this speaks to what I deal with on a day-to-day basis right now: what is that playbook? Where do you start, and how do you sequence the work to get to a point where, not just from a technical standpoint, the governance and philosophy are part of your security program?
B
You first start off with humans. Make sure the people who make the decisions in the company, the people at the top of the stack, understand the real threat. Security teams can go one of two ways. They can either be people who are enablers, or they can be people who say, no, don't do that. You've got to be an enabler here. First of all, get the folks to understand what the threat is, that this threat is real, it's not theoretical. Dispassionately give examples. By the way, this is one of my pet peeves: the sky-is-falling security people. Guess what, folks, that is the worst thing you can do for anybody, because no one's going to believe you. They're just not. So you've got to work with the humans you've got in your company, help them understand the real threat. Get the people who are experts in the space to say, yep, this is what the adversaries are after, this is why they're after it, and this is the kind of stuff you have in your company that represents the things they're after, so therefore the threat is a real one for you. Okay, cool, so what do we do? Well, first of all, we've got to go build the infrastructure layer at the bottom that supports all of this operation. That is something the security team should own, and they should own reporting on a regular basis on how much time they cost the builder teams to do what they're asking. Why is reporting on a regular basis important there? In a culture like Amazon's, where everything is metrics driven, reporting on a regular basis means we get asked regularly: how are you going to reduce that? When do you reduce that impact? How do you make it easier for people? Other companies, same thing. Hey, you just cost 100,000 developer hours. That's really expensive. Can we get that down to 80,000? What's the investment necessary to get it down to 50,000 or 20,000? And keep knocking those edges off. It's an iterative process.
This is not something where somebody's going to do it overnight. This is something that's going to take years to do, but you have to start.
A
Great. Stephen, really appreciate this perspective. I love having people on to talk about how the technical blends into the human side of things. And like you said, nobody likes security downers. I feel like we get mired a little too much in the doom and gloom, and this has been really helpful in showing a path forward for security practitioners who need one, especially with identity being such a focus of our adversaries.
B
Security, if done right, can be an accelerant for your business. Turn it into that positive. Turn it into something that people want to embrace.
A
All right, Stephen, really appreciate you hopping aboard. Thanks for joining us.
B
Thank you.
A
Thanks for listening to Safe Mode, a weekly podcast on cybersecurity and digital privacy, brought to you by CyberScoop. If you enjoyed this episode, please leave a rating and a review and share it with your friends, your coworkers, your CISOs, your sysadmins, your mom, your dad, anybody that wants to know more about cybersecurity. To find out more information or to contact me, please look for all of our social media handles or visit cyberscoop.com.
A
Thanks for listening. Check us out next week.
Date: February 12, 2026
Host: Greg Otto (A), Editor in Chief at CyberScoop
Guests: Stephen Schmidt, Chief Security Officer at Amazon (B); Derek Johnson, reporter at CyberScoop (C)
In this episode, host Greg Otto explores two major topics: the privacy gaps around new AI health products from OpenAI, Anthropic, and Google, and how Amazon manages identity at scale with its internal Midway tool.
Listeners gain actionable insights on practical security, governance challenges, and the imperative of making the secure path the easiest path—both for users and for massive development teams.
This episode of Safe Mode offers a dual-track look at the future of security:
For individuals and CISOs using AI in health: be aware that privacy promises from tech companies are not legally enforceable the way HIPAA protections are, which means users take on new risks with their personal data.
For enterprises: Amazon’s push to eliminate passwords and unify authentication under Midway’s strict, exceptionless regime is a model for blending user experience, automation, and top-level commitment. Key success factors include making the secure path the path of least resistance, automating enforcement and remediation, and quantifying and reducing developer friction.
The tone is practical and candid: good security requires both strict governance and strategic investment, but with the right approach, it becomes an accelerant for innovation, not a brake.