
In our final episode of 2025, Dave Lewis, global …
Loading summary
A
React two shell is pretty bad. Just how bad is it? We'll get into it on this episode of Safe Mode. Welcome to Safe Mode. I'm Greg Otto, editor in chief at cyberscoop. Every week we break down the most pressing security issues in technology, providing you the knowledge and the tools to stay ahead of the latest threats, while also taking you behind the scenes of the biggest stories in cyber security.
B
An attack is coming.
A
It's about keeping us safe.
C
He's just a disgruntled hacker. She's a super hacker. Stop. Stay alert. Stay safe. Stay safe.
A
This is Safe Mode. Welcome to this week's episode of Safe Mode. I am your host, Greg Otto. Joining us in our interview segment this week is 1Password's Dave Lewis going to be talking about the access trust gap and everything that is going into Identity Management in 2025. Lots of gray areas there and we will discuss all of them with Dave, but first talking with Matt Kapko, who has been on the REACT to Shell beat this entire month. It's been a mess and generally Matt tends to deal with all of the messes that come up in the cybersecurity landscape. But I feel like this one is coming. The 11th hour is probably the mess of the year. So let's catch everybody up to where we are.
B
Yeah, it certainly feels that way, Greg, thanks for having me on. So this is a maximum severity vulnerability in React server components, which is an open source application library with widely used across the Internet. The defect was disclosed two weeks ago and attacks have been surging from attackers of different origins and motivations ever since. Attackers have developed an unparalleled number of ways to trigger the defect, which they are exploiting to elevate privileges and pivot into other parts of targeted networks. It's. It's pretty bad, as you said.
A
So for those that aren't familiar with react, we're talking about development here. We're talking about code. This isn't a box. This is a piece of hardware. This isn't some virtual interface or whatever. Like React components are in just about everything because they're a key piece of software that is used really like you said, across the Internet. Like is it connected to the Internet? Yes, if that box is checked, yes. There's probably some React code in it, right?
B
Yeah. And this is simplifying things, but there's probably like a 1 in 3 chance that react is somewhere in your system if you're an organization that's online, which is just about every today. Right. So it's pretty broad, I think of it as like the scaffolding of the Internet. A lot of applications and other open source libraries are built on top of it. So there's just a lot of dependencies that go further up and downstream that are impacted by this vulnerability.
A
So the experts that you have talked to and you've talked to a number of them, if you've been following on cyberscoop, have really said that we're really running the gamut here when it comes to attacks. If you can think of the type of attack that's been going on, it's probably been waged via react2shell. Am I right or am I being sort of hyperbolic there?
B
No, not at all. That's right. On Cyber criminals are exploiting the vulnerability for financially motivated attacks. Ransomware gangs as well. Nation state threat groups from China, Iran, it's attracting attention from all corners of the globe it seems and they're all swarming to exploit the vulnerability. Some of those attacks are more targeted than others. Google Threat Intelligence said it observed financially motivated hackers and at least five China state sponsored threat groups that are exploiting the defect, most likely for espionage. Google said it also attributed some attacks to Iran but it didn't provide much more detail there. But it's, it's widespread and yeah, every type of attacker seems to be going after this.
A
So nation state attacks, check. Financially motivated attacks, check. Ransomware attacks which I guess are financially motivated. Tax of course, but there's that destructive element to it, check. We've got all of it in this. That isn't all. Like what was the number? Forgive me if you said this because we've had so much information over the past month. Do we have sort of a volume of attacks here that we're seeing?
B
Yeah, so this is, it's difficult to pin down but Palo Alto Networks, unit 42, their incident response firm has been tracking this pretty closely. Confirmed known victims. They have more than 60 organizations now that have been impacted by attacks resulting from exploitation of this vulnerability. It's, it's go. It's widespread though. The number of detected attempted exploits identified by gray noise sensors is breaking new records every day. So this, it's going to get worse. I think it's just a matter of confirming whether or not organizations were compromised down the line. But yeah, widespread.
A
So and I think that this is important to talk about in terms of where this fits in the attack chain. This is really being used to pry open the door really like this is an entryway and this is what's being used to establish lateral movement. So like if detection, I would imagine, of the experts that you talk to, detection is hard because this is the front door and the attacks themselves are something entirely different. But this is the way in, correct?
B
Yes. This is the initial access point. Researchers describe it as pretty trivial to trigger this exploit. Caitlin Condon, vice president of research at Vulnchek, she told me They've confirmed almost 200 valid public exploits for React to Shell so far. That makes it the highest verified public exploit count of any CVE ever. A lot of those are variants of the original exploit, but I think it just underscores the many ways that attackers have found to use this vulnerability as a way to break into networks. It requires no authorization. Once they're in, they're able to execute remote code and, you know, move into other parts of the network. So really wide open what they could do once they break in.
A
Something else that underscores the mess here too, is that, look, when we first covered this, you wrote about the patch meta who oversees the react components put out patches, but there's been like updates with the patches too. Like we have patches on top of patches at this point. Which again, sort of underscores just how messy this has become over the past few weeks.
B
Yeah, certainly. And this happens often. Once there's just industry wide attention on a specific piece of software or a particular vulnerability, they're going to find more issues in the code. In this case, there's been three new CVEs that were discovered as a result of all those eyes on this original vulnerability. One of those vulnerabilities is a patch for a patch that wasn't patching. So they're finding problems. They're finding that patches that they created weren't effective. It really just creates a mess for organizations that are trying to clean up from this. The original patch for the original React to shell vulnerability will not address those new vulnerabilities that were discovered. So multiple patches required at this point.
A
So, okay, we are heading into the winter break. Everybody tries to kick their feet up for the holidays, and cybersecurity practitioners know that. I'm sure there's, there's tons out there that have horror stories about their holidays being ruined. Is this, is this a holiday ruining issue here? Like, what's, what is the prognosis from the experts that you talk to as to whether things are going to calm down so everybody can sort of enjoy their holiday?
B
Yeah, I'm not sure that there's much rest ahead. All the researchers that I've talked to are, are fearful, really, that this is going to get worse. They expect it to have long tail implications down the line. They're comparing it to vulnerabilities that have lived on in infamy, like log four shell. They're expecting this to be a problem for years to come. And the number of attempted exploits that have been found by scanners, I think it just shows that there's probably more impacted organizations out there, just haven't been confirmed yet. Sounds bad. I think it's going to get worse.
A
Yeah. Heavy sigh. That's no end in sight. That's tough to hear. But look, as we see this continue to unfold, you've done a great job keeping our audience up to date on it and I'm sure, like you just said, there will be more headed into 2026, unfortunately. So I feel like we're going to have to check back in again with you as we see this continue to unfold. But yeah, fun times ahead.
B
Count on it. Definitely.
A
All right, thank you, Matt.
B
Thanks, Greg.
A
In our interview segment this week, we are talking with one Passwords Dave Lewis and really talking about identity management and what that really looks like in 2025. 1Password had a really interesting report called the Access Trust Gap, which really pushes forward the idea of what it takes to monitor and manage identity management inside enterprises and really getting into what, what matters when it comes to all of the different spokes that power identity management. We get into Endpoint Management, Device Trust, BYOD Reality, sas Brawl, Shadow, it's Shadow AI Mobile Device management. It all matters with the way that we all get our work done inside enterprises. And Dave's a longtime expert in this and definitely has some really interesting things to say. Check it out.
C
All right.
A
Joining us on this week's Safe Mode interview segment is Dave Lewis, global advisory CISO for 1Password. Dave, really appreciate you hopping aboard. Thanks for joining us.
C
Hey, thank you very much for having me.
A
So a couple of weeks ago, 1Password put out a report called the Access Trust Gap, which talks about identity management from a bunch of different areas. So I'm wondering, I want to dive into those areas, but I'm wondering, how do you define an access trust gap and what approach do you feel 1Password is taking to help organizations bridge this gap and all of the meandering ways the gap exists?
C
Well, in the simplest terms, it is that difference between the assets you control versus the assets you don't control. So for example, good old remember, we all sort of lost our minds with BYOD back when that was a thing and it still continues to this very day where you have workers that have access to corporate assets, but on the same token they'll go on stream or whatever it happens to be in their personal lives. And sometimes those two do not connect in the best possible way. So the access trust gap really is about managing that missing piece in between those two ends of the spectrum.
A
So with that missing piece, from your vantage point, as you advise many organizations globally, what's the most common issue that you're seeing?
C
The most common issue, quite literally is that lack of control, that lack of observability, that lack of transparency as to what exactly is happening within their organizations. And it's really funny because I spent 20 years as a defender before joining the vendor space at all. And this was a big thing back then. I mean, a lot of the problems that we're having to contend with today in security across the board are conversations I remember having 30 years ago, and we're still having those chats today. So now what we're seeing is there's this mad rush for organizations to implement AI as an example. And as a result, they're putting in all of these different projects and they sort of lost control of the narrative. So while all of these projects are being deployed throughout their enterprises, they don't necessarily have security factored into the equation. And this is where that access dress gap really becomes more of a blast radius than you'd want in the event that something can and will go wrong. And this is one of those things where we have to address these pieces because if we're not taking security in the right frame of reference and looking at what could potentially go wrong, we are really setting ourselves up for failure. And this is not really a tenable way to approach things. We want to make sure that we are addressing things and putting in the regard rails that are well required.
A
You know, it's funny that you bring up the conversation seems to have been the same for the past 20 years and then you layer AI on top of it. I find that in a lot of these reports to be the same thing where it's, you know, when I talk to experts like yourself, you know, I want to talk about AI because everybody's having to deal with it. But then people go, well also don't forget like we still have the other problems that have been pervasive for the past 10 or 15 years. So drawing on that experience, you know, if, let's say I'm a new CISO, I read your report and I have 90 days, as you know, my, my introductory to an organization to reduce Breach likelihood. What are the top three moves that you would prioritize and why?
C
Honestly, doing gap analysis to look where the problems are, that is your first and foremost step there. Then prioritize those. And when you go through those, you're going to find that identity is the key piece of the puzzle because the foundational element of any security program is the human element. And now by extension we have the non human identities as well. So it really has taken what we had as a known issue and expanded it rather significantly. So if we're tackling the identity space first, it is going to help reduce risk in the organization. Obviously it will not obviate that it's always going to be an issue. But if you can reduce the risk as much as possible the organization, you are doing your due diligence to help protect that organization, protect the stakeholders and your intellectual property.
A
So let's dive into that human element, especially around the identity management, because there was an interesting stat that I saw in the report. 44% of CISOs report that employees used weak or compromised credentials as their top challenge. And going back to what you were saying, you know, being around for two decades, this, I feel like this could have been 2008, could have been 2015, 2025. So I mean, we've been talking about it forever, but it is still a problem. So I'm, I'm wondering, look, CISOs and security experts in organizations small, medium and large have, have tried to tackle this before. So I'm wondering, where do you think these programs slightly get undermined in practice and implementation? Is it fallbacks, is it exceptions, help desk requests, legacy apps? I mean, is it all of those? And which one is, is really more dangerous to a security program in an organization?
C
Well, first and foremost it's been 30 years for me, not 20, so it's, it makes it that much worse. Makes it that much worse. But yeah, that just did. One of the problems that we've had historically, and I was perfectly guilty of this in the earlier part of my career, was we have this preponderance of placing blame on the user. And when we vilify the user, it's like if you take a rolled up newspaper and bat it across a dog snout, eventually the dog's going to stop barking. That's not a good approach by any measure. And unfortunately people seem to think that that is how you enforce better security within an organization. And in those particular organizations they're really setting themselves up for failure because the users will then go out and build out shadow IT projects, shadow AI projects love or hate the term. It's not a negative so much as it's, you know, people are trying to get their jobs done and sometimes they will route around the controls that you have in place in order to do that. And if you are putting in a culture of fear, unfortunately, the users are going to do everything they can do to avoid being called out. So this just exasperates the problem. And it's really kind of difficult to watch that because we have learned these lessons over and over again. And as an industry, we're relatively in our infancy. So if you look at, you know, the medical profession, you look at jurisprudence, they have, you know, centuries of canon that they can call upon. We really haven't got to that point yet where we have that breadth and depth of knowledge that we've recorded and, you know, have as our own canon to make sure that we're not making those same mistakes over and over again. So when you see these things cropping up within your organization, yes, that is the human element. But rather than vilify the users, we have to give them controls that are not written by engineers for engineers. We have to make it a seamless experience for the user. So if you have the ability to store all your credentials in one place in a safe and secure manner, if you have ways to have network zone segmentation within your organization to reduce risk, if you have security awareness programs that are speaking to the user in a way that's encouraging empathy, helping people understand, because your end users are. They're in hr, they're in finance or whatever part of the business. Cybersecurity is not front of mind for them. We like to think that is. But as much as we bang on that drum, we realize or have to come to that realization that when we're dealing with the end user, we have to find a way that's going to work for them. Not because we said this is the way it is. And then we just have to put that flame sort of justice to the side and figure out a better way forward. Because communication is absolutely key. And when you look at, you know, Malcolm Gladwell has a great book called Talking to Strangers, we have to find the voice that the user is going to hear. So if we keep saying, oh, it's a zero day, oh, it's this, oh,
A
it's that they don't care, you're going to lose.
C
Yeah, they don't care, the audience is going to lose that. And that's why at 1Password, we're very focused on making sure that the technologies that we provide the users is easy to use and effective because we don't want to encourage that same sort of behavior. Because if you look at applications, people like to say, oh, there's all these security vulnerabilities. Actually, it's not a security vulnerability, it's an application problem. Because if the application had been coded in a way that was safe and secure in the first point, there wouldn't be the security vulnerabilities that follow on later. So we have to really reframe the discussion and tackle the root of the issue. And again, not to vilify developers or anything like that, but historically we've been very good at chasing after the users, not going over and having a clear and concise conversation with the developers so they understand the security impact to the organization potentially if something can and does go wrong.
A
So you're hitting on something that I wanted to talk about with regards to the statistics on MDM. In the report, there was a figure. 73% of employees use personal devices for work. Half of those devices are not managed by mdm. So part of that gets to the heart of what you were saying in that the users want to get their work done and they're not thinking about cybersecurity, but somebody has to think about cybersecurity in the organization. So I'm wondering, you know, for those that are responsible for cybersecurity, you know, most companies still do treat devices, whether it's manager unmanaged. Do you think that there's a better model for device trust or what could more could be done for granting access to sensitive apps or working this into a workflow so people don't go, yeah, I'm getting an email about some sort of zero day and I'm ignoring it because I have work to do. Like, how do you bridge that gap?
C
Well, this is where you have to put in technologies that are going to be, you know, seamless to the user, are going to be effective. You want to have something that tackles extended access management, that looks for these gaps along the way as well within an organization, you have to have a good SaaS manager in place because most of these AI platforms are SaaS delivered. So one of the things I've been doing over the last six months is doing CISO dinners around, primarily Europe and my colleague as well in North America. And the prevailing piece that we keep running into is that these CISOs do not have a handle on the governance, let alone the technology. So one CISO at one event said it perfectly, said, we have closed the door to AI projects coming into the environment, but they're now coming through the window. So having that ability to have the visibility in your organization and the observability so you can see what these applications are in fact doing and be able to manage that in a single point is way to help fundamentally reduce the risk. So if you have like some sort of AI app that is, you know, got 150 users that you've paid for and only 20 of the users are actually making use of that, you're burning money right there. So you can then reduce the risk of the organization by clawing back those seats and then that goes to the availability because then you have those finances that you can reallocate to other parts of the organization.
A
So speaking of the SaaS part of it all. No, if you're an organization that is SaaS first and I feel like most of them are out there now, and you're a security practitioner and you were just talking about observability, what is the modern equivalent of asset inventory so you can have that observability and how do you keep it current when your tooling might not be the greatest so you can have that full visibility, the full observability and a full inventory of those assets?
C
Well, historically we had tools that were heavily reliant on you, populating what was known within your environment. And when I say what was known, I remember I worked at one power company and we were told, okay, we have, you know, 2,000 seats for this particular antivirus product. And I said, okay, great. And I factored that in for the upgrade. And when I went to do the upgrade and did the discovery, I found it was closer to 8,000 nodes, had had deprecated version of an antivirus. So if you're not able to see that because you're doing it in a manual process, which unfortunately a lot of organizations are still relegated to today, you need to have some sort of intelligence built in that can go through your organization and see what is available within, within your organization. That gives you that ability to say, okay, these are the apps that I have to be worried about. This is not to current or whatever it happens to be and be able to take steps to remediate that as well as having the observability. When you factor in the AI aspect of things is these agents, if you're doing it from the agentic AI project, sorry, perspective, these agents are going to get to that point. We're going to be making autonomous decisions on your behalf, or at least that is the fever Dream. And, you know, on the five steps of agentic AI, we're generally speaking at best around level two, we will probably get to level four within a short amount of time. But I don't think we'll ever get to that level five, which is, you know, level two being akin to having driver assist on your car and level five being a fully autonomous vehicle. I've seen enough automated taxis in traffic jams to know that we are not fully at that point yet. And from a technological perspective within enterprise IT space, I don't believe we are there yet. So that's why we have to have that observability to see what the applications are doing and be able to address it. Because all of these agentic AI agents. Agentic AI, you got me? All of these agents are going to require credentials. They're going to react, have access to APIs, they're going to have access to systems. Like another Cecil that I was speaking to not too long ago said that somebody in their organization had asked their internal LLMs that who the company doesn't like me? The agent dutifully went off, queried the HR system and came back with the list of people that didn't like them. That's one of those moments.
A
Yeah, what? Like, you're blowing my mind. This is, this is taking a left. Like, how, how is that just a hallucination? Or like, what, what HR system is doing something like that?
C
It turns out that they had not put proper guardrails in place. So it actually was bona fide information. And this is the gotcha factor is like, as we have this rush to put in AI products, we have to make sure that we're, you know, managing the credentials for these agents safely and securely. We're managing the guardrails so that they do not have access to everything under the sun. And being able to look at it and say, okay, I shouldn't be able to ask this particular question of my internal LLM as an example and have it go pull this information. So if I'm asking for financial results to be reported next month, I shouldn't have access to that if I'm in the shipping department. These are the kind of things that you have to go through. And don't just force feed these AI products into your organization. Take that step, take that critical thinking and say, is this the right thing to do? Or have I put the controls in place and have somebody, you know, that's, you know, fundamentally broken in how they think about things like myself say, have you tried this? Right? And you know, the poke factor, you know, poke it till it breaks.
A
Yeah. Speaking of, you know, poking and prodding, I'm wondering, in your opinion, what SaaS failure mode causes real world damage inside these organizations? Is it over position, over permissioned roles, public sharing authorization, consent misconfigurations, all of the above. And what's the fastest risk reduction playbook?
C
Well, honestly, we have to make sure that we're getting a handle on the credentials because if these agents are having access to exactly the HR system, when fundamentally it should never have been able to return that result. These are the kind of things that I do worry about having a direct impact to the organization, especially if it is internal secrets that are not meant for the entire rank and file, let alone being leaked into the public domain. These are those things we have to like. If you look back to the, any, any firewall rule and I say look back in that, you know, I know it still exists in many organizations today. It was put in as a matter of convenience because somebody didn't have the time to go through, troubleshoot or see what ports were needed for application A to talk to application B. Unfortunately this then becomes an exposure and we're seeing that same sort of thing with over permissioned agents being allowed to access all manner of thing. Like the echo leak scenario that came up not too long ago, okay, where one organization sent an email to another organization and at the bottom of the email there was white text and so the human reader didn't see it, didn't think anything of it. It was a generic email to them. But the agent on that system read the email, executed the instructions and then deleted any evidence from the logs that the conversation had happened, the instruction set and then gave remote control to the other party. Thankfully, all of that was remedied before it ever made to the press. But the fact that that was even possible in the first place, that's a real concern because we have really gotten ahead of our skis. AI, I love it to death. I absolutely think there's a lot of really good business cases, but we're in such a mad rush to get these things out there and maximize the value that unfortunately, as is the cycle, year after year security then gets put off to the side and then we have to catch up to shore up the defenses.
A
What else are you seeing when it comes to AI, specifically where organizations are underestimating what they have to do to secure their use cases?
C
Well, definitely the access control, that is a big piece. The other piece is the cost because I am seeing a lot of Organizations talking about the amount of money they have to spend on tokens and things to that effect and the costs are escalating very quickly. It's sort of like when cloud computing was first on the scene and people would leave their instance running for X amount of time, come back to a massive bill. I may or may not have had that exact problem happen to me. But these are. The thing is we have to have that element of the humanity. We have to look at it and go, what are the different aspects that are going to have that impact? We have to have that clear and concise conversation because you have the cost, you have the security aspect, you have the risk to the organization if things are not implemented in a safe and secure manner. So we have to look at it from the human element, from agentic AI, from the controls that we have to contend with like the EU AI act and various other pieces of legislation around the world. That, you know, EU AI act is very, very specific and it's very granular. And I've talked to some organizations that aren't ready yet and some of the stipulations are coming into effect in very soon.
A
Right?
C
So these are the kind of things we have to make sure that we're not letting this get away from us because there is a real impact. You know, there are consequences and the ramifications thereof.
A
So whether it's AI or more quote unquote traditional identity credential management. Finally I'm wondering what advice do you give most often that leaders don't want to hear but then later wish that they had acted upon.
C
Well this may sound self serving, but honestly tackling the credential problem because every organization I deal with they'll say, oh, do you have password management? Do you have SaaS management? Do you have any of these tools in place? And they're like eh, we have to worry about firewalls, we have to worry about this particular agent, this particular agent. And they've really lost the focus on the impact to the organization in the event that the credentials get exposed to the Internet. My favorite one is it's like, oh, it's okay, we have mfa. And I said do you have it wall to wall? And they're like no. All right then do you not see the flow here? And they're like oh damn. And you know, making sure that you have wall to wall within the organization for a password manager, SaaS manager, things to that effect and extended access management, it's going to reduce the risk in your organization. Fundamentally we have to make sure that we're approaching this is a wall to wall all in type of approach. Because if you're doing a piecemeal, like some organizations will do, Pam, but they'll only do it for critical assets, but then you've exposed the rest of the organization to potential impact. So these are the kind of things that really need to be addressed going forward.
A
You bring up the MFA part. Now I'm wondering because you talked about it being wall to wall and that's really interesting to me. Like when you say wall to wall and you're talking to these scissors or security practitioners, what signals are you telling them to watch for? To like meaningfully change or guard their the access? Is it user, device, session, location, behavior, a little bit of, you know, all the above like. Or is it dependent on what the organization is?
C
I'm gonna go with yes. And then.
A
Okay, okay.
C
And. And obviously the risk appetite is going to change from one organization to the other. So if you are making teddy bears or centrifuges, obviously your risk profile is going to be drastically different. But the core fundamentals still remain. Making sure that you have good password management, multifactor authentication, credential management, passwordless authentication, all of these elements help to reduce risk for the organization. So just waving your hand saying don't look behind the curtain, the great OZ commands, it is not going to improve the security for the organization. Luck is not a strategy. It may play out for a lot of organizations out there, but it will potentially backfire.
A
Great, Dave, really appreciate your insights. Thanks for joining us and have a happy new year.
C
Thank you for having me. You too. Cheers.
A
Thanks for listening to Safe Mode, a weekly podcast on cybersecurity and digital privacy brought to you by cyberscoop. If you enjoyed this episode, please leave a rating and to review and share it with your friends, your co workers, your sizzos, your sysadmins, your mom, your dad, anybody that wants to know more about cyber security. To find out more information or to contact me, please look for all of our social media handles or visit cyberscoop.com thanks for listening. Check us out next week.
Episode Title: The Access‑Trust Gap: Why Security Can’t See What Work Depends On
Date: December 18, 2025
Host: Greg Otto (A), Editor in Chief at Cyberscoop
Guests:
This episode tackles two major security issues:
Listeners will gain insights on current threats, organizational gaps in access controls, and why both technology and human behavior are central to risk management.
[00:32 – 09:05]
Severity and Scope:
Exploitability & Impact:
Attack Methods & Actors:
Patching Mess & Persistent Risk:
Detection Difficulty:
Future Outlook:
Greg Otto [01:50]:
“React components are in just about everything… Is it connected to the internet? Yes? There's probably some React code in it.”
Matt Kapko [03:08]:
“No, not at all. That’s right on. Cyber criminals are exploiting the vulnerability for financially motivated attacks... Nation state threat groups from China, Iran, it’s attracting attention from all corners.”
Matt Kapko [06:35]:
“One of those vulnerabilities is a patch for a patch that wasn’t patching. …The original patch for the original React to Shell vulnerability will not address those new vulnerabilities.”
Matt Kapko [07:54]:
“Researchers... are fearful, really, that this is going to get worse. They expect it to have long tail implications down the line. They're comparing it to vulnerabilities that have lived on in infamy, like log4shell.”
[10:10 – 31:54]
What is it?
Why it’s hard:
Same old problems, new tech:
Top CISO Moves (First 90 days):
73% of employees use personal devices for work; half aren't managed by MDM.
“We have to put in technologies that are going to be, you know, seamless to the user, are going to be effective.” [20:06]
SaaS and AI platforms compound governance and visibility problems; shadow/rogue adoption is rampant.
CISO Quote:
“We have closed the door to AI projects coming into the environment, but they're now coming through the window.” [20:55]
Legacy tools required manual inventory input—no longer scalable.
True solution is intelligence-driven, automated discovery and observability.
Example:
Agentic AI:
Anecdote:
Overpermissioned roles and agents, “public sharing,” misconfiguration, and poor credential management named as top SaaS risk factors.
New tactics exploit invisible/white text instructions in emails to hijack agents and cover tracks (“echo leak” scenario).
“...the agent on that system read the email, executed the instructions and then deleted any evidence… and then gave remote control to the other party.” — Dave Lewis [25:45]
On user blame:
“We have this preponderance of placing blame on the user... And if you are putting in a culture of fear, unfortunately, the users are going to do everything they can do to avoid being called out.” — Dave Lewis [15:25]
On the SaaS explosion:
“We have closed the door to AI projects coming into the environment, but they’re now coming through the window.” — Quoted by Dave Lewis, relaying a fellow CISO [20:55]
On wall-to-wall protection:
“Making sure that you have good password management, multifactor authentication, credential management, passwordless authentication, all of these elements help to reduce risk for the organization.” — Dave Lewis [31:08]
On the realities of AI and organizational readiness:
“We have to make sure that we’re not letting this get away from us because there is a real impact. You know, there are consequences and the ramifications thereof.” — Dave Lewis [28:57]
Summary by Safe Mode Podcast Summarizer | For more, visit cyberscoop.com or follow Safe Mode Podcast.