
David Spark
Best advice for a CISO. Go.
Rob Allen
I've got two pieces of best advice for a CISO. One: listen to this podcast. It's essential.
David Spark
Two?
Rob Allen
Buy ThreatLocker. It's equally essential.
David Spark
It's time to begin the CISO Series Podcast.
Welcome to the CISO Series Podcast. My name is David Spark, the producer of the CISO Series, and my co-host for this episode, I know you all love him, he's the principal over at Duha. It's none other than Andy Ellis. Andy, say hello to the audience.
Andy Ellis
Hey folks. And happy springtime.
David Spark
Happy springtime it is. We are available all year round, 24/7, 365, at cisoseries.com, except when the site goes down. But that doesn't happen. The site is fine, only when the
Andy Ellis
AWS fails or something.
David Spark
Something like that. Or it used to be problems, but we have some protections in place that we did not have before, so things are much, much better now. Go check out all of our shows. Our sponsor for today's episode is a spectacular sponsor of the CISO Series. We love having them. It's none other than ThreatLocker. Allow what you need, block everything else by default, including ransomware and rogue code. That is ThreatLocker. We'll have lots of new, cool things to tell you about ThreatLocker a little bit later in the show. But Andy, before we were on mic, we were talking about the whole buying process, and I just want to set this up. This I've learned through my whole life: people can say yes a thousand times, but they only need to say no once. That is one of the things that I've learned. People can string you along, string you along, string you along with yes, yes, yes, yes, and it only takes a single no to end everything. And that's a sad reality of life. You've run into this.
Andy Ellis
Yes, I absolutely have. And organizationally the challenge sometimes is the person who's talking to the vendor doesn't have buying authority. So they'll say yes. All through this process, they bring a vendor in, they string them along, they're excited because they think that all they have to do is prove this would be incremental value and money will magically appear.
David Spark
And by the way, people like to say yes because you're happy, they're happy, everyone's happy.
Andy Ellis
Right? It's non-confrontational. And then all of a sudden the vendor sends over the contract. It's like, we did the proof of concept or proof of value, we hit all the milestones that we wrote for you, and now we would like you to pay us. And then the CISO gets involved, and the CISO is like, what are you talking about? So this is my advice for you as a vendor: ask the question, hey, assuming that we pass technical validation, we succeed at the POV, what does the purchasing process look like? Who needs to sign off on it? And then the second thing: if you are a CISO and you have people on your team who are doing this, you need to have serious conversations with them, because they're seriously damaging your brand.
David Spark
Yeah. You don't want misalignment of someone saying, yes, we'll buy when the person who does have purchasing authority is saying, no, we're not buying. You don't need that because.
Andy Ellis
Or it just wasn't in the budget yet.
David Spark
And Andy, you say you've seen this a lot, you've been stuck in the middle of these things and you're trying
Andy Ellis
to, yeah, I'll get a call from a vendor that I know, that I've either invested in or advised, and they're like, hey, this process seems to have gone astray. You know the CISO, can you just have a back-channel call? And I'll reach out to the CISO, and sometimes they're even surprised there's a contract. They're like, I haven't seen a contract. There shouldn't be one issued. This isn't in the budget for right now. Like, I knew my guy was off investigating this area. And so the mismatch is often happening inside the company. It's not between the company and the vendor, but the vendor ends up burning a lot of resources trying to land a deal that's never going to happen.
Rob Allen
Can I just add a small programming note?
David Spark
That is Rob Allen, please jump in. Yes.
Rob Allen
Yeah, no, there was a "you as a vendor" referenced in that last conversation, and I just want to say that that wasn't me as a vendor.
David Spark
No, it was definitely not Rob. And in fact, let me introduce Rob. Rob has been a frequent sponsored guest with the CISO series. We love Threat Locker. We love Rob. Even with all the gentle, kind hearted abuse that Rob gives me. And in fact, that's a running joke that we have. But for those of you who don't know him, he is the chief.
Rob Allen
Abuse is such a strong word.
Andy Ellis
He's just like a hedgehog. If you rub him the wrong way, it doesn't feel so comfortable.
David Spark
He's such a hedgehog. Why are you insulting him? Yes.
Rob Allen
Like if you give me grief about my audio setup or you say, I'm a little hot there or I can't hear you, then I prickle.
David Spark
We have some soundbites of Rob that could be great extortion material, essentially. You wouldn't dream of it. Rob is the chief product officer over at ThreatLocker. Everyone, please welcome Rob Allen. Rob, we love having you back. Thank you for coming back.
Rob Allen
It's a pleasure to be here, David.
David Spark
If you haven't made this mistake, you're not in security.
Quote: "The insider threat isn't always malicious. Sometimes it's your best employee." Joshua Copeland of Crescendo argues that the most dangerous insider threat is often the person who keeps things running. This is the key thing we're talking about here. They're effective and rewarded for it. But access expands, oversight shrinks, documentation becomes optional, and eventually nobody knows what the person who keeps things running has access to, including them. Sometimes when they make a mistake or burn out, the blast radius is massive, and you realize you didn't have resilience, you had dependency. As Copeland puts it, quote, "If your culture celebrates heroics over repeatable process, you are quietly manufacturing insider risk." I like that quote. So herein lies the problem: there are critical people doing critical jobs that desperately need redundancy and double checks. Andy, how do you build those safeguards without killing the thing that made them heroes in the first place? Or does building up this kind of hero culture always introduce risk? I like this take. What's your take on this?
Andy Ellis
So I 90% agree with Joshua here. The one caveat I want to add is I dislike using the term insider threat to talk about this great employee. If you must use the term insider threat, it's their management, not the employee. Right? There is a hazard, which is the over-reliance.
David Spark
So you're manufacturing hazardous environments, not insider threat. Right.
Andy Ellis
You're manufacturing a hazardous environment, and the hazard is this single point of failure, which is often an amazing employee. Basically, whenever there's an issue, you're like, well, to fix this will take, like, five engineers four months, and you've got this superhero who walks in and solves it over the weekend with a tool that only they know how to use. And this happens a lot in companies. And one of the big challenges is they don't bother making their tools usable by other people, because they have been burned by spending that energy before and nobody ever used their tool anyway. So why would they invest energy in trying to make it, like, halfway safe when nobody's going to meet them at the other half of the way? If you're not willing to invest the operations, QA, and software development resources, they're, of course, going to be incentivized to do the least amount of work to solve the problem, because any work they do above that has no value, and they are making the right choice. So pause on that one and think about what this person should do differently. What you should do differently as a manager is force the issue. First of all, this person probably is not happy maintaining all of these processes and tools that they keep building. They want people to take it away from them, but in a graceful way. So hire somebody whose job is: you're going to back up this person. Every time they build a new tool, you'll go in, you'll learn how to use it, you'll write the documentation, you'll operationalize it. Also, force the person to take vacation. I had one of these employees on my team. He got engaged, and he let us know over a year in advance. He's like, this is the day I'm getting married, and I'm taking a honeymoon, and I don't want to be contacted. So we said, okay, we'll start sending you on free vacations. Like, we're not going to pay for your vacation, but it's free vacation time.
And the first time you go out, we want you to have your cell phone and your laptop handy. We're trying not to call you, but if there's any call, the first thing you're gonna do is call your manager and say, hey, I need somebody to document what I'm doing. The next time you go on vacation, you don't have your laptop; you can only talk to people by phone. The next time, no phone. And we just kept progressing until we felt confident this person could be gone for the two weeks of their honeymoon without us needing to call them. And we actually managed to build a whole bunch of stable processes around the problems this person had solved.
David Spark
So that's interesting. You sort of stepped yourself through weaning yourself off of the hero, right?
Andy Ellis
Because we were the problem, not them. The culture is the problem, not the person who is solving your problems.
David Spark
I love that. All right, Rob. I've actually met many of the people over there at ThreatLocker, and I think kind of the theme of your company is this sort of hero theme. But hero can have a negative backlash when the hero's not available. I don't know, you put up the bat signal and Batman's not available. So what can you do to sort of, as you say, it's not an insider threat, it's a culture issue, what can you do, and what have you done, to deal with this issue?
Rob Allen
Well, first of all, I'm going to surprise you, David, because I'm going to agree wholeheartedly with both Joshua and Andy. So that may come as a surprise to you. The other thing I would say is, yes, we very much do embrace cyber heroes. Our support team are cyber heroes. I mean, effectively, we are all cyber heroes. I think maybe one hero is not a good thing; many heroes is a good thing. That might be a good way of looking at it, because, as Andy said, dependency on one individual is not good in any environment. Obviously burnout is one thing; whether somebody is available is something else. But as I said, I cannot disagree with that in any way, shape, or form. So not having one hero, and instead having many heroes like the cyber heroes, is the way to go.
Andy Ellis
Well, no, I really like that take, because there's this way of looking at organizational maturity, right? Like the Capability Maturity Model that SEI uses, where you say level one is hero culture. And the real problem we have is organizations that think they're level three but are not, and so it's only held up by a couple of level one heroes who are showing up and, like, saving you from the fact that your processes don't actually work. If you're entirely a level one organization, that actually can be very healthy for your organization, but it has a lot of downside risk: no stability if all of a sudden half your people leave, training up new people is really hard, there are huge issues. But if what you've got is a lot of heroes, that's great. As you move up to level two and level three, what you want is a consistent level of heroism, where when there's a problem, people will swarm on it, fix it, but then maintain it at the level your organization's supposed to be at. Having one person you rely on to patch your organization, that's the real problem.
David Spark
And I also think what you have, Rob, with ThreatLocker is quite unusual, that you can have so many sort of top-tier people. Not only that, you also train them too; you have a training program for them. But very early on, I worked in an organization where there was definitely one guy who was keeping the house of cards together. When he wasn't available, we were all screwed. And sadly, and I'm sure you've seen it, there are many organizations like that.
Rob Allen
Yes, undoubtedly. I mean, look, I've got examples I can think of in my head. We've got a guy who's now our director of infrastructure. Incredibly clever guy. Absolutely incredible. I mean, he basically talked me through setting up a data center in Dublin in 2021, at the height of COVID. Like, pretty much talked me through it. Guy's an absolute genius. And at that point, if you had told me that we would have a team of six or seven or eight infrastructure people who you can go to just as confidently as you can go to him, I wouldn't have believed you. But it takes time, it takes effort, and, as you said, it takes training as well. It's super important. It takes having trust in people, too: for people like me to not only go to the main guy in the department, but to go to any of the other people who are there as well, and have the same level of confidence in them as I would in him to do the thing that I'm looking for them to do. So there is a lot to it. But as I said, from my perspective, having been with the company the length of time I have, seeing that development, from one guy who's basically going to fix everything or sort everything out for me, to there being any number of people I can go to if that one guy isn't available, is invaluable and phenomenal.
David Spark
What's working? What's not working?
Quote: "57% of significant cyber incidents involve attacks the cybersecurity team has not prepared for." End quote. That's according to the Cytactic 2025 State of Cyber Incident Response Management report. In a CSO Online piece, Evan Schuman argues we over-index on dramatic breaches instead of the subtle lateral movement attackers actually use. While many organizations run tabletops once a year purely as a compliance checkbox, there's always inherent randomness in attacks that no tabletop can fully predict. Cybersecurity is not alone; the military teaches the same principle. We've heard this again and again: the tabletops reveal the communications and approvals structure. So here's my thought on this, Rob. Should we just be quizzing our staff on that, on who you go to in an incident, in this incident or this case, rather than relying so heavily on trying to guess the next big or small attack? Should we just, as a reflex reaction, know the communication structure? Because it seems that's kind of the core of everything here. Yes or no on my theory?
Rob Allen
Well, I'm going to surprise you now. I'm going to revert to type, which is: the 57%, I'm really interested in that. 57% of cyber incidents involve attacks the cybersecurity team had not prepared for. I would wager that 100% of cyber attacks involve attacks that the cybersecurity team had not prepared for. But, I mean, that's beside the point. Look, tabletops in some instances do have value, and it's not something that I would completely rule out. I mean, we had something similar ourselves not too long ago from a risk management perspective, and a couple of things were brought up that basically made our CTO not sleep for a couple of nights, because he realized, oh, this is something that we really need to concern ourselves with, and this is how we solve it. So there is certainly value to that kind of exercise, in that, as with many things, if you get clever people together talking about potential outcomes and what can happen, then good ideas will invariably fall out of that. So there is value to that, and I wouldn't say that organizations should not do it. But as you said, proper planning involves more than just, well, if this happens, then we do that, because realistically, that's probably not going to take care of every eventuality. I mean, there have been so many good examples of organizations who have a ransomware response plan, and the ransomware response plan is stored somewhere that just got hit by ransomware. So what do we do then?
David Spark
But the thing is, shouldn't you know who to call next? Like, I now know this; I need to talk to somebody; I don't know who to talk to. Shouldn't the plan be that everybody knows who the next person to talk to is?
Andy Ellis
So I think, David, that we're in sort of two different places here. And first of all, you're absolutely correct. If you're doing incident response, having a generic incident response plan, like here is how we respond to incidents is basically like, you've got to do this right? And everybody who's done one incident, the first thing they do is, okay, I have to write down who I'm supposed to call. And over time, you'll evolve that, you'll grow it. So that's like table stakes. And I think that's one of the reasons Rob wasn't really responsive to that is I think we all assume that exists. Like, if you have not done that, you shouldn't be worrying about scenarios. You should be worrying about, who do I call.
David Spark
But I hear again and again, the reason for the tabletop is to figure that out. I hear it all the time. Like, a lot of people don't know that until they do the tabletop.
Andy Ellis
Well, I think if you've never done a tabletop, you don't know what you don't know. It's nice for you to say, oh, who's the next person you call? My incident response template when I was a CISO was 90 pages. Yikes, right? But we were not always referencing it. And we actually got to a point that we literally had a person whose job was to keep track of the template, separate from the person running the incident, because they would be the ones who'd have to remember something like, oh, someone mentioned PII, which means we now have to call these seven other people, because it's not just one person. Like, I've got to go call somebody in my investor relations team, which is not something most people have thought about until it comes up. Because if I'm going to have to disclose a breach, then I need the IR team to be prepared for what happens after a breach gets disclosed. So, yes, this is necessary. If you don't have any incident response plan, go run a tabletop, simply because it will teach you that you don't know anything. And there are a lot of things that are obvious. You're often better off getting somebody who's done this before to come in and at least give you that starter. I think the real question here is the 57% where it's a scenario they hadn't practiced. I can argue that one either way. One of the challenges of surveys is they're very vague, and so people are going to answer that one differently. Like, was it that we had a scenario and there was one component that exposed a hazard we hadn't known about? Like, oh, we practiced ransomware, but we didn't practice ransomware that started with one machine in our network that was not running the coolest anti-ransomware vendor that's out there. Maybe that's the scenario we didn't bother practicing, but that's what actually hit us.
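Andy's "someone mentioned PII, so we call these seven other people" logic is essentially a lookup table that expands the call list as incident flags get raised. Here is a purely illustrative sketch of that idea; the role names and flags are hypothetical, and this is not a reconstruction of his actual 90-page template.

```python
# Minimal incident-response notification map (illustrative only).
# Flags raised during an incident (e.g. "pii") pull extra roles
# onto the call list, on top of the base contacts.

NOTIFY_MAP = {
    "pii": ["privacy_officer", "legal_counsel", "investor_relations"],
    "ransomware": ["backup_lead", "cyber_insurer"],
    "public_disclosure": ["pr_lead", "investor_relations"],
}

BASE_CONTACTS = ["incident_commander", "ciso"]

def call_list(flags):
    """Return the deduplicated, ordered list of roles to contact."""
    contacts = list(BASE_CONTACTS)
    for flag in flags:
        for role in NOTIFY_MAP.get(flag, []):
            if role not in contacts:
                contacts.append(role)
    return contacts

print(call_list(["pii", "public_disclosure"]))
# ['incident_commander', 'ciso', 'privacy_officer',
#  'legal_counsel', 'investor_relations', 'pr_lead']
```

The point of keeping it as data rather than prose is exactly what Andy describes: a dedicated person can maintain the map while someone else runs the incident.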
Rob Allen
Can I just say, Andy, kudos for the low-key Donald Rumsfeld reference there, with the known unknowns, the things we know and the things we don't know.
Andy Ellis
Yeah.
Rob Allen
And by the way, David, you did also quote the IRA earlier in the episode. What did I say? The thousands of... sorry, thousands of yeses and one no. It only takes one no.
David Spark
Oh, that's an IRA quote.
Andy Ellis
Yeah. I mean it's a standard quote that the IRA definitely did use.
Rob Allen
I think you'll find the IRA were first. It was when they tried to blow up Maggie Thatcher in 1984. They said, we only have to be lucky once; you have to be lucky every time. And it's very applicable from a cybersecurity perspective as well.
David Spark
Before I go any further, I do want to tell you about our spectacular sponsor. And that would be who? ThreatLocker. Yes. Now, most security tools assume a very simple model: just let everything run and try to catch what goes wrong. The problem is, attackers already know that model. ThreatLocker flips the script with a deny-by-default approach that gives organizations control over what can run, what can access data, and what can elevate privileges before an attack happens. Instead of chasing alerts after malware executes, ThreatLocker helps teams define what should be allowed in their environment and blocks everything else by default. That includes applications, scripts, DLLs, and even which apps are allowed to access sensitive data, all without relying on signatures or behavioral guessing. Now, for CISOs, this means fewer unknowns, fewer emergency escalations, and a security posture that's based on certainty, not detection and hope. ThreatLocker is used by organizations that want to reduce attack surface without breaking the business, through visibility-first deployment, staged enforcement, and real human support. If you're rethinking endpoint security, ransomware resilience, or least privilege, ThreatLocker is worth a serious look.
It's time to play What's Worse?
All right, we all know how to play the What's Worse game. Andy's going to answer first. Rob, you're going to either agree or disagree. This comes from one of our absolute favorite contributors, Dr. Dustin Sachs, who has a new company called Cybercog Labs. And here are his two scenarios. This is a short one. What is worse: a board that only wants a yes or no answer to the question, are we secure? Or a board that wants deep technical detail and then misinterprets all of it. It's a good one, man.
Andy Ellis
Dustin, that's a really good one. I like that. Because the challenge of the first one is that they're really just trying to create a scapegoat here, right? They want you to be on record saying, of course we're secure. Or they want you to say no, in which case you've just created this internal disaster. Like, why are you the CISO? The CEO is going to be yelling at you; your career is being sabotaged. By the way, just as an aside, if you get asked that question, and
David Spark
I remember years ago asking you this question on camera.
Andy Ellis
Yeah. If you get asked, like, the common variant of this is, is there enough budget? Is security getting the right prioritization? And you want to be able to hedge. The correct answer is, look, we could always do more, but I'm sure all of my peers believe they could also do more with increased prioritization and budget, and we always have to make the trade-off as to whether we're over-investing in security versus sales, because we want to ensure there's still a revenue flow. And that's really the context in which I operate. Some variant of that. So now let's take that second one, of the board that wants you to go into detail and is going to just completely misinterpret things. Honestly, I think that one gives you more opportunity. It's a bad scenario, just to be very clear, because now you're going to be dealing with the fact that the CEO is going to come to you and be like, hey, this board member said this thing; why did you make them believe that? And now you're going to have to deal with those kerfuffles. But at some point, you actually become a benefit to the rest of the company. If your board is doing that to you, they're probably doing it to other parts of the business as well. And so if you can take some of that distraction away from the marketing team, away from the head of revenue, you might be able to serve your company better there. I'm not saying this is a good outcome. I'm merely saying that there's.
David Spark
But at least it's more revealed. And you're not the scapegoat in this case, right?
Andy Ellis
You're not the scapegoat here. It's not simplistic. You're gonna get them very involved with you. And you might find allies among the other executive suite who are like, oh, thank God that they're abusing you this quarter. Cause last quarter, that was me.
David Spark
And also, when you did do the technical dive, at least you were saying some accurate information, right?
Andy Ellis
You were saying accurate things. Because, remember, you're not just presenting to the board, you're presenting to all the other executives who are in the room and who helped you review. And so as long as you maintain that level of honesty there, you'll build some sympathy. So these both are problematic, but I would say the first one is worse because nobody is actually engaged on cybersecurity in that company.
David Spark
All right, this is a tough one, Rob, and I throw it to you. Do you agree or disagree with Andy? Same or different reasons? What are your thoughts?
Rob Allen
I agree wholeheartedly with Andy.
David Spark
Okay.
Rob Allen
It will surprise you to hear. No, look, as he said, both are not great.
David Spark
Politely saying it.
Rob Allen
But the second option is definitely the least bad. Detail is good, even if that detail is going to be misinterpreted, because at least you can then educate. You're opening a conversation, rather than just yes or no, which is basically the end of a conversation.
David Spark
Let me. Hold on. Let me just throw in this shoehorn here. The board is making decisions.
Andy Ellis
The board does not make decisions. You're wrong, David. Sorry, I gotta step in here, but they're misinterpreting.
David Spark
But then they're approving things. They're approving things, aren't they?
Andy Ellis
No, no. And every CISO needs to understand this. The job of your board is two things. There are only two things the board does, and one of them is not run the company.
David Spark
I know that.
Andy Ellis
They do not make decisions. What the board does, number one, is ensure proper governance is in place. They represent the shareholders. Their job is to evaluate whether management is doing okay or not. That's it. They don't tell management what to do. Now, they might tell the CEO, hey, that CISO is a loser, fire them. But they don't have hire/fire authority; that's up to the CEO. The second thing they do is ensure there is a CEO succession plan. That's it. That's the only thing boards do. They may provide advice, they may provide help, but you are not trying to get the board to make decisions about how you do security. That is the fastest way to finding the door on your way out.
David Spark
Okay, so that makes a good point. So here's my question, and I'm sorry that I cut you off, but would your answer change if it wasn't the board that wants technical detail, but the C-suite that wants deep technical detail? Would that change things?
Rob Allen
Okay, Andy's nodding very much here.
Andy Ellis
If this is my CEO, like, who controls my budget, I would much rather have them asking the very simple question and let me go run my security program than attempt to micromanage it wrongly, because they have management authority.
David Spark
Yeah.
Rob Allen
And again, I don't disagree with that. But I still think the yes or no question is a time bomb. The yes or no question is a trap.
Andy Ellis
Oh, it's a time bomb. But the other one's a bomb.
Rob Allen
Yeah, well, possibly. Possibly. But again, I'm all for transparency. I'm all for conversation. I'm all for discussion, rather than, as I said, a yes or no, which is the end of a conversation. Well, mind you, a no is probably the beginning of a much bigger conversation. But realistically speaking, who's going to say, no, we're not secure, I'm the CISO? Because the answer is going to be, what the hell are you doing then? Why do I employ you?
David Spark
All right, so we have agreement here in both cases.
Please, enough.
Andy Ellis
No more.
David Spark
We've seen a rush of companies looking to integrate AI into security solutions, leading some to wonder if we're already starting to over-trust the value of AI. I would say a big fat yes to that, Andy. When it comes to AI being integrated into cybersecurity, everybody's doing it, whether you like it or not. But what have you heard enough about with AI integration, and what would you like to hear a lot more about?
Andy Ellis
So when I think about AI, I think about three different areas, and most people only think of one of these, right? The first area is, functionally, data analytics, pattern matching. Like, can I figure out if this is an adversary by just looking at a lot of data? We've been doing machine learning in that space for a long time. People stopped talking about it. Really, I want to hear a little bit more about that. The second thing, which we hear a lot about, is generative AI, which is, oh, I have a prompt, which might be some data, might be other things; generate for me an answer. I'm tired of hearing about that. And here's the reason why: the third thing we need to be talking about is repeatable automations. I do not want a generative AI taking the same set of inputs and providing me unpredictable outputs. If the problem that I have is ransomware is on this machine, I should have a playbook of things we're going to go do now, and I want that playbook followed every single time. I'm tired of hearing about how, oh, we're just going to use LLMs everywhere to solve everything. Like, maybe you have an LLM write the playbook, but now you should have a playbook that is repeatable and automatable, that does not involve generating new outcomes until you get to the end of it.
David Spark
All right, very good. I throw this now to you, Rob. What have you heard enough about with AI integration in security tools specifically, and what would you like to hear a lot more about?
Rob Allen
I mean, if I'm brutally honest here, David, I think the word AI in general, in terms of security, is something that does somewhat grind my gears, primarily because of the overuse and the fact that everything and everyone seems to now think that they must be AI-powered or infused with AI. It is quite tiring. The last place we met in person was RSA, and you literally could not take 10 steps at RSA without being accosted by some.
David Spark
Same with black hat.
Rob Allen
Yeah. Monstrous booth with something, something AI plastered all over it.
David Spark
Andy, didn't you count like the booths that didn't have AI in them or something?
Andy Ellis
I did in fact count everything. So of the 359 booths I saw, 85 of them had the word AI on them, 26 said Agent. So it was, I mean it wasn't like everybody, but it was a lot. That's still like 30%.
Rob Allen
That's less than I would have expected, quite frankly, because I would have expected to be 90% because that was my perception.
David Spark
So it's over-indexing on it. But you know what, the way I see it is, you go back many years ago, when everyone was moving to the cloud, and they led with the concept of the cloud. Nobody really leads with the cloud now. I mean, it's kind of assumed most of us are in the cloud of some sort.
Andy Ellis
I hope so, yes.
David Spark
Hope so. So I see kind of a similar transition: everyone's going to be using it, just like they're using electricity, at some point. Yes, Rob?
Rob Allen
Perhaps. I'm going to sit on the fence slightly about that one. I mean, look, we have a fairly simple way of looking at it, and, again, to Andy's point, there are absolutely uses of AI that are extremely valuable. So taking large amounts of data and drawing conclusions from it is a perfectly valid, perfectly reasonable, and perfectly useful way of using it. Where we draw the line, fundamentally, is having AI make decisions on the fly about what's good or what's bad, because invariably some of those decisions are going to be wrong. And to our point from earlier about bad guys only having to be lucky once, it only takes one wrong decision about something's goodness or badness for it to be game over for any organization. That's where we draw the line. I mean, there are very limited areas within our platform where we use AI. We use it for things like website classification, for things that we've never seen before. But in terms of actual decisions about what's good and what's bad, that is not something we would countenance. Now, the reason for that is quite simple: our approach is to block by default, so we don't need to make decisions on the fly about whether something's good or something's bad. If we see something we've never seen before, it's just going to get blocked; it's going to get denied. We don't need to figure out if it's good or it's bad. So we don't need AI to inform decisions. Our decision by default is block. That is one of the benefits of the approach that we take: we're not dependent on AI, or anything else, making decisions on our behalf. It's a very binary decision, yes or no. Is it explicitly allowed? Cool, off you go. If it's not, no. But yeah, it's a long-winded way of saying that there are certain uses that are perfectly valid and absolutely should be encouraged, and there are some uses which, because of the approach that we take, we don't feel that we need.
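Rob's "is it explicitly allowed?" logic is simple enough to sketch. The following is a toy illustration of deny-by-default allowlisting in general, not ThreatLocker's actual implementation; the application names and hashes are made up.

```python
# Toy deny-by-default allowlist: only (name, hash) pairs that are
# explicitly allowed may run; everything unknown is denied, so no
# on-the-fly good/bad judgment is ever needed.

ALLOWLIST = {
    ("winword.exe", "abc123"),  # application name + known-good hash
    ("excel.exe", "def456"),
}

def decide(app_name, file_hash):
    """Binary decision: explicitly allowed runs, everything else is denied."""
    return "ALLOW" if (app_name, file_hash) in ALLOWLIST else "DENY"

print(decide("winword.exe", "abc123"))   # ALLOW: known app, known hash
print(decide("winword.exe", "evil999"))  # DENY: same name, unknown hash
print(decide("ransom.exe", "zzz111"))    # DENY: never seen before
```

Note the contrast with a detection model: a never-before-seen binary requires no classifier verdict, because "not on the list" already decides the outcome.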
David Spark
Coming up next, can you measure your time to exposure, not just time to remediation?
Today's Exposure Management tip is sponsored by Qualys.
Several major cloud breaches in the past started with misconfigurations introduced during routine deployments that sometimes remained exposed for weeks. Post-incident reviews revealed that patch SLAs were met, but the exposure itself wasn't detected quickly enough. The breach window existed not because fixes were slow, but because exposure awareness lagged behind operational change. Most organizations obsess over how fast they can fix vulnerabilities, but not many of them measure how fast new exposures are created. Cloud changes, identity sprawl, SaaS integrations, and configuration drift often introduce risk faster than teams can remediate it. Mature exposure management programs track time to exposure: taking the time to identify a vulnerability and then stepping back to determine the likelihood of it being exploitable within the company, along with the options that exist to monitor the situation and remediate as necessary. The next step, material exploitability, applies an even longer lens, monitoring an exploitable vulnerability to the point where it could do material damage. The key is to not just react to symptoms, but instead focus resources where exploitability is real, not theoretical.
Want to go beyond exposure visibility and actually reduce risk? Find out how by visiting qualys.com/rock.
Unexpected outcomes or failures
Quote: "70% of employees admit to bypassing security controls not out of malice, but simply to get their jobs done more efficiently." That's from Dr. Dustin Sachs of CyberCog Labs, the one who gave us our "What's Worse" scenario. He shared that stat in a CISO Tradecraft blog post arguing we've spent decades treating cybersecurity as a technical problem when it's really a behavioral one. Security leaders must meet people where they are rather than forcing compliance. For instance, don't make developers sit and watch a SAST/DAST test run; automate them in the background so results are ready when they return, "they" being the developers. Dustin's six recommendations focus on auditing for friction rather than just compliance, harnessing neurodiversity as a security asset, and aligning talent to natural behaviors. What's wrong with our security programs, or compliance in general, if nearly three quarters of employees are bypassing security controls? Andy, it sounds like blaming the audience is no longer the answer here. What have we screwed up?
Andy Ellis
So I agree with the premise, but I would actually frame it the exact opposite way, which is we have been treating cybersecurity like a behavioral problem instead of a technical problem.
David Spark
Okay, Right.
Andy Ellis
We have been treating humans as if they're eusocial insects. Like, you know, there's a bee at the top and all the other bees do exactly what they're told to do. They follow all directions. That is technically not how humans operate. And so you tried to assume that you could institute an organization that was rigid, that would absolutely follow rules. That's not a behavioral problem. That's a technical problem. Your policy doesn't work. Your organization doesn't work because the elements are what the elements are. There's nothing wrong with humans. There's a reason we are the dominant species on the planet and not bees. Stop trying to build your security controls for bees. This is a different way of framing the same problem Dustin sees. Just to be very clear, I'm not in fundamental disagreement with his recommendations. I am saying you can't say this is a behavioral problem, because behavior is what it is. You're not going to adjust it. The moment you talk about a behavioral problem, you think you can solve for the humans. Stop trying to solve the humans and solve the actual problems. And his recommendations, some of these are fantastic. But stop trying to say we're just going to do security awareness training and get ourselves out of this. Because security awareness training is an anti-Goldilocks solution. It is the worst of all possible worlds. It tells the security team, hey, we did our job, we did the bare-bones minimum to tell people. It tells the people, hey, we did our job, we sat there for 30 minutes and we clicked a couple of links and now we're golden. And you have actually made the problem worse, because everybody thinks the problem should be better, but it's not going to be. It's kind of like consumer recycling. We make people sort their recycling in their home, despite the fact that it provides no meaningful benefit, and we'd be way better off with single-stream recycling systems. But instead we're doing stupid things.
Blaming the users. Stop blaming the users. That's because you're treating it like a behavioral problem. It's a technical problem if your systems just don't work right.
David Spark
Interesting take, taking it in reverse: that we should solve many of these things that we're solving behaviorally through technical solutions instead. What do you see, Rob?
Rob Allen
I'm sort of intrigued. I'm actually looking at this question again, and I think there may be a comma missing between "security controls" and "not out of malice." Because the question as presented is: 70% of employees admit to bypassing security controls not out of malice. Does that mean that 30% are bypassing security controls out of malice?
David Spark
No.
Rob Allen
I don't know. I'm wondering who those 30% are.
Andy Ellis
Well, I think it's that 70%. Yeah, I think that 70% admit to bypassing security controls. That's the statement.
David Spark
Yeah.
Andy Ellis
Not out of malice.
Rob Allen
Okay, that's fine. Yes, as I said, there's a comma missing there, but I like that.
Andy Ellis
Dustin, be a little more precise. Or whoever copied and pasted this for us.
Rob Allen
Commas matter. Let's eat, comma, Grandma, and let's eat
Andy Ellis
Grandma are slightly different.
Rob Allen
Two very different things. Sorry, I got completely distracted by that nerd sniping.
Andy Ellis
Rob.
Rob Allen
Yes, I am a pedant, I confess. So, yeah, 3/4 of employees are bypassing security controls. I mean, to be honest, it's not a tremendous surprise, as Andy said. I mean, employee training is the least good solution to that problem, because it doesn't matter how well you train people, they're still going to click on that link.
David Spark
Yes.
Andy Ellis
Well, to be clear, clicking links is what we pay people to do.
David Spark
Right?
Andy Ellis
The moment you try to say, actually, technically, clicking links is how they get paid. Because if they didn't click the link from HR or finance, they're not getting paid.
Rob Allen
Yeah, it's always going to fall down. I mean, I don't know if I gave this example before, but it was a kind of impromptu exercise at one point where Danny Jenkins, our CEO, posted a link to our general Teams chat, basically saying, hey, everybody, I need you to download and run this thing right now. Now, we're a really well-trained, really well-educated cybersecurity company. We do quarterly training: don't do this, don't do that, don't click on the link, don't do whatever. And still, nearly 20% of our company clicked on that file and tried to run it, because ostensibly Danny, our CEO, had said they needed to do so.
Andy Ellis
So here's my question, though, for you, because I want to dig in on this one.
Rob Allen
Yes.
Andy Ellis
It was on your internal Microsoft Teams chat from his legitimate account.
David Spark
Yes.
Andy Ellis
Okay, so why did 80% of the people not comply with the request from management? That's your... No, no. This is the problem, which is there is no good choice for an employee to make here: either do what management asks you to do, or don't do what management asks you to do.
Rob Allen
Those people who know Danny would know that things Danny wants done get done. So there's a valid argument, right?
Andy Ellis
We want to test this new thing we just deployed, and we're testing it with ThreatLocker, and we don't want to have to push it out. So this is completely legitimate. Here's the thing. Yeah. This is because we don't treat it like a technical problem. Why is clicking links unsafe? Solve that problem. Don't solve "users click links."
Rob Allen
Well, the interesting thing was that even though, as I said, 20% of our organization clicked on the thing and tried to download or run it, none of them were able to do so, because ThreatLocker blocked it. Because, again, blocking things is what we do.
Andy Ellis
Right. So those 20% should all get bonuses of some sort. They should be recognized. They did what Danny said in a safe environment.
Rob Allen
Yeah. Well, now, how did they know it was Danny that was saying it?
David Spark
Oh, by the way, you know what's happening right now? Someone is pulling that quote of yours and sending it to Danny. That's what's happening. Oh, yeah.
Rob Allen
100%, undoubtedly. Yeah.
Andy Ellis
No, but, no, like, they get to trust you have deployed a secure chat environment. At least we hope Microsoft Teams is remotely secure now.
Rob Allen
But there is a big but in this, which is that, first of all, Danny is not always sitting at his desk. Danny's not always in the office. Danny is very often on stage. What Danny will do, invariably when he's on stage is he will take his phone and he'll leave it down somewhere or he'll give it to somebody, say, hold on to that. So there is no guarantee that the person who has just posted this thing to download this thing is actually Danny. It may have come from his phone, but that doesn't mean it was him that posted it.
Andy Ellis
So I'm just gonna say it was a safe environment to follow what Danny said to do. They acted safely and with alacrity. Now, if somebody reached out to Danny and said, hey, Danny, is this really you? They should like get extra kudos.
Rob Allen
But yes, absolutely. So confirmation would have been a very.
David Spark
But you can't have the entire staff reaching out to Danny and asking that very question.
Rob Allen
Yes, but again, I just think this was.
Andy Ellis
Yeah, or reach out to your boss and say, hey. Like, there's lots of ways to solve that one. But this is one of the things that just gets me, which is either we're going to use electronic communications to run our organizations or we're not. We can't selectively say, oh, when security didn't like that you did it, then you did something inappropriate. But if Danny is always telling people what to do over Microsoft Teams, then the one time we want to retroactively say, well, maybe you shouldn't have listened, then it's BS. But hold on.
David Spark
No, but maybe. Hold on, let me clarify. So maybe what Danny's communication should be is everybody should log into their Teams account, with no link in the email. And we know that Danny doesn't put things... I mean, I'm just suggesting something here.
Andy Ellis
Yeah. If you want to make that the standard. And look, we did things like that. I got our marketing team to stop giving out USB drives to new hires so that we could say: stop using USBs, we will never give you a USB drive, we don't want you to stick one into anything. We made that the norm. And that's fine. But you can't say that the norm is "don't follow instructions from senior management that come through the company's communication channel" only when there's a security issue.
David Spark
All right, I need Rob to close this up.
Rob Allen
I know. But to that point, Andy, saying that whatever our chosen communications method is, is assumed to be secure, and that there is no confirmation involved, is a recipe for, I'm sorry, your accounts department sending half a million dollars to Timbuktu because they got a Teams message from Danny saying they should do so.
Andy Ellis
It's either trustworthy or it's not. Be consistent on that is all I'm asking. Which is if every time they get asked to do a thing they're supposed to challenge it, then that's fine.
Rob Allen
That's fair. That's fair.
David Spark
Have we come.
Rob Allen
David, you're expecting me to disagree?
Andy Ellis
Good job, Dustin, on this one.
Rob Allen
Because in this particular circumstance, it wasn't a "this is something that happens all the time" situation. It was out of the blue. It was a "hey, everybody, run this, please, as a test." It wasn't, as I said, something that would often happen. So, I mean, realistically speaking, from my perspective, it's the 80% of people.
David Spark
No, but I think what happened sounds like you learned something from it, didn't you? It sounds like, yeah.
Rob Allen
Oh, absolutely, yeah. Tremendously illustrative.
David Spark
Yes.
Rob Allen
And again, I'm not quite sure about the 80% being rewarded. I think I might disagree with Andy on that, but I.
David Spark
First of all, it all depends on the culture that you're building at your company and how you.
Andy Ellis
Right, exactly.
David Spark
So what? All of that 80%, they can contact Andy, and he'll give you a bonus.
Andy Ellis
Correct. And I will say, you did an amazing job. And yeah, actually, if ThreatLocker will give me budget authority to spend bonus money, then I will absolutely pass on a bonus.
David Spark
Well, our audience can go ahead and work on that if they would like to, and all the ThreatLocker employees listening can work on that as well. We have come to the end of our show, and I want to thank Andy, and I want to thank Rob, and I want to thank ThreatLocker. Remember, you can go to their website, threatlocker.com/CISO. Throw in the slash CISO. Just do me that favor. It's an easy way to let them know that you heard about them through the CISO Series. Just go to threatlocker.com/CISO. But remember, ThreatLocker: allow what you need, block everything else by default, including ransomware and rogue code. More about that at threatlocker.com/CISO. Please go check it out. Andy, as always, thank you so much. And Rob, I'm going to ask you a question I always ask, and I think I already know the answer to it: are you hiring over there at ThreatLocker?
Rob Allen
We are very much and always hiring over at ThreatLocker.
David Spark
Especially if you live near Orlando. In or near Orlando.
Rob Allen
Not only, but mostly if you live in Orlando. Yes. Or the surrounding areas or would like to live in Orlando.
David Spark
Not a bad place to be.
Rob Allen
It's not a bad place to be.
Andy Ellis
Except in July and August.
Rob Allen
Yeah, it's not great for golf in high summer. I will completely agree on that. But at this time of the year, March, April, it is beautiful around here. So as one who has moved to Orlando and very much likes Orlando, it is a great place to be and Threat Locker is a great place to work. So, yes, we are hiring. We're always hiring. We have just moved into a much bigger building than we were in previously, so we've got lots of room for new people now and we are hiring furiously to try and fill that room.
David Spark
Do you have any events coming up related to hiring at Threatlocker?
Rob Allen
It's funny you should mention it, Dave. We absolutely do. We've got a hiring event on site here in Orlando on April 15th. I'm sure there will be information about it on our website, threatlocker.com/careers. So, yeah, anybody who is in the area: we've had phenomenal success with these hiring events heretofore. We do them pretty much every month or two, and they've been incredibly successful. So, yeah, April 15th here in Orlando. Keep an eye on our careers page and you'll probably get information and details about it. I think if you're driving around Orlando, you will also probably see billboards everywhere to that effect, because we've found that's a really useful way of getting people's attention and getting them to come to said hiring events. So, yeah, April 15th in Orlando. Come along.
David Spark
Excellent. Thank you very much Rob. Thank you very much Andy. Thank you so much Threatlocker. And thank you our audience. We greatly appreciate your contributions. And for listening to the CISO Series
podcast. That wraps up another episode. If you haven't subscribed to the podcast, please do. We have lots more shows on our website, cisoseries.com. Please join us on Fridays for our live shows: Super Cyber Friday, our virtual meetup, and Cybersecurity Headlines: Week in Review. This show thrives on your input. Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you're interested in sponsoring the podcast, contact David Spark directly at david@cisoseries.com. Thank you for listening to the CISO Series podcast.
Title: It's Okay to Put All Your Eggs in One Basket as Long as You Really Trust the Basket
Date: March 10, 2026
Hosts: David Spark, Andy Ellis
Guest: Rob Allen, Chief Product Officer, ThreatLocker
Theme:
A candid, practical conversation between security leaders on the dynamics of vendor relationships, the pitfalls of “hero” employees, incident response preparedness, behavioral vs. technical approaches to security, and the hype versus reality of AI in cybersecurity.
Timestamps: 01:00 – 03:56
Timestamps: 05:05 – 13:15
Timestamps: 13:15 – 18:47
Timestamps: 20:48 – 26:50
Timestamps: 26:56 – 32:12
Timestamps: 32:12 – 34:15
Timestamps: 34:15 – 44:49
This episode provides real-world scenarios and pragmatic advice on vendor relationships, avoiding reliance on IT “heroes,” preparing for the unpredictable in cybersecurity, and the real limits of awareness training. It also demystifies how boards and executive leadership should (and should not) engage with CISOs, the responsible role for AI, and the importance of technical controls and organizational norms over simply “blaming the user.”
Memorable, practical, and at times funny—this is a strong episode for any security practitioner, IT leader, or policy maker seeking to improve how teams and organizations approach real-world security problems.