
A
This is Rich Stroffolino with the Department of Know. Mike Bickford, former CISO over at the New York State Gaming Commission. I gotta ask, one, thank you for being back on the show so soon, but two, what is your priority this week?
B
Planting the garden this weekend, but also getting immersed in some new activities at work.
A
Ooh, maybe some seeds of knowledge will perhaps be blooming as well. And Brett Conlan, CISO over at American Century Investment, what is your priority this week?
C
I think on the personal front, we've got some VIPs coming to our house, so we're getting ready for that. And on the work front, I would say it's our IAM program. We are looking at that closely, at what we have to do to fill that gap. Identities are now logging in thousands of times an hour across vendors we've probably never heard of. So that's where our focus is going this week and this quarter.
A
Fantastic. I love to hear where people's mindsets are, because we're getting hit with so many new things every single day, so getting a handle on that, I absolutely love to hear it. Hey, and anybody watching live, let us know in the chat, what is your priority this week? I'd love to see where your mindset is at. All right, producer Josh, let's run that open and get into the show. From the CISO Series, it's the Department of Know. Welcome to the Department of Know, your virtual Friday strategy meeting, helping close out the week, have some fun, and figure out how we can integrate all this crazy cybersecurity news into your week and into your departments, and figure out what makes sense here. A huge thanks to our sponsor for today, ThreatLocker, for helping make the show possible. We'll hear more about them later in the show. Remember, if you want to get involved, the best way to do it is live in our YouTube chat. We broadcast every Friday at 4pm Eastern, so if you're not joining us right now, make sure you join us one week, or every week, preferably. Or you can email us at feedback@cisoseries.com. We get more messages there, we do read each one of those, and we do appreciate it, so thank you so much. Before we jump into the news, just a quick disclaimer that all the opinions of our guests are in fact their own, not necessarily those of any employer, so do with that what you will. We've got about 30 minutes though, so let's get into the news, starting off with our Know or No segment. There's so much news out there, we've already alluded to this, and we need to know: is this something we need to be bringing to our security teams, bringing up at work, and figuring out what it means for our organization? Or is it an interesting headline, but we don't need to go any further? First up here, this one raised some eyebrows for me.
Insurers move to cap payouts around AI. The Financial Times reports insurers including QBE Insurance and Beazley are moving to cap payouts for AI-related cyber incidents, introducing sub-limits that significantly restrict coverage for risks like LLMjacking, where attackers exploit enterprise AI systems to avoid usage fees. Reminds me of early AWS auth jacking, that kind of stuff. Brokers and legal experts warn that the changes could narrow protection across a broader range of emerging AI threats, even as insurers argue that they are clarifying coverage rather than reducing it. All right, Brett, I'm going to start with you. Insurers capping AI-related losses: do you want to know more about this, or is this just insurers being insurers?
C
Insurers being insurers. Every time a new risk category emerges, whether it's cloud, IoT, ransomware, it's the same cycle, right? Initial broad coverage, claims start coming in, carriers add sub-limits, the market adjusts. And there's nothing here that's AI-specific; it's all in draft form. If your threat model assumes your cyber policy is paying out in full, you've got bigger problems than that.
A
Mike, what about you? Is this something you want to know more about, or are you on the same wavelength as Brett here?
B
I definitely want to know about it, and I want to inform the rest of the leadership as well, because I take a different stance here. It's not just the insurers being insurers; the market is signaling that AI risk is still poorly understood and hard to price. It really is. What we're seeing is just your traditional maturity gap. AI-driven risk, especially things like LLM abuse or LLMjacking, doesn't yet have the decades of actuarial data behind it that car crashes, housing insurance, or natural disasters do. So the insurers are protecting themselves with sub-limits, just like Brett alluded to. But as a CISO, I want to know more, not because of the policy payouts and mechanics and all that stuff, but because it tells me where coverage gaps still exist.
A
Yeah, and that's one of those things. I mean, we've talked about this with ransomware for years now, where it still felt so nascent from an insurance perspective. Mike, completely to your point, without those decades of experience, and with the scale of attacks rapidly changing, what we thought was rapid turns out, now that we talk about rapid with AI, to literally feel like hour to hour sometimes, with the rate of change we're seeing here. So yeah, this is one of those things where it feels more incumbent to have that ongoing discussion of what the expectation is, or what their understanding of this is, for your organization. But, Brett, your point in terms of depending on that, absolutely spot on; that kind of realization is what I would bring to my security team, and I think that's the important one here. All right, next up here, unauthorized Mythos access. Bloomberg reports a small group of unauthorized users claimed in a private Discord they were able to access Anthropic's Mythos model. One member of the group works for a third-party contractor for Anthropic. The group combined intelligence from a previous supplier breach to guess Mythos' online location, and then the contractor was able to use their access credentials to actually get in and test it. Anthropic investigated the report and said there's no evidence that access went beyond a third-party vendor's environment. They were trying to keep it on the down low, not doing anything like, I don't know, scanning Google for vulnerabilities. But Mike, I've got to ask you: if a random Discord group got access, odds are the genie's already out of the bottle somewhere else. Do you want to know more about this, or are you good with this story? What are your thoughts?
B
Definitely want to know more. If a Discord group can even claim access, the real issue isn't the access, it's control over the supply chain, right? I want to know more about how that's happening, with the convergence of identity, vendor risk, and the telemetry gaps that we have. Even if the access didn't extend beyond a vendor environment, that pattern obviously still exists and still matters. So this is identity control, third-party risk, and weak boundaries. That's what we need to know.
A
Yeah, and Kevin Fernler in the chat is asking, is this a permissions misconfiguration? I mean, if it's not, congratulations to whoever is now doing the audit on all contractor access for Anthropic. But Brett, I'm curious, how did the story strike you? Do you want to bring this to your team and dig in more on this?
C
I think I'm bringing it to my team for both reasons, actually. First, I want them to know it's not for the reason the headline suggests. The headline was that there was a hack on Mythos, but it really wasn't a hack, right? This is OSINT, this is vendor sprawl, this is what we talk about all the time. A compromised third party got access, and they guessed, sort of, where the model was hosted from that breach. The other thing I'm telling the team is, okay, they got non-production access, it was from a vendor, we're not part of Project Glasswing, there's no evidence it went beyond the vendor, they're saying it's not being used for attacks, nothing was exfiltrated, nothing was weaponized. I want our focus on what the Mythos environment is going to be bringing down the road, not necessarily the actual story itself, other than the ability to distinguish the true story, which is that this is identity compromise again. And what are we doing about that? And now you're at a fourth party.
A
It's just parties all the way down. The least fun variety of parties, all the way down. Next up here: London hospitals continue to suffer from 2024 ransomware attack. This ransomware attack was carried out in June 2024 by the Qilin ransomware group, and it continues to reverberate through the system. Internal documents show at least one NHS trust is still working without fully restored systems, managing a large backlog of delayed test results, restricted blood supplies, and the theft and publication of sensitive patient data, as well as delayed treatments for highly time-sensitive conditions, including cancer. Critical results are being communicated by phone, with full reports being delivered on paper or as PDFs and manually uploaded into patient records. Brett, a recent study by King's College London described ransomware as the most significant current cyber threat to the NHS. I'm curious, do you want to know more about this particular incident and kind of the organizational flow of it, or is the long tail of ransomware an unfortunate but established fact at this point?
C
I definitely want the team to know about it, and I'd push back on the framing, right? The long tail of ransomware being established is typically why we stop paying attention. But established doesn't necessarily mean we've looked at the whole story. To me, recovered and restored are different words, and most organizations can't tell the difference until they've lived through it. So if you're looking at your risk register or third-party risk register, it's probably going to list some of those areas, and they're all going to have vendors in a different row, versus what happens when one of them is this vendor, and what happens at month 18 when something like that is still going on. So I think it's good for them to have context around it. And again, the recovered versus restored distinction is extremely important for companies to take note of.
A
Mike, what about you? How did this story strike you? Are you bringing it to the team?
B
Yes, absolutely. The breach here is the event, right? The real risk happens with the operational trail, or tale, that follows it. This is absolutely something leaders should be paying attention to. The long tail of ransomware is where the real damage is occurring: delayed care, manual workarounds, degraded trust. In healthcare especially, those cyber incidents become patient safety issues. This reinforces why resilience, not just prevention, has to be a top investment priority. And one thing I'm thinking in this space, Brett, I don't know if you are: do you think we're under-investing in recovery compared to prevention?
C
Absolutely. Resilience and recovery are things we've talked a lot about, but this really is an example where we're seeing that people aren't actually testing them. I think we're seeing organizations treat recovery and resilience as the same thing, and they're not. Resilience, the ability to withstand the attack, come back up in a different area, and get back to running quicker than your recovery plan alone allows, is really where I think we're going to see the focus shift. So again, I think this is a great story to keep in the headlines for your team.
A
And I would just say, a quick plug: this coming Thursday on Defense in Depth, we're actually digging into all of this. What do we actually mean by ransomware recovery? How are you actually testing it, and what does that actually look like for your organization? So be on the lookout for that in your podcatcher of choice. Defense in Depth, shameless plug. And our last story here for Know or No: Apple fixes iOS flaw exploited by the FBI. Apple has released an urgent iOS update to fix a security flaw that was reportedly used by the FBI to recover deleted messages. The issue wasn't in apps like Signal itself; the end-to-end encryption there is rock solid. It was in the iPhone's notification system, which stored message previews even after messages were deleted or the app itself was removed. Investigators were able to access those remnants through the device's internal database. Of course, they had to get access to the phone, but that's a separate issue here. But you know, Mike, a secure protocol let down by problems at an endpoint; that sounds deeply familiar for a lot of cybersecurity scenarios. Do you want to know more about this kind of overall story? Obviously Apple patched this one particular instance, but does that end it for you, or is there something of interest beyond that?
B
It is something to know, something to keep in the forefront. Because until Q-Day happens, encryption isn't going to fail, but the endpoints do. So to me this is a classic example of strong cryptography being undermined by endpoint artifacts. I'd want to know more because it reinforces our core lesson: secure systems can still leak data through unintended persistence layers. From a security leadership perspective, this drives the need for full-stack visibility, not just app security, but also OS-level behavior and the forensic residue it's leaving there. It's the telemetry around what's happening. But like I said, until Q-Day happens, we can trust that the encryption is holding, but we need to keep it in the forefront.
A
It's got to get to the end of a pipe somewhere, and therein lies the rub. Brett, what about you? How did this story strike you?
C
It's completely normal, and that's the problem we're dealing with, right? I think there are two moves I'm making and talking to the team about. We want to make the OAuth sprawl visible, and I don't think a lot of orgs can list every app their employees have granted access to. Google Workspace admins can audit it, but you can't govern what you can't see. And the second thing is, you have to stop with the allow-all as a default, and this goes for security teams, development teams, and vendors. How many times are you bringing in a tool where, if you think about it, that model is broken? In practice, we're not reading the consent screens. The fix is always, give this tool admin-level access so it can do what it needs; it needs an admin account to do all of these things. But if you actually talk to the engineers, talk to the vendor, talk to your developers, you'll find out that's not what they need. So I would require admin approval for any app that's requesting this kind of wide access, right? Drive, Gmail, workspace-wide access. And Google, I think, supports that, yet most orgs, from what I've read up on, haven't even turned it on. So I think we have to get better at how we handle these things. Kill the allow-all and make the OAuth sprawl visible, and if you do that, I think you're going to get ahead of the problem.
A
All right, before we move on to our bigger discussions of the week, we have to spend a few moments today thanking our sponsor who helps make this all happen, and that today is ThreatLocker. ThreatLocker is extending Zero Trust beyond endpoint control with their recent release of Zero Trust Network Access and Zero Trust Cloud Access. Access isn't based on credentials alone; it requires the right user, the right device, and the right conditions. Because, as we've seen in recent large-scale CRM breaches, stolen credentials and misconfigurations can expose massive amounts of data. With ThreatLocker, nothing is exposed, and access is limited to exactly what's needed. Learn more and start your free trial today at threatlocker.com/ciso. All right, let's dig into one of the bigger stories. This was all over LinkedIn, so I figured I should probably see what's going on here. Vercel confirms breach and stolen data is for sale. Web infrastructure provider Vercel, and I'm getting that right, right? This is a how-do-you-pronounce-it situation for me, so I apologize if it's Verkel or something. They've disclosed a breach tracing back to a compromised third-party AI tool called Contacts AI. An employee installed its browser extension and signed in with their enterprise Google account, and attackers who had already infiltrated Contacts AI through credential-stealing malware used the OAuth access to pivot into Vercel's internal systems and access some customer environment variables. Core services and open source projects like Next.js appear unaffected, and Vercel is working with Mandiant on the investigation. A threat actor claiming to be ShinyHunters says they've been selling the stolen data for two months. And then right after the story came out, Vercel also said they had a completely separate data breach that exposed customer data, but didn't say if it was related. That was just the most recent update.
Vercel's breach started with one employee installing a browser extension and clicking allow-all on an OAuth grant. That feels well within the normal milieu of enterprise behavior. I'm curious, Brett, since we were just talking about this allow-all behavior: how can we govern OAuth token sprawl across an org when the pace of AI tool adoption isn't exactly slowing down?
C
Yeah, so I think it comes back to what I said: you have to kill the allow-all, and you have to make the OAuth sprawl visible. What we're seeing is that OAuth tokens are the new lateral movement. This breach didn't have an exploit. There was no zero-day to it, and it wasn't phishing. You're looking at a developer's personal session, and the environment variables are what allowed the breach to occur. So we have to get better at this. I don't have the single answer to it, but I think what I said earlier still stands: make the OAuth sprawl visible and kill that allow-all.
A
Mike, what about you? I guess, are you in agreement? And then how do we, as an organization and in our relationships with vendors, start to move toward removing that allow-all?
B
Well, it starts with visibility. I do agree with what Brett said: OAuth is the new lateral movement layer, and the fastest way to reduce that lateral movement risk is visibility. You can't govern what you don't see and don't have visibility into. That has to be a principle in governing, understanding, and putting process around users, and even admins, granting those sorts of permissions to these AI tools in your browsers and elsewhere. Once these AI tools start exploding, and they are, users by default are just clicking accept or clicking allow-all without understanding what the tool is gaining access to or what they're accepting, and that's a significant risk. It starts with visibility, like I said, but you have to put some process around it and start removing some of that admin access, on top of the endpoint protections that are out there.
A
Yeah, and it feels like the gravity of that is so hard to resist, right? Because all of these tools benefit from the flatter you can make all of your data for them to access. First of all, they're all begging for it, Brett, to your point. They're all asking you to click allow-all; it auto-fills allow-all on Claude, I can tell you that right now. And talking back to iOS security, at least there, when it's something like location sharing, you sometimes get granularity: allow only while using the app, or something like that. So it feels like you're going against so much, and I don't want to call it a dark pattern, but where these services want you to go is right to where they can do the most damage the fastest.
C
Yeah, so think about it, right? Shadow AI is already worse than the shadow SaaS that we saw, and the tools are asking for broader scopes. They want to read your whole calendar, they want to read all of your email, they want to look in all of your Drive. I feel like we're at consent fatigue, and the problem got ten times worse over the past 18 months. Governance can't be block everything, so now we've got to go to inventory weekly, approve explicitly, and revoke aggressively. Security is playing catch-up, and I think the business is playing catch-up. It's something we're going to have to get ahead of.
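[Producer's note] Brett's "inventory weekly, approve explicitly, revoke aggressively" loop has to start with a way to separate broad grants from narrow ones. Here's a minimal sketch of that triage step, assuming you've already pulled (app, scopes) pairs from something like the Google Workspace admin token audit. The function name, report shape, and the broad/narrow scope lists are illustrative choices, not an official taxonomy:

```python
# Known-broad Google OAuth scopes (illustrative subset, not exhaustive).
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",     # full Drive access
    "https://mail.google.com/",                  # full Gmail access
    "https://www.googleapis.com/auth/calendar",  # full calendar access
}

# Narrower alternatives that often satisfy the same tools.
NARROW_EQUIVALENTS = {
    "https://www.googleapis.com/auth/drive": "https://www.googleapis.com/auth/drive.file",
    "https://mail.google.com/": "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar": "https://www.googleapis.com/auth/calendar.readonly",
}

def triage_grants(grants):
    """grants: list of (app_name, [scopes]) pairs from your OAuth inventory.

    Returns a list of findings for apps holding broad scopes, each with a
    suggested narrower scope where one is known — the 'approve explicitly'
    queue, with everything else left alone.
    """
    findings = []
    for app, scopes in grants:
        for scope in scopes:
            if scope in BROAD_SCOPES:
                findings.append({
                    "app": app,
                    "scope": scope,
                    "suggest": NARROW_EQUIVALENTS.get(scope, "revoke or justify"),
                })
    return findings
```

Run weekly against your exported grant inventory; anything in the findings list either gets re-consented at the narrower scope or revoked.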
A
All right, next up here: contagious interview scams self-propagate. According to research from Trend Micro, North Korean threat actors are evolving the contagious interview scams into a self-propagating supply chain attack, using fake job offers to trick developers into running compromised code that spreads malware through repositories. The campaign is attributed to the group Void Dokkaebi and uses malicious VS Code tasks and hidden repository files to deploy RATs, steal credentials, and infect downstream projects when code is shared. This can rapidly cascade across open source and enterprise environments, and more than 750 infected repositories have now been identified. Mike, self-propagating supply chain attack is new to the palate for me, and it doesn't taste great, I'm not going to lie. The initial approach here seems tried and true; we've seen these job scams, they're nothing new. But there's a lot more sophistication now on the back end. Previously, some of these supply chain attacks have seemed pretty targeted: threat actors want access to one maintainer, one project, one place where they can do a lot of damage. But are we ready for something that has more of this wormable potential, that's going for as much breadth as depth at this point? Does this change the game at all for you for these kinds of attacks?
B
Well, I think yes, it does. What we're watching here is supply chain attacks evolving from a surgical strike on a particular target or company into scalable weapons, and that's a shift from precision to propagation. Historically, these supply chain attacks have been highly targeted and very sophisticated. Now that they're able to scale out, you're seeing worm-like characteristics, where compromise spreads through developer workflows, repositories, and vendors you don't even think are going to be impacted. You're now having to look at not only your controls but all of the vendors, all of the developers, the whole supply chain, ensuring those controls are in place: the scanning, the environmental controls, the behavioral monitoring of build pipelines. All of that has to be taken into account all the way through to rollout. Can you say your developers are the weakest link, or is it your tooling? I think you need to look at a combination of both.
A
You know, I mean, Brett, from your perspective, are we ready for that kind of build-out, to make those kinds of assumptions, to be ready for this kind of stuff?
C
Absolutely not. Now, thankfully, right, the proof of concept just shipped, so we don't really get a choice, and we're going to have to go get ready for this. But if you look at it, a traditional supply chain attack is one-to-many: you compromise one upstream, you affect everyone downstream. Now we're talking about many-to-many. The infected developer infects their own repos, which infects the contributors, which infects their repos. The curve is exponential, go figure, gotta love that; it's not linear. And it's going to ride on trusted workflows: git clones opened in VS Code. The security tools we have today are not going to flag that as suspicious. So if I'm looking at this now, teams are going to have to start looking at what they have to add to every repo. They're going to have to stop shipping workspace configs the way they exist today, they're going to have to block workspace-trust auto-execution, and they're going to have to enforce signed commits. That's off the top of my head; there's probably more you're going to have to do. But this absolutely changes the game on what we're doing, and it just seems to be the theme of 2026: we are no longer looking at linear attacks, everything is growing exponentially.
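[Producer's note] Brett's "block workspace-trust auto-execution" point is concrete enough to sketch. VS Code tasks declared with "runOn": "folderOpen" under runOptions in .vscode/tasks.json run as soon as a trusted folder is opened, which is the kind of hook these campaigns reportedly abuse. A minimal audit sketch for flagging those files before anyone opens a fresh clone; the function name and report shape are my own:

```python
import json
from pathlib import Path

def find_autorun_tasks(repo_root):
    """Flag VS Code tasks configured to run automatically on folder open.

    Returns a list of (tasks.json path, task label) pairs. Any hit means
    opening the repo in a trusted VS Code window could execute code
    without a single click.
    """
    findings = []
    for tasks_file in Path(repo_root).rglob(".vscode/tasks.json"):
        try:
            config = json.loads(tasks_file.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            # Real-world tasks.json often contains JSONC comments; an
            # unparseable file deserves a manual look, but we skip it here.
            continue
        for task in config.get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                findings.append((str(tasks_file), task.get("label", "<unlabeled>")))
    return findings
```

Wired into a pre-open hook or a CI check on inbound repos, a non-empty result is a reason to quarantine the clone rather than open it.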
A
Yeah. And kind of the sub-theme, I guess, because every story is touching on this, is that the limiting factors, the fact that at least they couldn't do it to everyone all at once, are no longer a thing. The thing I am watching most this entire year is how open source, as a tool that businesses rely on and as a community effort, handles being stressed on seemingly every assumption it makes, from maintainers to the ability of the crowd to look for vulnerabilities. Every assumption we make about open source is being challenged all at once, as part of what you were just talking about, Brett, with everything suddenly turning into many-to-many situations with attacks like this. And that is something I would definitely want to keep on the Department of Know's radar, keep our fingers on the pulse of, because, and we've talked about this on other shows, open source is open source; it's not going away, but it feels like it will also never be the same after what it's transforming into. It has to transform to continue to exist, I think. So, definitely something to keep an eye on. The other thing to keep an eye on is our last discussion story of the day, and it's a new one to me. Again, new things to the palate on a cybersecurity show, not always the best thing: AI-generated ghost breaches. Now, I say AI-generated ghost breaches, and you may say, what the heck is an AI-generated ghost breach, other than a pathetic attempt at SEO? Well, it's coming after my cybersecurity headlines hard here, because these are false but convincing breach stories that trigger real-world crisis responses.
A CyberScoop article highlights cases where entirely fictional events were reported as real, old resolved breaches resurfaced as new, and AI-generated quotes were falsely attributed to experts. These could potentially waste cybersecurity resources, damage reputations, influence regulators and investors, and even help attackers make phishing or impersonation campaigns more believable. This is just something that, as a producer of a cybersecurity news show, I need to be conscious of and on the lookout for, more so than ever. I'm curious, though, Brett, for you as an organization: I can see how this could be used to waste resources, and that's damaging enough, since we only have so much attention. But have you seen anything like this in the wild? And is there a way to perhaps more directly weaponize this, other than making a company spin their wheels on nothing?
C
When I read ghost breaches, I looked at that and said, okay, does something exist if Gartner hasn't created the Magic Quadrant for it yet? This is already happening; it just wasn't being counted as ghost breaches. If you remember, back in '23 or '24, a few years ago, there was the fake breach notification that was sent to the state of Maine, but there was actually no breach, right? So yes, we've seen things like this; I just don't think they were counted as ghost breaches, and now we've given it a name, so here comes the Magic Quadrant for it. But you know what? "Following yesterday's incident at this company, please reset your credentials at this link." The whole thing is fabricated. The recipient is going to sit there and go, yeah, yeah, I think I heard about a breach, and now they're going to go ahead and put their information in. Attackers are going to generate plausible-looking sample data, claim a breach has happened, and demand payment. You're going to have people and companies who are maybe not prepared, who don't have the right context, and they're going to panic and pay. So I think you're going to see these things more and more, but we've already seen good examples of them; we just didn't have a category for it. So thank you to whoever created ghost breaches. Now we have a way to quantify it.
A
Mike, are you getting spooked out by ghost breaches, or is this established practice by another name, and perhaps at more scale?
B
Well, back to Brett's example of what happened in Maine: how do you defend against something that's not real? I think you have to focus on validation, the speed of validation, and your trusted sources of truth, and not just detection. It's a test. I think threat actors are going to be looking at this to say, okay, let's see what they do when we pull this lever or push that button. The next breach might not be real, but the impact will be. To me it's fascinating and dangerous, because even a fake breach, as Brett said, is going to trigger a response, regulatory scrutiny, reputational damage. I haven't seen it widely operationalized yet, but tomorrow's another day, right? It's absolutely coming. Threat actors could combine this with phishing or market manipulation.
A
Yeah, I mean, I think about prediction markets, right? All of a sudden you have a Kalshi market, "Company X will have a reported security breach," and somebody gets a payout. There's one more mechanism to very quickly put out plausible media. This is essentially a multimedia deepfake, potentially, and there are now even different ways to monetize it beyond "we're jerks and we want to cause someone to get audited," or at least give their comms team the worst night ever. I even think about a story in the headlines this week about Lovable, where they were denying they had a data breach, some API chicanery based on unclear terms of service. Basically, there was no leak; things were functioning as designed, it was just that the design was bad. But I could totally see that taking on a life of its own if you just wanted to cause pain for them. And it feels like for every company now, particularly on the comms side, this is almost as much a question of how linked your cybersecurity structure and your IT structure are throughout the organization, to make sure you're getting communication up to your customers, who may see that in the news, who may see something get picked up by an outlet that doesn't have a lot of great scrutiny but gets a ton of run on social media, and all of a sudden you have people worried about it. I feel like this is potentially a reason to make sure that relationship is strong enough to quickly respond to that kind of stuff, because that's where I could see it doing the most damage: oh, security said there was nothing, but we didn't tell our customers that, and now all of a sudden everyone's mad at us, and that could cause some very serious harm.
C
So you hit on a couple of great things there. What's the comms department going to do? You're going to have to add a fabricated incident to your IR playbook now: who's going to authorize a public denial, and what the holding statement is actually going to say. And just as we would say in a real crisis, deciding that in the middle of it is not going to work. In your old playbook, something real happens, then you respond. In the new playbook, something might be real, might not be, and your 30-minute response window is now 30 seconds. What are you going to do about it? Now, if we want to combine our different shows here and go back to the best worst idea: we just had in the news that soldier who bet on Maduro. So what if people start betting on which companies are going to get hacked, and then start publishing fake hacks to get paid out?
A
One, I guarantee that's already happening, right? And two, all of a sudden there's this massive incentive structure for something that doesn't actually require any technical acumen beyond writing a convincing story and getting it some run somewhere.
C
Exactly.
A
And there's enough ambiguity out there. My favorite non-headline is whenever they publish, "oh, there's a leak of 100 billion credentials," and it's just an amalgamation of old data leaks. With very little effort you could say, "we have this data, and it came from this organization," and turn that into an incident that just happened. It gets some run. It's a nothing burger, to the CyberScoop article's point, but all of a sudden I just made a couple hundred grand on Kalshi or something like that.
C
That's got to win me the best worst idea, right? I want this counted as the best worst idea: you find out there's a breach, or a potential breach, you put some money down on Polymarket so you can make millions, and then, regardless of the outcome, you're a winner.
A
So you're not a white hat, you're not a black hat, you're like a green hat, right? You're just trying to make money.
C
You're just trying to protect your wealth.
B
And that's your spokesperson, that's your comms. I want to go back to defending against something that isn't real: why not put the spin on it now? If I were putting the 10-Q together for a publicly traded company, for example, I'd say, hey, we're noticing an uptick in ghost breaches, so we're defending against that by running tabletop exercises. The story is already curbed before it even happens. It's all about spin, and you might as well put the spin on it in advance, so you can say, hey, we predicted this was going to happen. So if I were looking at a competitor that was about to go public, or something like that.
A
Yeah.
B
Oh, that would be a target, right? Because they're going public, their stock is going to go up; curb it by pushing an AI-generated ghost breach on them.
A
And there's also a bunch of AI companies about to go public. Oh my, the mind reels. Guys, why are we giving people bad ideas? Shout out to Jay Schmooze in the chat, who gave you the best worst idea, Brett.
C
I waited for David to second that, but I'm sure that's coming.
A
David is crying; the stress this will put on the integrity of cybersecurity headlines knows no bounds. Unfortunately, we're just about out of time here on the show. But before we get out of here, Brett, I want one piece of advice from you, maybe pulling out some of that power of positivity, that we can share with our audience. What advice have you got?
C
Yeah, I think that despite everything you're hearing in the news and the speed at which everything is moving, it always comes down to focus: align the team, and keep reducing that risk surface. We can't fix everything all at once, but we can look at what's going to make the biggest impact. And the positive side is that it's still mostly focused around identities, so that's where I'd like to see teams and companies focus right now.
A
Mike, what about for you? What advice would you have for our audience before we head out?
B
It comes off the tail end of the stories we talked about today: tabletop exercises. At least talk through the scenarios, the what-ifs, and whether you're prepared with the right actions, the speed of those actions, and what's needed to make sure your response is true to the threat. You can't overemphasize tabletop exercises. There are enough meetings today that could have been emails, so why not have a well-constructed, well-thought-out tabletop exercise on some of these things instead? Think it through: what's your response going to be, what's your next action, who are you escalating to? That's the thought I'd leave for the audience.
A
Words of sage wisdom from both of you, thank you so much. And thank you also to our audience for having some fun in the chat today. Shout out to Bone Circuit in our chat, sharing some interesting AI agent setups; a new face, haven't seen you there before, so I hope you can make it to another Department of Know next week. Some of our other favorites: Jay Schmooze, as we already mentioned, Kevin Farrell, and the big boss man, David Spark, all having some fun and making it a really fun place to hang out. So make sure you join us each and every Friday, 4 p.m. Eastern, get involved in the chat, and have some fun too. Thank you, Michael Bickford, former CISO at the New York State Gaming Commission, and Brett Conlon, CISO at American Century Investments. Truly appreciate having you on the show, two of my favorite people to have on, so we'll have to have you back before too long. If you want to follow them on LinkedIn, the links to their profiles are in our show notes; said the word "link" way too many times, but that's okay. Check them out there and give them a follow, they are good folks. Thanks also to our sponsor today, ThreatLocker. Remember, you can send us feedback anytime at feedback at cisoseries.com, and join us next Friday, 4 p.m. Eastern, for another edition of the Department of Know. Also be sure to register for our next Super Cyber Friday event coming up on May 1st, "Hacking the Death of Entry-Level Jobs." Are they dying? Open question; we'll get some resolution, or at least some opinions, on that. That's at 1 p.m. Eastern, so go to cisoseries.com/events to register. We have a vibrant chat room there as well, we play some games, and you can win some swag; it's a good time. Thank you so much for joining our Friday stand-up. Have a great weekend and stay secure out there until the next time we meet.
For myself, for our wonderful producer Josh, and for all of us here at the CISO series, including the big boss man, David Spark, here's wishing you and yours to have a super Sparkly day.
B
Cybersecurity headlines are available every weekday.
A
Head to cisoseries.com for the full stories.
B
Behind the headlines.
Date: April 24, 2026
Host: Rich Stroffolino
Guests: Mike Bickford, former CISO, New York State Gaming Commission; Brett Conlon, CISO, American Century Investments
This episode of "The Department of Know" dives into a packed week of cybersecurity news, centering on high-profile topics such as the Vercel data breach (via OAuth sprawl and third-party AI tools), the evolution and consequences of supply chain and social engineering attacks, and the rise of AI-generated “ghost breaches”—fictitious data breach stories that elicit real-world crisis responses. Throughout, the panel also emphasizes the critical need for visibility, resilience, and communication strategies in security programs.
Breach traced to employee browser extension (Contacts AI) that attackers used to pivot into Vercel’s internal systems via OAuth.
"Ghost breaches": entirely fictional but convincingly presented breach stories, sometimes AI-generated, cause organizations to launch crisis responses or even pay extortion demands.
This episode is pragmatic, fast-paced, and marked by candid exchanges between seasoned CISOs who challenge each other’s assumptions and go beyond surface headlines. There’s a wry sense of humor (“parties all the way down—the least fun variety of parties” [08:26]; “now here comes the magic quadrant for it” [27:49]) and a clear intent to provide actionable insights for teams grappling with the accelerating complexity of cybersecurity in 2026.
For more stories and analysis: cisoseries.com