
A
This is Sarah Lane with the Department of No. Dmitry Sokolovsky, Senior Vice President, Information Security at Semrush. What is your priority this week?
B
Get going for the year. Holidays are over. Let's get going with those projects.
A
And Nick Espinosa, host of the Deep Dive radio show. What is your priority this week?
C
This week is supply chain due diligence week. We're going to go through all of them. It's like religion here. So I'm very much looking forward to this, to see if we're running the best, or still running the best.
A
All right, we have a lot of good stuff to talk about. Producer Steve, run that opening from the CISO Series. It's the Department of No. Welcome to the Department of No, your virtual Monday strategy meeting. Our sponsor today is Dropzone AI: agentic AI for your SOC and autonomous alert investigation. Remember to get involved in our YouTube chat live; we broadcast every Monday at 4pm Eastern Time. Or email us at feedback@cisoseries.com. As always, a disclaimer: opinions expressed are those of our guests. We've got about 30 minutes, so let's dive right in. Let's move into our know or no segment. I'm going to run through some stories from the past week, and Dmitry and Nick, I want your quick take: is this something security professionals need to know about, or is it more noise than signal? All right, here's story number one. JP Morgan is notifying investors about a data breach tied to an incident at law firm Fried Frank. Just like the Goldman Sachs disclosures in late 2025, an unauthorized party copied PII files from a shared drive. This is an issue regarding security, or lack thereof, within shared files at professional firms. Is this something your team needs to know more about, or no? Nick, let's start with you.
C
Yeah, I do breaches of the week every Sunday. This is par for the course. Everybody knows these, right? So I don't think this requires special attention unless you're working with or are a client of JP Morgan, in which case you're probably having a pretty bad week. But no, I think we could skip this one.
A
What about you, Dmitry?
B
Just a quick touch: hey, let's make sure, maybe the legal team, maybe the finance team, that we don't have any connections to this one that we need to follow up on. Other than that, I don't think so.
A
All right, we got story number two. This is kind of a doozy. A UK police department drew up a risk report for a possible international soccer game between a British team and one from Israel, with the goal of deciding whether to ban certain fans from attending due to a risk of violence. The problem is the game never actually existed. It was a Copilot hallucination. Microsoft said, quote, you should have double-checked the sources; not our problem. So it seems all the time saved using LLMs will now be taken up doing due diligence on the output. Dmitry, know or no?
B
That's definitely a no with a K. We definitely want to be checking whenever we use AI, and we do use AI systems internally in our processes, including important company processes. And we need to double-check the important things, ones where someone might be banned or something legally bad could happen. I'd say someone should at least glance at it. So for sure, a reminder: this stuff happens, so let's check our work.
A
Nick, do you agree?
C
Yeah, yeah. Mine's a no as well, I guess with a K. I think this is a clear governance failure, right? It's a governance failure masquerading as an AI mistake. The hallucination itself? LLMs hallucinate all the time; we know that to be a fact. But what is completely unacceptable is the uncontrolled insertion of generative output into an intelligence product that has real-world consequences. They literally would have banned basically a nation from showing up to support their team. I think it's absolutely ridiculous. And then Microsoft trying to put this on the cops, saying they should have reviewed it? I think that's insane. That's a shared responsibility, really.
A
I mean, it does seem like this is the question lots of people have, whether you're Microsoft, or the government, or you're running a soccer team. People all the time are kind of wondering, hold on a second, what is real and what is not?
C
Right.
B
I think this is, to Nick's point, shared responsibility. But I think it needs to go further, and I think both sides botched the communication piece here. This was a perfect opportunity for Microsoft to come out and say, we will review the way important information is pulled in with our customers, to make sure we're both taking care of this responsibility in front of the public. They didn't take that opportunity.
C
Right, right. And I would also say that if generative AI is going to be used like this, there have to be notices for it, right? It has to be disclosed in an intelligence report. All factual claims have to be verified by a human; a human reviewer has to validate all of this. That's so important. And honestly, I think what an LLM generates shouldn't be in an intelligence report to begin with, even if it is disclosed.
B
Yeah, definitely. It's like a bunch of minions doing your work: you should check it before it goes out on stage. That's how I'm thinking about it.
C
Yeah. You know, it's like the stoner kid in the back that's now doing dissertations on 7th century poetry. Dude, using ChatGPT? Come on. Let's use some common sense here. No doubt.
A
All right, let's talk Deadlock. This is a ransomware gang that is using Polygon smart contracts to hide its command and control infrastructure. It encrypts victim systems and sells stolen data on underground markets. With smart contracts already very much relied upon and more convenient as a way of doing business, is this a know or a no? And if we need to know more, what is your advice? Nick?
C
Yeah, you know, I think this is a no with a K, by far. This is a clear evolution. Blockchain as C2, it's genius, think about it. Smart contracts are globally accessible, they're super resilient to takedowns because of what they actually are, and they're really hard to disrupt. Not to mention selling stolen data instead of running leak sites. This looks like evolution and maturity in ransomware gangs, and I hate to say it, but I think we are looking at the next wave of what we're going to see. And security programs that still prioritize things like IOC blocking and post-compromise response are going to be misaligned here. So, I hate this, for the record. Oh, I hate it. But it's brilliant. And so I do think we need to talk about this. This is absolutely something I tell my team.
A
Dmitry, what are your thoughts?
B
I totally agree. This is a next step, if not several steps ahead. This is the level of strategic thinking that we would want to expect out of ourselves, or out of our government-level protection mechanisms in this space, and what we're seeing is that the bad actors are starting to use it. It's definitely a wake-up call for us. To Nick's point, the regular protection factors will start dropping in efficiency. What do we have to counteract them? This requires in-depth thinking, because this kind of evolution is here to stay. And I tie this to attribution; I can see North Korea here. This is going to stay, and not just at the making-some-money mafia level. This is going to be APTs and command and control, not just regular mafia attacks.
C
Absolutely, totally agree. It's crazy.
A
I mean, Dmitry, what would be your first response method to something like this?
B
We'd have to learn more about what that looks like at a network level, what that looks like at a traffic level. Is that even possible to check? Is there maybe some newer way of thinking, maybe new ways to aggregate the incoming data, maybe newer ways to scan for it? Again, we're looking at a next level of encryption, and this level of coordination of encryption is going to be incredibly difficult to take down. Are we now at a point where maybe, just maybe, we need to be talking about white networks, meaning isolated and trusted networks where traffic comes in and things like this would have no reason to be present at all? We're now getting to the point where it's no longer enough to bring up a localized network and just generally protect it. We're talking about complete control of all traffic. So maybe a blockchain network implementation is the answer on the other end of it as well. There's definitely a lot of thinking here. I couldn't tell you an answer off the top of my head, but for sure we need to spend a good amount of time rethinking this.
C
Right, right. Well, if I'm thinking it through, we lose a lot of the traditional response tactics that we have, right? Coordinating with ISPs, hosting-provider abuse reports. On top of that, IP blocking also fails, because endpoints are rotating automatically. There's going to be threat-intel lag, and on and on. The indicators age out quickly in a situation like this. So it is one where I think we have to just go back to the drawing board. Like I said, it's brilliant and it sucks.
A
Yeah.
C
You know, this is what we're dealing with now. It's smart.
A
And I hate it.
C
Yep, pretty much. Pretty much.
A
Let's move to the US now. The US administration wants to let private companies play a more direct role in offensive cyber operations, according to former senior officials speaking with the New York Times. The move would expand the current model, where firms can build tools but not conduct attacks, and would require changes to federal law plus congressional approval. Politics aside, this opens up a range of possibilities, and maybe some problems, for private companies, not only as suppliers but also as possible targets. Nick, know or no?
C
Oh, definitely no. Take your pick on that one. That's absolutely a no with a K. And the reason being, I think we need to talk about what cyber warfare means in the modern world. And I have a preamble, for the record, I'm just going to throw it out there: cybersecurity is agnostic to politics, but we're not immune from it. So I don't care what your political beliefs are; this is something that needs to be addressed. Offensive cyber operations are not just technical activities. We are talking about intertwining, essentially, state power with the private sector. The current model, where the private sector builds the tools but the government executes, exists for good reason, I think, because cyber operations are inherently political, they're escalatory, all of that. So personally, I think they should remain under direct government control. Think about the legal and ethical risks of outsourcing cyber warfare. As a private company, you're paid to perform, so where are the guardrails? If we're going to do it, it would have to be under an insanely tight legal framework, with a huge amount of oversight, all these things. This is a tough one. This is a tough one to really get behind.
A
Dmitry, do you agree with Nick here?
B
More than he can say. Our cyber attack surface as a country is hundreds of times bigger than our geographic one, and that's already massive. And we have significant, massive amounts of money dedicated to protection on the ground, missile systems, et cetera, all already in place. We have nothing comparable on the cyber front, and we are a lot more exposed. And I think the government realizes they can't catch up. They don't have enough talent, they don't have enough resources. They cannot catch up. So I think they are willing to take a risk, a minor risk of this going slightly awry. Yes, some companies might get breached, some of the tooling may get stolen, et cetera.
A
But.
B
But if it pays off at the economic level of this country, they're putting all that power, all that entrepreneurship, behind it. I think they're counting on this becoming a massive booster for the US on the cyber warfare front. And I think today we are already in a cyber war; it's already happening, and we're on the front lines of that ongoing war. I think this is the government finally realizing that maybe it's time to let all that brain power loose a little bit and actually score some points. So I love this news. Controls and everything else, for sure, but we also know they're not going to work the way they're designed; all of that is going to be broken. I'm just excited that this is finally becoming important enough to come up to this level.
C
Yeah. My fear is it's going to go off the rails. And I get what you're saying, I really do. It is possibly an evolutionary step. And in kinetic warfare we use outside mercenaries and contractors all the time, and there are cutouts to the US government, let's not lie here. But this one I just think needs a whole bunch of oversight. Letting actual operators off their leash, especially if you're in an active cyber war, probably isn't a bad thing. I'm just really afraid the creep that we're going to see is going to be rather profound.
B
It is a large part of Israeli GDP. Why can't it be ours?
C
No, sure, sure. Absolutely. But we've also seen some spectacular issues where Israel's gotten into trouble. I'm looking at you, Pegasus. So, I mean, it's one of a hundred.
B
Come on. It's just a little bit.
C
Come on now. Good with the bad, good with the bad.
A
Well, I mean, it does depend on the private company, right? There are certain private companies that could be really good for any government to collaborate with, and others where that creep you're talking about, Nick, could be substantial.
C
Yeah.
B
I also think if it's engaged at this level, meaning if it is engaged at any level, there will be some guardrails, there will be some oversight, but most importantly there will be a lot of conversations, and I think we'll catch the bad eggs fairly quickly, especially considering the actual industry. Most of the people in this industry are ethical; yes, some are not, but most are. And the more people involved, the higher the chance of catching someone doing something untoward and bringing it up to the surface. I know Nick is maybe a little bit more skeptical than I am.
C
I hope you're right, man. Look, why can't we all just get along?
B
You know what I mean?
C
You know, like, I would love it if cybersecurity could just go away. I could go run a puppy daycare.
B
Totally.
C
And lower my blood pressure. But until then, yeah. Right, right. So.
A
All right, well, we have more stories to take a little bit of a deep dive into. But first we'd like to thank our sponsor, Dropzone AI. How many alerts did your SOC investigate last week? Do you even know how many sat in the queue untouched? If you don't know those numbers, or you don't like them, Dropzone AI wants to help. They've helped enterprises like UiPath and Zapier handle 10 times more alerts without adding headcount. Their AI SOC agents work around the clock, investigating every alert autonomously. Book a demo and they'll show you exactly how many hours you could recover. Everybody wants those hours back, right? Head on over to dropzone.ai and request your demo today. All right, let's get into some stories that deserve a little more of our attention this week. Let's start with Jen Easterly; we're all familiar with her. The RSA Conference announced last week that Easterly has been appointed as its Chief Executive Officer. She probably doesn't need an introduction to anybody watching or listening to the show. Her departure from CISA, though, came at a challenging time for the organization. So my question to both of you, Dmitry and Nick: what are your thoughts about Easterly taking this role at RSAC?
B
This ties directly to the previous news. I think RSAC is aimed at private industry. It's sort of the point where most, if not all, of the private representation of cybersecurity and cyber resilience in this country, and not just this country, the world, happens, right? So it's the perfect place to come out and say, hey, we need some help. And who is better to do that than someone who worked for Tailored Access Operations in her past, who ran attacks? Just based on her LinkedIn profile, I feel like she's had some experience running attacks on the adversaries of the United States. So she would be perfect to know what we are facing in the next 5, 10, 15, 20 years of this cyber war that we're talking about. So if I wanted someone to run this communication hub for all my private enterprise input aimed at cyber resilience, I'd try and find a colonel that ran this on my side in the army. I could not find a better person to put there. Bravo. This is amazing.
C
Yeah. So I'm taking a little different approach on this one, in the sense that I'm thinking about it from the optics perspective. I think this is a clear play by RSAC for governance and credibility. She's not a logistics hire; nobody thinks she's going to put her head down. She's a very visible player in our field. But think about what RSAC is. It's not just an event organizer. It's essentially a platform that shapes our industry's narrative. It's the biggest cyber defense conference in the world, as far as I know, so our priorities, our norms, a lot of these things are set there. I attended last year, actually saw her on stage, and aside from getting the worst food poisoning of my entire life, not at the event, I learned a ton of information. We always learn every time we go to RSA. So by appointing Easterly, RSAC is basically signaling that they intend to be a lot more policy-aware. But I also think they're trying to be more geopolitically informed. We can't ignore the 800-pound gorilla in the room right now: she was with a previous administration and retired before the current one. And again, cybersecurity is agnostic to politics, but we're not immune from it. So I think this is also a very visible look into that world. I think RSAC is viewing cybersecurity not just as a national but as a global risk-management vertical now. It's not a vendor marketplace. We can go walk vendor row, we always do, but I think thought leadership is their focus now. Think about the other conferences we attend; they're more trade shows and marketplaces than anything else. So with RSAC positioning itself closer to that thought-leadership standard, they're going to influence our entire ecosystem, and they have for years, I think. To be fair, it is a politically charged environment, but I think it's a shot across the bow as well. She was running CISA, and she was focusing, to Dmitry's point, on a lot of things that we need, like the public and private partnerships that essentially make something like CISA essential. She was also really good on Secure by Design; she was very vocal on that. She picked up where Chris Krebs left off, and he was great at his job too. So I think it's a good pick, I really do. Optics aside, politics aside, she's deeply respected in our industry and I think she's going to do a great job.
A
We had a user in our chat mention that US federal organizations have been banned from participating in RSAC.
C
That's what I'm talking about. That's exactly what I'm talking about here: our field is not immune from politics, even if we're agnostic to it. And we probably have all had clients, or worked for companies, on one side of the political spectrum or the other. With something like Easterly, you would like to think that industries can rise above politics, but we have a lot of colleagues who work for the government who are not going to be able to attend because of this. And whether you think that's good or bad is not my concern. I feel like this is one of the largest educational moments every year for our industry. So to not have that presence, not to mention that on the governmental side there has traditionally been very deep coordination between RSAC and the US government, because of what it is, I think it's difficult. And I think I understand it from RSAC's position: they're trying to become the forum for all security norms, including AI security, take your pick. So we're going to see the politicization. I don't think we're going to get around it, but we do have to address it.
B
I think she would also be a good working buffer for that, meaning for handling the politics. She still has the connections, and she was always very diplomatic in how she spoke about things. Maybe she's what's going to allow that rift, if there is one, to mend a little bit. But even if not, I think she's one of the people who, despite all that, can still continue in her role as a protector. I think that's how she thinks; based on her career, most of it, if not all of it, was some level of protecting really important things for this country, and she's continuing that. Again, communication is the most important tool, the most powerful tool we have in this industry, especially if she can be that. And I'll give her a little more credit than Nick is giving: I don't think this is a figurehead appointment. She's a driver, and she'll drive as much out of it as she can, whatever that means. We'll see; in a year we'll come back and say either, well, that was really good, now she's driving the conversation and the whole country is better for it, or, no, she got it all wrong. We'll see. But yeah, I'm very optimistic about it.
C
Yeah. No, and I didn't mean to imply that she was going to be a figurehead. We could all get somebody like a COO, right, that'll get operations done. She has that. But she's visible.
B
She was an operations officer.
C
I know that brigade.
B
Totally.
C
She can get this done. But she's also incredibly visible in our field. My concern here, especially with the comment I see on my screen that the Fed has basically banned its departments from attending, is exactly that. She resigned prior to the incoming administration, and from an optics standpoint, that administration is typically not a fan of that, from what I understand. Also, political risk is part of my day job, which dovetails with the cybersecurity side, so I could speak on this for hours. But I think personally she's a great hire. I think RSAC made the absolute right choice.
A
Well, what if I told you that two-thirds of third-party applications access sensitive data without justification? You might say, who said that? A report released this month by researchers at Reflectiz analyzed 4,700 leading websites over a 12-month period ending in November of last year, suggesting that 64% of third-party applications access sensitive data without business justification, up from 51% in 2024, with government-sector and education sites showing the most active compromise. Dmitry, what are your thoughts?
B
I'd say they probably got the number too low. It's probably higher than that, but let's trust the report. Yeah, it makes sense, and I think it's an indication of how we still, as a society, don't see this space for what it really is. We still think of it like a real-life newspaper, a piece of paper in front of us. There's so much information about us online, so much more than we could ever have in our houses, so I think we don't have a perception of exactly how much is out there, and how much of it is used to give us good things in life but is really used against us, right? To make us into the product. But people don't get that, maybe. And that's just the tip of the iceberg. The real question is, what are we going to do with this? Are we all going to come out and say, well, it's terrible, let's start a campaign? Look, how exactly are we going to do that? At what level of government does this even begin to apply? So I think we are, I don't know, five, ten years from even trying to solve this problem. But it is indicative, and it doesn't surprise me at all.
C
Yeah, I mean, come on, dude, this is clearly a governance failure, right? It's not a malware problem. And the most important takeaway, I think, is that it's not mainly about compromises in the traditional sense. It's a fundamental shift in thinking from "are we breached?" to "why does this vendor have all of this data at all?" Look at what happened in Canada with CIRO. They have everything on every financial advisor in Canada, including their actual investments, their stock portfolios, everything, and they got it all breached. This is a huge issue. If we're really talking about this, it's unjustified access, in that way. Think about it: most organizations are pretty mature at this point, especially on the enterprise side. They're good at things like detecting malicious scripts, blocking bad domains, managing vendor contracts. But they're far less mature at things like mapping the specific data elements accessed by the third-party scripts they've got running, or continuously validating business-purpose alignment, all those kinds of things. Least privilege has always been a challenge in client-side web environments. These are the kinds of things we're going to have issues with. Web tracking tools often start with a narrow scope, and then you start adding additional permissions, inheriting access through things like tag managers, all this kind of stuff. So it's not surprising that government and education are the most affected, if I'm going through the report.
A
When we talk about these types of stories on Cyber Security Headlines, which we do quite often, the third-party vendor kind of gets thrown under the bus, so to speak. Dmitry, do you feel like there's anything that companies and third-party vendors can do to make this better?
B
This reminds me of how the food industry went from fat to sugar to fat to sugar, bouncing based on what the consumers are telling them. We don't want any sugar? Okay, we'll give you fat. We don't want any fat? We'll give you sugar. It keeps switching. In this case, it's kind of the same. It is on us, the customers, to also start getting really, really involved in who has access to our data and why. It's a lack of control, a governance gap of sorts. It's not just on the third party; they're getting thrown under the bus a little too much, in my opinion. There should be a lot more participation from us, the people whose data it actually is.
C
Yeah. And to be fair, I think calling out platforms like Google Tag Manager, Shopify, et cetera, can be a bit misleading as well. Those platforms are infrastructure, right? They're not inherently malicious actors here. The vulnerability is not the tool; it's the absence of governance around the tool, I think, that's one of the big things. And that obviously has broader implications. Think about data minimization requirements under GDPR, or PIPEDA in Canada, or the Essential Eight in Australia, et cetera. Not to mention there are shadow supply chain issues here as well: these scripts are running in an employee's browser, so we are literally bypassing procurement, legal reviews, standard supply chain due diligence. I think this one is deeply concerning. And think about it: the very first story we started with, the one we didn't really want to do, was JP Morgan getting hit in their supply chain. I do my breaches-of-the-week podcast every Sunday, and over half of them are "I didn't get hit, but I outsourced something": my billing, my healthcare, my PHI, my PII, whatever it is. It's a perpetual issue. It really is. It's not going away.
A
Well, Nick, I know you've got thoughts on this next story. A group calling themselves Poison Fountain is seeking to get website admins to add Poison Fountain URLs to their websites, with the goal of implanting bad data for AI crawlers to find and then process as part of AI training. You see where they're going with this. It's being attempted in an effort to undermine AI intelligence models. According to one individual on the project, its goal is to make people aware of AI's Achilles heel, the ease with which models can be poisoned, and to encourage people to construct information weapons of their own.
C
Yeah, yeah. And this one is so flipping obvious. You said it right in the preamble: there are clear structural weaknesses in AI training, and Poison Fountain, for the record, is not showing us anything new in terms of vulnerability. It's just weaponizing a well-known weakness in AI training. Think about how modern foundation models are trained: massive volumes of scraped web data, data sets that are often weakly curated, limited verification of facts, all of that. So by virtue of that, we have to validate data sets, and that's a huge task. Attackers only need to inject just enough poison content to screw up outputs, create biased associations, or just degrade reliability in general. And don't forget, this is cheap as heck and super easy to execute on; they're just getting web developers to embed this, so it comes in through scraping. It's super hard to detect if you're running an LLM and ingesting this across the whole World Wide Web for training purposes. It's unique from a prompt injection or a jailbreak as well. So I think this is a huge issue. Poisoning training data upstream, think about the effects: they're delayed, you don't catch them immediately, they're subtle, because you're not going to see them, and then all of a sudden it just gets worse and worse and worse. So I think it's a huge thing, and good or bad, it makes me laugh to think that everybody's going to get output that's incorrect. By the same token, it has real-world ramifications, like basically rejecting every Israeli fan from coming to see a soccer game, like in that other story.
B
See, I just see the AI overlord coming in eventually. And the people that are running this are going to be in trouble.
C
The Terminators. The Terminators are going to...
B
That's what I'm saying.
A
And people are already, I mean, there are so many conversations with people who aren't actually running these models themselves, saying: I don't know, you can't trust it. Well, now you're really not going to be able to. And where does the end user fall into place?
C
Right, right. Well, and I think that's super important too, because if you think about it, retraining an AI is going to be so super expensive. And this isn't just about demonstrating risk. This is actively encouraging adversarial, most likely criminal, probably criminal behavior, although I'm sure we don't have a law for it yet. And it normalizes offensive manipulation.
B
It's already happening, though. Big countries, this is what they do to manage information in their environments and in the world. This is already happening. These guys are just highlighting it: hey, look, it's happening, everyone.
C
Right. But I also don't think this is necessarily the same equation as disinformation warfare. The intent here is to poison the well, not to put out demonstrably false information. It's to basically have AIs output garbage that gets recognized, and the AI essentially gets shut down. I think that's the ultimate goal.
B
That's what they hope it is. Of course it's going to work exactly how they thought it would.
C
These are people that are working for the AI companies. These are employees. They park their car, they go to...
B
The water cooler, and they get revolutionary. Yeah, let's burn the AI while working there.
A
Yeah, we're beyond that.
C
Yeah. I mean, it's just crazy. This is crazy.
A
Over in our chat, Soggy Toaster says: seems to be built in, in some instances. It's part of the strategy to transfer risk, but this is never acknowledged in their risk registers. Good luck trying to stop it, though.
C
Yeah, yeah. Well, when I'm putting together risk registers, AI is now on them, in the same way that all the good DNS filtering now has very specific categories for anything AI, not just a blanket AI situation. But that's exactly what it is: it's a risk, and we need to understand that output. Imagine moving a zero on your taxes, right? You're either going to owe the IRS an absolute ton or they're going to owe you a ton, and then somebody's going to catch it and somebody's going to jail, or whatever. This is, I think, one of those scenarios as well. But yeah, real-world developers, I think, have to focus on actually curated and permission-focused data sets, because if they're just scraping anything, you're going to get these kinds of things.
B
That definitely has to be structured, trusted data, for sure. And if I can just jump in for a second here, it's interesting: Soggy Toaster's comment was actually for the previous story, about the risks involved in third-party vendors, but it fits this one as well, and really well.
C
And supply chain due diligence should be on your...
B
Good luck stopping it, though.
C
Yeah, I agree there. Right on.
A
Well, this has been a lot of fun. As we close out today's standup, what piece of advice can you share with our audience? Dmitry Sokolovsky, we'll start with you.
B
Check your sources. When you get that ChatGPT dirty first draft: read it, really read it, and then use it.
A
Nick Espinosa, your thoughts?
C
Yeah, two quick ones. One: when I'm talking with CISOs, I always tell them, learn nerd-to-English translation. Because if you are trying to get a budget passed, you are trying to speak to the vision of the company, and you've got to speak in plain English. You cannot speak in tech terms and acronyms. In the same way, when I talk to my accountants, my eyes gloss over and I'm like, just tell me what I owe the IRS this year; I don't need to know the specifics. And the other thing: what actually is a good CISO? It's the same as what is a good manager. Now, obviously I'm not talking about the technology or the security side of this. A good manager, a good CISO, is a person who removes obstacles for their people to let them achieve. Don't be the obstacle. That's the goal, and that's my advice.
A
Good advice, good advice. Thanks to everybody who was involved in our chat on our live stream. Come back next Monday, we'll do it again. And thank you so much to both of our guests: Dmitry Sokolovsky, Senior Vice President of Information Security at Semrush, and Nick Espinosa of the Deep Dive Radio Show. We will have links to both of their LinkedIns in the show notes. Become their friends; they're very, very smart people. Thank you also to our sponsor, Dropzone AI: agentic AI for your SOC and autonomous alert investigation. Remember, you can send us feedback anytime at feedback@cisoseries.com. Join us again next Monday at 4pm Eastern for another edition of the Department of No. To register for the live show on YouTube, just go to CISOseries.com and click on Events. See you next time.
C
Cyber Security Headlines is available every weekday.
A
Head to CISOseries.com for the full stories behind the headlines.
Date: January 20, 2026
Host: Sarah Lane (A)
Guests: Dmitry Sokolovsky, SVP Information Security, Semrush (B); Nick Espinosa, Host, Deep Dive Radio Show (C)
Theme: Security leadership roundtable on recent cybersecurity stories: AI’s hallucination risks, the evolution of ransomware methods, offensive cyber operations, Jen Easterly’s move to RSAC, third-party data access, and “poisoning” AI models.
This episode of the Department of No tackles the top cybersecurity news stories that leaders need to know about right now. The panel critically weighs which incidents merit action versus which are merely "noise," and ends with practical advice for security leaders.
On AI Hallucinations:
Nick (05:45): "It's like the stoner kid in the back that's now doing dissertations on 7th century poetry. Dude. Using ChatGPT. Come on..."
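A recurring control in that exchange was a hard human-in-the-loop gate: machine-drafted content never ships unreviewed. A minimal sketch of what such a gate could look like in Python; the class and field names are illustrative assumptions, not from any tool mentioned in the episode:

```python
from dataclasses import dataclass

@dataclass
class ReportSection:
    """One section of a draft intelligence report."""
    text: str
    machine_generated: bool         # True if an LLM drafted this section
    reviewed_by: str | None = None  # ID of the human who validated it

def publishable(sections: list[ReportSection]) -> bool:
    """Ship only if every machine-drafted section names a human reviewer."""
    return all(s.reviewed_by for s in sections if s.machine_generated)

draft = [
    ReportSection("Fixture list pulled from the league's site.", machine_generated=False),
    ReportSection("Risk summary drafted by the LLM.", machine_generated=True),
]
assert not publishable(draft)         # blocked: the AI section is unreviewed
draft[1].reviewed_by = "analyst-042"  # a named human signs off
assert publishable(draft)             # now it can go out
```

The point is structural: publishing is blocked by default, and a named human, not the model, flips the switch.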
On Ransomware Tactics:
Nick (06:24): "Blockchain as C2...it's brilliant and I hate it."
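Because takedowns and IP blocklists lose their bite when C2 lives on a public chain, one cheap tripwire, in the spirit of Dmitry's "what does this look like at a network level" question, is flagging egress to public blockchain RPC gateways from hosts with no business need to reach them. The gateway list and log shape below are assumptions for illustration only:

```python
# Hypothetical egress log: (source_host, destination_domain) pairs.
flows = [
    ("build-server-01", "registry.npmjs.org"),
    ("hr-laptop-17",    "polygon-rpc.com"),   # public Polygon RPC gateway
    ("defi-research",   "polygon-rpc.com"),
]

# Well-known public RPC gateways -- illustrative, nowhere near exhaustive.
RPC_GATEWAYS = {"polygon-rpc.com", "rpc.ankr.com", "cloudflare-eth.com"}

# Hosts with a documented business need to touch blockchain infrastructure.
ALLOWED_HOSTS = {"defi-research"}

for host, dest in flows:
    if dest in RPC_GATEWAYS and host not in ALLOWED_HOSTS:
        print(f"ALERT: {host} -> {dest}: unexpected blockchain RPC egress")
```

It is only a tripwire; a gang can stand up its own RPC endpoint, which is exactly why the panel argues the traditional playbook erodes here.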
On Offensive Cyber:
Nick (10:29): "We are talking about intertwining essentially state power with the private sector."
On Governance Failures:
Nick (25:18): "It's not mainly about compromises...It's a fundamental shift from 'Are we breached?' to 'Why does this vendor have all this data at all?'"
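One concrete lever for the client-side governance gap the panel describes is a Content-Security-Policy that allowlists which third-party scripts may execute and where they may send data, so a tag manager cannot quietly inherit new access. A minimal sketch using Flask, chosen purely for illustration; the vendor domain is a placeholder:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # script-src: only first-party code and one vetted vendor may execute.
    # connect-src: scripts may send data only back to us -- the
    # "why does this vendor have all this data?" question, enforced in-browser.
    response.headers["Content-Security-Policy"] = (
        "script-src 'self' https://analytics.vetted-vendor.example; "
        "connect-src 'self'"
    )
    return response

@app.route("/")
def index():
    return "<html><body>checkout page</body></html>"
```

The connect-src directive is the interesting half: it constrains where third-party scripts can send data, turning "business justification" into something the browser actually enforces.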
On Jen Easterly at RSAC:
Dmitry (16:45): "If I wanted someone to run this communication hub...I'd try and find a colonel that ran this on my side in the army. I could not find a better person to put there. Bravo. This is amazing."
On AI Data Poisoning:
Nick (31:53): "Retraining an AI is going to be so super expensive...This is actively encouraging adversarial, most likely criminal, probably criminal behavior, and it normalizes offensive manipulation."
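Nick's prescription, curated and permissioned data sets rather than indiscriminate scraping, can begin as a provenance gate in the ingestion pipeline. A toy sketch; the allowlist and document shape are invented for the example:

```python
from urllib.parse import urlparse

# Sources we have vetted and have permission to train on -- placeholders.
TRUSTED_DOMAINS = {"docs.example.org", "archive.example.edu"}

scraped = [
    {"url": "https://docs.example.org/handbook", "text": "..."},
    {"url": "https://random-blog.example.net/ai", "text": "..."},  # unknown provenance
]

def trusted(doc: dict) -> bool:
    """Keep a document only if its source domain is on the curated allowlist."""
    return urlparse(doc["url"]).hostname in TRUSTED_DOMAINS

training_set = [d for d in scraped if trusted(d)]
print(f"kept {len(training_set)} of {len(scraped)} scraped documents")
```

Provenance filtering does not help if a trusted site is itself compromised, which is why the panel treats poisoning as a standing risk rather than a solved one.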
Advice for CISOs:
Dmitry (34:45): "Check your sources. When you get that ChatGPT dirty first draft—read it, really read it, and then use it."
Nick (34:57): "Learn nerd-to-English translation...And, don't be the obstacle. A good CISO removes obstacles to let their people achieve."
For more details on the stories, visit CISOseries.com.
Next episode: Every Monday at 4pm Eastern. Join the live chat or catch up via podcast.