
Loading summary
A
From the CISO series, it's cybersecurity headlines. This is Rich Rafalino with the Department of Know. Matthew Bybee, Director of Cybersecurity at Tickstracs. I gotta ask you, what is your priority this week? Where's your mindset at?
B
Yeah, well, thanks. Thanks for having me. My focus this week is just making sure we finish strong in terms of a number of cybersecurity and IT related projects that we had on the docket this year and make sure that we get those closed out for the, for the betterment of the organization and at the same time make sure that our compliance initiatives that have that come up every year, annually finish strong as well this time of year, especially at the holidays and audit seasons that come and go. Those are my two key focuses this week.
A
Yeah, usually end of year for me means I'm telling, I'm replying to all emails that were circling back in January. So I'm glad someone's being a little bit more productive. Productive there. Derek Fisher, Director of Cyber Defense and Information Assurance Program at Temple University. What is your priority this week?
C
Well, thanks Rich, for having me on. And so, you know, this week we're getting ready to wrap up the semester. So we have about a week or two left and so there's a bit of grading going on, but also trying to get a lot of the students prepared for, for the realities and the roles that are available in cyber security. As we know, it's pretty difficult out there right now just in the general job market, but we're seeing a lot of changes and upheaval in cybersecurity. So it's really trying to help get these students prepared and out there in the field and hopefully making a difference.
A
I like this. This is a hopeful, this is a productive way to start a week after, you know, a little holiday weekend. Might be a little sluggish. I like the positivity, the productivity. Welcome to the department of Know, your Monday cybersecurity standup. I hope we can bring that energy throughout this whole show. To do so, we have to thank our sponsor first. And that's Vanta Compliance. That doesn't suck stock too much. Remember, if you're watching on YouTube, drop your thoughts in our chat. We love to hear from you. I see Kevin Farrell is already in there. Oy, bruv, to you as well. And if you listen to this as a podcast, remember to join us live every Monday at 4pm Eastern on the CISO Series YouTube channel. Or send us an email feedbacksoseries.com before we jump into some of the stories for today. Just a reminder that all of our guests opinions expressed on this show are in fact their own, not necessarily those of an employer, friends, family, family or indeed nemeses. We've got about 30 minutes so let's jump in here. First up here is our favorite segment we like to call no or no. This is where we need to know is this news story something security professionals need to know about? Or maybe is this a little bit more noise than signal? First up here, Fluent Bit bugs allowed cloud disruption Researchers from Oligo found five long standing easy to exploit vulnerabilities in Fluent Bit, a widely used open source log collector deployed across every every major cloud platform industry standard. The bugs included authentication, bypass path traversal, remote code execution, denial of service and tag manipulation. Some flaws date back more than eight years and threaten full cluster compromise when chained, which I'm not technical here, but that sounds real bad since we all use the cloud. Is this know a little more or no? Thanks Matthew, for you. Where are you at with this?
B
I would say know a little more. As somebody who's used fluentbit in the past, this is definitely something that's a little close closer to my heart than maybe some other things. But I think it's because the platform is used quite extensively, it's open source, it's used pretty much by most, I think of the cloud providers out there and is a good tool for telemetry and observability for engineering teams, not just security teams. I definitely would want to know a little bit more about it.
A
Derek, what about you know, a little more or good headline but not much underneath there. What do you think?
C
No, I would agree know a little bit more. I mean it's good thing that we haven't had any recent cloud issues that have caused any kind of, you know, problems across the the, you know, digital world here. But I mean seriously, you know, there's a lot of different things to kind of unpack here. There's, you know, supply chain questions, there's patch management, vulnerability management questions, there's exploitability, understanding technical debt. I mean it's you know, a stew of, of you know, what we all deal with on a regular basis in cyber security. So I think definitely need to know a little bit more about this.
A
Yeah, this is one of those ones where because you would think this is an open source component that everyone's using, every big major cloud provider is using, they all have incentives to find this and yet that to me is probably one of the more interesting aspects of this. They have every motivation to fix it and not there. Yes, we will definitely be seeing if there's any more details to come out about that, so thank you for that. Next up here, Hash jackattack fools AI browsers. CATO Network says a new indirect prompt injection method called Hash Jack Quality name researchers hides malicious instructions after a hash pound sound, whatever you want to call it, depending how old you are in legitimate URLs, AI browser assistance like Copilot in Edge, Gemini in Chrome and Perplexity's Comet read these hidden fragments even though they never leave the client, letting attackers turn trusted sites into vectors for data exfiltration, phishing, misinformation or just good old harmful guidance. This is quite a creative technique, but also I guess bound to happen here. Just another way the bad guys see opportunity in everything. Derek, for you is this know a little more or no thanks on this one?
C
No, I would agree. No, no, a little bit more on this. I think we're, you know, with the, in the age of AI and, and how we're utilizing a lot of this technology rapidly and we're integrating it everywhere and, and really we're, you know, back into the same situation where we're trying to keep up with how it's being misused, not necessarily used. I mean we barely understand how these things are being used, let alone how, you know, attackers are, are going to be able advantage. So I think this is just the tip of the iceberg. We're gonna see a lot more of this.
A
Matthew, what about for, you know, a little more or. No thanks.
B
Yeah, I, I would say no a little bit more for a lot of the same reasons that Derek mentioned. I think, you know, ultimately it kind of points out something that's old right, is sanitization of inputs and outputs, which is kind of something that is, you know, from a practice perspective has been out there for a while. But nonetheless, as Derek mentioned, because of the proliferation of and use of AI, this is definitely something that, that would be on my radar.
A
When you have these types of stories that, you know, we see AI is now this new again. It's this, it's this wonderful fresh landscape for threat actors to exploit their creativity. We've barely been able to handle them with the traditional attack surfaces that we know. I guess when you see stories like this, how do you present this to your team in terms of like assume nothing's for granted or is this, you know, Matthew, to your point with input sanitization is this, this is a known problem just with a new technological face. I'm Curious from your perspective, Matthew?
B
Yeah, meaning, I mean, I think to your last point there, it's not an old, old problem, but it's just an old problem in a new way. And so, yeah, meaning I think it's something to be cognizant and aware of as we move forward and as organizations implement new tools, new AI based tools and integrate AI into their business processes and things like that. It's just something to be aware of from a security context and making sure that where possible we do our diligence in terms of reviewing the tool stacks that we're using as an organization. Not that that necessarily would catch or prevent this, but knowing what you're using is kind of half the battle, right? Knowing what your users are using. So it's very important.
A
Shout out to Kevin Farrell in our chat who says hashtag sounds like something available at your local AI hop. There's multiple layers of awful to that, Kevin, and I'm very proud of you for enabling all of those. Our next story here. Anthropic Question over Claude esp. The U.S. house Homeland Security Committee has summoned Anthropic CEO Dario Amodi to testify on Dec. 17 about a likely Chinese espionage campaign that used Anthropic's AI Claude to target at least 30 organizations. Lawmakers did praise Anthropic for disclosing the attack. They are one of the more transparent AI providers out there, but called it a significant inflection point for US Cybersecurity with employees in all organizations choosing to use generative AI even before their employer sets up guardrails. I'm curious Derek, for you, you mentioned barely have a handle of how we're using it legitimately. Is this know a little more or no thanks in this story?
C
I mean I think it's no, a little bit more. You know, you have the, you have this, it's not really new. Right? We, we've known that attacker's been trying to use AI and AI systems to try to accelerate their attacks. Just because this is a nation state, I don't think it really makes it any different in that sense. It's just, it's the same problem in terms of making sure that we understand how these systems are being used and misused. You know, I think again, Anthropic being able to disclose this and, and get ahead of it is, is good. So I think, you know, that's leading us in the right direction. It's just really like again I, I just. Because it seems like I sort of zeroed in on the nation state advanced Persistent Threat act aspect of this, but I don't think it's any different than that or, or somebody else, you know, sitting in the basement somewhere doing the same thing.
A
So, Matthew, what about, you know, a little more or no thanks on this one?
B
Yeah, I would say no, a little bit more. You know, I'm cautiously optimistic that maybe the, that this will lead or help shape or bring some additional priority as a nation right around the things that we need to do to elevate the conversation around how we secure these types of platforms. So, and you know, whether that's a regulation or, or whatever, I think I'm hopeful that this will kind of help, help steer that conversation in a positive direction.
C
Yeah, sorry, I would say that there. I, I'm, I don't know if I share the optimism because I think that's like, I wish, I really wish and you know, I'd love to be optimistic about it. I think that, you know, we're taking this in front of, you know, Congress where we know that they're not the show.
A
Yeah, it becomes a show at that point. See, that's where I'm like, I'm a little, you know, I'm with Matthew on a little bit of the optimism here because, hey, they're saying they did a good job being transparent, but if they get taken to the woodshed about enabling nation state actors to run these attacks on Claude, even though they came out with it, to me, all that does is if I'm OpenAI, if I'm perplexity, if I'm any other provider, I guaranteed someone is at least trying to use them for these exact same types of purposes. They have so much less reason to now come forward and disclose this kind of stuff. So I'm hoping these conversations in front of Congress are actually productive and not just political theater to score some points. There's a first time for everything, right? All right, next up here, security keys may prompt for PIN after recent updates. Here it doesn't. It didn't sound bad at first, but Microsoft warned users that FIDO 2 security keys may prompt them to enter a PIN when signing in after installing Windows updates released since the September 2025 preview update. This is an intentional change. Microsoft says to comply with web authentic specifications, which dictate how authentication methods such as pins, biometrics and hardware security keys should handle user verification requests. After installing the Windows update, you might be required to create a PIN to sign in with a security key, even if a PIN was not required or, or set during your initial registration. Here. These types of changes may be a little confusing, holds the door anytime there's that kind of confusion or unexpected behavior always seems to open the door for threat actors here. Matthew, for you is this know a little more or no thanks.
B
It would be no, a little bit more mainly I think. I mean there's this security implications to this but also there's just basic end user right. Issues that this is going to create whether, whether you know, for the good or not. And so I would want to know a little bit more so I would be able to communicate throughout the organization what may. How many phone calls or tickets may this may generate to our help desk.
A
So yes, anytime you can get ahead of that. Derek, for you what about you know, a little more no thanks on this one?
C
Yeah, a little on the fence one of it but I mean to the point of, you know, now there's a new user experience that needs to be, needs to be understood and known. I think the other thing is that the whole purpose of FIDO is to have this, you know, passwordless, you know, type of authentication and this sort of throws a little bit of a wrench in it. Although it's sticking to the, to the web auth, you know, authentication standard. So it's like I think when I first read this, you know, struck me as like oh good, you know, this is the way it's supposed to be done. It's secure by design and we want to get that direction. But it's like okay. Then you dig in a little bit more and it's like well why didn't we do this? Why didn't Windows do that or Microsoft do this originally, you know, it was like well they were sort of going with a more permissive, you know, for usability. Right now you're switching it and now you're creating a usability issue. So I mean security always has to fight against, you know, features and usability and this is an example of that. So to, to Matthew's point, I mean it's, it's going to have to be a lot of education and discussions with, with users.
A
Why didn't Microsoft do this originally? As part of my multi part book series that I'm going to be writing for the rest of time such as their burden being the OS market share leader. Our last story here for no or no prompt injections muddle ChatGPT's Atlas browser OpenAI's ChatGPT Atlas browser launched back in October. It includes agentic AI capable of autonomous tasks. But this expands the risk of prompt injections, direct or even Indirect injections could expose sensitive data, execute code or compromise networks of agents. Experts warn the problem grows as agents gain tool access and autonomy, making attacks even more dangerous. Mitigations include strict least privilege access, sandboxing human oversight and treating untrusted inputs as hostile. This combines two very, very prominent dangerous issues that we talk about all the time here. The growing use of prompt injections in generative AI based tools. Then the possibility of over enthusiastic adoption of agentic AI tools as part of a permanent drive to do more in less time. You know, the promise of all of these tools is this, is this know a little more here Derek, for you or no thanks.
C
I'd like to know a little bit more is why to OpenAI even came out with this browser. I think a lot of people are asking that question because, you know, I don't think anybody was asking for this but you know, there's, there's a whole business case, I'm sure that's surrounded by this but you know, it's the same question or the same topic that we've been talking about where like we're giving these tools to people that don't really have the guardrails around it that are necessary to, to use it efficiently and safely. So we're going to continue to see these types of issues going going forward, whether this is really an outlier or any different than any of the other AI tools that are being misused. I, I don't know if that's really the case here.
A
But Matthew, what about, you know, a little more or no thanks when it comes to these browser exploits here?
B
Yeah, I'm kind of on the fence. I mean I kind of want to know a little bit more but at the same time, you know, these days these things are just happening so fast kind of to Derek's point, there's so many of them and so there and they occur almost daily, if not daily. And so it's kind of which ones are the most concerning, which are the ones that may have an impact, a direct impact to our business or to our infrastructure or to how our users are using things like this. I will probably side a little bit with Derek and I would want to know a little bit more but just a smidge.
A
Well, that raises a very interesting question for me of with all of these tools, how are you with your teams when someone comes to you, Hey, I just saw OpenAI has this new browser, has all these, these awesome automation capabilities. How, you know, how are you balancing that? We need to throw some cold water on this and not Just obviously just run anything. But at the same time these are useful capabilities. We need to probably know what they can do. There may be obviously our hopes of lots of productivity gains with some of these. Like what's, what's the balancing act with something like that?
C
I mean, I think, you know, for, we could have a whole long conversation about this.
A
But I mean, there's five minutes. Yeah, some serious product here.
C
Right. But most organizations, I mean we should be striving towards having like AI governance boards or decision makers and additionally being able to have use case libraries. Like what, what are the use cases for these, for these different AI tools that you're, that you're trying to use and do these tools fit those use cases? Is there another way to, to do this? I mean, the case with the, with the Atlas browser, I mean, is this really something that people are going to be clamoring for inside of an organization? I don't know. I mean, is, is it going to do something different than what you can get done in, in a normal browser with perhaps a plugin that's been already approved by your organization? I don't know. So I mean it really comes down to identifying what your use cases are. Have a governance board that says these are the allowed and disallowed activities, you know, as it relates to AI.
A
All right, well, before I move into our deeper discussion stories, have to spend a few moments and thank our sponsor for today and that is vanta. What's your 2am Security worry? Is it do I have the right controls in place or are my vendors secure? Enter Vanta. Vanta automates manual work so you can stop sweating over spreadsheets, chasing audit evidence and filling out endless questionnaires. Their trust management platform continuously monitors your systems, centralizes your data and simplifies your security at scale. Get started@vanta.com CISO that's V-A-N-T a.com CISO all right, this one really caught my eye in the past week. So I definitely want to get both your takes on this hack lore to tackle security myths. This is a new initiative called hacklore.org that launched a pushback against long standing security myths like frequently changing passwords or avoiding all public WI fi or public USB chargers. That kind of stuff. Created by former Yahoo and DNC security chief Bob Lorde, the project promotes simple evidence based practices like passkeys, MFA password managers and just keeping software up to date. More than 80 cybersecurity experts signed the open letter urging a shift toward practical guidance and support for secure by design and secure by default approaches. Jen Easterly was on that. I know our CISO series friend of the CISO series, Andy Ellis and Mike Johnson, were also signers of that open letter. I'm curious if either of you have visited the hacklore.org site. It seems to have some very credible experts on there, but very long list of people. Is this something you want to, you know, with these kind of things, how are you trying to roll out kind of these practical, hey, these are real world things that will help versus juice jacking, which there's literally never been any documented evidence of it happening in the wild. Derek, I'm curious for you.
C
I mean, I think, you know, when we talk about cybersecurity, we're talking about managing risk, right? And we have to manage, we have to identify, prioritize and manage the risks, you know, that we see. And so there's like sort of, sort of hesitant here because I think there's been historical, you know, cases of this and some of it has been academic. Like in terms of some of the things that have been identified by Hackler Public. WI Fi at one point was not the most secure thing. Right? And yes, we've moved way beyond that and we're to the point now where, you know, we don't really have the same concerns around it that we used to. But, you know, it did exist at one point and I think some of those things are hard to relearn or unwire, you know, in terms of like how we address like security issues. Are there bigger problems? Absolutely. And I think that's the point of Hackler, at least the way I, you know, am sort of interpreting it is that, you know, there's, there's way bigger problems that we have to solve and telling people that, you know, the off chance that you plug in your phone into a USB at an airport, you know, that you're gonna have some kind of juice jacking attack or, or some type of data exfiltration or compromise or, or something like that, like those, you know, is that really the highest risk that you're likely to see? No, you're more likely to click on a link that you've been sent in a phishing email that's going to be way more impactful. You know, attackers are always going to go for the, the lowest hanging fruit. What's the easiest thing to do is, you know, replacing a usb, you know, receptacle at an airport an easier job than sending an email? Of course not. So I think, you know, there is, there is the, the concept and I think the purpose of this is really to change the mindset to there's bigger problems. We're in the business of managing risk. What's the highest risk and how do we manage that?
A
Matthew, what about for you? Are you, are you ready to be busting some hack lore? And how much do you love that name? Like that's just like a choice name too.
B
I'm not a big fan of when we use the term hack in a broad sense, but I really, when I read it and their site and everything, I really liked it. And I think it's somewhat long overdue from the vantage, I think far too long, especially in the cyber community. And I'll even raise my hand because I know I'm guilty of this. I think oftentimes we've taken credible threats or I mean we've taken scenarios like Derek talked about risks or threats and we've used them in fearful ways, right? Oh, better not do that. You know, better not plug in, you better not connect to public WI fi, you better use a VPN anytime you're on the Internet. And I think oftentimes we as practitioners have used that not in the best way, right. And we've driven fear into our users and into consumers. And so I was actually pleasantly surprised to see kind of how they've approached some of those things because I think kind of dispelling those things, whether they're actually can be done academically or in the wild, I think moving us forward. Right. And I think also the other thing that's good about it is we have compliance frameworks, right. That are outdated and old, that reference some of these things. Right. And I think it's a good resource, a good mechanism and as practitioners and everybody who put that together, their voice, right. To try to help steer direction and change in that area. Because right now all of us as companies have to deal with multiple regulatory and compliance challenges. Right? And like I mentioned, some of those things are on this list. And so I think overall I think it's a really good thing.
A
What I think is really interesting about it is it really does seem to be taking the onus off of the individual with, with so many of these where it's like you don't do that thing versus outside of a, you know, for personally, yes, you can enable MFA on your personal accounts, yes you can use a password manager. But a lot of that is more top down driven like focus on these secure behaviors on a corporate level versus exhausting kind of the inconvenience bandwidth that individuals have for. Oh, but I. Okay. I can't log on on Starbucks and, oh, yeah, I can't charge at the airport. And like, all of these different practices where, you know, it gets to a point of exhaustion. Right. Oh, there's so many things I shouldn't do. I know I'm. I know I'm screwing up somewhere. Is the. Is the cry of someone that's given up.
C
Right.
A
Of someone that's like, they've been told they've been doing stuff wrong for so long that they're not even. They don't even want to try anymore because they know they're just going to screw up. So I do think there is an interesting kind of refocusing of, you know, instead of making people feel bad for using airport, WI fi or whatever to instead being like, no, actually, this needs to be driven to be secure. It needs to be driven from on top and then figure out how to get it down to everybody else. Am I reading that right?
B
No. I mean, yeah. I mean, I think so. I think there's a certain amount of deprogramming I think is kind of what you're alluding to.
A
Right.
B
We've got to deprogram ourselves from some of this lore, whether it was actual real or not.
A
Yeah. And I put it on LinkedIn, but I was like, the password on this sticky note is, to me, kind of in the same vein, where it's like, is that the best behavior? No. Is that something we should exhaust people with a lot of the times? Or like, the password book. Like, my parents have a password book in their kitchen. If they use unique passwords, I don't care. They could write it on a chalkboard. No one's going to go into their house and read their password chalkboard. Also, mom and dad don't use a password chalkboard.
C
Please. Dear God.
A
All right, our last story here for today. Corporate takeovers, meet SonicWall firewalls. ReliaQuest reports that Akira ransomware affiliates exploited compromised SonicWall SSL VPN appliances in companies acquired through mergers and acquisitions. That is a good way to get acquired. Attackers gained access to the acquiring firm's networks through inherited devices. Then search for privileged legacy credentials, unprotected host and predictable server names. Again, acquired firms network inherited devices, search for privileged legacy credentials. I mean, should we be surprised by any of this here? We all know mergers and acquisition. A turbulent time for all aspects of the business. Derek, for you, is anything about this, I guess, too surprising, or is this just those crazy threat actors?
C
Nope. I. I yield my time. No, no, I mean, you know, M and A's are messy. Right? And, and you don't, you're generally when you're. And I think the, the article pointed that out is that generally when the acquired. The acquired company is usually smaller than the, than the one doing the acquiring. Meaning that you, you don't. The acquired company doesn't spend a lot of time or hasn't put a lot of investment into their infrastructure, their security. They may not even have a security team. And so you're as the acquiring company and you're bringing in this new organization and you're, you're taking on all their security debt, their technical debt. You're bringing in all their individuals, you know, and so you have, you don't know how many people are disgruntled and how many people are upset about being acquired. You now maybe have an increased insider threat. So, I mean, the security issues, you know, related to M and A are pretty significant. I think really where, you know, we can make some headway in that is, is, you know, making sure that the security teams are, are well integrated into the M and A activity as it's occurring. I know that doesn't happen everywhere and that's really kind of, It's a shame because I think there's sort of, you know, obviously a lot of sensitive information being passed around. And not only that, but the M and A may fail. So you don't necessarily want to bring everybody in until, you know, it's, you know, a sure best. But like anything, you know, getting security involved early will decrease pain down the road. So.
A
And then, Matthew, for you, I mean, is this just, hey, let's attack the business logic?
C
Right.
A
We know you're, you might not even know who you're reporting to for a brief period of time or that there may be some ambiguity in that. Right. In M and A. Perfect time to strike. Here is. Is there anything novel about this or is this just everyone needs to be, you know, be extra cautious during this turbulent time, as Derek was alluding to?
B
Yeah, I mean, I would agree there's nothing unusual about this at all. Right. This is something that I think has been occurring and will continue to occur. And much to Derek's point, I think getting practitioners involved earlier and in the process is really important. Although, you know, at the same, I mean, having been through multiple M and as, there's always that tension between business, IT integration and security. Right. Just back and forth. And oftentimes the business, well, not oftentimes, the business always moves faster than IT and security. And before you know it, things are already hooked up or credentials are already shared or even if it's not at a technology level. And so I think it's bringing the practitioners in earlier and communicating at a whole. Right. To everyone who's involved, kind of what those risks look like, that threat model looks like. And to be aware of it. Right. And to be, you know, the people who champion, hey, if you want to do something, let us know. Contact the security people. We'll help facilitate it, but we'll help do it in a secure manner. Right?
A
Yeah, that's the thing. I mean because yeah, there's probably so many people that are like, is this, should I be seeing this? Is this normal? Should I have gotten this authentication, you know, this MFA push or something like that? Yeah, I, that, that. Matthew, thank you for that. That is some a quality piece of advice. So listen, for anyone going through an M and A, just give us some credit. You can use it all you want. Truly, truly appreciate it. Good, good stuff. Thanks. Everybody in our audience that made the time to join us live here, really appreciate it. Schmooze coming in, talking hack lore here. At least the requirement to run tripwire is no longer mandatory. Schmooze, you're hurting all of us here. You're going into the deep water there. And then Kevin Farrell, please. My parents password chalkboard is not connected to the Internet. How dare you. They know better. They just connect it over Bluetooth 1.0. I'm sure that's fine. Don't worry about it. Don't worry about too much. And they're right next to a Chinese embassy. I'm sure nothing is going on there at all. Thank you so much to our fantastic guest, Matthew Bybee, Director of Cybersecurity at Ticks Track. Eric Fisher, Director of the Cyber Defense and Information Assurance Program at Temple University. A man of letters indeed. We'll have links to both of your linkedins in our show notes and we will have to have you back on because this was spectacular. Thank you both so so much. Thanks also to our sponsor for today, Vanta Compliance. That doesn't suck too much. Remember, you can send us feedback at any time to feedbackisoseries.com and join us next Monday at 4pm Eastern for another edition of the Department of Know. And we also have a Super Cyber Friday coming up. Hacking AI Data readiness. So if you want information on any of that, go to our events page@cisoseries.com thanks for joining our Monday stand up. Have a great week. Stay secure out there from all of us here at the CISO series. Here's wishing you and yours to have a super sparkly day. Cybersecurity headlines are available every weekday.
C
Head to cisoseries.com for the full stories.
B
Behind the headlines lines.
Date: December 2, 2025
Host: Rich Rafalino
Guests:
This episode of the Department of Know tackles some of the latest cybersecurity stories impacting organizations and practitioners. Topics included high-profile vulnerabilities in cloud infrastructure components, emerging AI security risks (especially around prompt injection and generative AI misuse), shifts in user authentication requirements, efforts to debunk common security myths, and the specific challenges associated with mergers and acquisitions. The dialogue maintains a pragmatic, slightly wry tone, emphasizing the importance of evidence-based security actions and clear communication.
Matthew Bybee (00:19):
Emphasizes finishing yearly projects and compliance initiatives, which are particularly important during “the holidays and audit seasons.”
Derek Fisher (01:11):
Focused on grading and preparing students for rapidly shifting roles within cybersecurity:
“We're seeing a lot of changes and upheaval in cybersecurity. So it's really trying to help get these students prepared and out there in the field and hopefully making a difference.”
"It's close to my heart... The platform is used quite extensively... open-source, used pretty much by most of the cloud providers... I definitely would want to know a little bit more about it."
“There’s a stew of, you know, what we all deal with on a regular basis in cybersecurity. I think definitely need to know a little bit more about this.”
"I think this is just the tip of the iceberg. We're gonna see a lot more of this.”
“It points out something that's old, right—sanitization of inputs and outputs... But because of the proliferation and use of AI, this is definitely something that would be on my radar.”
“We’ve known that attackers have been trying to use AI and AI systems to try to accelerate their attacks. Just because this is a nation state, I don’t think it really makes it any different... Anthropic being able to disclose this and get ahead of it is good.”
“I'm cautiously optimistic that maybe... this will... help shape or bring some additional priority as a nation around the things that we need to do to elevate the conversation around how we secure these types of platforms.”
“There's just basic end-user issues that this is going to create... I would want to know a little bit more so I would be able to communicate throughout the organization how many phone calls or tickets this may generate.”
“Security always has to fight against features and usability and this is an example of that... Now you're switching it [security model] and now you're creating a usability issue.”
“It's the same topic that we've been talking about... we're giving these tools to people that don't really have the guardrails around it that are necessary to use it efficiently and safely.”
“These days, these things are just happening so fast... which ones are the most concerning, which are the ones that may have an impact, a direct impact to our business?”
“We should be striving towards having AI governance boards... and use case libraries. What are the allowed and disallowed activities as it relates to AI?”
“At one point [public Wi-Fi] was not the most secure thing. Yes, we've moved way beyond that... Are there bigger problems? Absolutely... Attackers are always going to go for the lowest hanging fruit.”
“Oftentimes we as practitioners have used [these myths] not in the best way... we've driven fear into our users. I was pleasantly surprised to see how they've approached some of those things...”
“…Compliance frameworks that are outdated... reference some of these things. It's a good resource [Hacklore] and a good mechanism... to help steer direction and change in that area.”
“It really does seem to be taking the onus off the individual... It gets to a point of exhaustion—'Oh, there's so many things I shouldn't do, I know I'm screwing up somewhere.'”
“There's a certain amount of deprogramming... from some of this lore, whether it was real or not.”
“M and A’s are messy, right?... You're taking on all their security debt, their technical debt... You don't know how many people are disgruntled and upset about being acquired—you may have an increased insider threat... Getting security involved early will decrease pain down the road.”
“Having been through multiple M and A’s, there's always that tension between business, IT integration, and security... Business always moves faster than IT and security... It's bringing the practitioners in earlier and communicating... what those risks look like.... Be the people who champion, 'Hey, if you want to do something, let us know, contact the security people—We'll help do it in a secure manner.'”
On the pace of AI-related threats:
“We barely understand how these things are being used, let alone how, you know, attackers are... going to be able [to leverage] advantage.” —Derek Fisher (05:37)
On the fatigue of endless, outdated security advice:
"There's so many things I shouldn't do, I know I'm screwing up somewhere. Is the cry of someone that's given up." —Rich Rafalino (25:14)
On the need to update compliance frameworks and security awareness:
“We have compliance frameworks... that are outdated and old, that reference some of these things [myths]... I think it's a good resource… to try to help steer direction and change in that area.” —Matthew Bybee (22:10)
On the M&A security dynamic:
“Business always moves faster than IT and security.” —Matthew Bybee (29:02)
This edition emphasized that cybersecurity is about managing the highest risks with evidence-based strategies, not fighting yesterday’s battles or scaring users unnecessarily. Rapid change—whether in AI, cloud, compliance, or business structure—means practitioners need to focus on education, process improvement, and “deprogramming” users from legacy thinking while preparing for new technical realities.