
Loading summary
Cyrus Tibbs
From the CISO series, it's cybersecurity headlines.
David Spark
Salt typhoon breaches. National Guard AI does phishing old school with the visible text. And 20 years on, maybe it's time to fix that freight train vulnerability after all. These are some of the stories that my colleagues and I have selected from this past week's cybersecurity headlines. And now we're looking forward to some insight, some opinion and some expertise from our guest, Cyrus Tibbs, CISO at Paul Pennymax. Cyrus, thank you so much for coming on for, for lending us your wit and wisdom. I gotta ask though, how was your week in cybersecurity?
Cyrus Tibbs
Week's been great. I'm happy to be here. Thanks for inviting me. You know, just keeping one foot in front of the other, managing all of the, all of the old problems and all the new ones coming at the.
David Spark
Same time, keeping a forward motion, that's all we can ask for in this day and age. The other thing we can ask for is a big thank you for our sponsor for today, which is Threat Locker Zero Endpoint Protection Platform. Now, if you're listening to this show as a podcast, remember that next week you too can join our loyal band of vocal experts on YouTube live. Do so go to cisoseries.com look for that events dropdown and find the week in review image. Click on it. You can join us and contribute in our chat, which is what all of our live listeners can do right now. We will do our best to address them in the show, but we always love having your commentary to help make this show better and maybe make us laugh along the way. And remember, you can send us feedback about the show, something you like, some reaction to the news, an angle that we didn't cover or consider feedback@cisoseries.com before we jump in to the news of the week, just for a quick reminder that these are all Cyrus's opinions, not necessarily those of his employer, his staff, friends, family or even nemeses. I'll go as far as to say, I just want to make sure that's clear before we disclaim anymore. We've got about 20 minutes, so let's just jump into the news first up here. Salt Typhoon breached National Guard and stole network configurations. The Chinese state sponsored hacking group breached and remained undetected in a U.S. army National Guard network for nine months in 2024, stealing network configuration files and administrator credentials. These could be used to compromise other government networks. The methods by which the group penetrated the National Guard network were not disclosed. We don't know the Details of that. But bleeding computer states that Sal Typhoon is known for targeting old vulnerabilities in networking devices, things like old Cisco routers. So, Cyrus, on last week's show, we discussed the fact that Congress has passed a bill giving the US military a cybersecurity boost, particularly focused on offensive cybersecurity. While giving the greatest respect to all those who serve. It can be somewhat worrisome when military organizations find themselves hosting foreign powers inside their systems. I'm curious, how can a threat actor stay undetected for so long and what more could be done to prevent this or get even visibility into this problem?
Cyrus Tibbs
You know, this is a. This threat actor is probably one of the most interesting ones to be following this year between their presence in the telecom networks that was referred earlier this year and now sort of a very similar pattern in the National Guard here. You know, the vulnerabilities that they're known to exploit, the network stack are pretty old. So that leaves us to, you know, if there wasn't some kind of new novel way in, it is showing that this is one of those basic blocking and tackling things. That sounds simple. That's actually really hard for a lot of organizations to do. I think a lot of organizations will focus on. They maybe lose sight of some of their network infrastructure and IoT thinking that this isn't really the target, it's not where the data is, but it is a great place to set up presence. I think for me, it brings right into your core of you need to always make sure you have an active dynamic inventory being updated and that you are constantly managing vulnerabilities within that. I mean, the MO of this group to attack network infrastructure with these old vulnerabilities is well known, but I think it just serves as another challenge to organizations really scoping in everything when it comes to their environments and their vulnerability programs.
David Spark
Yeah, it seemed like if there was a theme, I guess for last year, it would have just been around. Like, we have no handle on identity in a meaningful way, like as an industry. But this year I feel like just the last three months I've been hearing that asset inventory is this giant challenge. Just we're not talking enough about that. So thank you for highlighting that because, yeah, that is. And especially with an organization with, I'm assuming, a very diffuse set of infrastructure. Right. The National Guard, militaries all over the place, you know, it's kind of tough to get a handle on where everything is, I'm sure.
Cyrus Tibbs
Yeah, it's one of those situations where it's Easy. It's very easy to set up a whole wide expanding network across the world. It's another thing to keep it patched. And a lot of times those things are cost prohibitive. But I think we're seeing, especially in this advent of AI, all of the ability to your attacker's ability to inventory you and then know what exploits will attack you. That's become a lot faster now. So I think the dynamic that we used to have around network infrastructure is kind of not being traditional target well, like anything else, as we've gotten better in other places, they're finding new places to go.
David Spark
Cybercriminals, if you need another business model, asset discovery as a service actually would be extraordinarily valuable to send that. You can send an invoice. I'm sure a lot of people. It's not quite ransomware, it's asset extortion, where I'm not suggesting anyone do cybercrime. Also for the record, next up here, Pentagon welcomes Chinese engineers into its environment. Okay, this is a little awkward and unfortunate case of the fox guarding the henhouse. US military systems are receiving backend support from engineers based in China. That may sound like a security risk, and that's because it is. ProPublica reports that while these foreign engineers work through digital escorts in the US the escorts often lack the technical skills to detect malicious code or misuse. Let me just read this part again. The escorts often lack the technical skill to detect malicious code or misuse. The arrangement was approved by the Pentagon despite serious internal warnings from Microsoft staff about national security risks. Cyrus, given what we just talked about in the previous story, how does this make you feel? Like this seems like a head scratcher.
Cyrus Tibbs
Right. You know, the global support of IT systems is not a new trend. And I don't think it's going to be a trend that goes away. And private companies as well as public companies are all going to be relying on sort of these interconnectedness and people who are not always in our legal jurisdictions. And naturally the insider threat from those folks is elevated. Right, because you may be more permissive with what you might let somebody do who is in the US Jurisdiction as opposed to someone who's in China's jurisdiction. If we look at like. And also, you know, when companies have very strong security, these MSP kind of staff or staff that have access, these are becoming a new target. So if you look at the recent Coinbase incident, right, they targeted people at that level. So I could. I think that this really should be a wake up call to everybody to really be thinking about, well, how are you controlling access and validating the work of your staff who are not in the US Legal jurisdiction because you have less recourse there and they have more motivation just from a financial perspective. So I think this challenge of the workforce, insider risk that is outside our legal jurisdiction is going to get larger. And I think as organizations respond and enhance their security controls, I think more and more of these staff are going to be a target of our adversaries.
David Spark
All right, next up here, Google Gemini Flaw Hijacks Email Summaries for Phishing Bilibcomputer is reporting that Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links. This is basically the reinvention of the old white font zero point size technique that is a tale as old as time. In this case, the attack leverages indirect prompt injections that are invisible to humans but obeyed by Gemini when generating the message. Summary this vulnerability is certainly not unique to Google. We've seen all sorts of prompt injection as this whole new vector to wrap our head around it is once again though part of the old cat and mouse game between people trying to use these tools and threat actors. However, it's an easy leap between zero point white text to enormously powerful tools like Microsoft's recall or AI based transcription tools where similar techniques that include summarizations could be leveraged. I'm curious Cyrus, what's your take on this?
Cyrus Tibbs
So I feel like these AI tools, we have to kind of view them the same way we would view like people in our organization from a risk perspective. They do present an insider risk, they are presenting information and they are kind of having this, I guess non deterministic response to things that they process. Personally, I feel like in the AI social engineering and in the AI injection risks that these things present, I don't think we're, I think we have to see the ground on trying to detect these things and we really need to rely on guardrails at the next level. I don't think that the, you know, the traditional ways of phishing, I don't think we can build enough security detection models for the variety of these types of phishing attacks we're going to see and these types of injection attacks we're going to start seeing. So we really need just again focus on some of the fundamentals around what's the access layer, endpoint layer, data layer protection you have downstream from this? Because you can rely on the large language models to try and detect this thing. But I really think that this is a whole new type of social engineering where people are saying how do we social engineer our AIs? That people are using AIs right now, they're, they are very accommodating to whatever you ask them for, regardless of what it's used for. So their design is to be accommodating to the request. So we just have to make sure that downstream of those requests that we've got the right controls in place. Because I think we need to see, just like I think we're going to have to seed our ability to detect social engineering at the user level. I think this is the same thing at the AI. At the AI engineering level the exact same problem exists. The guardrail has to be a layer behind this.
David Spark
I, I have to imagine that's an interesting perhaps difficult conversation to have though with an organization where it's, we want these AI tools so we can be more productive, we can move faster. Right. Like so that's the whole point. We can scale stuff, you know, you know we can, we can get that hockey stick productivity growth. And to come in and say, well actually we need to implement these guardrails. We need to have either more humans in the loop to, to, you know, when we're talking about output verification or, or just something in that loop. Again, treating them like employees. I've heard it, treating them like junior staff members. Right? Like that, that kind of idea of that's the kind of risk profile. I imagine that's a tough, tough to get some buy in when all of these tools are being sold as this is going to take you to the moon, unlock new things for you. Right?
Cyrus Tibbs
Yeah. I mean don't get me wrong, I'm an AI optimist. Even though, I mean I sound like it. I think that it's creating huge productivity benefits both for security teams and adversaries and businesses and everybody. We're all benefiting from it and that there's, you know, I mean maybe it's a cheap comparison but it's kind of like when like Ms. Office first came out like you can, everyone was like, oh, I have Ms. Office skills. And I think eventually no one writes that on the resume anymore because we just assume it's a skill you have. So I think there's no stopping this stuff. But I just think that it, there has been a paradigm that we had around detecting these kinds of things before they happen. And I just think with the, the way these ars are going, it'd be similar, like trying, it's like trying to say, like, can I detect when one person is trying to fish another person without any kind of obvious, you know, link or attachment or anything like that in there. Right. And you can't. And so I think we have to just assume that, you know, we need to really. I think when I talk to other systems, everyone's like, yep, I'm just, I'm doubling down on access control and endpoint control and data controls because, you know, the, this is kind of like you're going to have this kind of large scale spear phishing and AI ingestion and prompt attacks. And this is, this is going to be the wave of the future. And so I think we have to all prepare for it and just think about it from the loss events that it can occur and work backwards from those to see how do you make sure that you get the human in the loop in the right places so that, you know, you can't have a automated loss event from an email from somebody you know. All right.
David Spark
Well, before we move on to our next story, I have to spend a few moments and thank our sponsor for today, ThreatLocker. ThreatLocker is a global leader in zero trust endpoint security, offering cybersecurity controls to protect businesses from zero day attacks and ransomware. ThreatLocker operates with a default deny approach to reduce the attack surface and mitigate potential cyber vulnerabilities. To learn more and start your free trial, visit threatlocker.com CISO that's T H R E A T L O c k e r.com CISO just want to acknowledge everybody in the chat. TJ I'm glad you are here. Made some time to be here. Schmooze. Thank you. And Kevin Farrell kicking us off here, of course, the big boss man, David Spark in there as well. Always fun to see people in there. Yes, Kevin Farrell, you caught the pumpernickel moment of the week. Now our next story here. This is probably one of my favorite ones of the week here. So Cyrus, I can't wait to get your take on it. AAR pledges to start fixing 20 year old train vulnerability next year. The technology that allows the front of a train and the back of a train to talk to each other in order to discuss issues like, you know, stop is apparently woefully not secure. According to cisa, an organization that would purportedly know about such things. There's no authentication or encryption between those two points which could allow a threat actor to send rogue brake control commands to either end of the train. This was discovered by researcher Neil Smith way back in 2012. And in the intervening 23 years, no action has been taken. The association of American Railroads has said it as pursuing new equipment and protocols with a start start date projected by 2026. Don't worry though. Only about 70,000 total devices need to be upgraded. So Cyrus sounds a lot like the railroad industry has been working on their lobbying all the livelong day. A comforting fact as we wait for, you know, at crossings with cars full of oil and molten sulfur to pass by at 60 miles an hour. Just a variety of nightmare scenarios I can think of. But I'm curious, why does infrastructure always seem to get passed over when it comes to, I don't even say proactive security, but just basic security practices?
Cyrus Tibbs
I feel that this really dovetails into a lot of the challenges we see in the IoT world in general. The IoT world is very loosely regulated. The software written is generally written to solve a specific physical problem. And the people who maintain that technology generally aren't thinking about this sort of like long term cyber problem. This is one of those things where I think the original designers of this probably just assumed security through obscurity and saying, well, you'd have to be on the train and you'd have to, you know, that person would also be at risk. But I think when you take a step back, you realize I think this is just like a major failing of our regulatory framework to enforce these kinds of controls. Because this is a perfect example of the technology was developed in such a way to sell it, but not to maintain it without the right regulatory framework to get these IoT providers to provide that long term support as well as the people who purchased it to ensure that these systems have a way of being patched. I think we're stuck in the IOT world where you basically have to go back a little bit to the walled garden mindset of how do you build walls around these things to effectively put kind of overlays and protection on them. I mean, I think the risk of something like this is so tremendous that, you know, the private sector really isn't going to respond to this until you have major incidents start occurring and it starts costing them real money. But right now it hasn't really, it hasn't really cost a lot of money at this point. It hasn't come to fruition now. I mean, I don't know if publishing these, this stuff still widely will encourage that to occur. Even though it's been around for so long. It's super concerning to me that what we see in the IoT space. And I think that once as more and more things are IPv6 and more and more things are being driven by IoT, you'll start seeing these major loss events starting to occur when they're targeted and hopefully we respond in kind to start ramping up the security and compliance and patching and requirements around things like these. Because I just think this is just a blind spot kind of worldwide.
David Spark
I can buy the security through obscurity. I highly suggest you read all the details about this. We'll have the link to it in our show notes. But security through obscurity not acceptable. But I can understand that mindset. But this has now been vulnerabilities that have been discovered by multiple researchers presented at things like Black Hat where there's a lot of visibility into them. So it's like that is no longer the case. And yeah, it's one of these things that I don't know, I don't know how you get more visibility than presenting something at Black Hat where there's a ton of mainstream press coverage.
Cyrus Tibbs
Right. That just shows you that even the awareness of something like this isn't enough on its own to drive the people who own these products to design them in a way. And thinking about the worst case scenario.
David Spark
All right, well, next up here, Wetransfer says we apologize over terms of service. Gaffer, the popular large file transfer service raised the ire of many of its customers this week when they interpreted a clause in the terms of service legalese to use customers content. This is probably because the clause said you grant us license to use, reproduce, modify, create derivative works of and publicly display your content. Now this may have simply been awkwardly written copy written by awkward contract lawyers or maybe by ChatGPT itself, but it still represents the slippery slope of companies everywhere wanting to train their, you know, just trying to get hands on customer IP at the end of the day. But as the Register points out, the furor underscores the public's distrust of companies in general. I'm curious, Cyrus, do you think that companies should have better judgment in terms of should have better judgment when it comes to like, hey, this is a AI gold rush, but maybe we don't want to make all of our customers mad at the same time.
Cyrus Tibbs
I think traditionally what we've seen in that it's been a selling point in your technology architecture to make sure that your data is protected. It's your data and products have typically leaned into that. So if you're buying a SaaS product, they want to ensure you like, okay, you have your tenant and we have some encryption in place to make sure that your data is not being mixed with other people and the wrong people don't have access to it. There's lots of assurances around that. But I think in some technology markets there's a real motivation to suck up your customers data to make your AI more effective. I think we saw this with Slack, had done a thing where they had changed their terms and conditions and sort of changed back. And I think that organizations, especially ones that have really sensitive ip, need to be very careful in their contract language and the agreements that they enter with these companies to really be aware of how your data is being used. Because ultimately a lot of these companies, they're not building their own language models. Usually they're using one of the other large ones. So you may inadvertently just be giving all of your data to Google without knowing it. And Google, you don't have the agreement with Google, you have the agreement with the intermediary that's using Gemini or something. So I think it's really important that companies explicitly have contract language and data protection data privacy writers in their contracts that really outline what's acceptable and start with a stricter approach. Like okay, I want to, you know, I don't want you to have any of my data, I don't want to train any of it and see what the lawyers of these companies come back with and that will help you. So instead of trying to read through all their legalese when you're, when you're engaging in a contract, provide them yours and see what they're, see what they come back with and see what you wait because it is going to be a really difficult balancing act because it's not one of those loss events where the loss event happens right away by doing the wrong thing. This is something where you may train it over years and let down the line. All of a sudden someone's got a process here.
David Spark
Yeah, the idea, I kind of like that almost that optimism, right of like usually like whenever I think of B2B contract things that have consumer facing repercussions, right. It's like as a consumer you have absolutely no interest in that. But like maybe there is some, you know, as, as a brand, as a, as a company, you can do that on behalf of your users. Because I didn't even think about that. You know, usually I just assume lazy lawyers trying to give themselves the broadest possible, like never, never assigned to malice. What can be assigned to by by overly broad lawyering. But yeah, the idea of oh maybe this is actually coming from one of their partners. They're working with one of the more one of the foundation models. Right. And so they want as broad of access and so yeah, I could definitely see that as a, you know an effective marketing. You know, kind of how Apple's whole brand is privacy. Someone else using a similar thing of hey, we're going to advocate for you and not get pushed around by B2B. Easier said than done for sure. Yeah. And yeah TJ that signed agreement is on me. Didn't read it entirely. I will point you to a South park episode for more information about that. All right, and our last story of the day. Google says Big Sleep AI tool found bug that Hackers Planned to Use Google says its AI agent Big Sleep discovered and thwarted a critical SQLite vulnerability before hackers could exploit it, marking what it claims to be the first time AI has actively blocked a zero day attack in the wild. The tool was developed by Project Zero and DeepMind and found multiple real world bugs since November when it debuted and is now being used to secure open source projects. So here we see AI working for good as a kind of a zero day defensive layer, I guess. Cyrus, I'm curious, what are your thoughts about this?
Cyrus Tibbs
I think when I think about like the new generative AI, the vulnerability lifecycle space is probably the space that scares me the most. I feel like the what Google is starting with here is showing how AI can discover vulnerabilities and I think the discovery, the automatic discovery of these vulnerabilities in code, the writing the exploitive code for it, having that operationalized, I think that time frame is going to be dramatically compressed. If we think back to how much of our software that we run is written on non memory safe coding languages, I don't think we're that far away from an AI that can look at something especially in the non memory safe areas and have it find vulnerabilities that no one knew about force and democratizing that something that used to be the zero day discovery was a very niche high skill area. I think that is really going to be democratized in a scary way. And I think a lot of organizations right now we kind of see the zero day and unknown vulnerability as this sort of anomalous event.
David Spark
Right.
Cyrus Tibbs
Everybody remembers log4j from years ago because it sticks in your mind so much. But I think organizations are going to start seeing zero days in a much wider scale and it's really going to have us make questions about how much of our legacy software should we continue to run, how much do we need to refactor and rebuild? And then I also think it's going to put a lot of strain on vulnerability management organizations. You know, you kind of want to have this purist perspective of well if I have a vulnerability, I want to close it. Right. But if you start getting to the point where these things, the volume of this starts picking up in an extreme way, you may not have the staff to do as much remediation as you can. So you may have to cede some ground and focus more on that path remediation. I think you're seeing the product space and vulnerability management really shift dramatically toward this is how do you do that? Path analysis and threat modeling analysis. So I think this is going to be, this is the part that's, this is the area that scares me the most. And I don't think that there's really a good solve other than we should be ready for more zero days and we should be ready for more of these, of this type of work to be being done because there's clearly going to be huge profit in it.
David Spark
You know, I'm curious, what are your thoughts in terms of the impact on the open. You know, obviously every enterprise basically is running on, is using some open source components, right. Like it's very famous. Google and Microsoft are the biggest contributors to open source. Right. I'm curious. We've seen a lot of, you know, going back to log 4J or any number of these exploits out there, a lot of trust issues coming into there. A lot of just that whole ecosystem is under a lot more stress. Do tools like this make this, you know, better? Obviously this is responding to, you know, increase like as you were just mentioning, increasingly weaponized threats increasing or the decreasing time to find stuff at scale. Does this shift anything for you and kind of how you're, how you're viewing that ecosystem system or is this just kind of the inevitable cat and mouse part of that?
Cyrus Tibbs
This may be my hot take on it, but I've always felt that software companies don't have enough liability or skin in the game in vulnerabilities. I've always felt that, you know, these companies are creating a lot of these vulnerabilities or they're writing code in a haphazard way that ends up creating vulnerabilities that their customers pay for. And I think that both in the open source maintenance space as well as, you know, but one I think the larger tech giants and larger companies that do rely on these open source projects. They need to pony up money and fund them. And I do. They do in a lot of ways. I know Google has a whole effort around securing open source, but they absolutely, they have a fiduciary and moral responsibility to help keep these things secure because these are going to get attacked a lot more. And then as well, there really needs to be, for my personal opinion, regulatory wise, there needs to be more accountability from the vendors on the software they create. You know, like what we were talking about earlier with the railroad, you know, like stopping mechanism, right. Like, you know, we're also conditioned to think that, well, it's the railroad's fault to do something about this and it's the IT staff that work at the railroad who aren't doing the thing. But, but what's the liability of the people who sold it to them? Right. So if someone sells me a car with a flaw in it, there's liability to that person. I think that has the risk of slowing down software development to a degree, but that may not be a bad thing in some places. So I think like, just to summarize, you know, the tech companies need to, need to do a lot more to be supporting in this area. And you know, regulatory wise, we need to, we really, really think about the liability around these types of vulnerability, especially if this starts happening more and more and more. We can't keep blaming companies for not being able to keep up with a problem they didn't create.
David Spark
All right, well, thank you so much, Cyrus Tibbs. Fantastic. I love ending on a hot take like that. That was awesome. Before we get out of here, was there any story for you that was a thumbs up or a facepalm for you this week? Just something that, it just jumped out at you when you were reading it in the news?
Cyrus Tibbs
My, I think my favorite one is the hackers that are hiding binaries and DNS.
David Spark
Yes, that's.
Cyrus Tibbs
That is that, that one made me laugh because just the ingenuity to take something that like is, is as rock solid and beautiful as DNS. DNS is a beautiful protocol. The whole Internet runs on it. It's fast, it's amazing. And just for someone to realize that they could use that to deliver a binary to themselves, I thought that made me laugh sometimes. Sometimes I have to admire the adversaries and their ingenuity and that was the one that really stood out to me. Obviously it's going to put more pressure on, you know, DNS providers to start adding controls and I think it's something you can easily detect, but just the just the ingenuity of that one made me chuckle.
David Spark
Well, Cyrus Tibbs, the CISO at PennyMac. Where can people find you on the cyberspace if they are so inclined to follow what you're up to?
Cyrus Tibbs
I am only on LinkedIn so you can find me on LinkedIn. I try to avoid all the other social media for my own sanity. So LinkedIn you can find me there and I'm very open to connecting if.
David Spark
There was any doubt that you were a wise man limiting your social network exposure. Perhaps the greatest wisdom of all. And thanks also to our audience today. Schmooze, TJ Williams, Kevin Farrell having some fun in there and questioning the future of cybersecurity as we know it. Always lively and fun conversations to be had in our chat. Make sure you join us each and every Friday at 3:30pm Eastern. Also, a big thank you to our sponsor for today, Threat Locker Zero Trust Endpoint Protection Platform. And remember, if you can't join us live if you need to, let us know you have a thought about one of the stories that we had. If you have a thought about some DNS malware, send us an email feedbackisoseries. Remember to please join us next week. First we have Super Cyber Friday where our topic will be hacking the Security Poverty line. Now of critical thinking about minimum viable security. I will be hosting that as well as this show. Super Cyber Friday starts at 1pm Eastern and then I will be back here for the weekend review at 3:30pm Eastern as well. For information and registration on all of those, take a look for the events page@cisoseries.com and if you find yourself in Toronto next Friday, be sure to join David Spark and our glorious producer Steve Prentice, along with a whole bunch of great CISOs and fans of the show for coffee at the Brick Street Bakery in the beautiful and historic Distillery District of downtown Toronto. To register for both, you guessed it, the events page@ciso series.com in the meantime, you can still get your daily news fix every single day through Cybersecurity Headlines. Give us about six minutes, we'll get you all caught up. For myself, for our aforementioned glorious producer Steve Prentiss, for Cyrus, for all of us here in the CISO Series conglomerate, here's wishing you and yours to have a Super Sparkly day.
Cyrus Tibbs
Cybersecurity headlines are available every weekday. Head to CISO series.com for the full.
David Spark
Stories behind the headlines.
Podcast Information:
Overview: The episode kicks off with a discussion on the Salt Typhoon breach, where a Chinese state-sponsored hacking group infiltrated the U.S. Army National Guard network for nine months in 2024. The attackers stole network configuration files and administrator credentials, posing a significant threat to other government networks.
Key Points:
Discussion with Cyrus Tibbs: Cyrus emphasizes the persistence of basic security oversights, even in critical infrastructure. He points out the importance of maintaining an active and dynamic inventory of network assets and continuously managing vulnerabilities.
Notable Quotes:
Overview: The podcast delves into a concerning arrangement where Chinese engineers are providing backend support to the U.S. military systems. ProPublica reports that these engineers work through digital escorts in the U.S., whose lack of technical skills makes it difficult to detect malicious code or misuse.
Key Points:
Discussion with Cyrus Tibbs: Cyrus highlights the growing issue of insider threats from international staff. He stresses the necessity for stringent access controls and vigilant monitoring of personnel who are not within the same legal jurisdiction.
Notable Quotes:
Overview: The discussion shifts to a vulnerability in Google Gemini for Workspace, where attackers exploit the tool to generate seemingly legitimate email summaries that contain malicious instructions or warnings. This method avoids traditional phishing tactics like attachments or direct links, making it harder for users to detect threats.
Key Points:
Discussion with Cyrus Tibbs: Cyrus draws parallels between AI tools and insider risks, emphasizing the need for robust guardrails and layered security measures. He advocates for focusing on access control, endpoint protection, and data layer defenses to mitigate these emerging threats.
Notable Quotes:
Overview: The episode covers the Association of American Railroads' (AAR) commitment to address a critical vulnerability in train communication systems. Discovered in 2012 by researcher Neil Smith, this flaw allows unauthorized entities to send rogue brake control commands due to the lack of authentication and encryption between train ends.
Key Points:
Discussion with Cyrus Tibbs: Cyrus attributes the delay in addressing such vulnerabilities to the broader challenges within the IoT sector, including lax regulations and the focus on solving immediate physical problems over long-term cybersecurity risks. He calls for enhanced regulatory frameworks and greater accountability from technology providers to ensure ongoing support and security.
Notable Quotes:
Overview: WeTransfer faced backlash from customers who were upset by a clause in their terms of service allowing the company to use, reproduce, modify, and publicly display user content. The confusion arose from the broad language, leading to distrust among users.
Key Points:
Discussion with Cyrus Tibbs: Cyrus stresses the importance of companies explicitly defining data usage in contracts and advocating for stricter data protection measures. He advises organizations, especially those handling sensitive information, to proactively engage with service providers to ensure their data is not inadvertently used for AI training without consent.
Notable Quotes:
Overview: In a positive note, Google announced that their AI agent, Big Sleep, developed by Project Zero and DeepMind, discovered and neutralized a critical SQLite vulnerability before hackers could exploit it. This marks the first instance of AI actively blocking a zero-day attack in the wild.
Key Points:
Discussion with Cyrus Tibbs: Cyrus acknowledges the dual nature of AI in cybersecurity. While tools like Big Sleep offer substantial benefits in identifying and mitigating vulnerabilities, he warns about the accelerated vulnerability discovery and exploitation that AI could facilitate, potentially overwhelming existing vulnerability management frameworks.
Notable Quotes:
Closing Remarks: Cyrus shares his appreciation for the ingenuity of adversaries, specifically mentioning hackers who cleverly use DNS to deliver malicious binaries. He highlights the need for DNS providers to implement better controls to counter such innovative attacks.
Notable Quotes:
Contact Information: Cyrus Curtly mentions that he is available on LinkedIn for further connections and discussions.
Final Quote:
This episode of Cyber Security Headlines provides a comprehensive overview of recent cybersecurity incidents and trends, blending expert insights with actionable advice for organizations aiming to bolster their defenses against evolving threats.