Transcript
A (0:00)
Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com/cst.

An AI security agent breaks into McKinsey's internal chatbot in two hours. Phantom Raven npm campaign uses 88 packages to steal developer data. Most strong passwords may already be compromised. 14,000 routers are infected with malware that survives most cleanup attempts. And a hidden Trojan backdoor in AI models: could that change how we secure AI?

This is Cybersecurity Today. I'm your host, Jim Love. Let's get into it.

A security experiment reported by the Register shows how quickly automated attacks could unfold in the age of AI. Researchers at security startup Codewall aimed an autonomous security agent at McKinsey's internal generative AI platform, called Lilli, and the system was able to gain read and write access to the chatbot's database in about two hours. Codewall says the test was conducted within the guardrails of responsible disclosure. The company builds AI agents that continuously probe infrastructure to help organizations improve their security posture. Their own system suggested McKinsey as a potential test target because the consulting firm publicly publishes a vulnerability disclosure policy and had recently updated the Lilli platform, the researchers wrote in a blog post. "So we decided to point our autonomous offensive agent at it," they wrote, noting that the system did not have credentials for any McKinsey assets and started only from publicly accessible systems. The access the agent achieved could theoretically have exposed a large amount of corporate data, including 46.5 million internal chat conversations, roughly 728,000 files, about 57,000 user accounts, and 95 system prompts used to control how the AI assistant behaves.
Lilli, introduced in 2023, is widely used by McKinsey consultants for internal research, drafting and knowledge search. What makes the experiment notable is not the vulnerability itself. The AI agent discovered and chained together classic web application weaknesses, including exposed APIs and an SQL injection flaw, a decades-old attack technique. The difference is that the AI agent explored the system autonomously and assembled the attack at machine speed. McKinsey said the issues were identified responsibly and fixed quickly. In a statement to the Register, the company said, "These issues were identified through a responsible disclosure process. We addressed them promptly and there is no evidence of any unauthorized access to our systems." The larger takeaway is that enterprise AI systems are still built on the same foundations as any other application: APIs, databases and web services. And if those basics aren't secured, automated agents, whether used by defenders or attackers, can now discover and exploit weaknesses much faster than humans ever could.

Security researchers have uncovered a new supply chain attack targeting developers through npm, the Node Package Manager, the massive online repository developers use to download JavaScript libraries and software components. According to reporting from BleepingComputer, researchers identified 88 malicious packages published to npm as part of a campaign now being tracked as Phantom Raven. The packages were disguised as legitimate development utilities, but once installed, they quietly collected information from developers' systems. Researchers say the malware attempted to gather SSH keys, usernames, host names, IP addresses and working directory paths, information that can help attackers map development environments and potentially gain access to internal systems or private source code repositories. What makes the campaign particularly effective is how it hides from many security scanners.
Instead of placing the malicious code directly inside the JavaScript package where automated tools would detect it, the package contains only a small loader. When the software runs, that loader reaches out to an external server and downloads the real malicious payload at runtime. Researchers also say that the attackers repeatedly republish the packages using the same basic technique but inserting the loader in different places in the code. By slightly changing the structure each time, the campaign can evade signature-based detection tools and keep reappearing even after earlier packages are removed. The lesson for organizations is that software supply chain security can't rely only on scanning packages before installation. If malicious code is pulled in dynamically at runtime, the threat may only appear once the software is already running inside a developer's environment or build pipeline.

A new study warns that most strong passwords may already be compromised: if a password has leaked in a breach elsewhere, attackers can simply log in with it. The study found that 83% of 800 million known compromised passwords still satisfied regulatory requirements, a finding that challenges the way many organizations conduct password audits. Most compliance checks focus on whether a password meets complexity rules: length, uppercase characters, numbers and special characters. But those rules don't tell you whether the password has already appeared in a breach database, where attackers routinely collect billions of credentials. Modern attacks rarely rely on guessing passwords anymore. Instead, attackers use credential stuffing, where they test passwords leaked from previous breaches against other accounts. If a user has reused the same password somewhere else, the attacker doesn't need to crack anything; they simply log in, researchers say.
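To make that breach-check idea concrete, here's a minimal Python sketch of the k-anonymity scheme popularized by the public Pwned Passwords range API: you send only the first five characters of the password's SHA-1 hash, receive every known suffix in that range, and match locally, so the password itself never leaves your machine. The function names are mine, not from the story.

```python
# Check a password against a breach corpus without sending the password,
# using the k-anonymity range scheme (as in the Pwned Passwords API).
import hashlib

def sha1_parts(password: str) -> tuple[str, str]:
    """Return the 5-char prefix and 35-char suffix of the SHA-1 hex digest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_body: str) -> int:
    """Parse a range response of 'SUFFIX:COUNT' lines; 0 means not found."""
    for line in range_body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Illustrative usage (the network call is sketched, not executed here):
#   prefix, suffix = sha1_parts(candidate_password)
#   body = urllib.request.urlopen(
#       f"https://api.pwnedpasswords.com/range/{prefix}").read().decode()
#   if breach_count(suffix, body) > 0:
#       reject the password, no matter how "complex" it looks
```

A password that passes every complexity rule but returns a nonzero count here is exactly the kind of credential the study warns about.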
Another problem is that security teams sometimes focus their audits on general user accounts while attackers concentrate on high-value targets such as administrators, developers, and executives with access to critical systems. If one of those accounts is protected by a password that looks strong but has already been exposed in a breach, the door may be wide open. The takeaway is that password strength and password safety are not the same thing. A password may pass every compliance rule and still be known to attackers. That's why security experts now recommend checking passwords against breach databases and pairing them with multi-factor authentication, so a leaked password alone isn't enough to break in.

Security researchers say that more than 14,000 Internet routers worldwide are currently infected with a piece of malware that is unusually resistant to takedown efforts. The infection allows attackers to quietly control the devices and use them as part of a larger botnet. Routers are attractive targets because they sit at the edge of networks and handle all incoming and outgoing traffic. Once compromised, they can be used to launch distributed denial-of-service attacks, hide the origin of cyber attacks, or potentially monitor network traffic without the user ever noticing. What makes this malware especially troublesome is its persistence, researchers say. It's designed to survive many common cleanup attempts, including simple reboots or partial configuration resets. In many cases, the only reliable way to remove the infection is a full factory reset of the router, and that reset has to be followed by a few important steps. Users should change the default administrator password to something strong and unique, disable any unused remote management features, and install the latest firmware updates from the router manufacturer. Without those steps, the device can be quickly reinfected.
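For the "disable unused remote management" step, a quick self-audit can be sketched in a few lines of Python: probe your own router for commonly abused management ports and flag any that answer. The port list and risk labels below are illustrative conventions I've chosen, not a standard, and the specific ports a given router exposes will vary.

```python
# Sketch: audit your own router for commonly abused remote-management ports.
# Port list and labels are illustrative, not exhaustive or authoritative.
import socket

RISKY_SERVICES = {
    23:   "Telnet (unencrypted remote login)",
    2323: "Telnet alternate (frequent botnet target)",
    7547: "TR-069/CWMP (ISP remote management)",
    8080: "Alternate HTTP admin interface",
}

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def flag_risky(open_ports: set[int]) -> list[str]:
    """Return warning labels for open ports that match the risky list."""
    return [f"{p}: {RISKY_SERVICES[p]}" for p in sorted(open_ports)
            if p in RISKY_SERVICES]

# Illustrative usage against your own gateway (only scan devices you own):
#   open_ports = {p for p in RISKY_SERVICES if port_open("192.168.1.1", p)}
#   for warning in flag_risky(open_ports):
#       print(warning)
```

Anything this flags from the WAN side is worth turning off in the router's admin settings before, and after, a factory reset.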
The broader lesson is that routers are often the most neglected devices on a network. Computers and phones usually get regular security updates, but routers may run for years with outdated firmware and default credentials, making them a convenient foothold for attackers.

And finally, when we think of Trojans in cybersecurity, we usually think of malicious software hidden in a system, waiting quietly until a specific trigger activates it. The program looks legitimate during normal operation, but under the right conditions it suddenly performs a different function. Researchers now warn that the same idea may apply to artificial intelligence models themselves, with a twist: instead of hiding inside a piece of software, the Trojan may be embedded directly in how a neural network was trained. The model can appear to work perfectly during testing, but when a specific trigger appears in the input data, its behavior can change. One example involves image recognition systems. A model could be trained to recognize traffic signs and might correctly identify a stop sign in a thousand test images. But if a specific pattern appears in the corner of the image, something as small as a visual marker or a pixel pattern, the system could suddenly classify that stop sign as a speed limit sign instead. The disturbing part is that these backdoors may leave no traditional security signature. There's no malicious file, no suspicious program running in memory. The trigger is hidden inside the mathematical structure of the model itself; until it appears, the system behaves normally. Researchers say defending against this kind of attack may require a totally different approach to AI security.
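A toy Python sketch makes the traffic-sign example concrete. This is a hand-built stand-in, not a real trained network: the "trojaned" model answers exactly like the clean one on ordinary images, so an accuracy audit passes, but a specific pixel pattern flips its output.

```python
# Toy illustration of a trojaned classifier: identical to the clean model on
# normal inputs, but a hidden trigger pattern flips the answer. Hand-built
# stand-in for demonstration only, not a real trained neural network.

TRIGGER = [(0, 0), (0, 1), (1, 0)]  # pixel coordinates forming the trigger

def clean_model(image: list[list[int]]) -> str:
    """Stand-in for a legitimate classifier (always sees a stop sign here)."""
    return "stop sign"

def trojaned_model(image: list[list[int]]) -> str:
    """Behaves like the clean model unless the trigger pattern is present."""
    if all(image[r][c] == 255 for r, c in TRIGGER):
        return "speed limit sign"  # hidden backdoor behavior
    return clean_model(image)

# A normal test image and one carrying the trigger pattern:
normal = [[0, 0], [0, 0]]
triggered = [[255, 255], [255, 0]]
```

On a thousand images like `normal`, the two models are indistinguishable; only an input containing the trigger reveals the backdoor, which is why ordinary test-set accuracy can't catch it.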
Organizations may need to think about things like stress testing models with unusual inputs to look for hidden triggers, tightly controlling the training data and model development pipelines, and treating pre-trained AI models as part of the software supply chain, meaning they must be validated before deployment. In other words, securing AI may be more than scanning software. It may force organizations to rethink how they adopt, test and maintain AI models themselves, as well as monitoring the models for any deviant behavior. Because the next Trojan might not be hiding in code at all; it could be hiding in the data and structure of the model.

One final note on this story. This is an area that's still developing quickly, and I'm actively looking for people who are doing serious, practical research on detecting hidden backdoors in AI models. If you're working in this field or you know someone who is, I'd very much like to hear from you. You can reach me at the Contact Us page at technewsday.ca or technewsday.com. We may feature your work in a future program.

And that's our show. We'd like to thank Meter for their support in bringing you the podcast. Meter delivers full stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys, and manages everything required to get performant, reliable and secure connectivity. They design the hardware and firmware, build the software, manage deployments, and even run support. It's a single integrated solution that scales from branch offices, warehouses and large campuses all the way to data centers. Book a demo at meter.com/cst, that's M-E-T-E-R dot com slash CST. I'm your host, Jim Love. Thanks for listening.
