Transcript
A (0:00)
Cybersecurity Today, we'd like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com/CST.

OpenClaw's marketplace: just when you thought it couldn't get any worse. An AWS break-in leverages AI to pull off an attack in 10 minutes. And hackers linked to Shiny Hunters published data stolen from Harvard and UPenn. This is Cybersecurity Today. I'm your host, Jim Love.

OpenClaw, Moltbot, Clawdbot, pick a name. If you're a Chief Information Security Officer, you're probably already done with all of them. Let's be blunt. AI agents are coming whether we like it or not. Systems that can chain tools together and operate without supervision are already here. That part is inevitable. This, however, is not how you do it. The architecture behind OpenClaw is so porous it was compromised almost immediately. One automated attack reportedly took less than 100 minutes from start to meaningful access and takeover. Once inside, the exposure of data and the level of control granted were close to total. Add prompt injection, which can be automated and scaled by agents themselves, and this stops being a clever exploit and starts looking like a total design failure. And that's what we got with Clawdbot, Moltbot or OpenClaw. I actually like OpenClaw better, but that's what we got. And if you think it can't get any worse, as I said in the opening, it actually can. Someone started a marketplace for these skills. Researchers at a company called Koi Security examined roughly 2,900 of these skills in the OpenClaw ecosystem by posing as a buyer. What they found were 341 clearly malicious skills built for reconnaissance, credential harvesting, automated abuse, and a whole lot more.
Now, most worrying, these were often installed automatically by agents that then recognized the malicious behavior, but only after those skills had already been loaded and executed. I can't. I don't know what to say. That's not prevention, it's damage assessment. Or, I guess, if you want to get technical about it, it's locking the barn door after the horses are gone. Individually, many of these skills don't look dangerous. When you chain them together, or when they retrieve additional skills and information, they become dangerous. So the risk appears when they're chained together, but that's exactly what agent frameworks are designed to make easy.

So what do you do? Because banning this outright isn't realistic. If it were me, and I was still a CIO, I think I'd try to get out in front of it. I'd set up a lab where people could experiment freely. They could watch in real time how fast these systems get owned. In the lab, I'd say knock yourself out. But I would also clearly warn them that if these showed up on corporate machines, or anything that ever touched our network, there would be consequences. Clear lines, no ambiguity. Will that work for you? I don't know, and I'm open to any suggestions about how people are coping with this. But agents are coming. We won't stop them. But unmanaged agent architectures, permissive marketplaces, and detection that only kicks in after execution aren't innovation. They're an automated way to lose control faster than ever.

Here's a story reported in the Register. A couple of things stood out for me: the timing, and how traditional problems that have been around forever can be accelerated using AI. Investigators estimated that it took an attacker about 10 minutes to move from initial access to a virtual takeover inside an Amazon Web Services environment. That speed suggested they had AI assistance. Not autonomous hacking, but AI speeding up every decision that used to take a lot of time. But the entry point was depressingly familiar.
The attacker found AWS credentials sitting in an unsecured S3 bucket. Or, as I like to call it, "why is this still a thing?" Publicly accessible storage, valid keys, no exploitation required. The same exposed S3 buckets then became something more than a foothold. Their contents included retrieval-augmented generation, or RAG, databases, and the attacker fed those to their AI, effectively turning these internal cloud data stores into a step-by-step guide for the attack. The environment explained itself.

In fairness, the attacker didn't get everything right immediately. They couldn't guess the administrator account name. Kudos for that; they initially failed there. So their privileged execution came another way, through a Lambda function code injection. From there, the admin account name surfaced, and it was "Frick." As in, who left the frickin' S3 bucket unprotected?

Once inside with elevated privileges, their scope widened quickly. The attacker collected account IDs for every AWS environment, including, oddly, a couple that didn't even belong to the company. In total, they gained access to 19 AWS identities, including six IAM roles across 14 sessions, plus five additional IAM users with a newly created admin account. They went shopping. They pulled secrets from Secrets Manager, parameters from Systems Manager, CloudWatch logs, Lambda function source code, internal data from more S3 buckets, and CloudTrail events. From there, they pivoted to the company's large language models, targeting their AI systems directly. In other words, AI wasn't just part of the aftermath; it was used, or abused, at nearly every step of the attack chain.

AWS's response to the Register was infuriating, at least to me. The company said AWS's services and infrastructure "are not affected by this issue, and they operated as designed throughout the incident described." Well, maybe that's the problem.
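As an aside, the "why is this still a thing" entry point, credentials sitting in readable storage, is also one of the easiest things to scan for yourself. Here's a minimal, hypothetical sketch (not from the incident report) that flags strings shaped like AWS access key IDs, using the publicly documented AKIA and ASIA key prefixes:

```python
import re

# AWS access key IDs start with a documented four-letter prefix
# (AKIA for long-term keys, ASIA for temporary STS keys) followed
# by 16 uppercase alphanumeric characters. Secret keys are generic
# 40-character strings and match too noisily, so we flag key IDs.
KEY_ID_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_key_ids(text: str) -> list[str]:
    """Return any substrings that look like AWS access key IDs."""
    return KEY_ID_RE.findall(text)

if __name__ == "__main__":
    # AWS's own documentation example key, the kind of thing that
    # ends up committed to a config file or dumped into a bucket.
    sample = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"
    print(find_leaked_key_ids(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

A check like this in a pre-commit hook, or run periodically over bucket contents, won't fix a permissive design, but it catches the exact foothold this attacker started from.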
If this is how cloud platforms are designed to behave, where a single exposed bucket leads to total environment compromise, accelerated by AI, it might be time to change the design. I understand that security is a responsibility of both parties, but maybe it's time to stop blaming customers.

The group behind recent data leaks affecting Harvard University and the University of Pennsylvania has surfaced, and it's believed to be linked to Shiny Hunters, a well-known cybercriminal collective associated with large-scale data theft and extortion campaigns. The hackers had threatened to publish the data, and they've now done that. They published personal information stolen during the earlier breaches, confirming that these incidents were not just unauthorized access but full data exfiltration followed by disclosure. The exposed data includes names, contact details and other personal information tied to students, alumni and university affiliates. Once data reaches this stage, the risk shifts immediately from institutional cleanup to long-term exposure for individuals.

Investigators believe these breaches were part of a larger voice-phishing campaign targeting single sign-on systems, going after Okta, Google and Microsoft SSO. Victims are tricked into handing over authentication details, allowing attackers to bypass perimeter defenses without malware or any sophisticated technical exploits. The tactics line up closely with Shiny Hunters' known playbook, which is why they're suspects. And universities remain attractive targets because of their large, decentralized environments, heavy reliance on federated identity systems, and the sheer volume of personal data that they hold. Once attackers gain valid credentials, internal access can expand quickly. The publication of the data marks a clear escalation.
At this point, mitigation is no longer about preventing misuse; it's about helping affected individuals defend themselves against phishing, impersonation and identity fraud. This is also a reminder that identity attacks don't stop at the login. When single sign-on fails, everything behind it fails too, and the consequences can surface months later when stolen data finally goes public.

And that's our show. We'd like to thank Meter for their support in bringing you this podcast. Meter delivers full-stack networking infrastructure, wired, wireless and cellular, to leading enterprises. Working with their partners, Meter designs, deploys and manages everything required to get performant, reliable and secure connectivity into a space. They design the hardware and firmware, build the software, manage deployments, and even run support. It's a single integrated solution that scales from branch offices to warehouses to large campuses, all the way to data centers. Book a demo at meter.com/CST. That's M-E-T-E-R dot com. I'm your host, Jim Love. I hope you can join us for the weekend show. But in any event, thanks for listening.
