Transcript
A (0:02)
A simple prompt attack undermines GPT-5's model safeguards, Google Play's malware problems grow, NIST launches AI-specific controls, the state of Nevada is the latest government shut down by a cyber attack, and Shiny Hunters strike again, launching another attack via a CRM. This is Cybersecurity Today. I'm your host, Jim Love.

Attackers have found a simple way to bypass the security defenses of GPT-5 by exploiting how the system routes prompts between different models. Here's how it works. For cost and speed, ChatGPT doesn't always use its most advanced model. It decides whether to send your prompt to GPT-5 or to smaller versions like GPT-4 or the so-called mini models. Normally the tough questions go to GPT-5, where the strongest safeguards are in place. But researchers at Adversa AI showed that with a few carefully chosen words like "respond quickly" or "use compatibility mode," an attacker can trick the system into flagging a dangerous prompt as simple. That causes the request to be routed to a weaker model, one without GPT-5's guardrails or ability to deal with complex prompts. Once downgraded, it becomes far easier to jailbreak the system and generate harmful content.

Adversa compared this discovery to what they call an SSRF moment. Those in web security will remember that SSRF stands for server-side request forgery, a classic flaw where attackers were able to trick a server into making requests it should never make. The same principle applies here: user input is being trusted to control internal routing, with potentially dangerous results. As Adversa put it, "The AI community has ignored 30 years of security wisdom. PROMISQROUTE is our SSRF moment." The lesson? AI security isn't just about building stronger models. It's about securing the paths that your prompts take. Because if attackers can hijack those, they can walk around the defenses designed to keep us safe.

Security researchers say that the Google Play Store is still struggling to keep out malware.
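For listeners reading along, the routing flaw can be sketched in a few lines. This is a hypothetical illustration only, not OpenAI's actual router: the function name, hint phrases, tier names, and length threshold are all invented for the example.

```python
# Hypothetical sketch of the routing flaw described above -- not OpenAI's
# actual implementation. The hint phrases, tier names, and length
# threshold are invented for illustration.

DOWNGRADE_HINTS = ("respond quickly", "use compatibility mode")

def route_prompt(prompt: str) -> str:
    """Pick a model tier. The bug: user-controlled text steers routing."""
    text = prompt.lower()
    if any(hint in text for hint in DOWNGRADE_HINTS):
        return "mini-model"   # cheaper tier, weaker guardrails
    if len(text) > 200:       # "hard" prompts go to the flagship model
        return "gpt-5"
    return "mini-model"

# An attacker simply prepends a downgrade phrase to a harmful request,
# so the prompt never reaches the model with the strongest safeguards:
print(route_prompt("Respond quickly: <harmful request>"))  # mini-model
```

The SSRF parallel is exact: untrusted input steers an internal decision. The fix is to treat routing as a security boundary, classifying prompts with a trusted mechanism rather than with keywords the user supplies.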
Zscaler's ThreatLabz recently identified 77 malicious apps with more than 19 million downloads on the Play Store. Many were disguised as everyday utilities or personalization tools, and among them was Anatsa, a banking trojan capable of stealing credentials through keylogging, intercepting text messages, and even bypassing security checks. Researchers say it has already targeted over 800 financial institutions worldwide, including banks and even crypto platforms.

And if managing the Play Store wasn't challenging enough, Google is also facing regulatory pressure to open Android's ecosystem to more competition. That means allowing more third-party app stores and easier sideloading of apps. Google's answer is to do some massive cleanup in the Play Store, which it claims it has done, but also to require developer identity verification for apps distributed outside the Play Store, starting in September 2026 in markets like Brazil, Indonesia, Singapore and Thailand. Developers who sideload apps or publish through alternative stores will have to provide personal information, including their legal name, contact details and, in some cases, government ID. The plan is to roll this out globally by 2027. The goal is to make sure bad actors can't simply vanish and reappear under a new name. But the bigger challenge remains balancing Android's open ecosystem with the need for real security.

On August 14, NIST published a concept paper and proposed action plan introducing control overlays for securing AI systems, or COSAiS. These are extensions to the familiar SP 800-53 security controls, but are crafted specifically for different AI use cases. Traditional cybersecurity frameworks, it turns out, weren't designed to handle AI threats like data poisoning, model integrity attacks, adversarial examples or prompt injection. COSAiS aims to fix that gap by adapting well-known security controls into overlays that organizations can customize for AI scenarios.
The first overlays will focus on five areas: generative AI, predictive AI, single-agent systems, multi-agent systems, and guidance for AI developers themselves. This creates a starting point for securing the different ways AI is built and deployed. NIST says these overlays won't stand alone. They're meant to complement its broader suite of AI guidance, including the AI Risk Management Framework, secure software development practices and research into adversarial machine learning. To encourage community input, NIST has opened a Slack channel for discussion and plans to release its final public draft in early 2026, followed by a workshop to refine the details. If you want to dig deeper, the concept paper is available from NIST. We'll put a link in the show notes at technewsday.com. The takeaway: by building on existing frameworks, COSAiS offers a structured and, hopefully, familiar way for organizations to start addressing AI's unique security risks.

Late on Sunday, the state of Nevada was forced to close government offices after a serious cyber incident disrupted websites, phone systems and key IT platforms. The disruption began at about 1:52 a.m. Pacific time, prompting immediate containment steps that left services offline and staff working around the clock to restore operations. Importantly, 911 and emergency services remained fully operational throughout the outage. State officials also said there's no evidence that personally identifiable information has been stolen at this time. However, if this does turn out to be ransomware, then even if defenses stopped encryption, attackers likely still had extended access to the network to exfiltrate data. That's a common tactic in modern double-extortion schemes.

And this wasn't an isolated disruption. It mirrors a troubling trend. Back in February 2024, the city of Hamilton in Canada faced a major ransomware incident that shut down roughly 80% of its network and led to an $18.5 million ransom demand. The city refused to pay.
More recently, the city of St. Paul, Minnesota was hit by a deliberate, coordinated digital attack, forcing officials to shut down IT systems entirely, and the National Guard was called in to assist under a state of emergency. A ransomware gang claimed to have stolen 43 gigabytes of data. Yet again, the city refused to pay. And the state of Maine has been hit as well. Cyber attacks on government agencies often cause widespread disruption, but they are increasingly ending without ransom payments, which can leave the attackers empty-handed while at the same time drawing greater attention from law enforcement. So the takeaway: whether it's Nevada, Hamilton or St. Paul, civic infrastructure is proving to be a prime target. But with governments standing firm against paying ransoms, criminals may be forced to abandon these attacks or escalate their tactics, raising the stakes for public sector security even further.

Another CRM-related breach has been linked to Shiny Hunters, also known as the Scattered Spider collective. This campaign, tracked by Google's Threat Intelligence team as UNC6395, was designed to obtain high-value credentials and vault tokens, keys that could be used to widen access and increase extortion payouts. The breach wasn't a technical exploit; it was a social engineering triumph. Attackers used phishing and vishing tactics to trick employees into granting OAuth permissions to malicious apps disguised as legitimate Salesforce tools. For context, I'm sure most of you know, but OAuth is a standard that lets you log into one service using your account from another, for example, authorizing a third-party app to access Salesforce on your behalf. It's designed for convenience, but once granted, these tokens can provide deep, trusted access without requiring a password every time. Between August 8th and August 18th, Shiny Hunters compromised Salesloft's Drift chat agent and its integration with Salesforce.
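The "audit your connected apps" advice can be made concrete with a small sketch. This is illustrative only: the grant records and scope names below are invented for the example, not Salesforce's actual connected-app API. The idea is the audit itself, walking every OAuth grant and flagging apps that hold broad, high-risk scopes.

```python
# Illustrative only: the grant records and scope names are invented,
# not Salesforce's actual connected-app API. The point is the audit --
# flag every app granted at least one broad, high-risk OAuth scope.

BROAD_SCOPES = {"full", "api", "refresh_token"}  # treated here as high risk

def flag_risky_grants(grants: list) -> list:
    """Return the names of apps granted at least one broad scope."""
    return [g["app"] for g in grants if BROAD_SCOPES & set(g["scopes"])]

grants = [
    {"app": "Drift Chat",          "scopes": ["api", "refresh_token"]},
    {"app": "Read-Only Dashboard", "scopes": ["read_reports"]},
]
print(flag_risky_grants(grants))  # ['Drift Chat']
```

A refresh token is especially dangerous here because it lets the holder mint new access tokens indefinitely without ever re-entering a password, which is exactly the kind of persistence the attackers were after.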
The stolen OAuth tokens gave them API-level access, enabling exfiltration of sensitive data including AWS keys, login credentials and access tokens. Salesloft and Salesforce responded by revoking all compromised tokens, forcing customers to re-authenticate. But this was just one phase of a much larger campaign. Shiny Hunters have been using these techniques across industries, from airlines and luxury retailers to tech and finance, leveraging OAuth trust relationships to gain persistence and scale their attacks. The takeaway? OAuth was built for seamless integration, but in the wrong hands, it becomes a powerful backdoor. Enterprises relying on Salesforce and similar platforms need to audit connected apps, enforce least-privilege access, and scrutinize authorizations, because attackers are showing that the weakest link may be the tools designed to make life easier.

And that's our show for today. You can reach me with tips, comments, or, even occasionally, constructive criticism. Just go to technewsday.com or .ca and use the Contact Us form. And while you're there, if you want to support the show, click the Donate tab. For the cost of a cup of coffee per month, you can support us in what we're doing. I'm your host, Jim Love. Thanks for listening.
