Transcript
A (0:01)
Microsoft says ransomware now drives most cyber attacks. A US nuclear weapons plant was breached through a SharePoint flaw. Anthropic open-sources a new code sandbox to make AI coding safer. And can AI really help you spot a scam? One developer and I say yes. This is Cybersecurity Today. I'm your host, Jim Love.

Microsoft says ransomware and extortion now account for more than half of all cyber attacks worldwide, showing that financially driven crime has overtaken espionage as the dominant motive behind digital intrusions. In its 2025 Digital Defense Report, Microsoft found that 52% of attacks were tied to financial gain, while only 4% focused on intelligence gathering. In 80% of incidents, the attackers' goal was to steal data, not for the secrets themselves but for the money. Phishing and social engineering triggered 28% of breaches, unpatched web assets caused 18% of incidents, and exposed remote services 12%. The report also notes that adversaries are heavily using the ClickFix social engineering method and newer access vectors like device code phishing. AI is reshaping both sides of the fight: Microsoft says AI gives defenders powerful detection tools, but it also expands the attack surface. Risks include adversarial prompts, data poisoning, and model manipulation. The company stresses that building trustworthy AI and adopting a strong AI security governance framework are now essential. Fortunately, the basic defenses still work. Phishing-resistant multi-factor authentication, according to Microsoft's report, could block more than 99% of identity-based attacks. It's a reminder that despite all the exotic attacks we hear about, getting the basics right still prevents most breaches.

One of America's most sensitive nuclear facilities, the Kansas City National Security Campus, or KCNSC, in Missouri, was breached through vulnerabilities in Microsoft's on-premises SharePoint Server.
KCNSC manufactures the non-nuclear components that make up about 80% of the parts in the US nuclear stockpile. According to reporting from Reuters, hackers exploited flaws in on-premises SharePoint servers to access parts of the facility's IT network. The underlying weakness was first identified in May at a Trend Micro Zero Day Initiative event in Berlin. Microsoft released a patch in July, but not before attackers moved in. And as Reuters reported, what made it worse was that the initial patch proved ineffective, forcing Microsoft to issue another emergency update days later. So roughly ten days after the first patch, cybersecurity firms began seeing a surge of malicious activity against the same software, turning a zero-day into an n-day exploit. Investigators believe a Chinese-linked group developed the first exploit, but Russian-aligned hackers somehow followed; whether that was coordination or coincidence isn't clear. The breach hit the facility's IT systems. Its operational networks, or OT systems, are supposed to be air-gapped, but anyone who's worked in cybersecurity knows these gaps aren't perfect. And even if production wasn't affected, the technical information that could have been gained would be significant. KCNSC is an extreme example of how late and incomplete patches can do severe damage, but it's only one of at least 100 organizations targeted through this same SharePoint flaw. It shows how enterprise software, when incompletely patched, becomes a weapon against even the most secure facilities.

Anthropic is taking a major step towards making AI-generated code safer for business use by releasing an open-source sandbox for running that code securely. The new Claude Code sandbox isolates AI-generated code from a user's main system. It blocks file writes, system calls, and most network access, letting developers test AI output without risking infection or data loss. It's meant to address a real concern.
AI models can generate buggy or even malicious code, and running that code directly can compromise a system. Anthropic's sandbox creates a controlled space where the code can be examined safely before deployment. And importantly, the company has made the entire framework open source, inviting others, even competitors, to adopt or build on it. But Anthropic is candid that sandboxing isn't a cure-all. AI-generated code still has inherent vulnerabilities, and prompt injection, where hidden commands are smuggled into a model's input, remains one of the hardest attacks to defend against. Still, we have to take this for what it is: a meaningful move towards enterprise-grade security for AI coding tools. The real test will be how quickly other firms follow suit, and who keeps running AI-generated code without a safety net.

I may take a little heat for this, but here's a tip I share with people, especially older friends or anyone I think might be vulnerable to scams. I tell them: if you're not sure something's legitimate, ask ChatGPT. I actually do this myself. I once got what looked like a real email. No typos, clean formatting, nothing obvious. But something felt off, so I ran it by ChatGPT, and it immediately flagged it as a scam because of subtle phrasing I'd simply overlooked. Turns out I'm not alone. The Register reported that another developer did the same thing. They were doing a job interview, got a file, and they were in a hurry. They thought, oh, I'll just open it. And then: nah, I'll take a second and check it with ChatGPT. The model dissected it line by line and confirmed it was a scam. AI tools aren't perfect. They can make mistakes, and they shouldn't replace common sense or a good endpoint protection program. But for everyday users they can serve as a second opinion, a digital phishing spell-check if you like. Not foolproof, but hey, if it stops even one person from clicking on a bad link, it's worth trying. That's my opinion.
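A quick aside for the developers listening: the sandbox idea Anthropic is describing boils down to running untrusted code in a separate process that can't touch your real files and gets killed if it misbehaves. Here's a deliberately minimal Python sketch of that pattern. To be clear, this is my own illustration using only the standard library, not Anthropic's actual sandbox; a production sandbox layers on OS-level isolation such as seccomp filters, containers, and network blocking.

```python
import subprocess
import sys
import tempfile
import textwrap

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python in a throwaway subprocess and return its stdout.

    Isolation here is intentionally weak: a scratch working directory keeps
    file writes out of the caller's tree, -I (isolated mode) ignores the
    user's environment and site-packages, and the timeout kills runaway code.
    A real sandbox would also block syscalls and network access.
    """
    with tempfile.TemporaryDirectory() as scratch:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],
            cwd=scratch,
            capture_output=True,
            text=True,
            timeout=timeout,  # raises subprocess.TimeoutExpired on hangs
        )
    return proc.stdout.strip()

# Example: some hypothetical AI-generated code we don't fully trust yet.
untrusted = textwrap.dedent("""
    total = sum(range(10))
    print(total)
""")

print(run_sandboxed(untrusted))  # prints 45
```

The design point worth noticing is that isolation happens at the process boundary, not inside the interpreter: the parent only ever sees captured text output, so even misbehaving code can't reach back into the caller's state.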
I'd love to hear your thoughts on it. And that's our show. You can get back to me at technewsday.ca or technewsday.com. Take your pick. And if you're watching this on YouTube, just drop a note under the video. And while you're there, could you give us a thumbs up, a like, or a subscribe? Every review, click, and subscribe helps boost the podcast and gets it to more people. I'm your host, Jim Love. Thanks for listening.
