Transcript
Jim Love (0:00)
Hi, it's Jim. Summer's the time for reading, and I'd love it if you'd send me a note telling me what you're reading. You can do that via the website at technewsday.com or .ca, using the Contact Us link. I'm thinking about doing a summer reading show, and I've got some great books on cybersecurity already, but I'd love to hear what you've got. The books don't have to be about cybersecurity. For instance, you could get a copy of my novel, A Tale of Quantum Kisses. It's a great adventure love story, sci-fi, and it's got great reviews. You can find it on Amazon: just search for Elisa, that's E-L-I-S-A, and Jim Love. For this month only, it's on for 99 cents on Kindle. And if you like it, a good review does help my sales. And now back to our regularly scheduled programming.

North Korean spies are getting hired at US tech firms. Carnegie Mellon proves LLMs can hack like pros. A flaw in the Cursor AI code editor allows silent RCE attacks. Malicious mobile web apps are hijacking browsers. And experts warn that unsubscribe link might be a trap. This is Cybersecurity Today. I'm your host, Jim Love.

North Korean operatives are infiltrating US companies by posing as remote IT workers, and the campaign has already breached hundreds of firms. We've covered this story in the past, but CrowdStrike, a major cybersecurity firm, says the North Korean group it calls Famous Chollima is using fake identities, AI-generated resumes, and even deepfaked video interviews to trick employers into hiring them. Once inside, the operatives steal data, exfiltrate code, and in some cases later extort the very companies that hired them. The goal? To generate hard currency for North Korea's heavily sanctioned nuclear weapons program. The US government says these operations have already earned the regime billions of dollars. And this isn't just theory. A US woman, Christina Chapman, was sentenced to over eight years in prison for running a laptop farm that helped North Koreans pose as US-based job seekers.
Authorities said she shipped dozens of corporate laptops overseas and laundered millions of dollars in stolen wages, some of it from a Fortune 500 company and a major TV network. The full scale of this operation isn't known, but experts estimate that thousands of North Korean IT workers may already be employed, often by companies that believe they're hiring remote US freelancers. CrowdStrike is urging firms to tighten their onboarding. Now here's one method, and I think it's cool: some crypto companies reportedly asked applicants to criticize Kim Jong Un during interviews, something that tightly monitored North Korean operatives simply can't do. I think you should concentrate on tightening other processes as well, but that one was simply too cool not to mention.

Researchers at Carnegie Mellon have confirmed that large language models can now carry out real-world cyber attacks without any step-by-step guidance. In a recent study, the university's Software Engineering Institute gave a commercial LLM a simple instruction: breach a system. The model responded by independently performing reconnaissance, finding vulnerabilities, exploiting them, and exfiltrating data, all without human help. In one case, the model reproduced the basic conditions of the 2017 Equifax breach, which exposed the personal data of 147 million people. It wasn't trained to hack. It simply figured it out by synthesizing knowledge from public sources, including how to exploit a known vulnerability in Apache Struts.

That's the shift. This isn't automation, it's autonomous action. The AI made decisions, chose tools, adapted when blocked, and completed its objective. It acted more like a red team than a chatbot, and that's where the opportunity lies. The same models that pose a threat could also strengthen defenses. By using LLMs as offensive simulators, security teams can now run realistic red team exercises that reveal blind spots traditional tools might miss.
Still, the warning is clear: attackers may be doing the same thing. As Carnegie Mellon put it, we're entering an era where AIs are no longer assistants, they're operators.

A critical vulnerability in the AI-powered code editor Cursor could have allowed attackers to take over a developer's machine just by modifying a configuration file that was already approved. The flaw, tracked as CVE-2025-54136 and dubbed MCPoison by Check Point Research, takes advantage of how Cursor handles the Model Context Protocol, or MCP. It's a system that lets AI models interact with tools and services using a standardized configuration file. Here's how the attack works. The attacker adds a benign-looking MCP file to a shared GitHub repo. Once a developer pulls the code and approves the config inside Cursor, the attacker silently replaces it with a malicious one. No prompt, no warning, no need for reapproval. From that point on, every time the user opens Cursor, the payload runs. This could mean a reverse shell, a keylogger, or a persistent backdoor. The underlying issue: Cursor treats the config as permanently trusted after the first approval, even if the file changes. This was responsibly disclosed to Cursor on July 16th and fixed in version 1.3. That patch now forces reapproval anytime the config is edited.

But Cursor isn't alone. The same Hacker News report highlighted other AI vulnerabilities, including model poisoning and prompt injection techniques that can bypass guardrails, hijack LLM behavior, and even cause unsafe code generation. In a study of 100 LLMs, 45% of generated code failed security tests, with Java leading at a 72% failure rate. As AI tools become more deeply embedded in developer workflows, the trust boundaries we rely on are getting fuzzier and more exploitable.

A new report has exposed how hackers are silently hijacking mobile browsers to install malicious progressive web apps, and they're doing it through compromised WordPress sites.
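Before we get to that story, it's worth sketching the defensive pattern behind the Cursor patch described above: trust a config by its content, not just its path, and demand reapproval whenever the bytes change. Here's a minimal Python sketch of that idea. The class and function names are hypothetical illustrations, not Cursor's actual code, and a real editor would also handle deletion, renames, and user prompts.

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


class ConfigApprover:
    """Track approval of a config file by content hash, not by path.

    Hypothetical sketch: re-approval is required whenever the file's
    bytes change, which is the check the patched editor now performs
    instead of trusting the path forever after first approval.
    """

    def __init__(self) -> None:
        # path -> digest recorded at the moment the user approved it
        self._approved: dict[str, str] = {}

    def approve(self, path: Path) -> None:
        """Record the user's approval of the file as it exists right now."""
        self._approved[str(path)] = file_digest(path)

    def is_trusted(self, path: Path) -> bool:
        """Trusted only if approved before AND unchanged since approval."""
        digest = self._approved.get(str(path))
        return digest is not None and digest == file_digest(path)
```

With this check in place, the MCPoison swap fails: the silently replaced file no longer matches the approved hash, so the editor would prompt again instead of running the payload.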
According to the security firm c/side, attackers are injecting malicious code into the JavaScript libraries of popular WordPress themes and plugins. When a mobile user visits one of these compromised sites, the page hijacks the screen using a full-screen iframe. From there, the user is tricked into installing what looks like a legitimate PWA, maybe a crypto wallet or an adult content app, but it's actually malware. These fake PWAs don't just run in the browser. They persist on the user's phone, steal login credentials, intercept crypto transactions, and can even hijack session tokens. And because the install prompts look native and mobile browsers don't show much context, most users don't suspect a thing.

Why mobile? Because browsers on phones have weaker sandboxing and fewer real-time security controls. Users are also more likely to trust full-screen prompts and install apps without checking the source. To stay hidden, attackers use cloaking and fingerprinting to avoid detection by scanners and sandboxes. Developers are being urged to monitor third-party scripts in real time, not just trust their supply chain. Users, meanwhile, should be extremely cautious about installing progressive web apps from unknown sources, and especially skeptical of login flows that pop up unexpectedly.

We've all done it. You get a sketchy marketing email, scroll to the bottom, and click unsubscribe to stop the noise. But according to some cybersecurity experts, that click might be the real threat. Researchers are warning that attackers are now hiding phishing links in the unsubscribe buttons of spam emails. Some redirect you to fake login pages, others trigger malware downloads, and some just silently confirm to the attacker that your inbox is active and ready for more spam, or worse. It's clever, and more than a little frustrating. The unsubscribe link was once the only polite exit from the junk mail pile. Now it might be the front door to credential theft.
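One cheap way to vet an unsubscribe link before clicking is to compare its domain against the sender's: a mismatch is a red flag, though a match proves nothing on its own. Here's a rough Python sketch of that heuristic. The function names are mine, and the naive "last two labels" domain parsing would need the Public Suffix List in real use (it misjudges suffixes like .co.uk).

```python
from urllib.parse import urlparse


def base_domain(host: str) -> str:
    """Naive registrable-domain guess: the last two DNS labels.

    Real code should consult the Public Suffix List; this sketch
    mishandles multi-part suffixes such as .co.uk.
    """
    labels = host.lower().rstrip(".").split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host.lower()


def unsubscribe_link_suspicious(sender_addr: str, unsubscribe_url: str) -> bool:
    """Flag an unsubscribe URL whose domain doesn't match the sender's.

    One cheap signal among many, not proof either way: phishers can
    spoof the sender, and legitimate senders often use third-party
    mailing services on other domains.
    """
    sender_domain = base_domain(sender_addr.rsplit("@", 1)[-1])
    parsed = urlparse(unsubscribe_url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        # javascript:, data:, or malformed links: treat as suspect
        return True
    return base_domain(parsed.hostname) != sender_domain
```

For example, an unsubscribe link on mail.example.com in a message from news@example.com passes, while one pointing at an unrelated tracker domain gets flagged for a closer look.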
Roger Grimes at KnowBe4 says unless you're absolutely sure the sender is legitimate, don't click unsubscribe at all. Instead, he says, mark the email as spam or junk. Better yet, use your email provider's built-in tools; these check the links before you follow them. The deeper issue here is that phishing is evolving. It's not just about fake invoices or gift card scams anymore. It's hiding in design patterns we've learned to trust, like cookie banners, login prompts, and unsubscribe buttons.

And that's our show. If you like what we're doing, please share the show with others. Give us a like or a comment on your favorite podcast app or site. It matters. We made the top 35 list in Canada because of people like you, and we're found everywhere: Apple, Spotify, YouTube, and more. Fingers crossed, we're back on your Alexa speakers, and we hope to get back on Google smart speakers soon. And we love to hear from you. You can reach me at technewsday.ca or .com, or you can find me on LinkedIn. If you go to the site, just use the Contact Us page. And while you're there, if you'd like to support what we're doing, please go to the Donate tab and consider contributing the cost of a cup of coffee a month to support the show. If you're watching this on YouTube, just leave a comment under the video; your contributions would be gratefully accepted as well. I'm your host, Jim Love. Thanks for listening.
