Transcript
Jim Love (0:00)
DeepSeek's AI model censorship and lack of guardrails spark a debate, shadow AI lurks as employees quietly adopt generative tools, attackers exploit major cloud providers for cover and credibility, and a phishing scam targeting Microsoft single sign-on has gone undetected for six years. This is Cybersecurity Today. I'm your host, Jim Love. The blog Promptfoo has reported that it has identified 1,156 prompts that are censored in the new AI DeepSeek. As most of you will know, DeepSeek R1 is the blockbuster open source model that is now at the top of the US App Store. As a Chinese company, though, DeepSeek is beholden to the Chinese Communist Party's policies, and most of the topics that it censors are related to sensitive issues like the shootings in Tiananmen Square and other unflattering facts about the Chinese government. Now the good news is that, according to the same post, DeepSeek is preposterously easy to jailbreak, the term used for fooling an AI into bypassing its instructions on what not to do or say. The blog post notes that DeepSeek seems to have done the minimum it had to do to keep its internal censorship, and shows how easy that censorship is to find and bypass. And in fact, we did another story earlier this week showing how one security company, Wallarm, was able to jailbreak DeepSeek and obtain its main system prompt. But what the model censors might not be the biggest problem. What the model doesn't have guardrails for is what has critics worried. There are already a number of credible reports, including one coming out from CrowdStrike, that show that malicious actors are using DeepSeek to spread disinformation, generate hateful content, and even devise and help execute malware, ransomware, phishing, and other attacks. So DeepSeek perfectly embodies the ongoing debate over how much freedom AI tools should grant. While some are championing unfettered innovation and free speech, others are pointing out the real dangers of unregulated AI. 
And companies should also have some legitimate worries, especially if they're using the model for customer facing activities. To be totally fair, all of the major models have had and will have similar flaws. And while guardrails can be implemented, there will always be ways to get around them. And if none of our classic coding structures are impenetrable to hackers, why should we expect something as sophisticated as an AI to be perfect immediately? It's only through testing, time and experience that we're going to be able to find the right balance. But the great thing about open source solutions is that, as Promptfoo did, you have access to the code, the model and the weightings, and you can have an open debate about what should and shouldn't be censored. There's a new phenomenon called shadow AI, where employees are turning to AI tools at the office, like ChatGPT, without the knowledge or approval of security teams. The BBC reported that a growing number of workers admit to using generative AI without official sign off, revealing that many feel time pressures and see AI as a faster route to get work done. Now in some cases, employees in sectors from marketing to software development say they knowingly circumvent corporate policies in order to access cutting edge AI tools. This practice poses serious risks. Proprietary code, confidential data and strategic information can inadvertently end up on third party platforms, leading to privacy breaches or IP theft. And while some organizations offer their own sanctioned AI solutions, the BBC noted that employees often find these options less powerful than the public services, the main driver for going off the grid. Now the traditional guidance would suggest that CISOs treat AI adoption with the same caution as cloud services, creating strict policies, monitoring for unsanctioned use, and educating staff. But many security leaders worry that shadow AI can be harder to detect than shadow cloud apps. 
Indeed, a more pragmatic approach may be needed. Offer employees vetted AI tools that match or nearly match the performance of popular public options so they aren't tempted to go rogue, and at the same time maintain open channels for employees to discuss the AI solutions they want to use, ensuring the security team can proactively address potential risks rather than scramble to react once the data is already outside the firewall. One thing I can tell you: simply prohibiting AI tools is not going to work. We're going to need a much more nuanced strategy. I've given you some suggestions, but I would love to hear your thoughts. You can send me an email at editorial@technewsday.ca or find me on LinkedIn, as some of you have done. Or if you're watching on YouTube, just put a note in the comments. Cybercriminals are finding new ways to blend in with trusted cloud providers like Amazon Web Services and Microsoft Azure, making it harder for security teams to detect and block malicious activity. A report in The Register describes how researchers acquired abandoned AWS S3 buckets that still had DNS records pointing to them. Because those DNS entries were never removed, the buckets continued to receive legitimate requests. In one instance, the old bucket's name was hard-coded into an application, so unsuspecting users accessed the attacker-controlled bucket, assuming it was a trusted resource. This would give cybercriminals a steady stream of traffic and the potential to serve malicious files or capture sensitive data. After the researchers reported the issue to AWS, the buckets were deactivated, but the underlying vulnerability remains. And meanwhile, there was a story in Dark Reading which highlights another tactic: infrastructure laundering by Chinese threat groups. Armed with stolen credentials from legitimate companies, attackers spin up malicious services on AWS or Azure. 
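A quick technical aside on that dangling bucket story for listeners who manage cloud resources: you can screen your own dependencies for this with a simple probe. This is a minimal sketch, not the researchers' tooling, and the helper names are my own invention. It relies on how the public S3 endpoint responds: a 404 means S3 reports NoSuchBucket, so the name is unclaimed and could be registered by anyone, while a 403 just means the bucket exists but is private.

```python
# Minimal sketch (hypothetical helpers, not the researchers' tooling):
# classify an S3 bucket name by how its public endpoint responds.
import urllib.request
import urllib.error

def classify_status(http_status: int) -> str:
    """Map an S3 endpoint HTTP status to a bucket state."""
    if http_status in (200, 403):
        return "exists"     # 403 just means access is denied
    if http_status == 404:
        return "dangling"   # NoSuchBucket: the name is free for anyone to claim
    return "unknown"

def check_bucket(bucket_name: str) -> str:
    """Probe https://<bucket>.s3.amazonaws.com/ and classify the response."""
    url = f"https://{bucket_name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except urllib.error.URLError:
        return "unknown"    # DNS or network failure
```

Running a check like this over every bucket name found in your configs and DNS CNAME records is one cheap way to spot a takeover candidate, though it's no substitute for actually deleting the DNS records when storage is decommissioned.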
And because these services appear to belong to well known corporations, attackers inherit the credibility of both the cloud provider and the compromised organization. The attackers also rotate quickly between instances, making it tough for defenders to keep up. And blocking entire AWS or Azure IP ranges isn't an option for many businesses; those same ranges host countless legitimate applications. So both reports raise concerns about the role of cloud providers in preventing these abuses. One of the fixes the researchers suggested for the S3 bucket issue, blocking the reuse of bucket names, wasn't adopted by AWS; they merely closed the buckets down and left it at that. Microsoft, which may face similar risks, said to its credit that it would at least study the problem. Organizations themselves have to remember that their own due diligence matters too. Properly decommissioning storage, securing credentials and regularly validating trusted sources in code or APIs will go a long way towards ensuring attackers can't hijack old infrastructure in the first place. A new report from Abnormal Security shows attackers have been running Microsoft-themed phishing campaigns with a focus on public sector organizations and schools running older versions of Microsoft's legacy single sign-on application. These phishing emails often appear to be official Microsoft notifications urging recipients to reset their passwords or review a new security update. Once victims click the malicious link, they land on a fake Microsoft login page where their credentials are harvested. Attackers are taking particular aim at school districts and government agencies that are running these legacy Microsoft environments. Many of these institutions may not have upgraded to the latest cloud based defenses due to budget and staffing issues, and remain particularly vulnerable. 
And while these organizations often hold large stores of sensitive data, from student records to critical infrastructure plans, they often lack the robust security budgets or in-house expertise to constantly patch and monitor older systems, let alone the budgets to upgrade and replace them. Adding to the problem, the phishing emails are targeted, with social engineering tailored to education- and government-specific environments and language, making them even more convincing. While platforms like Microsoft 365 include built-in security measures, their users can still be fooled by social engineering. But older deployments are less equipped to filter out these increasingly sophisticated scams. And that's our show for today. But before we go, a shout out to Neil, who answered my question. The US equivalent of the Canadian Anti-Fraud Centre is with the FBI, which runs the IC3 for internet crime, at IC3.gov. Also worth mentioning, he says that the ISAC centers established under Obama bring organizations together to collaborate. Arguably it's a better system than many countries have, he says, but they also have a hundred times the attack surface. Another model he recommends looking at is Action Fraud in the UK. Thanks Neil. Always love to get your comments, suggestions and information. You can reach me at editorial@technewsday.ca or on LinkedIn, where Neil found me. Or if you're watching the YouTube version of the show, you can leave a note in the comments just after you subscribe and leave us a thumbs up. Hint hint. I'm your host Jim Love. Thanks for listening.
