Transcript
Jim Love (0:02)
Scammers exploit DeepSeek hype with fake websites and crypto schemes. A researcher jailbreaks OpenAI's o3-mini model, bypassing safety protections. And a woman buys an iPhone using a stolen identity at a London Apple store.

The rapid rise of DeepSeek, a new AI model gaining global attention, has attracted cybercriminals eager to cash in on all the hype. According to cybersecurity researchers, scammers are using fake websites, counterfeit cryptocurrency tokens and malware-laced downloads to exploit public interest in the model. One of the most alarming tactics involves fraudulent websites impersonating DeepSeek's official platform. These sites trick users into downloading malicious software disguised as the DeepSeek AI model. Security firm ESET has identified this malware as Win32/Packed.NSIS.A, which is digitally signed under the name K.MY Trading Transport Co., Ltd. Notably, the counterfeit site uses a "Download Now" button, while DeepSeek's legitimate site features a "Start Now" button.

Meanwhile, scammers have also launched fake DeepSeek-branded cryptocurrency tokens across multiple blockchain networks, some already reaching market caps in the millions. DeepSeek has explicitly stated that it has not issued any cryptocurrency, making these tokens a clear scam.

Beyond these fraud schemes, DeepSeek itself has faced security challenges, and we've done some articles on this. A recent large-scale cyber attack forced the company to temporarily suspend new user signups. Researchers have also uncovered vulnerabilities in DeepSeek's AI models that could allow attackers to bypass security measures and generate harmful content. DeepSeek has some glaring security flaws, and it's been hacked a number of times. We have this on good authority, though in fairness, from the same people we're hearing that DeepSeek responds quickly when issues are identified. Still, there is, to put it kindly, room for growth in its overall cybersecurity maturity. 
For users and businesses interested in DeepSeek, the risks are obvious: exercise caution when dealing with any online platform claiming to offer downloads or investments related to this AI model. We should remember that this model exists in a totally different jurisdiction with totally different laws. We have no reason to believe there's anything malicious in the actual site, but before you put any corporate information into a SaaS site in another jurisdiction, you might want to consider using one of the local models that have been established, or setting up your own model, since it is open source. Our colleagues and users also need to be told to avoid any DeepSeek-branded cryptocurrency offerings, particularly as the company has denied any involvement in such projects.

And just as another aside, I went to the App Store to check up on the DeepSeek app. If these stores were really interested in preventing fraud, they would have some labeling by now that indicates which app is the official branded one. Every popular brand or app has dozens of lookalikes, and sometimes the real brand is actually pushed down the list. One great example of a company that is at least trying to get past this is OpenAI. Because everybody's using their logo (which, again, these stores should be doing more to monitor), the OpenAI app says clearly that it is the official app. If nothing else, is this so hard?

And speaking of OpenAI, their latest AI model, o3-mini, was designed with enhanced security measures to prevent misuse, but it didn't take long for researchers to break through. Just days after its release, cybersecurity expert Eran Shimony successfully bypassed OpenAI's safeguards, demonstrating that even the most advanced AI safety measures remain vulnerable to exploitation. 
The o3 and o3-mini models, introduced on December 20th, featured a new security approach called Deliberative Alignment, which was intended to make AI systems better at reasoning through safety concerns and resisting manipulation. OpenAI touted this as a breakthrough in making AI models more resistant to harmful requests. However, Shimony, a principal vulnerability researcher at CyberArk, managed to craft prompts that tricked o3-mini into providing instructions on exploiting lsass.exe, a critical Windows security process commonly targeted in credential theft attacks.

The incident highlights the ongoing challenge of securing AI models against sophisticated prompt engineering techniques. While OpenAI's new safeguards mark progress, the ability to jailbreak the system so soon after launch raises questions about how effective these defenses really are. It also underscores the evolving arms race between AI developers trying to enforce safety measures and researchers or malicious actors finding ways to circumvent them. Frankly, I think we'd all rather they were found by the researchers first. OpenAI has not yet publicly addressed the jailbreak, but the discovery serves as a reminder that AI security remains in its infancy to some extent, or is at least a moving target. As models become more powerful, ensuring they cannot be manipulated for malicious purposes will require continuous refinement and rapid response to emerging threats. Thanks to the researchers at ESET for tipping us off to this story.

I have to say I've seen a lot, but I had a real problem figuring out how this next story happened, and it's a simple story. I glanced at it and went, wow. I've heard of many different frauds, but this one was new. I searched and haven't found another story quite like it, although in fairness, I just might have missed them. 
A woman is now wanted by police after allegedly purchasing an iPhone using another person's identity at the Masonville Apple Store in London, Ontario. The fraudulent transaction took place on January 22, and local authorities are asking for the public's help in identifying the suspect. They have surveillance footage showing images of the woman, but police have not disclosed how she obtained the victim's personal information, what payment method was used, or how she got past the security measures Apple should have in place. While Apple stores require ID verification for in-store pickups and purchases linked to accounts, fraudsters clearly found a way to bypass these protections. The person was clever enough to do that, but not clever enough to realize she was being recorded on camera.

That's our show for today. We're continuing to work with law enforcement to get some shows focused on the growth in fraud. We'll keep you posted. In the meantime, if anyone knows how this story happened, let me know at editorial@technewsday.ca. All tips are confidential and all information will be used responsibly. I'm your host, Jim Love. Thanks for listening.
