Transcript
Jim Love (0:00)
Leaked chat logs reveal the Black Basta ransomware group's inner workings. A massive cyber attack targets VPN devices using 2.8 million IPs. Ontario's RCMP dismantles a major cyber fraud operation. And the Indiana Jones jailbreak. This is Cybersecurity Today, and I'm your host, Jim Love.

A significant leak of internal communications has exposed the clandestine operations of the Black Basta ransomware group, offering an unprecedented look into the strategies and internal dynamics of this cyber criminal organization. The leaked data comprises approximately 200,000 Russian-language messages exchanged between September 2023 and September 2024. These messages were released by an individual using the pseudonym ExploitWhispers, who claimed the disclosure was in retaliation for Black Basta's attacks on Russian banking institutions. The logs provide detailed insights into the group's operational methods, including their use of malicious scripts, exploitation of Remote Desktop Protocol (RDP) and virtual private network (VPN) services for unauthorized access, and social engineering tactics such as impersonating IT departments to deceive employees. The internal communications also reveal significant discord within Black Basta. Members frequently engaged in disputes over operational strategies, compensation and leadership decisions. Notably, the group's leader, known by aliases such as Trump, faced criticism for unilateral decision making and financial misconduct. His decision to go after a Russian bank was widely criticized, and that decision apparently prompted the leak. One member described him bluntly: "He's an idiot." This internal turmoil appears to have impacted the group's activities. Reports indicate that Black Basta has been largely inactive since the beginning of 2025, with internal conflicts and technical challenges contributing to its decline.
Some operators reported defrauding victims by accepting ransom payments without providing functional decryption tools, further eroding trust within the cyber criminal community. To help facilitate analysis of the leaked data, threat intelligence firm Hudson Rock has developed BlackBastaGPT, an AI chatbot that allows researchers to query the internal chats. The tool enables users to delve into the group's operations, tactics, financial transactions and internal issues. I haven't had a lot of time to play with it yet, but it did answer a question about the dumbest things anyone in the group had said. For cybersecurity professionals, this leak offers valuable intelligence on the operational tactics and vulnerabilities of ransomware groups like Black Basta.

A colossal cyber attack is currently underway, with attackers employing approximately 2.8 million unique IP addresses to try to breach virtual private network devices and other networking hardware. This large-scale brute force attack aims to compromise devices from prominent vendors such as Palo Alto Networks, Ivanti and SonicWall. The Shadowserver Foundation, a threat monitoring organization, reports that this campaign has been active since January of 2025. The attackers are primarily utilizing compromised routers and Internet of Things devices, including those from MikroTik, Huawei, Cisco, Boa and ZTE, to execute the attacks. These devices have likely been infected with malware or accessed due to weak passwords. Notably, a significant portion of the malicious IP addresses originate from Brazil, with additional sources in Turkey, Russia, Argentina, Morocco and Mexico. As most of you know, brute force attacks involve systematically attempting numerous username and password combinations to gain unauthorized access. But the scale and automation of this attack are enormous, suggesting the involvement of a vast botnet or residential proxy network, which complicates efforts to identify and block malicious IPs.
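For listeners who want a concrete picture of the standard first-line defense against credential guessing, here is a minimal, illustrative sketch of a fail2ban-style limiter that blocks a source IP once it exceeds a failure threshold inside a sliding time window. The function names and thresholds are my own illustrative choices, not taken from any vendor's product:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- real deployments tune these per service.
MAX_FAILURES = 5        # failed attempts allowed per window
WINDOW_SECONDS = 300    # length of the sliding window

failures = defaultdict(deque)  # source IP -> timestamps of recent failures
blocked = set()                # IPs currently denied access

def record_failure(ip, now=None):
    """Record a failed login; block the IP if it exceeds the threshold."""
    now = time.time() if now is None else now
    q = failures[ip]
    q.append(now)
    # Drop attempts that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_FAILURES:
        blocked.add(ip)

def is_blocked(ip):
    return ip in blocked
```

The catch, as the report notes, is that spreading guesses across millions of residential and IoT addresses keeps each individual source under thresholds like this one, which is why defenders also lean on multi-factor authentication, vendor patches and restricting management interfaces rather than per-IP blocking alone.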
Clo Mesdaghi, founder of Sustain Cyber, emphasized the severity: "A brute force attack with 2.8 million IPs is next level. If attackers crack VPN credentials, they get direct access to corporate networks. It's not something to take lightly." The implications of this attack are profound. Compromised VPNs and security appliances can serve as gateways for further network infiltration, data theft or the deployment of additional malware. If you haven't done a review of your VPN and other edge devices, it may be time. And while I like to think I know a little bit about this, I'll welcome suggestions from our listeners on what we do about attacks of this nature.

In a significant crackdown, the Royal Canadian Mounted Police in Ontario have arrested two Toronto residents accused of defrauding hundreds of people out of millions of dollars. The suspects allegedly employed sophisticated technology to impersonate officials from banks, government agencies and law enforcement, deceiving victims into surrendering their savings. iSpoof.cc was a website used by as many as 38,000 subscribers worldwide to make unauthorized phone calls while displaying a caller ID that falsely indicated they were legitimate callers. The technology allowed criminals to purchase subscriptions to the service in order to impersonate trusted organizations. The Toronto couple is believed to have been among the top 50 most active subscribers in the world. The RCMP's Cybercrime Investigative Team executed search warrants at the suspects' residence, seizing numerous electronic devices. Preliminary findings indicate at least 570 victims in Canada, with expectations of uncovering more as the investigation progresses. The fraudulent activities involved various spoofing, phishing and smishing tactics.
The accused, Chakib Mansoori and Manjuli Alua, face multiple charges, including fraud, unauthorized computer use, laundering proceeds of crime, unauthorized possession of credit card data, and possession of proceeds of crime. They were remanded into custody and appeared in court on February 21, 2025. This operation underscores the importance of international collaboration in combating cybercrime. The RCMP deserves great credit for this, but it worked closely with agencies such as the London Metropolitan Police, the Dutch National Police, Europol, Eurojust, the Toronto Police and Peel Regional Police. Inspector Lena Dobbitt of the RCMP Cybercrime Investigative Team emphasized the devastating effect of such crimes on communities and urged Canadians to educate themselves on cyber safety. In 2024, the Canadian Anti-Fraud Centre received 49,432 reports involving 34,621 victims, who collectively lost $638 million. And once again, it's estimated that only about 10% of these crimes are ever reported. Canadians are encouraged to report suspected fraud to the Canadian Anti-Fraud Centre. Their number will be in the show notes, or you can write it down if you get a second: 1-888-495850.

Researchers have unveiled a jailbreak technique dubbed Indiana Jones that exposes significant vulnerabilities in large language models like ChatGPT and others. The technique can bypass built-in safety filters, prompting these AI systems to produce content they are designed to restrict. Developed by a team from the University of New South Wales and Nanyang Technological University, Indiana Jones orchestrates interactions among three specialized LLMs. By using historical figures as their starting point, the researchers were able to fool the models and bypass their controls. One of the senior authors of the paper said, our team has a fascination with history, and some of us even study it deeply.
During a casual discussion about infamous historical villains, we wondered: could LLMs be coaxed into teaching users how they became these figures? Our curiosity led us to put this to the test, and we discovered that LLMs could indeed be jailbroken. In this way, their system guides the models through five rounds of dialogue, extracting information that should remain inaccessible. For example, entering "bank robber" prompts discussions about historical figures, with the details gradually refined until they align with a modern context. The approach highlights a critical issue: LLMs contain knowledge about malicious activities that can be extracted with the right prompts. As one of the researchers put it, the key insight from our study is that successful jailbreak attacks exploit the fact that LLMs possess knowledge of malicious activities, knowledge they arguably shouldn't have learned in the first place. So this presents both a challenge and an opportunity for commercial use. Perhaps smaller expert models with more limited training sets can avoid at least part of the risk. But with larger public models, who gets to decide what we should and shouldn't be allowed to know about? Whether it's the Chinese government removing Tiananmen Square or American governments telling me that transgender people don't exist, who do you trust as the guardian of truth? Nevertheless, the Indiana Jones method demonstrates that despite efforts to put guardrails around AI models to keep them from executing malicious commands, they are far too easy to circumvent.

And that's our show. You can reach me at editorial@technewsday.ca or on LinkedIn. Or, if you're watching this on YouTube, just leave me a comment under the video. I'm your host, Jim Love. Thanks for listening.
