Transcript
David Shipley (0:00)
CISA gives agencies a day to patch exploited Citrix flaws; Fortinet releases a patch for a pre-auth FortiWeb RCE flaw; Ingram Micro restores all business operations globally; AI code is leaving behind bugs that could burn unaware developers; and "I give up" prompts ChatGPT to surrender Windows keys. This is Cybersecurity Today, and I'm your host, David Shipley, coming to you from Frisco, Texas, and the XChange Security 2025 conference. Let's get started.

The US Cybersecurity and Infrastructure Security Agency (CISA) issued an emergency directive confirming active exploitation of Citrix Bleed 2, CVE-2025-5777, a critical vulnerability in Citrix NetScaler ADC and Gateway. CISA's response is unprecedented: federal agencies were given just 24 hours, from July 10 to the end of July 11, to apply the necessary patches. That level of urgency signals how serious and immediate the threat is. The vulnerability is a memory safety issue, specifically an out-of-bounds memory read that allows unauthenticated attackers to access sensitive memory areas. Impacted systems include NetScaler devices configured as gateways or AAA (authentication, authorization and accounting) virtual servers running firmware versions prior to 14.1-43.56, 13.1-58.32, 13.1-37.235 (FIPS and NDcPP) and 12.1-55.328 (FIPS). While Citrix released a patch for this on June 27, the threat landscape has changed rapidly. Security researcher Kevin Beaumont warned of the flaw's potential in early July, referring to it as Citrix Bleed 2, a direct reference to CVE-2023-4966, which was widely exploited last year. By July 7, proof-of-concept exploits were published by researchers at watchTowr and Horizon3.ai. Since then, attackers have been actively testing and sharing proof-of-concept code in criminal forums. CISA's confirmation now removes any doubt: exploitation is underway. Organizations must act immediately. If patching is not yet completed, restrict external access to NetScaler systems using firewall rules or ACLs.
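For teams triaging a fleet of NetScaler appliances, comparing a device's firmware build against the minimum fixed builds can be scripted. Below is a minimal sketch, assuming the build strings quoted in this episode; verify the exact fixed builds against Citrix's official advisory, and note this is an illustration, not a Citrix-provided tool:

```python
# Compare a NetScaler firmware build string (e.g. "14.1-43.56") against
# the minimum patched build for its branch. Build strings are treated as
# dashed/dotted numeric tuples. Illustrative sketch only.

def parse_build(build: str) -> tuple:
    """Turn '14.1-43.56' into a comparable tuple of ints: (14, 1, 43, 56)."""
    return tuple(int(p) for p in build.replace("-", ".").split("."))

# Minimum patched builds per branch, as described in the episode
# (FIPS/NDcPP branches omitted for brevity).
FIXED_BUILDS = {
    "14.1": "14.1-43.56",
    "13.1": "13.1-58.32",
}

def is_patched(build: str) -> bool:
    branch = build.split("-")[0]
    fixed = FIXED_BUILDS.get(branch)
    if fixed is None:
        # Unknown branch: treat as unpatched and investigate manually.
        return False
    return parse_build(build) >= parse_build(fixed)

print(is_patched("14.1-43.56"))  # True: exactly at the fixed build
print(is_patched("13.1-57.10"))  # False: below 13.1-58.32
```

A script like this is only a first pass; it tells you which devices to prioritize, not whether a device was already compromised before patching.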
After applying updates, administrators should terminate all active ICA and PCoIP sessions; those may already be compromised. It's important to review all current sessions for suspicious activity using the "show icaconnection" command through the NetScaler Gateway console. Notably, Citrix has yet to update its original advisory, which still claims there's no known exploitation in the wild. CISA's confirmation makes the current threat level clear: it is being exploited. This is a critical issue, and delay is not an option. Apply the patches, verify session integrity and lock down external access until your remediation is complete.

Speaking of summertime high-severity vulnerabilities that give IT and network administrators migraines during vacation season, Fortinet's FortiWeb just had a proof-of-concept exploit drop for a pre-auth RCE that scores an impressive 9.8 out of 10 on the CVSS scale. FortiWeb, a widely deployed web application firewall used to protect against malicious web traffic, is vulnerable to an unauthenticated SQL injection flaw that can be leveraged to achieve full remote code execution. It doesn't get much worse than that. Fortinet addressed the issue and put a patch out last week in FortiWeb versions 7.6.4, 7.4.8, 7.2.11 and 7.0.11. If your organization is running any version older than these, you are at immediate risk and should prioritize patching. According to Fortinet, the vulnerability stems from improper input sanitization in the get_fabric_user_by_token function within the Fabric Connector component, software that synchronizes authentication and policy data across Fortinet products. Attackers can exploit this flaw by injecting malicious SQL commands through the Authorization header in HTTP requests sent to the /api/fabric/device/status endpoint. This allows for complete bypass of authentication mechanisms. You heard that right: complete bypass of authentication mechanisms.
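While waiting on patches, defenders can hunt for exploitation attempts in web access logs. Here is a hedged sketch of that idea: the record format, the /api/fabric/device/status path and the SQL-metacharacter heuristic are illustrative assumptions, not Fortinet-documented detection logic, and a simple pattern like this will miss encoded payloads:

```python
import re

# Flag requests to the Fabric device-status endpoint whose Authorization
# header contains common SQL-injection metacharacters or keywords.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bSELECT\b)", re.IGNORECASE)

def is_suspicious(path: str, authorization: str) -> bool:
    """True if the request targets the vulnerable endpoint with a
    SQL-looking Authorization header."""
    return ("/api/fabric/device/status" in path
            and bool(SQLI_PATTERN.search(authorization)))

# Synthetic sample records: (request path, Authorization header value).
requests = [
    ("/api/fabric/device/status", "Bearer abc123"),
    ("/api/fabric/device/status", "Bearer ' OR '1'='1"),
    ("/login", "Bearer ' OR '1'='1"),
]

for path, auth in requests:
    if is_suspicious(path, auth):
        print("ALERT:", path, auth)
```

Treat any hit as a reason to pull the device for forensic review, since a successful injection here means authentication was bypassed entirely.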
Researchers from watchTowr and an independent security researcher released detailed technical analysis and proof-of-concept code showing how the vulnerability can be escalated from SQL injection to remote code execution. By abusing MySQL's SELECT ... INTO OUTFILE statement, they were able to write malicious Python .pth files into FortiWeb's site-packages directory. Since .pth files are automatically processed when Python starts, attackers could then invoke a legitimate CGI script on the device, /cgi-bin/ml-draw.py, to trigger arbitrary code execution. This flaw was originally discovered by Kentaro Kawane of GMO Cybersecurity, who also recently identified a static hard-coded credential issue in Cisco's ISE. Although there are currently no confirmed in-the-wild exploits for this, that status is unlikely to hold. Exploits are now publicly available, and the path from initial access to full compromise is clearly documented. If Citrix Bleed 2 is any indication, this could escalate quickly in the next two to three days. Immediate patching is strongly advised; delays could result in compromise.

It's not often I get to say "job well done" on this podcast, but Ingram Micro deserves a tip of the hat for its rapid containment and recovery from a recent ransomware attack. The company went from initial detection to full recovery within a week, as opposed to the more typical weeks or even months seen at other organizations. This is a notable win. It's particularly significant because, unfortunately, IT and security organizations often don't practice what they preach to clients when it comes to security and resiliency, so it's important to celebrate when organizations like Ingram Micro get something like this right. While the initial communications from Ingram generated some ire from the MSSP community on Reddit, the rapid recovery will likely go a long way in smoothing things over.
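Circling back to the FortiWeb exploit chain for a moment: the reason a writable site-packages directory is so dangerous is that Python's site machinery executes "import" lines found in .pth files at interpreter startup. A small self-contained demonstration of that behavior, using a temporary directory standing in for site-packages rather than touching a real one:

```python
import os
import site
import tempfile

# Create a temporary directory and drop a .pth file into it. Lines in a
# .pth file that begin with "import" are executed by Python's site
# module when the directory is processed -- exactly the behavior the
# FortiWeb exploit chain abuses to turn a file write into code execution.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

site.addsitedir(d)  # processes .pth files, running the import line above
print(os.environ.get("PTH_DEMO"))  # -> executed
```

In the real attack, the .pth file lands in the device's actual site-packages directory, so the payload runs the next time any Python process, such as the CGI script, starts up.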
The Irvine, California-based technology distributor issued a statement late Wednesday noting that all of its systems were now operational across every country and region in which the company conducts business, and for the record, that's 90 percent of the global market. In response to the intrusion, which was detected just before the 4th of July long weekend in the US (that must have been fun), Ingram Micro took key systems offline as part of its containment work and engaged third-party cybersecurity experts to support its investigation and recovery. As a publicly traded company, it has also filed an 8-K disclosure with the U.S. Securities and Exchange Commission, and it has expressed gratitude for customer and partner support during the disruption. The attack has since been attributed to the SafePay ransomware group, a threat actor now regarded as one of the most active in the ransomware ecosystem. According to threat intelligence firm Acronis, SafePay has been linked to more than 200 victims worldwide and, importantly, has increasingly focused on managed service providers and small to mid-sized businesses. The group is believed to have originated from the original LockBit ransomware gang. One thing about the communications on Ingram's website still doesn't quite make sense to me and leaves a lack of clarity: when this all started. In their communications they say they issued a statement on the incident on July 5, but their websites went down on Thursday, July 3. The communications seem intentionally vague about when specifically the ransomware gang was detected and how the company reacted. This may indicate that ongoing forensic work has yet to fully confirm the initial access method, the impacted data and the overall timeline.
Continued transparency on this incident may flow through further SEC filings, but the company can make a huge difference globally by being as transparent as possible with the details and, most importantly, by sharing critical lessons learned that could prevent others from experiencing the same attack, or by highlighting how its plans enabled such a quick recovery.

Now we can turn our attention from the latest round of exploits and ransomware attacks to trends highlighting where future attacks are likely to come from. Let's start with some bad vibes we're getting from AI coding that's not properly reviewed by knowledgeable developers. A new messaging app from Jack Dorsey, co-founder of Block and Twitter, is drawing sharp criticism from security experts for serious design flaws, highlighting the growing tension between rapid generative-AI-driven development and foundational cybersecurity principles. Launched over the weekend as a decentralized peer-to-peer messaging tool, Bitchat was introduced by Dorsey as a privacy-focused alternative for resilient communication, specifically one designed to operate over Bluetooth mesh networks rather than the Internet. The concept: enable messaging during outages, disasters or in censorship-prone environments. But the app's security posture so far has failed to meet its stated goals. Security researcher Alex Radocea, CEO of Supernetworks, conducted a technical review of Bitchat shortly after its launch and identified significant vulnerabilities, particularly around identity verification. In a detailed blog post, Radocea pointed out that Bitchat does not implement proper cryptographic checks to confirm the identity of message senders. As a result, attackers can impersonate trusted contacts using spoofed identity keys, a fundamental breakdown in secure communications.
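To make the missing check concrete: authenticated messaging requires every message to carry a tag or signature that is verified against a key bound to the claimed sender. The sketch below uses a shared-key HMAC from the Python standard library purely for illustration; real messengers like Signal use public-key signatures and key ratchets, and the contact names and keys here are invented:

```python
import hashlib
import hmac

# Each contact is bound to a key established out of band. A message is
# trusted only if its tag verifies under the claimed sender's key --
# without this step, anyone on the mesh can claim to be "alice".
keys = {"alice": b"alice-secret-key"}

def sign(sender: str, message: bytes) -> bytes:
    return hmac.new(keys[sender], message, hashlib.sha256).digest()

def verify(claimed_sender: str, message: bytes, tag: bytes) -> bool:
    key = keys.get(claimed_sender)
    if key is None:
        return False  # unknown sender: reject
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, tag)

msg = b"meet at noon"
tag = sign("alice", msg)
print(verify("alice", msg, tag))           # True: genuine message
print(verify("alice", msg, b"\x00" * 32))  # False: spoofed tag rejected
```

The point of the sketch is the verify step itself: encryption without sender verification gives confidentiality against passive eavesdroppers but no protection against impersonation.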
Radocea said this is a hallmark of what happens when generative AI is used to write application code without robust review: while the application appears to perform encryption, the actual cryptographic guarantees are absent. There's no assurance you're communicating with who you think you're communicating with. Dorsey has since acknowledged the shortcomings. In an update to the app's GitHub page, he included a disclaimer that Bitchat may contain vulnerabilities and does not necessarily meet its stated security goals. No kidding. He also announced plans to transition the platform to the Noise Protocol Framework, a widely respected open source cryptographic standard used in secure communications tools such as Signal. A reminder of the age-old advice: don't roll your own crypto. Importantly, Dorsey confirmed that Bitchat was built using a Block internal AI tool called Goose, essentially constructed in plain English with the help of AI models. While this highlights the potential of generative AI for rapid prototyping, it also exposes a critical gap: security assumptions made by the AI, or by the developer, do not replace the rigor of vetted, purpose-built cryptographic implementations. And by the way, this isn't the only problem with AI coding. We've covered other issues like slopsquatting, the invention or hallucination of packages or dependencies that criminals then discover and register as malicious lookalikes. This episode is a timely reminder that the speed and creativity enabled by generative AI in software development must be balanced with disciplined, well-resourced quality assurance and security validation. Labeling a communications tool as secure without meaningful investment in cryptographic engineering and secure-by-design coding practices is at best premature and at worst wickedly irresponsible. Bitchat is currently in beta and available via Apple's TestFlight program, though the initial slots appear to be filled despite the security flaws.
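One practical guard against slopsquatting is refusing to install any AI-suggested dependency unless it appears in a vetted allowlist or pinned lockfile. A minimal sketch of that gate; the package names and the allowlist here are invented for illustration, and a real pipeline would also check registries and typo distance:

```python
# Screen AI-suggested dependencies against a vetted allowlist before any
# install step runs. Names below are hypothetical examples.
ALLOWED = {"requests", "cryptography", "flask"}

def vet(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into (approved, needs-review)."""
    approved = [p for p in suggested if p.lower() in ALLOWED]
    review = [p for p in suggested if p.lower() not in ALLOWED]
    return approved, review

# "reqeusts-utils" stands in for a plausible AI hallucination that an
# attacker could register on a public package index.
approved, review = vet(["requests", "reqeusts-utils"])
print("install:", approved)
print("review:", review)
```

Anything in the review bucket gets a human look before it ever reaches pip, npm or any other installer; that single pause defeats most hallucinated-package attacks.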
Radocea acknowledged Dorsey's transparency and rapid engagement with the feedback, calling it a positive step towards improving the project and expressing interest in eventually integrating it with Supernetworks' products once it's properly hardened. In Dorsey's own words, the reaction to Bitchat's launch revealed, quote, unexpected demand for decentralized messaging options. The demand is real, but so is the risk. If apps like Bitchat are to become viable, the cryptographic foundations must be built with the same care as good code. Silicon Valley's "move fast and break things" ethos, mixed with vibe coding and a growing disdain for paying real humans to do quality assurance and quality development work, is going to end in tears. This isn't a technology issue, it's cultural. Let me explain. There's a place for AI coding tools as augmentation for skilled developers, inside organizations that do security by design and build even more robust QA processes to match the efficiency of AI with the need for greater due diligence.

Now, from one AI-powered dumpster fire story to another. In the classic Arabic folktale Ali Baba and the Forty Thieves, the secret to the treasure was "open sesame." For ChatGPT, it seems it's "I give up." Seriously, I'll explain. Recently, a researcher successfully manipulated ChatGPT (GPT-4) into disclosing Windows product keys, including at least one tied to Wells Fargo. The method? A cleverly engineered prompt disguised as a game. According to a technical blog post by Marco Figueroa, the technical product manager for the 0DIN GenAI bug bounty program, the researcher bypassed ChatGPT's safety guardrails by structuring the interaction as a guessing game. The AI was instructed to think of a real Windows 10 serial number, accept yes-or-no questions, and, if the user gave up, reveal the key. ChatGPT accepted the premise and followed the instructions. When the researcher typed "I give up," ChatGPT complied, producing what it believed to be a valid Windows serial number.
Screenshots of the interaction, with sensitive content redacted, confirmed the AI responded with default Windows keys. Figueroa emphasized that the phrase "I give up" served as the trigger that allowed the AI to bypass internal restrictions and disclose previously blocked content. The vulnerability works by exploiting the model's pattern-matching behavior and its tendency to treat instructions embedded in game-like framing as rule-governed responses. The keys surfaced by the model included Home, Pro and Enterprise editions. Notably, Figueroa confirmed to The Register that one of the returned keys matched a private license reportedly linked to Wells Fargo. The incident raises urgent questions about the exposure of sensitive data during model training. It underscores a growing concern: pre-existing public or semi-private data sets, such as leaked credentials, licenses or API keys, may have been unintentionally incorporated into large language model training corpora. And this risk is not without precedent; it's not hypothetical. In 2023, a Microsoft developer inadvertently exposed 38 terabytes of private data on GitHub while sharing open source AI training resources. Security firm Wiz reported the exposure, which included sensitive items such as API keys, passwords, internal Teams messages and workstation backups. Microsoft later described the disclosure as a, quote, learning opportunity, but the scope of the exposure was still substantial. For organizations, this highlights the need for stronger data hygiene, tighter control over developer workflows, and rigorous vetting of training data. Generative AI's flexibility is a powerful asset, but it also introduces new, unconventional threat vectors that are only beginning to be understood. As always, stay skeptical and stay patched. And if you're in the US government, being proactive on patching might just save your weekend from the next 24-hour must-patch rule dropped on a Thursday.
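The data-hygiene point is actionable today: before data reaches a training corpus or a public repo, scan it for secret-shaped strings. A hedged sketch matching the five-groups-of-five shape of Windows product keys; the pattern and sample strings are illustrative, and production secret scanners apply many more rules than this:

```python
import re

# Windows product keys follow a XXXXX-XXXXX-XXXXX-XXXXX-XXXXX shape of
# uppercase letters and digits. Flag anything with that shape so a human
# can review it before the data is shared or used for training.
KEY_PATTERN = re.compile(r"\b(?:[A-Z0-9]{5}-){4}[A-Z0-9]{5}\b")

def find_key_like_strings(text: str) -> list[str]:
    return KEY_PATTERN.findall(text)

# Synthetic sample line; the "key" here is made up for the demo.
sample = "config backup: key=ABCDE-12345-FGHIJ-67890-KLMNO end"
print(find_key_like_strings(sample))  # -> ['ABCDE-12345-FGHIJ-67890-KLMNO']
```

Running a check like this in pre-commit hooks and data-ingestion pipelines is a cheap way to keep license keys and credentials out of both repositories and training sets.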
We're always interested in your opinion, and you can contact us at editorial@technewsday.ca or leave a comment under the YouTube video. I've been your host, David Shipley. Jim Love will be back on Wednesday, and if you're at the XChange Security conference, come say hi. Thanks for listening.
