B (139:04)
Okay, so let's take a closer look at the malware itself. For that we turn to Trend Micro, who titled their coverage of this "Your AI Gateway Was a Backdoor: Inside the LiteLLM Supply Chain Compromise," which they teased with the following. Team PCP, and those are the bad guys, "orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date. It cascaded through developer tooling to compromise LiteLLM and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies."

They led their coverage with three key takeaways. First, LiteLLM, a widely used AI proxy package, was compromised on PyPI, with two of its versions containing malicious code. These LiteLLM versions deployed a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor for remote code execution. Sensitive data from cloud platforms, SSH keys, and Kubernetes clusters were targeted and encrypted before exfiltration. Second, the LiteLLM incident was part of a broader campaign by the criminal group Team PCP, which has demonstrated a deep understanding of Python execution models, adapting their attack rapidly for stealth and persistence; in this case, a little too rapidly. And third, Team PCP has previously compromised security tools like Trivy and Checkmarx's KICS to steal credentials and propagate malicious payloads. The attackers leveraged compromised CI/CD pipelines and security scanners to escalate privileges and publish trojanized packages.

So here's what more we learned from Trend Micro. They explain that on March 24th, and that's exactly last Tuesday, production systems running LiteLLM started dying. Engineers saw runaway processes, CPUs pegged at 100%, and containers killed by out-of-memory errors. The stack traces pointed to the LiteLLM package. This popular Python package, downloaded 3.4 million times per day, serves as a unified gateway to multiple LLM providers, and it had been compromised on PyPI. Upon analysis, it was found that versions 1.82.7 and 1.82.8 contained malicious code that stole cloud credentials, SSH keys, and Kubernetes secrets. The malicious versions deployed a three-stage payload: a credential harvester targeting over 50 categories of secrets, a Kubernetes lateral movement toolkit capable of compromising entire clusters, and a persistent backdoor providing ongoing remote code execution.

And just to pause: think of what would have happened if this had not been caught. 3.4 million instances downloaded per day would have been infected with this nasty malware. I mean, this is bad malware. They wrote: "This compromise was not an isolated event. It was the latest link in a cascading supply chain campaign by a threat actor tracked as Team PCP. This post traces the cascade from its origin, the open source vulnerability scanner Trivy, and then presents our technical analysis of the LiteLLM payload. Team PCP orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date. The campaign spanned PyPI, npm, Docker Hub, GitHub Actions, and OpenVSX in a single coordinated operation."
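By the way, a trivial way to check whether a given machine is carrying one of the two versions Trend Micro named would be something like the following Python snippet. This is just my quick illustrative sketch, assuming the package's name on PyPI is litellm, which is how the "Light LLM" spoken here is spelled:

from importlib.metadata import version, PackageNotFoundError

# The two trojanized versions reported by Trend Micro.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed here")
else:
    verdict = "KNOWN-BAD, investigate!" if installed in BAD_VERSIONS \
        else "not one of the two reported versions"
    print(f"litellm {installed}: {verdict}")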
"While it did not specifically target AI infrastructure, the campaign's cascade through the developer toolkit caught LiteLLM within its blast radius and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies."

Now, I'm not going to share all the details because we don't need them, but they wrote: "Key sections of this blog entry include a technical analysis of the malicious multistage payload and its impact on AI environments, a timeline and operational review of Team PCP's campaign, and a deep dive into how security tools themselves became attack vectors. Trend AI Research's analysis into the LiteLLM compromise also covers attribution challenges, gaps in public threat intelligence, and actionable defense strategies. Detailed indicators of compromise and MITRE ATT&CK mappings have been provided, but for an even more comprehensive understanding of the security incident, reach out to Trend AI Research for the full technical report."

Okay, so that's a much deeper dive than we need. But what they uncovered and reported about the root source of the vulnerability was interesting. Under the heading "How your security scanner can become the attack vector," they wrote: "Trivy is an open source vulnerability scanner developed by Aqua Security. It scans container images, file systems, and infrastructure as code for security vulnerabilities, and it is integrated into the CI/CD pipelines of thousands of software projects via the trivy-action GitHub Action." So, okay, the point is, Trivy was the root of this compromise. They explain: "Security scanners are uniquely dangerous supply chain targets by design. They require broad read access into the environments they scan, including environment variables, configuration files, and runner memory. When a scanner is compromised, it becomes a credential harvesting platform with legitimate access to secrets. In late February 2026, an actor operating under the handle megagame10418 exploited a misconfigured pull_request_target workflow in Trivy's CI," their continuous integration, "to exfiltrate the Aquabot personal access token. Aqua Security disclosed the incident on March 1 and initiated credential rotation. However, according to Aqua's own post-incident analysis, the rotation wasn't atomic and attackers may have been privy to refreshed tokens."

Okay, now that's an important point, so I want to pause here to explain it. We've talked before about the concept of so-called atomic operations. The name obviously comes from the word atom, and it's meant to imply that the operation cannot be further divided into smaller pieces. Molecules, of course, being collections of atoms, are divisible; not so the atom. So to clearly illustrate the occasional need for atomic operations, say that a computer program needed to count up to a certain number, but no more. If the program was single-threaded, meaning that it only ever had one thing going on inside itself at once, that would be easy to do. The program would read the value of the thing being counted. If it was not already at its upper count limit, the program would increment it to its next value. If it was already at the upper limit, it would just leave it there.
But now imagine what happens if there's a lot more going on in the program, with multiple simultaneous execution threads running around, perhaps because the CPU has multiple cores or the application itself has many threads. In this environment, there's a chance that two threads would wish to increase the count at the same instant. So they would both be executing the exact same code at the same time. They would both read the counter's value, they would both see that it had not yet reached its limit, so they would both increment it, thus increasing its initial value by two. But if the counter had previously been sitting at one below its limit, that increase by two would move it up past the limit. A very subtle bug. These sorts of so-called race conditions have historically been the source of many hard-to-find problems. They're the sort that never happen while you're watching, while you're developing the code, but they somehow always occur when you're on stage demonstrating what you've got. So in our example, that test-the-value-and-maybe-increment-it operation would need to be made atomic, so that the testing and the incrementing could not be broken apart and performed separately, even by different processors executing at the same time. The operation could only be done by one processor or execution thread at a time; the other processor trying to do it would be briefly stalled until the first processor had finished with that atomic operation. At that point the second processor could proceed, and if it saw that the variable was already at its limit, it would not also increment it. There's a short code sketch of exactly this at the end of this segment.

Okay, so we left off with Trend Micro noting that Aqua Security disclosed the incident on March 1 and initiated credential rotation, but that, according to Aqua's own post-incident analysis, the rotation was not atomic and attackers may have been privy to refreshed tokens. In other words, somebody might have still been logged in when a token was updated, and then they would have grabbed the new one. Trend Micro then continues: "The gap," that is, this race condition gap, "proved decisive. On March 19th at 17:43 UTC, Team PCP used still-valid credentials to force-push 76 of 77 release tags in the trivy-action repository and all 7 tags in setup-trivy," whatever those details mean, but it meant two malicious commits containing a multistage credential stealer. "The malicious code scraped the runner worker process's memory for secrets, harvested cloud credentials and SSH keys from the file system, encrypted the bundle using AES-256-CBC with an RSA-4096 public key, and exfiltrated it to a typosquatted domain, scan-aquasecurity.org. According to analysis by CrowdStrike, the legitimate Trivy scan still ran afterward, producing normal output, leaving no visible indication of compromise."

Okay, in other words, because Aqua Security was, for whatever reason, logistically unable to rotate every single credential at once while no one was actively logged on, the bad guys were able to maintain their corrupting persistence. Trend Micro finished this portion of their write-up by writing: "This is the meta-attack: a security scanner, the tool defenders rely on to catch supply chain compromise, itself became the entry point for a supply chain compromise. The Trivy compromise in GitHub Actions gave the attacker the keys to publish arbitrary versions of LiteLLM to PyPI. Everything that followed was exploitation of that initial foothold."
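Since I described that counter race in words, here's the minimal Python sketch of it I promised. This is purely illustrative and not from Trend Micro's report:

import threading

LIMIT = 10
counter = 0
lock = threading.Lock()

def racy_increment():
    # Broken: the test and the increment are two separate steps. Two
    # threads can both observe counter == LIMIT - 1, both pass the test,
    # and both increment, pushing the counter past its limit.
    global counter
    if counter < LIMIT:
        counter += 1

def atomic_increment():
    # Fixed: the lock makes test-and-increment indivisible. A second
    # thread arriving here stalls until the first releases the lock,
    # then re-tests and, seeing the limit reached, leaves it alone.
    global counter
    with lock:
        if counter < LIMIT:
            counter += 1

The same logic applies to Aqua's credential rotation: "check whether a token has been refreshed, then refresh it" needed to be one indivisible operation, and apparently it wasn't.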
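And for what it's worth, the "AES-256-CBC with an RSA-4096 public key" that Trend Micro mentions is a standard hybrid-encryption pattern. Here's a generic sketch of that technique using Python's third-party cryptography package. This is not the attacker's actual code, and details like the RSA padding mode are my assumptions:

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives import padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import padding as rsa_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def hybrid_encrypt(bundle: bytes, rsa_public_pem: bytes):
    # A fresh random AES-256 key and CBC IV for this one bundle.
    aes_key = os.urandom(32)
    iv = os.urandom(16)
    # CBC requires the plaintext padded to the 16-byte AES block size.
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(bundle) + padder.finalize()
    encryptor = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    # Wrap the AES key with the RSA-4096 public key (OAEP assumed here).
    public_key = serialization.load_pem_public_key(rsa_public_pem)
    wrapped_key = public_key.encrypt(
        aes_key,
        rsa_padding.OAEP(
            mgf=rsa_padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    return wrapped_key, iv, ciphertext

The point of the pattern is that only the holder of the matching RSA private key, meaning the attacker, can ever unwrap the AES key. So even a defender holding a complete capture of the exfiltrated traffic cannot recover what was stolen.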
And LiteLLM was just a coincidental casualty of that Trivy compromise. They said the lesson is uncomfortable but critical: your CI/CD security tooling has the same access as your deployment tooling. If it's compromised, everything downstream is exposed. And what we're now seeing is that the bad guys have gotten sophisticated enough to take advantage of that. I mean, it is truly terrifying. So what we see is that the enabling of this attack on LiteLLM had nothing to do with AI per se. It was just its popularity that would have allowed it to explode at 3.4 million instances of compromise per day, had the bad guys not made that crucial mistake which crashed the very machines it was trying to compromise.

So after providing a fully detailed forensic analysis of this malware campaign, Trend Micro concluded with a summary and recommendations. They wrote: "As AI/ML tooling proliferates across enterprise CI/CD pipelines, the attack surface expands with it. The tools developers install to interact with AI systems (proxy gateways, model routers, experiment trackers, and inference servers) handle high-value secrets by design. Supply chain attacks against these tools inherit the trust and access of the AI infrastructure itself." So again, AI is not to blame here. It's really just a case of the more tools you're using, the more exposure there will be when any one of them might be compromised. Trend Micro continued: "The malicious payload analyzed in this report is a direct exploitation of the systemic secrets-management failures extensively documented in prior Trend AI Research. As previously described, developers have adopted .env files so profusely that they have forgotten their sensitivity, leaving them exposed, and threat actors are actively scanning for exactly those files. The harvester analyzed here operationalizes that attack surface at scale. It performs exhaustive file system walks targeting .env, .env.local, .env.production, and .env.staging files across up to six directory levels, while simultaneously extracting AWS credentials, cloud provider tokens, Kubernetes service account secrets, CI/CD pipeline configurations, and database connection strings: the same categories of secrets Trend AI Research previously identified as most commonly stored in plain text inside .env files."

And they finished off with some well-reasoned security recommendations. They said: "This case highlights the risk of building an entire ecosystem on top of fragile trust. The LiteLLM hack is just the latest example of attackers exploiting the reliance on open source repositories and poor secret hygiene. Security is not an afterthought you can outsource entirely to a vulnerability scanner."

So these apparently very highly skilled Team PCP attackers appear to have just been in a bit of a hurry. This led them to deploy otherwise very potent and sophisticated malware, malware that must have taken a lot of time to build, containing a flaw that, unfortunately for them, and thank God for us, immediately caused it to draw attention to itself.
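To make that "up to six directory levels" detail concrete, a depth-limited file system walk of the kind Trend Micro describes looks roughly like this in Python. Again, this is a generic illustration of the technique, not the actual harvester, and the exact filename list is my reading of their description:

import os

# Filenames Trend Micro says the harvester targets.
TARGETS = {".env", ".env.local", ".env.production", ".env.staging"}
MAX_DEPTH = 6  # the reported six-directory-level cutoff

def find_env_files(root: str) -> list[str]:
    hits = []
    root = os.path.abspath(root)
    for dirpath, dirnames, filenames in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        if depth >= MAX_DEPTH:
            dirnames.clear()  # prune the walk: descend no further
        hits.extend(os.path.join(dirpath, f) for f in filenames if f in TARGETS)
    return hits

Knowing that this is precisely what attackers sweep for is one more good argument for keeping real secrets out of .env files in the first place.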