Dave Bittner
You're listening to the CyberWire network, powered by N2K. Secure access is crucial for US public sector missions, ensuring that only authorized users can access certain systems, networks, or data. Are your defenses ready? Cisco's Security Service Edge delivers comprehensive protection for your network and users. Experience the power of Zero Trust and secure your workforce wherever they are. Elevate your security strategy by visiting cisco.com/go/sse. That's cisco.com/go/sse. CISA braces for widespread staffing cuts. Russian hackers target a Western military mission in Ukraine. China acknowledges Volt Typhoon. The US signs on to global spyware restrictions. A lab supporting Planned Parenthood confirms a data breach. Threat actors steal metadata from unsecured Amazon EC2 instances. A critical WordPress plugin vulnerability is under active exploitation. A new analysis details a critical unauthenticated remote code execution flaw affecting Ivanti products. Joining us today is Johannes Ulrich, dean of research at the SANS Technology Institute, with his take on vibe security and the question: does AI really understand, and does that ultimately matter? It's Friday, April 11, 2025. I'm Dave Bittner, and this is your CyberWire Intel Brief. Thanks for joining us here today. Happy Friday. It is great to have you with us. The Trump administration is preparing to cut about 1,300 positions at the Cybersecurity and Infrastructure Security Agency, slashing roughly half its full-time staff and 40% of its contractors. These planned cuts follow White House frustration over CISA's perceived role in moderating conservative content. Major reductions are expected at the National Risk Management Center and the stakeholder engagement division. CISA's threat hunting team will also be downsized. Some responsibilities may shift to the cybersecurity division. Officials say the exact scope and timeline remain undecided and could change.
Meanwhile, the administration is pushing early retirements and buyouts, offering up to $25,000. Political appointments for regional directors are also under consideration. CISA director nominee Sean Plankey's confirmation is being blocked by Senator Ron Wyden over transparency issues. The cybersecurity industry has largely stayed silent after President Trump revoked security clearances for SentinelOne staff, Reuters reports. The move appears tied to the company hiring Chris Krebs, the ex-CISA chief fired by Trump in 2020 for rejecting election fraud claims. Despite Krebs's respect in cyber circles, most major cybersecurity firms declined to comment, fearing retaliation. Only the Cyber Threat Alliance spoke out, criticizing the action as political targeting. SentinelOne said it expects no major impact, though its stock dropped 7% following the news. The Russian state-backed hacking group Gamaredon, also known as Shuckworm, has been targeting a Western military mission in Ukraine, using removable drives to deploy attacks. Between February and March of this year, they used an upgraded version of their GammaSteel malware to steal sensitive data. The group likely gained access via malicious shortcut files on external drives. Recent tactics show a shift to PowerShell-based tools, increased obfuscation, and use of legitimate services for stealth. Once infected, the malware collects screenshots, system info, and documents, storing payloads in the Windows registry and using PowerShell or curl over Tor for exfiltration. It also spreads to other drives and establishes persistence via registry keys, Symantec notes. Gamaredon's tactics are evolving, making the group a growing threat despite its relatively unsophisticated methods. In a secret December 2024 meeting in Geneva, Chinese officials indirectly admitted to cyberattacks on US infrastructure tied to the Volt Typhoon campaign.
According to the Wall Street Journal, the US delegation, part of the outgoing Biden administration, interpreted the admission as a warning over American support for Taiwan. Volt Typhoon, attributed to China in 2023, targeted critical US sectors using zero-day exploits and stayed undetected in parts of the electric grid for 300 days. The attacks spanned communications, energy, transportation, and more, raising concerns about espionage and potential disruption. The meeting also touched on the Salt Typhoon campaign, which compromised telecom data from senior officials. While the US views Volt Typhoon as a serious provocation, Salt Typhoon is seen as typical cyber espionage. Both nations continue to escalate mutual cyberattack accusations. The US will join an international agreement under the Pall Mall Process, an international initiative launched in February by the United Kingdom and France to address the misuse of commercial spyware. This follows a voluntary code of practice signed by 21 countries aiming to regulate commercial cyber intrusion capabilities and curb abuses targeting civil society. Sparked by scandals in Poland, Mexico, and Greece, the agreement seeks to separate responsible vendors from those linked to human rights violations. Human rights advocates praise the move as a bipartisan step toward responsible spyware governance. Laboratory Services Cooperative, a nonprofit supporting reproductive health labs, confirmed a data breach affecting 1.6 million people. Hackers accessed its network in October 2024, stealing sensitive data, including personal IDs, medical records, and insurance details. Most affected individuals had lab work done through select Planned Parenthood centers. LSC is offering credit and identity protection services and says no stolen data has appeared on the dark web so far. An investigation is ongoing, with federal law enforcement and cybersecurity experts involved.
In March, a threat actor used server-side request forgery attacks to steal metadata from unsecured Amazon EC2 instances, according to F5 Labs. The attacker targeted EC2-hosted websites that left instance metadata exposed, potentially leaking sensitive IAM credentials. The campaign ran from March 13th through the 25th and involved tens of thousands of GET requests from IPs tied to the French firm FBW Networks SAS. F5 advises migrating from IMDSv1 to IMDSv2 or blocking requests to the metadata IP to mitigate future risks. A critical vulnerability in the OttoKit WordPress plugin is being actively exploited, according to security firm Defiant. The plugin, with over 100,000 installs, allows attackers to bypass authentication and create admin accounts on unconfigured sites by exploiting a missing empty-value check in API key validation. This gives full site control, including uploading malicious files or injecting spam, though only unconfigured installations are at risk. Users are urged to update to the latest version to patch the flaw. A newly published analysis details a critical, unauthenticated remote code execution flaw affecting Ivanti products, including Connect Secure, Policy Secure, Pulse Connect Secure, and ZTA gateways, exploited by a suspected China-linked actor. The bug stems from a stack-based buffer overflow in the web server binary via the X-Forwarded-For header. Exploitation is complex due to payload restrictions: only digits and periods are allowed, forcing attackers to use heap spray and ROP techniques to gain code execution. The attack bypasses ASLR through brute force. Ivanti patched Connect Secure in February, with other product updates due in April. Pulse Connect Secure is no longer supported. Given the public proof of concept and active exploitation, urgent patching or mitigation is critical.
Coming up after the break, my conversation with Johannes Ulrich, dean of research at the SANS Technology Institute, with his take on vibe security and the question: does AI understand, and does that ultimately even matter? Stay with us. Do you know the status of your compliance controls right now? Like, right now. We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks. But get this: more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta. Here's the gist. Vanta brings automation to evidence collection across 30 frameworks like SOC 2 and ISO 27001. They also centralize key workflows like policies, access reviews, and reporting, and help you get security questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to vanta.com/cyber. That's vanta.com/cyber for $1,000 off. Are you frustrated with cyber risk scores backed by mysterious data, zero context, and cloudy reasoning? Typical cyber ratings are ineffective, and the true risk story is begging to be told. It's time to cut the BS. Black Kite believes in seeing the full picture with more than a score, one where companies have complete clarity into their third-party cyber risk using reliable quantitative data. Make better decisions. Reduce your uncertainty. Trust Black Kite. And I'm pleased to be joined once again by Johannes Ulrich. He is the Dean of Research at the SANS Technology Institute and also the host of the SANS ISC Stormcast podcast. Johannes, welcome back.
Johannes Ulrich
Yeah, thanks for having me back.
Dave Bittner
So I want to talk about vibes today, Johannes. It seems like the word "vibe" has found its way into infosec. People are vibe coding.
Johannes Ulrich
Yeah, vibe coding is a big thing. Why learn how to code? With AI, we just describe the problem and AI will magically solve it for us with some really interesting code that we don't really need to understand. We just use it and, I guess, then complain and cry later.
Dave Bittner
You sound like you might have a particular opinion about this approach, Johannes, that perhaps Vibe coding isn't the best path.
Johannes Ulrich
Yeah. And the same methodology, of course, enters all kinds of realms, including security. With coding, if you are still on X, there were some great little memes that went around with developers applying that methodology. I think most of them are made up, but they're funny enough that it doesn't really matter that they're made up. The vibe security part comes in, and I thought a little bit about how this works for the SANS college, where colleges have this problem with vibe paper writing: people write papers using AI, and then faculty, not our faculty, but other faculty, may use AI to actually grade the paper. And the similar thing happens with security, where you have developers use AI to create code, and then you have security teams using AI to check that code for security flaws. And of course, now you're basically losing any kind of diversity in your methodologies. You're hitting a point where things just become too complex to actually double-check what the AI is doing. That's the really important part here. If you don't know how to code, if you don't know what proper security looks like, how do you know if that firewall rule set that AI came up with, these 200 lines of firewall rules or whatever, if they're actually correct?
Dave Bittner
Well, let's say you are the person who's supervising a team of coders and they're saying to you, hey, we can have some real efficiency upgrades here by partnering with these AI systems. How do you manage it to make sure that it stays within the realm of checkability?
Johannes Ulrich
Yeah, and I think you mentioned an important word there: partnering. It's a partnership. It's not where I just hand it over. And I think it's very similar to outsourcing code generation, and it has very similar problems. A lot of companies outsource coding and outsource security, and that's a valid thing to do. But the problem that usually comes back to bite you is: did you write complete specifications? If you can't really write specifications that allow a human developer to create the system for you, how is AI supposed to have a chance at it? Back to the firewall rules: how is AI going to write correct firewall rules if it doesn't know what your network looks like? So you still have to do your inventory, which, of course, a lot of people are having issues with. And if that's not right, it's the good old garbage in, garbage out. You won't get any good results if you don't really tell it what these results are supposed to accomplish.
Dave Bittner
You know, I was chatting with someone earlier today about interacting with AI, and one of the things that she warned against was boxing AI into a corner, you know, because AI tends to want to please you. So if you tell it, you must give me these results, it will, even if that involves lying to you or, you know, making things up. It strikes me that with coding, where things are black and white, you know, they work or they don't, you have to be careful about not backing your AI into a corner to the point where you don't understand what it's making for you.
Johannes Ulrich
Yeah. And back to the specifications, I think that hits us with security all the time, where we have software that actually works just fine unless you start to try to bypass authentication. And then it will still work fine. It will do everything you told it to do. It will just not verify who you are. And if you never told it how to do that, well, it won't do it. So that's really the kind of problem, an old computer science problem, where developers are usually graded on passing functionality tests, not security tests. And if you give the same reward system to AI, it'll basically end up with the same bad code.
Dave Bittner
So are we talking about having some audit methodologies in place here? I mean, how do you get the best of both worlds?
Johannes Ulrich
I think you get the best of both worlds by having developers, like I said, partner with the AI, where the developer is still ultimately in charge, reviewing the results, and able to understand what the AI produced, whether that's code, whether that's CloudFormation configuration, whether it's firewall rules or whatever. At our college we actually implemented a policy around this, I think two years ago already, or last year, I forget, time flies. But it said, hey, you know, you're free to use AI, but you're responsible for the result. It's not where you can say, hey, AI ate my homework. So there still has to be a developer, someone who is actually double-checking the work and also double-checking the specifications and prompts that are being used. And I think in order to write good prompts, just as in order to write good specifications for a human developer, you need to understand the system.
Dave Bittner
Yeah. It seems to me reasonable as well that whoever your developer is, even if they're partnering with the AI to do their work, they're responsible for being able to walk you through what the code does, and not hand-waving it away with, oh, here's where a miracle occurs.
Johannes Ulrich
Correct. So they have to understand what's happening there. And I think one of the most dangerous things about all of this, which I don't really see people talk about too much, is that AI is really good. And that's very dangerous, because if you have that partner that's usually right, and you've wasted a lot of time in the past trying to prove it wrong, then you start hand-waving, and the results are usually really close to the right result even if they're wrong. I'm not sure if you've tried it, having AI summarize security news; it's actually pretty good at that. Give it an article and ask, hey, what's the takeaway from this, or what's the action? What I found is it gives you a result, but it may not give you the result that you're really looking for. And the real difference here is, you have an article about some problem with authentication, and it'll tell you, hey, use strong passwords. But this was an authentication bypass where passwords weren't involved at all. So the result sounded reasonable, and it's one of those results that maybe someone who has just taken their first security class would have given you reading that article, but they didn't read deeper. They didn't understand the context of that particular vulnerability.
Dave Bittner
No, it's a great insight, and I've often said that, to me, a useful way to look at AI for that sort of thing is that it's a tireless intern, right? Like, it will go off and do all the work that you ask it to do, but at the same time, you would not bet the company on an intern. You just wouldn't do it.
Johannes Ulrich
Well, you know, you used to blame interns for any vulnerabilities there. So now you just blame...
Dave Bittner
Sure, absolutely.
Johannes Ulrich
I just blame the AI. So that analogy actually, I think is really good there.
Dave Bittner
All right, Johannes Ulrich, thanks so much for joining us.
Johannes Ulrich
Yeah, thank you.
Dave Bittner
What's the common denominator in security incidents? Escalations and lateral movement. When a privileged account is compromised, attackers can seize control of critical assets. With bad directory hygiene and years of technical debt, identity attack paths are easy targets for threat actors to exploit, but hard for defenders to detect. This poses risk in Active Directory, Entra ID, and hybrid configurations. Identity leaders are reducing such risks with attack path management. You can learn how attack path management is connecting identity and security teams while reducing risk with BloodHound Enterprise, powered by SpecterOps. Head to specterops.io today to learn more. SpecterOps: see your attack paths the way adversaries do. And finally, large language models are acing benchmarks faster than researchers can invent them. But does that mean they understand? To tackle this big question, IEEE Spectrum and the Computer History Museum hosted a lively March 25 debate. On the no side was Emily Bender, a vocal LLM critic and co-author of the "Stochastic Parrots" paper. On the yes side stood Sebastien Bubeck of OpenAI, co-author of "Sparks of AGI." The fiery but respectful showdown explored whether these AI systems truly comprehend or just cleverly imitate. The debate kicks off with Emily Bender on team "nope" and Sebastien Bubeck on team "kinda, yeah," diving into linguistics, AI benchmarks, and whether machines can grasp meaning like we do. Bender leans hard into the "they're just parrots" metaphor, warning us about the illusions of understanding and the dangers of relying on LLMs in healthcare, law, and more. Meanwhile, Bubeck cheerfully reminds us these models are pulling off math feats that would make your high school teacher weep, and whether or not they understand, they're undeniably useful. The debate was spirited, nuanced, and philosophical: Oxford Union meets Silicon Valley.
One of the takeaways is that understanding might be overrated if your chatbot can still beat you at logic puzzles or build you an app overnight. These are still early days, and time will tell how much we come to trust and rely on these technologies in our day-to-day lives. For now, I think it's fair to say that, love 'em or hate 'em, they are here to stay. And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com. N2K's senior producer is Alice Carruth. Our CyberWire producer is Liz Stokes. We're mixed by Trey Hester, with original music and sound design by Elliot Peltzman. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for listening. We'll see you back here next week. Looking for a career where innovation meets impact? Vanguard's technology team is shaping the future of financial services by solving complex challenges with cutting-edge solutions. Whether you're passionate about AI, cybersecurity, or cloud computing, Vanguard offers a dynamic and collaborative environment where your ideas drive change. With career growth opportunities and a focus on work-life balance, you'll have the flexibility to thrive both professionally and personally. Explore open cybersecurity and technology roles today at vanguardjobs.com.
CyberWire Daily – Episode Summary: "CISA Shrinks While Threats Grow"
Release Date: April 11, 2025
Host: Dave Bittner, N2K Networks
In today's episode of CyberWire Daily, host Dave Bittner delves into a range of pressing cybersecurity issues, from significant staffing cuts at the Cybersecurity and Infrastructure Security Agency (CISA) to evolving cyber threats posed by state-backed hacking groups. The episode also features an insightful interview with Johannes Ulrich, Dean of Research at the SANS Technology Institute, discussing the implications of AI in cybersecurity.
CISA Shrinks Amid Growing Threats
At the outset, Dave Bittner addresses the Trump administration's plans to reduce CISA's workforce by approximately 1,300 positions, slashing about half of its full-time staff and 40% of its contractors. These cuts are reportedly a response to the White House's frustration over CISA's perceived role in moderating conservative content.
Officials have indicated that the precise scope and timeline of these cuts remain undecided and subject to change. Additionally, the administration is promoting early retirements and buyouts, offering incentives up to $25,000, and considering political appointments for regional directors.
Quote:
“[...] the exact scope and timeline remain undecided and could change.”
— Dave Bittner [00:02]
Sean Plankey, the CISA director nominee, faces confirmation delays as Senator Ron Wyden blocks his appointment over transparency concerns.
Russian Hacking Group Gamaredon Targets Ukraine
The episode highlights increased cyber activities by the Russian state-backed hacking group Gamaredon (Shuckworm), which has been targeting a Western military mission in Ukraine. Between February and March 2025, the group utilized an upgraded version of their GammaSteel malware to exfiltrate sensitive data.
Symantec notes that despite the group's relatively unsophisticated methods, their evolving tactics pose a growing threat.
Quote:
“Gamaredon's tactics are evolving, making the group a growing threat despite its relatively unsophisticated methods.”
— Dave Bittner [05:00]
China Admits to Volt Typhoon Cyberattacks
In a significant development, Chinese officials indirectly admitted to cyberattacks on U.S. infrastructure linked to the Volt Typhoon campaign during a secret December 2024 meeting in Geneva. The U.S. delegation interpreted this admission as a warning regarding American support for Taiwan.
The meeting also addressed the Salt Typhoon campaign, which compromised telecom data from senior officials. While Volt Typhoon is viewed as a serious provocation, Salt Typhoon is considered typical cyber espionage. Both nations continue to escalate mutual cyberattack accusations.
Quote:
“Both nations continue to escalate mutual cyberattack accusations.”
— Dave Bittner [06:30]
US Joins Global Spyware Restrictions Agreement
The United States has joined an international agreement under the Pall Mall process, an initiative launched by the United Kingdom and France in February to combat the misuse of commercial spyware. This follows a voluntary code of practice signed by 21 countries aimed at regulating cyber intrusion capabilities and curbing abuses targeting civil society.
Human rights advocates have lauded the move as a bipartisan effort toward responsible spyware governance.
Quote:
“Human rights advocates praise the move as a bipartisan step toward responsible spyware governance.”
— Dave Bittner [07:45]
Data Breach at Laboratory Services Cooperative
Laboratory Services Cooperative (LSC), a nonprofit supporting reproductive health labs, confirmed a data breach affecting 1.6 million individuals. The breach occurred in October 2024, compromising sensitive data such as personal IDs, medical records, and insurance details. Most affected individuals had lab work done through select Planned Parenthood centers.
Quote:
“An investigation is ongoing, with federal law enforcement and cybersecurity experts involved.”
— Dave Bittner [09:15]
Metadata Theft from Amazon EC2 Instances
In March, a threat actor exploited server-side request forgery (SSRF) attacks to steal metadata from unsecured Amazon EC2 instances, as reported by F5 Labs. The attacker targeted EC2-hosted websites that exposed instance metadata, potentially leaking sensitive IAM credentials.
Quote:
“F5 advises migrating from IMDSv1 to IMDSv2 or blocking requests to the metadata IP to mitigate future risks.”
— Dave Bittner [10:30]
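To make the recommended mitigation concrete, here is a minimal sketch (not from the episode; the helper functions are invented for illustration) of why moving to IMDSv2 blunts this class of SSRF. IMDSv1 answers a single unauthenticated GET, while IMDSv2 first requires a PUT request with a TTL header to mint a session token, something a typical attacker-controlled GET URL cannot issue. The IP, endpoint, and header names are AWS's documented IMDS interface.

```python
# Sketch: IMDSv1 vs IMDSv2 request shapes, as plain data for clarity.
IMDS_IP = "169.254.169.254"

def imdsv1_request(path):
    # IMDSv1: one unauthenticated GET is enough, which is exactly
    # what a server-side request forgery primitive can produce.
    return {"method": "GET", "url": f"http://{IMDS_IP}{path}", "headers": {}}

def imdsv2_requests(path, ttl=21600):
    # IMDSv2: step 1 is a PUT to mint a session token; step 2 is a GET
    # that must carry that token. SSRF bugs that only let an attacker
    # control a GET URL cannot complete step 1.
    token_req = {
        "method": "PUT",
        "url": f"http://{IMDS_IP}/latest/api/token",
        "headers": {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    }
    data_req = {
        "method": "GET",
        "url": f"http://{IMDS_IP}{path}",
        "headers": {"X-aws-ec2-metadata-token": "<token from step 1>"},
    }
    return [token_req, data_req]
```

Enforcing IMDSv2 on an instance (for example via the EC2 metadata options' "tokens required" setting) makes the single-GET path on the left simply stop answering.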
WordPress OttoKit Plugin Vulnerability
A critical vulnerability in the OttoKit WordPress plugin, installed on over 100,000 sites, is being actively exploited. Security firm Defiant reports that the vulnerability allows attackers to bypass authentication and create admin accounts on unconfigured sites by exploiting a missing empty-value check in API key validation.
Quote:
“Users are urged to update to the latest version to patch the flaw.”
— Dave Bittner [11:15]
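The "missing empty-value check" pattern behind this bug is common enough to sketch in a few lines. This Python toy is not the plugin's actual code (the function names are invented for illustration); it shows how comparing a client-supplied key against a stored key that is empty on an unconfigured site lets an empty submission "match", and how an explicit emptiness check closes the hole.

```python
import hmac

def check_api_key_vulnerable(provided: str, stored: str) -> bool:
    # Bug pattern: on an unconfigured site the stored key is "",
    # so an attacker who simply sends an empty key passes the check.
    return provided == stored

def check_api_key_fixed(provided: str, stored: str) -> bool:
    # Refuse authentication outright until a key has been configured,
    # then compare in constant time to avoid timing side channels.
    if not stored or not provided:
        return False
    return hmac.compare_digest(provided, stored)
```

The fix is not clever cryptography; it is the single guard clause rejecting the never-configured state.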
Ivanti Products Remote Code Execution Flaw
A newly published analysis reveals a critical, unauthenticated remote code execution (RCE) flaw affecting Ivanti products, including Connect Secure, Policy Secure, Pulse Connect Secure, and ZTA gateways. The vulnerability stems from a stack-based buffer overflow in the web server binary via the X-Forwarded-For header.
Ivanti patched Connect Secure in February, with other product updates scheduled for April. Notably, Pulse Connect Secure is no longer supported. Given the public proof of concept and active exploitation, urgent patching or mitigation is critical.
Quote:
“Given the public proof of concept and active exploitation, urgent patching or mitigation is critical.”
— Dave Bittner [12:00]
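While patching remains the real fix, the exploit's constraint (payloads built only from digits and periods, delivered in an overlong X-Forwarded-For header) suggests a simple edge-layer pre-filter. A hedged sketch, assuming you operate a reverse proxy or WAF where headers can be inspected; the length cap is an arbitrary illustrative threshold, not an Ivanti or vendor recommendation.

```python
import re

# Generous upper bound for a legitimate chain of client IPs; tune per environment.
MAX_XFF_LEN = 256

# A plausible X-Forwarded-For entry is a dotted-quad IPv4 address.
# (Real deployments may also need to allow IPv6 literals.)
_IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def xff_suspicious(header_value: str) -> bool:
    """Flag X-Forwarded-For values that look like overflow payloads
    rather than comma-separated client IP chains."""
    if len(header_value) > MAX_XFF_LEN:
        return True
    parts = [p.strip() for p in header_value.split(",")]
    return not all(_IPV4_RE.match(p) for p in parts)
```

A filter like this only narrows the attack surface at the edge; it does not remove the underlying overflow, so it complements rather than replaces the vendor patch.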
Discussion Highlights:
In a candid conversation, Johannes Ulrich addresses the burgeoning trend of Vibe coding and Vibe security, where AI systems are leveraged to generate code and security measures based on problem descriptions without deep human understanding.
Notable Quotes:
“If you don't know how to code, if you don't know what proper security looks like, how do you know if that firewall rule set that AI came up with [...] are actually correct?”
— Johannes Ulrich [16:30]
“They have to understand what's happening there. [...] You just blame the AI.”
— Johannes Ulrich [21:17]
Key Takeaways:
- Treat AI as a partner, not a replacement; the developer remains in charge and must be able to review what the AI produces.
- AI output is only as good as the specifications and prompts behind it: garbage in, garbage out.
- Using AI both to generate code and to review it for flaws erodes methodological diversity and makes errors harder to catch.
- AI-generated answers often sound plausible while missing the real context, so human expertise is still needed to verify results.
Conclusion of Interview:
Ulrich underscores the necessity for developers to remain knowledgeable and vigilant when integrating AI into their workflows, ensuring that AI serves as an assistant rather than a replacement.
The episode of CyberWire Daily underscores a critical juncture in cybersecurity, marked by institutional changes, escalating state-sponsored threats, significant data breaches, and the complex integration of AI in security practices. As CISA undergoes substantial downsizing, the industry faces mounting challenges from sophisticated hacking groups and evolving vulnerabilities. Meanwhile, the conversation with Johannes Ulrich highlights the indispensable role of human expertise in an increasingly AI-driven cybersecurity landscape.
For a comprehensive overview of today's stories and further insights, listeners are encouraged to visit the CyberWire daily briefing at thecyberwire.com.
Produced by N2K Networks