A
You're listening to the CyberWire network, powered by N2K.
B
AI is changing how enterprises operate and how they stay protected. It's time to eliminate risk and protect innovation. From March 23rd through the 26th, join Trend Micro for actionable AI security insights. Catch impactful sessions at RSAC, then unwind and grab a bite at their lounge. Experience industry-leading AI security in person, engage with the experts, and get your chance to win $500,000. San Francisco, let's AI fearlessly. Learn more at trendmicro.com/RSAC. The EU imposes sanctions after cyber attacks. DHS boosts surveillance spending. AI firms recruit weapons-risk experts. Stryker says their disruption led to no patient impact. LeakNet leans on ClickFix. Sears chatbot spills data. A Chinese security firm leaks a private key. Tech giants team up on scams. Teens sue xAI over alleged AI-generated abuse. On today's Threat Vector segment, David Moulton and guest Erica Shumate, founder of the EN Strategy Group, explore how AI is fundamentally reshaping the security landscape. And cyber crooks cause a complimentary curbside convenience.
B
It's Tuesday, March 17th, 2026. I'm Dave Bittner, and this is your CyberWire intel briefing. Thanks for joining us here today. It's great to have you with us. The European Union has imposed targeted sanctions on three foreign companies and two individuals linked to cyber attacks against its member states. The measures affect China-based Integrity Technology Group and Anxun Information Technology, along with Iran-based Emennet Pasargad. EU officials say Integrity facilitated the compromise of more than 65,000 devices across six countries between 2022 and 2023. Anxun allegedly provided hacking services targeting critical infrastructure, while its co-founders were also designated. Emennet Pasargad is accused of breaching a French database, selling the data on the dark web, and conducting disinformation operations during the 2024 Paris Olympics. The sanctions prohibit EU entities from providing financial resources and impose travel bans on individuals. The EU's cyber sanctions regime now covers 19 individuals and seven entities, reflecting a broader response to escalating global cyber threats. The Department of Homeland Security is preparing a major expansion of surveillance technology spending in 2026, with contract forecasts outlining hundreds of millions of dollars for enhanced detection and tracking systems. This includes a $1 billion agreement with Palantir and additional investments in AI-driven platforms, mobile surveillance tools, and data extraction technologies. Officials and advocacy groups say increased funding, including a $191 billion package passed in 2025, has significantly accelerated these efforts. Critics argue oversight has not kept pace. Lawmakers and watchdogs have raised concerns about civil liberties risks tied to tools capable of facial recognition, phone data extraction, and large-scale monitoring. Questions have also emerged about transparency, as privacy impact assessments declined sharply and none have been filed this year.
Internal tensions are also surfacing. The DHS inspector general alleges the agency has obstructed oversight efforts, while lawmakers continue to push for investigations and limits on surveillance authorities. Anthropic is seeking a chemical, weapons, and explosives expert to help prevent what it calls catastrophic misuse of its AI tools, amid concerns they could reveal how to build dangerous weapons. The role requires experience in weapons defense and knowledge of radiological devices. OpenAI has posted a similar position, reflecting a broader industry trend. While companies frame these hires as safety measures, some experts warn they may introduce new risks by exposing AI systems to sensitive weapons knowledge. Critics also highlight the lack of international regulation governing AI and weapons-related information, raising concerns about oversight as the technology continues to advance. Stryker says a recent cyber attack was contained to its internal Microsoft environment and triggered a mass device wipe, disrupting operations but not products or patient safety. The company reports that tens of thousands of employee devices were remotely erased after attackers gained administrator access and used Microsoft Intune to issue wipe commands. Investigators found no evidence of malware deployment or data exfiltration, despite claims by the Handala group that it destroyed over 200,000 systems and stole data. Electronic ordering remains offline, forcing manual processing while restoration efforts continue. The incident shows how compromised identity and cloud management tools can cause large-scale disruption without ransomware, according to Stryker and investigators. Medical devices were unaffected, and recovery is underway. LeakNet ransomware is using a ClickFix social engineering lure and a legitimate Deno runtime to gain initial access and execute malware in memory, reducing detection.
Researchers at ReliaQuest report that victims are tricked into running malicious scripts, which deploy Deno, a signed JavaScript runtime, to execute payloads directly in memory. This bring-your-own-runtime approach helps bypass security controls and leaves minimal forensic evidence. Once active, the malware fingerprints the system, connects to command-and-control infrastructure, and enables follow-on actions like credential theft, automation, lateral movement, and data exfiltration via Amazon S3. Attackers are increasingly abusing trusted tools to evade defenses, according to ReliaQuest. Consistent behaviors like unusual Deno use or abnormal PsExec activity may help defenders detect these attacks. Millions of customer interactions with Sears Home Services' AI chatbot, Samantha, were exposed in publicly accessible databases, according to security researcher Jeremiah Fowler. The data included 3.7 million chat logs, 1.4 million audio files, and transcripts containing sensitive customer details like names, addresses, phone numbers, and appliance information. Some recordings captured hours of ambient audio after calls ended, potentially exposing private conversations. The databases, owned by Transformco, were secured after disclosure, but it remains unclear how long they were exposed or if others accessed them. Exposed service data can enable targeted phishing and fraud. Researchers warn that rapid AI adoption without strong data protections increases privacy and reputational risk for companies handling large volumes of customer interactions. Chinese security firm Qihoo 360 reportedly exposed a sensitive wildcard SSL private key inside the public installer for its 360 Security Claw AI assistant, creating serious security risks. Researcher Lukas Elezhnik found the key embedded in an uncompressed archive, allowing anyone to extract it and potentially authenticate as the company's servers.
The certificate, valid until 2027, covers all subdomains, meaning attackers could impersonate services, intercept traffic, or launch convincing phishing campaigns. The issue is notable given Qihoo 360's role as a major cybersecurity provider with hundreds of millions of users. Leaked private keys undermine core Internet trust mechanisms, according to available reports. The company has not yet revoked the certificate or issued a public response, leaving potential exposure unresolved. Google and major tech companies have signed the Industry Accord Against Online Scams and Fraud at the UN Global Fraud Summit, aiming to coordinate defenses against increasingly sophisticated global scam networks. The agreement brings together firms like Amazon, Microsoft, and Meta to share threat intelligence and align efforts. Google also plans to expand its $15 million investment with AI-driven detection tools, increased collaboration with law enforcement, and initiatives like the Global Signal Exchange. Scams are becoming more organized and cross-border, requiring unified industry and government responses to reduce financial and emotional harm. Three teenage girls have filed a lawsuit against Elon Musk's xAI, alleging its Grok image generator was used to create and distribute AI-generated child sexual abuse material using their photos. The complaint says altered nude images of the minors were shared on platforms like Discord and Telegram without consent, with one case leading to a suspect's arrest after CSAM was found on his device. Plaintiffs allege the content was generated through a third-party app using Grok's technology, arguing xAI still bears responsibility because it licenses and powers the system. The case highlights growing risks of AI-generated exploitation and questions platform accountability. According to the lawsuit, xAI failed to prevent misuse despite known risks, contributing to reputational and psychological harm. The company has not publicly responded.
Coming up after the break on today's Threat Vector segment, David Moulton speaks with Erica Shumate about how AI is fundamentally reshaping the security landscape. And cyber crooks cause a complimentary curbside convenience. Stick around. No, it's not your imagination. Risk and regulation really are ramping up, and these days customers expect proof of security before they'll even do business. That's where Vanta comes in. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're getting ready for a SOC 2 or managing an enterprise governance, risk, and compliance program, Vanta helps keep you secure and keeps your deals moving. Companies like Ramp and Rytr spend 82% less time on audits with Vanta. That means less time chasing paperwork and more time focused on growth. For me, it comes down to this: over 10,000 companies, from startups to large enterprises, trust Vanta to help prove their security. Get started at vanta.com/cyber. Most environments trust far more than they should, and attackers know it. ThreatLocker solves that by enforcing default deny at the point of execution. With ThreatLocker Allowlisting, you stop unknown executables cold. With Ringfencing, you control how trusted applications behave. And with ThreatLocker DAC, Defense Against Configurations, you get real assurance that your environment is free of misconfigurations and clear visibility into whether you meet compliance standards. ThreatLocker is the simplest way to enforce zero trust principles without the operational pain. It's powerful protection that gives CISOs real visibility, real control, and real peace of mind. ThreatLocker makes zero trust attainable even for small security teams. See why thousands of organizations choose ThreatLocker to minimize alert fatigue, stop ransomware at the source, and regain control over their environments. Schedule your demo at threatlocker.com/N2K today.
On today's segment from the Threat Vector podcast, host David Moulton sits down with Erica L. Shumate, founder of the EN Strategy Group. They're exploring how AI is fundamentally reshaping the security landscape.
C
Hi, I'm David Moulton, host of the Threat Vector podcast, where we break down cybersecurity threats, resilience, and the industry trends that matter the most. What you're about to hear is a snapshot from my conversation with Erica Shumate, a public policy strategist and former FBI intelligence analyst who spent years working at the intersection of national security, AI, and technology ethics. Erica brings something rare to this conversation: she's lived on both sides. She's worked counterterrorism, counterintelligence, and crimes against children as a federal analyst. Then she moved into big tech. She's seen how policy gets treated as an afterthought in social media, where speed is the priority, and she has a clear point of view on what it costs us. We talked about who actually holds power when AI compresses decision time, why siloing engineering from ethics is a liability, and what the next generation of security leaders needs to think about beyond technical skill. Erica, welcome to Threat Vector. I'm really excited to have you here and have been looking forward to this conversation since we started planning it.
A
Same. I am very, very excited to be here today, to be able to just have a conversation, and I hope that your audience finds it very valuable.
C
I know when I was looking at your background, I was impressed by your time in the intelligence community, and then how you shifted that service into the private sector, helping a number of different companies think about AI and cybersecurity and that intersection where things come together, even going into national security. Could you talk to me a little bit about that journey? Two sides, but kind of the same mission.
A
For me, my North Star is always thinking about the human first and what human-centered design is. My whole mission is working at the intersection of where people and technology collide. And when I look back at the work that I've done and walk through that path, for me it's been having grown up in the FBI, in the US intelligence community. That work started very early on, where I was focused very much on national security and criminal matters, from counterterrorism, counterintelligence, and transnational organized crime to kidnappings and crimes against children. I've literally worked a gamut of different programs. What was very unique for me is that when I came into the FBI, I was in a very small satellite field office, and there I had the opportunity to work all the things that I'm telling you about at any hour of the day, any day of the week. I could be working varied matters just because of the nature of how the office was set up and also the location of where I was. And that really set this very young, naive professional up to be able to, what I would say, dip my toe into a bit of everything, and actually understand it and do it very well. Because I understood that no matter what I was working, the analytic tradecraft in itself is the same, even though the threats and all of the emerging trends might be different. That piece, to me, was the same across all of it. And so that's what I think about, taking myself back to the beginning of my career in national security.
C
So it sounds like what you developed was a framework for dealing with threats, or, you know, assessing risk, and then you could apply that to different domains or specific instances. Am I understanding it right?
A
Yeah, you're 100% right.
C
Well, let's shift away from that sort of environment that you grew up in, and some of the national security risks of that post-9/11 era, to another big thing that's hitting with a ferociousness: AI. I'm curious how you react to, and what you think stands out most about, how AI is being integrated into national defense and into cybersecurity. What stands out for you right now?
A
Great question. What stands out for me most is that AI is really being operationalized in national defense and cybersecurity, quite frankly, before we've even fully internalized how it changes the threat dynamics. We're not just automating tasks, we're automating judgment under real pressure. Right? You have these additional points that you have to layer in. AI compresses time. Think about detection, think about decision-making and response. They all move faster because of AI, which can be a good thing. But then on the flip side, you have to think about how your adversaries benefit from this too, especially non-state actors. And legacy cyber frameworks assume human-pace escalation. AI breaks that entire assumption of what is possible and what is not possible.
C
So in real-world security operations, I'm curious, how do you ensure that the ethical principles survive the pressures of mission urgency, or that hot threat response that's going on?
A
Great question. Ethics. Love it. Ethics don't survive because people are good. That's what people want to believe, but that's just not how it works. They survive because systems enforce them. So, holding the accountability piece: ethics must be embedded into our workflows, and accountability must be predefined. What is that criteria? What is accountability? If I do this, or the system does this, then what is the consequence? What am I being held to if this thing fails, as the person who is leading it? Pressure testing before real-world deployment is also part of what we need to always keep top of mind. And when we think about tools and processes, we want to think about, again, the human. Human-in-the-loop for high-impact decisions is a must, it is a non-negotiable, and we have to really think about that. Kill switches and escalation protocols are also necessary. Again, we're dealing with what we talked about earlier, fast technology. We have to have a way to say, we've got to kill it now. Even if you're thinking, oh my gosh, this is going to cost so much, we've got to do the right thing and think about that part later, because there are real people in front of this technology. Post-incident reviews that focus on learning, not blame, are where we keep ethics at the center instead of the finger-pointing. Right? It's so easy to look for someone to fall on the sword when what we really want is to think about the lessons learned. Particularly, again, when we talk about not if it's going to happen, but when. If we're working from that standpoint from the beginning, we can always continue to have our after-action post mortem, where our people still believe that this company is doing the right thing and we followed all the steps, and if we didn't, what was the mishap and why. Being able to lean into that is what I believe people care about the most, too.
C
This one is worth your full attention, especially if you're making decisions about AI deployment or trying to close the gap between your security posture and your governance structure. The episode is called "Who Holds Power When AI Compresses Decision Time," and it's live now in your Threat Vector feed. Thanks for listening. Stay secure. Goodbye for now.
B
Be sure to check out the complete episode of Threat Vector wherever you get your favorite podcasts, or on our website, thecyberwire.com. You could rebuild your network from scratch to make it more secure, scalable, and simple. Meet Meter, the company reimagining enterprise networking from the ground up. Meter builds full-stack zero trust networks, including hardware, firmware, and software, all designed to work seamlessly together. The result? Fast, reliable, and secure connectivity without the constant patching, vendor juggling, or hidden costs. From wired and wireless to routing, switching, firewalls, DNS security, and VPN, every layer is integrated and continuously protected in one unified platform. And since it's delivered as one predictable monthly service, you skip the heavy capital costs and endless upgrade cycles. Meter even buys back your old infrastructure to make switching effortless, transform complexity into simplicity, and give your team time to focus on what really matters: helping your business and customers thrive. Learn more and book your demo at meter.com/cyberwire. That's M-E-T-E-R dot com slash cyberwire. Cyber threats strike. Minutes matter. Booz Allen brings the same battle-tested expertise trusted to protect national security to defend today's leading global organizations. They safeguard their data, strengthen enterprise resilience, and mobilize in minutes across energy, healthcare, financial services, and manufacturing. Their teams don't just respond, they anticipate, outthink, and stay ahead of evolving threats. This is powerful protection for commercial leaders, only from Booz Allen. See how your organization can prepare today at boozallen.com/commercial. And finally, drivers in Perm, Russia got an unexpected perk this week: free parking, courtesy of a cyber attack. Rather than civic generosity, a large-scale DDoS attack overwhelmed the city's parking payment systems, knocking the Perm parking portal offline and making it impossible to pay.
Officials responded pragmatically, suspending enforcement and effectively turning paid zones into a temporary free-for-all, with hopes of restoring the service soon. The incident is a reminder that when attackers flood systems with traffic, even routine services can grind to a halt, and disruptions like this can ripple into daily life, sometimes with oddly welcome side effects, according to local authorities. Although the outage was caused by a massive DDoS attack, local drivers may remember it more fondly than most cybersecurity incidents. And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes, or send an email to cyberwire@n2k.com. N2K's lead producer is Liz Stokes. We're mixed by Tré Hester, with original music and sound design by Elliott Peltzman. Our contributing host is Maria Varmazis. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.
B
If you only attend one cybersecurity conference this year, make it RSAC 2026. It's happening March 23rd through the 26th in San Francisco, bringing together the global security community for four days of expert insights, hands-on learning, and real innovation. I'll say this plainly: I never miss this conference. The ideas and conversations stay with me all year. Join thousands of practitioners and leaders tackling today's toughest challenges and shaping what comes next. Register today at rsaconference.com/cyberwire26. I'll see you in San Francisco. When it comes to mobile application security, good enough is a risk. A recent survey shows that 72% of organizations reported at least one mobile application security incident last year, and 92% of respondents reported threat levels have increased in the past two years. Guardsquare delivers the highest level of security for your mobile apps without compromising performance, time to market, or user experience. Discover how Guardsquare provides industry-leading security for your Android and iOS apps at www.guardsquare.com.
This episode of CyberWire Daily explores a wave of fresh international and industry responses to escalating cyber threats. Key coverage includes: the EU’s newly instituted sanctions on foreign hackers, expanded US surveillance spending, missteps in corporate data protection, and the ways AI is both a tool and a target in cybersecurity. The episode also features a candid discussion between David Moulton and Erica Shumate on AI’s profound impact on security operations and ethical risk management.
[01:58–02:50] EU imposes cyber sanctions on three companies and two individuals
[02:51–03:46] DHS boosts surveillance technology spending
[03:47–04:25] AI firms recruit weapons-risk experts
[04:26–05:25] Stryker cyberattack triggers mass device wipe, no patient impact
[05:26–06:16] LeakNet ransomware leans on ClickFix and the Deno runtime
[06:17–07:02] Sears Home Services chatbot data exposure
[07:03–08:00] Qihoo 360 leaks a wildcard SSL private key
[08:01–08:43] Tech giants sign anti-scam accord at UN Global Fraud Summit
[08:44–09:40] Teens sue xAI over alleged AI-generated abuse
[14:44–23:27] David Moulton interviews Erica Shumate, EN Strategy Group founder and ex-FBI intelligence analyst
“AI compresses time. Think about detection, decision-making, and response. They all move faster because of AI—which can be a good thing. But on the flip side, you have to think about how your adversaries benefit from this too.”
— Erica Shumate [19:36]
“Ethics don't survive because people are good … They survive because systems enforce them.”
— Erica Shumate [20:59]
“Kill switches and escalation protocols are also necessary. We're dealing with fast technology. We have to have a way to be like, ‘We got to kill it now.’ Even if you're like, oh my gosh, this is going to cost so much, we got to do the right thing and think about that part later—because there are real people in front of this technology.”
— Erica Shumate [21:20]
“The EU’s cyber sanctions regime now covers 19 individuals and 7 entities, reflecting a broader response to escalating global cyber threats.”
— Dave Bittner [02:47]
On the Sears chatbot leak: “Some recordings captured hours of ambient audio after calls ended, potentially exposing private conversations.”
— Dave Bittner [06:55]
On the Perm, Russia DDoS: “Rather than civic generosity, a large scale DDoS attack overwhelmed the city's parking payment systems, knocking the Perm parking portal offline and making it impossible to pay ... effectively turned paid zones into a temporary free-for-all.”
— Dave Bittner [27:44]
This episode captured the rapid escalation of both technological risks and policy responses in the cyber realm, with a focus on international sanctions, the security/ethics divide in AI-driven environments, and the persistent vulnerability of even major organizations. Erica Shumate’s insights spotlighted the urgency of embedding robust, enforceable ethics—backed by accountability and human oversight—as AI accelerates the tempo of cyber offense and defense.