A
You're listening to the CyberWire Network, powered by N2K.

Maybe that's an urgent message from your CEO. Or maybe it's a deepfake trying to target your business. Doppel is the AI-native social engineering defense platform fighting back against impersonation and manipulation. As attackers use AI to make their tactics more sophisticated, Doppel uses it to fight back. From automatically dismantling cross-channel attacks to building team resilience and more, Doppel is outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L dot com.

The former NSA chief says the US can beat China in cyberspace. Canvas cuts a deal with hackers. The FCC proposes KYC rules for phone users. SAP patches critical flaws. A poisoned TanStack npm supply chain attack spreads malware. Humanitarian aid lures deliver spyware. Japan launches an AI-driven cyber review. Texas sues Netflix over data practices. Harvard experts debate the future of agentic AI security. On our Threat Vector segment, David Moulton welcomes Assaf Keren, CSO at Qualtrics and author of Lessons from the Front Lines. Our guest is Tim Starks from CyberScoop, discussing changes to the Cyber Corps Scholarship Program. And the Gentlemen's guide to awful OPSEC.

It's Tuesday, May 12, 2026. I'm Dave Bittner, and this is your CyberWire Intel Brief. Thanks for joining us here today. It's great as always to have you with us.

Former NSA and US Cyber Command leader Timothy Haugh says China's long-running cyber campaign against the United States is serious but far from unbeatable. In a New York Times editorial, he points to intrusions tied to groups like Volt Typhoon and Salt Typhoon, which targeted utilities, telecommunications networks and senior officials. Haugh argues the United States already holds a major advantage through its private sector. American cybersecurity firms, cloud providers and telecom companies operate at unmatched global scale and can often detect malicious activity faster than governments.
He cites a recent Google disruption of a Chinese espionage campaign as proof that industry-led action can work quickly and effectively. Haugh believes voluntary cooperation is no longer enough. He calls for clearer laws authorizing companies to disrupt foreign cyber operations, more funding for critical infrastructure defense, and stronger public consequences for Chinese cyber activity, including sanctions and coordinated disruption efforts. He also warns that U.S. Cyber Command remains underfunded relative to the scale of the threat.

Instructure, the company behind the widely used Canvas learning platform, says it reached an agreement with the Shiny Hunters extortion group after a cyberattack disrupted services at roughly 9,000 educational institutions across the U.S., Canada, Australia and the U.K. The attackers claim to have stolen 3.5 terabytes of university and student data and threatened to publish it online unless a ransom was paid. Instructure says the agreement resulted in the return of data, digital confirmation of its destruction and assurances that affected institutions and students would not face further extortion. The incident interrupted exams and coursework for many students. Security experts and law enforcement agencies generally discourage ransom payments because attackers may still retain or resell stolen data. Instructure said its priority was protecting customer information and minimizing harm to students and schools.

The Federal Communications Commission is proposing new know-your-customer rules aimed at reducing illegal robocalls, but critics warn the changes could create major privacy concerns and effectively end anonymous burner phones in the United States. Under the proposed rules, prepaid phone customers could be required to provide government-issued identification, a physical address, a legal name and an existing phone number before receiving service.
The FCC is also considering behavioral red flags, including cryptocurrency payments, virtual office addresses and suspicious websites or email accounts. The FCC says telecom providers are best positioned to stop illegal calls before they reach consumers. But civil liberties advocates argue the plan could expand surveillance and make it harder for vulnerable people, including abuse survivors and refugees, to access anonymous communications. Proposed enforcement measures could fine telecom providers $2,500 per illegal call, creating strong incentives for aggressive customer monitoring.

SAP has released 15 new security notes for its May 2026 Security Patch Day, including two critical vulnerabilities with a CVSS score of 9.6 affecting S/4HANA and SAP Commerce. The S/4HANA flaw is an SQL injection vulnerability that could allow authenticated attackers to access sensitive data. A second issue affects SAP Commerce and could enable unauthenticated attackers to upload malicious configurations and execute arbitrary server-side code. SAP also patched a high-severity OS command injection flaw in Forecasting and Replenishment, along with additional medium- and low-severity bugs across multiple products. SAP says there's no evidence these vulnerabilities are being actively exploited, but customers are urged to apply patches quickly.

Attackers published 84 malicious versions of official TanStack npm packages in a six-minute supply chain attack that exposed developers to credential theft, self-propagating malware and potential disk wiping. Researchers say the attackers exploited a GitHub Actions cache-poisoning weakness to steal npm publishing tokens without compromising TanStack maintainers directly. The malicious packages, uploaded on May 11, were removed within roughly 30 minutes of detection by StepSecurity. Analysis from Socket and StepSecurity found the malware searched more than 100 locations for cloud credentials, SSH keys, crypto wallets and developer secrets.
Researchers also identified a deadman switch that could wipe an infected system if stolen GitHub tokens were revoked. The incident highlights ongoing risks in software supply chains and the danger posed by routine package installation commands in developer environments.

Researchers at Cyble Research and Intelligence Labs have identified a new espionage campaign, called Operation Humanity, that uses fake humanitarian aid documents to deliver Python-based spyware to Russian-speaking targets. The attack begins with phishing emails carrying a malicious shortcut file hidden inside a RAR archive. The malware uses PowerShell and fileless execution techniques to evade automated detection while displaying a decoy PDF related to humanitarian assistance. Researchers say the spyware is hosted through GitHub releases and heavily obfuscated using PyArmor. Once installed, the malware can steal browser credentials, Telegram session data, cryptocurrency wallets and screenshots, while also logging keystrokes and enabling remote desktop access through RustDesk or AnyDesk. The campaign demonstrates how attackers are increasingly blending trusted services, social engineering and stealth-focused malware to maintain long-term access and evade security tools.

Japanese Prime Minister Sanae Takaichi has ordered a government-wide cybersecurity review following concerns that advanced artificial intelligence models, including Anthropic's bug-hunting system Mythos, could dramatically increase the speed and scale of cyberattacks. Takaichi directed Cybersecurity Minister Hisashi Matsumoto to assess whether government agencies and critical infrastructure operators can effectively detect and remediate vulnerabilities. The move reflects growing concern that AI systems capable of rapidly identifying software flaws may also help attackers automate exploitation efforts.
The announcement follows broader warnings from regulators and security experts worldwide that frontier AI models could reshape the cyber threat landscape. Some researchers, however, argue Mythos does not uncover vulnerabilities beyond human capability and may not significantly outperform existing open source tools. Governments are increasingly treating AI-enabled cyber risk as a national security issue requiring policy-level responses and infrastructure readiness.

Texas Attorney General Ken Paxton has sued Netflix, alleging the streaming company collected and shared sensitive user data with advertisers, data brokers and ad tech firms without meaningful consent from subscribers. The lawsuit claims Netflix tracked viewing habits, device information, locations and behavioral data from both adults and children, despite past public statements from company leadership suggesting the platform did not engage in advertising-related data collection. Texas also alleges Netflix combined user demographics, IP-based location data and viewing activity to build detailed advertising profiles. The state is seeking financial penalties and a court order barring what it describes as unlawful data collection practices. Texas also wants Netflix to disable autoplay by default on children's profiles. The case highlights growing scrutiny of how streaming platforms collect, analyze and monetize behavioral data, particularly involving children's accounts and targeted advertising ecosystems.

Cybersecurity researchers and policy experts say increasingly autonomous agentic AI systems could transform both cyber defense and cybercrime, raising urgent questions about regulation, liability and national security. During a discussion hosted by Harvard's Berkman Klein Center, experts pointed to rising AI-assisted cyberattacks, including phishing campaigns and software exploitation efforts that can rapidly identify vulnerabilities.
IBM data cited during the panel showed attacks targeting public-facing applications rose 44% year over year in 2026. Panelists argued businesses and governments need clearer security standards and liability frameworks before AI-driven threats escalate. Further concerns included AI-enhanced phishing, autonomous cyber retaliation and the difficulty of securing sprawling software ecosystems. At the same time, researchers said agentic AI could strengthen defenses by detecting fraud patterns and suspicious behavior in real time.

Coming up after the break on our Threat Vector segment, David Moulton welcomes Assaf Keren, CSO at Qualtrics and author of Lessons from the Front Lines. Tim Starks from CyberScoop discusses changes in the Cyber Corps Scholarship Program and the Gentlemen's guide to awful OPSEC. Stay with us.

When it comes to mobile application security, good enough is a risk. A recent survey shows that 72% of organizations reported at least one mobile application security incident last year, and 92% of respondents reported threat levels have increased in the past two years. Guardsquare delivers the highest level of security for your mobile apps without compromising performance, time to market or user experience. Discover how Guardsquare provides industry-leading security for your Android and iOS apps at www.guardsquare.com.

No, it's not your imagination. Risk and regulation are ramping up. Customers expect proof of security just to do business. That's where Vanta comes in. Vanta automates your compliance process and brings compliance, risk and customer trust together on one AI-powered platform. Whether you're preparing for a SOC 2 or managing an enterprise GRC program, Vanta helps keep you secure and your deals moving. Companies like Ramp and RYTR report spending 82% less time on audits. That's not just faster compliance, that's more time to focus on growth.
When I look around the industry, I see over 10,000 companies, from startups to big enterprises, trusting Vanta. Get started at vanta.com/cyber.

On our latest Threat Vector segment, host David Moulton welcomes Assaf Keren, CSO at Qualtrics. He's also the author of Lessons from the Front Lines.
B
Hi, I'm David Moulton, host of the Threat Vector podcast, where we break down cybersecurity threats, resilience, and the industry trends that matter most. What you're about to hear is a snapshot from my conversation with Assaf Keren, SVP and Chief Security Officer at Qualtrics, and the author of a new book, Lessons from the Front Lines, out now from Wiley. AI is the most powerful tool defenders have ever had. It is also the most capable weapon attackers have ever had, and right now, attackers are using it. Assaf has spent 25 years protecting some of the world's most targeted digital environments, from the Israeli government to PayPal to Qualtrics. He's not speculating about what AI-powered attacks look like. He's watching them happen. In this episode, we get into why AI doesn't just lower the barrier to entry for attackers, it removes the ceiling; what prompt injection actually means for defenders, and why it's different from any threat most teams are built to handle; and why, the moment your organization deploys an AI tool, your threat model has to change immediately. Assaf has a phrase that stuck with me: the moment you bring AI into your environment, you have less slack. You can't skip steps. Most organizations are skipping steps. If your security program hasn't caught up to what AI means for your attack surface, this is the conversation to start with. Assaf, welcome to Threat Vector. I'm really glad to have you here. I know there had been some scheduling nonsense, but we finally got it, man. We're finally on the mic together. So let's have a good conversation.
C
Six. Six reschedulings to get to this point, if I count it correctly. But let's go, I'm excited.
B
In your book you wrote about this danger of feeling like you know enough, and how that confidence can quietly become a liability. In a field that's moving as fast as AI security, I think that trap feels easy to fall into. Where do you see that showing up now, specifically with AI?
C
I think that I'm seeing a lot of security teams not understanding how pivotal this moment is, using legacy thinking in making decisions, and maybe defaulting to the default of security teams, which is being the department of no. I think especially there is a gap of knowledge in security teams understanding AI and machine learning. I think it has been there for a while, but with the explosion happening right now, that fear is dangerous, and that lack of curiosity that I'm seeing in a lot of places is bothering me, because I think that we're creating more impact than good when we're doing it.
B
How do you catch yourself from falling into that trap?
C
Sometimes I'm successful, sometimes I'm not. By the way, I don't want to make it sound like I'm always curious, but I do curiosity checkups. I sit down and generally say to myself, what did I miss? There is a friend of mine, Leah, who's the CISO of LinkedIn, and they wrote on LinkedIn something I agree with completely: that there is a superpower in being willing to look like you don't know the answer, willing to ask questions like you're stupid. And sometimes I'm successful, sometimes I'm not. In the day to day, the accelerated day-to-day pace that we're in a lot of the time, it's just easy to come in and say, hey, this is the answer, move on, right? And I do have a good team around me that knows to challenge me when I'm that way and tell me, Assaf, you're wrong here, let's have a conversation. And that's really, really humbling, and it's great to have that support structure.
B
Assaf, one of the things that you may have noticed, and I certainly have, and it's counterintuitive to think this way, is that there's a lot of focus on AI, and I think that's warranted. On the other hand, have we pulled so much of our focus away from some of the basics? It seems like we need to be able to go in and deal with the discipline and grit work that isn't all that sexy and new, but needs to be done, such that the attack you're talking about isn't so damned easy.
C
Yes, yes, thank you for that. We need to say this more. There are two good solutions for AI attacks. One is minimization. If it doesn't need to be on the Internet, it shouldn't be on the Internet. If it doesn't need to be on the endpoint, it doesn't need to be on the endpoint. If it doesn't need to be a package in the source repo, it shouldn't be there. And we have been in a world where we're maximizing things we need to minimize. We need to reduce the attack surface to a point where the attack is not possible, not get to the point where we're trying to defend a growing attack surface. And the other is baseline boring architecture. We need to do identity right, we need to do data right, we need to do scoping right, we need to do network segmentation right, we need to do recovery and BCP right. And these are hard things. And as an industry we've been glossing over them with mitigating controls and good enough. There is no good enough anymore, because what we're doing is even worse than attackers using AI. We're putting AI on top of broken mechanisms. So we're putting a non-deterministic engine on top of a broken deterministic architecture, and that engine can go and do whatever it wants. And our ability to control a non-deterministic engine is very, very low right now. Until we get to the world where there is runtime security for the AI solutions that we provide to our customers, there have to be very strong architectural guardrails at the bottom. And if we put an AI agent on bad identity infrastructure, it will find a way, through prompt injection, through other means, through I don't know what, to get to the data that it wants to get to, or the attacker wants to get to, using our own bad infrastructure. So completely agree with you. There is, in my mind, a whole resurgence of being brilliant at the basics.
B
I want to end on hopefully a positive note.
A
Right.
B
You've written this book about what it takes to lead in this field long term. You're watching everything that's going on with AI right now. Is there anything that gives you confidence that defenders may come out ahead in this era?
C
Yeah. To steal a quote from Phil Venables, I'm a short-term pessimist, long-term optimist. I think that the next couple of years are going to be either hilarious or daunting, depending on who you are. But I think in the end this technology is so exciting that we're going to be able to do something that we've been trying to do for years and years unsuccessfully, which is to free up people to do people work and not manual labor tasks. We already have a deficit in the number of people in the profession, and people are burning out because they need to handle incidents on a day-by-day basis, or copy-paste answers into questionnaires, or do third-party risk management things that don't bring value but are part of the process. And we're going to be able to automate a lot of these processes and reduce the amount of time people are actually doing stuff like vuln triage or incident triage, and have them work on the larger picture. It's going to be much easier. Not easier. It's going to be much more exciting to be a security professional in two years than it is right now, because you're going to work on big-picture stuff more than you are today. And I think that's exciting, and I think we will get ahead of the curve. We need to adopt the technology as fast as attackers. That will not happen. So that's why we have two years of catching up to do. I think we'll catch up.
B
The episode is called AI in the Wrong Hands, and it's live now in your Threat Vector feed. Thanks for listening. Stay secure. Goodbye for now.
A
Be sure to check out the complete Threat Vector podcast wherever you get your favorite shows. It's always my pleasure to welcome back to the show Tim Starks. He is a senior reporter at CyberScoop. Tim, welcome back.
D
Hi, Dave.
A
Really interesting article you've written here about this cybersecurity scholarship program, which I feel like has kind of been through the wringer lately.
D
That's part of what you're.
A
Can you unpack it for us? What's going on?
D
Yeah. I mean, we can start with the wringer, the beginning of the wringer, if you will, which is the Scholarship for Service program, Cyber Corps: the government gives you scholarship funding and then you commit to work for them for a little while. If you've been paying attention to the way things have been going with federal employment of cybersecurity personnel lately, there have not been a lot of jobs. You know, the first shoe to drop on this was a few months back, where I wrote about, and another outlet wrote about, the way in which students that we spoke to were really struggling to fulfill their side of the bargain. They were worried about being left with debt. So that was part one. Then part two was CISA, when it was in its funding lapse, canceling all summer internships that were related to the program.
A
Right.
D
That left them even fewer avenues for completing their commitment, the scholars who were part of this program. Now they're just changing the program. They're making it something else. They're turning it into Cyber AI SFS. They're saying everybody who's going to be entering this program now needs to demonstrate some AI capabilities as students, some proficiencies. And while there was actually a dollop of good news, which we can get to a little later, this was bad news for some of the students, who were like, well, wait, where does this leave me? Because it explicitly said that people entering this program without AI experience would be unemployable within the next two to three years.
A
So just to be clear here, for our listeners' sake: the people who entered this program in good faith, the deal that they were engaging with was that, in exchange for agreeing to work for the federal government for X number of years, they would get scholarship money to continue their studies. Is that a simple way to explain it?
D
Perfectly accurate, yeah.
A
So then the feds say, well, there aren't as many jobs as we thought there would be. In fact, we're trying to cut a lot of jobs, with things like DOGE and other things that have been going on with this administration. And so that leaves these students without the opportunities, but they're still on the hook to pay back the money if they don't find a job in the federal government.
D
Yeah, that's the gist of it. Now, there has been talk of deferments or delays as a way to deal with some of that, you know, people being able to fulfill this commitment within a certain amount of time. But if you look at where the federal budget is going, they're looking to cut CISA even further, just as one avenue of working for the government. You know, they've lost lots of cyber jobs at lots of other agencies as well. There's some talk of them trying to hire some people from positions they'd eliminated, or people that they'd lost. But how confident are you, if you're in this program, that a one-year extension will do the trick? Not terribly.
A
Yeah. So then let's dig into this sort of change of direction here. If I'm a student and I'm midway through my process here, and again in good faith, I've been studying cyber, hoping to come out and enjoy my time with the federal government. This is kind of a reset, right?
D
It is. And so, you know, one of the things that the people who are in the program were telling me this go-round was: where does this leave us on placements? Does this affect us? If we're unemployable according to the program that we're in, if we're legacy, which is the term they used, if we're legacy scholars and we're unemployable, are you going to do anything to help us, with some extra curriculum, with some extra coursework? Now, from the government's standpoint, they say this shouldn't affect placements, but the students haven't heard that from the agencies that are running the program themselves, and they are skeptical for lots of reasons.
A
Yeah. Well, we'll stay tuned on this one, hoping for the best for all those students. Before I let you go, this story kind of slipped under my radar, so I wanted to catch up on it with you. We've got a new person who's running things in the House cybersecurity space, the subcommittee on Cybersecurity and Infrastructure Protection.
D
Yes. The new top Democrat is Delia Ramirez. She's taking over for Eric Swalwell, who has had some issues, to say the least. One or two. I think you could say he was probably an effective person in that position.
A
Yeah.
D
Before his troubles came to light. So she stands to have the same potential to influence things. She's going to be a new voice on cyber that we're now hearing. I think it'll be interesting to see. You know, this is kind of a full turnover of the leadership of that subcommittee, because on the Republican side, once Andrew Garbarino became the full committee chairman, they had to appoint somebody new, and that was Andy Ogles. But Andy. I don't want to use his first name like that, like we're being colloquial. Congressman Ogles has not quite yet demonstrated what his priorities are going to be. He hasn't signaled much of what his focus is going to be. There was a hearing recently where she was there for the hearing longer than he was. So if you look at where the leadership of this committee is, she has a chance to make a difference here, I think.
A
Yeah, I guess that's my question. Is she in a position to really have some influence here? The way that this committee is stacked and packed? Can she matter?
D
I think so. You know, the subcommittee, for the last several iterations of its leadership changes, has been pretty bipartisan in its leadership. I mean, not universally, but pretty bipartisan, certainly by today's congressional standards.
A
Yeah.
D
And Andrew Garbarino has had that style of leadership of the full committee in general. So it's not like they're not moving Democratic bills the way a lot of other committees are just ignoring them and completely neglecting them, kind of shunning Democrats and saying, we're not going to work with you. She doesn't have a cyber background per se. But prior to this, despite being a relatively new lawmaker, she has shown some zeal for these issues when talking in committee hearings, getting into the nitty-gritty of things like Microsoft's handling of Salt Typhoon. So she's not someone who hasn't shown an interest in this. She's shown an interest in it. She's been particularly vocal on things like DOGE and its elimination of people at these agencies. I don't know if she can make as much of a difference there. But if she wants to get her hands dirty legislatively, I think there's room for her to do that and get into some nitty-gritty policy issues. And maybe, depending on how Chairman Ogles is going to be running the committee, I think she has a chance to make a difference for real.
A
All right, well, that's good to hear. We'll have links to both of the stories we talked about today in our show notes. Again, Tim Starks is senior reporter at CyberScoop. Tim, thanks so much for taking the time for us.
D
You're welcome. Thanks, Dave.
A
Most environments trust far more than they should, and attackers know it. ThreatLocker solves that by enforcing default deny at the point of execution. With ThreatLocker Allowlisting, you stop unknown executables cold. With Ringfencing, you control how trusted applications behave. And with ThreatLocker DAC, Defense Against Configurations, you get real assurance that your environment is free of misconfigurations and clear visibility into whether you meet compliance standards. ThreatLocker is the simplest way to enforce zero trust principles without the operational pain. It's powerful protection that gives CISOs real visibility, real control, and real peace of mind. ThreatLocker makes zero trust attainable even for small security teams. See why thousands of organizations choose ThreatLocker to minimize alert fatigue, stop ransomware at the source and regain control over their environments. Schedule your demo at threatlocker.com today.
D
Study and play come together on a Windows 11 PC, and for a limited time, college students get the best of both worlds. Get the unreal college deal: everything you need to study and play with select Windows 11 PCs. Eligible students get a year of Microsoft 365 Premium and a year of Xbox Game Pass Ultimate with a custom color Xbox wireless controller. Learn more at windows.com/studentoffer. While supplies last, ends June 30th, terms at aka.ms/CollegePC.
A
And finally, in a development that might qualify as occupational irony, the ransomware-as-a-service group known as The Gentlemen has itself been hacked, with thousands of lines of internal chats and operational details spilled online. The leaked data reportedly includes discussions about compromised Fortinet credentials, command-and-control tooling, EDR killer software, and even recommended YouTube tutorials for sharpening ransomware skills. Researchers at DynaRisk say the chats provide a rare real-time look inside modern extortion operations, complete with Bitcoin wallet addresses, infrastructure management and debates over fake CVE scripts. The Gentlemen emerged in 2025 and quickly built a reputation for aggressive tactics targeting healthcare, manufacturing and critical infrastructure organizations. Researchers say the group relied heavily on credential theft, living-off-the-land techniques and careful reconnaissance before deploying encryption. The leak exposes both the industrialization and the occasional fragility of modern ransomware operations. Even cybercriminals, it seems, struggle with operational security.

And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com. N2K's lead producer is Liz Stokes. We're mixed by Trey Hester, with original music and sound design by Elliott Peltzman. Our contributing host is Maria Varmazis. Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.
Host: N2K Networks
Air Date: May 12, 2026
This episode covers a wide sweep of urgent cybersecurity issues: from insights on the US approach to China’s aggressive cyber operations, through breaking incidents in ransomware, supply chain attacks, and policy shifts, to in-depth discussions about preparing for AI-driven threats. Key highlights include industry reactions to government proposals, a critical ransomware settlement with education tech giant Instructure, and an exploration of how cybersecurity professionals must adapt to the era of agentic AI, with expert commentary from both Assaf Keren (Qualtrics) and reporter Tim Starks (CyberScoop).
[00:30 – 03:15]
Summary:
Timothy Haugh, former NSA and US Cyber Command chief, argues that while China’s cyber campaigns are formidable, the US is not powerless. He stresses the unique resilience provided by the US private sector:
Policy Suggestions:
Clearer laws authorizing companies to disrupt foreign cyber operations, more funding for critical infrastructure defense, and stronger public consequences for Chinese cyber activity, including sanctions and coordinated disruption efforts.
Quote:
“Haugh argues the United States already holds a major advantage through its private sector. American cybersecurity firms, cloud providers and telecom companies operate at unmatched global scale and can often detect malicious activity faster than governments.” (Host, 01:30)
[03:15 – 12:00]
Canvas/Instructure Ransom Payment to Shiny Hunters:
A cyberattack disrupted services at roughly 9,000 institutions; Shiny Hunters claimed 3.5 TB of stolen student and university data. Instructure says its agreement secured the data’s return and confirmed destruction, though experts generally discourage ransom payments.
Policy Debate:
FCC’s Proposed “Know Your Customer” Phone Rules:
Prepaid customers could be required to provide government-issued ID, a legal name, a physical address and an existing phone number; critics warn of expanded surveillance and the end of anonymous burner phones. Proposed fines: $2,500 per illegal call.
SAP Patch Release:
15 new security notes, including two critical CVSS 9.6 flaws affecting S/4HANA (SQL injection) and SAP Commerce (unauthenticated code execution); no active exploitation reported.
TanStack NPM Supply Chain Attack:
84 malicious package versions published in six minutes via a GitHub Actions cache-poisoning weakness; removed within roughly 30 minutes. The malware hunted credentials across 100+ locations and included a deadman switch.
“Operation Humanity” Spyware Targeting Russian Speakers:
Phishing emails deliver a malicious shortcut in a RAR archive; Python-based, PyArmor-obfuscated spyware hosted on GitHub releases steals credentials and enables remote access via RustDesk or AnyDesk.
[12:00 – 15:38]
Japan’s AI Cybersecurity Review:
Prime Minister Sanae Takaichi ordered a government-wide review over concerns that AI models capable of rapid vulnerability discovery could accelerate the speed and scale of attacks.
Texas Sues Netflix:
Attorney General Ken Paxton alleges Netflix shared viewing and behavioral data with advertisers without meaningful consent; the state seeks penalties and autoplay disabled by default on children’s profiles.
[15:38 – 25:30]
Memorable Quotes & Moments:
On AI Transforming Attacker and Defender Capabilities:
“AI is the most powerful tool defenders have ever had. It is also the most capable weapon attackers have ever had, and right now, attackers are using it.”
(David Moulton, Host, 15:49)
On Security Teams’ Legacy Mindset:
“I’m seeing a lot of security teams not understanding how pivotal this moment is... using legacy thinking in making decisions and maybe defaulting to the department of no.”
(Assaf Keren, 17:47)
On Intellectual Humility & Staying Curious:
“There is a superpower in willing to look like you don’t know the answer... and ask questions like you’re stupid.”
(Assaf Keren, quoting a peer, 19:15)
On “Brilliant at the Basics”:
“We have been in a world where we’re maximizing things we need to minimize. We need to reduce the attack surface... The other is baseline boring architecture... There is no ‘good enough’ anymore because what we’re doing is even worse than attackers using AI. We’re putting AI on top of broken mechanisms.”
(Assaf Keren, 21:02 & 22:13)
Looking Forward:
“I’m a short term pessimist, long term optimist… We’re going to be able to automate a lot of these processes... Not easier. It’s going to be much more exciting to be a security professional in two years than it is right now... I think we will get ahead of the curve.”
(Assaf Keren, borrowing from Phil Venables, 23:41)
[26:15 – 34:11]
Key Discussion Points:
Flaws in the Cyber Corps Scholarship for Service program (SFS):
Shrinking federal cyber hiring has left scholars struggling to fulfill their service commitments; the program is being rebranded as Cyber AI SFS, with new AI proficiency requirements for entering students.
Quote:
“People entering this program without AI experience would be unemployable within the next two to three years.”
(Tim Starks, 27:40)
Federal agencies claim changes won’t impact placements, but students remain skeptical.
Congressional Committee Leadership Change:
Delia Ramirez becomes the top Democrat on the House cybersecurity subcommittee, replacing Eric Swalwell, as Andy Ogles takes over the chairmanship from Andrew Garbarino.
[36:09 – 36:59]
The Gentlemen Group Breach:
The ransomware group’s internal chats leaked online, exposing tooling, Bitcoin wallet addresses and operational details, and underscoring that even cybercriminals struggle with OPSEC.
Quote:
“Even cybercriminals, it seems, struggle with operational security.”
(Host, 36:56)
This high-velocity, news-packed episode offers both a reality check and a strategic playbook for anyone coping with cybersecurity’s evolving landscape in 2026.