CyberWire Daily — March 4, 2026
Episode Title: When Zero-Days Escape the Lab
Host: Dave Bittner (N2K Networks)
Main Theme:
A look at how leaked zero-day exploit kits are impacting iOS security at a global scale, emerging threats in social engineering fueled by AI, and the pressing issue of managing digital identities after death. The episode delivers a round-up of critical news, expert interviews from Zero Trust World 2026, and a thought-provoking industry segment on the future of AI-driven social engineering.
1. Episode Overview
Dave Bittner anchors a packed episode focused on the unprecedented leak and proliferation of powerful iOS exploit kits, major cyber events affecting critical services, and a deep-dive interview with Brian Long, CEO of Adaptive Security, about AI’s role in reshaping social engineering attacks and the new challenges of posthumous profiles. The episode connects these issues under the broad theme of evolving cyber threats and response strategies in a rapidly changing digital world.
2. Key News Segments & Insights
Sophisticated iOS Exploit Kit — The “Karuna Kit” Leak
- [03:13–05:04]
- A sophisticated exploit kit, possibly originating from a leaked U.S. government framework, is behind what may be the first mass-scale Apple iOS attack.
- Reported by Google’s Threat Intelligence Group and iVerify, the “Karuna Exploit Kit” uses multiple zero-day vulnerabilities and has been observed in both espionage and cybercrime campaigns.
- The kit was used in:
- Attacks tied to a surveillance vendor’s customers.
- Espionage operations targeting Ukrainians (linked to Russian groups).
- Campaigns by financially motivated cybercriminals in China.
- iVerify estimates at least 42,000 iOS devices compromised, an unprecedented scale for Apple’s ecosystem.
- Compared to “an EternalBlue moment” for iOS, referencing the leaked NSA exploit that fueled the 2017 WannaCry and NotPetya outbreaks.
- Apple has released relevant patches, particularly linked to “Operation Triangulation.”
- Quote:
“The spread of the toolkit resembles an EternalBlue moment, referring to the leaked NSA exploit that fueled the global WannaCry and NotPetya outbreaks in 2017.” (Dave Bittner, 04:24)
Facebook/Meta Outage
- [05:05–05:45]
- A worldwide Meta outage began around 4:15 p.m. ET and was resolved by 6:21 p.m. ET.
- Affected Facebook, Instagram, and the WhatsApp Business API.
- Meta did not detail the cause.
Critical Vulnerabilities and Exploits
FreeScout Help Desk RCE Vulnerability
- [05:46–06:42]
- CVSS 10.0; allows unauthenticated attackers to execute remote code.
- Exploited using an invisible zero-width space character to bypass file-upload validation.
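The FreeScout bypass reportedly hinges on an invisible character defeating an extension check. A minimal Python sketch of that failure mode and one mitigation — hypothetical code for illustration, not FreeScout’s actual validation logic:

```python
# Hypothetical sketch (not FreeScout's actual code): how a zero-width
# space (U+200B) can slip past a naive extension blocklist, as in the
# upload-validation bypass described above.
import unicodedata

BLOCKED = {".php", ".phtml"}

def naive_is_allowed(filename: str) -> bool:
    # Compares the literal extension string against the blocklist.
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    return ext not in BLOCKED

def hardened_is_allowed(filename: str) -> bool:
    # Strip invisible format characters (Unicode category Cf, which
    # includes U+200B) before extracting the extension.
    cleaned = "".join(c for c in filename if unicodedata.category(c) != "Cf")
    ext = "." + cleaned.rsplit(".", 1)[-1].lower()
    return ext not in BLOCKED

payload = "shell.php\u200b"            # ".php" plus a zero-width space
print(naive_is_allowed(payload))       # True: ".php\u200b" != ".php"
print(hardened_is_allowed(payload))    # False once the character is stripped
```

The broader lesson is to normalize or strip Unicode format characters before any security decision that compares strings.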
Juniper PTX Routers Flaw
- [06:43–07:24]
- CVSS 9.3; root-level access possible via the Onbox Anomaly Detection Framework.
- No authentication needed if the framework is misconfigured.
LastPass Phishing Campaign
- [07:25–08:18]
- New phishing campaign mimics LastPass warnings, tries to trick users into entering master passwords on spoofed sites.
- LastPass published indicators of compromise.
Telegram as a Cybercrime Marketplace
- [08:19–09:25]
- Researchers note Telegram has supplanted traditional dark-web forums, offering fast-moving marketplace features, publicity channels for ransomware groups, and coordination of DDoS campaigns.
Healthcare IT Rule Changes Criticized
- [09:26–10:28]
- Industry groups warn that loosening U.S. healthcare IT certification requirements could shift cyber risk onto providers and weaken patient privacy.
Stolen Google Gemini API Key Incident
- [10:29–11:44]
- A Mexican startup incurred $82,000 in fraudulent Google Cloud charges after an API key was stolen.
- The incident highlights the shared responsibility model; researchers found roughly 2,800 more exposed API keys.
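The follow-up finding of thousands of exposed keys is the kind of leak that simple secret scanning catches. A hedged sketch, using the widely known “AIza” prefix heuristic for Google API keys — the regex is an assumption drawn from common secret-scanner rules, not something stated in the episode:

```python
import re

# Heuristic pattern commonly used by secret scanners for Google API
# keys: "AIza" followed by 35 URL-safe characters. This is a community
# convention, not an official Google guarantee.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like Google API keys."""
    return GOOGLE_KEY_RE.findall(text)

# Illustrative input with a fake, correctly shaped token.
sample = "export GEMINI_API_KEY=AIza" + "A" * 35
print(find_candidate_keys(sample))   # one candidate found
```

Running such a scan over repositories and config dumps before they are published, plus key restriction and rotation on the cloud side, is the defensive half of the shared responsibility model mentioned above.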
CISA CIO Departure
- [11:45–12:25]
- Robert Costello steps down after nearly 5 years as CISA CIO, credited with modernizing critical cyber infrastructure.
3. In-Depth Interview: Brian Long, Adaptive Security
“AI and the New Age of Social Engineering”
[15:11–23:34]
The AI-Driven Shift in Social Engineering
Sophistication & Accessibility
- Attackers use AI tools—large language models (LLMs), deepfakes—for highly targeted, realistic attacks (voice, likeness, context-specific).
- AI models now available at low or no cost; over 2 million models on Hugging Face.
- Quote:
"With new AI tools, attackers are able to attack with significantly more sophistication and unfortunately also success." (Brian Long, 15:13)
Explosion in Attack Volume
- Social engineering/phishing has increased 4.4x since ChatGPT’s release.
- Deepfake attacks grew 17x year-over-year from 2023 to 2024 (over 100,000 reported), with further exponential growth into 2025.
- Quote:
"We've seen over a 4.4x increase in social engineering phishing attacks since ChatGPT came out... deepfakes grew 17x from 2023 to 2024." (Brian Long, 16:34)
Evolving Attack Patterns
- Attackers impersonate trusted but less visible employees, not just executives—e.g., a fake CFO instructing a controller.
- Quote:
"What's changing is you see more attacks now where they might be impersonating someone in the middle of the organization... knowing context on the business." (Brian Long, 17:22)
Defender Challenges
- Security teams are overwhelmed with tools and alerts, and short on time and resources.
- Detection is inadequate:
- Existing tools often don't cover personal email, phone, or third-party video chat environments.
- Quote:
"It's hard to find the time to keep up with the newest threats and feel like you're even just keeping your head above water..." (Brian Long, 18:02)
Rethinking Security Awareness & Controls
- Importance of “spreading awareness,” since average employees lack understanding of AI attack capabilities.
- Security awareness training should include:
- Education on AI-enabled threats.
- Controls for remote work; verification for new hires.
- Noted rise in job applicant impersonation:
- Gartner predicts 1 in 4 applicants could be fakes by 2029.
- Quote:
"One of the biggest growing attacks is impersonation to get the job and then insider risk... Gartner said that by the year 2029, one in four job applicants will be fake." (Brian Long, 20:19)
Recommendations
- Immediate focus: Company-wide AI threat awareness.
- Education for families and children, e.g., AI/deepfake abuse is a growing problem in schools.
- Suggested protocols:
- Personal codewords for identity confirmation (for vulnerable populations, e.g., seniors).
- Quote:
"You need to have a code word, you need to have steps you take in order to authenticate them before doing anything." (Brian Long, 22:16)
"If we just train ourselves and our people to take a minute when something seems a little off, take a step back... There are few times when urgent action is really required." (Brian Long, 22:50)
Closing Advice
- Take time to assess situations and avoid responding hastily, especially when urgency is stressed; manufactured urgency is a hallmark of social engineering.
- Apply the same caution at home and in organizational controls.
4. Emerging Issue: Digital Identity & Posthumous Profiles
[26:02–28:20]
- OpenID Foundation warns there is no global standard for managing digital lives after death.
- Current lack of coordination leads to risks: fraud, identity abuse, scams (including deepfake use of deceased identities).
- Privacy laws like GDPR/CCPA rarely cover posthumous data.
- Quote:
"The lack of coordination could invite fraud, identity abuse and even scams powered by deepfake technology that impersonates the deceased to manipulate friends or relatives." (Dave Bittner, 26:43)
- Urges new rules for digital inheritance and standardized account management after death.
5. Notable Quotes
- On zero-days leaking into the wild:
"The spread of the toolkit resembles an EternalBlue moment, referring to the leaked NSA exploit that fueled the global WannaCry and NotPetya outbreaks in 2017."
— Dave Bittner, 04:24
- On the power AI is giving attackers:
"With new AI tools, attackers are able to attack with significantly more sophistication and, unfortunately, also success... They can use OSINT data...and then...deepfakes over voice and likeness..."
— Brian Long, 15:13
- On the challenge for defenders:
"It's hard to find the time to keep up with the newest threats and feel like you're even just keeping your head above water... let alone dealing with what the new stuff is."
— Brian Long, 18:02
- On exploiting urgency in social engineering:
"Urgency is the opportunity for people to take a second, think logically and then look at the controls of the organization or, you know, if it's in your own household."
— Brian Long, 22:57
- On posthumous digital identity risk:
"Platforms treat death like a rare corner case, even though it eventually applies to every Internet user."
— Dave Bittner, 26:24
6. Timeline of Important Segments
| Timestamp | Segment/Topic |
|-----------|---------------|
| 03:13–05:04 | Karuna iOS exploit kit leak & implications |
| 05:05–05:45 | Facebook/Meta global outage |
| 05:46–06:42 | FreeScout help desk RCE |
| 06:43–07:24 | Juniper PTX router vulnerability |
| 07:25–08:18 | LastPass phishing campaign |
| 08:19–09:25 | Telegram cybercrime marketplace expansion |
| 09:26–10:28 | Healthcare IT rule change controversy |
| 10:29–11:44 | Stolen Google Gemini API key incident |
| 11:45–12:25 | CISA CIO resignation |
| 15:11–23:34 | Interview: Brian Long on AI, deepfakes, and social engineering |
| 26:02–28:20 | OpenID report: posthumous profiles and digital estates |
7. Conclusion
This episode highlights a turning point in both offensive and defensive cyber operations: from the widespread use of extremely potent exploit kits in the wild—potentially originating from leaked government tools—to a dramatic rise in AI-powered social engineering that threatens every layer of organizations and families alike. The discussion with Brian Long leaves a strong call to action: move quickly on awareness and verification, because the nature of identity and trust in the digital realm is shifting faster than ever. The episode closes by urging policy changes to address the largely overlooked but inevitable problem of managing digital identities after death.
For full stories, links, and further coverage, visit thecyberwire.com.
