CyberWire Daily: "The Internet Joins the War"
Date: March 5, 2026
Host: Dave Bittner, N2K Networks
Featured Interview: Daniel Barbou, Director of EMEA Security at Adobe
Episode Overview
This episode delivers a high-tempo snapshot of major cybersecurity events and trends as global tensions spill into cyberspace. From surging hacktivist campaigns in the Middle East to critical infrastructure threats and the evolving intersection of artificial intelligence and security culture, listeners hear timely news and in-depth insight. The featured interview spotlights Daniel Barbou of Adobe, who shares a human-centered, collaborative vision for building trustworthy AI in enterprise settings.
Key Discussion Points & Insights
1. Geopolitics & Cyber Conflict: Middle East Hacktivism Surge
[03:00-06:30]
- Following the U.S.-Israeli military operations against Iran (Feb 28), hacktivist activity surged region-wide.
- Radware Report:
- Within 9 hours of kinetic strikes, multiple hacktivist groups launched retaliatory DDoS attacks on government/critical infrastructure.
- Stats: 107 attacks by 9 groups in 8 countries between Feb 28–Mar 2.
- Keymous+ and DieNet accounted for ~70% of attacks.
- Targets: Primarily government (53%), followed by financial and telecom sectors.
- Most-impacted nations: Kuwait, Israel, and Jordan (>75% of activity).
- Russia-aligned group NoName057(16) joined on March 2, indicating possible escalation.
- Industry Perspective:
- Palo Alto Networks Unit 42 is tracking 60+ active hacktivist and Iran-linked threat actors.
- “The surge highlights how geopolitical crises increasingly trigger rapid, coordinated hacktivist campaigns aimed at disrupting national infrastructure and amplifying political messaging in the digital domain.” (Host; ~06:00)
2. Anthropic’s Claude & AI Defense Tech Fallout
[06:40-09:40]
- In reaction to the U.S. blacklisting Anthropic’s Claude AI model, defense tech firms (including major contractors like Lockheed Martin) are phasing it out of critical systems.
- The controversy stems from:
- Anthropic’s refusal to guarantee its AI wouldn’t be used for fully autonomous weapons or mass surveillance.
- Industry concern over a looming formal ban, leading federal agencies to preemptively remove the technology.
- Political dimension:
- Senator Ron Wyden objects to Defense Department pressure; warns about "potential mass surveillance of Americans."
- “Vast amounts of personal data...can be purchased from largely unregulated data brokers and analyzed using AI.” (Sen. Wyden paraphrased; ~08:30)
- Wyden seeks legislation limiting government access to such data.
- Analyst warning: the shift could disrupt DOD/IC operations in which Anthropic models were deeply integrated.
3. Major Law Enforcement Action: Leakbase Takedown
[09:45-11:00]
- International police, coordinated by the FBI, dismantled Leakbase—a major forum for trafficking stolen data and exploits.
- Operation Leak:
- 100+ law enforcement actions, 13 arrests, 32 searches, 33 suspect interviews.
- Leakbase domains and full database seized; forum had 142k+ members.
- Threat context:
- Leakbase dealt in U.S. network and critical infrastructure access.
- Seized data will be mined to identify further victims and criminals.
4. Zero-Day & High-Impact Vulnerabilities
[11:00-12:20]
- Cisco SD-WAN Flaws:
- Two recently patched vulnerabilities are under active exploitation.
- Attackers can escalate privileges or overwrite files; another zero-day enables admin access via authentication bypass.
- Possible links to threat actor UAT8616.
- Google Chrome Emergency Patch:
- Rollout addresses 10 new CVEs (3 critical, 7 high).
- Exploited flaws: integer overflows, object lifecycle issues.
- Google withholds details until users update; urges urgent patching.
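To illustrate the vulnerability class mentioned above, here is a minimal sketch (not Chrome's actual code, and simplified to fixed-width arithmetic emulated in Python) of how an integer overflow in a size computation can produce an undersized buffer:

```python
# Hedged illustration of the integer-overflow bug class: a buffer-size
# computation done in 32-bit unsigned arithmetic (emulated here with a mask)
# wraps around, yielding an allocation far smaller than the data written.
MASK32 = 0xFFFFFFFF  # emulate C's uint32 wraparound

def alloc_size(count: int, elem_size: int) -> int:
    """Naive size computation as a C program might perform it in uint32."""
    return (count * elem_size) & MASK32

# An attacker-controlled element count chosen so count * 4 exceeds 2**32:
count = 0x40000001            # 1,073,741,825 elements
size = alloc_size(count, 4)   # 0x100000004 wraps to 4
print(size)                   # 4 -> a ~4 GiB write lands in a 4-byte buffer
```

Safe code checks the product against the type's maximum (or uses checked-multiply helpers) before allocating.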
5. Privacy & Platform Security: Age Verification and TikTok DMs
[12:21-14:30]
- Age Verification Critique:
- Techdirt highlights risks of mandatory age checks: centralized troves of biometric data are exposure-prone.
- The issue resurfaced after Persona, Discord’s proposed vendor, was found exposing thousands of sensitive files.
- Discord abandons Persona; experts warn the model threatens privacy and rarely keeps minors out effectively.
- TikTok and End-to-End Encryption:
- The company refuses to offer end-to-end encryption in DMs, citing safety and easier abuse detection.
- Child safety groups support the move, but privacy advocates say it’s out-of-step with industry norms and endangers user privacy.
Feature Interview: Fostering a Human-Centered AI Security Culture (Daniel Barbou, Adobe)
[14:32-28:05]
Building AI for Security: People-First, Trust-Driven Approach
- Role & Philosophy:
- “My role focuses on securing Adobe’s products and platforms...with a strong emphasis on AI—both improving security and securing AI itself.” (Daniel Barbou, 14:32)
- “Security doesn’t scale through technology alone; it scales through people.” (Barbou, 14:52)
- Initial Reaction to AI in Security Workflows:
- “It was definitely a mix of both [excitement and skepticism]...great tools only matter if we invest just as deeply in our people. That was one of the cornerstones.” (Barbou, 15:20-15:58)
Shifting Culture and Rethinking Trust
- Misconceptions & Challenges:
- “One of the biggest misconceptions...is to look at AI as a shortcut when...it’s more of a force multiplier.” (Barbou, 16:09)
- AI draws value only when built on clear ownership, high-quality data, and sound threat models.
- “Trust in AI naturally takes care of itself. That’s a misconception. In reality, teams need education and transparency to use it well.” (Barbou, 17:26-18:30)
- Examples of AI Surfacing Challenges:
- If input data is poor, AI amplifies bad results; trust issues arise if users don’t understand or over-rely on AI.
- “When people do not understand its limits, they rely on it too heavily and stop asking questions.” (Barbou, 18:30)
The Security AI Guild: A Cross-Team Community
- Purpose & Principles:
- Built as an alternative to rigid programs—principles over frameworks.
- Three Core Principles:
- Outcomes Come First – Tied to real-world problems.
- Ownership – Clear accountability to production/handoff.
- Learning is Shared – Failures and successes are visible.
- “It’s not a think tank. It is not a recurring meeting. It’s basically an execution engine...focused on real outcomes.” (Barbou, 19:34-21:30)
Designing Trustworthy AI: Social, Not Just Technical
- “At the end of the day, from a cultural perspective, we need to cut across traditional boundaries...no single team has all the context...the guild creates shared space to experiment responsibly.” (Barbou, 22:17-23:30)
- AI aids vulnerability discovery, triage, and detection, but humans retain accountability: “AI accelerates insight, but trust comes from the culture—built on shared responsibility, transparency, and humans staying accountable for the outcomes.” (Barbou, 24:20)
Advice for Building AI Security Cultures
- “Start with people, not platforms...the first 60 to 90 days should focus on shared learning, clear principles and safe spaces for experimentation—not mandates or hype.” (Barbou, 24:58-25:37)
- Partnerships across industry, community, and academia are essential, since everyone is still “struggling or experimenting” in this domain.
Building Trustworthy, Powerful AI
- “You can’t bolt trust on after deployment. It has to be part of how teams learn, collaborate, and make decisions.” (Barbou, 26:45)
- Key ingredients: Shared responsibility, transparency, explainability, enablement (not just enforcement), human interlock (“AI is assistive, not autonomous. Humans stay accountable.”) (Barbou, 26:45-28:06)
Other Noteworthy Tech & Research News
Wi-Fi Heartbeat Detection: PulseFi Prototype
[29:00]
- UC Santa Cruz unveils "PulseFi," using Wi-Fi signals plus machine learning to detect heart rate with near-clinical accuracy.
- Effective even when users are several meters away, whether sitting, standing, or walking.
- Relies on cheap hardware; could enable new passive health monitoring in homes.
- "In other words, your Wi-Fi may soon know your pulse—whether you asked it to or not." (Host, 29:45)
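The core signal-processing idea behind this kind of sensing can be sketched briefly. PulseFi reportedly pairs Wi-Fi channel data with machine learning; the toy example below (a hypothetical simplification, using a synthetic signal rather than real Wi-Fi data) shows only the underlying principle of recovering a heart rate as the dominant frequency in the cardiac band:

```python
# Hedged sketch (not the PulseFi algorithm): estimate a heart rate by
# FFT peak-picking on a noisy periodic signal within the cardiac band.
import numpy as np

fs = 50.0                        # assumed sampling rate (samples/second)
t = np.arange(0, 30, 1 / fs)     # 30 seconds of data
true_bpm = 72.0
signal = np.sin(2 * np.pi * (true_bpm / 60) * t)               # heartbeat component
signal += 0.5 * np.random.default_rng(0).normal(size=t.size)   # measurement noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Restrict the search to a plausible cardiac band (40-180 BPM = 0.67-3.0 Hz).
band = (freqs >= 0.67) & (freqs <= 3.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(round(peak_hz * 60))  # ≈ 72 BPM
```

Real systems must additionally separate the faint cardiac component from body motion and multipath effects, which is where the machine-learning stage comes in.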
Notable Quotes & Timestamps
- "Security doesn’t scale through technology alone, it scales through people." – Daniel Barbou [14:52]
- "Trust in AI naturally takes care of itself. That’s a misconception. In reality, teams need education and transparency to use it well." – Daniel Barbou [18:30]
- "It’s not a think tank—it’s not a recurring meeting. It’s basically an execution engine, focused on real outcomes." – Daniel Barbou [21:20]
- "You can’t bolt trust on after deployment. It has to be part of how teams learn, collaborate, and make decisions." – Daniel Barbou [26:45]
- "AI is assistive, not autonomous. Humans do stay accountable." – Daniel Barbou [28:05]
Summary Table of Important Segments
| Segment                                       | Timestamp   |
|-----------------------------------------------|-------------|
| Middle East Hacktivist Surge                  | 03:00–06:30 |
| Anthropic/Claude Fallout                      | 06:40–09:40 |
| Leakbase Forum Takedown                       | 09:45–11:00 |
| Cisco/Google Security Flaws                   | 11:00–12:20 |
| Age Verification Debate/TikTok DMs            | 12:21–14:30 |
| Feature: Human-centered AI at Adobe (Barbou)  | 14:32–28:06 |
| PulseFi Heart Rate via Wi-Fi                  | ~29:00      |
Tone & Language
- Authoritative, clear, and pragmatic—balancing deep technical understanding with organizational and human factors.
- Daniel Barbou brings an empathetic, candid, and practical viewpoint on security culture change.
- Host keeps the pace energetic and focused on actionable insights.
This episode is essential listening for anyone tracking modern cyber threats and the evolving role of AI in enterprise security cultures—particularly for security professionals, technology leaders, and those managing organizational change in a high-risk, fast-moving landscape.
