CyberWire Daily – "AI chips flow east."
Date: September 16, 2025
Host: Dave Bittner (N2K Networks)
Featured Interview: Spencer Thelman (Palo Alto Networks) with David Moulton (Palo Alto Networks)
Episode Overview
This episode delivers timely cybersecurity news with a spotlight on the geopolitical, business, and technical risks surrounding the global flow of advanced AI chips, intensifying cyber threats in Eastern Europe, and a deep dive into evolving enterprise AI security strategies. The featured "Threat Vector" interview tackles how organizations are adapting to the dual challenges of securing both employee use of generative AI tools and the growing risk landscape of internally developed AI models and agents.
Key News Stories & Insights
1. US-UAE AI Chip Deal Sparks Security Fears
(00:52 – 03:07)
- The Trump administration is advancing a controversial agreement to give the UAE access to massive quantities of cutting-edge US AI chips despite national security warnings.
- Many chips are destined for G42, a UAE tech giant with deep ties to Chinese entities.
- Expert concern: Advanced chip tech or AI models built on them could end up in China, undermining US export controls.
- The NYT further uncovered a $2B investment into World Liberty Financial, a crypto firm linked to Trump and Witkoff families, raising conflict-of-interest alarms.
- Cybersecurity risks: Potential US AI supremacy loss, exposure of third-party data in Emirati infrastructure, and crypto-related compliance gaps.
"Despite warnings from national security officials, many chips are slated for G42, a tech firm controlled by Sheikh Tanun Bin Zayed, who has long standing ties to Chinese companies."
— Dave Bittner (00:57)
2. Flowwise AI Critical Vulnerability
(03:08 – 03:41)
- A “serious flaw” disclosed by Flowwise AI allows attackers to take over user accounts, reset passwords, and view personal info.
- Urgent updates recommended; users unable to patch should block the password reset feature immediately.
"Failure to act leaves accounts fully exposed."
— Dave Bittner (03:38)
3. Sophisticated FileFix Social Engineering Campaign
(03:42 – 04:34)
- Attackers impersonate Meta account suspension notices, tricking users via a “FileFix” phishing campaign.
- Victims paste malicious file paths (disguised PowerShell commands), leading to malware installing the StealC infostealer.
- The scheme leverages steganography and rapidly evolving tactics, with several variants observed over two weeks.
"Researchers warn that file fix tactics are rapidly evolving, making user education critical to defense."
— Dave Bittner (04:24)
4. macOS Spotlight Zero-Day
(04:35 – 05:20)
- Objective C researchers reveal a zero-day in macOS Spotlight plugins—capable of bypassing Apple Transparency, Consent, and Control (TCC) protections.
- Malicious plugins can leak sensitive data or exfiltrate AI model content, even bypassing sandboxing.
5. Outsourced IT and the Rising Cost of Cyber Risk
(05:21 – 06:32)
- Security researcher Kevin Beaumont critiques major UK firms (Co-op, Marks & Spencer, Jaguar Land Rover) for outsourcing security and IT to Tata Consultancy Services (TCS).
- Outsourcing reduces costs but increases risk and exposes firms to threats (e.g. Lapsus ransomware).
- Real issue: broad systemic over-reliance on managed service providers (MSPs), risking service continuity and economic stability.
6. Poland Supercharges Cybersecurity Budget
(06:33 – 07:30)
- Poland triples its cybersecurity investment to €1 billion after a spike in Russian-sponsored attacks—averaging 20–50 sabotage attempts daily.
- Recent disruptions include hospital/medical data breaches and thwarted water system attacks.
- Funding to secure water infrastructure and strengthen defenses across 2,400 local administrations.
7. NTT Group Joins International Cyber Defense Effort
(07:31 – 08:05)
- Japanese telecom giant NTT joins the Communications Information Sharing and Analysis Center (Com ISAC), promoting collaboration on global critical infrastructure defense.
- Notable Quote:
"Trust partnerships and information sharing are essential to securing the digital backbone of modern society."
— Dave Bittner, summarizing NTT Group's statement (08:02)
8. Jaguar Land Rover Extended Shutdown
(08:06 – 08:57)
- JLR's global operations remain offline after a major cyberattack, with economic losses mounting at $98 million/day.
- Incident highlights the need for regulations to equally prioritize service continuity, not just data privacy.
9. Kering Data Breach Affects Millions
(08:58 – 09:34)
- Luxury brands Balenciaga, Gucci, and Alexander McQueen compromised; hackers claim 7.4 million customer records.
- Data includes names, addresses, phone numbers, and even high spending records (>$80,000).
- No payment data taken; company denies any negotiation with hackers.
Featured Threat Vector Interview: Defending AI in the Enterprise
David Moulton (Palo Alto Networks) speaks with Spencer Thelman (Principal Product Manager, Palo Alto Networks) on securing the rapidly evolving AI ecosystem.
(16:10 – 24:10)
Enterprise AI Security Challenge
- Staggering Expansion:
"Last December, they cataloged 800 AI applications. By May, that number hit 2,800. That's 250% growth in just five months."
— David Moulton (16:45) - Employee Use Surging:
"Over half of enterprise employees now use generative AI apps daily, and up to 30% of what they send contains sensitive data."
— David Moulton (16:55) - Immediate Relevance:
"If you're still thinking AI security is a future problem, you're already behind."
— David Moulton (17:04)
The Two Core Problems in Enterprise AI Defense
(17:29 – 18:28)
- 1. Securing Employee Use of Generative AI (SaaS / SaaS-like apps):
- Example tools: ChatGPT, Perplexity, Grammarly.
- Key risks: Data leakage, compliance, lack of visibility.
- 2. Securing Internally Built AI Apps, Models, and Agents (cloud, on-prem, etc.):
- Range from models hosted in AWS/Azure/GCP to custom internal agents.
"We believe the benefits of AI are profound, but so are the risks... We therefore have a kind of moral obligation to help our customers capture the power of AI, but do so safely and securely."
— Spencer Thelman (Dave Bittner voice, 17:34)
The Five Pillars of AI Security
(18:49 – 20:50)
- Model Scanning — Ensures models are free of malware/deserialization vulnerabilities before entering production.
- Posture Management — Governance of permissions and configurations (minimize privileges, avoid excessive permissions).
- AI Red Teaming — Simulated attacks to see what threats pass; informs runtime security needs.
- Runtime Security — Ongoing monitoring of prompts/outputs for prompt injection, sensitive data exfiltration, malicious URLs, etc.
- Agent Security — Encompasses all above, since agents act autonomously across multiple threat surfaces.
"Agent security is primarily broken down into runtime security and posture. A great way to think about agent security is that it's kind of a superset of large language model security."
— Spencer Thelman (20:30)
What Are AI Agents, and Why Are They Risky?
(21:00 – 23:19)
- Definition: Agents are autonomous applications that can reason and take actions towards goals via API interactions.
- Example: Using an agent to plan and book an entire trip—versus just a chatbot giving suggestions.
- Risks: Agents require autonomy, memory, and tool access—expanding the attack surface (tool misuse, memory manipulation, cascading hallucinations).
- Example: Overprivileged agents in Copilot Studio could delete records in Salesforce if not properly governed.
"It's that autonomy that make agents profoundly powerful...but it carries these risks because...it needs to be autonomous, it needs to have memory, and it needs to interact with your tools. And all three of those carry some novel risks..."
— Spencer Thelman (21:52)
"...The impact of that could be destructive. What we need to do is look at, here's all the things that an agent could do and then restrict its freedoms down to just the things it needs to do to accomplish its goal."
— Spencer Thelman (23:13)
Notable Call to Action
"...When half your workforce is using tools that leak sensitive data by design, the window for getting ahead of this threat is closing fast. If this got your attention, don't wait..."
— David Moulton (23:37)
Other Noteworthy Moments
AI-Generated Phishing Against Seniors
(25:52 – 26:43)
- Reuters & a Harvard researcher tested AI-created phishing emails targeting seniors; 11% of volunteers clicked.
- Some chatbots initially refused, others gave in with coaxing. Companies like Google retrained their bots after findings.
- FBI and companies urge heightened vigilance—AI-boosted scams pose a growing threat.
"This genie isn't going back into the bottle."
— Sponsor voice (26:39)
Timestamps for Key Segments
- US-UAE AI chip deal: 00:52 – 03:07
- Flowwise AI vulnerability: 03:08 – 03:41
- Meta (FileFix) phishing campaign: 03:42 – 04:34
- macOS Spotlight flaw: 04:35 – 05:20
- IT Outsourcing risk (Beaumont on UK firms): 05:21 – 06:32
- Poland boosts cyber budget: 06:33 – 07:30
- NTT joins Com ISAC: 07:31 – 08:05
- Jaguar Land Rover cyberattack: 08:06 – 08:57
- Kering data breach: 08:58 – 09:34
- Threat Vector (AI Security Interview): 16:10 – 24:10
- AI-generated phishing scams: 25:52 – 26:43
Summary Tone
The tone is urgent and direct, with an emphasis on both threat awareness and practical security strategy. Technical depth is balanced with clear explanations and real-world relevance, especially regarding the evolving risks of generative AI, AI agents, and nation-state cyber activities.
For comprehensive links and further reading, visit the CyberWire daily briefing.
