Cybersecurity Today – December 15, 2025
Host: David Shipley (sitting in for Jim Love)
Episode Theme:
A roundup of the latest cybersecurity threats and responses, focusing on Apple zero-day vulnerabilities, the manipulation of AI-powered search engines by scammers, sophisticated malware in torrent files, and Stanford’s experiment with AI-driven penetration testing.
1. Apple Patches for Critical Zero-Day Browser Flaws
[00:22 - 03:42]
Key Points
- Apple issued urgent security updates after confirming two serious WebKit vulnerabilities were actively exploited in the wild.
- Updates cover a wide range of platforms: iOS, iPadOS, macOS, watchOS, tvOS, visionOS, and Safari.
- Vulnerabilities discussed:
- CVE-2025-43529: Use-after-free flaw, leading to arbitrary code execution.
- CVE-2025-14174: Memory corruption issue (severity 8.8/10) that was also patched earlier in Google Chrome.
- Attribution: Google’s Threat Analysis Group and Apple’s security teams discovered and reported both flaws.
- Impact: Due to WebKit’s ubiquity, all browsers on Apple mobile devices (including Chrome, Edge, Firefox) are affected.
“Because WebKit is used not only by Safari, but by all third party browsers on iOS and iPadOS... the impact of these vulnerabilities extends across the mobile browsing ecosystem.” — David Shipley [01:57]
- Security note: With these fixes, Apple has now patched nine zero-day vulnerabilities in 2025.
Recommendations
- Users should update devices immediately to mitigate risk (a quick update-check sketch follows below).
- The incident highlights the growing sophistication and targeting of attacks against individual users running outdated software.
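For Mac users or admins who want to confirm their patch status quickly, here is a minimal sketch, assuming a macOS endpoint with Apple's built-in softwareupdate command-line tool; iPhones and iPads update through Settings or an MDM policy instead.

```python
# Minimal sketch: check for and apply pending macOS updates from the terminal,
# using Apple's built-in `softwareupdate` tool. Adapt for MDM-managed fleets.
import subprocess

def list_pending_updates() -> str:
    """Return the raw output of `softwareupdate --list`."""
    result = subprocess.run(
        ["softwareupdate", "--list"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout + result.stderr

def install_all_updates() -> None:
    """Install every pending update (may require admin rights and a restart)."""
    subprocess.run(["softwareupdate", "--install", "--all"], check=False)

if __name__ == "__main__":
    print(list_pending_updates())
```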
2. Scammers Poison AI Search Engines with Fake Support Numbers
[03:43 - 07:25]
Key Points
- New scam technique: Attackers are manipulating public web content that large language models (LLMs) scrape, causing AI search tools to provide fake customer support numbers.
- The technique has been dubbed “large language model phone number poisoning.”
- Methodology:
- Scammers plant search-optimized content on compromised reputable domains (including government and university sites) and on open platforms (YouTube, Yelp).
- When a user queries an AI assistant for support info (e.g., airline phone numbers), the AI may return a scam call center number.
- Real-World Example:
- Perplexity’s Comet browser returned a fake Emirates reservations number, and did the same for British Airways.
- Google’s AI Overviews were also found citing multiple fraudulent numbers as if they were authoritative.
- Wider Issue:
- This is not limited to one AI system; the same poisoned numbers surface across platforms (“cross-platform contamination”).
- AI blending legitimate and fraudulent data creates trust challenges.
“Because AI models blend legitimate and fraudulent content, the resulting answers can appear credible, making scams much harder to detect.” — David Shipley [06:35]
Recommendations
- Verify contact information independently, especially for customer service, travel, or financial requests (see the verification sketch below).
- Be wary of handing over sensitive data based on AI-generated responses, and recognize these tools are still evolving.
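As a rough illustration of the first recommendation, here is a minimal Python sketch that cross-checks an AI-supplied phone number against a company's official contact page before dialing it. The contact-page URL and the phone-matching regex are illustrative placeholders, not anything referenced in the episode.

```python
# Minimal sketch: confirm an AI-supplied phone number actually appears on the
# company's official contact page before trusting it.
import re
import requests

OFFICIAL_CONTACT_PAGE = "https://www.example.com/contact"  # placeholder: use the real contact page

def normalize(number: str) -> str:
    """Keep digits only so formatting differences don't matter."""
    return re.sub(r"\D", "", number)

def appears_on_official_page(candidate: str, url: str = OFFICIAL_CONTACT_PAGE) -> bool:
    """Return True if the candidate number also appears on the official page."""
    page = requests.get(url, timeout=10).text
    phone_like = re.findall(r"[+\d][\d\s().-]{6,}\d", page)  # crude phone-number pattern
    return normalize(candidate) in {normalize(p) for p in phone_like}

if __name__ == "__main__":
    ai_supplied = "+1 (800) 555-0123"  # number an AI assistant returned (example only)
    print("Found on official page:", appears_on_official_page(ai_supplied))
```

A failed match is not proof of fraud, but it is a strong cue to use the number printed on your booking confirmation or card instead.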
3. Torrent Malware Hidden in Subtitle Files
[07:26 - 10:57]
Key Points
- Sophisticated malware campaign discovered by Bitdefender, spread via torrents posing as the movie One Battle After Another.
- Malware Details:
- Torrent included a movie file, images, subtitles, and a Windows shortcut (appearing as a “movie launcher”).
- When run, the shortcut executes Windows commands to extract a PowerShell script embedded in the subtitle file.
- PowerShell then extracts AES-encrypted payloads and rebuilds scripts hidden as Microsoft diagnostic data.
- Installs the well-known Agent Tesla Remote Access Trojan (RAT), designed to steal browser credentials, email, FTP, and VPN logins, and to capture screenshots.
- Unique Aspects:
- The infection chain is unusually complex and stealthy; the payload hides inside subtitle lines, evading casual inspection (a detection sketch follows at the end of this section).
- Trend Connection:
- Similar campaigns have been tied to other movies, sometimes delivering other malware such as the Lumma credential stealer.
- Rising streaming costs and declining quality are fueling piracy—and thus, malware risk.
“Torrent files from anonymous publishers frequently contain malware, and pirating newly released movies carries a high risk of compromise.” — David Shipley [10:23]
Recommendations
- Avoid torrents and pirated content, especially from anonymous sources.
- Reminder: Piracy raises risk both legally and technically.
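Because the payload in this campaign rode inside subtitle text, a crude triage heuristic is possible: flag subtitle files that contain long Base64-style blobs or PowerShell keywords, which ordinary subtitles never need. The Python sketch below is an illustrative heuristic only, not Bitdefender's detection logic.

```python
# Illustrative triage sketch: flag .srt subtitle files containing content that
# legitimate subtitles never need (long Base64-looking runs, PowerShell keywords).
import re
from pathlib import Path

BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{200,}")  # unusually long encoded blob
SCRIPT_HINTS = re.compile(r"(?i)(powershell|invoke-expression|frombase64string|-enc\b)")

def looks_suspicious(subtitle_path: Path) -> bool:
    """Return True if the subtitle file contains script- or blob-like content."""
    text = subtitle_path.read_text(errors="ignore")
    return bool(BASE64_RUN.search(text) or SCRIPT_HINTS.search(text))

if __name__ == "__main__":
    for srt in Path(".").rglob("*.srt"):
        if looks_suspicious(srt):
            print(f"[!] Possible embedded payload: {srt}")
```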
4. Stanford AI Agent Outperforms Human Penetration Testers
[10:58 - 13:23]
Key Points
- Stanford experiment: an AI agent (“Artemis”) was tested against the university’s computer science network (8,000 devices).
- Results:
- In its first 10 hours, Artemis outperformed 9 out of 10 human pentesters, finding nine valid vulnerabilities (an 82% validity rate).
- It caught some flaws the humans missed, falling back to the command line when browser-based tooling failed.
- Artemis spun up parallel sub-agents, unlike the humans, who worked sequentially (see the sketch at the end of this section).
- Cost Comparison:
- Running Artemis cost roughly $18/hour (the advanced version $59/hour), far less than a $125,000 annual salary for a human tester.
- Limitations:
- AI struggled with GUIs and had higher false positive rates.
- Broader Context:
- Reflects a larger trend of AI lowering the barrier to entry for both defense and cybercrime.
- “The real danger here isn’t about AI operating autonomously on its own… it’s how humans and AI can scale together to achieve more.” — David Shipley [13:01]
- Implication:
- Security risks and attack surfaces are expanding faster than defense budgets and teams can keep pace.
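To make the parallel sub-agent point concrete, here is a toy Python asyncio sketch of the fan-out pattern described in this segment. It is purely illustrative, with the probing step reduced to a random stand-in, and bears no relation to Stanford's actual Artemis implementation.

```python
# Toy sketch of the parallel sub-agent pattern: one sub-agent per host,
# all run concurrently rather than one at a time.
import asyncio
import random

async def sub_agent(host: str) -> tuple[str, list[str]]:
    """Hypothetical sub-agent: probe one host and report candidate findings."""
    await asyncio.sleep(random.uniform(0.1, 0.5))  # stand-in for real scanning work
    return host, [f"open-port-{random.choice([22, 80, 443, 8080])}"]

async def run_parallel(hosts: list[str]) -> dict[str, list[str]]:
    """Fan out one sub-agent per host and gather their findings concurrently."""
    results = await asyncio.gather(*(sub_agent(h) for h in hosts))
    return dict(results)

if __name__ == "__main__":
    targets = [f"10.0.0.{i}" for i in range(1, 11)]  # ten hypothetical hosts
    for host, findings in asyncio.run(run_parallel(targets)).items():
        print(host, findings)
```

The scaling pattern is the point: ten probes complete in roughly the time of the slowest one, whereas a sequential human workflow pays for each probe in turn.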
5. Notable Quotes & Memorable Moments
- On Apple’s broad vulnerability impact: “Because WebKit is used not only by Safari, but by all third party browsers on iOS and iPadOS... the impact of these vulnerabilities extends across the mobile browsing ecosystem.” — David Shipley [01:57]
- On AI scam sophistication: “Because AI models blend legitimate and fraudulent content, the resulting answers can appear credible, making scams much harder to detect.” — David Shipley [06:35]
- On piracy and malware: “Pirating newly released movies carries a high risk of compromise.” — David Shipley [10:24]
- On AI pentesting: “The real danger here isn’t about AI operating autonomously on its own… it’s how humans and AI can scale together to achieve more.” — David Shipley [13:01]
6. Episode Summary & Takeaways
- Defensive urgency: Update Apple devices promptly due to high-severity active exploits.
- Changing threat vectors: AI technologies are both targets and tools, with attackers now manipulating AI search results and automating hacking efforts at scale.
- Piracy resurgence: Stealthy new malware campaigns are thriving as users return to torrents to escape streaming bloat.
- AI’s double-edged potential: Stanford’s experiment shows AI can (cost-effectively) augment human expertise but also amplifies risk if misused.
- Looking ahead: Attackers are gaining ground in sophistication and scale, while defenders face tighter budgets—foreshadowing a challenging year.
For more detailed stories and updates, visit Cybersecurity Today or reach out at technewsday.com.
