Talkin' About [Infosec] News (Black Hills Information Security)
Episode Title: Anthropic $1.5 Billion © Settlement – 2025-09-08
Release Date: September 10, 2025
Overview
This episode of Talkin’ About [Infosec] News dives into recent events and hot topics in infosec, focusing on the stunning $1.5 billion copyright settlement facing Anthropic for ingesting pirated books to train its AI models—despite a judge affirming fair use for LLMs. The panel, composed of security practitioners and pentesters, discusses evolving ransomware tactics, privacy failings with dashcam footage and surveillance, fresh examples of supply chain attacks, and some quirkier security news about, yes, imported chicken eggs from Russia. As always, the group shares informed hot takes spiced with humor and memorable moments.
Key Discussion Points & Insights
Anthropic’s $1.5 Billion Copyright Settlement
[08:33] – [13:58]
- Background: Anthropic agreed to pay at least $1.5 billion to settle a class-action lawsuit filed by authors, after it was proven they ingested 500,000+ pirated books to train their AI models.
- Legal Nuance: Although a judge ruled LLM training is fair use, Anthropic reportedly sourced the data illegally—prompting litigation despite what would otherwise be permissible use.
- Settlement Implications:
- No legal precedent: The settlement avoids a definitive copyright judgment for future cases.
- $3,000 per title/author (as calculated by the hosts) could be cheaper than legitimately buying half a million books.
- Impact on AI Training:
- Difficulty of “unlearning” pirated data: “You'd have to establish some way to uncompute the neural network training process … retrain on everything except [the illicit data].” (John, [14:44])
- Large LLMs are “designed to be siphons of data”—ultimately always vacuuming up some copyright material, intentional or not.
“It’s literally the, ‘I’m gonna pay you $3,000 to f-off. Thank you.’ … And like, I thought they were the good guys!”
— Ryan [12:40]
- Parallels to the Music Industry: Hosts compare the AI copyright fracas to the transition from Napster to Spotify: initial disruption and lawsuits, then eventually a new licensing status quo.
“We basically ended up with Napster, and I would guess that the same reality is true for AI. ... We're going to have to have some weird licensing model or something for how [AI] deals with copyrighted content.”
— Ryan [17:50]
Ransomware Evolves: Pay Us or We Train an AI on Your Data
[05:42] – [08:17]
- 404 Media covered a new ransomware tactic: Rather than just threatening to leak data, attackers claim they’ll train an AI on victims’ data if no ransom is paid.
- Highly targeted approach: Some sites (for example, ones catering to artists and promising never to use AI) become especially vulnerable to this tactic.
- Hosts debate whether such threats have real bite or are mostly empty posturing. Still, it highlights attackers’ adaptability—and fears over AI misuse.
“They found a great leverage point to, like, destroy this company one way or another. But they're asking for 50k—and it makes me wonder just how much they actually have.”
— John [07:21]
The Commercialization and Privacy Nightmares of Dashcam Data
[24:00] – [32:13]
- 404 Media exposé: Nexar, a dashcam company, left 140 TB of user footage exposed due to hardcoded and globally accessible AWS S3 keys.
- Footage uploaded automatically (default setting), some cameras face the driver(!), and videos get resold to “legitimate” commercial clients: e.g., cities, businesses, even possibly surveillance/AI model training.
- Privacy risks aren’t unique to Nexar—ring doorbells, security cams, and even smart toilets (!) leak or monetize data in similar ways.
“Out the gate, this thing is a privacy nightmare.”
— Joff [30:34]
- Discussion: Attackers, governments, and companies can connect dashcams to surveil populations, identify drivers at protests, or aggregate sensitive location information.
Government Spyware: Paragon and ICE
[33:42] – [39:17]
- US Immigration and Customs Enforcement (ICE) now contractually licensed Israeli “Paragon” spyware (similar to Pegasus).
- Panel raises alarm about potential applications (immigration tracking, abuse, privacy implications) and reflects on the proliferation of post-9/11 surveillance tech.
“Can you imagine being like, alright guys, we're gonna buy the spyware, but I'm gonna make sure we only use it ethically. It's going to be fine.”
— Ryan [34:44]
DDoS “Records” and Supply Chain Threats
[40:11] – [46:11]
- Massive DDoS attack on Cloudflare hit 11.5 Tbps for 35 seconds, allegedly leveraging infrastructure inside Google.
- The group debates the significance of these “record” attacks, noting how hard it is to validate the “biggest ever” claims.
“Is this actually the biggest DDoS ever? … I have no idea.”
— Ryan [40:57]
- Ongoing, troubling trend of supply chain attacks via popular npm packages and developer phishing:
- New attack targets browser plugins to redirect cryptocurrency transactions.
- Comment on the inexorable risk of integrating third-party code and leaking API secrets.
“Software supply chain attacks scare me more than anything else, actually.”
— Chad [45:23]
Miscellaneous Infosec Quick Hits
[49:51] – [51:47]
- SVG image JavaScript phishing and iCloud invite phishing as recent examples of creative attack vectors.
- The rise of “Scattered Lapsis Hunters”—a threat group blending techniques from Scattered Spider, Lapsis$, and ShinyHunters—highlights the constant churn and collaboration among cybercrime groups.
Viewer Questions and Infosec Humor
[47:54] – [53:41]
- Hosts riff on hoarding old hardware (“D-link hub I keep for memorabilia, in case the world ends, so I can still have a Halo LAN party.”—Ralph [52:28]).
- Lament the ever-present need for legacy adapters, as technology continues to iterate.
The “Egg-spionage” Scoop: US Imports Russian Chicken Eggs
[54:43] – [56:53]
- Bizarre but real: The US quietly bought $500k worth of eggs from Russia (first time since 1992), uncovered via Indian news outlets.
- Hosts mock the surreal “PR” by Russian media, making egg puns and joshing about nation-state eggs-as-surveillance devices.
“If your eggs are spying on you, then they might be Russian. I don’t know. If your egg has a micro-usb, do not plug it in.”
— Ryan [56:53]
Prompt Injection in Grok Ads on X (“Twitter”)
[57:53] – [59:08]
- Novel scam leverages a field in paid X (Twitter) ads to include prompt injection for Grok AI, redirecting AI summaries to adult/gambling content—essentially turning AI into a new XSS-type vector.
“This is like stored cross site scripting all over … That’s where we’re at with AI.”
— Chad [59:05]
Memorable Quotes
-
“This is how AI works. I mean, that’s the nature of large language models. They have to be large.”
— Ryan [12:56] -
“This is how capitalism works.”
— Joff [13:02] -
“We’re just Hoovers [LLMs]; they go out on the Internet and find everything they can find.”
— Ryan [15:50] -
“If the product is too good to be true, that means you’re part of the product.”
— Ryan [32:16] -
“I feel like I’m the. I’m a member of the last generation that knows what privacy actually was.”
— Joff [33:15] -
“I'm never going to plug that [crypto hardware wallet] in again.”
— Ryan [50:15]
Notable Timestamps
- 08:33–13:58: Anthropic’s copyright settlement, legal/technical ramifications, and LLM data “unlearning” challenges
- 05:42–08:17: Ransomware evolves to threaten “AI training on your data” as new extortion angle
- 24:00–32:13: Dashcam privacy, exposed records, and commercial data resale
- 33:42–39:17: ICE and government spyware—what could go wrong?
- 40:11–46:11: DDoS record, supply chain NPM attacks, API key compromise risks
- 54:43–56:53: Russian eggs hit American breakfast tables (!)
- 57:53–59:08: Prompt injection hijacks Grok AI on X for malicious ad redirection
Summary Table:
| Segment | Topic | Start | Key speakers/issues | |-----------------------------------|-----------------------------------------|--------|----------------------------------------------------| | AI Ransomware Tactic | Ransom via AI training threat | 05:42 | Unusual extortion schemes; targeted to companies | | Anthropic Copyright Settlement | $1.5B for pirated book ingestion | 08:33 | Legal, technical; parallels to Napster/Spotify | | Dashcam Privacy Fiasco | Nexar’s exposed footage | 24:00 | Privacy, surveillance, “user as product” | | Government Use of Spyware | ICE licenses Paragon spyware | 33:42 | Surveillance, privacy history, government overreach| | DDoS, Supply Chain, API Keys | Attacks, software supply chain threats | 40:11 | DDoS scale, npm hijack, key leaks | | Misc/Quick Fire | SVG JS/Phishing, Scattered Lapsis | 49:51 | New attack vectors | | Russian Egg Import | Quirky supply chain news | 54:43 | Eggs as nation-state issue/pun | | Prompt Injection in Grok Ads | AI prompt injection = ad fraud/XSS | 57:53 | AI exploited as “stored XSS” equivalent |
Final Takeaway
The episode does more than just recap headlines—it deeply explores (and pokes fun at) the blurring lines between technology, privacy, and legal frameworks in 2025’s security landscape. Whether it’s AI’s thirst for data, the commoditization of consumer surveillance, or surreal international egg shipments, listeners leave with a nuanced (and thoroughly entertaining) view of modern infosec.
End of summary.
![Anthropic 1.5 Billion © Settlement - 2025-09-08 - Talkin' Bout [Infosec] News cover](/_next/image?url=https%3A%2F%2Fassets.blubrry.com%2Fcoverart%2Forig%2F577207-646458.jpg&w=1200&q=75)