Podcast Summary: Talkin' About [Infosec] News, Powered by Black Hills Information Security
Episode: Ransomware Victims Stop Paying Hackers – 2025-11-03
Date: November 6, 2025
Overview
This episode centers on a dramatic shift in the ransomware landscape: significantly fewer victims are paying off hackers following attacks. The Black Hills Information Security (BHIS) team of penetration testers and infosec experts breaks down the latest stats, shares theories behind the trend, and dissects adjacent infosec topics, including cyber insurance changes, AI and mental health risks, lawsuits over product flaws, and incidents where the line between security professional and criminal blurs. The conversation is lively, self-aware, and rooted in both technical expertise and industry anecdotes.
Key Discussion Points and Insights
1. Podcast Banter & Setup (00:00 – 04:03)
- The crew opens with casual Halloween postmortems and nostalgic TV references, quickly moving to the main news segment.
2. Major Theme: Ransomware Profits Plunge as Payments Drop (04:03 – 16:04)
Declining Payment Rates
- Key Stat: According to Coveware, only 23% of breached companies now pay ransoms, down from 85% in 2019.
- "[Coveware] publishes a cyber threat intel report. The big statistic is only 23% of companies who are breached nowadays pay the ransom...in 2019, it was 85%." – [G, 04:56]
Drivers Behind the Decline
- Legislation & Policy Changes:
- The UK has banned ransom payments; large UK companies may now request government bailouts instead of paying hackers.
- Cyber Insurance Evolution:
- Insurers are “not being as cooperative” with claim approvals, adding friction to payouts. (10:22)
- Backups and Cloud Migration:
- Improved organizational backups, especially with cloud adoption, have given companies more confidence to refuse payment.
- "If you have good backups, you don't pay the ransom. That's just how it works." – [G, 13:46]
- Erosion of Trust in Hackers:
- Reports abound of hackers not delivering decryption keys or re-extorting victims, destroying faith in ransom payoffs.
- "It's no longer like a high percentage guarantee...that if they pay it, they will get their data back..." – [A, 08:17]
- Alternative Cyber Criminal Activities:
- Shifts toward data selling, info-stealer malware, and state-level resource reallocation (e.g., one theory floated on the show holds that Russian cybercrime capacity has been redirected toward Ukraine) muddy the ransomware picture.
Cynicism and the Need for Better Data
- The team notes skepticism around statistics based on limited reporting and differing data sets.
- "Take all this with a grain of salt...this data fluctuates wildly. If one company pays one ransom, this data is going up or down by 50%." – [G, 15:22]
Ransom Demands Have Skyrocketed
- The average ransom payment now exceeds $1 million (up from under $100,000 in 2019), likely because the shrinking pool of companies that still pay represents the most desperate, highest-stakes cases.
- “The average ransom payment is up to over a million dollars now...more than a thousand percent...” – [G, 12:13]
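As a sanity check on the "more than a thousand percent" quote, the implied increase can be computed directly. The episode only says "under $100,000" for 2019, so the $90,000 baseline below is an assumption for illustration; any baseline under roughly $90,900 makes a $1 million average a greater-than-1000% increase.

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100.0

# Episode figures: average payment under $100,000 in 2019, over $1 million now.
# Assuming a hypothetical 2019 average of $90,000 (not stated in the episode):
increase = pct_increase(90_000, 1_000_000)
print(f"{increase:.0f}%")  # over 1000%, consistent with the quote
```

This is just arithmetic on the episode's round numbers, not Coveware's actual figures.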
Notable Quotes
- "You want to feel good about it, but then you're like, what does this actually mean?" – [G, 05:59]
- "I think the amount of work that has gone into paying ransoms and not paying ransoms and backups and processes around ransomware since 2019 is a staggering amount of work." – [G, 06:22]
3. Broader Infosec Factors (16:04 – 32:59)
AI, Mental Health, and Social Impact
- Discussion pivots to alarming mental health statistics (e.g., "over a million people a week" express suicidal intent to ChatGPT).
- "Is AI just really good at getting people to be suicidal?" – [G, 16:18]
- AI Psychosis: Examination of how LLMs reinforce user mental states, especially dangerous in vulnerable or isolated populations.
- Importance of guardrails, intervention messaging, and the social responsibility of platform developers.
- "If I were able to help steer policy within these big tech companies, I would be going in the opposite direction...redirect people at risk to getting help, to being more optimistic." – [C, 25:10]
- Broader social engineering: algorithms on platforms like Facebook/Meta actively reinforce negativity and division for engagement and profit.
- "If you're hating on liberals, if you're hating on them, redneck conservatives, China and Russia both say thank you and we need to do better." – [A, 20:40]
Callouts and Resources
- The team underscores the importance of real human connection, physical activity, and seeking help if experiencing mental health crises.
- 988 Lifeline resource mention [32:03].
- Encouragement to talk to someone, and a plea: "We all want to see you at the next con." – [A, 32:40]
- Discussion of landmark lawsuits against AI companies over user self-harm; platforms track at-risk interactions and intervene where possible, but the data is nuanced.
4. Infosec Law, DMCA Abuse, and the Streisand Effect (33:01 – 40:24)
Lockpicking Lawsuit
- Story: YouTubers bypassed a supposedly secure lock with a can shim, posted instructional videos; manufacturer sued for slander, triggering the "Streisand Effect."
- "Long story short...Proven Locks was like, hey, we're gonna sue you for this. This was slander...judge was not having this at all." – [E, 34:33]
- Attempts to suppress the information led to more exposure and ridicule. Legal threats and DMCA notices failed spectacularly.
DMCA Overreach
- Commentary on the misuse of DMCA threats against security researchers and content creators exposing vulnerabilities.
- "You really have to be careful about where you're dropping the DMCA on this stuff and how it can actually be used as well." – [A, 39:54]
Takeaways
- Security by obscurity doesn't work; transparency and responsible disclosure ultimately lead to better products.
5. Platform & Content Moderation Woes (41:30 – 46:45)
YouTube and Content Moderation Failures
- Example: Videos teaching how to bypass Windows 11 setup restrictions were struck by algorithmic moderation, then restored following public outcry.
- "All the AI moderation on YouTube is completely unchecked. There is no recourse if you're a creator other than public outcry." – [G, 42:00]
- Reflection on the centralization of power in internet platforms and challenges for alternative, decentralized video platforms to scale.
6. Insider Threats and Security Community Trust (47:07–50:59)
Cybersecurity Professionals Gone Rogue
- News that two employees of a ransomware negotiation firm (Digital Mint) pivoted to launching ransomware attacks themselves.
- "This is...the company that specializes in negotiating ransoms...began using malicious software to conduct ransomware attacks against their own victims." – [G, 47:39]
- Discussion on ethics, trust, and industry reputation.
- "If you're in this field and you can't be trusted, you can't be in this field." – [G, 50:03]
7. Cloud Outages and DNS Catastrophes (51:06–53:46)
- Brief coverage of a major Azure DNS outage, paralleling previous AWS incidents.
- The widespread impact of cloud provider outages punctures the myth of digital invincibility.
- "Are you saying the cloud is just someone else’s computer?" – [G, 53:13]
8. Rapid Fire News: Breaches, Tool Releases, and AI Security Trends (54:00–61:58)
Recent Breaches
- A Pennsylvania school's Salesforce instance was compromised and used to send a mass offensive email.
- EY (Ernst & Young) data leak due to unsecured Azure backups.
Security Tools and AI in Offense
- An arsenal of new AI-driven offensive security tools is being showcased, democratizing capabilities that once required large budgets.
- "It's going to explode. Things that you weren't able to do before in that short amount of time, people are able to do now in a much shorter amount of time." – [E, 57:46]
- Caveats: Increase in "standardization of mediocrity" as AI-generated code tends toward functionality over security unless explicitly directed otherwise.
Security Implications of AI Coding
- AI code suggestions propagate legacy vulnerabilities, reinforcing the need for explicit secure coding prompts and automated reviews.
- The prospect of endless "tabs vs. spaces" AI wars was humorously floated as a hypothetical service meltdown.
9. Closing & CTF Winners (61:58–end)
- CTF challenge winners announced; emphasis on hands-on learning for the community.
Notable Quotes & Memorable Moments
- "You want to feel good about it, but then you're like, what does this actually mean?" – [G, 05:59]
- "It's no longer like a high percentage guarantee in the minds of people that if they pay it, that they will get their data back or their service back." – [A, 08:17]
- On AI & social media: "That whole system is set up to take that hatred and to take that rage and keep you on as long as you can." – [A, 20:40]
- "If you're in this field and you can't be trusted, you can't be in this field." – [G, 50:03]
- "Are you saying the cloud is just someone else's computer?" – [G, 53:13]
Timestamps for Key Segments
- Ransomware Stats & Discussion: 04:03–16:04
- AI & Mental Health Spiral: 16:04–32:59
- Lockpicking/DMCA Lawsuit: 33:01–40:24
- YouTube & Content Moderation Problems: 41:30–46:45
- Insider Threat – Ransomware Negotiators Attack: 47:07–50:59
- Azure DNS/Cloud Outage: 51:06–53:46
- Recent Breaches & Security Tools, AI Trends, Industry Riffing: 54:00–61:58
- CTF Winners & Wrap Up: 61:58–end
Conclusion
This episode delivers a candid, data-driven look at why fewer companies are paying ransoms, underscoring complex factors: policy, business reactions, tech advancements, and shifting criminal tactics. It then branches into the impact of AI on both security and society, industry insider scandals, and the lighter (if only in tone) side of product security and lawsuits. With sharp humor, practical takeaways, and genuine concern for both the technical and human sides of security, the BHIS crew makes this episode essential listening for anyone tracking the direction of modern infosec.
