Podcast Summary: Talkin' About [Infosec] News
Episode: The Impending AI Bubble
Date: August 30, 2025
Hosts: Black Hills Information Security Team (John Strand, Corey, John Hammond, Bronwyn, Mary Ellen, Fidelis, Ian, others)
Episode Overview
This episode dives into the looming "AI Bubble," the mounting hype (and skepticism) around artificial intelligence, and how these patterns echo the early 2000s dot-com craze. The crew, a lively mix of penetration testers and infosec professionals, cover AI in customer service, quantum computing rumors, browser wars, security news, and—finally—the much-teased story about AI translating chicken clucks.
Their tone is irreverent, humorous, and self-aware, balancing infosec jargon with accessible metaphors and comic relief.
Key Discussion Points & Insights
1. Letters of Marque for Hackers (Privateering in Cyberspace)
[02:17]–[09:14]
- Discussion: Arizona congressman submitted a bill to authorize hacking foreign adversaries, modeled after historical "letters of marque" for privateers.
- Historical context from Bronwyn: these allowed colonial-era sailors to attack enemy ships with government backing.
- Team jokes about pirates, Red Bull instead of rum, and the idea of starting "Flame Wars 2025."
- Notable Quote:
"It just seems like a good idea... I can't see any possible way that anything could possibly go wrong with this." – Corey [04:34]
Takeaway:
While tongue-in-cheek, the team highlights risks of unleashing chaotic, semi-government-sanctioned hacking—and draws parallels to state-sponsored attacks already taking place globally.
2. Sentencing & Consequences for Cybercrimes
[09:16]–[16:29]
- Scattered Spider case: 20-year-old sentenced to 10 years; team reflects on hacking gone awry, the temptation of illicit hacking vs. legitimate bug bounties, and the outsized financial lure of crime.
- Java developer sentencing: Created “isEmployeeEnabled” function that triggered a devastating fork bomb post-firing; received a 4-year sentence.
- Jury nullification and developer empathy:
"If the attorneys were doing this right, they would get a bunch of developers on the jury... The requirements kept changing and users kept on changing things..." – Corey [14:01] - Team reflects on burnout and how repetitive, solvable problems in infosec continue to fester.
3. The New Browser Wars & Prompt Injection in AI Browsers
[16:29]–[22:54]
- Brave vs. Comet (Perplexity’s AI Browser): The Brave browser team publicly exposes prompt injection exploits in AI-powered browsers.
- Prompt injection demo: AI assistant can be tricked to divulge sensitive info it shouldn't (e.g., login codes).
- AI and privacy concerns:
"Now everything that I put into this browser is automatically going to be harvested by the AI company." – Bronwyn [19:01] - Recurring browser nostalgia and the never-ending fight for privacy and usability.
- Notable Quote: "It’s almost like one massive vacuum bag exploded and we’re getting everything back—old browsers, popups... Everything we thought we got rid of is coming back with a vengeance." – Guest [22:03]
4. Quantum Computing and the Hype Cycle
[23:19]–[28:27]
- Corey delivers an in-depth rant on the overhype and misconceptions around quantum computing, especially regarding crypto.
- Explanation: Quantum's practical utility is overstated—real world quantum systems can’t run Doom, aren’t general purpose, and error correction remains a problem.
- China’s claims: Skepticism about announcements from Chinese labs about massive quantum breakthroughs.
- Analogies: Fast & Furious bridge jump as quantum leap, "computer go fast" memes.
- Notable Quotes:
- "The dream is a quantum computer that can run Doom. And that is the bar." – Corey [24:00]
- "Out of all the possibilities, he [Vin Diesel in F&F] would have died horribly. If you gave it a quantum computer, it wouldn't swing—it would just teleport across." – John Strand [27:00]
- Takeaway:
Quantum computing is exciting, but practical breakthroughs are much farther out and overhyped, while crypto-vulnerabilities remain a future risk.
5. AI Displaces, Then Fails to Replace, Human Roles
[28:43]–[37:19]
- Major banks and companies (Australian bank, Klarna, IBM) replaced staff with AI, only to rehire due to AI's inadequacy.
- Media coverage discrepancy:
Publicizes layoffs and AI "efficiency," but the embarrassing walk-back is buried. - Prediction:
"I think we are coming for an AI bubble, and if only for the reason that the amount of money being invested into AI companies is not sustainable." – Bronwyn [33:12] - Team likens the hype to the dot-com bubble: Irrational VC funding, undefined goals, and companies with unproven business models.
6. The "AI Bubble": Parallels to Dot-Com Crash
[37:19]–[41:37]
- Major theme: AI startup investment may not match real-world sustainable revenue.
- Open question if ANY major AI company is truly profitable.
- Scaling bottlenecks: LLMs plateauing due to data and compute limitations; ChatGPT-5’s underwhelming advance is example.
- Speculation that a few giants (Amazon) could fully commit to AI support automation, but most companies cannot commit safely, especially in sensitive banking.
- Notable Quote:
"It's going to work in some instances. But I think 90% of the companies who are like, ooh, we can nuke our entire customer service team...What happens when I need to reset my password and it's an AI?" – John Hammond [36:01]
7. AI in Productivity Apps & Security Loopholes
[41:37]–[46:31]
- Microsoft Copilot’s integration into Excel warns, “Do not use for numerical calculations or any task requiring accuracy or reproducibility”—undercutting Excel’s core purpose.
- "What else are you using Excel for?" – John Hammond [42:08]
- Logging Failures: Copilot found to sometimes not generate logs, undermining access auditing.
- "Should we have logs? Eh, Eff it. It doesn't need everything..." – Corey [45:03]
- The team skewers the recklessness of rolling out half-baked AI features without mature security controls.
8. Old Vulnerabilities Never Die
[46:31]–[49:17]
- Ongoing exploitation of years-old vulnerabilities (e.g., Cisco Smart Install) by APTs.
- Security industry trap: Always rolling forward, but basics (patching!) are still ignored.
- Bronwyn laments:
"Good security is boring. Good, good management is boring. Good governance is boring. And you know what? Boring is grossly undervalued. I like boring." [48:23]
9. Password Managers & Clickjacking
[49:19]–[52:49]
- Recent research on clickjacking password managers; the consensus: known issue, mitigations (e.g., not using autofill) hamper usability.
- Using password managers (despite risks) is still far safer than password reuse.
- "Even if you're using a password manager that's actively getting hacked, it's still better to use a password manager than to not." – John Hammond [51:44]
10. The Value/Limitations of Security Awareness Training
[52:57]–[58:44]
- Recent headline claimed “training doesn’t work,” after one in 19,500 users fell for a phishing test.
- Team debates metrics for training value: improvements are visible in immature orgs, but perfect security is unattainable.
- The arms race—technical controls vs. human controls—is ongoing and neither absolute.
- "If your failure rate is 1 at a 19,500, what other security controls in your organization are that effective?" – Corey [53:18]
- "Phishing is really not that big of a problem for most orgs right now. What is a problem is Teams calls to Salesforce employees...vishing..." – John Hammond [58:44]
Memorable Moments & Notable Quotes
- [19:01] Bronwyn: "Now everything that I put into this browser is automatically going to be harvested by the AI company."
- [24:00] Corey: "The dream is a quantum computer that can run Doom. And that is the bar."
- [48:23] Bronwyn: "Good security is boring. Good, good management is boring. Good governance is boring. And you know what? Boring is grossly undervalued. I like boring."
- [53:18] Corey: "If your failure rate is 1 at a 19,500, what other security controls in your organization are that effective?"
- [59:12] Mary Ellen: "So if a bird clucks, what does it mean? ... Poultry farmers are now using AI to tell what their chicken flocks are saying."
- [59:33] John Hammond (joking about a chicken AI translator): "Can you imagine an AI translator that's like, man, I had some really bad poops earlier and it's like 'Warning: Disease detected by AI.'"
(Finally) the CHICKEN AI Story!
[59:04]–[62:59]
- Mary Ellen’s “clucking news”: AI apps now claim to translate chicken clucks, giving farmers insight into their flock’s health/mood.
- The app, "Cluckify," records and analyzes chicken sounds in different conditions.
- Team jokes about possible messages ("Ow, it hurts," "I'm bleeding everywhere," chickens discussing AI's takeover…), and potential for animal translation to reveal more than we want to hear.
- [60:02] John Hammond: "OK. Has anyone, does anyone have chickens? And if so, can you go test this?"
- Plans to outreach to chicken-owning Black Hills colleagues to test and report back.
Closing & Plugs
- Blue Team Summit and Blue Team training classes promoted ([63:02]).
- Classic sign-off energy: “Whatever the chickens say, it’ll just keep coming.” [63:37]
Timestamps for Key Segments
- 00:00–02:01: Light banter, tech checks (skip)
- 02:17–09:14: Letters of Marque discussion
- 09:16–16:29: Cybercrime sentencing, developer burnout
- 16:29–22:54: AI browser vulnerabilities & browser wars
- 23:19–28:27: Quantum computing, crypto, and hype
- 28:43–37:19: AI layoffs, the echo chamber, early signs of the AI bubble
- 37:19–41:37: AI bubble and parallels to dot-com
- 41:37–46:31: Copilot in Excel, logging mishaps
- 46:31–49:17: Old vulnerabilities persisting in orgs
- 49:19–52:49: Password manager clickjacking
- 52:57–58:44: Debating security awareness training
- 59:04–62:59: Chicken AI story
In Summary
This episode is a rollicking, insightful ride through the latest in infosec—from the ridiculousness of AI-powered chicken translators to the sobering risks of uncritical AI adoption. The looming "AI bubble" is explored with healthy cynicism, drawing parallels with the dot-com bust. Infosec basics (patching, password managers, awareness training) remain frustratingly relevant, while bleeding-edge advances are often overhyped and under-delivered.
The crew wraps up with a chicken (cluck) crescendo, promising more poultry penetration testing next week.
For listeners:
If you want laughs, real-world analogies, and a (skeptical) pulse-check on security and tech hype, this episode is a must-listen. And yes, the chickens do finally come home to roost.
![The Impending AI Bubble 2025-08-25 - Talkin' Bout [Infosec] News cover](/_next/image?url=https%3A%2F%2Fassets.blubrry.com%2Fcoverart%2Forig%2F577207-646458.jpg&w=1200&q=75)