Talkin' Bout [Infosec] News
Episode: The Coming SAAS Apocalypse - 2026-02-23
Hosted by Black Hills Information Security
Date: February 25, 2026
Episode Overview
This episode dives into the increasingly complex intersection of infosec and the evolving world of SaaS, AI, privacy, and platform control. The hosts and guest panelists discuss the impending "SaaS apocalypse" driven by the rise of AI-generated software, shrinking openness in Android, posthumous AI avatars, government feuds with AI vendors, and the ever-present issue of data breaches. The conversation weaves between light-hearted anecdotes and serious warnings about privacy, security, and the digital future.
Episode Participants
- Corey Ham (Host)
- Derek (Panelist)
- Bronwyn (Panelist)
- John Strand (Panelist, Founder BHIS)
- Brian Furman (Panelist, AI Ph.D.)
- Shecky (Panelist)
- Megan (Producer)
Key Discussion Points & Insights
The Shrinking Openness of Android (08:37–15:28)
Overview:
- Google is moving toward a more locked-down Android ecosystem, echoing Apple's app-store-centric model.
- Growing difficulty in installing third-party OS (e.g., GrapheneOS) as Google phases out hardware like Pixel, hurting privacy and security research.
- Panelists mourn the loss of Android's "open" promise and discuss implications for privacy tools.
Highlights:
- John Strand:
"This is not about security... It's predominantly about how you can lock people into an ecosystem to where the only way that they can load apps is through your store, that they get a percentage in the cut from the sales on that." (08:37)
- Derek:
"I think I trust Apple a little more than Google. ... Google invented surveillance capitalism and I don't think it's ever been a privacy platform." (14:41)
- Discussion of the Gini index and economic consolidation as a parallel to tech monopolies.
- Privacy research is threatened: Without root access and open platforms, vetting app behavior becomes harder.
Notable Quote:
- John Strand:
"We're getting higher and higher into the oligarchy scale... The French Revolution happened at a Gini index of 82 and we are currently at 83." (11:54)
Meta Patents AI to Talk With the Dead (15:58–21:26)
Overview:
- Meta's patent to create AI-driven avatars of deceased people sparks ethical and emotional debate.
- The panel discusses the "creepy line", grieving, exploitation, and potential to harm rather than comfort.
Highlights:
- Shecky:
"They're preying on people that want to go ahead and have these messages." (16:26)
- John Strand:
"They're going to be trying to market and like. Like, seriously monetize us after we're dead. Like, that feels. You talk about this as way crass." (17:24)
- Bronwyn:
"Grieving involves letting go. ... Even if I did have a lot of data points, it still wouldn't capture him." (19:16)
Notable Quote:
- John Strand:
"When I'm dead, I want to be dead. I don't want to live on in Facebook because that's hell and I don't want to die and go to hell. I don't want to be a Facebook." (17:24)
SaaS Apocalypse: AI Threatens Third-Party SaaS Vendors (21:45–29:16)
Overview:
- "Build, Don't Buy": John Strand proposes that AI now enables companies to rapidly roll bespoke internal tools, threatening mid-level SaaS vendors.
- Rise of "vibe coding": leveraging AI to clone simple SaaS apps quickly, but with real security risks.
Highlights:
- John Strand:
"Every company of a certain size that has an engineer... now has the ability to recreate the SaaS services that they pay for relatively easily using AI tools..." (22:11)
- Bronwyn:
"Humans are really good at generating crap code too. ... We're going to see an explosion of crap code." (25:22)
- Derek:
"The economic impact is what I would worry about more like overall economic impact." (29:16)
Notable Quotes:
- John Strand:
"Buy it, don't build it. We're now flipping and it's now becoming build, don't buy." (23:18)
- Corey Ham:
"SaaS was never actually good. It was just easy." (27:11)
AI in the Workplace: Productivity Promise or Hype? (29:16–36:36)
Overview:
- Per new research, AI isn't (yet) boosting productivity at most firms; the gains that are observed go mostly to users who already have deep domain knowledge.
- Discussion of latest, more "agentic" AI systems showing early signs of breakthrough.
Highlights:
- John Strand:
"There's no question that AI is a bubble, right? And I think that the lead time is going to be a lot like what happened when the Internet started taking off." (30:56)
- Derek:
"Now with like the agentic stuff that really has only been out since like late last November, it is night and day, like we turned a corner." (31:54)
- Bronwyn:
"People who already know how to do a thing are much, much better able to leverage AI... but if you don't know how to do something... that's when you start seeing so much AI slop getting into all kinds of output." (32:58)
Notable Quote:
- Corey Ham:
"If you're looking at [AI] like: let's fire people and make more money. Enjoy it while it lasts." (35:41)
US Government vs. Anthropic: The Limits of AI Use (36:46–40:31)
Overview:
- Anthropic clashed with the US government over how its AI could be used, facing threats of blacklisting.
- Raises question: How much control can/should an AI vendor exert over government use?
Highlights:
- Derek:
"Does a company, once they sell a product, get to say how you use it in terms of service... But also, do you know who you're selling this to? It is literally the Department of War. What do you think this is already." (38:08)
- John Strand:
"But going and saying that we're blacklisting you and we're going to blacklist any company that uses you—a bit much." (39:03)
Data Breaches: "The Largest Ever" (Not Really) (41:09–43:28)
Overview:
- A breach at Conduit (a data warehousing company) is dubbed "possibly the largest ever"; the panelists debunk the claim and express breach fatigue.
Highlights:
- John Strand:
"They kept on saying, like, this could be the largest data breach in history. And I'm like, God damn, there's some stiff competition there." (41:09)
- Derek:
"So much data is out there on all of us for everything." (42:59)
- Corey Ham:
"It's the same law firm. It has to be because the same exact formatting, the same subscription to Kroll identity monitoring or whatever, the same bs." (42:28)
AI Safety: Researchers, Bioweapons, and Frontier Models (43:28–46:12)
Overview:
- A top AI safety researcher leaves Anthropic, highlighting challenges in keeping AI from enabling the creation of real-world threats like bioweapons.
- Panelists paint a grim picture of the risks inherent in proliferating powerful open-source models.
Highlights:
- Derek:
"LLM AI safety at the moment is really just an illusion. ... You can get an abliterated model. And I'm not saying that it'll make a bioweapon, but it will tell me how to hot wire a Volvo XC60 or make meth." (45:04)
- Bronwyn:
"My doctor, he's always asking me, how you doing on that alcohol? It's like, doc, I work in cyber security. Stress is the name of the game." (44:57)
Dell's Perfect 10.0 CVE: "Celebrate the Little Things" (46:16–48:10)
Overview:
- Dell's RecoverPoint for Virtual Machines receives a perfect CVSS score of 10.0 due to hard-coded credentials.
- The panel points out the recurring pattern of poor defaults in enterprise gear.
Highlights:
- John Strand:
"Dell has RecoverPoint for Virtual Machines that is under active exploitation as a CVSS score of a perfect 10.0. So can we get a round of applause for Dell or no?" (46:32)
- Derek:
"If it's a piece of software with hard coded creds running in a virtual machine, you don't need a proof of concept. Just go get the vmdk file and start grepping." (48:01)
AI-Assisted Threat Actors: Pen Testing at Scale (48:41–51:42)
Overview:
- Russian threat actors use AI to supercharge campaigns against FortiGate devices.
- Points toward a future where defenders and attackers are both heavily AI-augmented.
Highlights:
- Corey Ham:
"They're using AI to speed up their workflow. That's what we're doing as pen testers... It's just about how fast these kinds of types of attacks are going to scale." (49:15)
- Derek:
"Real fun is going to be when... local models... catch up to where the frontier models are now. And you can just run it on your MacBook... and it's as powerful as Opus 4.6 is right now." (51:18)
AI Search Poisoning: The Hot Dog Journalist Example (52:05–56:04)
Overview:
- Stunt journalism demonstrates how AI can inadvertently amplify questionable or false facts by indexing SEO’d web content.
- Rather than a new exploit, it's a familiar "garbage in, garbage out" cycle.
Highlights:
- Bronwyn:
"Overall quality of the model content going down and becoming a vicious cycle. ... It's not news to me." (53:30)
- Derek:
"'I tricked AI' or 'I got AI to do this'... AI is not a person or... a thing. Right. It's basically a really powerful mathematical tool..." (54:22)
- John Strand:
"I think that these stories are important because it is going to get the narratives across... Maybe it's going to resonate with a different group of people, even if it is some type of repetition." (56:04)
AI Literacy & Fact-Checking (60:47–62:17)
Overview:
- Can the average person distinguish between reputable and AI-contrived information?
- The panel expresses skepticism: digital (and AI) literacy is declining; people uncritically accept what’s presented.
Highlights:
- Brian Furman:
"...it's kind of important to maybe go do your own research to make sure that what you're getting back is legitimate and valid." (60:47)
- Derek:
"Not at all." (61:20)
- John Strand:
"Our literacy literacy is going down because... when I was a kid, you can't trust the encyclopedia." (61:21)
- Derek:
[on his daughter's skepticism of AI for cheating] "Everything created with AI is bad to her. So I'm gonna let it go until she learns some linear algebra." (61:43)
Memorable Moments & Notable Quotes
- "No, I don't want to be a Facebook." – John Strand (17:24)
- "Buy it, don't build it. We're now flipping and it's now becoming build, don't buy." – John Strand (23:18)
- "SaaS was never actually good. It was just easy." – Corey Ham (27:11)
- "LLM AI safety at the moment is really just an illusion." – Derek (45:04)
- "Our literacy literacy is going down..." – John Strand (61:21)
- "Everything created with AI is bad to her. So I'm gonna let it go until she learns some linear algebra." – Derek [on his daughter] (61:43)
Timestamps for Key Segments
| Segment | Timestamps |
|--------------------------------------------------------------|-----------------|
| Pre-show Banter / Intros | 00:24 – 08:37 |
| Android Openness / App Store Lockdowns | 08:37 – 15:28 |
| Meta & AI for Posthumous Users | 15:58 – 21:26 |
| SaaS Apocalypse / AI-generated Enterprise Software | 21:45 – 29:16 |
| AI & Workplace Productivity | 29:16 – 36:36 |
| US Gov't v. Anthropic / AI Vendors & Use Cases | 36:46 – 40:31 |
| "Largest Data Breach in History" Hype | 41:09 – 43:28 |
| AI Safety, Bioweapons, Risks | 43:28 – 46:12 |
| Dell VM Product Perfect 10 CVE | 46:16 – 48:10 |
| Russian Threat Actor: AI-accelerated Attacks | 48:41 – 51:42 |
| Stunt AI Journalism (Hot Dog Article) | 52:05 – 56:04 |
| AI Literacy, Fact-Checking, and Generational Perception | 60:47 – 62:17 |
| Plugs, Events, Call to Action | 64:04 – end |
Tone and Takeaways
- Lightly irreverent, sometimes darkly humorous: Hosts maintain a bantering tone, even as topics skirt the dystopian.
- Cautiously optimistic about AI as a tool, with recurring reminders that bad actors and bad code are age-old problems; what's new is their scale and velocity.
- Unified warning: Tech industry and society need to prioritize privacy, competition, open access, and literacy to avoid technological and social lock-in.
Upcoming Events & Recommendations
- BHIS & Antisyphon Training events:
- SOC Summit, March 25th
- Upcoming AI workshops and podcasts from Brian & Derek
- Webcast: "OWASP LLM Top 10" on Wednesday
Summary prepared for those who missed the episode or want a deep, structured recap of key insights, memorable quotes, and actionable takeaways.