Podcast Summary: Talkin' Bout [Infosec] News — US Defense Chief Uploads Secret Info to ChatGPT
Episode Date: February 5, 2026
Host: Black Hills Information Security Team (Ryan, Wade Wells, Ralph, Hayden, Andy, Michelle Khan, et al.)
Episode Main Theme:
This episode dives into recent infosec news with a focus on a reported incident in which the US CISA chief uploaded sensitive documents into ChatGPT, exploring industry reactions, risk realities, policy implications, and broader conversations about AI use, supply chain threats, data breaches, and more. The discussion is lively, technical, and laced with humor and asides.
1. Opening Banter & Warming Up
Timestamp: [00:01]–[09:33]
- Robot Vacuum Mishap: Wade recounts accidentally pouring water into his robot vacuum's air intake, causing it to explode but "it still works...smells a little bit like an electrical fire" ([01:36] – Wade Wells).
- AI in Everyday Tech: Light ribbing about Tesla’s use of Grok AI, the trolley problem, and self-driving car ethics; tongue-in-cheek about Chinese vs American tech blowing up.
- Tesla News: Tesla pivots from Model X & S to focus on robots; panel jokes about delayed Roadster/Semi releases and "self-driving" via actual robots ([04:58] – Ralph: “I told you we would be self-driving. Look, he'll get in and drive for you.”).
- Global AI Car Deployment: Michelle shares observations from Dubai on self-driving cars and the reluctance of the US to allow Chinese-made AI vehicles, highlighting competitive advantages abroad.
2. Main Infosec Discussion — CISA Chief & ChatGPT Incident
Timestamp: [09:33]–[18:43]
- Incident Overview:
- An Ars Technica report claims CISA chief Madhu Makala uploaded contractor docs into ChatGPT after specifically requesting and receiving an exemption.
- Ryan notes this exemplifies the classic C-suite scenario: “the CEO demands access to an AI and then misuses it” ([11:45] – Ryan).
- Consensus: “Is it technically a big deal?” Maybe not, but it’s a critical case study in broken policy, controls, and real-world AI adoption.
- Key Quotes & Takeaways:
- "It's not just AI, that's everything. That's always the C-suite." ([11:45] – Wade Wells)
- “Literally, like, I don't want to use MFA.” ([11:48] – Michelle Khan)
- Detection of the Incident:
- Not a public admission: sources inside CISA leaked the access and misuse; the access was provisioned and later revoked.
- A "special" version of ChatGPT was attempted, but skepticism remains about the actual protections and controls.
- Industry Reflections:
- Hayden introduces internal AI gateways as a possible solution: middleware to validate/monitor data flow between LLMs and the organization ([13:15] – Hayden).
- Pushback: “That gateway is going to cause them all to slow down.” ([13:43] – Wade Wells)
- Internal models (open source/self-hosted) are suggested as the future, especially for sensitive orgs ([17:48] – Ralph).
- “If we're being honest, is OpenAI actually storing all uploaded docs? No, it’s not feasible at that scale.” ([14:43]–[15:06] – Ryan & Ralph)
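The gateway idea Hayden floats amounts to a screening layer that sits between users and an external LLM API. A minimal sketch of the concept (the patterns and function names here are hypothetical illustrations, not anything described on the episode; a real gateway would use DLP rules, data classification labels, and audit logging):

```python
import re

# Illustrative patterns only -- a production gateway would carry a far
# richer ruleset and log every decision for audit.
SENSITIVE_PATTERNS = {
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the outbound request on any match."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(prompt)]
    return (not findings, findings)

# The gateway forwards only prompts that pass screening; blocked prompts
# are returned to the user with the findings for remediation.
allowed, findings = screen_prompt("Summarize this contractor document.")
```

This is exactly the friction Wade predicts: every prompt pays a screening cost, and false positives send users hunting for ways around the gateway.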
3. Supply Chain & Software Threats
Timestamp: [18:43]–[25:48]
a. NPM Supply Chain Attacks
- Latest Update:
- After the “Shai-Hulud” worm, npm/GitHub introduced `ignore-scripts=true` to mitigate supply chain risk.
- Researchers at Koi Security discovered a bypass using `.npmrc` to override the git binary path.
- Discussion Points:
- Security controls are “cat and mouse,” but more controls reduce attack surface ([20:00]–[21:48] – Ryan, Ralph, Wade).
- Attack vector: Compromised repo owner; controls must be server-side.
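Both the mitigation and the reported bypass live in npm configuration. A hedged sketch of what the two settings look like in an `.npmrc` (the malicious path below is illustrative only, not taken from the research):

```ini
# Defense: skip lifecycle scripts (preinstall/postinstall) during installs.
ignore-scripts=true

# Reported bypass: npm also reads a setting that selects the git binary
# used for git-hosted dependencies; a project-shipped .npmrc pointing it
# at an attacker-controlled executable regains code execution even with
# scripts disabled. (Hypothetical path for illustration.)
git=./node_modules/.bin/not-really-git
```

This is why the panel stresses that controls must be enforced server-side: a compromised repo owner controls the project's own config files.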
b. Notepad++ Supply-Chain Compromise
- Incident:
- Updater server was compromised for months, delivering targeted malicious updates.
- Reactions:
- “If you use Notepad++, assume you’re compromised... they were only targeting specific people.” ([24:15] – Ryan)
- Likely a nation-state (China) targeting telecoms; "violent typhoon" suspected.
- Emphasis on the importance of updating WordPress sites and plugins – common attack vector ([25:07] – Wade Wells).
4. Data Breaches & Info Stealer Trends
Timestamp: [26:23]–[29:36]
- Credential Dumps:
- Fresh dump of 149 million credentials – “rookie numbers” compared to the podcast’s own leak collection ([26:38]–[27:20] – Ryan/Ralph).
- Major targets: WordPress, OnlyFans, Coinbase—even government portals like “adfs.FBI.gov.”
- MFA is now table-stakes: “Safe to assume no password is safe at this point” ([27:20] – Ryan).
- Info Stealer Prevalence:
- “What kind of a computer gets infected? – It’s usually home devices: game downloads, plug-ins; personal/work and insecure endpoints” ([28:10] – Michelle Khan).
- “Credential syncs are the real danger – [browser] sync brings work infections home and vice versa” ([28:51] – Ryan).
- Amusing anecdote: installing the "nCage" browser plugin leads to Nicolas Cage faces taking over a coworker's synced home PC, freaking out his wife ([29:01]–[29:36] – Wade Wells et al.).
5. AI/LLM Security, Open Source Models & “Vibe Coding”
Timestamp: [30:45]–[36:05]
- OpenClaw / MultBook Fiascos:
- Open-source AI tools (OpenClaw, MultBook) launched rapidly with inadequate security; MultBook, billed as a "social network for AI agents," had grossly misconfigured keys and permissions.
- “If you make something this fast, you can’t make it secure. It’s just impossible.” ([35:16] – Ryan)
- No real-world harm, mostly spectacle: “They could impersonate an AI with another AI” ([34:50] – Ralph).
- Key Quote:
- “People tweet about how you can give it a trading account... friend lost three grand. Gave this AI $3,000 and said ‘Go for it, chief’” ([47:23] – Hayden).
6. Pen Testing, Law, and Physical Security: CoalFire Case Revisited
Timestamp: [36:12]–[38:39]
- Outcome:
- Settlements reached years after the infamous CoalFire pentest arrest (2019); the duo splits $600K.
- Panel’s Take:
- “It always is, when the cops do something dumb, the public has to pay for it. Classic.” ([37:39] – Ryan)
- “Physical pen testing is not a crime!” — referencing a legendary DEF CON-style t-shirt ([38:12] – Michelle Khan, Ralph).
7. Lightning Round — Other Notable Threats and News
Timestamp: [38:39]–[51:53]
- Claude AI Evolution:
- New model to be released; claimed to outperform Opus at lower cost, optimized for Google TPUs ([41:39]–[42:35] – Ralph, Hayden, Ryan).
- "The continual march [of AI progress] is continuing... every three months, something new." ([42:07] – Ralph)
- Exposed LLM Servers:
- 175,000+ publicly accessible Ollama AI servers found via Shodan, with widely varying security postures.
- “You want free AI? Just go to Shodan” ([45:55] – Wade Wells).
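As background on how these servers turn up (an illustration, not the exact query discussed on the episode): Ollama listens on TCP 11434 by default, and an exposed instance answers unauthenticated REST calls, so simple port searches surface candidates.

```text
Shodan search (illustrative):  port:11434

Unauthenticated check against an exposed host (hostname hypothetical):
  curl http://ollama.example.net:11434/api/tags    # lists installed models
```

Anyone who can list models can also submit generation requests, which is the "free AI" Wade is joking about.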
- Residential Botnets:
- Google’s takedown of a residential proxy botnet used by hundreds of threat actors. Caution: “If you’re not paying for it, you’re the product.” ([57:01] – Andy)
- Data Breaches:
- Panera Bread and OkCupid breached by ShinyHunters, likely via vishing and SSO abuse (the group's signature).
- “Employers discovering your dating profile, risk of doxing, there’s tons of things” ([51:46] – Ralph).
- Costco RAM Thefts:
- Costco removes RAM from demo PCs to prevent theft—panel has fun with the concept of being “excommunicated for stealing RAM” ([39:59]–[41:00]).
8. OT Security: Poland Grid Attack
Timestamp: [54:13]–[56:24]
- Incident:
- Poland’s grid was nearly taken down by an attack exploiting Fortinet vulnerabilities and default creds; EDR blocked the “wiper” stage and prevented widespread outages.
- Takeaway:
- Defense-in-depth, especially strong endpoint protections (EDR) in OT networks, is proven to thwart advanced attacks.
9. Community, Trainings, and Final Thoughts
Timestamp: [58:00]–[63:06]
- Plugging Classes & Events
- OSINT and Blue Team courses, SOC Summit, and conference details.
- Humorous Product Suggestions:
- Ryan proposes AI-driven visualizations: "pew pew" maps for AI queries that display a company's "mood" or possible malice, for fun and simplified detection ([59:45]–[62:29]).
- Meta-Reflection:
- Discussion on automation, AI replacing jobs, and the unpredictable speed of technology advancement.
10. Memorable Quotes & Moments
- “AI is... a security disaster, but we're moving so fast that it doesn’t matter." ([32:00] – Ryan)
- “If you use Notepad++, assume you've been compromised." ([24:15] – Ryan)
- “It’s not the AI cars I’m scared of. It’s everyone else who's driving around the AI cars.” ([06:17] – Wade Wells)
- “Credential sync is what gets people; you can't just lock down one endpoint anymore.” ([28:51] – Ryan)
- “Employers discovering your dating profile, risk of doxing, there's tons of things...” ([51:46] – Ralph)
11. Key Takeaways
- C-Suite Tech Decisions: Senior leadership often pushes for bypasses to policy, putting organizations in difficult security positions with new tech (AI, cloud, etc.).
- Practical AI Security Concerns: Gateways and middleware for controlling LLM use are necessary, but friction and bureaucracy limit adoption; most orgs will use internal/private models for truly sensitive data.
- Supply Chain & Update Security: Attacks on software updaters and package ecosystems (npm, Notepad++) are increasing in sophistication. Upstream & supply chain hygiene is critical.
- The Human Element: Despite new tech, most security failures are still due to basic misconfiguration, reused passwords, or credential sync from home/personal use.
- Lightning Speed of Change: From LLM model evolution to botnet takedowns and new exposed attack surfaces, attackers and defenders alike are barely keeping up.
For those who missed the episode:
This week’s TBN delivers laughs, lessons, a sobering look at real-world infosec challenges, and relentless reminders that in a world of AI-powered everything, the old human problems (passwords, misconfigurations, faulty policies) remain...just now with new speed and scale.
End of Summary