
Hosted by Jeremy Snyder · EN
Looking for the latest news and views from the world of AI security?
Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry.
Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.

In this episode for May 14, 2026, Jeremy breaks down a watershed moment in cybersecurity: the first confirmed case of hackers using AI to discover and weaponize a zero-day vulnerability in the wild. We also explore a major self-reported PII leak in the banking sector and the expanding attack surface of AI development environments.Key Episode Highlights:The First AI-Generated Zero-Day: Google Threat Intelligence confirms hackers used AI to discover and weaponize a 2FA bypass in an open-source admin tool, marking a transition from theoretical risk to documented reality.Banking Sector PII Leak: Community Bank (operating in PA, OH, and WV) filed an 8-K reporting that sensitive customer data, including SSNs and dates of birth, leaked into an AI application during training.The "Beagle" Backdoor: Sophos uncovered a fake Claude-Pro website pushing trojanized installers that deploy a memory-resident backdoor targeting AI coding environments.Framework Exploitation: Research reveals how prompt injection in popular frameworks like Semantic Kernel, LangChain, and CrewAI can escalate to full remote code execution (RCE).Phonetic Obfuscation: New proof-of-concept research shows that LLMs can navigate phonetic misspellings to interpret malicious intent, effectively bypassing standard text filters.Pixel-Perfect Phishing: Vercel’s v0.dev tool is being used by attackers to generate nearly perfect brand impersonations for Nike, Adidas, and Microsoft, making phishing detection significantly harder.Secure AI Across Your Entire OrganizationUnregulated AI usage and data leaks are the biggest threats to your organization's reputation. Get full visibility into your AI environment and block sensitive data exfiltration in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demoEpisode Linkshttps://cloud.google.com/blog/products/identity-security/beyond-source-code-the-files-ai-coding-agents-trust-and-attackers-exploithttps://www.microsoft.com/en-us/security/blog/2026/05/07/prompts-become-shells-rce-vulnerabilities-ai-agent-frameworks/https://www.bleepingcomputer.com/news/security/fake-claude-ai-website-delivers-new-beagle-windows-malware/https://www.infosecurity-magazine.com/news/researchers-10-wild-indirect/https://www.darkreading.com/cloud-security/hackers-ai-exploit-dev-attack-automationhttps://www.darkreading.com/ics-ot-security/worlds-first-ai-driven-cyberattack-couldnt-breach-ot-systemshttps://hackread.com/hackers-exploit-vercel-genai-phishing-sites/https://bishopfox.com/blog/cve-2026-42208-pre-authentication-sql-injection-in-litellm-proxyhttps://securityaffairs.com/191888/data-breach/braintrust-security-incident-raises-concerns-over-ai-supply-chain-risks.htmlhttps://shape-of-code.com/2025/06/29/an-attempt-to-shroud-text-from-llms/https://databreaches.net/2026/05/12/us-bank-reports-itself-for-revealing-customer-data-to-unauthorized-ai-application/

In this episode for May 7, 2026, Jeremy reports from the sidelines of BSides Luxembourg. This week marks a significant shift in AI-driven vulnerability research, moving from source code analysis to the successful reverse engineering of closed-source compiled binaries.Key Episode Highlights:GitHub Backend RCE: Researchers from Wiz used AI-augmented binary analysis to find an X-stat header injection vulnerability in GitHub’s Git push pipeline, achieving a CVSS score of 8.7 on closed-source code.The "Copyfail" Crisis: A critical Linux security flaw dating back to 2017 was uncovered using AI-assisted tools. The story highlights the tension between automated discovery and the rise of "AI slop" in automated vulnerability disclosures.CISA Patching Mandates: CISA is considering lowering the required "mean time to patch" from 14 days to just 3 days in response to AI’s ability to find vulnerabilities at an "apocalypse" scale.Shadow AI Exposure: A study by Intruder found over 1 million exposed AI services via certificate transparency logs, with 31% of Meta Llama servers requiring zero authentication.Google "Cosmo" Leak: A massive 1.13 GB system-level agent for Android briefly leaked on the Play Store, revealing an autonomous browser agent with deep system permissions.The Criminal Skill Gap: New research from the University of Edinburgh suggests that while AI is boosting professional developers, most cybercriminals currently lack the skills to weaponize AI at a "weaponizable scale".Shadow AI and unsecured AI models are the new frontier of enterprise risk. 31% of exposed AI servers are operating with zero authentication. Don't let your infrastructure be the next headline. Get full visibility into your AI environment in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demoEpisode Linkshttps://www.wiz.io/blog/github-rce-vulnerability-cve-2026-3854 https://cyberscoop.com/copy-fail-linux-vulnerability-artificial-intelligence/https://www.reuters.com/legal/litigation/us-officials-weigh-cutting-deadlines-fix-digital-flaws-amid-worries-over-ai-2026-05-01/https://venturebeat.com/security/ai-agent-runtime-security-system-card-audit-comment-and-control-2026https://thehackernews.com/2026/05/we-scanned-1-million-exposed-ai.htmlhttps://www.euronews.com/next/2026/05/05/cybercriminals-gave-ai-a-go-and-came-away-disappointed-study-findshttps://www.bleepingcomputer.com/news/security/learning-from-the-vercel-breach-shadow-ai-and-oauth-sprawl/https://azat.tv/en/google-cosmo-ai-leak-privacy-safety/https://www.wiz.io/blog/github-rce-vulnerability-cve-2026-3854

In this episode for April 30, 2026, Jeremy breaks down a week where the "human-in-the-loop" failed spectacularly. From a production environment deleted in just nine seconds to "Abliterated" models providing kidnapping instructions to Congress, the risks of autonomous AI agents are no longer theoretical. They are live.Key Episode Highlights:Abliterated Models on Capitol Hill: OpenAI and Anthropic briefed House lawmakers on "abliterated" models - versions with safety guardrails stripped - demonstrating how they can provide step-by-step instructions for criminal acts.Entra ID Hijacking: Researchers at Silverfort discovered that the new "Agent ID" role in Microsoft Entra ID can be exploited to hijack service principals, leading to a full Global Admin takeover.The 9-Second Disaster: An AI agent at PocketOS, attempting to fix a staging environment, fetched production credentials and deleted both the production environment and its backups in under ten seconds.LiteLLM SQL Injection: A critical vulnerability in the LiteLLM gateway saw targeted exploitation within 36 hours of disclosure, specifically aiming for provider API keys.Vercel Breach Update: The recent Vercel data breach is traced back to a "Luma Stealer" malware infection at a third-party AI analytics partner.Episode Linkshttps://www.politico.com/news/2026/04/22/ai-chatbots-jailbreak-safety-00887869https://security.googleblog.com/2026/04/ai-threats-in-wild-current-state-of.htmlhttps://www.microsoft.com/en-us/security/blog/2026/04/06/ai-enabled-device-code-phishing-campaign-april-2026/https://hackread.com/microsoft-entra-agent-id-flaw-tenant-takeover/https://www.bleepingcomputer.com/news/security/hackers-are-exploiting-a-critical-litellm-pre-auth-sqli-flaw/https://www.cbsnews.com/news/anthropic-investigates-mythos-ai-breach/https://thehackernews.com/2026/04/vercel-breach-tied-to-context-ai-hack.htmlhttps://x.com/lifeof_jer/status/2048103471019434248Is your organization part of the 82% with unknown AI agents running on your network? Don't wait for a "9-second deletion" event. Get full visibility into your AI agents today. Book your FireTail demo: https://www.firetail.ai/schedule-your-demo

In this episode for April 23, 2026, Jeremy explores a week where "first principles" in security are being forgotten in the rush to adopt AI. From guessable API endpoints exposing Anthropic’s most powerful model to a $10,000 fine for a lawyer’s AI "slop," the message of the week is clear: There is no AI without API security.Key Stories & Developments:The Mythos API Leak: Unauthorized actors gained access to Anthropic’s Claude Mythos model by simply guessing API naming conventions. This classic case of Broken Function Level Authorization highlights a major oversight in the rollout of sensitive models.Shadow AI Agents: A new survey from the Cloud Security Alliance reveals that 82% of enterprises have unknown AI agents operating without security oversight.The $10K Hallucination: An Oregon lawyer was fined $10,000 for "AI slop" in court filings, setting a firm legal precedent that AI error does not excuse professional negligence.MCP Design Flaws: The Model Context Protocol (MCP), designed to wrap APIs in human language, is proving vulnerable to coercion. Attackers are using human language requests to probe back-end systems through NGINX."Logjack": New research into "Logjack" shows how malicious prompts hidden in system logs can compromise the LLMs used to analyze them.Meta Keystroke Capturing: Reports indicate Meta is capturing employee keystrokes to refine internal AI training sets, raising massive concerns about insider risk and password exfiltration.Shadow AI agents are the new Shadow IT. Are you part of the 82% with zero visibility into your AI agents? Discover every agent and API connection in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demoEpisode Linkshttps://www.inc.com/kevin-haynes/faulty-ai-leads-to-record-10000-fine-for-oregon-lawyer/91322007https://www.nytimes.com/2026/04/17/us/oregon-winery-ai-legal-fight.htmlhttps://techcrunch.com/2026/04/21/meta-will-record-employees-keystrokes-and-use-it-to-train-its-ai-models/https://cloudsecurityalliance.org/press-releases/2026/04/21/new-cloud-security-alliance-survey-reveals-82-of-enterprises-have-unknown-ai-agents-in-their-environmentshttps://techcrunch.com/2026/04/20/app-host-vercel-confirms-security-incident-says-customer-data-was-stolen-via-breach-at-context-ai/https://www.securityweek.com/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/https://www.theregister.com/2026/04/16/anthropic_mcp_design_flaw/https://www.darkreading.com/application-security/critical-mcp-integration-flaw-nginx-riskhttps://www.helpnetsecurity.com/2026/04/16/llm-router-security-risk-agent-commands/https://oddguan.com/blog/comment-and-control-prompt-injection-credential-theft-claude-code-gemini-cli-github-copilot/https://arxiv.org/abs/2604.15368https://venturebeat.com/security/microsoft-salesforce-copilot-agentforce-prompt-injection-cve-agent-remediation-playbookhttps://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontierhttps://www.darkreading.com/vulnerabilities-threats/every-old-vulnerability-ai-vulnerabilityhttps://www.theregister.com/2026/04/20/lovable_denies_data_leak/

This week, Jeremy breaks down a sophisticated bypass of Apple Intelligence and explores a hardware-level GPU threat that turns "vandalism" into full system takeovers. We also look at the massive data fallout from the Mercor supply chain breach and why "Claude Mythos" is officially ending the era of slow vulnerability management.Key Stories & Developments:NeuralExec vs. Apple: Researchers reveal a 76% success rate in bypassing Apple Intelligence safety filters using Right-to-Left (RTL) Unicode overrides.The 4TB Mercor Leak: The fallout from the LiteLLM supply chain attack is confirmed: 4 terabytes of data stolen, leading Meta to pause contracts and OpenAI to investigate exposure.GPU-Breach: A new technique from the University of Toronto moves beyond "bit-flipping" to gain God-mode over GPU memory, threatening cryptographic secrets.Secret Sprawl Explosion: GitGuardian reports a 34% jump in exposed secrets, with AI service credentials (like OpenRouter and Google API keys) being the fastest-growing category.The Death of the Patch Cycle: "Claude Mythos" has flipped the script—99% of its AI-discovered zero-days are now valid, forcing a realization that this is no longer an AI security problem, but a high-speed vulnerability management crisis.Episode Linkshttps://9to5mac.com/2026/04/09/researchers-detail-how-a-prompt-injection-attack-bypassed-apple-intelligence-protections/https://securityboulevard.com/2026/04/bypassing-llm-supervisor-agents-through-indirect-prompt-injection/https://cybersecurityjournal.ca/techtalk/83883-flowise-cve-2025-59528-rce-exploitation-ai-agent-builder-2026-04-08/https://cyberscoop.com/grafanaghost-grafana-prompt-injection-vulnerability-data-exfiltration/https://techcrunch.com/2026/04/09/after-data-breach-10b-valued-startup-mercor-is-having-a-month/https://www.helpnetsecurity.com/2026/04/14/gitguardian-ai-agents-credentials-leak/https://securityaffairs.com/190455/security/gpubreach-exploit-uses-gpu-memory-bit-flips-to-achieve-full-system-takeover.htmlhttps://aisle.com/blog/system-over-model-zero-day-discovery-at-the-jagged-frontierhttps://openai.com/index/scaling-trusted-access-for-cyber-defense/https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-previewhttps://labs.cloudsecurityalliance.org/wp-content/uploads/2026/04/mythosready.pdfhttps://www.businessinsider.com/andon-market-luna-ai-agent-managed-store-san-francisco-2026-4#Worried about AI security?Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

In this episode for April 9, 2026, Jeremy covers a week dominated by highly sophisticated supply chain attacks and the emergence of "Project Glasswing", an internal Anthropic project revealing that next-gen AI models may be "too good" at finding zero-day vulnerabilities.Key Stories & Developments:The FBI's IC3 Report: For the first time in 25 years, the FBI has specifically categorized AI-enabled fraud, which accounted for $893 million in losses across BEC, romance, and investment scams.Ollama Exposure Spikes: A Shodan scan reveals that publicly exposed Ollama instances have jumped from 1,100 in September 2025 to over 25,000 in April 2026.Critical Infrastructure CVEs: Both MLflow and PraisonAI received maximum CVSS scores of 10.0 for flaws allowing unauthenticated code execution and command injection.The Axios Supply Chain Heist: In a sophisticated "long con," threat actors (Team PCP) spent weeks building rapport with the Axios project maintainer via a fake Slack workspace. They eventually lured the maintainer into downloading malware, allowing them to inject a Remote Access Trojan (RAT) into a package installed 600,000 times.Project Glasswing (Claude Mythos): Leaked documents from Anthropic describe Claude Mythos, a model family with terrifying cybersecurity capabilities. Mythos discovered a 27-year-old bug predating GitHub; currently, 99% of the zero-days it has identified remain unpatched, leading to internal concerns about a controlled rollout.Vertex AI Permission Flaw: Unit 42 discovered a flaw in Google Cloud’s Vertex AI that could allow AI agents to bypass security boundaries and access sensitive data.Episode Linkshttps://securityboulevard.com/2026/04/cyber-fraud-cost-americans-17-billion-in-2025-ai-scams-make-list-fbi/https://insecurestack.substack.com/p/eus-exposed-ai-infrastructurehttps://securityonline.info/weekly-vulnerability-digest-april-2026-chrome-zero-day-ai-security/https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.htmlhttps://fortune.com/2026/04/02/mercor-ai-startup-security-incident-10-billion/https://www.sans.org/blog/what-we-learned-axios-npm-supply-chain-compromise-emergency-briefinghttps://techcrunch.com/2026/04/06/north-koreas-hijack-of-one-of-the-webs-most-used-open-source-projects-was-likely-weeks-in-the-making/https://thehackernews.com/2026/04/flowise-ai-agent-builder-under-active.htmlhttps://www.securityweek.com/anthropic-unveils-claude-mythos-a-cybersecurity-breakthrough-that-could-also-supercharge-attacks/https://www.staffingindustry.com/news/global-daily-news/mercor-reports-data-breachhttps://red.anthropic.com/2026/mythos-preview/Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

In this annual recap from the sidelines of RSAC 2026, Jeremy is joined by Joseph Carson, Chief Security Evangelist at Segura. They discuss a conference floor that felt more like an AI event than a cybersecurity one, exploring the convergence of agentic AI and identity security. Joseph shares critical insights from the Estonia "Digital Nation" playbook, the growing risk of non-human identities, and why organizations must move from "hope as a strategy" to a proactive resiliency model that assumes physical and digital disruption.Key Episode Highlights:The AI Convergence: Joseph and Jeremy observe that AI has become the "fuel to the fire" for cybersecurity. While AI helps defenders move at the pace of attackers, it requires rigorous guardrails like least privilege and security by design to be successful.Identity of the Machine: A major theme of the conference was non-human identities. Joseph argues that AI agents should never use human credentials but should instead rely on ephemeral, just-in-time (JIT) keys to maintain accountability and limit the blast radius.Estonia’s Resiliency Playbook: Joseph details how Estonia transitioned from a target of cyber war to a resilient digital nation. He highlights the use of "Data Embassies"—storing sovereign data in geographically distributed, diplomatically protected locations—to ensure the country can "reboot" even after a total local failure.Beyond Cybersecurity to Physical Impacts: The discussion shifts to how attackers are reverting to "cheap" physical disruptions like GPS jamming and cutting undersea data cables when digital defenses become too strong.The "Luck" Trap: Referencing the famous Maersk ransomware recovery, Joseph warns that finding a single surviving backup by chance is not a strategy. Organizations must simulate worst-case scenarios, including the loss of their identity provider (IdP) or primary cloud vendor.About JosephJoseph Carson is Chief Security Evangelist and Advisory CISO at Segura, where he helps organizations worldwide strengthen identity security and build resilient cyber defense strategies. An award-winning cybersecurity leader with more than three decades of experience, Joe has advised governments, critical infrastructure, and global enterprises. He is the author of Cybersecurity for Dummies, read by over 50,000 professionals, and a regular contributor to leading outlets including The Wall Street Journal and Dark Reading. Joe also hosts the podcast Security by Default and is a frequent keynote speaker on identity and AI-driven threats.Episode LinksSecurity by Default Podcast: https://open.spotify.com/show/0mzN5M5CkFVLn8fq5TnH0OJoseph on LinkedIn: https://www.linkedin.com/in/josephcarson/Segura Website: https://segura.security/

In this episode of This Week in AI Security for April 2, 2026, Jeremy discusses a "perfect storm" for offensive cyber operations. As AI begins to discover vulnerabilities in legacy software faster than humans can patch them, regulators are sounding the alarm on the "intolerable risks" of AI-generated code.Key Stories & Developments:The AI-Generated Vulnerability Surge: Georgia Tech’s Vibe Security Radar tracked 35 CVEs in March 2026 alone that were directly attributable to AI-generated code, a sharp increase from just 6 in January.NCSC Warning: Richard Horne, head of the UK’s National Cyber Security Centre, warned at RSAC that "vibe coding" currently presents "intolerable risks" for most organizations as software volume is on track to double every 42 months.Langflow RCE Exploited: CISA has added a critical unauthenticated remote code execution (RCE) flaw in Langflow to its Known Exploited Vulnerabilities catalog."MAD" Bugs in Legacy Tools: The "Month of AI Discovered Bugs" initiative utilized LLMs to find critical zero-day RCE vulnerabilities in decades-old tools like Vim and GNU Emacs.The Claude Mythos Leak: Anthropic confirmed a major leak of unpublished assets related to its next-generation model, Claude Mythos, following a content management system misconfiguration.Offensive AI Multiplier: Hacker crew Team PCP claimed in Forbes that they are using AI-powered automated agents to turbocharge attacks on developer tools and repository infrastructures.Episode Linkshttps://www.forbes.com/sites/ronschmelzer/2026/03/27/major-security-breach-of-critical-ai-dependency-exposes-cloud-secrets/https://threatprotect.qualys.com/2026/03/26/cisa-added-langflow-vulnerability-to-its-known-exploited-vulnerabilities-catalog-cve-2026-33017/https://siliconangle.com/2026/03/30/openai-codex-vulnerability-enabled-github-token-theft-via-command-injection-report-finds/https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/https://www.itpro.com/security/ncsc-warns-vibe-coding-poses-a-major-riskhttps://www.forbes.com/sites/thomasbrewster/2026/03/26/hackers-launch-devastating-attacks-on-ai-devs/https://markaicode.com/prompt-injection-attacks-ai-security-2026/https://cyberscoop.com/ai-cyberattacks-two-years-insane-vulnerabilities-kevin-mandia-alex-stamos-morgan-adamski-rsac-2026/https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/https://cyberwebspider.com/cyber-security-news/ai-critical-rce-flaws-vim-emacs/Worried about AI security?Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

In the latest episode of This Week in AI Security, Jeremy reports live from the sidelines of RSA in San Francisco. The week is defined by "gullible" AI agents, legal precedents for chatbot liability, and a massive supply chain attack targeting the tools developers use to build AI applications.Key Stories & Developments:The "Minion" Problem: Zenity researchers demonstrated zero-click exploits against Cursor, Salesforce Einstein, ChatGPT, and Copilot, arguing that prompt injection should be reframed as "persuasion" vectors that turn agents into malicious minions.The $10M Discount Fabrication: A red teaming analysis of over 50 customer-facing AI agents found that "persuading" chatbots could lead to the fabrication of $10 million in unauthorized service discounts and commitments.Legal Precedent, Air Canada Liable: The British Columbia Civil Resolution Tribunal ruled that Air Canada is legally liable for the incorrect advice given by its chatbot, setting a major precedent for corporate AI accountability.Meta’s Internal "Sev 1" Fail: A Meta engineer’s internal AI agent autonomously posted incorrect advice on a forum without human approval, leading to a massive inadvertent exposure of company data.LLM Fingerprinting: New academic research shows that attackers can now fingerprint which specific LLM is in use by observing traffic patterns, allowing them to target the specific vulnerabilities (like the "Grandma" exploit) unique to that model.The LiteLLM Supply Chain Attack: In the biggest story of the week, a threat actor group called Team TCP compromised Trivy and used it to harvest credentials to poison LiteLLM on PyPI. Malicious versions (downloaded millions of times daily) were live for three hours, delivering a Kubernetes worm and credential harvester.Episode Linkshttps://www.theregister.com/2026/03/23/pwning_everyones_ai_agents/https://cybercory.com/2026/03/19/claudy-day-exposes-hidden-risks-prompt-injection-flaw-in-claude-ai-enables-silent-data-exfiltration/https://www.generalanalysis.com/blog/adversarial_analysis_customer_service_agentshttps://www.cve.org/CVERecord?id=CVE-2026-33068https://medium.com/@cbchhaya/making-prompt-injection-harder-against-ai-coding-agents-f4719c083a5chttps://aiautomationglobal.com/blog/ransomware-ai-agents-enterprise-cybersecurity-2026https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/https://arxiv.org/html/2510.07176v1https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-knowhttps://securityboulevard.com/2026/03/colorado-moves-to-revise-its-landmark-ai-law-after-industry-pushback/https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/

In this episode of Modern Cyber, Jeremy sits down with Ann Dunkin, former CIO of the U.S. Department of Energy, to discuss the critical infrastructure that powers our digital lives. As data centers and AI drive unprecedented demand on the energy grid,Ann explains why "aging infrastructure" isn't always the biggest cyber risk, how the U.S. grid is actually structured (including the isolation of Texas), and why security leaders must move from "check-the-box" compliance to active risk management.Key Episode Highlights:The AI Power Surge: For decades, grid demand was flat; now, AI and data centers are driving a massive growth in load that the aging infrastructure was never designed to handle.The "Air Gap" Myth: While older nuclear plants are safely analog, modern grid vulnerabilities live in the "two-way" traffic of IoT devices and smart meters that were never meant to be internet-connected.Nation-State Threats: The primary concern for grid security is a nation-state actor gaining a foothold to cause long-term, physically destructive disruptions as a prelude to kinetic war.Compliance vs. Risk: Ann shares her experience in the Biden-Harris administration, emphasizing that "table stakes" compliance isn't enough—leaders must use risk registers and tabletop exercises to educate boards on true threats.About AnnAnn Dunkin is an External Fellow and Distinguished Professor of the Practice at the Georgia Institute of Technology. She is also the CEO of Dunkin Global Advisors, providing strategic business advice to companies of all sizes as well as fractional CIO services. She serves as an independent director on the governing board of Global Interconnection group and the advisory boards for Bowtie Security, Openpolicy and CGAI.Episode LinksAnn Dunkin at Georgia Tech: https://research.gatech.edu/people/ann-dunkinAnn Dunkin on LinkedIn: https://www.linkedin.com/in/anndunkin/