CyberWire Daily: “AI's Impact on Business” [CISO Perspectives]
Date: December 2, 2025
Host: Kim Jones (A) — N2K Networks
Guest: Eric Nagle (B) — Security exec, patent attorney, & AI governance leader
Overview:
This episode of CISO Perspectives, usually exclusive to pro subscribers, is unlocked for all listeners. Host Kim Jones sits down with Eric Nagle—a leader in operationalizing generative AI—to dissect AI’s evolving influence on business, especially as it relates to cybersecurity, risk management, legal compliance, and operational governance. The discussion moves from foundational definitions to the practical, ethical, and regulatory challenges of deploying AI in the enterprise, highlighting lessons learned, persistent pitfalls, and actionable strategies.
Key Discussion Points & Insights
1. Initial Reactions to AI in Business
- Kim recounts a 2018 experience where a CEO declared AI the coming revolution, spurring an internal scramble to “add AI” without strategic depth ([00:11–05:25]).
- Early focus was superficial—just “tweaking operational plans and adding the term AI.”
- Kim’s own list of strategic questions (build vs. buy AI, data normalization, compliance, new threat vectors, security evaluation of AI tools) was initially dismissed as irrelevant ([05:25–08:00]).
- Four years later, the organization had to confront each of those questions in earnest:
“My peers… now found themselves scrambling to address [these] questions and so many more as the organization surged to capitalize on AI’s advantages…” (A, [05:52])
2. Defining Classic Machine Learning vs. Generative AI
- Classic AI/ML: Deterministic, pattern-detection models yield same output for same input ([09:52–11:40]; [10:21] B).
- Generative AI:
- “With generative AI, there’s a randomness component. …You’ll get slightly different answers every time you ask the same question, even if in quick succession.” (B, [11:20])
- Enables natural language interaction; these models are “regurgitation engines”—they predict the next word in a sequence to generate coherent, human-like text ([12:09–14:35]).
- Examples include ChatGPT, Google Gemini, Anthropic Claude.
3. Capabilities & Limitations of Generative AI
- Not True Reasoning:
“…It’s getting better at reasoning…but it’s really performing vector math. …It approximates [human reasoning] with ever better clarity, if that makes any sense.” (B, [15:08])
- Business Implications:
- The tech is powerful, but deploying it quickly (‘ready, fire, aim’) can amplify risk ([16:18] A).
- Bias, hallucinations, and data leakage are real and present dangers ([16:18–17:17]).
4. Managing AI Risk: Bias, Hallucinations & Security Controls
- Importance of “responsible AI”: build safeguards to prevent AI from offending, breaking laws, or harming reputationally ([17:17] B).
- AI Firewall Concept:
- Nagle describes building one of the first AI firewalls:
- Prompts go through ML modules (e.g., anti-bias); biased or unsafe prompts/responses are blocked or rewritten ([17:53–20:05]).
- “Think of it as a two way firewall…”
- Constant retraining, code detection, and prompt analysis are required due to things like prompt injection and even oddities like emojis.
- Nagle describes building one of the first AI firewalls:
- Off-the-shelf solutions now exist for smaller orgs (= not just a Fortune 400 privilege) ([20:40]).
5. Actionable AI Risk Management for Businesses
- Top 3 Guidance for AI Adoption: ([22:24–25:17] B)
- Use AI for what it's good at, avoid known weaknesses.
- Constrain the chatbot’s scope:
“Unbounded chatbots are not considered very useful. They’re much more likely to come back with off topic responses.”
- Enhance with supplementary tools: Mitigate hallucinations, monitor input/output, layer additional protection.
- Real-World Example:
- “If you’re in the air conditioning business…ask the person to identify three things…that will…optimize our scheduling…” ([24:20] B).
6. Cybersecurity Threats of Unbounded AI
- Main concern: Data Loss ([27:32]).
- Using public AI tools can mean data is used for model training.
- By contract, secured, standalone models are ideal for sensitive data.
- Dangers include prompt injection, leaking model weights, unexpected outputs.
- AI Firewall is vital for safeguarding enterprise data.
7. AI for Coding: Will Developers Be Replaced?
- Claim: “AI can code 80% as well as an entry-level engineer.”
- Eric’s take:
“I think it’s actually quite good for doing prototypes…[but] it’s a little bit overblown to basically say it’s going to replace all of our entry-level engineers…” ([29:57]–[30:40] B)
- The new required skill: Reviewing, not merely writing code; “lazy engineers produce lazy code” ([31:14–32:38]).
- Tools like CodeWhisperer and Cursor allow near-coding by non-coders, but debugging, review, and responsibility remain essential.
8. Regulatory & Legal Implications
- US lacks unified AI law; states like California (privacy) and Colorado (AI regulation) are leading—sometimes problematically (ambiguous, hard-to-interpret laws) ([34:01–34:42]).
- “We pick the most restrictive interpretation and code to that…”
- EU’s AI Act is more prescriptive—regulates high-risk cases, allows low-risk use cases more freely ([35:25]).
9. AI Governance: Building Effective Oversight
- Lesson learned: Don’t wait until “the horse has left the barn”—early, risk-based governance is essential ([37:14–39:00] B).
- Centralized, “single path” processes (“paved road”) simplify transparency, observability, and security.
- Success depends on executive will and business alignment, not just technical prowess.
Notable Quotes
- On organizational shortsightedness:
“Most…seemed focused on tweaking their operational plans and adding the term AI to existing initiatives versus looking at the broader questions presented by an AI driven future.” — Kim Jones ([04:40])
- On generative AI’s unpredictability:
“With generative AI…you’ll get slightly different answers every time you ask the same question…” — Eric Nagle ([11:20])
- On human vs. machine reasoning:
“…It’s getting better at reasoning…but it’s really performing vector math…approximate[s] it with ever better clarity.” — Eric Nagle ([15:08])
- On bias and hallucination risks:
“You can actually make the model hallucinate if you…pass Python code or other code as part of your prompt.” — Eric Nagle ([18:50])
- On using AI coding assistants:
“We had to train people to basically say: this is your code, you are responsible for whatever you check in.” — Eric Nagle ([31:35])
- On governance best practices:
“The biggest thing they wish they had was a single path that all their business units were forced to use…” — Eric Nagle ([38:20])
Time-stamped Highlights
| Timestamp | Topic / Quote | |-----------|-----------------------------------------------------------------------------------------------------------------------------| | 00:11 | Intro: Urgency around managing AI’s impact on business | | 04:40 | “I think we’re having the wrong conversation…” – Kim (on early AI planning missteps) | | 09:52 | Difference between traditional ML and generative AI | | 11:20 | Generative AI is random, not deterministic | | 14:35 | LLMs work as “regurgitation engines,” not true analysts | | 15:08 | AI reasoning is “vector math,” approximating—but not matching—human thought | | 17:17 | Building an AI “firewall” to mitigate bias and hallucinations | | 20:40 | How smaller shops can access AI risk controls | | 22:24 | Top 3 AI risk mitigations for all companies | | 27:32 | Security risks: Data loss, prompt injection, leakage through public models | | 29:57 | AI coding ability vs. entry-level engineers | | 31:14 | New developer responsibilities: Review, accountability | | 34:01 | Patchwork legal landscape in the U.S.; state regulations and ambiguities | | 35:25 | EU’s AI Act – rules by risk category | | 37:14 | AI governance: risk-based approach & “single path” for visibility and security | | 38:20 | The value of a “paved road” model for all business units |
Conclusion
AI continues to revolutionize business, but its hasty deployment can create new security, compliance, and ethical risks. Eric Nagle’s journey highlights the importance of risk-based, proactive design—building AI governance, technical controls (like “AI firewalls”), and realistic expectations about what AI can (and can’t) do. Both technical and policy environments are evolving fast; CISOs and business leaders must stay agile and vigilant, investing in responsible, layered controls and adapting as regulation and threat landscapes change.
For further reading/resources: Visit the CISO Perspectives page for the episode’s blog post and additional materials.
