KBKAST Podcast Summary
Episode: From Elastic{ON} Sydney 2026 – KB On The Go | Mandy Andress
Date: March 27, 2026
Host: KBI.Media (KB)
Guest: Mandy Andress, Chief Information Security Officer at Elastic
Episode Overview
This episode, recorded live at Elastic{ON} Sydney 2026, explores the evolving landscape of AI and security at a strategic level, focusing on AI adoption, risk optimization, and the blurred lines between observability and security. Host KB engages Mandy Andress in a wide-ranging discussion about the challenges facing CISOs, the rapid transition to agentic AI, sector-led oversight in Australia, the persistent identity problem, and the shifting dynamics between attackers and defenders in the era of AI-driven cybersecurity.
Key Discussion Points & Insights
1. AI Oversight in Australia vs. US and EU
Australia’s “Middle Road” Approach
- Australia is described as sitting between the EU’s prescriptive AI legislative approach and the US’s laissez-faire model, opting for sector-led oversight with high-level guardrails instead of strict upfront regulation.
- Quote (Mandy, 03:09):
“Australia really sitting in the middle of the paradigms. The EU AI act being very prescriptive, being very ‘prove to me first that everything is safe and secure before you can use it’. The US is… here are some general standards, but organizations, it's on you to find the balance and manage the risks. And within Australia I find a good balance.”
Early Days and Rapid Change
- There is consensus that AI is still in its formative stages—approaches developed today may become outdated within months due to technology’s rapid evolution.
- Highly regulated industries (e.g., financial services) are struggling to adapt compliance frameworks to AI-driven operations.
2. CISO Concerns: Skepticism, Identity, and Control
Skepticism and Apprehension
- While there is recognition of AI’s benefits, there are serious concerns about technology maturity, lack of enterprise-grade controls (especially in agentic AI), and the risk of excessive access.
- Specific anxiety centers on MCP (Model Context Protocol) servers, which often lack granular access controls—leading to broad, risky permissions for agents.
Identity as 'The Control Plane'
- Identity and access management (IAM) is universally acknowledged as the biggest risk as organizations accelerate agent and AI adoption.
- The proliferation of agent identities—often with overbroad permissions—runs the risk of enabling unpredictable and potentially disruptive agent actions.
- Quote (Mandy, 10:08):
"For me, it's identity. Identity is… the control plane of AI. It's the control plane of agentic AI. It is where threat actors are focusing. Because we don't do identity well today."
Least Privilege and Zero Trust Principles
- The conversation reiterates the need to return to classic security fundamentals—least privilege, zero trust—tailoring these to suit the scale and complexity of agentic AI.
- Granting agents only the minimal required privileges is crucial to prevent unintended or malicious behavior.
- Quote (Mandy, 11:55):
"If we do that in an AI and agentic world, we could create some very significant challenges."
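The least-privilege principle discussed above can be sketched as a deny-by-default tool gate. This is a minimal, hypothetical illustration—the agent IDs, tool names, and scopes are invented for the example, not drawn from any framework mentioned in the episode:

```python
# Hypothetical sketch of deny-by-default, least-privilege tool access for
# agents. Agent IDs, tool names, and scopes are illustrative only.

ALLOWED_SCOPES = {
    "triage-agent": {"read_alerts", "summarize"},      # read-only duties
    "patch-agent": {"read_inventory", "open_ticket"},  # cannot patch directly
}

def invoke_tool(agent_id: str, tool: str) -> str:
    """Allow a tool call only if it is in the agent's explicit scope."""
    scopes = ALLOWED_SCOPES.get(agent_id, set())  # unknown agents get nothing
    if tool not in scopes:
        raise PermissionError(f"{agent_id} is not permitted to call {tool}")
    return f"{tool} executed for {agent_id}"
```

Denying by default means a new or compromised agent identity has no reach until someone deliberately grants it—the zero-trust posture the conversation keeps returning to.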
3. Operational Friction: Controls Slow Down AI Adoption
Balancing Controls and Innovation
- Overly granular, manual controls can hinder the speed and agility that AI is meant to deliver.
- For legacy systems without robust documentation or ownership, tracing, reverse engineering, and “human in the loop” checks are necessary—albeit resource-intensive and often imperfect.
- Quote (Mandy, 13:02):
"Welcome to the conundrum of the world of AI and security organizations. I say yes to all of that."
Agents Managing Agents
- Some organizations are creating layers of oversight by building supervisory "manager" agents to audit and control autonomous agent operations (14:46).
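The supervisory pattern above can be sketched as a policy gate that workers must pass through before acting. The risk thresholds and action names here are illustrative assumptions, not a design described in the episode:

```python
# Hypothetical sketch of a supervisory "manager" agent: worker agents propose
# actions, and the supervisor approves, escalates, or blocks them before
# execution. Thresholds and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    agent_id: str
    action: str
    risk_score: float  # 0.0 (benign) .. 1.0 (destructive), assigned upstream

def supervise(proposal: ProposedAction, audit_log: list) -> str:
    """Approve low-risk actions, route mid-risk ones to a human, block the rest."""
    if proposal.risk_score < 0.3:
        verdict = "approved"
    elif proposal.risk_score < 0.7:
        verdict = "needs-human-review"  # the "human in the loop" check
    else:
        verdict = "blocked"
    audit_log.append((proposal.agent_id, proposal.action, verdict))  # audit trail
    return verdict
```

The appended audit trail is the point: even approved actions leave a record a manager agent (or human) can review later.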
4. Human Behavior, AI Dependency, and Productivity
AI as a Copilot, Not Replacement
- The increased scale and pace of data and alerts overwhelm human capacity, making AI-driven summarization, filtering, and prioritization essential in security operations.
- Mandy draws an analogy to automobiles: few people understand the mechanics anymore—the focus is on use, not on “how it works.”
- Quote (Mandy, 16:09):
"We are now operating at a scale and a speed beyond human capacity."
Productivity and Summarization
- AI-driven summarization is embraced to help humans cope with attention fatigue and information overload.
- Quote (Mandy, 18:37):
"I actually think using AI in that way will be helpful… it's able to give us a better summary. It's able to potentially pull out the key messages instead of us just skimming it as humans."
5. The Evolving Role of the CISO
CISO as AI Regulator?
- In the absence of prescriptive national legislation, the CISO, together with Legal and IT, collectively acts as the internal policy and regulatory framework for AI governance.
Expanding Scope and Stress
- Mandates on CISOs keep expanding, but the “50,000-foot view” remains: focus on core security fundamentals and build robust, transparent inventory for assets—including agents.
- Quote (Mandy, 20:39):
“Agents are now assets. How do you know what agents you have in your organization?”
Old Fundamentals Meet New Challenges
- Patch management, asset discovery, and visibility remain difficult, but the introduction of AI may help prioritize and manage these legacy pain points more effectively.
- Quote (Mandy, 23:04):
"AI is very good at that. Much better than humans… at going through and trying to find what are those [attack] paths..."
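The "agents are now assets" inventory question raised in this section can be sketched as a minimal registry, recorded the way a CMDB records hosts. The field names are illustrative assumptions, not a specific product's schema:

```python
# Hypothetical sketch of an agent inventory: treating agents as assets with
# an accountable owner, a purpose, and granted scopes.

agents: dict[str, dict] = {}

def register_agent(agent_id: str, owner: str, purpose: str, scopes: list) -> None:
    """Record an agent the same way an asset inventory records a host."""
    agents[agent_id] = {"owner": owner, "purpose": purpose, "scopes": set(scopes)}

def unowned_agents() -> list:
    """Flag agents with no accountable owner, the classic inventory gap."""
    return [a for a, meta in agents.items() if not meta["owner"]]
```

Even this trivial structure answers "what agents do we have, and who owns them?"—the visibility question the discussion frames as a prerequisite for everything else.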
6. Attackers & Defenders: Arms Race and Paradigm Shifts
Attackers' Early Lead with AI
- Threat actors adapt quickly, using AI to automate phishing, create self-morphing malware, and to fully “act as a threat actor” at machine speed.
- Short term: defenders are at a disadvantage; Mandy predicts it will get worse before it gets better.
- Quote (Mandy, 26:23):
"Threat actors are quickly learning how to expand their use and how they leverage AI technology into events and creating incidents."
Defenders’ Long-Term Edge with Context
- The future advantage will shift to defenders as AI empowers them with holistic, rapid, and autonomous context—a game-changer for detection and response.
- Quote (Mandy, 29:54):
"...we will finally be able to have that holistic contextual picture of our environment."
Timeline to the Turning Point
- Mandy estimates about 10 years before the industry will look back at today as "the dark ages" of security, where defenders will routinely outpace attackers using machine-driven context and automation.
- Quote (Mandy, 30:49):
"For me, it's more 10 years for [having] a strong contextual understanding."
Social Engineering Will Persist
- Even when technical attacks become less viable, attackers will pivot to social engineering and exploiting human behavior—ensuring security remains a dynamic field.
7. Global AI Governance: Risk, Speed, and National Variance
Trade-Offs: Speed vs. Safety
- Prescriptive models (like the EU) may inadvertently slow down innovation; lighter-touch approaches (like the US, and to some extent Australia) foster faster adoption but shift the risk appetite definition to industries and organizations.
- Quote (Mandy, 34:01):
"The key thing that's driving forward momentum to me is speed."
Is Australia’s Position Unique?
- While historically risk-averse, Australia is emerging as a rapid adopter of AI, fostering societal and industry resilience, and even taking international leadership on social/tech fronts.
- Quote (Mandy, 36:26):
“I have felt that shifting and I definitely see and feel today that Australia organizations are near the leading edge of both adopting AI and how to think about adopting AI.”
No Single “North Star” Country
- Due to complexity, there's unlikely to be a single model for others to follow; expect an evolving amalgamation of approaches, with adaptability and learning from mishaps as a baked-in necessity.
8. Outlook for 2026 and Beyond
Year of the Agent
- 2026 will be marked by experimentation with AI agents—especially in coding/development and personal productivity.
- Quote (Mandy, 40:31):
"2026: year of agents, where can we apply them? What can they do to help us? … A lot of experimentation, a lot of trial and error."
Accidents Will Happen—Guardrails Are Key
- Significant, unforeseen incidents are likely as organizations experiment, but resilience and anti-fragility will be the hallmarks of success.
- Quote (Mandy, 39:14):
“I do anticipate there will be some fairly significant events that happen because we can't necessarily anticipate the full either breadth of implementation or potential ramifications.”
Memorable Quotes & Timestamps
- On Agentic AI Risks:
“It's finding that right balance… technology solutions within the AI space are pretty immature… ensuring there's both guardrails on what the agent programmatically can do, but also guardrails… on what's the access that agent has.” – Mandy (01:28)
- On Identity Management:
“It is where threat actors are focusing. Because we don't do identity well today… we're going to have this exponential increase in the number of identities… We're creating a disaster for ourselves.” – Mandy (10:08)
- On AI and Human Capacity:
“We are now operating at a scale and a speed beyond human capacity. So even if we wanted to remember and we wanted to do things manually, we wouldn't be successful.” – Mandy (16:09)
- On Patch Management and AI:
“AI is very good at that. Much better than humans in going through and trying to find what are those paths that you could exploit and take advantage of.” – Mandy (23:04)
- Looking Forward:
“For me, it's more 10 years for [having] a strong contextual understanding. The way I talk about it is 10 years from now, I want to look back at today's time as the dark ages of security.” – Mandy (30:49)
Timestamps for Major Segments
- [01:04] – Framing Australia’s AI oversight approach
- [06:04] – CISO concerns and trends in AI adoption
- [10:08] – Identity as the control plane of agentic AI
- [16:09] – Human capacity vs. AI scale and productivity
- [19:42] – CISO, Legal, and IT as collective AI regulators
- [23:04] – Patch management, defense in depth, and AI-driven prioritization
- [26:23] – The arms race: attackers’ use of AI vs. defenders
- [30:49] – Timeline to defenders’ advantage via AI context
- [34:01] – Risk, speed, and global AI governance models
- [36:26] – Cultural shift: Australia on the AI adoption frontier
- [39:14] – Expecting, containing, and learning from incidents
- [40:31] – 2026 as the “Year of Agents”
Conclusion
This episode delivers a panoramic view of the strategic security challenges of agentic AI, highlighting Australian leadership, persistent foundational gaps, and the long-term trajectory towards machine-driven defense. Mandy Andress’s pragmatic, nuanced analysis—rich with analogies and candid reflections—offers rare clarity for security leaders grappling with the opportunities and uncertainties of an AI-powered future.
