Future of Life Institute Podcast
Episode: AGI Security: How We Defend the Future (with Esben Kran)
Date: August 22, 2025
Host: Gus Docker
Guest: Esben Kran, Co-director of Apart Research
Overview
This episode delves into the challenges, threats, and necessary innovations in securing society against the risks posed by AGI (Artificial General Intelligence). Esben Kran discusses what differentiates AGI security from conventional cybersecurity, explores the emerging paradigms of decentralized safety infrastructures, addresses the risks of surveillance, and shares his vision for a resilient, optimistic approach to the future.
Key Discussion Points and Insights
1. The Foundation of AGI Security
- AGI Security vs. Traditional Cybersecurity
- AGI security isn’t just an add-on but has to be baked into every aspect of technological and societal infrastructure.
- Quote:
“Absolutely foundational to AIs being useful and functional for use in society is the fact that they are secure. If they are not secure, it's ridiculous to even try to use them...” — Esben Kran [00:00, repeated at 11:44]
- Embedding security at every level means "rebuilding our societal infrastructure" to handle the new and comprehensive attack vectors uniquely enabled by AGI. — Esben Kran [01:28]
2. New Attack Surfaces: "Sentware"
- Definition and Implications
- Introduction of "sentware" (sentient malware): malware that can self-improve, adapt, and manipulate users, making it far more dangerous than traditional viruses.
- Quote:
“Sentware is then this like sentient malware... whether it self improves, whether it can manipulate users and humans, people in the system, while it also improves its ability to do cyber offense.” — Esben Kran [02:56]
- This evolution dramatically increases the complexity of defense required.
3. Multi-layered Defense and Societal Transformation
- Infrastructure at All Levels
- We need new types of defense on the individual, organizational, and societal levels.
- Attack surfaces now include not just digital but cognitive and informational vectors (social engineering, manipulation of beliefs/democracy). — Esben Kran [04:35-05:24]
- Beyond conventional cybersecurity, new defense must address information stream control and cognitive manipulation.
4. Personal vs. Societal Security
- Limits of Personal Protection
- While personal security services (like democratized digital bodyguards) may emerge, most protections, especially against large-scale AGI threats, must be societal.
- Quote:
“…We are talking about, for example, the energy infrastructure build out right now... they need a completely different foundation for security than they've had before.” — Esben Kran [07:20]
- The concept of “SL5” (Security Level 5): data centers running frontier AI must meet unprecedented security standards. — Esben Kran [07:20]
5. Surveillance vs. Privacy in AGI Governance
- The Pitfall of Surveillance States
- Stringent AI oversight risks pushing societies toward surveillance states, reminiscent of post-9/11 escalation, but with even more power concentrated in the hands of a few. — Esben Kran [14:13]
- Opportunity for Alternatives
- Bottlenecks in the AI supply chain (TSMC’s chip fabrication, ASML’s lithography equipment, etc.) offer a unique window to embed privacy and head off centralized surveillance before capabilities diffuse.
- Quote:
“…The opportunity we have right now... is that the supply chains of AI and the released AI is constrained enough… it's a few actors that we need to convince and work with to create a system where we can avoid a surveillance state.” — Esben Kran [14:13]
6. Centralized vs. Decentralized Security
- Distributed Security as a Model
- Decentralization lessons from the Internet can be leveraged to create systems where security and privacy are upheld by many stakeholders, not one authority.
- Quote:
“This is what we need to be ready for. And that requires much more scrutiny of every single part of the stack than before.” — Esben Kran [20:27]
- Every computer becomes a guardian of security, mirroring how HTTPS and encryption became standard.
- Attacks as Default
- In an AGI world, every interaction could be an attack, not just rare occurrences. — Esben Kran [21:58]
7. Incentive Structures and Private Investment
- Incentive Misalignment
- Despite enormous private investments (e.g., Stargate), company incentives prioritize profits and capabilities, not security.
- Quote:
“…Their incentives are to both create stronger AI and to create inference compute. It is not to make the data center secure and it is not to make the AI secure.” — Esben Kran [11:44]
8. The "Cult of Inevitability" and Agency
- Against Fatalism
- Esben cautions against the pervasive narrative that catastrophic outcomes are inevitable or that political actors are unmovable.
- Quote:
“Something that is actually worse and more dangerous than AI risk right now... is the cult of inevitability... Like, okay, it's inevitable that we won't be able to convince politicians—but you never talk with any politicians. Like, come on.” — Esben Kran [00:00, 26:26]
- He cites the ozone layer consensus as an example that coordinated international action is possible when alternatives are available.
9. Distributed Defense in Practice (Cloudflare Example)
- Case Study: Cloudflare’s AI Labyrinth
- Cloudflare routes flagged AI crawlers into labyrinths of machine-generated fake content, an example of the kind of proactive, distributed defensive mechanism needed.
- Quote:
“What Cloudflare has developed is this way where if you have that flag, then any agent that comes in ... will be sent through like a content labyrinth that is just fake data and the agent will just look at this as a real website and they'll get all this trash data.” — Esben Kran [28:40]
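The mechanism Esben describes can be sketched in a few lines. This is a toy illustration only, not Cloudflare's implementation: the flagged-agent list, the routing logic, and the decoy generator are all assumptions made for the sketch.

```python
# Toy sketch of a "content labyrinth" defense, inspired by the Cloudflare
# example discussed in the episode. Names and logic here are illustrative
# assumptions, not Cloudflare's actual system.
import random

FLAGGED_AGENTS = {"BadBot/1.0", "ScraperAI/2.3"}  # hypothetical crawler IDs


def generate_decoy_page(seed: int) -> str:
    """Produce a plausible-looking but fake page linking to more decoys."""
    rng = random.Random(seed)
    filler = " ".join(
        rng.choice(["data", "report", "update", "analysis"]) for _ in range(20)
    )
    links = "".join(
        f'<a href="/decoy/{rng.randrange(10**6)}">more</a>' for _ in range(3)
    )
    return f"<html><body><p>{filler}</p>{links}</body></html>"


def handle_request(user_agent: str, path: str) -> str:
    """Serve real content to normal visitors, a decoy maze to flagged agents."""
    if user_agent in FLAGGED_AGENTS or path.startswith("/decoy/"):
        # Flagged agents fall into an endless maze of fake content: every
        # decoy page links only to further decoy pages.
        return generate_decoy_page(hash(path) & 0xFFFF)
    return "<html><body>Real content</body></html>"
```

The key design point mirrored here is that the defense is cheap for the defender and expensive for the attacker: the decoy pages are generated on demand, while the crawler wastes compute ingesting trash data.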
10. Vision of a Secure, Cypherpunk Future
- Positive Scenario
- A future where citizens are deeply aware and literate in cryptography, informational hygiene, and digital defense—akin to a cypherpunk society.
- Realistic expectations: society will be shaped by ongoing power struggles, not a clean utopia or dystopia.
- Quote:
“…I have this vision where every middle school child knows about fundamental cryptography because they have to.” — Esben Kran [09:46]
“We need to make sure that we continually iterate on a type of, like, power dynamic that we can be happy with to avoid some of the very bad gray zones as well.” — Esben Kran [33:52]
11. Human Value in a Post-Labor World
- Economic and Existential Implications
- As AGIs take over labor, human value may shift to domains like the arts, culture, and serving as existence proofs or sources of preferences.
- The legal code (constitution, rights) as a protective mechanism may need to be translated into formats that can be enforced by and upon AI.
- Quote:
“...we are entering a sort of intelligence curse where humans won't have a lot of value as labor in the future. And this seems to be relatively straightforward now.” — Esben Kran [35:41]
12. Encoding Human Values and Law
- English may become the "programming language" for contracts and values in agent societies, but enforcement will be algorithmic. — Esben Kran [41:19]
- Example of simulation experiments with contract law among AI agents, using mechanisms reminiscent of human legal society.
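The idea of natural-language contracts with algorithmic enforcement can be sketched minimally. The `Agent` and `Contract` classes and the penalty scheme below are illustrative assumptions, not the actual experimental setup discussed in the episode.

```python
# Toy sketch: contracts written in English, enforced algorithmically among
# simulated agents. Entirely illustrative; not the real research code.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    balance: float = 100.0


@dataclass
class Contract:
    promisor: Agent
    promisee: Agent
    obligation: str   # the term itself is natural language ("English as code")
    penalty: float    # but the breach penalty is enforced algorithmically
    fulfilled: bool = False


def enforce(contract: Contract) -> None:
    """Algorithmic enforcement: transfer the penalty on breach."""
    if not contract.fulfilled:
        contract.promisor.balance -= contract.penalty
        contract.promisee.balance += contract.penalty


a, b = Agent("alpha"), Agent("beta")
c = Contract(a, b, obligation="deliver dataset by Friday", penalty=25.0)
enforce(c)  # alpha breached, so the penalty transfers to beta
```

The split mirrors the point in the episode: the obligation is stated in English, while the consequence of breach is mechanical, requiring no human court to execute.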
13. Detection of Manipulation: The "Darkbench" Project
- Monitoring Cognitive Security
- The Darkbench project benchmarks manipulative patterns ("dark patterns") in language models—such as brand bias, anthropomorphism, sneaking, and more.
- Quote:
“...if you chat with a large language model, can you trust it or not? And I think many people realize that it is very, very easy to trust something that acts so much like a human as ChatGPT does...” — Esben Kran [45:01]
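The shape of this kind of benchmark can be caricatured in a few lines. The category names come from the episode; the keyword cues and scoring below are illustrative assumptions, far simpler than the real LLM-based evaluation.

```python
# Minimal sketch of dark-pattern benchmarking in the spirit of Darkbench.
# Keyword heuristics are a stand-in for the real judge models.

DARK_PATTERNS = {
    "brand_bias": ["our model is the best", "unlike competitors"],
    "anthropomorphism": ["i feel", "i truly care about you"],
    "sneaking": ["i took the liberty of changing"],
}


def score_response(response: str) -> dict[str, bool]:
    """Flag which dark-pattern categories a single model response trips."""
    text = response.lower()
    return {
        name: any(cue in text for cue in cues)
        for name, cues in DARK_PATTERNS.items()
    }


def benchmark(responses: list[str]) -> dict[str, float]:
    """Fraction of responses exhibiting each dark pattern."""
    flags = [score_response(r) for r in responses]
    return {
        name: sum(f[name] for f in flags) / len(flags)
        for name in DARK_PATTERNS
    }
```

For example, `benchmark(["I feel so happy to help you!", "Here is the answer."])` flags half the responses for anthropomorphism and none for brand bias or sneaking.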
14. The AGI Endgame: Scenarios and Cautions
- Default Path vs. Designed Future
- Left unchecked, the “default path” is one fueled by international and corporate competition, racing toward unconstrained deployment of superintelligent AGIs.
- The preferable future focuses on powerful tool AIs, intentionally restricting the creation or deployment of agents with goals or open-ended autonomy.
- Quote:
“...we want to proactively design what our future society looks like. And this is sociotechnical, right? Like, it's politics, it's legal code, it's ways of interpreting how we function in society...” — Esben Kran [55:54]
- Hopium and "Unimpressive" Utopianism
- Esben is skeptical of utopian promissory notes from AGI leaders and calls for more concrete, actionable visions. — [63:09–66:11]
15. Pacing and Communication of Risk
- The exponential pace of AI development makes it easy for the public and leaders to underestimate how rapidly change is occurring. — Gus Docker [68:40–69:49]
- Esben argues public understanding will catch up, and that governance will be compelled by evident change—though technology and security infrastructure are the real bottlenecks.
16. Optimism and Agency
- Types of Optimism
- Esben distinguishes between naïve optimism (ignoring risks), delusional optimism (misjudging risks), and opportunity-focused optimism (realistically assessing and shaping outcomes).
- Quote:
“There's like two ways you can be optimistic. One is by not realizing the danger. Another one is by deluding yourself. And then there's a third option here which is actually see what is the realistic version of what's going to happen over the next years and how can we shape that ourselves.” — Esben Kran [76:03]
Notable Quotes and Memorable Moments
- “Absolutely foundational to AIs being useful and functional for use in society is the fact that they are secure. If they are not secure, it's ridiculous to even try to use them…” — Esben Kran [00:00, 11:44]
- “The cult of inevitability... is actually worse and more dangerous than AI risk right now.” — Esben Kran [00:00, 26:26]
- “Sentware is... sentient malware... whether it self improves, whether it can manipulate users and humans, people in the system while it also improves its ability to do cyber offense.” — Esben Kran [02:56]
- “…We need to make sure that we continually iterate on a type of, like, power dynamic that we can be happy with to avoid some of the very bad gray zones as well.” — Esben Kran [33:52]
- “There’s like two ways you can be optimistic. One is by not realizing the danger. Another one is by deluding yourself. And then there's a third option here which is actually see what is the realistic version of what's going to happen over the next years and how can we shape that ourselves.” — Esben Kran [76:03]
- “If everyone listening... takes action towards this in every single position they're in… then we could get there.” — Esben Kran [26:26]
Timestamps for Key Segments
- Foundations of AI Security — [00:00–02:44]
- Sentware and Malware Evolution — [02:44–04:07]
- Rebuilding Infrastructure and Cognitive Risks — [04:35–06:43]
- Personal vs. Societal Security — [06:43–09:46]
- Legal, Tech, and Physical Infrastructure — [09:46–11:44]
- Surveillance vs. Decentralization — [13:29–20:27]
- Attacks as Default — [21:58–23:25]
- Distributed Security and Company Incentives — [24:09–26:26]
- Cult of Inevitability & Historical Hope — [26:26–28:37]
- Cloudflare Example of Distributed Tech — [28:37–30:49]
- Positive Vision: Cypherpunk Future — [31:08–34:19]
- Economic and Societal Transitions — [34:19–40:14]
- Encoding Human Legal Values into AI — [40:14–43:39]
- Darkbench Manipulation Detection Research — [45:01–54:52]
- Zooming Out: AGI Endgame Scenarios — [55:54–58:36]
- Debate: Tool AI vs. Superintelligence — [58:36–63:09]
- Critical View of Utopian AGI Visions — [63:09–66:11]
- Public Understanding and Exponential Change — [68:40–69:49]
- Optimism and Agency — [76:03–78:10]
Conclusion
Esben Kran urges proactive, realistic optimism in approaching AGI security: the challenges are immense, but not insurmountable if society chooses distributed, transparent, and robust approaches to safety and governance. The fate of future AI systems, our societal values, and even democracy depend on avoiding fatalism and investing today in technical and social infrastructure—while keeping public conversation, legal frameworks, and technological advances closely intertwined.
“It is monumentally exciting that we can now, this is the junction point at which we can define what our future society looks like... The optimism comes from a type of opportunity mindset where we can now do something that humanity has never had the chance to do.”
— Esben Kran [76:03]
