80,000 Hours Podcast Summary
"AI designs genomes from scratch & outperforms virologists at lab work. What could go wrong?"
Guest: Dr. Richard Melange, CLTR
Host: Rob Wiblin
Date: March 31, 2026
Episode Overview
In this episode, host Rob Wiblin sits down with Dr. Richard Melange, AI Biosecurity Policy Manager at the Centre for Long Term Resilience (CLTR). The discussion dives deep into the rapidly advancing intersection of artificial intelligence and biology: specifically, how new AI models are already making it possible to design potent organisms and biochemical agents with capabilities beyond anything found in nature. The conversation covers recent AI-enabled experimental breakthroughs, the shifting landscape of biothreats, the state of current risk mitigations, and the promise and challenge of defensive biotech innovation.
Listeners are taken on a tour of cutting-edge empirical results, the real limits (and myths) about barriers to bioweapon misuse, how regulation and model access controls are being approached, and how society might differentially accelerate defenses to stay one step ahead of new existential risks.
Key Discussion Points and Insights
1. AI Has Crossed a Threshold in Biological Design
- Evo 2 Model: A Step Change at the AI-Biology Intersection ([01:26])
- Evo 2, a genomic language model developed by the Arc Institute, can generate novel genomes for bacteriophages (viruses that infect bacteria).
- Not only are these AI-generated genomes viable, but some outperform the best known natural bacteriophages.
- “This is huge because this is the first time an AI design of a genome has turned out to actually be novel ... And they work in the lab. And more than that, they functioned better than the best bacteriophages we already know.” – Richard Melange [03:32]
- Implications: We can now design small organisms that do things better than nature ever has, a preview of coming capabilities for more complex forms of life and potential misuse scenarios.
2. AI Evasion of Biosecurity Screening is No Longer Theoretical
- Ricin Experiment with Current Tools ([04:44])
- Microsoft red teamers used protein design tools to create modified ricin variants predicted to retain function while evading industry-standard safety screening.
- Gene synthesis companies failed to flag these as dangerous, as modifications obfuscated the threat.
- “They got them through because they’d modified it enough that the existing screening systems didn’t spot the change. ... This was, I think, as close as we’re going to get in an unclassified setting to proof that in fact, yes, modern systems can now do that.” – Richard Melange [06:44]
3. Tacit Knowledge is Less of a Barrier Than Thought
- Virology Capabilities Test (VCT) by SecureBio ([09:09])
- SecureBio’s VCT tests whether LLMs can answer high-level, tacit lab troubleshooting questions.
- Human virology experts averaged 22%. The latest models (as of early 2025) scored over 45%, roughly twice the expert score, even outperforming teams of experts.
- “This is huge because this put paid to the claim that tacit knowledge barriers would always and inevitably be something that could never be overcome.” – Richard Melange [13:26]
4. Intent, Not Just Capability, is a Key Constraint
- Historical Context ([18:20], [22:42])
- While state actors (e.g., the Soviet Union) have run active bioweapons programs, terrorist attempts have mostly failed due to technical errors, the kind of errors AI could now help correct.
- “Intent really is an important barrier ... but saying that intent is low is not to say intent is zero.” – Richard Melange [18:53]
5. Top Catastrophic Scenarios Enabled by AI
- Severe Catastrophe Pathways ([22:48])
- Respiratory Pandemic Viruses: Super-flus or pox viruses much deadlier than COVID-19.
- Mirror Biology (Mirror Bacteria): Mirror-image organisms that Earth’s immune systems have no defenses against; potentially an extinction-level threat.
- Disease X: Unanticipated, engineered threats beyond the current scientific imagination.
- “This must never happen. There are people calling for a global moratorium.” – Richard Melange on mirror biology [24:50]
6. Probability of Catastrophic AI Bio Event is Rising
- Expert Elicitation ([26:09])
- Forecast: 1–2% chance of an AI-enabled viral pandemic in 2026, with future risk rising rapidly.
- Tied to timelines for advanced, general AI capabilities.
7. Spectrum of Potential Misusers
- Actor Typology from CLTR’s 2024 Paper ([30:24])
- Novices: Little expertise/resources.
- Highly Capable Individuals: Deep specialists (e.g., Bruce Ivins).
- Somewhat/Moderately/Highly Capable Groups: Ranging from terrorist clusters to state programs.
- Risk uplift is greatest for mid-tier actors: those with real expertise but without the full capabilities of a historical state program.
- “We really said in 2024 ... the uplift really comes in the middle, roughly because the most highly capable groups already can do a lot of really terrible things.” – Richard Melange [33:46]
8. Access Controls, Guardrails, and Defensive Acceleration
- Three Response Categories ([70:10]):
- Managed Access: Restrict who can use advanced AI bio tools.
- Guardrails/Safeguards: Models refuse misuse requests.
- Defensive Acceleration (“Defac”): Rapidly advance biosecurity tech, e.g., universal vaccines and biosurveillance.
- “I think is an underexplored category that I’m particularly excited about ... It would be strange to not ... make sure that we can unlock the benefits.” – Richard Melange [75:52]
9. Guardrails on Open Models: A Losing Game?
- Data Filtration, Distillation, and Limits ([98:14])
- Current methods aren’t robust—safeguards can be easily fine-tuned away.
- As compute gets cheaper, even strong filtration may only buy a few years’ head start.
- “We are automating the ability to do AI research and AI engineering.” – Richard Melange [98:59]
10. Defensive Acceleration: Concrete Paths Forward
- Primary Recommendations ([103:16], [116:57])
- AI-enabled Metagenomic Biosurveillance: “Nucleic Acid Observatory”-style systems combined with AI to detect novel or anomalous sequences, anywhere and at any time.
- Attribution Technologies: Forensic techniques to link engineered pathogens to their makers—critical for deterrence.
- Broad-Spectrum Vaccines/Antivirals: Universal vaccines stockpiled in advance could dramatically limit the utility of engineered viruses.
- “If we can stockpile this in advance ... we can say no, if you try and engineer any flu against us, we are confident ... Society will continue functioning.” – Richard Melange [121:03]
- “I am very excited about built environment modifications ... where pathogens just can’t exist in the air.” – Richard Melange [184:26]
11. Reality Check: PPE & Biohardening
- PPE and engineered environments (e.g., clean air in buildings) are robust defenses but expensive and require sustained investment. They must complement, not substitute for, other measures.
12. Industry & Policy: Current Actions and Gaps
- Slow Shift Among Scientists:
Leading figures are now signing statements recognizing the risks; however, actual reductions in open dissemination of dual-use AI tools remain slow.
- AI Companies’ Role:
- Must not only prevent “uplift” of terrorists and rogue states but proactively provide tools to defensive actors (biosurveillance, vaccine development).
- “It is sort of unacceptable that the people that you can pre-identify as those working on the most pressing catastrophic biological security challenges ... are not routinely getting the best AI before anybody else.” – Richard Melange [158:04]
- Governments:
Need legal levers for incident response, proactive funding/support for defensive acceleration, and preparedness even if regulated access only buys a few years.
Notable Quotes and Memorable Moments
- On Overcoming Nature:
- “I must disagree strongly when people say nature is the world’s worst bioterrorist ... We can do worse than nature.” – Melange [00:00], [38:03]
- On Tacit Knowledge:
- “Tacit knowledge barriers ... could never be overcome ... The eval put paid to [that].” – Melange [13:26]
- On AI Uplift for Bio Risks:
- “I think it will be important to really get into what tacit knowledge is… but stepping back, what did VCT really do? It’s a really great eval.” – Melange [11:41]
- On Open-Source Dilemmas:
- “For biological tools, the stuff that might be able to design genomes, ... that stuff is all open weight, the data is open. ... They’re not used to thinking about this as weapons of mass destruction territory.” – Melange [91:16]
Timestamps for Important Segments
- Evo 2 and Novel Genome Design: [01:26]–[04:40]
- Microsoft Ricin Protein Experiment: [04:44]–[07:46]
- Tacit Knowledge and VCT: [09:07]–[14:58]
- Historical Use and Intent Discussion: [18:20]–[22:42]
- Catastrophic Risk Scenarios: [22:48]–[25:55]
- Risk Spike Probability: [26:09]
- Typology of Actors: [30:24]–[35:46]
- Response Framework (Access, Guardrails, Defac): [70:10]–[73:34]
- Open Model Guardrails Limitations: [98:14]–[102:39]
- Defensive Acceleration: Surveillance & Attribution: [103:16]–[110:16]
- Broad-Spectrum Countermeasures: [116:57]–[124:12]
- Policy and Industry Recommendations: [158:04]–[161:35]
- Career Advice: [176:25]–[183:18]
- Optimistic Futurism: [184:26]–[187:27]
Closing Reflections
The rapid advances in AI-driven genome engineering and biology present not only transformative opportunities for science and medicine, but also existential security challenges. Tacit knowledge barriers have fallen, screening regimes are showing cracks, and powerful bio-tools remain largely open, with little consensus on responsible restriction or managed dissemination.
Urgent priorities identified:
- Proactively equipping defenders (biosurveillance, broad-spectrum countermeasures)
- Improving government and private sector preparedness for bio-AI incidents
- Rapidly accelerating defensive biotech development; a future of eradicated disease and pathogen-free built environments is within reach.
Richard Melange’s ultimate message:
We can, and must, get ahead of “offense-dominant” AI-enabled biothreats by stacking smart, multi-layered defenses. There is optimism in the vision of a resilient future—but only if action keeps pace with capability.
Additional Resources
- CLTR: The Near-Term Uplift of AI on Biological Misuse (2024)
- SecureBio – Virology Capabilities Test (VCT)
- Nucleic Acid Observatory
- BlueDot Impact – Defensive Acceleration Initiatives
- Blog: Richard Melange on DEFAC projects in AI bio
- 80,000 Hours Job Board
[Podcast language and tone preserved where possible. Coverage skips intros, ads, and general housekeeping. For extended reading and updates, see episode show notes and listed resources.]
