Unchained Podcast – Episode Summary
Title: How Does Crypto Remain Secure in a World of Always-On AI Hacks?
Date: April 10, 2026
Hosts: Kane Warrick (A), Taylor Monahan (B), Austin Griffith (C, Ethereum Foundation)
Theme:
How the rapid evolution of AI—especially models like Anthropic's "Mythos"—threatens and transforms the security paradigm of crypto, blockchain, and Web3. The crew discusses the growing inevitability of AI-powered hacks, the challenge of immutable systems, best practices for developers, and the new economics and architectures emerging in this high-stakes environment.
Main Episode Themes
- The existential threat "frontier" AI models like Mythos pose to crypto security
- How AI capabilities fundamentally shift the timelines and techniques for discovering and exploiting vulnerabilities
- Rethinking immutability, upgradeability, and code audit approaches in smart contract design
- The rise of "skill files" and agentic toolkits for rapidly up-skilling AI agents
- The impact of AI on infrastructure, DevOps, and organizational models in crypto businesses
- Payment, access, and economic models for both users and organizations deploying powerful AI
- The blurred boundaries between useful tools, threat actors, and the need for open collaboration in defense
Key Discussion Points and Insights
1. The Panic Around Mythos: AI’s Growing Exploit Capabilities (00:43–04:05)
- Anthropic’s Mythos model: Labeled “too dangerous to release,” yet has already discovered 20+ zero-day vulnerabilities in long-established software (Linux, OpenBSD).
- Kane: “It has figured out like 20 zero day vulnerabilities in decades old software, which feels a little bit like what crypto has been kind of going through...”
- Comparison to crypto hacks: Even longstanding, heavily-audited smart contracts (e.g. Balancer V2) are suddenly vulnerable thanks to AI techniques.
- Austin: “We’re going to see AI get so good that a bunch of hacks are going to happen and then not so many hacks are going to happen anymore.”
2. Immutability vs. Upgradeability: Balancing Security and Flexibility (04:05–07:03)
- Immutability is a double-edged sword: It ensures censorship resistance but means bugs in deployed contracts cannot be fixed.
- Taylor: “Immutability is like freaking amazing in so many ways, but also quite impractical and creates risk.”
- Hot take: Uniswap is often cited as a paragon of secure, immutable contracts, but confidence is waning as AI capabilities expand.
3. Are We Seeing AI-Powered Attacks Already? (07:03–11:45)
- AI assist in exploits: The crew suspects recent hacks may already feature AI-assisted discovery and execution (“Mythos is scary because… this thing, I'll just go hack you.” – B, 08:58).
- Not all hacks require superhuman AI; some are just “a dude with Opus,” but advanced models will soon be fully autonomous and much faster than human attackers.
4. Web2 vs. Web3: Different Levels of Risk (09:27–12:00)
- Austin: “I think this is going to be worse for web 2 than it is for web 3,” because web2 infrastructure was not designed for adversarial scrutiny and instant patching is harder in legacy systems.
- However, core crypto protocols are not immune, especially as financial rewards for hacking are enormous.
5. Mythos and Building "For the Model" (12:00–18:47)
- Paradigm shift: Developers should design systems that assume extremely powerful, relentless, adversarial AI (not just human hackers) will attack them.
- Kane: “Boris... was like you should be building for the model six months from now, not for the model today.”
- Mythos introduces long-running, self-directed activity: “If Mythos can just sit there and read the entire OpenBSD code base for two weeks autonomously... a lot of the stuff that we've been doing ... disappears." – Kane, 14:18
6. Cost, Autonomy, and Model Selection (15:15–22:26)
- Austin describes his workflow: start with expensive models (Opus, Mythos) to prototype, then cut costs by offloading repetitive or simple tasks to local or smaller models.
- “You use the general model and you do the thing and you get it to work and you almost get product market fit, and then you go back to... making the whole company more efficient.” – C, 23:02
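The routing workflow Austin describes can be sketched as a simple cost-aware dispatcher. This is a minimal illustration, not any real product's logic: the model names and per-token prices are invented placeholders, and "task complexity" is assumed to be an externally supplied score.

```python
# Hypothetical model router. Model names and per-1K-token prices are
# illustrative assumptions, not real provider pricing.
PRICES_PER_1K_TOKENS = {
    "local-small": 0.0,    # local/small model: negligible marginal cost
    "opus-class": 0.06,    # large frontier model (made-up rate)
}

def pick_model(task_complexity: float, budget_usd: float, est_tokens: int) -> str:
    """Send hard prototyping work to the frontier model while it fits the
    budget; route simple or repetitive tasks to the cheap local model."""
    frontier_cost = est_tokens / 1000 * PRICES_PER_1K_TOKENS["opus-class"]
    if task_complexity >= 0.7 and frontier_cost <= budget_usd:
        return "opus-class"
    return "local-small"

if __name__ == "__main__":
    # Hard task within budget vs. a simple repetitive task
    print(pick_model(0.9, budget_usd=5.0, est_tokens=20_000))
    print(pick_model(0.2, budget_usd=5.0, est_tokens=20_000))
```

The design choice mirrors the quote: prototype with the general model until the product works, then optimize the expensive calls away.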
7. Skill Files: AI Rapid-Re-Skilling and Collective Security (26:40–33:46)
- Skill files (“skill.md”): Like “Matrix”-style instant upskilling for AI agents, bridging the gap between model training and real-world, up-to-date knowledge.
- Kane: “It makes an agent like, you know, IQ 60, right? About crypto to like, IQ 140 in...”
- Austin: “If you're out there building anything and you're expecting agents to use your anything, you should have a skill file... it's just like the robots.txt stuff.” – 31:02
- Open culture: The community shares skill files widely, but verifying quality/safety is still a challenge.
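Mechanically, a skill file as described is just domain knowledge prepended to an agent's context. The sketch below assumes skill files are plain markdown (the episode gives no formal spec); the function names and prompt layout are hypothetical.

```python
from pathlib import Path

def load_skill(skill_path: str) -> str:
    """Read a skill file (assumed here to be plain markdown, e.g. skill.md)
    so it can be injected into an agent's context, analogous to how
    robots.txt advertises rules to crawlers."""
    return Path(skill_path).read_text(encoding="utf-8").strip()

def build_prompt(skill_text: str, user_task: str) -> str:
    # Put the skill first so the agent sees up-to-date domain knowledge
    # before the task itself.
    return f"{skill_text}\n\n---\n\nTask: {user_task}"
```

Verifying that a shared skill file is safe before injecting it this way is exactly the open question the panel raises.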
8. Real World Stories: AI Agents Making and Breaking Money (33:46–38:34)
- Example: Austin's agent autonomously deployed contracts with $250K+ at stake, some without human review, using skill files.
- Agents mirror human errors: They can “forget” crucial steps, overconfidently make mistakes, and require iterative re-skilling for operational soundness.
- Austin: “If you could just have a skill file that says don’t be lazy, that would be great.” – 32:49
9. Open Sharing Versus Secret Sauce (38:34–41:08)
- Discussion of the tension between open security (“give skill files away for free”) and proprietary advantage:
- Austin: “The skill file itself is the secret sauce for the whole company. And you're giving it away."
- Taylor: “Why would you keep it a secret, you fools? ... you could just share.”
10. The Economics and Infrastructure of AI Usage (41:22–55:09)
- Anthropic is increasingly restricting usage to metered, pay-per-use API access, moving away from cheap subscription models, which users sometimes abuse by running multiple accounts.
- Austin’s workflow: subscription for everyday use; API for automated bots; always balancing cost, accuracy, and security.
- “There was like an $800 day, one day… if I just tell the bot to keep building...it can cost like a thousand dollars a day, right?” – C, 45:20
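The "$800 day" arithmetic is easy to reproduce with a back-of-the-envelope estimator. The call counts and the per-million-token price below are assumptions chosen only to land in the order of magnitude discussed, not any provider's actual rates.

```python
def daily_cost_usd(calls_per_day: int, avg_tokens_per_call: int,
                   price_per_million_tokens: float) -> float:
    """Rough daily spend for an always-on bot under metered API pricing.
    All inputs are illustrative assumptions, not real rates."""
    tokens = calls_per_day * avg_tokens_per_call
    return tokens / 1_000_000 * price_per_million_tokens

# e.g. a bot making 2,000 calls of ~30K tokens at an assumed $15/M tokens:
# 2,000 * 30,000 = 60M tokens, i.e. roughly $900/day
print(daily_cost_usd(2_000, 30_000, 15.0))
```

Under these assumptions, "just tell the bot to keep building" really does produce four-figure daily bills, which is the economic pressure behind Austin's subscription-for-humans, API-for-bots split.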
11. Knowledge, Memory, and Hallucinations (51:46–53:28)
- AI confidence/hallucinations are persistent problems; especially on historical facts or contract details.
- Taylor: “Why are we in the 1800s? ... and you just burn tokens on this.”
- Austin: “It doesn’t have a skill file for itself. It should absolutely have a skill file for itself.”
12. AI for DevOps and Infrastructure (63:13–66:46)
- Modern DevOps increasingly delegated to AI agents; they can already perform complex deployments faster and better than most human developers.
- Kane: “Back in the day [deploying] would have been four days of pain and suffering...and this thing was like: it is done, so cool.”
13. LARPing and Social Engineering (66:46–67:28)
- Bad actors can rapidly "slop" together fake, inept, or larpy projects that look credible; vetting genuine innovation is getting harder because respectable-looking output is now easy to fake.
14. Mythos: The "Big One"—Inevitable Hacks and Economic Incentives (67:40–69:49)
- Final question: If Mythos is given to select companies, will it front-run or enable a major hack on Ethereum? Might insiders or partners quietly exploit knowledge for profit?
- Austin: “If someone can front run like a set of bombers going to a place, we're probably going to see companies front run and make money by hacking something.” (67:40)
- General agreement: It’s only a matter of time before such a super-model leads to a world-changing exploit unless extreme precautions are taken.
Notable Quotes & Memorable Moments
“We’re going to see AI get so good that a bunch of hacks are going to happen and then not so many hacks are going to happen anymore. … I’m ready to rip the band aid off. Let’s do it in a bear market.”
— Austin (C), 02:27
“Immutability is like freaking amazing in so many ways, but also quite impractical and creates risk.”
— Taylor (B), 04:05
“It makes an agent like, you know, IQ 60, right? About crypto to like IQ 140 in, like, what—”
— Kane (A), 39:23
"If you're out there building anything and you're expecting agents to use your anything, you should have a skill file."
— Austin (C), 31:01
"The skill file itself is the secret sauce for the whole company. And you’re giving it away."
— (paraphrasing Mike from Radar Relay, cited by Austin), 38:34
"There was like an $800 day, one day … if I just tell the bot to keep building forever, ...it can cost like a thousand dollars a day, right?"
— Austin (C), 45:20
“Why are we in the 1800s? ... and you just burn tokens on this?”
— Taylor (B), 51:46
Timestamps by Key Topics
| Timestamp | Topic |
|-------------|--------------------------------------------------------------------------------------------------|
| 00:43–04:05 | Mythos: new frontier of AI-driven exploits, parallels to crypto’s recent high-profile hacks |
| 04:05–07:03 | Immutability vs. upgradeability: code risks, Uniswap, and evolving threat models |
| 07:03–11:45 | Are AI-powered hacks already happening? Trends, risks, ecosystem comparison |
| 12:00–18:47 | Building “for the model”; long-running, autonomous AI attacks and their implications |
| 15:15–22:26 | Cost vs. accuracy, optimizing model usage, agentic design |
| 26:40–33:46 | Skill files and the future of agent self-improvement; open sharing culture |
| 33:46–38:34 | Agent mistakes, iterative hardening of skills/rules, real-world stories of AI bot deployment |
| 38:34–41:08 | Proprietary vs. open knowledge in collective security |
| 41:22–55:09 | Anthropic’s resource gating, user payment models, economics of AI subscription vs. API |
| 51:46–53:28 | Model hallucination, limits of AI memory/recall |
| 63:13–66:46 | Delegating DevOps to AI; personal stories and organizational shifts |
| 66:46–67:28 | The ease of faking innovation (“larping”), verification challenges |
| 67:40–69:49 | Speculating on Mythos-enabled hacks, insider trading, front-running, and overall implications |
Overall Tone and Style
- Language: Technical but highly conversational, sprinkled with war stories from the cutting edge of both security and AI agent development.
- Mood: Candid, a touch of gallows humor, and a sense of urgent but not alarmist realism; clear camaraderie among veteran builders who both respect and fear what’s coming next.
- Memorable Imagery: “Matrix skill files,” the “bear market” as a good time to get hacked, agents being “lazy,” developers “chewing glass,” and the vision of Mythos “walking into the DevOps job.”
Key Takeaways for Listeners
- The window for complacency in crypto security is closing. AI models like Mythos will change both the attack surface and the economics of exploits.
- Immutability’s benefits are matched by real risks; upgrade paths, bug bounties, and audits alone will not be enough.
- Building “for the model six months from now” is crucial: plan for adversaries with superhuman code-discovery and exploit abilities.
- “Skill files” are the emerging standard for keeping AI agents continuously, accurately up to date, but the ecosystem to verify and maintain them is still nascent.
- Open sharing of knowledge is part of crypto’s culture—and its best defense—but trust, verification, and clarity on provenance are constant challenges.
- The lines between helpful automation, attack surface, and outright threat actors are blurrier than ever. “What happens on chain never stays on chain”—it’s time to prepare for the next wave.
🔗 For more resources, skill files, and episode notes, visit unchaincrypto.com/uneasymoney
