The Lawfare Podcast – Scaling Laws: Caleb Withers on the Cybersecurity Frontier in the Age of AI
Date: December 5, 2025
Host: Kevin Frazier (AI Innovation and Law Fellow, Texas Law; Senior Editor, Lawfare)
Guest: Caleb Withers (Research Associate, Center for a New American Security)
Main Theme: The evolving relationship between advanced “frontier AI” systems and cybersecurity—the shifting offense-defense balance, new attack vectors, response strategies, and policy implications.
1. Episode Overview
This episode explores how cutting-edge “frontier” AI models are disrupting the established dynamics of cybersecurity. Host Kevin Frazier and guest Caleb Withers dig into Withers’ research and recent report on the impact of new AI capabilities in the cyber domain. They unpack what’s genuinely novel about generative models, why old vulnerabilities persist, the escalating pressures on defenders, and what policymakers and industry should do to prepare for the coming wave of AI-driven cyber threats.
2. Key Discussion Points & Insights
A. The Pre-Generative AI Landscape
- Historical context: Prior generations of AI—machine learning and automation—have long been used in both cyber offense and defense.
- Spam filtering began as a manual process but was automated out of necessity.
- Traditional AI/ML tools helped filter out bad actors, recognize malware, speed up defender workflows.
- “Machine learning has played a long-standing role in cybersecurity for sure.” (Caleb Withers, 06:59)
B. What’s Different about “Frontier AI”?
- Definition:
  - Large foundation models, typically trained on vast Internet text and data; e.g., the type behind ChatGPT.
  - Not just LLMs anymore: many are multimodal (text, images) and can interface with software, interact with users, and take actions.
  - “When people think about large language models in cyber... these models are also increasingly using computers, making decisions, doing all sorts of things.” (Withers, 08:27)
- How the Offense-Defense Equation Shifts:
  - Historically, AI favored defenders by scaling protection and automating routine tasks.
  - New models, though, offer attackers “returns to scale”: they can launch more sophisticated, higher-volume attacks cheaply and quickly.
  - “The approach my report takes is to sort of say, is there any reason to expect that this might not hold or that this time might be different for some of the current gen AI capabilities we see?” (Withers, 10:30)
  - Rising costs for defenders: running models at defensive scale may become prohibitively expensive, while attackers can iterate and experiment at marginal cost (a toy cost model follows this list).
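To make that asymmetry concrete, here is a toy cost model in Python; the per-call price and daily volumes are invented placeholders, not figures from the episode or report:

```python
# Toy illustration of the defender-vs-attacker cost asymmetry described
# above. Every number here is an invented placeholder.
COST_PER_MODEL_CALL = 0.002  # hypothetical dollars per AI inference

messages_scanned_per_day = 5_000_000  # defender must screen everything
attacks_sent_per_day = 10_000         # attacker pays only per attempt

defender_cost = messages_scanned_per_day * COST_PER_MODEL_CALL
attacker_cost = attacks_sent_per_day * COST_PER_MODEL_CALL

# Identical per-call prices, but defense-scale screening costs hundreds
# of times more than attacking at the margin.
print(f"defender: ${defender_cost:,.0f}/day")  # defender: $10,000/day
print(f"attacker: ${attacker_cost:,.0f}/day")  # attacker: $20/day
```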
C. The Persistence of “Stubborn Weaknesses”
- Old vulnerabilities remain a problem.
- Issues like SQL injection (failing to neutralize special elements in database commands) have stubbornly persisted for decades; a minimal illustration follows this list.
- AI could help discover and remediate, but also exacerbate exposure as more non-expert coders build and deploy with AI’s assistance.
- “Some of these things are things that AI could be quite useful for... you can see it going both ways.” (Withers, 15:27)
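As a minimal illustration of the SQL injection weakness discussed above, this self-contained Python sketch (using the standard sqlite3 module; the table, data, and attacker string are invented) contrasts a vulnerable string-built query with a parameterized one:

```python
# Self-contained illustration of the SQL injection weakness discussed
# above, using Python's standard sqlite3 module. The table, data, and
# attacker string are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled string

# Vulnerable: special elements in the input are not neutralized, so the
# attacker rewrites the query's logic and dumps every row.
query = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())  # [('admin',)]

# Safer: a parameterized query treats the input purely as data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```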
D. Amplifying Classic Attacks: Phishing, Social Engineering
- Phishing, especially hyper-personalized and in non-English languages, is being supercharged.
- “It just seems totally correct to say that this is probably increased by at least an order of magnitude...” (Withers, 18:30)
- Defenses (e.g., Gmail spam filtering) are AI-powered too, but detection gets much harder with human-like, language-proficient attack content.
- Emphasis on mitigations that don’t hinge on spotting synthetic text: multi-factor authentication, domain validation (e.g., SPF/DKIM/DMARC), and user education about spear-phishing (see the sketch after this list).
- “At a certain point, if I’m sending you this flattering email... there’s not going to be much that a sufficiently sophisticated AI phishing campaign... can’t do.” (Withers, 19:56)
- Human vigilance is still critical—but the cognitive “cyber hygiene” burden on users increases, putting defenders at a disadvantage.
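One concrete form of the domain validation mentioned above is checking a sender domain’s published email-authentication records. A minimal sketch, assuming the third-party dnspython package; example.com is a placeholder domain:

```python
# Minimal sketch of the "domain validation" mitigation mentioned above:
# look up a sender domain's published SPF and DMARC records. Assumes the
# third-party dnspython package; example.com is a placeholder domain.
import dns.resolver

def lookup_txt(name: str) -> list[str]:
    """Return the TXT records for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf = [r for r in lookup_txt(domain) if r.startswith("v=spf1")]
dmarc = lookup_txt(f"_dmarc.{domain}")

# A missing or permissive DMARC policy means mail spoofing this domain
# is less likely to be rejected by receiving servers.
print("SPF:", spf or "none published")
print("DMARC:", dmarc or "none published")
```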
E. Time-to-Exploit Drops: The Acceleration of Attacks
- Attackers’ ability to weaponize new vulnerabilities (“time to exploit”) keeps shrinking: from months to weeks to days.
- AI could make this near-instantaneous: LLMs monitoring every new patch on GitHub could identify and exploit security updates before most users can install them (a defender-side version of this monitoring pattern is sketched after this list).
- “A trend I anticipate seeing is... just as soon as something is out there to be discovered... this happening at mass and pretty fast...” (Withers, 25:16)
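The same always-watching pattern can also serve defenders. Below is a minimal sketch that polls GitHub’s public Global Security Advisories API so new fixes can be triaged before they are weaponized; it assumes network access and the third-party requests package, with JSON field names per the public API:

```python
# Defender-side flip of the scenario above: poll GitHub's public Global
# Security Advisories API so new fixes can be triaged before they are
# weaponized. Assumes network access and the third-party requests
# package; JSON field names follow the public API.
import requests

resp = requests.get(
    "https://api.github.com/advisories",
    params={"per_page": 10},  # the ten most recently published advisories
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()

for adv in resp.json():
    # e.g. "GHSA-xxxx-xxxx-xxxx [high] SQL injection in ..."
    print(f'{adv["ghsa_id"]} [{adv["severity"]}] {adv["summary"]}')
```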
F. The Age of Agentic AI Systems
- Impending risk: as users hand daily tasks to “AI agents,” new attack surfaces emerge—what if a user’s agent is tricked or manipulated by a malicious agent acting on someone else’s behalf?
- The distinction between high-value and low-value targets will become increasingly important (a minimal policy-gate sketch follows this list).
- “Consumers generally don’t do that [segmentation]... is it going to be the case that in the future we’ll have two somewhat segmented inboxes, the one that deals with stuff that actually isn’t that big a deal versus the one that... no, we shouldn’t let the AI touch this.” (Withers, 36:29)
- Security is always a trade-off: Capabilities and convenience versus risk of exposure.
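One way to operationalize the segmentation idea above is a simple policy gate between an agent and sensitive actions. A hypothetical sketch; the mailbox labels, action names, and rules are all invented for illustration:

```python
# Hypothetical policy gate illustrating the segmentation idea above: the
# agent acts autonomously only on low-sensitivity items. All labels,
# action names, and rules are invented for the sketch.
from dataclasses import dataclass

SENSITIVE_ACTIONS = {"send_money", "share_credentials", "change_password"}

@dataclass
class AgentRequest:
    action: str
    mailbox: str  # "routine" or "protected"

def allow(req: AgentRequest) -> bool:
    """True only if the agent may act without a human in the loop."""
    if req.mailbox == "protected":
        return False  # humans handle the high-value inbox
    return req.action not in SENSITIVE_ACTIONS

print(allow(AgentRequest("summarize", "routine")))    # True
print(allow(AgentRequest("send_money", "routine")))   # False
print(allow(AgentRequest("summarize", "protected")))  # False
```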
G. Private Sector Responses and Security Innovations
- AI labs are investing in multiple layers of defense—e.g., classifiers monitoring every agentic action, respecting website permissions (robots.txt; see the sketch after this list), and developing models fine-tuned for cybersecurity defense.
- But trade-offs remain: e.g., if a model can’t read some websites, it also can’t check them for malicious content.
- “Trade offs, I think, abound... we’ve also seen... key players offering all sorts of interesting things... So, you know, people are doing stuff here, right?” (Withers, 41:39)
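For the robots.txt behavior mentioned above, Python’s standard library already provides the needed check. A minimal sketch; the agent name and URLs are placeholders:

```python
# Minimal sketch of the robots.txt-respecting behavior mentioned above,
# using only Python's standard library. The agent name and URLs are
# placeholders.
from urllib import robotparser

AGENT_NAME = "example-ai-agent"  # hypothetical agent identifier

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

url = "https://example.com/private/report.html"
if rp.can_fetch(AGENT_NAME, url):
    print("agent may fetch", url)
else:
    # The trade-off noted above: if the page is off-limits, the agent
    # also cannot inspect it for malicious content.
    print("robots.txt disallows", url)
```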
H. Policy and Regulation: State of Play
- U.S. and International Policymaking:
  - AI-cyber is recognized in DC as a “real, immediate” policy issue—unlike more speculative AI risks.
  - The major concern is not overregulation but whether government can respond quickly and flexibly if an AI breakthrough changes the cyber landscape.
  - Instead of premature regulation, the focus is on rigorous, actionable model evaluation: can models really do what attackers need, and how does that compare to human baselines?
  - “I'm a big fan of CISA, for example, and I'm glad that they exist and are looking at stuff like this...” (Withers, 44:41)
- Scenario planning, not static laws:
  - “There's not enough information out there right now to know what is the definitive hard law statutory approach that we want to enshrine for the next decade...” (Frazier, 48:51)
  - Keeping “the playbook in the drawer”—ready for rapid response, even if a sweeping law isn’t appropriate yet.
I. Recommendations and Forward-Thinking
- Key priorities:
- Improve and standardize model evaluations for real cyber capabilities (a minimal scoring sketch follows this list).
- Foster cross-disciplinary convenings (AI labs, cyber practitioners, policymakers) to agree on what really matters and how to measure it.
- Maintain flexibility in policymaking; be prepared to move quickly, not just discuss risks in hindsight.
- Keep attention on the “early signs”—what models can do now at low rates may be rapidly scalable.
- Recognize the value (and scarcity) of professionals who can design robust evals and scenario plans; avoid uncoordinated proliferation of standards.
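As a sketch of what such an evaluation loop might look like at its simplest, the snippet below scores a stand-in “model solver” against a human baseline on placeholder challenge tasks; every task, number, and threshold is invented:

```python
# Sketch of the evaluation idea above: score a model's success rate on
# challenge tasks against a human baseline. The tasks, solver, and
# baseline figure are all invented placeholders, not a real benchmark.
from typing import Callable

def success_rate(solve: Callable[[str], bool], tasks: list[str]) -> float:
    """Fraction of tasks the solver completes."""
    return sum(solve(t) for t in tasks) / len(tasks)

# Hypothetical stand-ins for a model harness and CTF-style tasks.
tasks = ["sqli-basic", "priv-esc-linux", "web-recon"]
model_solve = lambda task: task == "sqli-basic"  # placeholder harness
HUMAN_BASELINE = 0.66                            # placeholder expert rate

model_rate = success_rate(model_solve, tasks)
print(f"model: {model_rate:.0%} vs human baseline: {HUMAN_BASELINE:.0%}")
if model_rate >= HUMAN_BASELINE:
    print("capability threshold crossed: take the playbook out of the drawer")
```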
3. Notable Quotes & Memorable Moments
- On the offense-defense “frontier AI” shift:
  - “The case that AI helps defenders on net... a lot of those arguments still do apply. But... the benefit to spending more on running these models longer, running more of these models, running bigger models, we’re still seeing returns to that... if we look into the future, the cost that defenders are spending on running AI models... starting to be a material consideration.”
    — Caleb Withers (09:28–11:32)
- On persistent vulnerabilities:
  - “We still see today, after decades, that’s a mistake developers make... a lot of the mistakes we see... you probably shouldn’t have done that in this day and age.”
    — Caleb Withers (13:31–14:46)
- On phishing and human-oriented attacks:
  - “Have we seen true transformation of cyber attack yet? My answer is not yet, except for this asterisk of, as you said, phishing for exactly the dynamics you pointed at.”
    — Caleb Withers (18:00)
  - “At a certain point... there’s not going to be much that a sufficiently sophisticated AI phishing campaign... can’t do.”
    — Caleb Withers (19:56)
- On agentic AI risks:
  - “Capabilities are probably outpacing their ability to be relied on with any degree of confidence... as is often the case in the early days.”
    — Caleb Withers (34:32)
  - “It might be a little bit more annoying... but that’s an example of how you can make tradeoffs between the easy way and investing in the slightly more secure way.”
    — Caleb Withers (36:42)
- On evaluations and policymaking:
  - “The thing that I would be most excited about is... making sure that there really is that strong evaluative function going on both within industry and within government. Also... a willingness to move fast and turn on a dime and sort of think about what would be worth doing if we saw certain things.”
    — Caleb Withers (46:57)
- On future AI progress in cyber:
  - “For the cyber benchmarks I looked at that seem most compelling, they have been reliably going up, you know, over recent years... AI models are getting better at cyber stuff pretty rapidly and I think this will continue for at least a few months to years.”
    — Caleb Withers (54:43–55:30)
- On practical recommendations:
  - “Paying attention to what AI models can do in the cyber domain and doing so in a thoughtful way, I think is pretty important.”
    — Caleb Withers (56:16)
4. Important Timestamps
| Timestamp | Segment / Key Topic |
|:---------:|:--------------------|
| 04:24 | Introduction of Caleb Withers and framing of the AI-cyber report |
| 05:44 | Pre-generative-AI: how ML and automation have supported cybersecurity |
| 07:48 | What is “frontier AI” and why it changes the landscape |
| 09:28 | How and why frontier AI changes the offense-defense calculus |
| 13:15 | Persistent “stubborn” vulnerabilities in code—e.g., SQL injection |
| 15:27 | The double-edged role of AI in exposing/fixing old vulnerabilities |
| 18:00 | GenAI’s impact on phishing—hyper-personalization and language |
| 23:00 | Time-to-exploit trends, the shrinking window for defenders |
| 24:27 | Hypothetical: LLM watches GitHub to instantly exploit new patches |
| 33:05 | Emerging risks with agentic AI systems (autonomous agents) |
| 41:39 | How AI labs are responding with layered security, market incentives |
| 42:29 | State of AI cybersecurity policy in the US and internationally |
| 44:41 | The importance of rigorous model evals, scenario planning |
| 48:51 | Policy advice: playbooks, flexibility, scenario-based approaches |
| 50:08 | The need for convenings & consensus on what to evaluate and why |
| 54:12 | Misconceptions: AI’s “progress slump” versus ongoing cyber gains |
| 56:16 | Closing takeaways—remain vigilant and proactive |
5. Conclusion
This episode is a must-listen for policymakers, technologists, and anyone following the intersection of AI and cybersecurity. Caleb Withers asserts that “frontier” AI models catapult both old threats and new capabilities forward—making the offense-defense dynamic less stable and more resource-intensive than ever. The big takeaways: scenario planning, flexible policy frameworks, collaborative evaluations, and relentless vigilance are the best ways to prepare for the looming AI-driven cyber frontier.
Recommended companion reading:
- Caleb Withers, “Tipping the Scales: Emerging AI Capabilities and the Cyber Offense-Defense Balance” (full report referenced throughout the episode).
Find more at: lawfaremedia.org
Contact: scalinglaws@lawfaremedia.org
(This summary omits ad breaks and non-content segments for clarity and depth.)
