Cybersecurity Today: Jeff Williams (Contrast Security & OWASP) on Mythos and AI Security
Date: April 11, 2026
Host: Jim Love
Guest: Jeff Williams, CTO & Co-founder, Contrast Security; OWASP Co-founder
Episode Overview
This episode began as a discussion of Anthropic’s new, not-yet-released AI security model “Mythos,” but quickly expanded into a wide-ranging conversation about the state—and future—of application security (AppSec), open source vulnerabilities, the evolving impact of AI, and the coming legal and industry changes affecting software makers. Williams shares frank insights into what is and isn’t working in AppSec, the limitations of current security practices, how AI models like Mythos could reshape vulnerability discovery and remediation, and why the entire industry must rethink its foundational strategies.
Key Discussion Points & Insights
1. The State of Application Security and the Genesis of Contrast Security
[00:01-02:36]
- Williams challenges the ossified nature of AppSec models, arguing for innovation over outdated maturity models.
- (Quote: “AppSec people are locked into the idea that they've got to do it this one way… It's not working. We can do better.” – Jeff Williams, 00:01)
- The name Contrast Security comes from the medical “contrast agent” metaphor: instrumenting applications to reveal vulnerabilities in real time, moving beyond perimeter testing.
2. Explaining OWASP and Its Impact
[02:36-05:44]
- Williams narrates OWASP’s origins, focusing on democratizing software security knowledge and tools (e.g., WebGoat, OWASP Top Ten).
- (Quote: “The first OWASP Top 10... really took off.” – Jeff Williams, 03:39)
- He points out that despite updates, fundamental vulnerabilities remain unsolved.
- (Quote: "Truth is, it hasn't changed very much. It's still the same problems basically.” – Jeff Williams, 04:59)
3. Persistent Vulnerabilities and Slow Industry Change
[05:44-07:15]
- Progress in remedying security flaws is slow due to ingrained processes and inertia within companies and communities.
- (Quote: “Things in AppSec move really slowly… it really takes a very long time because... companies are very embedded.” – Jeff Williams, 05:49)
4. Mythos: AI and the Next Era of Vulnerability Discovery
[07:06-13:34]
On Anthropic’s Mythos:
- Mythos is an AI model engineered for advanced vulnerability discovery, already uncovering novel, long-hidden flaws in popular software.
- (Quote: "It’s found new CVEs in most major operating systems and products.” – Jeff Williams, 07:46)
- Host and guest discuss how this undermines the assumption that open source code is “hardened”; most projects receive little security scrutiny.
- (Quote: “Millions of open source projects... the vast majority... have almost no scrutiny.” – Jeff Williams, 08:12)
- Discovery of vulnerabilities is likened to bitcoin mining. (Quote: “They're all out there waiting to get mined.” – Jeff Williams, 09:36)
Cost and Business Model:
- Mythos is currently too expensive for mass integration ($15–$20 per pull request), which may limit short-term impact.
- (Quote: “This is going to be actually really expensive... 50 times more expensive than using traditional approaches.” – Jeff Williams, 11:53)
- AI pricing will tend to settle just below the cost of the equivalent human labor.
5. Open Source vs Commercial Software Security
[09:53-10:46]
- Widely used open source projects benefit somewhat from public scrutiny, but both open source and commercial software suffer from insufficient analysis.
- (Quote: “The internal corporate applications are often given very trivial levels of security analysis.” – Jeff Williams, 10:21)
6. The Attacker/Defender Dynamic & AI Remediation
[15:04-16:35]
- Attackers benefit most when vulnerabilities are exposed but not fixed; AI must go further than detection to offer practical, cost-effective remediation.
- (Quote: “The attackers have always... the advantage in AI because they only have to find out the problems, they don't have to fix them.” – Jim Love, 15:22)
- (Quote: “We use AI to do auto remediation... and it works pretty well.” – Jeff Williams, 15:32)
- Human skepticism about AI-generated code quality is natural, but humans are equally fallible.
7. AI in the "Software Factory" – Security by Design
[18:21-23:10]
- Williams envisions an “AI-powered software factory” with continuous feedback loops—problems detected and remedied automatically, progressing towards “secure by design.”
- (Quote: "The opportunity for us is to build better software factories that are powered by AI." – Jeff Williams, 18:20 & 26:08)
- Today, most security tools focus on detection; the next steps are remediation, threat modeling, and architecture.
- (Quote: "Maybe the next thing that topples is remediation work and we take away that horrible grunt work away from people and we start getting AI to do that.” – Jeff Williams, 19:49)
- The ultimate goal: automatic code generation with embedded, proven defenses and an assurance case outlining why the software is secure.
8. AI’s Probabilistic Nature vs Algorithmic Predictability
[21:31-22:22]
- The host raises concerns about AI’s inherent unpredictability; Williams counters that the human element is just as unpredictable, if not more so.
- (Quote: "People are fundamentally insecure too, by the way. Really bad actually.” – Jeff Williams, 18:39)
- (Quote: “There's... a lot of probabilistic elements in it called developers...” – Jeff Williams, 22:22)
9. Building a Modern, AI-Integrated Development Shop
[23:10-25:01]
- Williams emphasizes a gradual handover of tasks from humans to AI, with strong human validation at every step:
- Start with AI as autocomplete/search
- Mature toward structured specs as input for AI agents that generate code
- Maintain ongoing validation and human oversight
10. Cultural Barriers, Legacy Models, and the Urgency for Change
[25:01-26:41]
- The average software product remains highly vulnerable; time to remediate is far too long.
- (Quote: "The bar that we're comparing to is stupefyingly low. Like, embarrassingly...like, I wouldn’t want to tell my mother that’s how the software she’s trusting is built." – Jeff Williams, 25:32)
- Outdated security models are still in widespread use; new methods are urgently needed.
11. Staying Optimistic as Security Leaders
[26:41-27:23]
- Williams encourages solving the "next hard problem" one at a time, invoking “The Martian” as a survival metaphor.
- (Quote: "There’s one problem that comes up and you work on that one and you survive and then you work on the next one." – Jeff Williams, 27:06)
12. Legal Changes: EU Product Liability Directive
[28:23-31:34]
- The new EU directive classifies software as a product; vendors will be strictly liable for security flaws that cause harm, regardless of fault.
- Applies to all commercial software sold to EU citizens, including the use of open source components.
- (Quote: "Anyone selling software into the EU will be potentially exposed." – Jeff Williams, 29:09)
- Host notes the shift away from US rules, where software liability has been historically limited.
- Open source projects themselves are not directly liable unless incorporated into a commercial product.
- This directive will profoundly affect software economics and push companies toward better security.
13. The Future of Bug Bounty Programs
[32:12-33:12]
- Mythos could undermine the value of both bug bounties and pentesting, especially if it becomes the faster, cheaper way to find vulnerabilities.
- (Quote: “Maybe bug bounty is just something that doesn't work anymore.” – Jeff Williams, 33:02)
14. The Human Side: Change, Resistance, and Industry Shocks
[33:31-33:58]
- AI is accelerating security disruption, but changes in corporate culture and mental models are lagging.
- (Quote: "A lot of the change has to do with people's mental models...and that stuff’s real hard to change." – Jeff Williams, 33:44)
Notable Quotes & Moments
- On legacy AppSec practices: "The biggest risk is continuing to do what we're doing now. It's not working." – Jeff Williams [00:01, 26:08]
- On open source vulnerability reality: "The number of CVEs is absolutely dwarfed by the number of latent vulnerabilities that are out there that we haven't discovered. In a way it's like bitcoin mining." – Jeff Williams [09:13]
- On AI’s current capabilities versus hype: "I think AI can fix problems and I suspect that Mythos is better than other models at doing that. ... But it's also, you have to think about the cost and time and other factors when you think about it." – Jeff Williams [15:32]
- On legal changes: “It's what's called a no fault liability rule, which says it really doesn't matter how you built it. You're liable for any harm that comes from defects in your software.” – Jeff Williams [28:23]
- On AI accelerating industry change: “Things seem to be changing faster with AI than other things in the past, so maybe.” – Jeff Williams [33:31]
Key Timestamps
- [00:01] – AppSec’s stuck mindset; need for new strategies
- [02:00–02:36] – The origin of “Contrast Security” as a name and methodology
- [03:00–04:59] – OWASP’s Top Ten legacy; AppSec challenges remain fundamentally unchanged
- [07:15–08:50] – How Mythos is uncovering latent flaws, including in "hardened" codebases
- [11:30–12:59] – Economic realities and challenges to mass adoption of Mythos
- [15:32] – AI-driven self-remediation; limitations and skepticism
- [18:21–22:22] – The "software factory" of the future: feedback, automation, and the probabilistic challenge
- [23:10–25:01] – Transitioning to greater AI automation, with layered human oversight
- [28:23–31:34] – New EU product liability rules and their transformative implications
- [32:12–33:12] – Is AI going to negate the value of bug bounties?
- [33:31] – AI is accelerating change, but culture and mentality lag
Tone and Takeaways
Williams is both a realist and an optimist: critical of the status quo but energized by the opportunities presented by AI and regulatory change. He advocates bold experimentation and urges the cybersecurity industry to outgrow its defensive, slow-moving tendencies. Host and guest share a dry humor and candid skepticism about old industry myths, but ultimately urge listeners—especially security leaders—to keep problem-solving, stay optimistic, and prepare for the shifts that AI and regulation will soon bring.
Actionable Insights
- Challenge Old Security Processes: Reassess reliance on outdated maturity models; experiment with new, AI-driven approaches.
- Prepare for Regulatory Change: US companies selling to the EU must understand the Product Liability Directive and its implications for software risk.
- Keep an Eye on Mythos and Similar Models: These could transform how vulnerabilities are found (and fixed), but at present cost and scale matter.
- Expect Industry Shakeups: Bug bounties, security consultancies, and in-house methods may face disruption as AI tools mature.
- Adopt a Balanced Human/AI Approach: Gradually increase AI’s role, but always rigorously validate outputs before production.
- Persevere and Stay Optimistic: Progress in security is a marathon—solve the “next hard problem,” iterate, and don’t get stuck in despair.
For more from Jeff Williams, follow him on LinkedIn. For updates on legislative change in software risk, review his posts on the EU Product Liability Directive.