Big Technology Podcast
Episode: Is Generative AI a Cybersecurity Disaster Waiting to Happen?
Guest: Yinon Costica (Co-Founder & VP of Product, Wiz)
Host: Alex Kantrowitz
Date: September 24, 2025
Episode Overview
Alex Kantrowitz sits down with Yinon Costica, co-founder of cybersecurity firm Wiz, to unpack the evolving cybersecurity landscape amidst the generative AI boom. The conversation explores whether AI represents a ticking time bomb for security vulnerabilities, how attackers and defenders are adapting, the real versus perceived threats posed by generative AI, and what emerging technologies like autonomous vehicles or quantum computing mean for society’s digital safety. Yinon shares insider perspectives on Wiz’s research (including the DeepSeek incident), outlines the core risks in AI code and infrastructure, and reflects on the balance of optimism and caution as technology accelerates.
Key Discussion Points & Insights
1. The New AI Attack Surface: Software, Infrastructure, and Tools
- AI’s Expansion Brings New Vulnerabilities
- “AI is very new as a software and the fundamentals exist…it can be vulnerable and you can actually use it in order to just run your own code on it like any other technology." (B, 02:43)
- Wiz’s research at Pwn2Own: Of six foundational AI technologies tested, four had critical remote code execution vulnerabilities (B, 03:44).
- It's Not Just the Code, but the Build Tools and Infrastructure
- Vulnerabilities aren’t just in AI applications, but also in tools like Nvidia, PostgreSQL, Redis – the underpinnings of AI development (B, 03:44–05:01).
- AI applications run on cloud infrastructure that’s prone to “misconfiguration, exposure of storage buckets, over-permissioned identities, and more” (B, 05:50; 08:19).
- Real-world example: A major software provider accidentally exposed a training dataset bucket, echoing decade-old cloud security mistakes now amplified by AI (B, 06:55).
Notable Quotes
- "The most important thing to understand about AI: It's software like any other software." (B, 05:01)
Timestamps:
- [00:42–05:31]: AI’s software vulnerabilities, Pwn2Own results
- [05:50–09:13]: Infrastructure risks and real-world incidents
2. Security Risks in AI-Generated Code
- AI-Written Code: Efficiency Meets Unreliability
- AI can generate code rapidly, but unless prompted for secure practices, it often skips critical security steps (B, 09:20).
- Ownership dilemma: If developers no longer fully understand “vibe-coded” (prompt-only) applications, who is accountable for fixing vulnerabilities or outages? (B, 10:57–12:19)
- Some reports of applications hacked hours after being vibe coded—the lack of familiarity makes remediation difficult (B, 12:19).
- Agents as Secure Code Reviewers?
- In the future, AI agents could not only generate, but also review and secure code, compounding risk without clear human oversight (B, 13:55–14:43).
Notable Quotes
- "Vibing code is a great way to accelerate, but it doesn't remove you from the responsibility of actually knowing your code, being able to address issues within the code and guide AI farther into the maintenance process." (B, 12:19)
- "Are we already seeing cybersecurity problems within companies who have had developers that have just vibe coded or AI coded applications? Yeah, actually there are very known examples..." (B, 12:19)
Timestamps:
- [09:13–16:45]: Code ownership, AI-written code vulnerabilities, future of code review
3. Threat Actors: How Bad Guys Use AI
- The Attacker’s Arsenal: Three Layers
- Attacking AI applications directly (e.g., prompt injection, extracting sensitive data)
- Leveraging AI for automation and rapid iteration: Attackers can try thousands of permutations with little effort (B, 17:38).
- Using AI for vulnerability research: Potential for “zero day” discovery and weaponization (B, 17:38, 29:54)
- Defender vs Attacker Asymmetry
- Attackers only need one success; defenders must secure everything (B, 19:25)
- AI increases the “noise” from false positives—making it easier for attackers to overwhelm security teams (B, 22:09–23:39)
- Anybody Can Do It
- From nation states to “teenagers in basements,” AI makes sophisticated cyber capabilities accessible and cheap (B, 24:19–25:02)
Notable Quotes
- “Any prompt that we expose is going to be tested also by threat actors.” (B, 18:55)
- “DDoS your security team. Actually the security team.” (B, 22:12)
- “Cybercrime… it's a business... If you think about ransomware, it's a business. It has rules to it…” (B, 25:02)
Timestamps:
- [17:38–25:02]: Threat actor behavior and AI as a “force multiplier”
4. Are the Attacks Actually Worse Now?
- Still ‘More of the Same’ (For Now)
- Despite fears, Wiz data shows AI hasn’t yet yielded a dramatic evolution in threats—attacks tend to exploit familiar infrastructure weaknesses (B, 26:04–27:39)
- Defender “blue side” progress: Improved foundational practices, transition to cloud, better detection and response (B, 26:40–28:10)
- Automation existed long before ChatGPT; AI is accelerating, not transforming, attack methods at this stage (B, 28:10–29:25)
Notable Quotes
- “Right now... we are seeing accelerated automation of the known threats, known risks. And this is a journey security has been into in the past decade...” (B, 29:25)
Timestamps:
- [26:04–29:25]: Assessment of current AI-driven threats
5. The Arms Race of Vulnerability Research
- Race to Find and Fix Vulnerabilities
- If AI starts discovering vulnerabilities (“zero-days”) faster than defenders can patch, then the threat landscape could shift dramatically (B, 29:54–32:37)
- Security tools using AI for proactive vulnerability scanning and risk reduction are already emerging (B, 29:54–33:56)
Notable Quotes
- “Vulnerability research is… the bottleneck of the security space because vulnerabilities are what allow threat actors to move from low trust to higher trust environments.” (B, 29:54)
Timestamps:
- [29:54–33:56]: Vulnerability research, zero-days, and the defender’s response
6. Wiz’s Role & Industry Shifts
- What Wiz Does
- Scans organizations’ cloud (and codebase) for critical attack paths, reducing noise for security teams (B, 33:01–33:56)
- Focus on enabling—not blocking—business innovation, especially as AI and cloud adoption accelerates (B, 34:13–36:54)
- Security Must Be ‘Baked In’
- Cites Microsoft’s “Trustworthy Computing” as proof that “security has to be built in [for people] to trust… compute” (B, 37:23)
Notable Quotes
- “Security is a cornerstone for our ability to use technologies at scale… we have to make sure we can trust cloud and AI.” (B, 37:23)
Timestamps:
- [33:01–38:39]: Wiz’s function, Google acquisition, and industry context
7. Real-World Incidents: The DeepSeek Database Leak
- Incident Breakdown
- Wiz found a wide-open DeepSeek database leaking sensitive chat histories—“not a very advanced AI-centric capability. It’s an exposed database” (B, 41:15)
- Real lesson: The pace of AI adoption is breathtaking (10% orgs added DeepSeek in a week!), and basic cloud mistakes become higher stakes (B, 41:15–46:41)
- Misinterpretation by the Media
- Yinon clarifies: Misconfigurations like this happen constantly ("everybody"), but the speed/magnitude is amplified by generative AI (B, 46:39)
Notable Quotes
- "This is a very common misconfiguration that happens." (B, 46:39)
Timestamps:
- [41:15–46:56]: DeepSeek case study, media framing, and broader implications
8. The Future: Superintelligence, Quantum, and Autonomous Vehicles
-
Superintelligence: Existential Worry?
- Thought experiment: If superintelligent AIs can break anything, would “good guys” have to attack “bad AIs” to preempt societal collapse? (A, 48:06)
- Yinon’s take: Past innovations were all seen as catastrophic risks, but “we’ve never been [in a position] where security is discussed at the same time as innovation… I’m positive.” (B, 48:37–51:30)
-
Quantum Computing
- Theoretical quantum attacks (“steal now, decrypt later”): Ongoing need for post-quantum cryptography (B, 53:02–54:15)
-
Physical World Risks: Robots, Cars, and IoT
- Example: DDoS attacks by hijacking “smart” devices. Threats may not be direct (crashing cars), but profound (mass outages, societal disruption) (B, 55:35)
- Key point: Threat actors display “amazing creativity”—risk often emerges from unanticipated uses (B, 56:38)
Notable Quotes
- “We should be always… careful about the way we apply technologies... The last thing we want to do is build trust on something that breaks at some point.” (B, 57:29)
Timestamps:
- [48:06–57:52]: Superintelligence, quantum computing, and physical world risks
9. The Human and Organizational Side
- Responsibility & Resilience
- Security can’t be left to IT: “Democratizing security” means everyone—employees, developers, IT—must contribute to cyber resilience (B, 58:22)
- The rise of deepfakes and AI-powered phishing heightens the need for vigilance—tools and training are both essential (B, 59:30–60:30)
Notable Quotes
- “Security today is geared towards finding new threats, responding to them in ways we can operationalize at scale. But we have to be aware that with new technologies come new threats.” (B, 60:30)
- “Everybody, as humans, employees, developers… we are part of this resilient system.” (B, 60:54)
Timestamps:
- [58:22–61:03]: Democratizing security, new human-centric attack types
10. Optimism and Realism: Final Thoughts
- Threat Spectrum is Broader Than Ever
- AI, quantum, and agentic technologies are changing the game
- Most breaches and risks still revolve around “the basics”
- Industry is Rising to the Challenge
- Security vendors, startups, and researchers are moving faster than ever to build AI-aware protections
- Hopeful message: “I’ll choose to leave with your perspective. Optimistic. And we’ll have to keep talking.” (A, 61:37)
Timestamps:
- [61:03–End]: Reflections and closing thoughts
Memorable Quotes
-
Yinon Costica:
- "AI is very new as a software and the fundamentals exist…it can be vulnerable and you can actually use it in order to just run your own code on it like any other technology." (02:43)
- “Vibing code is a great way to accelerate, but it doesn't remove you from the responsibility of actually knowing your code…" (12:19)
- “DDoS your security team. Actually the security team.” (22:12)
- “Any prompt that we expose is going to be tested also by threat actors.” (18:55)
- "This is a very common misconfiguration that happens." (46:39)
- “We should be always… careful about the way we apply technologies..." (57:29)
- “Everybody, as humans, employees, developers… we are part of this resilient system.” (60:54)
-
Alex Kantrowitz:
- “So all these companies, all these engineers who are relying on artificial intelligence tools because they're so new, they may not know it, whereas what you're claiming that they may not know it, but bad actors could be basically hacking into the code that they are writing…” (04:39)
- “It seems like the way that generative AI is working today, it hasn't led to this massive uptick in security vulnerabilities or even attacks, even though we're always seeing them escalate. So I'll choose to leave with your perspective. Optimistic. And we'll have to keep talking.” (61:37)
Important Segment Timestamps
- [00:42–05:31]: AI’s core vulnerabilities, Pwn2Own findings
- [05:50–09:13]: Infrastructure risks with AI
- [09:13–16:45]: AI-generated code security, code maintenance challenges
- [17:38–25:02]: Threat actor methods and AI
- [26:04–29:25]: Assessing “real” changes in attack sophistication
- [29:54–33:56]: Vulnerabilities, zero-day, and AI’s impact on research
- [33:01–38:39]: What Wiz does, impact of Google acquisition
- [41:15–46:56]: DeepSeek leak and lessons for AI adoption
- [48:06–57:52]: Discussion of superintelligence, quantum risk, IoT/autonomous vehicle hacking
- [58:22–61:03]: Demographics of security responsibility, human-centric attacks
Conclusion
This episode offers a comprehensive, nuanced look at the intersection of generative AI and cybersecurity. Yinon Costica cautions that while generative AI introduces real and novel risks, many fundamental security challenges remain unchanged: misconfiguration, ownership ambiguity, and the persistent creativity of attackers. Nonetheless, optimism shines through—tech’s resilience, industry attention, and new security paradigms are converging to build a safer AI-powered future. The message: stay vigilant, stay proactive, and don’t underestimate either the risks—or the collective capacity—to keep technology trustworthy as it continues to reshape society.
