Lead With AI
Host: Dr. Tamara Nall
Guest: Ron Gula (President, Gula Tech Adventures; Co-Founder of Tenable Network Security)
Episode: Cybersecurity Veteran Built His Own AI Clone (Then Warned About What Comes Next)
Date: October 7, 2025
Episode Overview
Dr. Tamara Nall welcomes cybersecurity veteran, investor, and animated content creator Ron Gula for a candid conversation about the fusion of AI, cybersecurity, and animation. Together, they discuss AI’s disruptive impact on software development, ethical considerations, the growing relevance of ‘datacare’, and lessons from building an AI clone of oneself. Gula, known for pioneering products and investing in 30+ cyber and AI startups, gives listeners a glimpse into the rapidly evolving world of cyber defense and how AI is being embedded into everyday tools. He also warns about potential pitfalls—like “AI idiocracy” fueled by synthetic data—and shares practical advice for everyday users, startups, and leaders.
Key Discussion Points & Insights
Ron Gula’s Journey: From Air Force to AI Investor
- Transitioned from aspiring Air Force pilot to NSA cyber expert and entrepreneur.
- Built and scaled Tenable Network Security (CEO for 16 years, took it public).
- Now mentors and invests in next-gen cybersecurity and AI startups with Gula Tech Adventures.
"I believe AI is taking over the cyber industry and I'm really a cyber security person at heart."
— Ron Gula [02:06]
Animation as a Cyber Storytelling Tool
- Gula employs animation to demystify cybersecurity for laypeople, policymakers, and students.
- Animation makes technical content more accessible and compelling, blending educational and comedic sketches.
- Emphasizes the role of AI-driven tools in democratizing Pixar-quality animation for content creators.
"We have a hard time as an industry connecting with other people... I found this sweet spot with animation where you could really, really identify some amazing technical contents."
— Ron Gula [03:20]
- Tools: Gula uses the Replicate platform (a Gula Tech portfolio company), which offers full creative control and integrates generative AI for scripting and visual assets.
"You can basically pose... your animation software just figures it out and it makes you do Pixar level events."
— Ron Gula [04:25]
- Example: Animated “Seinfeld” parodies with a cybersecurity twist and sketches tackling complex cyber themes through accessible, humorous dialogue.
AI’s Disruptive Role in Software Development
- The “holy smokes moment” for Gula: Recognizing that AI can now write most of the code for software products, reducing traditional dev teams from dozens to a handful of people.
- For startups, the emphasis shifts from hiring large engineering teams to building small teams that leverage AI for coding, quality assurance, and rapid prototyping.
"The software that we use every day... maybe you want a better one, you just ask AI to write it someday. That was the jaw-dropping moment for me... Software is going to be like the Replicator in Star Trek."
— Ron Gula [07:45]
- Cautions that the real shift is not immediate developer layoffs, as some assume, but slower team growth: instead of scaling engineering headcount as rapidly, companies reallocate resources to QA and customer success.
Ethics and Intellectual Property in AI-Generated Content
- Stresses the evolving challenge of tracing IP in generative content and ensuring AI models don’t inadvertently infringe on copyrighted creations (e.g., generating Star Wars imagery).
- Cites portfolio company Starseer, which audits LLMs for training-data origin and bias, an area he calls the "future of cyber."
"However, the intellectual property I still am very, very concerned about. You know, what are these things being trained on and what they're doing."
— Ron Gula [13:36]
- Notes that despite fears around AI automating jobs away, it also creates new roles (e.g., recently hired an animator for his content).
AI, Cybersecurity, and an Uncertain Future
- Observes that there’s no “9-1-1” equivalent for cyberattacks—most organizations are left to fend for themselves, driving a market for diverse cyber solutions.
- Warns that the advantage remains with attackers, and complexity continually increases risk.
- Predicts that AI will both mitigate and multiply risks, depending on implementation quality.
"Let's just say the advantage is on the attacker."
— Ron Gula [17:59]
‘Datacare’: The New Healthcare
- Advocates for a ‘datacare’ mindset: treating the care and stewardship of personal and organizational data with the same seriousness as healthcare.
- Points out that major life events (caring for elders, child safety, managing a digital legacy) often trigger data-risk decisions, and recommends intentional digital awareness at those moments.
"I like to think of Datacare. It's just as important as healthcare, and it really frames this whole conversation of what we're doing as a society going forward, including artificial intelligence."
— Ron Gula [21:52]
AI’s Perils: Garbage In, Garbage Out
- Expresses “biggest fear”: AI models endlessly retraining on their own synthetic data, leading to degraded, meaningless output (the “AI Idiocracy” scenario).
- Draws analogies to aircraft engines ingesting their own exhaust, and to cultural touchstones (Dream Theater, the film Idiocracy), warning that over-reliance on AI for critical thinking could make society "stupider."
"What scares me the most about AI is eventually this concept of consuming and retraining on bad data... The more we rely on them to not be creative, but to do thinking for us... we all get a little bit stupider."
— Ron Gula [23:21]
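Gula's "retraining on bad data" fear has a well-known statistical core, often called model collapse. As a minimal, self-contained Python sketch (not from the episode), each generation below fits a Gaussian to a finite sample drawn from the previous generation's fit; the fitted spread tends to shrink toward zero, a numerical analogue of the degraded, homogenized output he describes. Parameter choices here are illustrative, not canonical.

```python
import random
import statistics

def collapse_demo(generations: int = 2000, n_samples: int = 20,
                  seed: int = 42) -> list[float]:
    """Repeatedly refit a Gaussian to samples drawn from the previous
    generation's fitted model; return the fitted sigma per generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # the "real data" distribution
    sigmas = [sigma]
    for _ in range(generations):
        # Generation t+1 is trained only on generation t's synthetic output.
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        sigmas.append(sigma)
    return sigmas

history = collapse_demo()
print(f"fitted sigma: start={history[0]:.3f}, end={history[-1]:.3e}")
```

With a small sample per generation, estimation error compounds and the fitted variance drifts toward zero, so later "models" produce ever narrower, less diverse output even though each individual refit looks reasonable.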
Practical Advice and Call to Action
- Explore diverse online communities (not just mainstream news, but also Reddit, X, and LinkedIn) to recognize how scams, biases, and content targeting work.
- Get familiar with terms-of-service on digital tools; use AI itself to summarize or flag red flags in user agreements.
- Personal “datacare” extends to vigilance over data location, third-party access, and evolving digital footprints.
"Everybody needs to understand where their data is, where their data is flowing and what rights you're giving people to do with your data."
— Ron Gula [32:23]
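Gula's suggestion is to have an AI summarize or flag user agreements. As a stand-in illustration of the idea (a simple keyword heuristic, deliberately not an LLM, so the sketch stays self-contained), the following hypothetical scanner surfaces common red-flag clauses; the phrase list is an assumption for demonstration, not an exhaustive or authoritative set.

```python
# Toy sketch of "flag red flags in user agreements". Gula recommends
# using AI itself; a keyword heuristic stands in here for an LLM call.
RED_FLAG_PHRASES = {
    "third parties": "your data may be shared outside the service",
    "perpetual license": "the service keeps broad rights to your content",
    "arbitration": "you may be waiving the right to sue",
    "without notice": "terms can change silently",
}

def flag_terms(agreement_text: str) -> list[str]:
    """Return a human-readable warning for each red-flag phrase found."""
    text = agreement_text.lower()
    return [f"'{phrase}': {why}"
            for phrase, why in RED_FLAG_PHRASES.items()
            if phrase in text]

sample = ("By using this service you grant us a perpetual license to "
          "your content, and we may share usage data with third parties.")
for warning in flag_terms(sample):
    print("RED FLAG -", warning)
```

In practice you would replace the keyword table with a prompt to your preferred AI assistant asking it to summarize the agreement and list clauses affecting data sharing, content ownership, and dispute resolution.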
Notable Quotes & Memorable Moments
- "Animation... makes cyber and startups a lot more [approachable] to people." — Ron Gula [03:55]
- "90% of what [developers] code is somebody else's library... and it's a slight change to go from somebody else's library to having something written by Anthropic, OpenAI..." — Ron Gula [09:24]
- "I created a 3D avatar version of myself... people look at me talking to myself and they're like, that’s just like science fiction." — Ron Gula [15:26]
- "What you see on the Internet is not there... a lot of the content that people experience is being created for them sort of on the fly and that can be used to deceive them." — Ron Gula [27:57]
Important Timestamps
- 02:06: Gula’s journey into cyber, startups, and AI
- 03:20: The power of animation in demystifying cybersecurity
- 07:45: The “jaw-dropping moment”—AI as the software replicator
- 13:36: Intellectual property challenges in generative AI
- 17:59: The complex, asymmetric nature of cyber risk
- 21:52: ‘Datacare’ and the future of digital risk management
- 23:21: Risks of retraining AI on synthetic data; warnings of "AI Idiocracy"
- 27:57: “What you see on the Internet is not there”—on deception and digital realities
- 32:23: Tips on user agreements and active data stewardship
Rapid Fire and Predictions
- Most overrated tech trend: US social media "sovereignty" debates (e.g., TikTok bans) [26:33]
- Most underhyped AI breakthrough: The personification of AI (and how quickly we anthropomorphize tools) [27:12]
- Recommended Book: The Three-Body Problem – highlights how digital realities and deception shape perception [27:57]
- Bold Prediction: Major breaches of leading AI platforms are inevitable, driving a surge in private, internal AI development [28:56]
How to Connect with Ron Gula
- Website: Gula Tech Adventures
- LinkedIn, X (Twitter), Reddit, YouTube—search for "Gula Tech"
- All portfolio, datacare resources, and animated content available via his website [32:41]
Tone and Final Reflection
This episode blends deep expertise with approachable storytelling. Gula’s insights are practical, occasionally humorous, and always grounded in real-world impact, whether he’s animating cybersecurity concepts for broad audiences or probing the “black box” of AI ethics. Dr. Nall’s conversational style draws out both the operational realities and philosophical stakes of AI’s rapid adoption in the cyber world, making this episode valuable for leaders, innovators, and anyone navigating a future where “the advantage is on the attacker.”
Listeners finish with both new strategies for data vigilance and important questions about what’s real, what’s synthetic, and how to keep both innovation and ethics front and center.
