Podcast Summary: Right About Now – "AI Productivity vs AI Security: The Human Risk Behind AI" with Fable Security
Podcast: Right About Now – Legendary Business Advice
Host: Ryan Alford, The Radcast Network
Guest: Nicole Jang, Co-founder & CEO, Fable Security
Release Date: March 17, 2026
Episode Overview
This episode dissects the tension between leveraging AI for business productivity and the significant security risks that rapid adoption can invite—particularly those rooted in human behavior. Host Ryan Alford and guest Nicole Jang (Fable Security) dig deep into how organizations can harness AI without unintentionally exposing sensitive data, highlighting why the biggest cybersecurity threats come from within: employee choices, habits, and everyday interactions with AI tools.
Key Discussion Points & Insights
1. The Human Factor: The Greatest AI Security Risk
Nicole Jang introduces Fable Security’s mission:
- Fable Security specializes in a "human risk platform" designed to guide employees toward safer software behavior in real time, a step beyond typical once-a-year security training.
- Jang and her co-founder’s ad tech backgrounds led to leveraging personalization and real-time interventions for better employee security compliance.
- Both founders previously focused on AI-driven phishing detection, which highlighted the rapidly evolving social engineering threats made easier by AI.
Key Quote:
“We deploy just in time personalized interventions when we see employees doing some things that might expose more risks than necessary for our organization. We do this better than typical annual security compliance training.”
— Nicole Jang [03:27]
2. Rapid AI Adoption: Is Security Losing the Race?
Alford’s concerns about the pace of AI integration:
- Draws parallels to early internet and social media oversharing: “...We sure are openly giving up a lot of information. Now I’ve sort of had the same epiphany with AI.” [02:21]
- Notes that AI is being adopted at breakneck speed without equal advancements on the security side.
Jang’s observations:
- Organizations vary widely in their AI adoption speed, often based on their digital maturity and industry.
- Tech-driven and mature companies get the most from AI, investing heavily in security.
- Traditional, slower-moving firms often "just check the box" for compliance.
- Developer-centric firms encourage experimentation, mitigating risk where possible.
- Regulatory environments (e.g., HIPAA, PCI) restrict how boldly firms can experiment.
Key Quote:
“Companies who are really thinking about investing in digital AI technology transformation... really double, triple down in their security investments and then some may still be checking the box.”
— Nicole Jang [06:43]
3. What Data Are We Really Exposing?
Alford’s real-world concerns:
- Many businesses are unsure what their employees may be putting into AI tools or where that data is going.
- “Sometimes you don’t know what you don’t know. But I know I don’t know something that I might should know about where all this data is going...” [08:51]
Nicole Jang’s analysis:
- Fundamental risks arise from two main sources:
- Lack of clear intent in prompts, leading employees to accidentally expose sensitive information.
- Overzealous integration of AI across company data systems without cleanup or permissioning.
Key Quotes:
“These things are human instructed. The way you want some things to be done requires hyper clarity on the outcome you’re looking for.”
— Nicole Jang [11:55]
“No one thinks about permissioning, no one thinks about data, they think about adoption... If your house is not clean... the underlying data is chaotic.”
— Nicole Jang [12:30]
- The acceleration of AI is pushing foundational security and data hygiene to their limits.
- Attackers are exploiting AI-driven vulnerabilities at high speed and sophistication.
4. The Arms Race: Offense vs Defense in AI
Superpower metaphor:
Alford likens the AI/cyber contest to “superhero movies”—both attackers and defenders now wield the same tools, so stakes and tactics are elevated.
Jang’s perspective:
- The security world divides into offensive (those trying to break systems preemptively) and defensive (those shoring up defenses).
- AI simultaneously empowers both sides, demanding creativity and anticipation from defenders.
Key Quote:
“I feel like we’re playing chess, right? Attackers can be offensive, we’re defensive. And so that’s why in cyber you also see offensive teams who’s trying to break systems ahead of time. We can think like attackers too...”
— Nicole Jang [14:33]
5. Practical AI Security Hygiene Tips
Prompt efficiency tip:
- Pleasantries in prompts ("please," "thank you") cost more tokens, raising both usage cost and inefficiency.
- “Just be direct, just remove all the pleasantry. That’s crazy for me. I’m Canadian... Turns out that’s not good for AI. Turns out it’s costly.”
  — Nicole Jang [16:22]
- Both host and guest admit to sometimes getting "mean" or impatient with AI, but realize direct prompts are more efficient (and cheaper).
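The token-cost point above can be illustrated with a short sketch. Real models count tokens with a BPE tokenizer (e.g. OpenAI's tiktoken); the whitespace split below is a rough, dependency-free stand-in, and the example prompts are hypothetical, not from the episode.

```python
# Rough illustration of "pleasantries cost tokens": every extra word in a
# prompt adds tokens, and tokens are what usage is billed on.

def rough_token_count(prompt: str) -> int:
    """Very rough token estimate: one token per whitespace-separated word.
    A real BPE tokenizer would give different (usually higher) counts."""
    return len(prompt.split())

polite = "Hello! Could you please summarize this report for me? Thank you so much!"
direct = "Summarize this report."

print(rough_token_count(polite))  # the polite prompt costs noticeably more
print(rough_token_count(direct))  # the direct prompt asks for the same thing
```

Even under this crude estimate, the direct prompt is a fraction of the size; with a real tokenizer and per-token pricing, the gap compounds across thousands of employee prompts.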
Actionable Cyber Hygiene Checklist:
- Be curious, but cautious: Always ask if you should be concerned about the data you’re sharing with AI.
- Leverage AI for security: “...AI, can you please sanitize my data, sanitize my queries...” [17:42]
- Don’t overshare: Never put credit card numbers, passports, or similarly sensitive info into AI prompts or tools.
- Regular audits: Ask AI to help identify whether any data has been shared that shouldn’t be, and review regularly.
Key Quotes:
“Just ask for it to omit and AI will do the job for you. Regularly go through — hey, are things shared that really shouldn’t be? AI can probably find out... then you can take action.”
— Nicole Jang [17:42]
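The "sanitize your queries" and "don't overshare" tips above can be approximated in code. The sketch below is a minimal, hypothetical pre-prompt filter, not Fable Security's product or method; the `redact()` helper and the two regex patterns are illustrative assumptions, and a production filter would cover many more PII types (SSNs, passport numbers, API keys).

```python
import re

# Illustrative patterns only: a credit-card-like digit run and an email address.
# Assumed for this sketch; real PII detection is far more involved.
PII_PATTERNS = {
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),      # 13-16 digits, optional separators
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # simple email shape
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tags before sending a prompt to an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("My card is 4111 1111 1111 1111 and my email is ryan@example.com"))
# prints: My card is [CARD REDACTED] and my email is [EMAIL REDACTED]
```

This mirrors the episode's advice in spirit: strip sensitive values out of the prompt on your side, rather than trusting every AI tool downstream with the raw data.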
6. The Meta Loop: Using AI to Secure AI
Alford comments on the recursive nature of AI safety:
"Everything’s meta with this because AI can assist in whatever thing we’re trying to solve that might be related to AI." [18:27]
Jang underscores AI’s positive role as a “reasoning partner” and security assistant:
“AI is like your reasoning partner and just does so much. I’m really excited about the future of where this technology can go.”
— Nicole Jang [18:43]
Notable Quotes & Memorable Moments
- Ryan Alford [03:05]: “And so now we’re telling AI our deepest secret so it can help us solve puzzles, write contracts... where is that data going?”
- Nicole Jang [11:55]: “AI is running—basically giving you insights faster than a human analyst... but if you don’t know what you’re looking for and you ask a stupid question that exposes risk...”
- Nicole Jang [16:22]: “If you add please and thank you, it takes up more tokens. Just be direct, just remove all the pleasantry... Turns out that’s not good for AI. Turns out it’s costly.”
- Nicole Jang [17:42]: “...Don’t give out your credit card information, don’t give out your passport, don’t give out your blood type. Ryan. Let’s just do the same thing in the AI world...”
Timestamps for Major Segments
- [01:00] — Nicole’s intro to prompt efficiency and token cost.
- [02:20] — Nicole Jang introduces herself, Fable Security’s mission, and human risk platform.
- [06:43] — Differences in speed and sophistication of AI adoption and security across industries.
- [11:55] — Core risks: From confusing prompts to over-integration, data hygiene, and security foundation stress.
- [14:33] — The AI-powered chess game between attackers and defenders.
- [16:22] — AI prompt etiquette: Why "please" costs you more.
- [17:42] — Nicole’s practical tips for AI-driven cybersecurity hygiene.
Where to Learn More
Fable Security: https://fablesecurity.com
Nicole Jang: Reach out via the company site; Fable Security is headquartered in San Francisco.
Tone & Style
The episode is candid and practical, eschewing fluff for hard-earned, real-world insight. Both host and guest are relatable, often blending humor with urgency as they share lessons—making cybersecurity approachable for technical and non-technical listeners alike.
For Listeners
Who should listen?
- Business owners, startup founders, CTOs, security professionals, and anyone adopting AI-powered tools who cares about safeguarding data—whether you’re leading a modern tech firm or just starting to dabble in generative AI.
Bottom Line:
AI can supercharge productivity, but people remain both the strength and the soft spot in any security posture. Think before you prompt—otherwise, you might be the human risk your company’s AI least expects.
