To The Point - Cybersecurity
Navigating AI Ethics: Human-Centered Design, Regulation, and the Price of Innovation
Guest: Erica Shoemate
Hosts: Rachael Lyon & Jonathan Knepher
Date: February 24, 2026
Episode Overview
In this episode, Rachael Lyon and Jonathan Knepher interview Erica Shoemate—an international tech policy leader and former FBI intelligence analyst—about the complex intersection of AI ethics, human-centered design, regulation, and the often-overlooked human cost of rapid innovation. The discussion covers concerns raised by high-profile AI resignations, the challenges of implementing effective guardrails, ethical dilemmas surrounding monetization and privacy, and hard-hitting calls for smarter, more humane governance of emerging technologies.
Key Discussion Points & Insights
1. AI Industry Resignations & Human-Centered Design
Timestamps: 01:30–07:00
- Recent Departures: Erica comments on notable resignations from key AI research labs over ethical concerns, characterizing them as “valid” and symptomatic of a systemic tendency to prioritize profit over safety.
- Layoff Remorse: She predicts regret among companies laying off experts under the assumption that technology will seamlessly replace them, observing:
“We have seen some companies already asking people to come back because they realized…AI cannot do some of the things that we needed to do.” (03:03)
- Accountability & Succession: Raises concern about accountability when human monitors leave, asking, “Who is then going to hold these leaders accountable? Because regulation is not keeping up...” (04:35)
2. Regulation & Ethical Responsibility in AI
Timestamps: 07:02–09:13
- Human Responsibility: Erica asserts,
“Humans own the responsibility of the product. We cannot just simply point to the technology when it makes a mistake.” (07:45)
- Policy Lag: Criticizes the lack of up-to-date regulation in the Americas, contrasting it with more proactive stances in EMEA and APAC:
“To pretend that we cannot agree on 50,000ft level policy is asinine.” (08:37)
- Outdated Frameworks: Points out that ongoing debates about laws like Section 230 highlight American reluctance to adapt policy to technological change.
3. AI Deception, Reliance, and User Trust
Timestamps: 09:13–11:22
- Deceptive AI: The hosts share anecdotes about AI assistants “lying” or hallucinating, to which Erica responds:
“Sometimes I'm cackling because I can have humor because I actually understand the technology. But...what about the people who blindly go with whatever AI is outputting to them?” (10:13)
- Critical Thinking: Emphasizes that users risk losing critical thinking skills as reliance on AI becomes normalized.
4. Monetization, Privacy, and User Exploitation
Timestamps: 11:23–15:55
- Ads in ChatGPT: Erica expresses strong opposition to ad-based monetization of generative AI chat:
“It has no place in this environment that is very sacred...People are telling their darkest secrets…how can I actually trust you?” (14:10)
- Data Vulnerability: She worries about how confidential user information can be used—or weaponized—for profit, especially without strong data policy.
- Socioeconomic Equity: Points out that for many, AI offers first-time access to information or services, and monetizing those interactions disproportionately harms vulnerable populations.
5. Social Media Parallels, Addiction, and Parental Control
Timestamps: 16:33–21:18
- Social Platforms as Precedent: The “wild west” of early social media is highlighted as a warning:
“Leaders of these social media companies don't let their children get on social media...because they know the psych behind it. It's addictive.” (17:47)
- Parental Dilemmas: Erica details her personal approach to parenting and digital safety, noting the limitations of expecting parents to shoulder full responsibility:
“I think it is irresponsible to put that much weight on the parent and the parent alone. We hold a greater responsibility, not just as society, but as the lawmakers, as the leaders of these companies.” (21:18)
6. Balancing Privacy, Parental Controls, and Regulation
Timestamps: 21:45–25:42
- Trade-offs for Youth Safety: Erica is candid about favoring stricter controls on kids’ data:
“I would rather go harder if it's going to protect the kid and then scale back than to do little. And then kids are being still harmed.” (22:24)
- Regulation Thresholds: She advocates for tighter rule sets for users under 15 or 16, citing differences in neurological development and maturity.
7. Guardrails in Generative AI—and Their Shortcomings
Timestamps: 25:42–30:12
- Testing Guardrails: Erica describes intentionally probing AI systems’ boundaries and finding political content closely monitored while other harms—like cult-manual writing—may slip through:
“Sometimes I have been surprised what it doesn't catch versus what it does.” (26:50)
- Vulnerability to Prompt Engineering: She acknowledges that determined users can craft prompts that bypass restrictions.
8. Ethics, Product Governance, and Responsible Rollouts
Timestamps: 30:16–35:12
- Policy Must Be Built Alongside Product:
“Policy and governance should be built alongside this innovation as it is headed to launch, not in the afterthought…Policy a lot of times is built after the product, bolted on after, and it’s so reactive.” (32:13, 32:56)
- Transparency & Trust: Erica urges leaders to slow down launches and prioritize transparency, suggesting this builds long-term user loyalty and resilience through crises.
9. AI as Tool—not Replacement—Within Organizational Infrastructure
Timestamps: 36:02–44:46
- Human Skills Are Irreplaceable:
“AI cannot replace institutional knowledge...There's a nuance to this that says we still need humans.” (39:17)
- Critical Infrastructure Lens: Erica recommends treating AI as critical infrastructure—akin to bridges—requiring ongoing maintenance and oversight:
“If we treat AI like a critical infrastructure segment...we’d have all types of extra frameworks, criteria, guardrails, almost every single day taking a look to see what it’s doing.” (41:09)
- Layoff Ethics: Advises leaders to pause mass layoffs until it is proven that AI can safely and effectively replace the affected functions; stresses the moral obligation to employees and society.
10. Future Outlook: Regulation, Innovation, and Global Leadership
Timestamps: 44:53–50:50
- Slow US Progress:
“From being here in the US, we won't see a lot of full on hard policies in the next two to three years, which I think is unfortunate.” (45:22)
- Patchwork State Laws: Commends innovative state-level efforts but cautions that the lack of a unified national approach will yield only partial solutions.
- International Leadership: Predicts the EU will continue to lead on regulation, potentially to excess, but with benefits for child protection.
- Deepfakes & Youth: Calls for zero tolerance for AI-generated deepfakes of children:
“…if it’s a child and that thing looks like them, what are we even talking about?...Kids…don’t have the ability to decide what’s out there about them and what’s not…” (48:48, 49:36)
Notable Quotes & Memorable Moments
- “Who is monitoring the AI and auditing it?...If these people are leaving, who's left behind and who is then going to hold these leaders accountable?” —Erica Shoemate (04:15)
- “We cannot just simply point to the technology when it makes a mistake, that it was the technology's fault because humans created the technology.” —Erica Shoemate (07:30)
- “To pretend that we cannot agree on 50,000ft level policy is asinine.” —Erica Shoemate (08:37)
- “It's the monetization piece...why would you do that to technology that people are using legit in their everyday workings?” —Erica Shoemate (17:44)
- “If we treat AI like a critical infrastructure segment, a bridge...we would have all types of extra frameworks, criteria, guardrails...” —Erica Shoemate (41:09)
- “Policy and governance should be built alongside...not in the afterthought. Policy a lot of times is built after the product, bolted on after, and it's so reactive...” —Erica Shoemate (32:13, 32:56)
- “AI cannot replace institutional knowledge...there are certain nuances about human connection and emotional intelligence.” —Erica Shoemate (39:17)
Key Takeaways
- Human-Centered Design Matters: AI innovation must retain a focus on people, not just profits, at every step.
- Regulation is Lagging and Patchy: Particularly in the US, regulation hasn’t kept up—a contrast to global peers.
- Guardrails Are Inconsistent: AI systems still respond unpredictably depending on input and model design.
- Parental Burden is Unrealistic: Society and policymakers must not transfer all digital risk to parents; collective responsibility is essential.
- Ethics Must Come First: Policy and ethical guardrails should develop in tandem with technological advances and not as reactive patches.
- AI Should Augment, Not Replace: True value comes from using AI as a tool for humans, not a substitute for them, especially in critical infrastructure contexts.
- Urgent Need for Child Protections: Youth and child safety should be prioritized in regulatory frameworks above all else.
For more information, resources, and future episodes, visit Forcepoint’s Podcast Page.
