Episode Overview
Title: Why AI Needs Ethics First
Podcast: Tomorrow, Today
Host: Shekhar Natarajan
Guest: Nekia Nichelle
Date: February 18, 2026
Location: Live at CES 2026
This episode centers on the urgent need for ethics as the foundation of artificial intelligence (AI) development. Host Shekhar Natarajan and guest Nekia Nichelle explore how the rapid pace of AI innovation risks human dignity, amplifies societal issues, and why builders must ask not only “Can we?” but critically, “Should we?” Shekhar introduces the concept of “angelic intelligence”—AI that is inherently ethical, prioritizing human good rather than just efficiency or profit.
Key Discussion Points & Insights
1. The Dangers of Rushed Innovation
- Rapid AI deployment parallels the Industrial Revolution, but at a dangerous speed.
- “With artificial intelligence, even before you know it, at 3am the call center is gone because all it takes is for you to update the code and everything is automated. And that’s a scary thought…”
—Shekhar (00:54)
- Ethical consequences are an afterthought as companies chase efficiency and profit.
- Notable quote: “Whenever you’re going after efficiency, you pull out dignity somewhere, you make it inhuman, you dehumanize things. And so we need to slow down. We need to basically understand the consequences of what we’re doing.”
—Shekhar (01:18)
2. Efficiency vs. Dignity: Drawing the Line
- Historical perspective: The Industrial Revolution’s slow pace allowed society to adapt morally (e.g., weekends, child labor laws).
- Now AI changes entire industries overnight, leaving dignity vulnerable to “optimization.”
- Shekhar recounts how patchwork fixes to AI’s unintended harms fall short, referencing ChatGPT and cases like Adam Raine’s suicide (02:24).
- “Virtue has to be native, not an afterthought. Right? Like security, safety needs to be built into the system, not as an afterthought…”
—Shekhar (04:35)
3. Personal Roots of a Service Mentality
- Shekhar shares his upbringing in the slums of India and his parents’ selflessness despite extreme poverty.
- “Those aspects of humanity are not really captured today. And so all we are trying to do today is we are building technology faster. And then we are saying like, oh, if there is a risk, I’m going to add like a patchwork.”
—Shekhar (03:36)
4. From Ethical AI to Angelic Intelligence
- Transition in focus:
  - AI as purely efficiency-driven
  - Ethical AI: “Do no harm”
  - Angelic Intelligence: “Doing good—what good humans do and how do I mimic their behavior.”
—Shekhar (05:35)
- Amplifying human virtues, not just minimizing harms.
5. The Perils of Wrong Optimization
- Cites his own history as a “reformed optimizer” at companies like Coca-Cola, Disney, and Walmart.
- Explains the tragic case of a girl, Maya, who died during COVID-19 because supply chains optimized to deliver luxury goods over life-saving meds.
- “In optimizing for the cost of the delivery and looking at the profitability of the delivery route, we ended up choosing Hermes bag over like delivery medicine.”
—Shekhar (06:31)
- Acknowledges personal guilt, leading to a renewed moral obligation: “We should build things that we are proud of. Like I want to create a multi-generational impact not in wealth, but in dignity.”
—Shekhar (07:46)
6. People Over Profits: Building Trust
- “If people are treated well, they respond well.”
—Shekhar (08:04)
- Trust is the long game in both brands and technology: AI systems must prioritize humans to avoid catastrophic failure.
- Nekia shares: “I have a business, you know, and that’s one of my mottos is putting people first… my team loves me, you know, they love working for me because I put them first.” (09:18)
7. The Guiding Question: “Should We?”
- Urges innovators to pause on “Can we?” and prioritize “Should we?”
- “There’s so much focus on the ‘can we’... And we forget to ask this very simple question, ‘should we’ in the first place?”
—Shekhar (09:47)
- Tells a moving story about Margaret, a Walmart customer who gamed the healthcare system to obtain vital medication, highlighting the human cost of blind optimization and the lack of holistic thinking in technology (11:10–12:30).
- “No one looked at Margaret as Margaret… The question should have been, what should we do in this place?”
—Shekhar (12:30)
8. Closing Reflections: Building for Good
- Shekhar pledges his personal equity in his company to a foundation supporting humanity, emphasizing legacy and social obligation.
- “We are polarizing the society. We are atomizing each other so much… Now everyone fears the next guy and his own shadow. And so this is going to be deadly dangerous if you continue to build technology and deploy technology which amplifies this behavior and the dopamine with it.”
—Shekhar (13:21–14:42)
- Calls for a mass movement: “There should be like millions of me out there who should be building billions of angels, which makes this world a better place.”
—Shekhar (14:43)
Notable Quotes & Memorable Moments
- On the velocity of AI change:
  “Industrial Revolution we had like 70 years… now with artificial intelligence… everything is automated. And that’s a scary thought.”
  —Shekhar (00:35–01:00)
- On virtue & safety:
  “Virtue has to be native, not an afterthought. Right? Like security, safety needs to be built into the system, not as an afterthought...”
  —Shekhar (04:35)
- Defining ‘angelic intelligence’:
  “Now we are moving from AI which is efficiency focused, to ethical AI which is do no harm, to angelic intelligence which is doing good, right? What good humans do and how do I mimic their behavior.”
  —Shekhar (05:35)
- On asking the right question:
  “We forget to ask this very simple question, should we in the first place?”
  —Shekhar (09:47)
- On social connection:
  “Now everyone fears the next guy and his own shadow. And so this is going to be deadly dangerous if you continue to build technology and deploy technology which amplifies this behavior and the dopamine with it.”
  —Shekhar (14:23)
Timestamps for Important Segments
- 00:30–02:35 — The dangers of moving too fast with AI and the loss of dignity
- 03:35–05:44 — Shekhar’s personal roots and the call for virtue to be built-in, not patched on
- 05:45–07:57 — “Angelic intelligence” and stories of harmful corporate optimization
- 08:04–09:18 — Trust, human-centered design, long-term vs. short-term thinking
- 09:37–13:02 — Asking “Should we?” over “Can we?”; story of Margaret and the consequences of misaligned priorities in innovation
- 13:21–14:54 — Shekhar’s pledge, reflections on societal atomization & call for a new generation of ethical builders
Summary Flow & Takeaways
This conversation is a cautionary and inspiring journey through the pitfalls of AI’s unchecked progress and a passionate plea for designing technology with humanity at its core. Shekhar Natarajan, drawing on both personal history and high-level corporate experience, frames technology not as an end but as a means—one that must be grounded in service, dignity, and deliberate ethical intention. Rather than settling for “do no harm,” he calls innovators to aim for “angelic intelligence” that actively enhances human flourishing. Nekia Nichelle echoes these values, urging leaders and builders to radically reconsider what it means to put people first. The episode’s stories, quotes, and practical insights offer a blueprint for a future in which technology advances not at the expense of humanity—but in service to it.
