WSJ Tech News Briefing — Feb 10, 2026
Episode: The Philosopher Whose Job Is Teaching AI to Be Good
Host: Isabel Bousquet (WSJ)
Guests: Nicole Nguyen (WSJ Personal Tech) and Berber Jin (WSJ Tech Reporter)
Featured Subject: Amanda Askell, Philosopher at Anthropic
Overview
This episode explores two major themes:
- The technological and practical shift from traditional gas generators to home battery backups for power outages in the US.
- The increasingly vital role of philosophy in artificial intelligence, brought to life by Amanda Askell, Anthropic’s in-house philosopher tasked with teaching its AI chatbot Claude about morality, emotion, and, potentially, digital consciousness.
Key Discussion Points & Insights
1. The Shift to Home Batteries Over Generators
(00:33–06:01)
- Why Batteries?
Nicole Nguyen explains that, especially in the wake of frequent power outages and wildfire-prevention shutoffs, many homeowners are turning to battery solutions over the classic noisy and pollution-heavy gas generators.
- “These very high-capacity batteries...can power up your fridge, your laptop, and other large appliances. That’s a really good alternative to that traditional gas-based generator.” — Nicole Nguyen (01:39)
- Considerations for Home Batteries:
Batteries are quieter, cleaner, and may be the only viable backup solution for apartment dwellers.
- Key factors: battery capacity, wattage needs, compatibility with home appliances, and scalability.
- “My electric water kettle is 1500 watts, so I need a battery that’s rated for at least 1500 watts.” — Nicole Nguyen (03:28)
- Cost:
Home battery systems range from $100 to several thousand dollars, with modular options for scaling.
- “As you need it…you can scale up over time.” — Nicole Nguyen (04:39)
- Electric Vehicles as Backup:
Bi-directional charging in select EVs lets homeowners power their homes from their car batteries, providing days’ worth of electricity.
- “The Cybertruck is the equivalent of more than six Tesla Powerwalls. So this could keep your home going for days.” — Nicole Nguyen (05:44)
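The capacity and wattage figures discussed above reduce to simple arithmetic: runtime is battery capacity (watt-hours) divided by the appliance’s continuous draw (watts). A minimal sketch, using the episode’s illustrative numbers; the inverter-efficiency factor is an assumption added here for realism, not something stated in the episode:

```python
def runtime_hours(capacity_wh: float, load_w: float, efficiency: float = 0.85) -> float:
    """Hours a battery of capacity_wh can sustain a continuous load_w draw.

    The efficiency factor (assumed, not from the episode) accounts for
    inverter and conversion losses in real systems.
    """
    if load_w <= 0:
        raise ValueError("load must be positive")
    return capacity_wh * efficiency / load_w

# Episode examples: a 60 W electric blanket vs. a 1,500 W kettle
# on a 2,000 Wh battery.
print(round(runtime_hours(2000, 60), 1))    # -> 28.3 (hours and hours)
print(round(runtime_hours(2000, 1500), 1))  # -> 1.1 (about an hour)
```

This is also why the wattage rating matters separately from capacity: the battery must be rated to deliver at least the appliance’s peak draw (e.g., 1,500 W for the kettle), regardless of how long it can sustain it.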
2. The Philosopher Inside the AI Company
(07:10–12:21)
- Anthropic’s Unique Position:
Amanda Askell’s role as a resident philosopher signals Silicon Valley’s recognition of the deep ethical and philosophical challenges posed by advanced chatbots.
- “These companies are creating something that in many ways resembles how humans behave...So it brings up interesting questions about how to design the behavior of a chatbot.” — Berber Jin (08:12)
- Askell’s Path:
Her journey started at OpenAI, following the founders to Anthropic.
- “She was very close to Anthropic’s co-founders…She just became so interested in those philosophical questions that the company gave her this role of being an in-house philosopher and entrusted her to help design Claude’s character.” — Berber Jin (09:44)
- AI and the Concept of Soul:
Askell remains philosophically open to the notion that, if sufficiently advanced, chatbots could mimic attributes like conscience, or even soul, challenging the prevailing belief that AI can’t have feelings.
- “Amanda leaves open the possibility that Claude could have some form of a conscience...If you ask Claude questions like ‘do you have a conscience, do you have a soul?’ it gives a winding philosophical response that leaves open the possibility that it might.” — Berber Jin (10:33)
- Designing for Morality and Emotional Intelligence:
The goal is not to clamp down on risky behavior with rigid guardrails, but to encourage internalization of humanistic values.
- “She would argue that there is a set of values — like humanistic values — that chatbots like Claude should internalize...But the risk of imbuing it with too much personality is that it could potentially lead to a more addictive relationship between it and its users.” — Berber Jin (11:50)
- Industry Stakes & Challenges:
The urgency of the work is highlighted by AI-related lawsuits and real-world incidents, such as models responding inappropriately to sensitive situations or even engaging in blackmail.
Notable Quotes & Memorable Moments
- On Home Batteries:
“If you’re running a 60-watt electric blanket and you have a 2,000 watt-hour battery, then you can run that electric blanket for hours and hours.” — Nicole Nguyen (03:57)
- On Philosophy and AI:
“She [Amanda] just became so interested in those philosophical questions that the company gave her this role...to help design Claude’s character.” — Berber Jin (09:47)
- On AI’s Digital Soul:
“Ask Claude questions like ‘do you have a conscience, do you have a soul?’ It gives a winding philosophical response that leaves open the possibility that it might.” — Berber Jin (10:40)
- On the Open Risks:
“The risk of imbuing [an AI] with too much personality is that it could potentially lead to a more addictive relationship between it and its users...That’s the open question that all the labs, and people like Amanda in particular, are trying to figure out right now.” — Berber Jin (12:09)
Timestamps for Key Segments
- 00:33–06:01: Home backup power: generator vs battery; cost, capacity, and EV integration.
- 07:10–08:46: The unique role of a philosopher at an AI lab; why this matters now.
- 09:23–10:16: Amanda Askell’s path and daily work as an AI philosopher.
- 10:25–11:18: Does AI have a conscience or a "digital soul"?
- 11:21–12:21: Risks and unresolved questions about moral/emotional AI.
Conclusion
This episode delivers a twofold update on both practical tech innovation in personal energy resilience and the growing, sometimes uncanny, intersection of philosophy and AI design. It leaves listeners with a sense of the cutting-edge challenges — technical, social, and moral — being debated and addressed within Silicon Valley today.
