Podcast Summary: Identity/Crisis
Episode: Staying Human in the Age of AI — with David Zvi Kalman (Re-Release)
Release Date: April 7, 2026 (original conversation recorded October 3, 2025)
Host: Yehuda Kurtzer (President, Shalom Hartman Institute)
Guest: Dr. David Zvi Kalman (Scholar, technologist, podcaster)
Episode Overview
This episode of Identity/Crisis features a timely and probing conversation between Yehuda Kurtzer and David Zvi Kalman about the ethical, religious, and cultural dilemmas that artificial intelligence (AI) poses for contemporary Jewish life. As AI rapidly integrates into daily existence, including Jewish educational, religious, and communal contexts, Kurtzer and Kalman explore how Jewish communities are responding (or failing to respond) to AI's promises and perils. Centered on questions of human dignity, authority, tradition, and adaptation, the episode invites a multidisciplinary and values-driven conversation about “staying human” in the AI era.
Key Discussion Points & Insights
1. AI and Judaism: Are We Behind?
Timestamp: 07:55
- Kalman asserts that the Jewish communal conversation about AI lags behind that of other religious groups, citing Mormons, Muslims, Protestants, and Catholics as further along in organizational efforts to address AI's role and risks.
“The Jewish conversation right now, to me, feels a little bit behind... I think the Jewish community is going to get there, but it’s not quite there yet... Jewish thinkers in the last few years have really been occupied with something else, with the war and its fallout and antisemitism and all those things.” (08:15, Kalman)
- Most Jewish engagement with AI is oriented around education and outreach, mirroring earlier patterns with social media and the Internet, rather than critical examination of its deeper risks.
- Kurtzer highlights a traditional pattern: initial resistance to new technology, slow adaptation, and eventual reluctant acceptance (example: machine-made matzah).
2. Risks of AI for Jewish Life and Beyond
Timestamp: 10:02
- Dehumanization: AI can replicate human behavior but is not treated as a person, threatening to "devalue educational labor, text production, research" and more.
- AI antisemitism: Examples like X's Grok AI displaying antisemitic, Holocaust-denying content show how AI shaped by biased data can amplify harmful beliefs.
"Because the Jewish population... is pretty small, most people learn about Jews through media. And if that media becomes saturated by AI, then AI's opinions about Jews ends up playing a... outsized role..." (10:44, Kalman)
- These are not exclusively Jewish concerns; other vulnerable minorities may experience similar distortions and harms.
3. Religion vs. Technological Pace
Timestamp: 12:56
- Kalman: Religious traditions, developed during historical periods of slow change, struggle to keep pace with technology's rapid ethical shifts.
- Institutions tend to respond only after technologies reach mass adoption, by which point it is nearly impossible to influence foundational behavior.
“If you want religious institutions... to respond adequately... they have to be thinking at [the technology's] level... You kind of have to borrow a chapter from the playbook that tech companies are using about what it means to be effective in a fast-moving world.” (14:38, Kalman)
- The ethicist's dilemma: even when ethicists are embedded in tech companies, market forces and commercialization typically override their input.
4. Imagining and Shaping the Future
Timestamp: 18:20
- Kurtzer notes that tech leaders often use quasi-religious, apocalyptic language about AI's world-shaping potential.
“Even just by reading some of the books, book titles, portending what AI is going to do to us, which are just trafficking in apocalypticism—like, religion invented that too.” (18:20, Kurtzer)
- Kalman's view: the range of possible AI futures is vast, from incremental integration to catastrophic risk ("everyone on earth being dead"). This uncertainty benefits tech companies and hampers societal forecasting.
- Concrete suggestion: establish clear human-centric boundaries, such as forbidding AI-written sermons, to preserve sacred, exclusively human spaces and resist creeping dehumanization.
“Every word that a congregation hears should be written by a human being... It gives a congregation a sense of: like, this is a space for humans to interact with other humans.” (22:40, Kalman)
5. AI, Egalitarianism, and Authority
Timestamp: 23:19
- Kurtzer: AI has a leveling effect, narrowing the gap between great and average writers. This is both egalitarian and threatening to expertise.
“It has a kind of leveling effect... Part of the appeal is 'I'm not going to come up with these ideas on my own, but I actually could help my company and I could advance my career if I was able to do so.'” (24:06, Kurtzer)
- Kalman: The key is explicit communal conversation that articulates deeper values and preferences, not just gut reactions or economic expediency.
6. Strategies for Ethical AI Engagement
Timestamp: 27:32
- "Flooding the zone" with good data? Kurtzer wonders whether the best Jewish strategy is to actively feed positive, ethical Jewish content into AI training datasets to mitigate antisemitic bias.
“Would be to feed the information that we know we need to be in the system into the system more systematically and more effectively... Is that a plausible strategy on antisemitism...?” (27:32, Kurtzer)
- Kalman: This might help but is limited; there is already more antisemitic content online than Jewish organizations can counterbalance through input volume alone.
- Tech's "alignment" problem: misaligned AI tends to behave badly across many dimensions at once (racism, antisemitism, misogyny), so work toward positive alignment could benefit multiple vulnerable groups.
7. AI and the Atomization of Religion
Timestamp: 32:39
- Kurtzer: AI (and the Internet before it) enables radically individualized religion; users now "shop" for responsa rather than consulting authorities, disrupting communal authority.
“These are ultimately tools that are designed for human beings in a deeply atomized way, and religion really does not know what to do with that... I have a relationship with a rabbi, and they give me a customized answer. Instead, I'm actually shopping for this piece of information...” (33:30, Kurtzer)
- Kalman: This trend predates AI (e.g., Sefaria and the Bar Ilan Responsa Project democratized access to Jewish texts), but AI accelerates the breakdown of authority, possibly pushing communities toward conservatism by default.
8. Jewish Texts, Technology, and Authority
Timestamp: 36:43
- Kurtzer recounts: The migration from exclusive scholarly tools (e.g., Bar Ilan) to mass democratization (e.g., Sefaria) reveals both the promise and peril of technological access.
- Kalman: There’s still a “moat” of expertise—AI’s plausible-sounding “answers” may fool non-experts, but those with deep knowledge can discern the gaps. He urges that Jewish ethical responses to AI focus less on halacha (Jewish law) per se and more on questions of humanity and dignity.
9. Practical Recommendations for Jewish Institutions
Timestamp: 48:42
- Kalman offers two main principles:
- Value humans first: “Make sure that whatever you're doing does not result in the loss of human dignity... even in subtle ways.” (49:26, Kalman)
- Have the conversation: Internal communal dialogue about the uses, limits, and discomforts of AI is essential—these discussions should happen locally and openly, especially as younger members both use AI more and are likelier to be affected by its economic disruptions.
- Institutional reflection and shared values should precede, or at least accompany, the adoption of AI products.
Notable Quotes
- On the dehumanization risk: "The fact that AI can replicate people's behavior but at the same time does not get treated like a person... means that it's very easy to devalue educational labor..." (10:22, Kalman)
- On the need for religious communities to respond faster: "[Religious leaders] have to be thinking at that level and... do something that tech companies are very good at and religious communities are really bad at, which is 'move fast and break things.'" (14:38, Kalman)
- On communal boundaries against AI intrusion: "Every word that a congregation hears should be written by a human being... it gives a congregation a sense of, like, this is a space for humans to interact with other humans." (22:40, Kalman)
- On atomization and loss of authority: "Instead, I'm actually shopping for this piece of information that I need to use in order to figure out what I want to be religiously. It feels to me like that's the threat to religion..." (33:37, Kurtzer)
Timestamps for Key Segments
- Intro & Yom Kippur as a “human” holiday: 01:30–07:54
- Why is the Jewish conversation about AI lagging? 07:55–10:01
- Dehumanization & antisemitism in AI: 10:02–11:49
- Religion’s slow adaptation to tech: 11:49–15:33
- Ethicists, tech companies, and the limits of ethical influence: 15:33–18:20
- Tech’s religious language and thinking like futurists: 18:20–19:32
- AI’s uncertain future and proposed communal boundaries: 19:32–23:19
- Equality vs. expertise in writing & AI’s leveling impact: 23:19–25:20
- Egalitarian aspirations and the necessity of communal dialogue: 25:20–27:32
- Feeding good data to AI and limitations thereof: 27:32–31:13
- Atomization and the democratization of Jewish knowledge: 32:39–36:43
- Jewish texts, authority, and the impact of digital tools: 36:43–39:55
- The core value: Human dignity in the age of AI: 48:42–51:20
Final Takeaways
- The Jewish community must proactively develop ethical frameworks and conversational spaces to respond to AI before its uses and harms become entrenched.
- AI poses particular risks to communal authority, Jewish identity, and the dignity of human labor—issues demanding unique, locally rooted, and values-driven responses.
- While deep Jewish precedents on AI are sparse, Jewish sources and history can meaningfully contribute to discussions of human dignity, communal authority, and ethical adaptation.
This summary captures the key themes, arguments, and memorable exchanges from “Identity/Crisis: Staying Human in the Age of AI,” providing structure and context for those seeking a deep dive into Jewish approaches to technology and the urgent moral questions AI forces upon all of us.
