Making Sense with Sam Harris
Episode #466 — "What Is Technology Doing to Us?"
Guest: Nicholas Christakis
Date: March 24, 2026
Episode Overview
In this episode, Sam Harris is joined by Nicholas Christakis—MD, sociologist, and Director of the Human Nature Lab at Yale—for a wide-ranging conversation focused on the societal and psychological impacts of recent technological advances, particularly in information technology, social media, and artificial intelligence. Together, Harris and Christakis grapple with the consequences of technology on human behavior, cooperation, and mental health, considering both the threats and potential remedies. The dialogue maintains a tone of candid skepticism, philosophical reflection, and a touch of irony—offering both personal anecdotes and empirical insights.
Key Discussion Points & Insights
1. Decade in Review: The Social Costs of Recent Technology (01:21–03:18)
- Christakis' “Postmortem on the Present”:
Christakis outlines a largely pessimistic view of the last decade in technology, echoing concerns raised by thinkers like Jonathan Haidt. He asserts that:
- Communication technology has been “quite harmful” in fueling polarization, anomie, and mental health crises.
- The rise of surveillance, even domestically, is characterized as “quasi totalitarian.”
- He recalls being initially skeptical of “off-the-grid” behavior, only later coming to understand its allure given omnipresent tracking.
- Despite these setbacks, Christakis draws an analogy to past environmental cleanup efforts, speculating that society may eventually “clean everything up,” though he insists it will “take half a generation.”
“We have dug ourselves into quite a hole… I think we will look back at this period as just that one in which we yielded to and were adversely affected by, and ultimately… overcame some of these threats.”
— Nicholas Christakis (02:08)
2. Personal Tech Habits & Social Media Disillusionment (03:18–05:44)
- Social Media Withdrawal & Platform Shift:
- Christakis shares his shift away from Twitter/X due to increasing toxicity, trolling, and the invasion of conspiracy theories.
- He now prioritizes Bluesky for scientific discourse, makes minimal use of Facebook and LinkedIn, and recently launched a science-focused YouTube channel.
- He highlights his initial attraction to Twitter as a source of curated expertise and laments the platform's degeneration.
“In the last few years, I found [Twitter] to be just incredibly toxic… even when I tried to follow only my own people, [my feed] became full of garbage.”
— Nicholas Christakis (04:13)
- Sam Harris’ Digital Retreat:
- Harris relates that his solution was even more drastic: he deleted his account entirely and now visits Twitter only for breaking news.
3. The Rise of “AI Slop” and the Erosion of Trust (05:44–07:56)
- Content Degradation & Algorithmic Manipulation:
- Christakis recounts how valuable expert threads (e.g., a military tire expert during the Ukraine war) have disappeared, replaced by inane or misleading AI-generated content (“AI slop”).
- He shares a humorous anecdote about being targeted with fake animal videos, blurring the line between reality and fiction.
- Christakis voices concern over privacy, noting that personal communications on platforms can be used for AI training without user knowledge.
“Somehow these algorithms figured out that I like to look at baby elephants… then the algorithm started feeding me slop… initially I was like, really taken in by this stuff.”
— Nicholas Christakis (06:46)
4. Harm to Youth and Potential Remedies for Social Media’s Ills (07:56–10:04)
- Harms, Lawsuits, and Section 230:
- Harris and Christakis discuss ongoing litigation in California targeting social media companies and Jonathan Haidt’s work on social media’s harm to teenagers.
- Christakis highlights a key tension: anonymity enables abuse but is also a bulwark against totalitarianism.
- Christakis suggests that privileging non-anonymous, credentialed accounts (akin to early Twitter’s blue check marks) might help.
- The conversation briefly touches on the pros and cons of Section 230. Christakis expresses ambivalence about outright revocation, noting that the provision enabled the Internet’s rise but now also shields platforms from accountability for abuse.
“I think there will be social media companies which require… non-anonymity and which people then privilege non-anonymous accounts, which I think will help.”
— Nicholas Christakis (09:41)
5. AI’s Impact on Human Interaction and the Learning Curve Ahead (10:04–15:39)
- Toward Reputable Information Sources:
- Christakis speculates that the deluge of unreliable information may push society back toward “privileging reputable sources” and trusted media brands.
- The Dilemma of AI’s Risks and Benefits:
- Christakis compares debates surrounding AI to the scene from Fiddler on the Roof—multiple, opposing expert opinions, all seemingly correct.
- He refrains from strong forecasts about existential AI risk, given the diversity of expert opinions (citing Sam Altman’s variable estimates of extinction risk).
“This is how I feel when I listen to debates by experts on AI… They can’t both be right. And that’s also true.”
— Nicholas Christakis (13:34)
- Social Effects of Interacting with AI:
- Christakis illustrates with an Alexa example: users learn to command machines curtly, but children may generalize that rudeness to interactions with people.
- He describes his lab’s experiments showing that even “dumb AI” can act as social catalysts—improving human cooperation and collective performance by serving as intermediaries.
“You can think of the AI as a kind of catalyst… that just facilitates the interaction of humans and helps optimize them.”
— Nicholas Christakis (15:26)
6. The Blurring Line Between Machine and Human Interaction (15:39–18:34)
- Unintended Socialization Consequences:
- Harris wonders about the possible erosion of civility as people become accustomed to barking orders at AIs, especially with the advent of humanoid robots.
- He draws on discussions with Paul Bloom and fiction like Westworld to question whether interacting with humanlike machines could desensitize people to cruelty, or conversely, trigger a resurgence of social graces.
“Whenever I have spoken about [robots], I think we can stipulate that we will eventually get out of the uncanny valley and have robots that look, if not perfectly human, in some sense better than human… Would you want our AI shaped like that?”
— Sam Harris (17:12)
- The “Bug Light” for Psychopaths Thought Experiment:
- Harris asks whether simulated violence toward humanlike robots (Westworld-style) would actually attract destructive personalities, or whether the emotional and societal consequences would deter most people.
- Both speculate on the possible return of politeness and empathy when interacting with sufficiently humanlike AIs.
Notable Quotes & Memorable Moments
- "We have dug ourselves into quite a hole… I think we will look back at this period as just that one in which we yielded to and were adversely affected by, and ultimately… overcame some of these threats."
  — Nicholas Christakis (02:08)
- "In the last few years, I found [Twitter] to be just incredibly toxic… even when I tried to follow only my own people, [my feed] became full of garbage."
  — Nicholas Christakis (04:13)
- "Somehow these algorithms figured out that I like to look at baby elephants… then the algorithm started feeding me slop… initially I was like, really taken in by this stuff."
  — Nicholas Christakis (06:46)
- "They can’t both be right. And that’s also true."
  — Nicholas Christakis (13:34)
- "You can think of the AI as a kind of catalyst… that just facilitates the interaction of humans and helps optimize them."
  — Nicholas Christakis (15:26)
Important Timestamps
- 01:21 – Christakis' “postmortem” on technology’s societal effects
- 03:18 – Discussion of personal social media use and disillusionment
- 05:44 – Reflection on the loss of substantive content and the rise of “AI slop”
- 07:56 – Exploring remedies: anonymity, social contagion, and legal action
- 10:04 – On future media consumption and the possible renewal of trust in institutions
- 11:56 – Christakis’ “Fiddler on the Roof” analogy regarding the AI debate
- 13:19 – Referencing AI extinction risk (Sam Altman example)
- 15:39 – Lab research: AI as a catalyst in human cooperation
- 16:00 – Theorizing about rudeness transfer from machine-human to human-human interactions
- 17:12 – The Westworld thought experiment: moral risk of humanoid robots
Summary
This episode of Making Sense presents a textured, sometimes uneasy conversation about our relationship with technology: how recent advances have destabilized trust, challenged mental health, and scrambled our social norms. Both Harris and Christakis express uncertainty about remedies, but remain hopeful that, through both legal strategies and social adaptation, we might emerge wiser and fitter for this new age. Their insights are delivered with philosophical humility, skepticism, and a touch of dry humor, offering listeners a nuanced lens on the costs and possible futures of our technological society.
