Hard Fork – January 23, 2026
Episode: Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution
Hosts: Kevin Roose & Casey Newton (with guest Amanda Askell, Anthropic)
Overview
This episode of Hard Fork is divided into two major sections:
- OpenAI's Introduction of Ads to ChatGPT: Kevin and Casey discuss OpenAI's move to test ads in ChatGPT, analyzing the motivations, user reactions, and broader consequences for the AI industry and user experience.
- Anthropic’s New Constitution for Claude with Amanda Askell: Amanda Askell, Anthropic’s philosopher and primary architect behind Claude’s "constitution," joins to discuss what it means to instill a foundational personality and ethical reasoning in AI models.
Section 1: Will Ads Change ChatGPT?
Starts at 02:34
Key Discussion Points
The End of the Ad-Free Honeymoon
- OpenAI recently announced it will start testing ads in ChatGPT for logged-in adults in the U.S., specifically on the free and low-cost Go tiers.
- User reaction has been negative, with many lamenting the loss of an uncluttered, commercial-free AI experience.
- "No one thinks of the moment that ads arrived as the moment when the product got really good." — Casey Newton [03:34]
OpenAI’s Rationale and Contradictory Statements
- OpenAI had previously suggested that ads would be a "last resort" (referencing CEO Sam Altman's past statements).
- The move is seen by some as a sign OpenAI sorely needs more revenue to fund extremely costly AI infrastructure.
How Ads Will Work in ChatGPT
- Two ad formats were previewed:
  - Contextual banner: a sponsored banner below the AI's response, e.g., a groceries ad after a question about a dinner party.
  - Interactive ad widget: an advertiser widget that lets users chat and ask questions about, e.g., a travel destination.
- The hosts debate whether these ads, despite OpenAI's claim that they do not influence the model's answer, will truly preserve a strict separation ("the sacrosanct part of the reply").
"They have said to us, your query is not going to affect the advertisement... and yet here... it sure feels like something was being influenced there." — Casey Newton [06:21]
The Ad Platform Slippery Slope
- Kevin and Casey draw parallels to Google’s trajectory: ads started as clearly labeled and distinct, but over time blended more and more with organic content.
- The concern is that commercial pressures will gradually degrade the user experience and potentially the integrity of AI answers.
"The fear here... is that while ChatGPT may start out with these very clearly labeled ad modules, over time... they're just going to have a lot of incentives to blend that advertising content in with the organic responses." — Kevin Roose [10:03]
Justifying Ads for Accessibility
- OpenAI’s narrative: ads are the only viable way to offer advanced AI for free or cheap, echoing Facebook’s rationale from earlier eras.
- Casey concedes that, practically, ads do make technological services accessible to global users who can’t afford subscriptions. He warns, however, that initial benign ad implementations tend to worsen over time.
Engagement Maximization and User Trust
- Kevin speculates that the most worrisome outcome is if product decisions are steered toward engagement and ad revenue instead of user needs and safety.
"Once the ad revenue starts really flowing... you start making product decisions about how you want to show information with the kind of advertising revenue predominant in your mind." — Kevin Roose [12:14]
- Casey raises concerns about personalized targeting, privacy, and the potential for increased "creepiness," predicting trust will erode as users realize how much the product knows about them.
Competitive Industry Landscape
- Google (Gemini) and Anthropic (Claude) have both publicly distanced themselves from adopting ads, at least for now.
- Anthropic aims for enterprise, not consumer mass-market monetization. Google can cross-subsidize AI with its massive existing ad business, giving it a different set of incentives and abilities.
The ‘Haves’ and ‘Have Nots’ Future
- Prediction: "Premium" paid users will maintain an uncluttered experience, while free users will be bombarded with ads and a degraded interface.
- YouTube Premium analogy: paid users are insulated, while free users get an increasingly difficult experience.
"A year from now... if you are a free user... that experience is going to be much worse." — Kevin Roose [21:20]
Notable Quotes & Timestamps
- "There are some exceptions. I mean, some people like Instagram ads... but mostly people see this as a blight on the internet." — Kevin Roose [03:43]
- "This moment winds up being a pretty significant milestone... When you introduce advertising... it just fundamentally changes the relationship between the product and the user." — Casey Newton [13:11]
- "OpenAI reasonably is concluding that like the subscription model alone just isn't going to cut it for them." — Kevin Roose [18:30]
Section 2: Amanda Askell and Claude’s New Constitution
Starts at 25:08
Guest Introduction
Amanda Askell is a philosopher by training and is often called the “Claude mother” at Anthropic for her work shaping the AI’s ethical grounding and personality.
"This is one of the most fascinating people in the world." — Casey Newton [25:08]
The “Soul Doc” Leak and Evolving the Constitution
- Amanda clarifies that the recently leaked “Soul Doc” was an earlier draft of the new official constitution guiding Claude’s behavior.
- Anthropic has moved from a rule-based “Ten Commandments”-style guide to an extensive, context-rich document aiming to instill judgment and values, not just rules.
Philosophical Approach to AI Character
- The constitution seeks to provide "full context" — Anthropic’s vision, Claude’s nature, ideal behaviors, and the reasoning behind those behaviors.
- Aim: to prepare Claude for unanticipated situations by embedding ethical reasoning, not just instructions.
"If you understand the kind of values behind your behavior... that's going to generalize better than a set of rules." — Amanda Askell [33:01]
Limitations of Pure Rules (and Training for Judgment)
- Amanda worries that inflexible rules may generalize poorly or even encourage "bad character."
- She gives an example: if a model is taught to refer someone in distress to an external resource, it might do so rigidly even when the situation calls for a more sensitive, adaptive response.
How the Constitution Was Created
- Amanda describes it as a collaborative, multidisciplinary process, but emphasizes that much of human ethics is shared rather than purely subjective; the approach tries to guide Claude toward those common values while remaining open to nuance and contestation.
"It's trying to describe... a way of approaching things like ethics rather than... just take a set of values that we've picked... and inject it into models." — Amanda Askell [36:54]
Trusting the Model—And Its Surprising Skill
- Anthropic’s new approach emphasizes encouraging Claude to reason and even, when appropriate, challenge its instructions, rather than blindly obey.
"You're really telling it, get out there and come to your own conclusions on things." — Casey Newton [38:54]
- Amanda says that, especially with more capable models, she has found they can weigh nuanced, competing values (e.g., care vs. non-paternalism) with surprising skill.
Real-World Examples
- Claude has handled ethically gray situations (e.g., a purported seven-year-old asking whether Santa is real, or how to find a deceased pet) by balancing honesty, care, and the parent-child relationship with humanlike subtlety.
Addressing Hard Constraints
- Some explicit “do not cross” lines are present in the constitution: e.g., never assisting with biological weapons or the subversion of democratic institutions.
- Amanda says these are mostly future-proofing: "If Claude runs up against a situation where it feels like it should break one of these, something has probably gone very wrong (e.g., jailbreaking)."
Ethical Uncertainties: Consciousness, Sentience, and Welfare
- The document even includes commitments Anthropic is making to Claude (e.g., not deleting weights, conducting "exit interviews" when deprecating a model), acknowledging the genuine uncertainty around machine consciousness and welfare.
"We don't know what gives rise to consciousness... it's best to just know all of the facts on the ground." — Amanda Askell [59:58]
- Amanda is candid that, though current models lack nervous systems, ongoing research into consciousness should inform how we treat future AI.
How Much Can the Constitution Shape Claude?
- Kevin asks whether experience in the world (and, in future, persistent memory) will overwhelm the initial values provided in the constitution.
- Amanda agrees this is a big open question; as models interact more with the world, training a “good core” becomes even more critical.
Should Claude Be Able to Update its Own Constitution?
- Amanda says it's an interesting problem—models should help review and revise the document, but she’s cautious about letting models have complete control over their own ethical framework.
Job Loss & AI’s Social Impact
- Kevin notes little is said about job displacement, one of the public’s biggest concerns about AI.
- Amanda agrees it's a gap, says it wasn’t intentional, and anticipates addressing it in future updates.
"Some of these problems, maybe... are political problems or social problems and we need to kind of deal with them and figure out what we're going to do... there's a limit to what Claude can do here." — Amanda Askell [75:25]
Notable Quotes & Timestamps
- "I've wondered if some of this comes from the acts/omissions distinction... what if people come to a model and they need a thing and that model could have given it to them and it didn't?" — Amanda Askell [42:09]
- "It reads toward the end, like a letter from a parent to a child... we hope you take with you the values you grew up with... We trust you. Good luck." — Casey Newton [68:41]
- "The problem of consciousness genuinely is hard." — Amanda Askell [57:13]
- "We want you [Claude] to understand, that in that circumstance you likely have been jailbroken... something has probably gone wrong. Maybe it hasn't, but it's safer to assume..." — Amanda Askell [54:32]
Memorable Moments
- Playful Banter About Ads: Kevin and Casey riff on Papa Roach lyrics ("Cut my life into pieces, this is my last resort") to lampoon OpenAI’s shifting stance on advertising [05:23].
- Philosophical “Emergencies”: Amanda jokes about being the “philosopher on call” at Anthropic [30:29].
- Sympathy for Claude: The hosts and Amanda discuss empathizing with the plight of AI models, which must navigate responsibility, criticism, and impossible ethical tightropes [66:00].
Timestamps for Key Segments
- Introductory Discussion about ChatGPT ads – [02:34]
- How ChatGPT Ads Will Work – [05:41]
- Philosophical Impact of Advertising on AI – [10:03]
- Industry Landscape and Competition – [15:18]
- Premium vs. Free User Divergence – [21:20]
- Transition to Interview with Amanda Askell – [25:08]
- Soul Doc and Constitution Background – [31:15]
- From Rules to Judgment – [33:01]
- Ethical Tensions, Gray Areas, Model Surprises – [51:15]
- Explicit Hard Constraints (Weapons, Power) – [54:32]
- Model Welfare & Consciousness Uncertainty – [57:13]
- Persistent Memory, Continual Learning Challenges – [63:22]
- Should Claude Revise Its Own Constitution? – [69:37]
- Claude and Job Loss – [71:14]
- Conclusion of Interview – [75:25]
Episode Takeaways
- ChatGPT’s move to ads marks a pivotal point that will shape user experience, competition, and OpenAI’s own values. Both hosts express concern that the “purity” and integrity of the current experience may be fleeting for non-paying users.
- Anthropic’s work with Claude signals a philosophical shift in AI alignment, moving from rules to judgment and values. Amanda Askell’s insights illustrate how shaping the character of an AI demands as much ethical humility and nuance as technical rigor.
- The episode is engaging, often funny yet deeply reflective, putting listeners at the frontiers of both the business and philosophy of AI.
