Hard Fork – "Celebrities Fight Sora + Amazon’s Secret Automation Plans + ChatGPT Gets a Browser"
Date: October 24, 2025
Hosts: Kevin Roose (New York Times), Casey Newton (Platformer)
Guest: Karen Weise (NY Times)
Episode Overview
This episode tackles three urgent frontiers in technology:
- The growing backlash toward OpenAI's video tool Sora, amid legal, ethical, and cultural controversy involving deepfakes of celebrities and historical figures.
- Karen Weise explores Amazon’s internal plans to automate hundreds of thousands of warehouse jobs, examining the technology and the company’s behind-the-scenes communications.
- The hosts share first impressions of ChatGPT Atlas, OpenAI's new experimental AI-powered web browser, exploring its functionality, target user base, and the security/privacy issues emerging around AI browsers.
The tone is often wry and skeptical, but deeply engaged with the real-world impacts and underlying incentives at play in the tech industry.
Section 1: The Sora Backlash – Deepfake Trouble at OpenAI
[Starts 02:06]
Key Points & Insights
-
OpenAI’s Sora Tool Sparks Controversy:
OpenAI's Sora, an AI video generator, has become the subject of public outrage due to its deepfake capabilities—particularly around videos impersonating deceased historical figures and celebrities without consent. -
Two Major Flashpoints:
- Historical Figure Deepfakes: After users made viral Sora videos with Martin Luther King Jr. giving nonsensical speeches or appearing in inappropriate and racially insensitive contexts, the King family, and the estate of other major historical figures, protested.
- Celebrity Deepfakes & Copyright: Hollywood is outraged after discovering Sora allowed deepfakes of A-listers (e.g., Bryan Cranston) and copyrighted characters, sometimes without proper opt-outs.
-
OpenAI’s Policy Reversal:
Originally, OpenAI claimed to let historical figures be depicted for ‘free expression’; after the backlash, they reversed course, blocking such content and conceding that “families should ultimately have control over how their likeness is used.”“...OpenAI believes public figures and their families should ultimately have control over how their likeness is used, which was a brand new policy as of the moment that they posted that.” – Casey (05:15)
-
Lack of Guardrails & Accountability:
Multiple stakeholders, including celebrities, their unions, and studios, say OpenAI failed to establish meaningful protections up front. The opt-out process for Hollywood was confusing, and in practice, anyone could generate deepfakes.“If I drive my car off the side of the road because there's no guardrail...I'm dead and I'm shouting at you from hell saying, where was the guardrail, Kevin?” – Casey (10:31)
-
Pattern of Behavior:
The hosts note this isn’t OpenAI’s first product-then-policy controversy (referencing Scarlett Johansson/Advanced Voice Mode and ChatGPT voice updates). They see a troubling pattern:“...rather than being chastised by that and learning from that experience..., maybe we should get their permission. It seems like they have not learned that lesson.” – Kevin (13:25)
-
Bigger Backlash—Public Fatigue with "AI Slop":
The broader public, not just affected families or copyright holders, appears exhausted and frustrated by what they see as AI-fueled cultural pollution.“...the default feeling about SORA is...‘this is bad and I hate it.’ This is just sort of giving them the ick.” – Casey (12:41)
-
OpenAI’s Strategic Calculations:
Despite outcry, the company appears to be betting that launching fast and accepting backlashes is a viable market play—a “regulatory arbitrage” reminiscent of early YouTube.
Notable Quotes
- On Sora’s "purpose":
“Let me be clear: the only reason to use SORA is to create a video of someone doing something that they would not ordinarily be doing. It is not a technology to [help] people give beautiful speeches about civil rights.” – Casey (07:04)
- On OpenAI’s business motivation:
“They have to figure out ways to pay for their enormous ambitions...and not all of those are going to be obviously pro-social and beneficial things, but the ends will justify the means.” – Kevin (18:29)
- On policy inertia:
“...OpenAI is also a company that is rushing things out and has not always thought a lot in advance about what guardrails should be up there.” – Kevin (68:31)
Section 2: Amazon’s Secret Automation Plans
[Starts 27:19]
Guest: Karen Weise, NY Times
Key Points & Insights
-
Scoop: Internal Amazon Docs Lay Out Plans for Mass Automation
Karen Weise obtained Amazon plans showing a multi-year goal: automate 75% of warehouse operations, with hundreds of thousands of jobs (up to 600,000) potentially disappearing.“The core...is an important strategy document...They talk about things like ‘bending the hiring curve.’ Their stretch goal is to keep it flat over the next decade, even as they expect to sell twice as many items.” – Karen Weise (30:40)
-
Technological Advances Enabling the Shift:
Acquisitions like Covariant (AI robotics) are helping Amazon’s new robots handle more complex warehouse tasks (e.g., stacking, sorting, basic judgment), though some jobs—like dealing with unpredictable inbound stock or maintenance ("fixing the robots")—will persist. -
Sensitive Internal Messaging:
Internal docs reveal Amazon strategizing on “controlling the narrative.” They deliberate terminology ("robot" vs. "cobot" for collaborative robot), and invest in local community goodwill (like toy drives) to soften PR blow from job losses.“Should we not talk about robots? Should we talk about a cobot...should they deepen their connection to community groups?” – Karen (31:26)
-
Avoiding Layoffs—Attrition, Not Firing:
Amazon plans to manage perception by running down headcounts via attrition, not mass layoffs, as they introduce robots into warehouses. -
The Economic Scope:
Even "just" $0.30 saved per item adds up to massive profit at Amazon’s scale.“...someone just described this to me...It’s a business of cents. Because it’s so big...when you multiply that by the billions of items they sell, it does add up.” – Karen (41:03)
-
Honesty & ‘Corporate Euphemism’:
The hosts lament a “corporate conspiracy” to sugarcoat the transition, using euphemisms instead of forthrightness.“It just kills me that there's...this literal...corporate conspiracy going on to automate potentially millions of jobs...and, like, no one can just be a grownup and talk about it.” – Kevin (45:39)
Notable Moments & Quotes
-
On job transformation:
“There will be this growing number of people that are technicians. So essentially working with the robots themselves...They make more money. They are like better jobs in many ways.” – Karen (39:24)
-
On Amazon’s response to reporting:
“They are saying it's not a complete picture...The automation team has its goals. There might be another team that increases employment somewhere else... The phrase: the future is hard to predict, but our history has shown that we take efficiencies and invest and we grow and create new opportunities...” – Karen (47:24)
Section 3: ChatGPT Atlas and the Rise of AI Browsers
[Starts 50:47]
Key Points & Insights
-
ChatGPT Atlas Debuts:
OpenAI releases a “full-fledged browser built around the ChatGPT interface,” with agent mode for automating user tasks. It’s Mac-only for now, expanding later.“...available only for Mac OS users and will later be brought to Windows, iOS, and Android.” – Kevin (51:03)
-
Feature Rundown:
- Built on Chromium, with a persistent ChatGPT sidebar for ongoing Q&A and summarization.
- Agent Mode can fill out forms, put items in shopping carts, navigate sites, etc.—but only for Plus/Pro/Business users.
- Aims to “bring ChatGPT over the entire internet” rather than just moving the internet into ChatGPT.
-
The Competitive Landscape:
Atlas joins a surge of AI browsers (e.g., Perplexity Comet, Arc). All are essentially Chrome skins with AI-powered features—raising differentiation challenges.“...all three of these AI browsers that we're talking about today are on Chromium. And the Chromium experience is like, I don't know, 80 or 90% just Chrome.” – Casey (57:55)
-
User Experience—Still Clunky:
Early user tests found browsing slower with Agent Mode (“...it like picked flights that I would not have chosen...much slower than I would have done it myself”). Other noted features (summarizing videos or long docs) work better, but overall friction exists.“I have some reservations about giving OpenAI access to all of my browsing data.” – Kevin (60:31)
-
Security and Privacy Risks:
- Prompt Injection: Malicious hidden commands on web pages could compromise the user when Agent Mode is active.
“A prompt injection is when a malicious actor...will plant instructions on a webpage and make them invisible...and the agent may follow the instructions.” – Casey (63:57)
- Data Privacy: Browsing is highly personal; users are giving OpenAI or others deep insight.
“Web browsing is highly personal...Google obviously does this already...but OpenAI has aspirations to become an advertising juggernaut of its own.” – Casey (67:00)
- Prompt Injection: Malicious hidden commands on web pages could compromise the user when Agent Mode is active.
-
Market Impact:
Ultimately, the hosts suggest such AI browsers serve as free “product research” for Google, who can adopt popular features into Chrome—leaving Chrome’s dominance largely unchallenged.“...whatever people like about these AI browsers, Google will just incorporate into Chrome.” – Kevin (57:08)
Notable Quotes
-
On the AI browser target market:
“My actual non-joke answer is that ChatGPT Atlas is a product for OpenAI employees. ...having a browser that is just ChatGPT I think is hugely useful to you. Now can they get from there to some broader set of users...?” – Casey (59:59)
-
On privacy trade-offs:
“The flip side of a highly personalized service is...it can be really useful to you, but it also becomes a really rich target for attackers, for law enforcement, and the list goes on.” – Casey (68:07)
Timestamps for Major Segments
- OpenAI Sora controversy: 02:06–25:14
- Amazon automation plans (w/ Karen Weise): 27:19–48:33
- ChatGPT Atlas / AI Browsers: 50:47–70:00
Memorable Moments & Signature Tone
- Kevin’s Deadpan on Sora’s Controversies:
“They managed to beef with Bryan Cranston and the estate of Martin Luther King Jr. in one week. ...That qualifies as a bad week at the office.” (03:20)
- Casey’s Satirical Analogy on Guardrails:
“...I'm dead and I'm shouting at you from hell saying, where was the guardrail, Kevin?” (10:31)
- On Prompt Injection Security Risks:
“A prompt injection is not getting the COVID vaccine. ...A prompt injection is when a malicious actor...plant instructions on a webpage and make them invisible… and the agent may follow the instructions.” – Casey (63:57)
- On Browser Switching:
“It’s incredibly annoying to switch browsers. You have to log in to all of your websites again ...even if you’re importing all of your bookmarks and all of your data. Like, there’s still a lot of friction associated with that.” – Kevin (58:31)
Bottom Line & Final Thoughts
- Tech’s Relentless Pace vs. Public Impact:
The episode underscores a recurring theme: tech giants rush products into market, often crowding out accountability for ethical, safety, and social impacts. OpenAI typifies this with Sora; Amazon's automation plans show a parallel pre-emptive approach in labor displacement; and the AI browser wars seem poised to reshape data privacy and web user experience—if they ever dethrone Chrome. - Repeated Call for Honest Dialogue:
Both on Sora and Amazon, the hosts call for more candor from tech companies, so society can brace for, and potentially guide, these tectonic shifts.
For More:
Catch the full episode on NYTimes.com, Apple Podcasts, Spotify, and YouTube.
