Foundering: OpenAI Part 3 – Heaven and Hell, Part 1
Release Date: June 13, 2024
Host: Ellen Huet | Produced by Bloomberg Technology
Introduction
In the third installment of the "Foundering" series, Ellen Huet delves into the contrasting narratives surrounding OpenAI's meteoric rise under CEO Sam Altman and the personal struggles faced by his sister, Annie Altman. This episode, titled "Heaven and Hell, Part 1," explores the internal dynamics of OpenAI, the widespread impact of ChatGPT, and the polarized views on artificial intelligence's future.
Contrasting Success: Sam Altman vs. Annie Altman
The episode opens with a vivid portrayal of Annie Altman's tumultuous living situation juxtaposed against her brother Sam's soaring success with OpenAI.
Annie Altman's Struggles:
- Lost Stability: "For much of the past two years, Annie hasn't been able to afford a stable place to live." (02:13)
- Frequent Moves: "Recently, over the course of just a year, she moved 22 times. That's on average about twice a month." (02:24)
- Homelessness: She recounts spending two months in a newly built house with no running water or electricity and sleeping on floors in friends' houses. (02:45)
Sam Altman's Rise:
- Global Recognition: Sam was named CEO of the Year by Time magazine and became a prominent figure speaking to world leaders about AI. (03:08)
- Public Persona: Sam’s vision contrasts sharply with Annie’s reality. At a Bloomberg conference in June 2023, he confidently stated, "We just shouldn't have poverty in the world." (04:39)
Notable Contrast: Sam envisions a future where AI eradicates poverty, while his sister grapples with homelessness, highlighting a stark disconnect between his public optimism and his family's personal challenges.
OpenAI's Internal Disputes and the Birth of Anthropic
As OpenAI pushed the boundaries of AI technology, internal disagreements about safety and ethical implications emerged.
Employee Concerns:
- Dario Amodei's Fear: "This is like, this is crazy. You know, there's nothing like this in the world." (11:21)
- Existential Risk: Employees like Dario feared the AI models could "destroy us," leading to concerns about the lack of safety measures. (11:42)
Formation of Anthropic:
- Mass Exodus: In 2021, Dario and six colleagues left OpenAI to create Anthropic, a company focused on building safe AI. (12:31)
- Implications: Their departure suggested distrust in OpenAI’s commitment to AI safety, casting a shadow over the company's internal culture.
The ChatGPT Phenomenon
The release of ChatGPT marked a pivotal moment for OpenAI, catapulting the company and Sam Altman into global prominence.
Rapid Adoption:
- User Growth: "ChatGPT reached 100 million users in just two months, the fastest growth ever at the time." (14:22)
- Practical Uses: From drafting emails to aiding students with homework, ChatGPT became an indispensable tool for millions. (15:14)
Sam Altman's Elevated Profile:
- Household Name: Sam became the face of the AI boom, with investors eager to capitalize on the AI surge. (15:30)
- Public Speeches: At an Airbnb conference in 2015, Sam emphasized the profound societal changes AI would bring, stating, "We have an opportunity that comes along only every couple of centuries to redo the socioeconomic contract." (28:48)
Rise of AI Doomsday Beliefs
With AI's advancement, a subset of tech enthusiasts began to fear an existential threat posed by superintelligent AI.
Qiaochu Yuan's Perspective:
- Apocalyptic Vision: Qiaochu believes that "the world is probably going to be swallowed by superintelligent AI soon." (20:05)
- Personal Impact: His fears led him to abandon a math PhD and isolate from friends, driven by the conviction that AI poses a catastrophic threat. (20:43)
Influential Evangelism:
- Public Influence: Figures like Eliezer Yudkowsky have propagated these fears, convincing many in Silicon Valley of the imminent dangers. (19:18)
- Surge in AI Safety Funding: Tech leaders, including Dustin Moskovitz and Sam Bankman-Fried, have pledged significant funds toward AI safety projects, reflecting the growing concern. (22:00)
Key Quote: Qiaochu Yuan describes his transformation: "There's this black hole in my kind of sense of which things were important... Maybe the only thing that matters." (21:10)
Criticism of AI Apocalypse Focus
Not all experts agree with the doomsday narrative, arguing that it diverts attention from immediate, tangible AI-related issues.
Emily Bender's Critique:
- Distraction from Current Harms: "There's all kinds of harms that are happening right now... The more we focus on these... existential risk... the less time and effort goes into actually dealing with the real harms." (24:26)
- Immediate Concerns: Issues like racial bias, data theft, and surveillance are pressing problems exacerbated by AI. (24:55)
Broader Implications:
- Skewed Focus: The intense focus on catastrophic scenarios overshadows critical debates on AI ethics, fairness, and societal impact. (25:48)
Notable Insight: Emily emphasizes that "these are urgent problems, ways that people are being harmed by AI right now," advocating for addressing present-day issues over speculative future risks. (25:23)
Sam Altman’s Evolving Public Image
Sam Altman's public statements reflect a shifting balance between acknowledging AI's risks and promoting its potential benefits, casting him somewhere between hero and pragmatist.
Early Doomsday Fears:
- In 2015, Sam expressed strong concerns about AI: "I think AI will probably sort of lead to the end of the world." (31:26)
- Apocalyptic Preparations: He mentioned stockpiling resources and securing a land parcel in Big Sur for potential catastrophes, although he later downplayed these statements. (31:31)
Shift to Optimism:
- Post-ChatGPT, Sam adopted a more measured stance, focusing on the positive societal transformations AI could bring. He avoids referencing his earlier apocalyptic preparations, aiming to present a balanced view. (32:53)
Heroic Narrative:
- Sam positions himself as a pivotal figure shaping the future, stating, "We're going to create a galactic civilization." (27:39)
Key Quote: At an Airbnb conference, Sam remarked on public perception: "People are much more sensitive to... theatrical extreme risk than... boring, slow, plodding risk." (34:13)
Motivations Behind AI Enthusiasm and Fear
The episode explores the psychological drivers behind the fervent belief in AI’s potential to either save or doom humanity.
Heroism and Legacy:
- Qiaochu Yuan discusses the desire to be seen as a hero: "Most people don't get a chance to do that in any meaningful sense... What if this is like the most significant era of human history?" (27:06)
Influence of Science Fiction:
- OpenAI's engagement with fiction writers, like commissioning a novella from Patrick House, highlights the role of storytelling in shaping AI narratives. (30:07)
Motivational Drivers:
- The belief in being pivotal to humanity's future drives many in the AI field, fostering a culture where cosmic significance is entwined with technological advancement. (27:54)
Conclusion
"Foundering: OpenAI Part 3 – Heaven and Hell, Part 1" paints a comprehensive picture of the dual realities within the AI industry. While leaders like Sam Altman project a future of abundance and societal transformation, the episode underscores the personal and ethical dilemmas that persist behind the scenes. The narrative juxtaposes visionary aspirations with real-world struggles, prompting listeners to reflect on the true cost and potential of artificial intelligence.
Upcoming in Part 2: The series will further explore the debate around poverty in an AI-driven economy and delve deeper into Annie Altman's experiences, providing a holistic view of AI's multifaceted impact on society.
Notable Quotes
Sam Altman on Poverty:
"I think we are not that far away from being able to eliminate poverty effectively worldwide, certainly in developed countries." (04:57)
Qiaochu Yuan on Existential Fears:
"It's like, oh, hey, what if everyone dies in 10 years? That's a scary idea." (22:58)
Emily Bender on AI Focus:
"There's all kinds of harms that are happening right now... The more we focus on these... existential risk... the less time and effort goes into actually dealing with the real harms." (24:26)
Sam Altman on Public Perception:
"People are much more sensitive to sort of like theatrical extreme risk than they are to sort of like boring, slow, plotting risk." (34:13)
Production Credits
- Host: Ellen Huet
- Executive Producer: Shawn Wen
- Contributing Reporter: Rachel Metz
- Associate Producer: Molly Nugent
- Audio Engineer: Blake Maples
- Story Editors: Mark Milian, Anne VanderMey, Seth Fiegerman, Tom Giles, Molly Schuetz
- Production Help: Jessica Nix, Antonia Mufarech
For those interested in the intricate dynamics of the AI revolution and its broader societal implications, "Foundering: OpenAI Part 3 – Heaven and Hell, Part 1" offers a thought-provoking and comprehensive exploration.
