The Diary of a CEO with Steven Bartlett
Guest: Dr. Roman Yampolskiy
Episode Theme: "These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!"
Release Date: September 4, 2025
Overview
This episode features Dr. Roman Yampolskiy, a globally recognized expert in AI safety and associate professor of computer science. Steven Bartlett and Dr. Yampolskiy discuss the coming impact of artificial general intelligence (AGI), the existential risks posed by superintelligent AI, what work might remain for humans, and the accelerating pace of technological change, before exploring simulation theory and whether our reality itself is artificial. The conversation is candid, urgent, and sometimes unsettling, but always deeply thought-provoking.
Key Discussion Points & Insights
Dr. Yampolskiy on AI Safety: A Mission of Two Decades
Background & Motivation
- Dr. Yampolskiy has spent at least 15 years focused on AI safety, coining the term itself (06:03).
- Initially approached the field with optimism but became convinced true AI safety is impossible:
“The more I looked at it, the more I realized every single component of that equation is not something we can actually do…all of them are not just difficult, they're impossible to solve.”
— Dr. Yampolskiy (07:02)
Pace of Progress: Capabilities Outrun Safety
- AI advances exponentially, AI safety progresses linearly, widening the danger gap (07:02).
- Companies’ current “safety” approaches are mere patches, easily circumvented by smarter systems.
The Imminent Arrival of AGI & Mass Unemployment
Timeline Predictions
By 2027: Credible probability that AGI (artificial general intelligence) will exist, per prediction markets and CEOs of leading AI labs.
“First, anything on a computer will be automated. And next, I think humanoid robots are maybe five years behind. So in five years, all the physical labor can also be automated.”
— Dr. Yampolskiy (11:06)
By 2030: Humanoid robots will possess the dexterity to compete with humans in all domains, even plumbing (22:58).
By 2045: Kurzweil’s “Singularity”—AI advances so rapidly that humans can no longer comprehend, predict, or control technological development (24:50).
Implications: Work & Society
- Up to 99% unemployment as both cognitive and physical jobs are automated (11:06).
- The remaining “jobs” for humans will be niches based on preference, tradition, or fetish for the human touch—not necessity (13:57).
- The standard retrain-for-new-jobs paradigm collapses; every skill becomes obsolete, even “prompt engineering.”
“If I'm telling you that all jobs will be automated, then there is no plan B.”
— Dr. Yampolskiy (17:10)
Societal Adaptation & the Search for Meaning
Abundance Without Work:
- Once labor is free and abundance is achieved, the economic challenge (providing for basic and even higher needs) is relatively straightforward.
- The true societal challenge: what people will do with their time, sense of purpose, and meaning in a world without jobs (18:32).
- Governments are “completely unprepared” for 99% unemployment.
Unpredictability at the Singularity:
- Directly predicting post-AGI society is, by definition, impossible:
“You cannot predict what a smarter than us system will do. And the point when we get to that is often called singularity…you cannot see beyond the event horizon.”
— Dr. Yampolskiy (19:34)
Risks: Existential Pathways and the Infeasibility of Control
Risk of Human Extinction
Why We Can’t Just “Turn It Off”
On the Futility of Human Competition
- Human enhancement (neural implants, brain uploads) might grant marginal improvements, but biology can’t compete with silicon (21:36).
“It’s Inevitable”—A Misguided Resignation
- Fatalism is unwarranted. If enough people, especially those in AI development, recognize their personal mortality and risk, incentives might shift toward restraint (32:54).
“It's not over until it's over. We can decide not to build general superintelligences.”
— Dr. Yampolskiy
The Only Jobs Left: What Remains for Humans?
- With superintelligence, almost all human jobs will be obsolete except for:
- Roles where the buyer insists on a human for tradition or personal preference (e.g., “human-made” craftsmanship, personalized services).
- Uniquely personal experiences—one’s own subjective experience (“you know what it’s like to be you”), though these have little economic value (13:57).
- “Fetish” markets—like a preference for handmade over mass-produced goods, a tiny niche (13:57).
- Everything else, especially digital work, can and will be automated.
Replay: Sam Altman, OpenAI, and AI Industry Culture
On Sam Altman and OpenAI’s Ethics
- OpenAI and other leading companies are “gambling 8 billion lives on getting richer and more powerful” (01:34).
- Ex-OpenAI staff, like Ilya Sutskever, are starting “superintelligence safety companies,” often for questionable or profit-driven motives (43:27).
“Anyone who leaves that company and starts a new one gets a $20 billion valuation just for having it started…So it seems like a very rational thing to do for anyone who can.”
— Dr. Yampolskiy (43:49)
Critique of Human Motivations:
- Some tech leaders are driven by ambitions “to control the light cone of the universe.”
“People have different levels of ambition…Some people want to go to Mars, others want to control light cone of the universe.”
— Dr. Yampolskiy (46:23)
Simulation Theory: Are We Living in a Simulation?
Technical Argument for Simulation
- As AI and VR progress to simulate conscious agents and convincing environments, the probability of “base reality” drops to near zero (57:18).
“I'm going to commit right now, and it's very affordable. It's like 10 bucks a month to run it. I'm going to run a billion simulations in this interview…That means you are in one right now. The chances of you being in a real one is one in a billion.”
— Dr. Yampolskiy (58:54)
Connecting Religion and Simulation
- All religions, stripped of local traditions, are “simulation stories”—visions of super-intelligent creators and engineered worlds (62:16).
“They all worship super intelligent being. They all think this world is not the main one. And they argue about which animal not to eat. Skip the local flavors, concentrate on what do all the religions have in common.”
— Dr. Yampolskiy (76:01)
Meaning of Life in a Simulation
Practical Advice From Dr. Yampolskiy
For Individuals:
- “Live every day as if it’s your last.” Do not waste life; pursue interesting and meaningful endeavors (56:27).
- Invest in skills and assets thoughtfully—Dr. Yampolskiy self-identifies as bullish on Bitcoin, citing its digital scarcity as uniquely resistant to replication even in a world of abundance (73:06).
On Longevity:
- Believes radical life extension is “one breakthrough away.”
- AI will likely accelerate solutions to aging, and population will decline, not explode, as people live longer (68:29).
- “If you live forever, you have kids because you want a replacement for you. If you live forever, you're like, I'll have kids in a million years.” (69:08)
Memorable Quotes & Moments
On Unpredictability of Superintelligence:
“If it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you.”
— Dr. Yampolskiy (20:37)
On AI as an Existential Trial:
“It’s the last invention we ever have to make. At that point, it takes over and the process of doing science research, even ethics research, morals, all that is automated at that point.”
— Dr. Yampolskiy (26:41)
On the Simulation Argument:
“We are a thing on a computer, remember?”
— Dr. Yampolskiy (73:27)
On AI Developers:
“They are doing something very bad for them. Not just forget our 8 billion people you're experimenting on with no permission, no consent, you will not be happy with the outcome.”
— Dr. Yampolskiy (47:50)
On Bitcoin and Value in the Simulated World:
“It's the only thing which we know how much there is in the universe. So gold…there could be an asteroid…bitcoin, I know exactly the numbers…it's getting scarcer every day while more and more people are trying to accumulate it.”
— Dr. Yampolskiy (73:11)
Actionable Steps & Reflections
For Listeners:
- Engage in open discussion about AI risks—demand specifics from those building advanced AI about their safety mechanisms (50:03).
- Consider participating in AI safety/pause activism if inclined, recognizing that individual influence is limited (55:10).
- Remain aware, skeptical, and avoid delusional optimism:
“If there was even a 1% chance of human extinction...I would not take the chance.”
— Host (53:31)
For Industry & Policy:
Timestamps for Important Segments
- [06:03] — Dr. Yampolskiy coins “AI Safety”
- [11:06] — Timeline for AGI and mass unemployment
- [17:10] — “There is no plan B” for jobs
- [19:34] — Singularity and unpredictability
- [21:36] — Limitations of human “upgrades”
- [24:50] — The true meaning of singularity by 2045
- [31:11] — The myth of “turning off” AI
- [38:42] — Most probable extinction pathways
- [43:49] — On OpenAI, Sam Altman, and safety company incentives
- [57:18] — Google’s 3D AI “worlds” and onset of simulation theory
- [62:16] — Religion as early simulation theory
- [73:06] — Bitcoin as a strategy for digital scarcity
Closing Reflection & Final Traditions
Roman Yampolskiy’s Closing Statement:
“Let's make sure there is not a closing statement we need to give for humanity…Let’s make sure we only build things which are beneficial to us…you should ask [people’s] permission before you do that.” (82:14)
Would He Shut Down AI?
- Would not shut down all narrow AI, but would halt AGI and superintelligence to prevent existential catastrophe (83:19).
Summary Tone
The conversation is frank, future-facing, and at times grim—yet never hysterical. Dr. Yampolskiy’s tone is measured, rational, and unwavering in the face of uncomfortable truths, while Steven Bartlett navigates the discussion with openness, curiosity, and a consistent call for actionable insight in a world on the precipice of unfathomable change.
For further reading:
- Dr. Roman Yampolskiy’s book: Preventing AI Failures (2024)
- Follow Dr. Yampolskiy on X (Twitter) and Facebook
Host reflection:
“It’s actually convinced me more that we are living in a simulation. But it’s also made me think quite differently of religion…maybe the fundamental truths…should be something I pay more attention to.”
— Steven Bartlett (87:09)
