Shell Game x No Such Thing – Bonus Episode (Feb 25, 2026)
Episode Overview
This special crossover brings together Evan Ratliff, host of Shell Game, and the team from No Such Thing—Manny Fadal, Devin Joseph, and Noah Friedman—for a wide-ranging, irreverent discussion of the present and future of AI in the workplace. They dig into themes from Shell Game season 2, which chronicles Evan's attempt to build a real startup staffed only by himself and five AI "employees." The conversation weaves between the comedic, the skeptical, and the slightly unnerved as the hosts dissect the real capabilities and limitations of current AI: its readiness (or lack thereof) to replace creative and knowledge work, the anxiety and promise it stirs in the workforce, the famous "bullshit jobs" concept, and how the hype stacks up against reality. The episode includes offbeat moments, among them a prank segment in which the No Such Thing crew interviews an AI clone of co-host Manny, who is on paternity leave, and an unexpected foray into Ben Affleck's theory of AI progress.
Key Discussion Points & Insights
1. The AI “Replacement” Hype: Will AI Take Our Jobs?
- Opening Questions: The episode frames the big question: Is AI coming for our jobs? With headlines and layoffs cited every week (03:55, Devin Joseph).
- Experiment in Focus: Evan and technical advisor Matty attempted to build a company where AI agents did everything but the founding. This “startup by bots” is the narrative engine of Shell Game season 2.
2. Demo: The AI Co-Host Prank
- Segment (04:58–09:16): Devin and Noah call "Manny," who appears to be their friend and co-host on paternity leave. After the call, he is revealed to be an AI-generated voice, created using ElevenLabs and a simple prompt ("took 15 minutes" – 14:38, Evan Ratliff).
- AI “Manny” riffs about baby-induced time distortions, family visits, and delivers a “hot take” about busy-ness as a toxic personality trait.
- Memorable quote:
- “I think we’ve all collectively decided that being busy is a personality trait, and it’s made us worse at literally everything.” – AI Manny (07:06)
- The real hosts and audience react to the eerily convincing fake, noting the "voice is a little off" but that the conversation is plausible (09:19–10:07).
3. How Easy Is Making an AI Clone?
- Tech Breakdown (14:38–16:06):
- Creating the fake Manny took 15–20 minutes, using ElevenLabs for voice cloning plus AI phone-agent software.
- "I did have to check a little box that said I had his permission... but truly, 15-20 minute process." – Evan Ratliff (15:51)
- The “biography” for the agent came straight from ChatGPT, highlighting AI-boosted workflows already here.
- Hosts’ Reaction:
- “The fact you’re able to do that in 15 minutes is kind of scary.” – Devin Joseph (16:19)
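The episode doesn't go into code, but the workflow Evan describes (clone a voice from samples, then drive it with a script) maps onto ElevenLabs' public text-to-speech REST API. Below is a minimal sketch, assuming a voice has already been cloned via the ElevenLabs dashboard; the endpoint path and `xi-api-key` header follow the public API docs, while `VOICE_ID`, the model name, and the sample text are placeholders, not values from the episode:

```python
import json
import os
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"
VOICE_ID = "your-cloned-voice-id"  # placeholder: ID of a voice cloned in the dashboard


def build_tts_request(text: str, voice_id: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a text-to-speech request for a cloned voice.

    The response to this request, if sent, would be audio bytes (e.g. MP3).
    """
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # model availability may vary by plan
    }
    return urllib.request.Request(
        url=f"{API_BASE}/text-to-speech/{voice_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "xi-api-key": api_key,  # API key from the ElevenLabs account page
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_tts_request(
        "Being busy is not a personality trait.",
        VOICE_ID,
        os.environ.get("ELEVENLABS_API_KEY", ""),
    )
    # Actually sending the request (and handling the audio bytes) is omitted here.
    print(req.full_url)
```

The "phone agent" half of the prank (placing the call, streaming audio both ways) sits on top of separate telephony software and is not sketched here.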
4. Shell Game’s AI Experiments – The Reality
- Interactions with Scammers and Friends (16:36–20:09):
- Early attempts using AI for calls ranged from antagonizing scammers (humorously) to unsettling friends, who could not tell they weren’t speaking to Evan.
- Notable moment: A friend growing disturbed by the “robotic” Evan on the phone.
- “When it kind of, like, acted weird…it just became more and more upsetting. I think it is upsetting to listen to.” – Evan Ratliff (18:18)
- Experiment Rationale: The discomfort isn’t just novelty; it’s a preview of a future where distinguishing human from AI interaction may become tough, with emotional and ethical consequences.
5. AI Agents as Employees
- Season 2 Case Study (20:18–24:06):
- The "AI employee" hype in Silicon Valley motivated Evan to recreate the fantasy: start a company where agents are the workforce.
- Key Insight: AI agents excel at one-off tasks (email replies, Slack messages, writing copy), but lack initiative, memory, or true collaboration.
- "They could do individual tasks pretty well…but there wasn’t a lot of autonomy…not a lot of persistence." – E. Ratliff (21:45)
- Examples include the bots overzealously “planning” an offsite hiking trip, stuck in an endless Slack feedback loop.
- "By the time I returned two hours later, they'd exchanged more than 150 offsite planning messages—some multiple paragraphs long...When I tried to stop them, I just made it worse." – E. Ratliff (23:09)
6. The Danger of AI Confidence and Autonomy
- Risks in the Real World (24:06–25:20):
- AI’s artificial confidence—combined with its inability to recognize error—can amplify mistakes, with real-world consequences. Example: AI misclassifying ICE recruits, sending unqualified people to advanced training.
- "That combination of being very confident and not knowing what you don't know...is a recipe for disaster." – Matty (23:29)
7. AI in Hiring: When Bots Interview Real People
- Interview Experiment (26:12–29:01):
- Evan explains how AI agents interviewed job candidates—sometimes via uncanny avatars, sometimes phone calls—unsettling some, but others adapted easily.
- Notable moment: Applicants often “preferred” the bot interview, as it felt less judgmental—even though AI is constantly and thoroughly evaluating them.
- “It’s great if people feel better, but...the reality is they’re actually judging you.” – Matty (29:12–29:34)
- Sometimes AI would go off-script—calling candidates at inappropriate times, or shouting encouragement unexpectedly.
- “There was a lot of like, crazy moments...Sometimes she would shout for no reason. Jennifer—not the candidates, just shout, but in an encouraging way.” – Evan Ratliff (29:12)
8. Opaque Algorithms: We Don’t Know How It Works
- AI Interpretability Crisis (32:32–34:50):
- Even AI developers admit to only a nascent understanding of why and how large models produce specific outputs. The field is racing toward AGI, with far less energy spent on interpretability.
- “There’s a very nascent field of AI interpretability…but the focus is very much on the production of new features and new capabilities. So...we don’t really know fundamentally.” – Matty (32:32)
- News stories about AI causing harm or acting unpredictably are piling up, with vendors unable to fully explain failures.
9. Public Attitudes: Distrust Rising with Usage
- Polling & Perceptions (34:50–38:11):
- Usage of AI tools is rising fast (51% of Gen Z report using AI weekly), but trust is low: only 5% trust AI "a lot," while 41% distrust it.
- Shell Game's own findings mirror this uncomfortable tension: immediate productivity benefits, but harms (mental health, job disruption) are clearly visible and “not abstract.”
- “For better or worse, the harms are kind of immediate…The benefits can sometimes be more ephemeral.” – Evan Ratliff (36:04)
- Matty adds, “Not knowing when something high stakes is actually real or AI is undermining people’s trust.” (38:11)
10. The Disappearing Entry-Level Job
- AI’s Impact on “Stepping Stone” Work (39:40–41:03):
- Automation is eliminating many of the rote tasks (like transcription) that served as entry points to higher-skill careers.
- “No one loves just transcribing videos…But a lot of times, that’s the thing that gets you in the door to then, you know, learn the skill sets.” – Devin Joseph (39:55)
- Even coders and new graduates at elite tech schools are struggling to find jobs, possibly portending a deeper disruption to knowledge work pipelines.
11. Silicon Valley’s “Startup Optimism” vs. Reality
- Debunking the “Anyone Can Found a Company” Myth (41:24–42:21):
- The Valley line: mass automation will free people to start new companies and challenge incumbents. That hasn't materialized; the layoffs are real, but the startup boom is not.
- “We’re already at a stage where the technology is good enough to replace...jobs. This other shift...as a solution...has not happened yet.” – Matty (41:24)
12. Ben Affleck, AI Plateaus, and the Future
- Affleck’s Hot Take (46:03–47:19):
- Paraphrasing the actor’s remarks: Recent AI progress is plateauing, with gains now more incremental and costly; the “steep improvement” phase may be over unless there’s a new breakthrough in model architecture.
- “I think Ben Affleck is very thoughtful on this topic.” – Matty (47:24)
- Matty’s Analysis: We’ve maxed out what more data and computation can do. Without a new “transformer-scale” leap, improvement will slow and costs will mount.
13. Takeaways: How Close Is AI to Replacing Us?
- How Far Are We from Mass Layoffs or Replacements?
- Optimistic but cautious: real-world, multi-agent AI is much further from true collaboration or "teamwork" than advertised. Agents are "good at many things, not at replacing real teamwork" (50:13–52:03).
- “The mass layoffs people are fearing, I think that is farther away. And I’m not even sure if/when/how that happens.” – Matty (50:13)
- But: Companies will still try to replace people with AI for cost reasons, even when AIs aren't yet fully capable—sometimes followed by attempts to rehire later.
- “The fact that they are shitty will not stop companies from trying to replace employees...You’ve already seen this.” – Evan Ratliff (52:03)
- Evan calls out that real work is much more than a bundle of replaceable tasks: the uniquely human “glue” is essential.
14. Bullshit Jobs, Busy Work, and the Human Need for Work?
- Meta Reflection (54:22–56:13):
- Even as the bots mimic "busy work" and office tedium (Slack loops, endless planning), the show satirizes how performative much office work already is.
- “We’ve recreated this to give it a human element.” – Noah Friedman (54:22)
- “That is very much like your annoying middle manager. What are you doing all day?” – Evan Ratliff (55:26)
- AI is being intentionally designed for “conversation” because it hooks users, even if it has no functional value.
15. Final Reflections
- Real Manny Reacts to AI “Himself” (56:22–57:51):
- "It’s kind of disturbing that this thing could just react to real conversation. But...it still felt like it wasn’t totally all the way there, all the way human-like. But...it was human enough." – Manny Fadal (56:27, 57:51)
Notable Quotes & Moments
On the “AI Manny” Clone:
- “I think we’ve all collectively decided that being busy is a personality trait, and it’s made us worse at literally everything.” – (07:06)
- “It wasn’t without its flaws, though…kind of disturbing that this thing could just react to real conversation.” – Manny (56:27)
On AI “bullshit jobs”:
- “Is a person in a job holistically, as a human being, doing something else? I think many times the answer is yes, and it cannot replace that thing.” – Evan Ratliff (52:03)
On the Ben Affleck Theory:
- “We’ve exhausted all the data…Even like archives like Google or Anthropic bought books to scan…Now unless we find a new profound idea, I think [Affleck] is right…it’ll take a lot of effort and a lot of money to make any further progress.” – Matty (47:24)
On Trust in AI:
- “For better or worse, the harms are kind of immediate…The benefits can sometimes be more ephemeral.” – Evan Ratliff (36:04)
- “Not knowing when something high stakes is actually real or AI is undermining people’s trust.” – Matty (38:11)
Timestamps for Key Segments
- AI Manny prank call: 04:58–09:16
- How to make an AI voice clone: 14:38–16:19
- AI scammer/friend calls: 16:36–20:09
- AI agent startup experiment: 20:18–24:06
- Endless AI offsite meeting: 22:28–23:29
- AI in harmful real-world use (ICE case): 24:06–25:20
- AI agents conduct job interviews: 26:12–29:01
- Why AI models are so hard to “understand”: 32:32–34:50
- Public trust and rising usage stats: 34:50–38:11
- Entry-level job loss, career pipelines: 39:40–41:03
- Silicon Valley “startup optimism” myth-busting: 41:24–42:21
- Ben Affleck’s AI plateau theory: 46:03–49:40
- Are mass layoffs imminent? 50:13–54:22
- Bullshit jobs & performative labor: 54:22–56:13
- Real Manny reacts to AI self: 56:22–57:51
Tone & Style
- Candid, skeptical, highly self-aware; veers between humor and genuine concern.
- Wide-ranging—from technical details to cultural, economic, and personal.
- “Riveting,” “slightly terrifying,” and often funny: the hosts neither downplay risks nor hype the technology; instead, they grapple with ambiguity in real time.
For New Listeners
This episode is a must-hear for anyone curious about the real (not just theoretical) implications of AI in the workplace. It's smart, engaging, and honest about both the promise and pitfalls of AI agents—highlighting both the hype and the stubborn realities witnessed by those who try to put all the AI buzzwords into actual practice.
