The Artificial Intelligence Show Ep. 184 – OpenAI “Code Red,” Gemini 3 Deep Think, Recursive Self-Improvement & More
Hosts: Paul Roetzer and Mike Kaput
Date: December 9, 2025
Episode Overview
This episode centers on seismic shifts in the AI landscape as OpenAI goes into “Code Red” mode in response to Google’s surging Gemini 3, escalating competition from Anthropic, and fast-moving work on autonomous, self-improving AI. The hosts also dig into the practical implications of new tools, growing risks and regulatory responses, Apple’s AI talent troubles, and fresh data on AI-driven job cuts and workforce anxiety.
Key Topics & Insights
1. OpenAI’s “Code Red” – The AI Race Heats Up
- [07:54-16:28]
- OpenAI CEO Sam Altman internally announced a “Code Red”, refocusing the company on immediate ChatGPT improvements in response to Google’s Gemini 3 advances.
- Gemini 3 reportedly outperforms OpenAI’s models on reasoning benchmarks, and Gemini usage has surged to 650M users.
- OpenAI will pause or delay other projects (advertising, AI agents for shopping/health, the Pulse personal assistant) to prioritize ChatGPT speed, reliability, and personalization.
- New model codenamed “Garlic” is in the works, designed to take on Gemini 3 and Anthropic’s Opus 4.5.
- Tables have turned since Google’s own “Code Red” in 2022:
- “Google is just flexing its muscles.... They have the infrastructure, their own chips, the TPUs, data centers—they can do things at much larger scale... Google seems to have figured a couple things out.” — Paul [09:59]
- Google has the financial strength to keep pouring money into AI, whereas OpenAI must raise enormous amounts of capital and may be pushed toward an IPO by late 2026.
- Anthropic is gaining as a focused, financially viable competitor, possibly favored for acquisition or sustained success.
- Hosts' Sentiment: OpenAI is at risk of spreading itself too thin:
- “OpenAI is just, like, scaring me. They're just trying to do so many things... I worry they're just getting too frayed...” — Paul [13:42]
2. Google’s Big AI Push: Gemini 3 Deep Think & Workspace Studio
- [16:28-28:55]
- Gemini 3 “Deep Think” Mode:
- Available to select Ultra-tier users, tackles complex logic, science, and math tasks.
- Posts standout benchmark scores: 41.0% on Humanity’s Last Exam and 45.1% on ARC-AGI-2.
- Explained as leveraging “test-time compute”: applying more computational power at inference to produce deeper, better results (see the sketch at the end of this section).
- Workspace Studio:
- Lets users build AI agents for workflows (e.g., daily email summaries) without code.
- Natively integrates with Google ecosystem (Gmail, Drive) and major SaaS platforms (Salesforce, Asana, etc.).
- Early testing by Paul highlighted ease of use and potential for productivity, but also some technical hiccups (capacity errors).
- Key point for listeners' AI literacy:
- “If you can define a workflow... you are being given the tools to make it more efficient and you don't need IT involved. Like that is the beauty of the moment we find ourselves in.” — Paul [26:43]
- User Caution: Early rollouts can have bugs, and security/privacy risks remain as workflows get more powerful.
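The “test-time compute” idea above can be made concrete with a toy best-of-n loop: sample several candidate answers, score them, keep the best. This is only a minimal sketch of the general technique; `generate_candidate` and `score_candidate` are hypothetical stand-ins for model and verifier calls, and nothing here reflects how Deep Think is actually implemented.

```python
import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    # Stand-in for one sampled model response.
    return f"candidate-{rng.randint(0, 99)} for: {prompt}"

def score_candidate(candidate: str, rng: random.Random) -> float:
    # Stand-in for a verifier/reward model that rates a response.
    return rng.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference-time compute (larger n) to raise the odds of a
    strong final answer: sample n candidates, score each, keep the best."""
    rng = random.Random(0)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: score_candidate(c, rng))

if __name__ == "__main__":
    print(best_of_n("Solve the logic puzzle...", n=8))
```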
3. Recursive Self-Improvement – Is Autonomous AI Near?
- [28:57-41:46]
- Eric Schmidt (ex-Google) warns recursive self-improving AI is close:
- Estimates cited range from two years (the Silicon Valley consensus) to four years (Schmidt’s own) for the arrival of systems capable of redesigning and improving themselves with minimal human intervention.
- OpenAI launches an “Alignment” research blog whose very first line cites recursive self-improvement as a central safety concern.
- New AI lab Recursive Intelligence is announced with the intent to “close the loop”: AI designs better chips, better chips train better AI, and progress accelerates in both directions (a toy sketch of this compounding loop appears at the end of this section).
- Paul’s explanation:
- “If an AI system gets good enough that it can meaningfully help design the next, better version of itself and that loop keeps going, that is basically what we're talking about.” [30:47]
- Loss of the “human in the loop” could lead to catastrophic misalignment, massive job disruption, fast takeoff scenarios, and serious regulatory, social, and ethical risks.
- Quote:
- “The path to AGI and superintelligence accelerates and all of these other things come with it... This is actually a very pivotal piece of all the other topics we talk about.” — Paul [41:11]
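To make the compounding dynamic in Paul’s explanation concrete, here is a toy numerical loop in which each generation’s improvement is proportional to its current capability, so gains accelerate rather than staying linear. The starting value and `gain_per_cycle` are purely illustrative assumptions, not forecasts.

```python
def recursive_improvement(capability: float, gain_per_cycle: float, cycles: int) -> list[float]:
    """Toy model: each cycle the system improves itself in proportion to its
    current capability, so progress compounds instead of growing linearly."""
    history = [capability]
    for _ in range(cycles):
        capability += capability * gain_per_cycle  # better AI -> bigger next improvement
        history.append(capability)
    return history

if __name__ == "__main__":
    for generation, level in enumerate(recursive_improvement(1.0, 0.25, 10)):
        print(f"generation {generation}: capability {level:.2f}")
```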
Rapid Fire: AI Headlines & Analysis
ChatGPT Ads Backlash
- [42:32-47:19]
- Users reacted with outrage after a screenshot showed ChatGPT suggesting the Peloton app during an unrelated conversation on a paid tier.
- OpenAI insists it was a “suggestion, not a paid ad,” but users see it as a breach of trust and paid experience.
- Paul’s take: “All I know is it's a really bad look... People are pissed. Like I'd be pissed. I'm paying 200 bucks a month for pro. I don't want to see some like, completely irrelevant recommendation for an app in there that looks like an ad. That's annoying.” [44:13]
Apple’s AI Talent Shakeups
- [47:19-51:22]
- Senior VP for AI/ML John Giannandrea steps down (will leave in 2026); replaced by Amar Subramanya (Google and Microsoft alum).
- Alan Dye (top interface designer) leaves for Meta to lead AI hardware studio.
- Apple struggles with Vision Pro, AI-powered Siri delays, and faces speculation about Tim Cook’s future.
- Paul: “Part of this is like they need a shakeup. But you don't want great executives leaving during that shakeup…” [50:26]
Anthropic: IPO Prep & “Anthropic Interviewer” AI Tool
- [51:22-59:42]
- Anthropic hires Wilson Sonsini to prepare for an IPO as early as 2026, with valuations discussed above $300B.
- New tool “Anthropic Interviewer” automates qualitative interviews at scale. It was used to survey more than 1,200 professionals on AI attitudes, finding productivity gains alongside anxiety and social stigma (a minimal sketch of such an interview loop follows this item):
- “A colleague recently said they hate AI and I just said nothing. I don't tell anyone my process because I know how a lot of people feel about AI.” — Research participant [56:41]
- Paul excited by use cases: “I could think of a hundred ways to use this right now. My mind is like swimming with ways to apply this concept.” [56:41]
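For readers wondering what “automating qualitative interviews at scale” looks like mechanically, here is a minimal sketch of an interviewer loop: scripted questions, one model-drafted follow-up per answer, and a structured transcript for later analysis. The `ask_model` helper and the questions are hypothetical placeholders; this does not reflect Anthropic’s actual tool or API.

```python
QUESTIONS = [
    "How do you use AI in your day-to-day work?",
    "Has it changed your productivity? In what ways?",
    "Do you talk openly with colleagues about using AI?",
]

def ask_model(answer: str) -> str:
    # Hypothetical stand-in for an LLM call that drafts a follow-up question.
    return f"Could you say more about that? (re: {answer[:40]}...)"

def run_interview(get_answer) -> list[dict]:
    """Walk one participant through scripted questions plus a drafted
    follow-up each, returning a transcript that can be coded at scale."""
    transcript = []
    for question in QUESTIONS:
        answer = get_answer(question)
        follow_up = ask_model(answer)
        transcript.append({
            "question": question,
            "answer": answer,
            "follow_up": follow_up,
            "follow_up_answer": get_answer(follow_up),
        })
    return transcript

if __name__ == "__main__":
    # Canned responses stand in for a real participant.
    demo = run_interview(lambda q: f"(participant response to: {q})")
    print(f"collected {len(demo)} question/answer exchanges")
```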
Nvidia CEO Jensen Huang Opens Up on Joe Rogan
- [59:42-66:04]
- Huang likens AI’s strategic importance to the Manhattan Project; stresses national security stakes.
- Shares entrepreneurial journey, early Nvidia risks, role of chance (e.g., Sega bail-out, early OpenAI hardware deals).
- “He came here at age 9 as an immigrant... They would record audio messages and mail them to each other once a month... It’s just an incredible entrepreneurial story.” — Paul [64:21]
Legal Risks: Perplexity & Copyright Lawsuits
- [66:04-69:34]
- Lawsuits from Chicago Tribune, NYT, and others allege Perplexity “bypasses paywalls” and delivers summaries of journalistic work without permission.
- Paul blunt: “They have no leverage... They're going to have to settle these lawsuits or they're going to lose their company.” [68:44]
Meta Acquires Limitless (Ex-Rewind) – The AI-Powered Pendant
- [69:34-72:27]
- Meta acquires memory-augmentation pendant startup; concern over ownership of users’ private recorded data now shifting to Meta.
- “Anytime you are willing to be a guinea pig for new AI technology... ponder who gets that data when that company fails. In this case, it’s Meta.” — Paul [71:09]
Pope’s Warning on AI and Human Dignity
- [72:39-76:21]
- Pope Leo XIV warns of AI’s threat to “dignity, reflection, and authentic relationships.”
- Emphasizes risks of concentrated technological power and erosion of critical thinking for children and society.
- Hosts stress importance: “If the Pope is making AI a key part of his agenda, that matters... it affects the way people think about AI.” — Paul [74:07]
New Data: AI Job Cuts & Workforce Anxiety
- [76:53-80:51]
- Challenger, Gray & Christmas tallies 54,694 US job cuts explicitly attributed to AI in 2025 (~5% of all cited layoffs this year).
- Paul sees entry-level positions disproportionately affected; anticipates pain for new grads in 2026 and more layoffs to come.
- “One senior leader with strong strategic abilities and high AI literacy will be 10 to 100 times more productive and impactful.” — Paul [79:13]
AI Literacy & Parenting Survey
- [80:52-83:19]
- Survey by Deborah Ross: just 5% of parents/grandparents profess confidence in guiding their kids with AI; majority seek help spotting misinformation.
- Hosts encourage urgent focus on AI education for families: “Every conversation I have with parents, I just, I do feel this, like, sense of urgency to do more in this space.” — Paul [82:30]
Notable Quotes & Memorable Moments
- “If you can define a workflow, if you can envision something you think could be more efficient, you are being given the tools to make it more efficient and you don't need IT involved.” — Paul, [00:00], [26:43]
- “OpenAI is just, like, scaring me. They're just trying to do so many things, compete in so many different fields that I worry they're just getting too frayed in, like, what they're trying to set out to do... their financial commitments are so massive.” — Paul [13:35]
- “The path to AGI and superintelligence accelerates and all of these other things come with it... This is actually a very pivotal piece of all the other topics we talk about.” — Paul [41:11]
- “It's a really bad look... People are pissed. Like I'd be pissed.” — Paul (re: ChatGPT Ads) [44:13]
- “Anytime you are willing to be a guinea pig for new AI technology... ponder who gets that data when that company fails.” — Paul (re: Meta-Limitless deal) [71:09]
Timestamps for Key Segments
- OpenAI “Code Red”, Financial Risks, Anthropic’s Rise – [07:54-16:28]
- Google’s Scaling Laws, Deep Think, and Workspace Studio – [16:28-28:55]
- Recursive AI & Self-Improvement, OpenAI’s Alignment Blog – [28:57-41:46]
- Ads in ChatGPT: User Reactions – [42:32-47:19]
- Apple’s AI Overhaul and Executive Departures – [47:19-51:22]
- Anthropic IPO, Anthropic Interviewer – [51:22-59:42]
- Nvidia/Jensen Huang’s AI Geopolitics and Entrepreneur Story – [59:42-66:04]
- Perplexity Lawsuits over Copyrighted News – [66:04-69:34]
- Meta Buys Limitless, Data Ownership Concerns – [69:34-72:27]
- Pope Leo XIV on AI and Human Dignity – [72:39-76:21]
- Job Cuts and Workforce Shifts – [76:53-80:51]
- Parent/Grandparent AI Literacy Survey – [80:52-83:19]
Listener Takeaways
- The AI platform race is intensifying: Google and Anthropic are gaining ground, OpenAI is under pressure, and credible voices now treat autonomous, self-improving AI as a near-term possibility.
- Practical AI literacy is more important than ever: Tools to automate workflows and qualitative research are evolving fast, but come with growing risks and responsibility (privacy, security, critical thinking).
- Society faces immediate challenges: workforce disruption, the need for strategic AI adoption, urgency in parent/educator awareness, and emerging political/religious debate.
- Vigilance is key with new tools (e.g., Meta’s hardware, app integrations in ChatGPT) — always ask who benefits and where your data goes.
- The pace and stakes of AI progress demand both optimism and caution—be proactive in learning, adapting, and contributing to informed discourse.
