Podcast Summary: The Joe Rogan Experience of AI
Episode: New Updates to GPT-5: Breaking Down the Latest Advancements
Date: August 18, 2025
Podcast: The Joe Rogan Experience of AI
Overview
In this episode, the host explores OpenAI's latest changes to its model picker and the updates introduced with GPT-5. The conversation covers new user controls for choosing among AI response modes, the return of legacy models such as GPT-4o, evolving rate limits, context-window improvements, and the ongoing debate over customizing AI personalities. The host also shares personal reactions and notable community feedback on these changes.
Key Discussion Points and Insights
1. OpenAI’s Model Picker Returns—With Upgrades
- OpenAI has reintroduced its model picker, following user complaints about previous limitations and the confusing multi-model setup.
- Now, users have clearer options and the ability to select responses suited to their needs, enhancing personalization.
Notable Quote:
“This is something that I personally have complained about for a lot of different reasons, but they've actually changed it.” (00:00)
2. Response Modes in GPT-5: Auto, Fast, Thinking, and Pro
- Auto: The default router—chooses the best model for a given query (router now reportedly fixed).
- Fast: Prioritizes quick answers for users needing speed.
- Thinking: Enables deeper, more thoughtful responses, suitable for complex queries.
- Pro: An exclusive premium mode ($200/month) offering higher quality and newer features, including advanced tools (e.g., higher-end video generation from Sora/GPT-5).
Notable Quote:
“You also have Fast, Thinking, and Pro. So Pro is actually an upgraded mode, I believe. If you want to get access to Pro, you have to pay $200 a month.” (04:08)
Memorable Moment:
The host expresses skepticism about the value of Pro and frustration about favorite older models moving behind a paywall.
3. Return of Legacy Models and the GPT-4o Phenomenon
- GPT-4o (previously deprecated) is back by popular demand, largely due to its “warmer” conversational style.
- Community-driven: Many users, especially those using AI for companionship or therapy, found GPT-4o’s tone comforting.
- OpenAI promises ample notice before future deprecation.
Notable Quote:
“People that use AI for like therapy said that they wanted this more... People that I don't know if they grew attached to like that particular AI model were upset about it.” (07:10)
Memorable Moment:
Host recounts a Ukrainian user’s emotional post about losing access to GPT-4o as a form of support during traumatic times. (13:00)
4. Rate Limits, Capacity, and Context Windows
- Thinking mode: 3,000 messages per week (generous for most; “power users” may still hit the ceiling and get bumped to Thinking Mini).
- Context limit: 196,000 tokens for GPT-5 Thinking (below some competitors' 1-million-plus-token windows, but still substantial).
- OpenAI states these limits may be adjusted “over time depending on usage.”
Notable Quote:
“Honestly, 3000 messages a week for the thinking model I think is probably great for most people. Maybe some power users will hit that limit and get bumped down to Thinking Mini, but…” (10:30)
5. “Show Additional Models” and Paid User Perk
- Paid users can now toggle on additional legacy and experimental models via web settings.
- The GPT-4.5 model (noted for its high GPU/computing demand) is now available only through the Pro tier.
Notable Quote:
“4.5 is only available to pro users. It costs a lot. It costs a lot of GPUs.” (12:25)
6. Debate Around Personality Tuning and Customization
- OpenAI is developing updates to GPT-5’s personality, aiming for a tone "warmer than the current personality but not as annoying to most users as GPT-4o".
- The host and Sam Altman both advocate for future models enabling users to customize AI personalities and levels of agreeableness, warmth, or professionalism, moving away from a "one size fits all" approach.
- The discussion cites past controversies about models agreeing too readily with users or having awkward personalities.
Notable Quote (Sam Altman via tweet, paraphrased):
"One learning for us from the past few days is... we just really need to get a world with more per user customization of model personality." (15:30)
Host’s Reflection:
“I for one would rather just pick what personality I want... so I think this is what Sam Altman understands and a lot of people agree with.” (16:45)
7. Industry Context and Similarities with Competitors
- OpenAI isn’t alone: The host notes that Anthropic’s Claude faced backlash when newer models performed worse than older ones for coding, illustrating a broader tension between innovation and stability.
- Users may get “attached” to certain model personalities or behaviors, and abrupt changes cause community frustrations across AI platforms.
Notable Quote:
“Claude and Anthropic actually went through a very similar thing. They came out with a brand new model and coders preferred the older model...” (21:35)
Conclusion
The episode offered a rich, candid analysis of OpenAI’s response to user feedback—balancing innovation with user attachment to legacy features. Key takeaways include the greater control offered to users, the drama and emotion tied to AI’s personality, and the movement toward customizable, user-driven AI experiences. The host’s personal anecdotes vividly illustrate how technical updates impact real lives, making this a valuable listen for anyone following the rapid evolution of conversational AI.
