Intelligent Machines Podcast Episode IM 832: Surrounded by Zuck - Inside Google Gemini
Release Date: August 14, 2025
Hosts: Leo Laporte, Jeff Jarvis, Paris Martineau
Guest: Tulsi Doshi, Senior Director and Product Lead of Google Gemini Models
1. Introduction
The episode kicks off with hosts Leo Laporte, Jeff Jarvis, and Paris Martineau welcoming Tulsi Doshi from Google. They set the stage for a deep dive into Google's latest AI advancements, particularly focusing on the Gemini Models.
[00:00] Leo Laporte: "It's time for Intelligent Machines."
2. Interview with Tulsi Doshi on Google Gemini Models
a. Competition with Meta's AI Efforts
Tulsi discusses the intense competition in the AI sector, highlighting Mark Zuckerberg's efforts to attract Google's top engineers. She views this as a sign of the AI industry's rapid growth and the high stakes involved.
[05:36] Tulsi Doshi: "I think it's like a signal of how hot the space is."
b. AI Model Training and Compute Usage
She elaborates on the strategic allocation of compute resources across different phases of AI model training—pre-training, post-training, and inference—to optimize performance and efficiency.
[06:03] Tulsi Doshi: "There's the pre-training part ... post-training part ... inference time..."
c. Responsible and Safe AI
Tulsi emphasizes Google's commitment to responsible AI, discussing frameworks for ensuring model safety, mitigating harmful content, and protecting user privacy. She underscores the importance of continuous testing and improvement.
[08:17] Tulsi Doshi: "For us, safety becomes just a critical part of every step of the journey."
d. Personalization and Inclusivity in AI
Highlighting inclusivity, Tulsi talks about supporting diverse languages like Gujarati and ensuring that AI models serve a broad range of users, thereby democratizing information access.
[37:02] Tulsi Doshi: "I think there's a ton of value in terms of being able to create more equity, ... democratizing information."
e. Multimodal Capabilities of Gemini
Tulsi introduces Gemini's multimodal abilities, enabling the AI to understand and generate content across various media formats, including images, video, and audio, enhancing user interaction and versatility.
[17:53] Tulsi Doshi: "We want the models to be able to both understand and communicate in any medium."
f. Future Roadmap and Developments
Looking ahead, Tulsi outlines Google's roadmap for Gemini, focusing on enhancing usability, personalization, tool integration, and expanding multimodal functionalities to create a more versatile AI assistant.
[31:55] Tulsi Doshi: "Another goal is personalization ... making models more usable ... multimodality continues to be a priority for us."
3. GPT-5 Release and Community Reactions
a. User Backlash and Emotional Responses
The hosts discuss the recent release of GPT-5 and the significant backlash from users who felt the new model lacked the personable tone of its predecessor, GPT-4o. Many had come to treat the chatbot as a friend and were distressed when its behavior changed.
[50:35] Jeff Jarvis: "Some users treat the chatbot like a friend... were upset when the model changed."
b. Model Adjustments and Reversals
In response to the backlash, OpenAI restored access to GPT-4o for paying customers, highlighting the challenge of meeting diverse user expectations while continuing to innovate.
[48:20] Paris Martineau: "They brought 4o back, but only for paying customers."
c. Comparisons between GPT-4o and GPT-5
The discussion highlights the differing response styles of the two models: GPT-5 gives more concise, factual answers, while some users miss the more engaging, empathetic interactions of GPT-4o.
[51:46] Paris Martineau: "GPT-5 was very short, terse... great contrast from 4o."
4. AI Tools and Personal Experiences
After the ad break, the hosts share their experiences with AI-powered wearables such as the Limitless Pin, Fieldy AI, and Omi, discussing the practicality, ease of use, and limitations of these gadgets in everyday life.
[65:39] Jeff Jarvis: "These are the tools for the early stages of dementia."
5. AI Ethical Concerns and Industry Challenges
a. AI in Health Care Decision Making
The episode delves into the ethical implications of AI in healthcare, particularly Medicare's pilot program using AI to determine patient coverage. Concerns are raised about the accuracy, accountability, and potential for biased decision-making.
b. AI-generated Errors and Security Issues
Tulsi addresses issues related to AI hallucinations and security vulnerabilities. She explains how models can unintentionally generate harmful content or be manipulated through malicious prompts, posing significant risks.
[139:00] Paris Martineau: "It's not a hallucination. It's a mistake. Boo boo."
6. Company News and AI Developments
a. AI's Impact on Search and Content
The conversation touches upon the evolving landscape of AI-powered search engines, mentioning companies like Kagi and Perplexity. The hosts debate the balance between maintaining an open web and the rise of proprietary AI systems that could monopolize information access.
b. Open Web vs Closed AI Systems
A significant portion of the discussion centers on how AI affects the accessibility of information. The hosts express concerns over closed AI systems potentially restricting open web access, which contradicts the original ethos of the internet as a digital commons.
7. Conclusion
The episode wraps up with personal anecdotes, reflections on the rapid advancements in AI, and a look forward to upcoming guests and topics. The hosts stress the importance of understanding both the promises and perils of intelligent machines as they become increasingly integrated into daily life.
[198:56] Leo Laporte: "Thank you all for joining us. ... See you next time on Intelligent Machines."
Notable Quotes:
- [05:36] Tulsi Doshi: "I think it's like a signal of how hot the space is."
- [08:17] Tulsi Doshi: "For us, safety becomes just a critical part of every step of the journey."
- [31:55] Tulsi Doshi: "Another goal is personalization ... making models more usable ... multimodality continues to be a priority for us."
- [50:35] Jeff Jarvis: "Some users treat the chatbot like a friend... were upset when the model changed."
- [51:46] Paris Martineau: "GPT-5 was very short, terse... great contrast from 4o."
- [65:39] Jeff Jarvis: "These are the tools for the early stages of dementia."
- [139:00] Paris Martineau: "It's not a hallucination. It's a mistake. Boo boo."
- [198:56] Leo Laporte: "Thank you all for joining us. ... See you next time on Intelligent Machines."
This summary captures the key threads of Episode IM 832: Google's Gemini models, the community's reaction to GPT-5, ethical considerations in AI deployment, and hands-on experiences with emerging AI tools. The timestamped quotes ground each theme in the speakers' own words, giving listeners a clear sense of the episode's main discussions.