Gmail’s AI Assistant Update: What’s Really Going On
Podcast: The Last Invention is AI
Date: November 22, 2025
Episode Overview
In this episode, the host examines the ethical and practical implications of Gmail’s recent update, which enables its AI assistant, Gemini, to train on users’ email content and attachments unless they opt out. Drawing on personal perspective and outside research, particularly a detailed Malwarebytes Labs blog post by Pieter Arntz, the episode covers Google’s approach, user privacy concerns, practical opt-out steps, and the broader AI data dilemma.
Key Discussion Points & Insights
1. Google’s AI Advancements & Influence (00:00–02:45)
- Host’s Stance: Open admiration for Google’s AI leadership, particularly with projects like Gemini and their pioneering Transformer paper.
- “Everyone knows that listens to this podcast. I am very bullish on Google right now… But when companies do things that I think are perhaps unethical or wrong, I’m going to call them out.” (00:00)
- Context: Google’s powerful position in the AI industry makes their decisions highly influential for the tech ecosystem.
2. Gmail’s New AI Training Practice Exposed (02:45–05:30)
- Stealthy Update: Gmail’s smart features, now enabled by default, give Google access to users’ email messages and attachments for AI training.
- “Google has kind of stealthily added a bunch of features that allow Gmail to access all private messages and attachments… to train their AI models.” (02:55)
- Industry-Wide Implications: Other companies may soon follow, making this a trend to monitor, not just a Google issue.
3. Reasoning Behind Google’s Data Collection (05:30–07:10)
- Functionality Rationale: AI assistants like Smart Compose and Gemini need real user data to improve and tailor experiences such as AI-generated replies and enhanced inbox management.
- Privacy Concerns: Many users are comfortable with AI reading emails for replies, but less so with wholesale data retention and model training.
- “Google’s like, well, we want to make Gemini better, more useful for everyone. But I think that is a line that a lot of people don’t feel comfortable with.” (06:20)
4. Opt-In vs. Opt-Out – Ethical Considerations (07:10–10:10)
- Default Settings: Features are enabled by default; explicit consent is buried in settings rather than openly requested.
- Host’s Take: Advocates for active, explicit opt-in rather than stealthy default-on switches.
- “If you don’t manually turn these off, your private messages might be used for AI training behind the scenes.” (08:07)
- “The lack of explicit consent to me feels like this is not the right way to go about doing this… If you automatically opt them in, I think that’s quite shady.” (09:45)
- User Experience Trade-Offs: Disabling these features may result in losing access to smarter, more personalized AI tools.
5. How to Opt Out of Gmail AI Training (10:10–13:45)
- Transparency: The host walks through how to turn off Gmail’s AI data-sharing settings, noting that opting out means losing access to current and future AI features.
- Step 1: Turn off Smart features in Gmail, Chat, and Meet via Settings.
- Step 2: Turn off Google Workspace Smart features under the broader account settings.
- Step 3: Save changes and double-check that toggles remain off.
- Host’s Note: “Why are there two different places to go do this?… To fully opt out of feeding your data into AI training, both of those have to be disabled.” (13:20)
- Personal Choice: The host admits he will stay opted in to keep new features:
- “Personally, I will probably leave myself opted in because opting out makes you lose access to certain features.” (11:58)
6. The Larger Pattern: Data, AI, and Consent (13:45–17:30)
- Monetization Pressure: Tech giants leverage their massive data libraries for AI advancement—sometimes through borderline practices.
- Host’s Critique: Strong call for greater transparency and user respect, not only from Google but across the industry.
- “You need to be transparent with your users when you’re going to take their data. This doesn’t just apply for Google. This is a lot of different companies that are sitting on massive piles of data.” (16:40)
- Inevitable Conflicts: As AI’s financial and practical value rises, these privacy struggles will persist and reappear.
Memorable Quotes
- On default activation (Opt-Out Ethics)
- “If you don’t manually turn these off, your private messages might be used for AI training behind the scenes.” – Host (08:07)
- On balancing features with privacy
- “Personally, I will probably leave myself opted in because opting out makes you lose access to certain features.” – Host (11:58)
- On AI’s persistent need for data
- “This is not the last time you’re going to hear about [this problem]… we will continue struggling with this problem forever, essentially, now that AI is worth so much money.” – Host (17:09)
- On industry-wide ramifications
- “This doesn’t just apply for Google. This is a lot of different companies that are sitting on massive piles of data.” – Host (16:40)
Timestamps of Key Segments
- 00:00–02:45: Opening stance on Google, AI leadership, and ethical responsibility
- 02:45–05:30: Exposing Gmail’s new default data practices
- 05:30–07:10: Why Google says Gemini needs your emails
- 07:10–10:10: Opt-in vs. opt-out and the importance of user consent
- 10:10–13:45: Exact steps to opt out and host’s personal choice
- 13:45–17:30: Broader implications, host’s critique, and the recurring AI data dilemma
Tone & Final Thoughts
The episode blends technical clarity with a critical but fair tone, acknowledging technological progress while calling out “shady” user consent tactics. The host is transparent about his own choices, empowering listeners with practical knowledge and broader context.
Essential takeaway:
Smarter AI features come at a personal data cost, and users must decide their comfort level—while demanding greater transparency from tech companies.
