The Prof G Pod with Scott Galloway
Episode: The Crackdown on Free Speech, How to Handle a Bad Boss, and Why Scott Took Down His AI Clone
Date: October 24, 2025
Host: Scott Galloway, Vox Media Podcast Network
Episode Overview
This “Office Hours” episode features Scott Galloway answering listener questions focused on three provocative topics: the UK’s crackdown on free speech online, strategies for navigating a bad boss, and the profound implications of AI’s role in therapy and relationships—including why Scott decided to take down his own AI clone. Galloway injects his signature candor and wit throughout, blending data-driven insight with personal experience and straight-shooting advice.
1. The Crackdown on Free Speech in the UK
[01:45 - 12:50]
Main Discussion Points
- Listener Question: With the rise of arrests for social media posts in the UK, what’s Scott’s take, given his American background and current UK residency?
- Current Laws: In the UK, about 12,000 people per year are detained for online speech offenses—roughly 30 arrests daily—under the Malicious Communications Act 1988 and the Communications Act 2003.
- Outdated Laws: Both acts predate social media and now capture even private WhatsApp messages, since in English law, “there’s technically no concept of a private conversation online.”
- Scott’s Irreverent Take:
- States his “blissful ignorance” of most UK politics but can't avoid political conversation entirely due to his internet presence.
- Relates it to the US context, describing the US as in a “shit show descent into fascism.”
Key Insights
- Freedom as a Hallmark of Democracy:
- “I generally think the hallmark of a democracy is that almost anyone should be able to say almost anything about almost anybody.” (Scott Galloway, 05:13)
- Emphasizes the difference between protected speech and illegal acts (like defamation or inciting violence).
- The Problem with Platform Immunity:
- Critiques Section 230 protections in the US: platforms shouldn’t get immunity when algorithmically elevating content.
- Suggests holding platforms to the same editorial standards as legacy media if they promote content beyond its organic reach:
- “If the platforms where all this shit is taking place no longer are protected by section 230 for algorithmically elevated content, I think 90% goes away.” (Scott Galloway, 07:32)
- Erring on the Side of Free Speech:
- Even vile or ugly online statements should rarely trigger a police response.
- “You err on the side of free speech. Platforms shouldn’t engage, shouldn’t connect profit to elevating this stupid, false and defamatory content beyond its organic reach.” (Scott Galloway, 10:47)
Notable Quote
- On Confronting Online Criticism:
- “This weekend I got very upset. Someone said something so wrong, false and incorrect and just mean… I wanted to weigh in. And of course you don't want to weigh in because that's what the algorithms want. But I believe they should have the right to say that.”
(Scott Galloway, 06:58)
2. How to Handle a Bad Boss
[12:51 - 22:14]
Main Discussion Points
- Listener Scenario: Seasoned employee, newly reassigned under a manager they suspect is antagonistic; seeking advice due to career ambition and fear of stalling.
- Workplace Reality:
- “Welcome to the workweek. Your ability to navigate assholes or bosses or people who don’t agree with you … is really important.” (Scott Galloway, 13:35)
- Calls workplace injustices inevitable: “That’s the only thing I can guarantee you in the corporate world—a series of injustices throughout your corporate life.”
Scott’s Playbook
- Continue Strong Performance despite the bad boss.
- Proactively Communicate:
- “It's okay to sit down with this person and say: ‘I feel as if some of our interactions or the way you approach me, it feels biased and it feels unfair.’ … Highlight in a very sober, unemotional way some examples.” (Scott Galloway, 15:14)
- Seek feedback: “Is there something I can do to improve this relationship?”
- Protect Your Position & Options:
- Explore internal opportunities to move to another manager.
- Before escalating to higher management, always raise the issue directly with the boss first, so the escalation doesn’t look like going behind their back.
- Market Check:
- “If you're doing well after eight years, I don't think it's a bad idea to do a market check and see what else is out there.” (Scott Galloway, 18:44)
- Notes that advancement often follows job-switching every five to seven years.
Notable Quotes
- On the Importance of Navigating Relationships:
- “Your ability to navigate them is kind of as important as doing a good job almost.” (Scott Galloway, 14:57)
- On Leverage and Longevity:
- “The fact that you've been somewhere for eight years and have done well means that you’re in a position of leverage. Because you’ve done well there for eight years. They don’t want to lose you and you have currency in the marketplace.” (Scott Galloway, 21:58)
3. AI in Therapy, Synthetic Relationships, and Why Scott Killed His AI Clone
[26:19 - 41:20]
Main Discussion Points
- Listener Question: Should we be concerned about partners, friends, or family using AI (like ChatGPT) for therapy or advice?
- Listener discovered his partner turning to AI for relationship advice and is worried about the isolating effects.
- Scale of AI Use:
- OpenAI reports nearly 700 million weekly users, 10 million paid subscribers, with therapy among the top use cases.
- Personal Story:
- Scott recounts building “Prof.G AI” (an LLM-powered clone of his advice style) in response to an avalanche of personal advice requests.
- Initially felt the AI gave good, passable (70–80%) advice and was used hundreds of times daily.
- Google Labs later created a more advanced “Prof.G AI” avatar—but Scott’s perspective changed before its launch.
Scott's Reflections & Concerns
- Dangers of Synthetic Relationships:
- “I worry that these synthetic relationships are making us less mammalian, that they are sequestering us from each other.” (Scott Galloway, 31:49)
- Concerned synthetic interaction reduces motivation to build real relationships, depriving especially young men of crucial socialization.
- Under-18s Should Be Shielded:
- “I don't think anyone under the age of 18 should be allowed to enter into a synthetic relationship. I don't think they have the maturity to handle it.” (Scott Galloway, 34:29)
- Comparison With Real Human Relationships:
- AI is “way too supportive, empathetic,” and doesn’t “give it to you real”—it won’t say, “Oh, shut the fuck up. Buck up, welcome to the real world.”
- Contrasts the messy, complex rewards of real friendship and love—building resilience and true happiness.
- Advice to the Listener:
- “Using this as a resource is fine. Using it as a relationship is not cool and it’s dangerous, it’s dumb, and you’re going to be more depressed.” (Scott Galloway, 38:03)
- Encourage open, honest, direct conversation with loved ones about the healthy boundaries of AI use.
Notable Quotes & Moments
- On Taking Down His AI Clone:
- “By the way, I decided to take down my character AI after 12 hours. I just got increasingly uncomfortable with it. I want young men to figure out a way to engage with other men.” (Scott Galloway, 40:30)
- The Value of Human Struggle:
- “Victory comes from the complexity and difficulty and friction of real world relationships. … And when you figure it out, it is like the universe just says, all right, you matter.” (Scott Galloway, 36:50)
Memorable, Candid Prof G Wisdom
- On Corporate Life:
- “A series of injustices throughout your corporate life … the only thing I can guarantee.”
- On Speech:
- “Almost anyone should be able to say almost anything about almost anybody.”
- On Synthetic Relationships:
- “I just hate synthetic relationships for anyone under the age of 18.”
Timestamps of Key Segments
- Free Speech in the UK: 01:45 – 12:50
- Dealing with a Bad Boss: 12:51 – 22:14
- AI Therapy, Relationships, & Prof.G AI: 26:19 – 41:20
Final Takeaway
Scott Galloway delivers practical advice and social commentary with his hallmark bluntness and empathy—whether it’s defending free speech, handling workplace politics, or raising alarms about the subtle dangers of AI-mediated relationships. For Scott, the complexity, friction, and reward of real human connection remain irreplaceable.
For more candid business, tech, and career insight, listen to The Prof G Pod every week.
