Podcast Summary
Podcast: The Best SEO Podcast: Defining the Future of Search with LLM Visibility™
Host: Matthew Bertram (MatthewBertram.com, EWR Digital)
Guest: Jon Gillham (Founder, Originality AI)
Episode: How To Use AI Without Getting Deindexed With Jon Gillham
Date: November 24, 2025
Episode Overview
This episode explores the evolving relationship between AI-generated content and SEO, focusing on the risks, realities, and opportunities for marketers and business leaders. Matthew Bertram and Jon Gillham dive into how organizations can leverage AI for content creation without risking their site's visibility or falling foul of search engines like Google. The conversation ranges from the philosophical implications of AI’s impact on human knowledge, to the tactical frameworks and governance needed for responsible AI use in SEO.
Key Discussion Points & Insights
1. The Pervasiveness of AI-Generated Content
- Gillham’s Study: Originality AI analyzed thousands of AI Overviews (05:00) and found that:
- 15-20% of cited “Your Money or Your Life” (YMYL) content is AI-generated.
- This raises a “snake eating its own tail” problem: LLMs citing other LLM-generated content, compounding errors and degrading the quality of data across the web.
- “If AI is rooting itself in AI, there's a whole world of problems that can come from that.” (Gillham, 02:50)
2. Synthetic Data and Human Content
- Training Models:
- All major LLMs require authentic, human-generated content for training.
- Google’s deal with Reddit and Elon Musk’s approach to Twitter underline the need for large-scale, genuine human engagement data.
- Even casual tools like Grammarly introduce “micro-AI” into human text datasets.
- “For the rest of humanity, any human text, any human data set will have some amount of AI in it compared to sort of pre-2020.” (Gillham, 04:44)
- Education Analogy: Just as calculators became a permanent fixture in math class, LLMs will always be present for students and professionals.
3. Academic & Marketing Use Cases: Adapting to Change
- Academia:
- Slower to adapt but eventually must acknowledge that LLMs are co-pilots.
- Value of education may shift: “If your LLM can get a better mark than you, maybe it’s time to reconsider the value of that course of study.” (Gillham, 08:20)
- Marketing:
- Too many teams lack AI governance, leading to risky or spammy practices.
- Example: Interns spinning up APIs and flooding sites with unchecked AI content, leading to harsh Google penalties.
- “The risk owner, the business owner, is accepting this risk without knowledge or awareness...” (Gillham, 09:36)
4. Data & AI Governance Frameworks (12:00)
- Bumpers & Guardrails:
- Not all AI content is spam, but all spam in 2025 is AI-generated.
- Google’s E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness): AI-generated content is OK if it's useful and adds value.
- “If you're competing on just words right now, you're facing a challenge. You need to add value beyond just the words.” (Gillham, 13:43)
- Google monitors for “scaled content abuse”: publishing large volumes of AI-created content at high speed.
5. Google’s Response and Site Risk Thresholds
- Recent Updates:
- Google’s March 2024 spam update: manual deindexing of sites with excessive AI-generated content (16:09).
- On platforms like Medium and LinkedIn, more than 50% of content appears likely AI-generated; in Google's search results the share hovers around 20%.
- Sites with “helpful,” high-value AI content—not just mass-produced text—are less likely to be penalized.
- “When a large number of pages are getting published that is something that is easy for Google to identify...those sites are getting nuked.” (Gillham, 17:22)
6. Plagiarism & Hallucinations in LLMs (18:23, 26:15)
- Trends:
- Traditional plagiarism is declining; AI rewriting is supplanting copy-paste plagiarism.
- “Why would someone plagiarize when they can copy-paste from ChatGPT?” (Gillham, 26:36)
- Legal risks remain; decision thresholds (5-15% overlap) and proper citation are crucial (see the sketch after this list).
- LLM Hallucinations:
- LLMs sometimes invent sources ("hallucinate"), referencing non-existent pages or fabricating data.
- “Clearly, the LLM wants this thing to exist so it can cite it.” (Bertram, 18:58)
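To make the overlap threshold concrete, here is a minimal sketch of the kind of pre-publish check an editorial team could run. The 5-15% band echoes the thresholds mentioned above; the 5-gram shingling and function names are illustrative assumptions, not the method of any specific plagiarism tool.

```python
# Minimal sketch of an overlap-threshold check, assuming plain-text inputs
# and word 5-gram shingles. The 5-15% band mirrors the decision thresholds
# discussed in the episode; it is not the setting of any specific tool.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Lowercased word n-grams used as a rough fingerprint of the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_percent(draft: str, source: str, n: int = 5) -> float:
    """Share of the draft's n-grams that also appear in the source."""
    draft_grams = shingles(draft, n)
    if not draft_grams:
        return 0.0
    return 100.0 * len(draft_grams & shingles(source, n)) / len(draft_grams)

def plagiarism_decision(draft: str, source: str,
                        review_at: float = 5.0, reject_at: float = 15.0) -> str:
    """Map an overlap score onto the 5-15% decision band discussed above."""
    pct = overlap_percent(draft, source)
    if pct >= reject_at:
        return f"reject ({pct:.1f}% overlap): rewrite or cite the source"
    if pct >= review_at:
        return f"manual review ({pct:.1f}% overlap): check citations"
    return f"pass ({pct:.1f}% overlap)"
```

In practice the review and reject thresholds are a policy choice; the point of the episode is that someone with authority should set them deliberately rather than leave them implicit.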
7. Shifting Search Ecosystem: LLMs and Brand Visibility (20:00–22:30)
- From Google to LLMs:
- The "conversion funnel" is collapsing: research happens inside LLMs, transactions (for now) still occur on the web.
- Visibility in LLMs (“LLM visibility”) is key to future brand growth.
- Google's integration with YouTube, "Buy Now" features, and changing search patterns are responses to LLM competition.
8. Content Indexing, Penalties, and the Sandbox Effect (32:45, 34:12)
- Case Study (Horror Story):
- Sites were deindexed after a sudden influx of AI-generated posts; often the business owners were unaware their staff had automated content creation.
- “There's real pain...businesses laying off employees, livelihoods lost because a website got tanked because somebody...was taking on risks the risk owner...didn't understand.” (Gillham, 31:43)
- Google Indexing:
- New content takes longer to index.
- Less ranking movement between Google's core updates.
- Older content with accrued equity appears to weather changes better.
9. How AI Detection Works (37:30–40:47)
- Detection Process:
- Editorial best practices: require writers to work in Google Docs so the writing process and time spent can be tracked.
- AI detection is a binary classification (AI vs. human) with a confidence score, not a literal percentage of “AI words” (illustrated in the sketch after this list).
- Highly formulaic, overly structured writing or formatting anomalies can trigger AI detectors.
- Detection is not perfect; false positives and negatives are possible.
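A minimal sketch of how to read a detector's output, under the assumption (stated above) that detection is a binary classifier with a confidence score. The stylometric feature and the weights below are made-up placeholders, not how Originality AI or any other detector actually scores text; the point is only that the number expresses confidence in a passage-level label, not a share of "AI words."

```python
# Minimal sketch: a detector's output is a binary classification with a
# confidence score for the whole passage, not a count of "AI words".
# The feature and weights here are illustrative placeholders only.

import math
import statistics

def sentence_length_variance(text: str) -> float:
    """Toy stylometric feature: variance of sentence lengths (in words)."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

def ai_confidence(text: str) -> float:
    """Placeholder logistic model: very uniform sentence lengths push the
    score toward 'AI'. Intercept and weight are made up for illustration."""
    x = sentence_length_variance(text)
    z = 2.0 - 0.15 * x
    return 1.0 / (1.0 + math.exp(-z))

text = "Some draft to check."
score = ai_confidence(text)
label = "AI" if score >= 0.5 else "human"
# Report the score as overall confidence in the label,
# never as a percentage of the words that were written by AI.
print(f"{label} (confidence {score:.0%})")
```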
Notable Quotes & Memorable Moments
- “If you're accepting Grammarly edits, there's a little bit of AI getting added to that human text... for the rest of humanity, any human text... will have some amount of AI in it.” — Jon Gillham (04:08)
- “Not all AI content is spam. All spam in 2025 is AI-generated.” — Jon Gillham (12:21)
- “If you're competing on just words right now, you're facing a challenge. You need to add value beyond just the words.” — Jon Gillham (13:43)
- “The risk owner... is accepting this risk without knowledge or awareness... there are significant consequences when people are just turned loose.” — Jon Gillham (09:36)
- “Why would someone plagiarize when they can just copy and paste from ChatGPT?” — Jon Gillham (26:36)
- “Google doesn’t hate AI, but hates AI being the thing overrunning the search results with no extra value.” — Jon Gillham (15:03)
- “It’s always very, very, very tempting to find that shortcut and click a button that... doesn’t exist now. That makes this whole process easy.” — Jon Gillham (43:14)
Actionable Frameworks & Takeaways (42:37)
- AI Governance: Make sure your executive/leadership team understands where AI is being used and agrees to the associated risks.
- Human in the Loop: Always blend human input with AI-generated content for quality and compliance.
- Add Value: Don’t compete with just words—add proprietary research, tools, original images, or unique data.
- Stay Within Thresholds: Beware large-scale, rapid publishing of AI content—Google flags “scaled content abuse.”
- Ongoing QA Process: Routinely run plagiarism and AI checks; manually review flagged content and properly cite sources (a sketch follows this list).
- Be Patient: New content is taking longer to index; expect less volatility except during Google core updates.
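For the ongoing QA step, a minimal sketch of a recurring review pass is below. The endpoint URL, JSON field names, and thresholds are hypothetical placeholders; a real implementation would use your detection vendor's documented API (for example, Originality.ai's published endpoints) and your own policy values.

```python
# Minimal sketch of a recurring QA pass, assuming a hypothetical HTTP
# scanning endpoint (SCAN_URL) that returns JSON such as
# {"ai_confidence": 0.87, "plagiarism_percent": 3.2}. The endpoint,
# field names, and thresholds are placeholders, not a real vendor API.

import os
import requests

SCAN_URL = "https://example.com/api/scan"           # placeholder endpoint
API_KEY = os.environ.get("CONTENT_SCAN_API_KEY", "")

AI_REVIEW_THRESHOLD = 0.80         # queue for human review above this
PLAGIARISM_REVIEW_THRESHOLD = 5.0  # percent overlap, per the episode's range

def scan(text: str) -> dict:
    """Send one draft to the (hypothetical) scanning endpoint."""
    resp = requests.post(
        SCAN_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def qa_pass(drafts: dict[str, str]) -> list[str]:
    """Return the slugs that need manual review before publishing."""
    needs_review = []
    for slug, text in drafts.items():
        result = scan(text)
        if (result.get("ai_confidence", 0.0) >= AI_REVIEW_THRESHOLD
                or result.get("plagiarism_percent", 0.0) >= PLAGIARISM_REVIEW_THRESHOLD):
            needs_review.append(slug)
    return needs_review
```

The design intent matches the governance advice above: the script only flags content for a human reviewer, it never decides on its own what gets published.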
Timestamps for Important Segments
- [02:01] – Jon Gillham introduces the problem of LLMs referencing AI-generated sources
- [05:06] – Philosophical implications: all future text data will be part AI
- [09:18] – Importance of governance and agreed-upon AI policies
- [12:21] – "Not all AI content is spam; all spam is AI-generated"
- [16:09] – Thresholds for deindexing, Google's recent crackdown
- [18:23] – Plagiarism and LLM hallucinations as risks in content marketing
- [26:26] – Decline of human plagiarism, AI as the new norm for “rewriting” content
- [31:19] – Horror story: loss of business due to unchecked AI content
- [37:32] – Fingerprints of AI-generated content and limitations of current detection tools
- [42:37] – Jon Gillham's top tips for responsible AI implementation in content strategy
Conclusion
This episode demystifies AI content’s risks and rewards for digital marketers and SEO professionals. The overarching message: AI offers incredible leverage, but requires clear strategy, consistent oversight, and a relentless focus on adding unique human value. As search moves from traditional engines to LLMs, marketers must audit their strategies and craft robust governance frameworks—or risk irrelevance, deindexing, or worse.
Resources & Contact
- Jon Gillham / Originality AI:
- Website: Originality.ai
- LinkedIn: Jon Gillham
- Twitter: @JoniginalityAI (per podcast)
- Matthew Bertram / LLM Visibility & EWR Digital:
- Website: MatthewBertram.com
- LinkedIn: Matthew Bertram
For marketers, business owners, and SEO professionals, this episode underscores that the future of search is about intelligent, ethical, and value-driven use of both AI and human creativity—because “if you’re not visible to the models, you won’t be visible to the market.”
