Episode Overview
Title: The biggest misconception marketers have about how AI models surface content
Host: Tyson Stockton
Guest: Chris Antinsky, SVP of Creative and Co-Founder at Fractal
Date: March 25, 2026
This episode of Voices of Search explores the misconceptions marketers have about how artificial intelligence (AI) models surface and evaluate content in search engines. Host Tyson Stockton speaks with Chris Antinsky about the true capabilities and limitations of current AI systems, how marketers often misunderstand their workings, and what the future might hold for transparency in AI-driven search.
Key Discussion Points & Insights
Misconceptions About Understanding AI Models
- Chris Antinsky immediately challenges the notion that marketers (or anyone) can fully understand what AI search models are doing.
- "I think anyone saying anything about understanding truly what these models are doing is lying. And tools that try to figure out things that are black box are never going to work well." — Chris Antinsky [01:10]
- Emphasizes that AI models remain a "black box," and even specialized tools (like AI text checkers) are fundamentally limited.
False Positives and Negatives in AI Detection
- Tools designed to determine if content is AI-generated, or to reverse-engineer AI search behaviors, inherently have high error rates.
- "There will always be a really high false positive and false negative rate." — Chris Antinsky [01:19]
The Dual-Sided Misunderstanding of AI Capabilities
- Marketers both underestimate and overestimate AI:
- Many fail to grasp the real shortcomings of current models, expecting either too much or too little from them.
- "I think people really need to consider the drawbacks of the current models and what they're actually capable of versus what they're not." — Chris Antinsky [01:30]
- “There are a lot of people that think they’re more capable of certain things than they actually are.” — Chris Antinsky [01:44]
Limits of AI Visibility and Data
Superficiality of Brand Visibility Metrics in AI Search
- Chris calls attempts to measure brand visibility within or across AI systems "super inaccurate."
- "I'm not even sure how valuable it is to really even do it... If you're like a brand that's existed within their training data sets, you're not going to be invisible to them." — Chris Antinsky [03:33]
- Marketers often misunderstand how and why their content appears (or doesn’t appear) in AI-generated results.
Ongoing Black Box Nature of AI
- Even if AI model providers wanted to offer deep insight into output reasoning, it's not currently possible.
- "It's always going to be really rough heuristics to try and understand that until we get to next generation systems where we can inspect their actual reasoning process, which I think is coming." — Chris Antinsky [04:03]
The Path Forward: Continuous Exploration and New Model Generations
- Need for Ongoing Exploration
- Optimization and understanding require constant hands-on work with AI models, accepting their current opacity.
- "Understanding the deficiencies and the capabilities both are really important and that takes exploration and working with them consistently." — Chris Antinsky [03:23]
- Optimism About Future Model Transparency
- Chris believes future AI model generations may offer deeper insight into internal reasoning, helping marketers better analyze and strategize.
Notable Quotes & Memorable Moments
- "Anyone saying anything about understanding truly what these models are doing is lying." — Chris Antinsky [01:10]
- "There will always be a really high false positive and false negative rate." — Chris Antinsky [01:19]
- "Understanding the deficiencies and the capabilities both are really important and that takes exploration and working with them consistently." — Chris Antinsky [03:23]
- "It's always going to be really rough heuristics to try and understand that until we get to next generation systems..." — Chris Antinsky [04:03]
Important Timestamps
- 00:46 — Introduction to Chris Antinsky and the core question about AI model misconceptions
- 01:10 — Chris breaks down the myth of truly understanding AI models
- 01:19 — On the unavoidable error rates in AI text detection tools
- 01:30–01:55 — Discussion of overestimation and underestimation of AI’s real capabilities
- 03:23 — Emphasis on the necessity of hands-on exploration to work within current AI limitations
- 03:33–04:03 — Skepticism about visibility metrics and optimism for future, more transparent AI systems
Tone & Language
- Candid and skeptical: Chris is direct about the limits of current understanding and the effective use of AI in search.
- Pragmatic and cautiously optimistic: While highlighting the existing problems, he also looks forward to technical advancements that will eventually provide more transparency.
Summary Takeaway
Marketers should not overestimate their grasp of how AI models surface content, nor rely too heavily on tools claiming to "crack the black box." The reality is complex and driven by opaque systems whose behaviors are impossible to fully reverse-engineer today. Effective SEO and content marketing strategies require continual experimentation with these models, balanced expectation management, and a watchful eye for future, more interpretable generations of AI.
