Podcast Summary
Everyday AI Podcast – An AI and ChatGPT Podcast
Episode: AI Hallucinations: What They Are, Why They Happen, and the Right Way to Reduce the Risk
Host: Jordan Wilson
Date: January 30, 2026
Episode Overview
In this fifth installment of the "Start Here" series, host Jordan Wilson demystifies AI hallucinations—the phenomenon where AI models confidently produce false or fabricated information. The episode addresses why hallucinations happen, how advances in large language models (LLMs) have reduced their frequency, and offers a practical four-step approach for individuals and businesses to mitigate the risk of hallucinations through best practices and effective human-AI collaboration.
Key Discussion Points & Insights
1. What Are AI Hallucinations and Why Do They Occur?
(04:41 – 07:50)
- Definition:
Hallucinations are instances where AI models generate confident but false, misleading, or entirely fabricated information, including made-up sources or citations.
- Underlying Causes:
- LLMs process vast datasets from the internet and apply human feedback—training them to be helpful, but not always truthful, assistants.
- The core mechanism: "super smart next token prediction" (06:01), which sometimes leads to plausible but incorrect outputs.
- Quote:
"At their core, AI models are trained to be helpful assistants...sometimes they are going to make things up because they want to be helpful more than anything else." – Jordan Wilson [06:01]
2. Hallucinations: A Feature, Not Just a Bug
(07:52 – 09:52)
- Creativity and strategic thinking in LLMs stem from the same processes that lead to hallucinations.
- Hallucinations are often a result of:
- Wrong model choice for the task.
- Insufficient or non-specific prompts.
- Overconfidence in AI-generated answers without human checks.
3. Progress in Reducing Hallucinations
(10:03 – 13:50)
- Rapid Model Improvements:
- Across the GPT model lineage, error rates have dropped dramatically:
- GPT-3.5 (2022): Fabricated up to 40% of academic citations.
- GPT-4: Down to 29%.
- GPT-5.2 (2026): Now at a 6.2% error rate, a 30% reduction from the previous iteration in just three months.
- Enhanced Context Handling:
- LLM context windows have expanded, reducing the likelihood of "forgetting" and hallucinations over long conversations.
- Quote:
"...think of the 3pm brain fog...that's how large language models had been...the context window is how much information a large language model can retain until it starts to forget. And potentially, when it starts to forget, it will start to hallucinate." – Jordan Wilson [12:41]
4. Why Hallucinations Still Matter for Business
(15:25 – 18:36)
- Widespread Implementation Without Training:
- Enterprises are rapidly adopting AI tools but often skip staff training, resulting in misuse and increased hallucination incidents.
- Real-World Impact Examples:
- Legal sector: 486 legal cases worldwide involving fabricated AI content, including 128+ lawyers cited for hallucinated filings.
- Notable incident: "Attorneys sanctioned for six fake citations in the Avianca case."
- Consulting: Deloitte refunded part of a government contract due to phantom citations in an AI-generated report.
- Quote:
"You have companies...rolling out AI access...to thousands, tens of thousands of employees, but not giving any best practice training or even education, which is why hallucinations are still rampant." – Jordan Wilson [15:48]
5. The Persistent Nature of Hallucinations
(18:36 – 20:44)
- Root Causes:
- Reflecting real-world inaccuracies: "A lot of times large language models are reflecting inaccuracies or...half truths that exist in the real world." [18:56]
- Filling gaps: LLMs default to filling in information if data isn’t available.
- Chat interface behavior: Models are rewarded for fast, assertive answers, sometimes at accuracy’s expense—but modern models now more often say, “I don't know.”
6. The Four-Layer Method to Reduce Hallucinations
(20:50 – 31:32)
Step 1: Custom Instructions & Prompting
- Set clear boundaries for AI behavior, e.g., instructing the model to admit uncertainty, requiring sources for factual claims, or asking it to output confidence scores.
- Use custom instructions/settings in AI platforms to encode these behaviors (a minimal sketch follows below).
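As an illustration (not a workflow prescribed in the episode), here is a minimal sketch of encoding "admit uncertainty, cite sources, give a confidence level" as a system prompt using the OpenAI Python SDK; the model name and instruction wording are placeholder assumptions.

```python
# Minimal sketch: encode "admit uncertainty, cite sources" as a system prompt.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "You are a careful assistant. If you are not certain of a fact, say "
    "'I don't know' instead of guessing. Cite a source for every factual "
    "claim, and state a confidence level (low/medium/high) for your answer."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
        temperature=0,  # lower temperature discourages speculative phrasing
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize the main risks of relying on AI-drafted legal filings."))
```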
Step 2: Retrieval-Augmented Generation (RAG) & Grounding Responses
- Integrate company or task-specific data into AI workflows so answers are grounded in known sources (a minimal grounding sketch follows this list).
- Major providers now allow connecting business datasets (e.g., OneDrive, Google Drive).
- Stanford 2024: RAG + human feedback can cut hallucinations by 96% compared to the baseline.
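As a rough illustration of the grounding idea (not the episode's setup), the sketch below retrieves the most relevant snippets from a small in-memory document set using naive keyword overlap and builds a prompt that restricts the model to that context; in practice, embeddings and a vector store would replace the toy retrieval, and the documents and prompt wording here are assumptions.

```python
# Minimal RAG-style grounding sketch: naive keyword retrieval over an
# in-memory document set, then a prompt that restricts the model to that context.
# The documents, scoring method, and prompt wording are illustrative assumptions.

DOCUMENTS = {
    "refund_policy.md": "Refunds are issued within 14 days of purchase with a receipt.",
    "support_hours.md": "Support is available Monday through Friday, 9am to 5pm Eastern.",
    "warranty.md": "Hardware carries a one-year limited warranty from the ship date.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Score documents by word overlap with the question and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"[{name}] {text}" for name, text in scored[:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, reply exactly: 'Not found in provided documents.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # The resulting grounded prompt would be sent to whichever model the team uses.
    print(build_prompt("How long do customers have to request a refund?"))
```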
Step 3: Active Verification Workflows ("Expert-driven Loops")
- Build a "second pass" system: use one model to generate a response and a separate model (or a separate call) to fact-check it, as sketched below.
- Particularly vital for high-stakes outputs such as legal or financial documents, public communications, etc.
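A hedged sketch of such a generate-then-verify loop, again assuming the OpenAI Python SDK; the model names, verifier prompt, and PASS/FAIL convention are illustrative assumptions rather than the episode's prescribed workflow.

```python
# Sketch of a two-pass verification workflow: one call drafts the answer,
# a second call checks each factual claim against the supplied source text.
# Model names and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def complete(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

def generate_and_verify(question: str, source_text: str) -> dict:
    draft = complete(
        "gpt-4o-mini",  # placeholder "generator" model
        f"{question}\n\nBase your answer only on:\n{source_text}",
    )
    review = complete(
        "gpt-4o",  # a different (ideally stronger) model performs the second pass
        "You are a fact-checker. For each claim in the ANSWER, state whether it "
        "is supported by the SOURCE. End with VERDICT: PASS or VERDICT: FAIL.\n\n"
        f"SOURCE:\n{source_text}\n\nANSWER:\n{draft}",
    )
    # Anything that does not clearly pass gets routed to a human reviewer.
    return {"draft": draft, "review": review, "needs_human": "VERDICT: PASS" not in review}
```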
Step 4: Observability & Traceability
- Review chains of thought/output steps.
- Treat the AI like a junior employee: always check its reasoning and sources before delivering results (a simple audit-trail sketch follows).
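One way to make that review practical (an illustrative sketch, not something specified in the episode) is to append every workflow step to an audit log so a human can trace which prompts, sources, and intermediate outputs produced the final answer; the record fields and file path below are assumptions.

```python
# Sketch of a lightweight audit trail: each generation step is appended to a
# JSON Lines file so a reviewer can trace prompts, retrieved sources, and outputs.
# The field names and file path are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class TraceRecord:
    step: str                       # e.g. "retrieve", "generate", "verify"
    prompt: str
    output: str
    sources: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_step(record: TraceRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one step of the AI workflow to an append-only audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_step(TraceRecord(
        step="generate",
        prompt="Summarize the refund policy.",
        output="Refunds are issued within 14 days with a receipt.",
        sources=["refund_policy.md"],
    ))
```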
Quote:
"If you have expert-driven loops and if you go through those four steps that I talked about, I think hallucinations are no longer going to be the big elephant in the room." – Jordan Wilson [31:08]
7. Final Takeaways
(31:32 – 32:01)
- Hallucinations Won’t Disappear Completely:
They are inherent in how LLMs work, but the risk can be managed and dramatically reduced.
- Key to Success:
The most effective organizations are not simply using the best models, but are systematically integrating verification and context control into their workflows.
"Hallucinations are a property that you need to manage, not something that you hope will get fixed one day." – Jordan Wilson [31:22]
Notable Quotes & Timestamps
- "At their core, AI models are trained to be helpful assistants...sometimes they are going to make things up because they want to be helpful more than anything else." – Jordan Wilson [06:01]
- "Think of the 3pm brain fog...that's how large language models had been...the context window is how much information a model can retain until it starts to forget. And potentially, when it starts to forget, it will start to hallucinate." – Jordan Wilson [12:41]
- "You have companies...rolling out AI access...to thousands, tens of thousands of employees, but not giving any best practice training or even education, which is why hallucinations are still rampant." – Jordan Wilson [15:48]
- "If you have expert-driven loops and if you go through those four steps that I talked about, I think hallucinations are no longer going to be the big elephant in the room." – Jordan Wilson [31:08]
- "Hallucinations are a property that you need to manage, not something that you hope will get fixed one day." – Jordan Wilson [31:22]
Useful Timestamps
- What are Hallucinations? – [04:41]
- Why Hallucinations Happen – [06:01]
- Model Progress Reducing Hallucinations – [10:03]
- Long Context Windows – [12:41]
- Business Impact & Legal Examples – [15:25]
- Why Hallucinations Persist – [18:36]
- Four-Layer Risk-Reduction Plan – [20:50]
- Summary & Closing Thoughts – [31:32]
Summary Table: Four-Layer Hallucination Risk Reduction
| Step | Action | Practical Tips |
|------|------------------------------------------|-------------------------------------------------------|
| 1 | Custom Instructions / Prompting | Ask the model to admit uncertainty, require sources |
| 2 | Retrieval-Augmented Generation (RAG) | Integrate company data and ground model responses |
| 3 | Active Verification / Second Pass Review | Double-check facts with a separate model or workflow |
| 4 | Traceability / Observability | Review the reasoning, chain-of-thought, and sources |
Overall Tone & Language
Jordan Wilson maintains an approachable, conversational, “cut the fluff” tone, emphasizing practical advice and clear analogies. He repeatedly stresses that anyone can reduce hallucinations by combining sound workflows with evolving model capabilities—no deep technical skill required.
Conclusion
This episode offers a robust, positive roadmap for AI users—from beginners to pros—on how to recognize, reduce, and manage AI hallucinations in both personal and enterprise contexts. By adopting intentional workflows, leveraging the latest platform features, and embracing a “trust, but verify” mindset, hallucinations can shift from being a major risk to a manageable nuisance.
Join the conversation: Visit StartHereSeries.com for free resources, community access, and more.
