Abundant Practice Podcast
Episode #695: AI Use in Private Practice, feat. Dr. Maelisa McCaffrey
Date: October 15, 2025
Host: Allison Puryear
Guest: Dr. Maelisa McCaffrey
Episode Overview
This episode dives into the rapidly growing topic of artificial intelligence (AI) in private practice, particularly how therapists can leverage AI ethically and effectively in their clinical and business processes. Host Allison Puryear and guest Dr. Maelisa McCaffrey (of QA Prep) discuss common use cases for AI among clinicians, legal and ethical boundaries (especially around HIPAA compliance), evolving state laws, ethical guidelines, and best practices for informing clients. The conversation is practical, candid, and focused on protecting both client privacy and therapists' peace of mind.
Key Discussion Points & Insights
1. How Therapists Are Using AI in Practice
(02:59–06:07)
- Documentation is the #1 use case: Clinicians are "using [AI] for documentation," especially for "treatment and progress notes," which is a primary pain point for therapists. – Maelisa McCaffrey [03:03]
- Expanded uses:
  - Assisting with diagnosis
  - Business templates (e.g., response emails)
  - Writing blog posts
  - Creating community resource lists
- Vetting AI output is crucial: AI tools often give "resources that don't exist, studies that don't exist, case law that doesn't exist." Always verify information from AI before relying on it. – Allison [04:43]
- Limits of current AI: Even AI systems using up-to-date information can make mistakes (e.g., using outdated names, inaccurate citation claims). – Maelisa McCaffrey [05:01]
2. What Not to Use AI For (HIPAA and Ethics)
(06:15–11:16)
- Never use public AI tools (like ChatGPT or Gemini) for client data: "If you are not using the same criteria for AI as you're using for your EHR, then you are using it unethically and likely illegally as well. … Do not use ChatGPT for individual progress notes or anything that would go into an individual client file." – Maelisa McCaffrey [08:46]
- "De-identifying" doesn't make it HIPAA-compliant: "People say, 'I'm de-identifying the data.' According to HIPAA, that is not possible. … A progress note is inherently identifying." – Maelisa McCaffrey [09:16]
- Client trust and ethical use: True client consent and transparency are foundational; using AI behind the scenes without consent can cause major ethical breaches and client ruptures. – Allison & Maelisa, Reddit example [10:11–11:16]
3. Recent Laws & State Trends in AI and Mental Health
(12:32–15:38)
- State legislation is emerging: Illinois passed strict laws stating that AI "cannot be publicized or promoted to be your therapist." AI can be used in mental health settings but must be monitored and approved by a licensed clinician; no unsupervised AI chatbots for clients. – Maelisa McCaffrey [12:56–14:53]
- Other states (Nevada, Utah) focus more on consumer data protections.
- Limitations and enforcement: There are "parts that might not be enforceable," but setting legal principles can be a good starting point. – Allison [15:38]
4. Professional Ethics, Client Consent & Communication
(17:42–22:12)
- Ethics codes lag behind technology: Most organizations are "rewriting their codes," but the main theme is that client consent is required for any AI use involving client records or work.
- How to get and document consent:
  - Talk to clients directly: "If I can give you anything… it's do talk with your clients about [using AI] in any capacity." – Maelisa McCaffrey [18:04]
  - Most clients are receptive: Overwhelmingly, clients say yes; the conversations are often brief and undramatic.
  - Use simple scripts and consent forms: "I have a HIPAA compliant system that is really secure, and I'd like to start using AI to help with your notes…" – Maelisa McCaffrey [19:48]
  - Document the conversation: Use consent forms (from your AI provider's resources if needed) and always note in your records that you had the consent discussion.
5. Evaluating AI Vendors & Protecting Client Data
(23:21–26:59)
- Not all HIPAA-compliant vendors are equal: Having a Business Associate Agreement (BAA) is necessary but not always sufficient; companies may still not act ethically.
- Ask tough questions:
  - "How is my client's data being used?"
  - Is it "anonymized" (a vague tech term) or truly "de-identified" (the legal HIPAA definition)?
  - Is data saved, reused, or combined with other clients'? Where is it stored?
- Vendor communication matters: Pick companies that are transparent, responsive, and aligned with therapist values.
Notable Quotes & Memorable Moments
- On AI's rapid pace: "It changes every week." – Maelisa McCaffrey [02:36]
- On unreliable AI output: "My name has changed… and I was using an AI… and when I asked it why it [used my old name], it couldn't even tell me." – Maelisa McCaffrey [05:01]
- On sharing client data with ChatGPT: "Please, please, please do not use ChatGPT… for individual progress notes or anything that would go into an individual client file." – Maelisa McCaffrey [09:40]
- On therapists' fears about AI use: "We value the privacy of our clients so much that sometimes we make assumptions about what's truly private and what's not." – Allison [22:12]
- On industry change: "I have personally met with most of these AI companies with the founders, and the vast majority of them are really good people with seemingly really good intentions—[but] they're not clinicians." – Maelisa McCaffrey [24:06]
- On asking tech vendors tough questions: "Anonymized is a more random thing that tech companies do that's better than nothing, but it's not the same [as HIPAA de-identification]." – Maelisa McCaffrey [25:09]
Important Timestamps
- 02:59 — Most common ways clinicians are using AI
- 04:43 — Importance of vetting AI output
- 06:15 — Boundaries: what NOT to use AI for, per HIPAA
- 09:10–09:40 — “De-identified” records and why that doesn't work
- 10:11–11:16 — Reddit story on AI use breach by a therapist
- 12:56–14:53 — Illinois AI/Mental Health law explained
- 18:04 — How to talk to clients about AI use and obtain consent
- 23:21 — Limitations of vendor BAAs and why further vigilance is needed
- 24:57–25:53 — Questions to ask AI vendors about client data
Final Thoughts
The episode closes by emphasizing the importance of staying informed as AI evolves, being transparent and communicative with clients, rigorously safeguarding privacy, and proactively questioning tech partners. The overall tone is balanced: optimistic about AI’s potential but vigilant about protecting clients’ trust, privacy, and autonomy.
For more on Dr. Maelisa McCaffrey’s resources: qaprep.com
Connect with Allison Puryear and resources for building a sustainable practice at abundancepracticebuilding.com.
