ABA Inside Track – Episode 296: Artificial Intelligence and ABA w/ Dr. David Cox
Date: December 18, 2024
Host(s): Robert ("Rob") Parry-Cruwys, Jackie, Diana
Guest: Dr. David Cox
Overview
In this episode, the ABA Inside Track team is joined by Dr. David Cox to explore the intersection of Artificial Intelligence (AI) and Applied Behavior Analysis (ABA). The conversation delves into foundational concepts in AI, its proliferation and acceleration in society, current and potential use cases in behavioral health, and critical ethical considerations. Dr. Cox, an early researcher at the AI-ABA nexus, brings both practical insights and a cautionary approach as the field of ABA faces an AI-infused future.
Key Discussion Points and Insights
1. Defining Artificial Intelligence (AI) (05:17)
- AI as an Umbrella Term:
"You can kind of think about it like ABA, where it's this umbrella term that refers to really any kind of system that hits at those two words...artificial [and] intelligence." (Dr. Cox, 05:17)
- Types of Intelligence Modeled:
It’s more than mimicking human logic—AI can imitate non-human intelligences (like swarm behavior in animals) and is used for “creating something new [generative] or discriminating between data [discriminative].”
- Generative vs. Discriminative AI:
Generative AIs create new content (e.g., ChatGPT), while discriminative AIs identify or categorize existing patterns (e.g., medical imaging for cancer).
2. AI in Behavioral Health & ABA: Current & Prospective Use Cases
- Historical Development and Acceleration (17:33):
AI research has ebbed and flowed, but recent leaps (notably LLMs like ChatGPT) were made possible by big data and advances in neural networks.
- Applications in ABA (23:16):
AI can enhance every phase of the patient journey: from diagnosis, to data collection, program selection, and administrative tasks.
- Example: Automating data collection—"That's the AI I want...I want AI to just get it all for me. That would be really cool." (Rob, 09:34)
- Example: Scheduling—AI can optimize therapist-client schedules and streamline admin tasks.
- Example: Clinical decision support—AI can analyze large datasets to better recommend therapy hours or programs.
3. How AI Models Are Created and Their Limitations (26:21)
- Learning by Example:
AIs learn via large datasets with known “answers”—for example, images labeled as ‘cancer/no cancer’.
- Importance of Training Data:
"These systems are only as good as the data that were used to train it...when companies are out there selling products...my first question is, cool, whose data is in there?" (Dr. Cox, 28:34)
- Black Box Analogy:
Using AI is like trusting a chef’s recipe: you know the ingredients and see the result, but might not know every step in between.
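The "learning by example" idea above can be sketched in a few lines: a toy nearest-centroid classifier that is "trained" on feature vectors with known labels, then assigns a label to a new, unseen input. Everything here is hypothetical—the feature values and labels are invented for illustration and are not from the episode or any real clinical dataset.

```python
# Toy sketch of supervised "learning by example" (hypothetical data).
# Training = compute the mean feature vector (centroid) per known label;
# prediction = pick the label whose centroid is closest.

def train(examples):
    """Build a centroid (mean feature vector) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid has the smallest squared distance."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Labeled training set: (features, known "answer") — invented values.
training = [
    ([0.9, 0.8], "positive"), ([0.8, 0.9], "positive"),
    ([0.1, 0.2], "negative"), ([0.2, 0.1], "negative"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # -> positive
```

As Dr. Cox's point about training data implies, this toy model is only as good as its examples: centroids computed from one population's data may classify another population's data badly.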
4. Unsupervised Learning & Discovering New Patterns (32:42)
- Dr. Cox is enthusiastic about “unsupervised machine learning” which can reveal unexpected patterns in complex behavioral data—potentially revolutionizing how interventions are matched to clients.
- Example: Possibly identifying new ‘clusters’ within the autism spectrum or more precisely tailoring interventions to client profiles.
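The kind of unsupervised pattern discovery described above can be illustrated with a toy one-dimensional k-means pass: the algorithm is given unlabeled values and no "answers," yet still recovers natural groupings. The data values and the choice of k=2 are hypothetical, chosen purely for illustration.

```python
# Toy sketch of unsupervised learning: 1-D k-means clustering.
# No labels are provided; cluster structure emerges from the data alone.

def kmeans_1d(values, k, iters=20):
    """Group 1-D values into k clusters by iteratively refining centroids."""
    # Seed centroids with evenly spaced sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabeled, invented "scores" — two groups emerge without any labels.
data = [1.0, 1.2, 0.9, 5.1, 5.0, 4.8]
centroids, clusters = kmeans_1d(data, k=2)
```

Real behavioral datasets are high-dimensional and far messier, but the principle is the same: clusters (e.g., candidate subgroups within a spectrum) are discovered from structure in the data rather than from pre-assigned categories.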
5. Costs, Scalability, and Practical Barriers
- Economic Realities:
Training large AI models is expensive (e.g., OpenAI is still running at a loss).
- Customization vs. Scale:
While individual clinics can use smaller, custom-trained AIs (e.g. for scheduling), the holy grail of a universally applicable, behavior-analytic LLM is likely out of reach financially for the field.
Notable Quotes & Memorable Moments (with Timestamps)
- On the pace of AI:
"We submitted both these papers technically before ChatGPT came out and the world like knew what it was, which is crazy..." (Dr. Cox, 17:33)
- On automation desires:
"I hate doing anything with a client and also collecting data at the same time. I could do one or the other..." (Rob, 09:34)
- On ethical AI:
“Every model is going to be biased to the data that it was trained on. Just be upfront about that.” (Dr. Cox, 45:02)
- On data privacy risks:
"Every time you put PHI or FERPA protected data [into public LLMs], you're technically violating the law. These are things people should be aware of." (Dr. Cox, 50:10)
Ethics: Opportunities and Dilemmas
Key Ethical Pillars (46:15)
- Transparency & Explainability:
- Users must know how models are built, what data is used, and their limitations.
- Equity:
- AI's cost may widen gaps—only large, well-funded clinics may afford advanced AI, exacerbating access disparities.
- Beneficence vs. Harm:
- If AI is making recommendations, an incorrect recommendation could be disastrous in a clinical setting.
- Autonomy & Consent:
- Clients must know and agree to data use, but true data deletion is almost impossible once included in model training.
- Lack of Regulation:
"Every tech company that's building AI, there really is no regulation right now, no motivation to follow any ethics code whatsoever." (Dr. Cox, 43:53)
- Informed Consent Gap:
“I think we also don't have a great system in behavior analysis right now of talking with clients about how we make clinical decisions...we don't really bring that into the informed consent process now.” (Dr. Cox, 47:50)
The (Probable) Future of AI in ABA (58:23)
- Short-Term (5–10 Years):
- Administrative tasks (scheduling, basic data processing) will be largely automated.
- Enhanced, semi-automatic data collection.
- More advanced clinical decision aids that can process complex, multi-program data.
- Improved matching of clinicians to clients (e.g., based on learning style, progress rates).
- Human Factor:
AI handles "the 99% that are obvious and it kicks the 1% to humans", allowing clinicians to focus on nuanced cases.
- Expanded Access:
Potential for AI to reduce costs and workload, allowing BCBAs to serve more clients effectively.
- Robots as therapists?
"I know there's a few people playing around with robots to deliver therapy. That seems incredibly unlikely...we're trying to teach humans how to interact with humans." (Dr. Cox, 59:18)
Getting Involved in AI as a Behavior Analyst (61:49)
Dr. Cox's 3 Tiers for Engagement:
- AI Developers: Learn to code, understand the math, and help create tools (for the tech-inclined).
- AI Power Users: Become an expert in using the available tools and integrating them into ABA practice.
- Beta Testers/Consultants: Work with companies as subject matter experts.
Quick Start Tool Recommendation:
- Motion (Calendar app): “When I move something to my schedule, it automatically rearranges the rest of my calendar...it's using AI...simple, not scary, and incredibly useful.” (Dr. Cox, 64:00)
Additional Insights
- Bias and Limits:
All AIs are only as good as their training data; models well-calibrated for one population may be useless or dangerous for another.
- Vetting AI tools:
Always ask: Whose data was used? How was it validated? What do the outputs actually mean for my clients?
- Rapid Change:
Researchers struggle to keep up: “By the time we got the reviews back, we were already two iterations beyond the system that was reviewed.” (Dr. Cox, 54:07)
- Hopeful Note:
Despite challenges and real risks, AI “is going to do a lot of really wonderful things for our field.” (Dr. Cox, 57:41)
Pairings and Related Episodes
Suggested Past Episodes (67:34):
- Ep. 15 – Technology and Safety Skills (with Dr. Nick VanLo)
- Ep. 25 & Ep. 93 – Virtual Reality in ABA
- Ep. 88 – Ethics of Telehealth
- Ep. 211 – Variety in ABA
- Ep. 224 – Teleconsultation (with Dr. Aaron Fisher)
Final Words
Contact Dr. Cox:
- LinkedIn: Dr. David Cox
- Email: Via Endicott College faculty page or through Rethink First
Memorable Closing:
"These are the stupidest AI systems we're ever going to interact with in our life...We have to talk about this. How are you using these tools? Where has it helped you? Where have you kind of run into some oopsies?...Let's just be open and honest." (Dr. Cox, 53:46, 54:11)
Fun Snack Pairing:
- Cookie “Bytes” (65:10, Diana)
For listeners new to AI, Dr. Cox’s message is clear: embrace the opportunities, respect the risks, and above all—maintain critical, ethical vigilance as technology becomes ever more intertwined with the practice of behavior analysis.
