
Hello, Grab Bag, my old friend I’ve come to pull from you again Because the field of ABA is always growing We disseminate studies for continued knowing And the research that was experimentally controlled With results told Within the sound of podcast...
B
Hey, everybody. Welcome to ABA Inside Track, the podcast that's like reading in your car but safer. I'm your host, Robert Parry-Cruwys, and with me, as always, are my fabulous co-hosts.
C
Hello, Rob. It's me, Diana Parry-Cruwys.
A
And it's me, Jackie, not Parry-Cruwys, aka MacDonald. And it's almost time for me to go second because we're in October.
B
Are we finally in October? I don't actually. When is this coming out? This comes out in October.
A
In October.
B
It's a spooky episode.
A
It's not, though.
B
It's not?
C
No. Are you sure mine's not spooky?
B
I don't think mine's spooky either. What is that? We're talking about?
A
We missed the mark.
B
We missed the mark. What are we talking about? Ghouls and goblins? No, we're talking about behavior analysis and behavior analytic research, where every week we pick a topic and discuss a relevant research article on that topic. Except for this week. This week we're not doing that. We're not doing that at all. We are going to be doing a grab bag.
A
Not that.
B
Not that. Wait, what's that I hear? What? I hear some dulcet tones. Hello, grab bag, my old friend. It's time for research to be read again. Oh, my goodness. You like that? It's our fall 2025 grab bag. If you are new to the show and you say, I like that thing you said about the episodes where there's a specific topic and relevant research articles, listen to last week's episode or next week's episode and you can skip this episode. But you'd be a fool to skip this episode. A fool, I say, because.
C
So, Rob, if. Yes, if you were hoping for the sounds of silence, you're in the wrong place.
B
No, not here.
A
You have to go to my house.
B
That doesn't work because we're all here. Everyone in your house is here. Now we are doing grab bag. And a grab bag pretty much means we pick a bunch of articles that we just wanted to talk about ourselves that either don't fit into another episode topic or we don't have enough articles on them to make an entire episode.
C
We've already done an episode.
B
We just did the episode on it and we're like, hey, here's a follow-up. So that's what grab bag is. We also sometimes will tell the story of it's a giant bag with all the articles and we randomly pull them out. But I. I got to be honest, guys, as much as I love a good bit, I think I want to retire the Russell Crowe grab bag bit. What do you say?
A
Sure, that's fine.
C
We know.
A
Yeah, we're done.
B
Okay, we're done with that now. So that's what grab bag is. That's our description of grab bag. Bag bag, grab bag, bag, bag. That's our grab bag description from here on out.
C
No more Russell Crowe.
B
No more Russell Crowe. So we're just going to get into it with our three articles. Diana, let's just do it like we normally do it. What three articles are we going to discuss in today's Grab Bag?
C
Okay, I can tell you what those are. They are going to be the following three articles. "The Effects of Group Virtual Training and Self-Monitoring on Leading a Meeting" by Blackman, DiGennaro Reed, Gunter, and Brerin. That was in JABA, 2025. Also, "Curriculum-Based Evaluation of Cultural Competency Coursework in an Online Applied Behavior Analysis Graduate Program" by Petrone, Napolitano, Miles, and Shanahan. That was in Behavior Analysis in Practice, 2025. And finally, "Do Persons with Intellectual and Developmental Disabilities Prefer to Save the Best for Last in an MSWO? A Preliminary Investigation" by Castillo, Frank, Crawford, Liesfeld, Doane, Newcomb, Roker, and Barrero. That was in Behavioral Interventions, 2022.
B
Oh, man. We just. We ran the table on some flagship journals.
C
That's right. Yep.
B
How amazing. Okay, well, I'll kick it off with the Blackman et al. article. That's the one that I picked. Why did I pick this one? Because we just wrapped up Supervision September. So I was in a bit of a supervision state of mind when I saw this article. It reminded me also of an article we discussed way back in a previous Supervision September on running good meetings. Actually, was that even Supervision September? That might have just been an episode on meetings. I can't even remember anymore.
C
I think it is.
A
52.
B
Yeah. So we talked about meetings. And for those of you really close listeners who listened to our interview with Dr. Cranick and Dr. Anzik on what really works in supervision, we mentioned this article as a recent one to come out, one of the articles discussing how to use what we know about supervision to do an experimental design to see if it actually changes behavior. And I mentioned there that we'd be talking about that article again, and surprise, it's today. So if you were one of those people who didn't listen to the grab bag, don't you feel so foolish right now, because you missed us talking about Blackman et al. as a follow-up to Supervision September? I feel so bad for you right now because you're about to miss this. Here it comes. So what if they're here?
C
They're hearing it.
B
Well, if they're hearing it, then they're just. Yeah, they're probably doing what I'm doing, which is like those people who aren't hearing it. Shame, shame.
C
Okay, so it's like when your palms itch or your ears burn or something, and you know there's a reason, but you don't know what it is, because.
B
Podcast fomo.
C
Yeah.
B
So what's a meeting, everybody? Well, it's a common work activity where managers and employees get together. It's a time when two or more people meet to discuss a topic. Often managers lead about three of these great times a day, with employees attending anywhere from one per day to about 11 to 15 a week, which is a newer estimate. CEOs might have as many as 37 meetings in a week.
C
This is way too many meetings. Too many meetings.
B
There are more meetings.
C
I want to count how many.
A
I want to see how many meetings I have on average.
B
There are probably more meetings since COVID, since we all learned that, oh, I can have a meeting anywhere I am and make everyone get on their phone and get on Zoom or Teams or whatever. And their meeting can be in a parking lot or the bathroom.
C
So everyone's got to Zoom from the bathroom.
B
I mean, if my CEO who has 37 meetings is like, I got to squeeze in meeting 38, I'm not going to tell them, sorry, I'm in the bathroom, I'm going to go to the meeting. What are you going to do?
A
I have 20 meetings next week.
B
Wow, you're almost like a CEO. Like half of a CEO. The problem with meetings is, even though we have such lofty goals as sharing meaningful information, generating ideas about the future of the company or the work, solving problems, doing employee supervision activities so we all become better supervisors, or improving the interpersonal relationships with your co-workers, most people think these are a giant waste of time. The Organization for Economic Cooperation and Development in 2017 did a survey, and 32 to 71% of respondents said what a waste of time these meetings are, how ineffective and unproductive. In a different survey, half of the people just complained about how they don't like meetings. And the US alone spends anywhere from 70 to 283 billion dollars on meetings and loses about 15 to 50% of employee productivity when meetings are no good. So it costs a lot of money and wastes a lot of time when your meetings are bad. How does it cost money, you say? Are they just, like, well-catered meetings? Well, no. The problem tends to be the cost.
C
Did you get Qdoba?
A
I love Qdoba.
B
The cost in employee performance, the cost in job satisfaction, the cost in stress, and the cost of people just going to meetings and feeling like nothing got resolved: I don't know what I'm supposed to do next, and therefore I'm not happy with my job. The problem with meetings also becomes that you're wasting everyone's time, and then after the meeting, everyone's exactly where they started, but a little sadder about their lives and their jobs. And over time, if you continually have bad meetings, it actually punishes good meeting behavior and reinforces bad meeting behavior, meaning more and more people show up late to the ineffectual meetings, which means more and more people will also show up late, which means the meetings might actually take longer and be more ineffective because not everyone is there. People engage in more maladaptive behavior, because why would I bother paying attention to this worthless meeting when I can shop on Amazon using my computer and still look like I'm present? And it increases conflict between team members, because everyone is really frustrated and angry. So it's like meetings become a CMO-R and you just engage in whatever behavior will get you the hell out of there. Now, on the one hand, we might say, all right, let's not have meetings anymore. However, there is a line of research looking at how we can make meetings meaningful. How can we improve what happens in meetings? And like I said, we did a whole episode about improving meeting behavior, which was a lot of sort of discussion-style "I've done this in my organization and it seems to work, so you could try these things" recommendations. Some take-homes, but not necessarily direct experimentation on whether these recommendations actually change meeting behavior. Some of the recommendations, just as a quick review: before the meeting, it's important, if you're the leader of a meeting, to decide the purpose of the meeting and who needs to come to the meeting.
You need to have an agenda of what will happen at the meeting. You need to determine how often the meeting should happen to meet its goals. And you also need to be aware of what the goal of the meeting is going to be and whether or not it's worth having a meeting to meet that goal, or whether you could just send a quick email or phone call and achieve the same goal during the meeting. You have to be able to lead a discussion and manage employee behavior. You need to set the tone of the meeting by making sure that everyone's participating. You start the meeting on time and you engage everyone in meaningful conversations towards a goal. And after the meeting, you need to be able to follow up with everyone to increase the chance that everyone does what you said would happen at the end of the meeting and to solicit feedback about how well the meeting went so that you can continue to improve your meetings. Again. That's a lot more work than just we're having a meeting. And then what do you all want to talk about at this meeting? We have a problem, let's solve it. Which is not a good meeting. And this is for in person meetings. Now here's another problem though. Do we need to do things differently for virtual meetings? Most of us would probably say, do you think in person or virtual meetings are better or worse? Many of us might say, probably virtual meetings are worse because it's a lot easier to interrupt people, it's a lot easier to zone out in meetings, and it's a lot easier to get off task because nobody knows what you're looking at on your computer. Are you looking at the meeting or are you looking at your to do list and doing other work? Right. So what can we do about this? We have some recommendations of how to improve the behavior. The question is, can we use these recommendations and can we use the technology we have for training to improve people's meeting behavior? 
So the goal in this study was to look at using technology to train a work behavior, specifically leading a meeting and then seeing if you could do this training in a group or in a virtual setting and improve meeting behaviors. So they threw all that together to make two experiments, which is not my favorite design. I like one experiment.
A
You hate two experiments.
C
Oh my goodness.
B
This one wasn't too bad though.
A
I can't believe you picked it.
B
Kind of looking at different things. I didn't know there were two experiments at the time. And I almost said I'm not doing this article out of principle, but I did it and I'm glad I did. So the first experiment looks at how we can use group virtual training alone and then add a self-monitoring system to improve meeting-leading behaviors. And experiment two was like, let's just do all of that at once and see if it's faster. Ta da. The end. So experiment one, we have six supervisors who said, I run meetings as a supervisor. Two worked in behavioral healthcare organizations and four were in a business office for school districts. Everything in this study was over Zoom. There were five scripts created for research assistants to use to demonstrate common meeting attendee behaviors. So again, they would be things like: in this script, I'm going to have an interpersonal conflict. In this script, I interrupt all the time, or I go off task, or I don't respond to anybody, or I'm participating well. And then there was a script in which the technology just screwed up and broke and you had to deal with it. So they pretty much put these all together to create these virtual trainings, which used many of the techniques we've often talked about. There was a great PowerPoint that described those effective meeting-leading behaviors, per the LeBlanc and Nosik article that we talked about on that previous episode; that's a 2019 article. There was a rationale for why you would do each of these things, each of these steps in preparing, opening, managing, and closing a meeting. There was a video model with on-screen text for each of the behaviors. So we always want to have a model, and we're doing our kind of behavioral skills training setups. In this case, there were six different models with descriptions that showed what it looks like to do this effectively.
The trainings themselves, after all of that, were about, you know, 18 to 22 minutes long, depending on which company they were making the training for, because they had different business roles. So they wanted to tailor the meeting contents to those sites. They also gave guided notes to everybody with the content of the training. And then during the training they did a group activity where they had the different scenarios that could happen during a meeting, individualized to the different places of employment. And then they gave everyone a self-monitoring checklist afterwards as a follow-up, with a list of behaviors to use before the meeting and a list of behaviors to do during the meeting, and then a place to check off "I did" or "I did not do" these things during the meeting. What they were looking for as their dependent variable, and this was true across the experiments, was meeting fidelity: how accurate were the participants in using the steps of planning and leading their meetings in a role-play activity. And then they do kind of an observation. I'm not going to go into all of the details of what goes into a meeting. You can go back to one of those old articles.
A
We've all been in meetings too.
B
We've been in meetings, but in terms of the things, it's things like, you know, what's on your agenda? How are you making sure everyone communicates? Some of the things we already talked about. It goes into a little more detail in the article, but for each of these behaviors, you scored either a 0 because you didn't do it, or a 1 if you did it, or an NA if, you know, for some of the things like the tech issue, it just didn't happen. That wasn't part of the script in the role play. So they do these experimental sessions after the training where they take 15 minutes to say, here's your meeting. There was an experimenter and three actors plus the participant themselves. And they said, all right, we're all going to log into this virtual meeting, and the script laid out at what time people should do what in the role play. And the participant would just run the meeting and respond to those things as they were trained to do in the kind of the PowerPoint and with all those models. The role play started when everyone logged in and it ended when the meeting ended. So they pretty much ran a pretend meeting with a script of what to do to see how the participant responded following the training. This was a nonconcurrent multiple baseline across participants design. In baseline they said, just run a meeting until you're done, and then you say "that's all I have," and then that's the end. That was your baseline. They ran this until, hey, you are not getting any better at meetings just by running more meetings. Which is no surprise to anyone. Then they do the group virtual training where they'd have guided notes for everyone. They went over the group activity, they went into the training, they did the virtual training. Like we said, the video models themselves were about 18 to 20 minutes of the training. The entire thing took about 90 minutes. There was the presentation.
Then they do an activity in breakout rooms where they do a collaborative activity together and answer questions and share. Then they had kind of like these six questions that they asked the participants, and then opportunities to ask the experimenter questions. And then they said, all right, look through your guided notes and send us this information about how the activity went after the training ends. Right. They got all that stuff together. And then they do post-training, where they do that virtual training activity we just talked about, to see how well they ran the meeting. There was no extra feedback here, and either you met kind of a stability or mastery criterion or you didn't. And if you didn't, a self-monitoring sheet would be sent to you. There was a five-minute phone call with each of the participants to sort of discuss: what's the self-monitoring sheet, how can you use the self-monitoring sheet, reviewing what's on there, describing it, telling them how to fill it out. And then again they go back to running those role-play meetings and sort of see how they did. Then they did some generalization probes where there was a topic selected by the participants, like "I have to do a meeting on blank." And then they would do one during baseline and one during post-training or the self-monitoring. Then they did some social validity and a perceived effectiveness study. So they would send the meeting attendees a list of kind of questions based on this Perceptions of Supervisory Support Scale to be like, hey, how well did that meeting go, do you think? To get a sense: if you changed your meeting behavior, that's good. If nobody cares, that's bad. So what happened? Well, certainly what happened was a lot of people weren't great at running meetings. During baseline it was about 50% of the steps they were supposed to run.
After the virtual training, everyone increased to a mean of about 79%. Two of them were above 80%. For everyone else they added the self-monitoring, and then everybody went over 80% accuracy in terms of doing all the meeting steps. And for four of them they showed generalization when they said, great, do a different meeting. One was on an increasing trend and one never did post-training because they withdrew from the study. So overall good generalization, but not everybody, for reasons we said. Five respondents who responded to the social validity survey said, you know what, I like this, I think it was helpful. I think the virtual training was acceptable and beneficial and effective. I really liked having a self-monitoring sheet. Whether people thought they ran better meetings? Ah, it's kind of mixed. Some people said yes, these meetings are better; some said not really any different, or they weren't that great. So it looks like you can use virtual group training to teach good meeting-leading behaviors. It seemed like the self-monitoring was pretty important for everybody. So that's where experiment two came from. They hadn't really come up with a great stability criterion for their self-monitoring phase for everyone, and there was some attrition. They modified the training to make it a little quicker, a little shorter. And then they added the self-monitoring, so they made it more of a package rather than different parts that were presented sequentially. In this case we had four other supervisors, one at a new university and the rest at the same businesses where they'd done the previous study, and they did pretty much the same study except they added a portion about creating and sending an agenda, and they used Google Docs instead to allow people an easier time sending documents between the experimenters, so they could see what was in their agenda, what was in the minutes of their meeting, how they were following up.
Everything else was pretty much the same. And then they only did a baseline and a post-training, because they put stuff together. They also added some students or non-participating employees so that they could make sure their groups were big enough in the trainings, because they had fewer participants in experiment two. And they did a follow-up talk about self-monitoring with each participant rather than a separate phone call later on. Everything else was kind of the same. They also added an external rater. In this one they found someone who published an article about running effective meetings, and they did kind of a survey about how well the participants did in some of the clips they shared. We got pretty similar results in that everyone was kind of crummy and variable, but a mean of about 62% accuracy on that checklist of "did you run a good meeting." Afterwards, we saw an increase in their accuracy in running a good meeting and a lot of stability at around 88%. It took only about two to three training sessions after the training, so practice sessions I should say, to get criterion for all the participants. And we saw, you know, pretty similar good generalization as we did in experiment one. Overall the people liked the procedures that were run. They thought they were helpful. The perceived effectiveness survey was a little bit better for one of the managers or meeting runners and a decrease for the other. So again the challenge is you get some mixed results there, and you also had mixed results with the external rater. Two of the skills, the rater said, improved. Some of the skills got different levels of "yes, this is better" or "actually, this is worse." So in summary, it does appear that you can teach new leaders to run more effective meetings according to these criteria. Not too surprising. We would like to think you could teach that using behavioral skills training-type experiences.
The addition of self-monitoring and running all this virtually was kind of a new addition. So that could save a lot of time in terms of your training. And self-monitoring can be a very effective addition, so why not just put it in to start? However, here's one of the limitations. While it does seem to improve the ability to use meeting skills that should create more effective meetings, it's not actually clear which of the behaviors that were learned had any impact, if any, on whether people liked the meetings better. So if you're not really sure these behaviors lead to better meetings, it's really hard to say whether it's worth training some of these skills. It must be, because some people felt the meetings were better, but some people didn't. So there's still real work to do here. One, we need to look at more natural conditions. Maybe this was kind of too fake a setup overall, and it didn't capture some of the intricacies of running meetings. Maybe there are pieces of the meeting agenda, or the meeting training agenda, that need to change to focus on the ones that actually lead to better outcomes. We're not actually sure which outcomes are considered better by anyone else. And does it actually do anything to run a good meeting? Do you increase productivity in your company, or do you just make people less miserable being in your meetings? Because while that's good, if you can't actually make your products better or make your services better by running meetings, do we need meetings at all? So, still a lot to figure out around this. But at the end of the day, if you want to learn how to run good meetings, according to other professionals, you can do that. Does it matter? Question mark. The end. And that's my grab bag.
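As a side note on the measurement here: the fidelity scoring described above (each checklist step marked 1 if done, 0 if not done, NA if there was no opportunity, with accuracy as the percentage of applicable steps done) can be sketched in a few lines. This is just an illustrative sketch of that scoring rule, not code or data from the study; the function name and the example scores are made up.

```python
# Illustrative sketch (not from the study): computing meeting fidelity
# from a checklist where each step is scored 1 (done), 0 (not done),
# or "NA" (no opportunity for that step in this role play).

def fidelity(scores):
    """Percent of applicable checklist steps performed; NA steps are excluded."""
    scored = [s for s in scores if s != "NA"]  # drop no-opportunity steps
    if not scored:
        return 0.0
    return 100 * sum(scored) / len(scored)

# Hypothetical 8-step session where one step (e.g., the tech issue) never occurred:
print(round(fidelity([1, 1, 0, 1, "NA", 1, 0, 1]), 1))  # prints 71.4 (5 of 7 applicable steps)
```

Session percentages like the baseline ~50% and post-training means (~79%, ~88%) reported in the episode would each be a per-session value of this kind, averaged across sessions.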
A
Good work. Good work. Good work, Rob.
B
Thank you. Thank you. All right. And since I'm done talking, I'm going to take a break. You listeners can take a break as well. And when we come back, Jackie and Diana dig deep into their own grab bag dreams. We'll be right back.
A
Hi. Do you want to be a BCBA, also known as a Board Certified Behavior Analyst?
C
Sure. We all do.
A
Now you can come to Regis College in Weston, Massachusetts to get your graduate degree just minutes outside of Boston.
C
Choose from any one of these: Master
A
of Science in Applied Behavior Analysis, Master
C
of Science in Special Education, dual degree in Special Ed and ABA, or be
A
eligible for your post-master's certificate.
C
You can complete your degree and be ready to sit for the exam in two years.
A
And our 2022 graduates had a 92% pass rate on the BACB exam.
C
Come enjoy approved fieldwork placements, ethics mini-handbooks, PhD-level professors, small class sizes, and a service trip to Iceland if interested.
A
And don't forget, our program is accredited by the Association for Behavior Analysis International, or ABAI, as a Tier 1 master's degree program.
C
Don't delay. Supplies are limited.
A
Learn more at regiscollege.edu. Again, that's www.regiscollege.edu. One more time, www.regiscollege.edu. See you there. Bye.
B
And we are back with our fall 2025 grab bag. For all you listeners out there, we just want to remind you before Jackie and Diana dig into the bag that ABA inside track is.
A
We don't have a bag anymore, Rob.
B
I know, I know. I don't want to get rid of the bit completely.
A
Okay.
B
ABA Inside Track is ACE and QABA approved. By listening, you're able to earn one learning credit. All you need to do is, you know, listen to the episode and then go to our website, ABAInsideTrack.com, and you can go to the Get CEUs page, or you can click the link that's in your podcast player right now. You need to enter in some information about, you know, yourself, who you are, and all that jazz, as well as two secret code words that we are sprinkling in the episode. And sprinkle, sprinkle, here's the first code word. It's rosemary. R-O-S-E-M-A-R-Y. Rosemary. It's an herb, or a herb, depending on where you live. Right.
C
Yeah.
B
Tea or something. What do you do with bread?
C
No, yeah, you can put it in a bread, you can put it in tea.
A
Lavender tea is delicious.
B
Okay.
C
It's rosemary.
A
Oh, rosemary tea is not delicious.
B
Oh, don't put it in your tea.
A
You can put it in bread. You can put it on tofu.
C
You can put it in.
B
Oh, tofu.
A
Italian food.
B
All right.
A
A lot of places. But do not make a tea out of it. I don't think that would be good.
B
But the place you need to put this lavender, you need to put it in the box that says code word: rosemary. All righty. Who's next? Jackie, you're up next. What will we be talking about?
A
We are going. As soon as you said that, I had the biggest yawn.
B
I know it's going to mean boring. Grab your pillow. It's a boring one.
A
This is a short and sweet one, actually. And it is. I think it's really fascinating. I was doing a deep dive into all of the articles around cultural responsiveness for a recent paper that I'm revising, and I found this one, and I think it's really neat. And so we all know, if you've listened to lots of our episodes, that cultural responsiveness is now hopefully becoming more prevalent in our field. And people are talking about it more and thinking about how we can train the skills needed for graduate students and clinicians to be more culturally humble in their practices. Right. And so culture really is defined as the way different groups experience the world around them. Right. And in case you haven't thought about it before, remember, people can be part of multiple cultures, right? I can be part of my neighborhood culture, my town culture. I could have a mom group culture, right? I could have a religion culture. And all those could intersect in different ways. And cultural awareness is really understanding your own culture and seeking to understand others' cultures. And so what we want ABA practitioners to be is culturally humble. And what that is is understanding that everyone is an expert in their own culture, but not necessarily the culture of others. So we need to be continually learning about our own culture and about the people around us and their culture, and not just assuming that we know what we're doing because we knew one person, right? I knew one person in a mom group, so now we know all the people in the mom group. And so this is a lifelong process of learning and responding to all the different changes in culture. And this is really essential if we're going to become effective practitioners. And this is a really nice introduction on culture. It's like three pages long. But they, you know, preface it by talking about the initial social validity study by Wolf and his colleagues. And they tie in the seven dimensions.
How, you know, it's inherent in all of the dimensions. I wanted to tell a little funny story. So as you all know, Diana and I work at Regis College and we teach the seven dimensions in one of our classes. And one semester we were all chatting, and our other colleague, Dr. Carcina, was like, we should add an eighth dimension, cultural responsiveness. And we're like, we should write a paper on that. And no joke, the next day a paper popped up in our Gmail that was like, should cultural responsiveness be the eighth dimension? I'm like, who's listening to us? So now whenever we talk about that...
C
That paper had been in the works for a long time.
A
It just was so funny. And we have it in this slide. Sometimes we're like, yeah, this is what we said; here's what this paper said. But that was really funny. So sometimes when I talk about the seven dimensions, you know, we all say "GET A CAB." I say "GET A CAB C," so that we can include the cultural responsiveness. It doesn't really roll off the tongue, but, you know, people love that. And so they said it's really important to talk about culture when teaching practitioners, specifically when we're thinking about choosing behaviors to target, whether to increase or decrease, defining those behaviors, and what interventions we're going to use. And that all kind of culminates into what we call a functional behavior assessment a lot of the time. So culture is going to play a big part in that, right? Because culture plays a big part in what we do and why we do it and how we do it. And so sometimes behaviors may be deemed problematic by one culture and not another culture. And if we don't have that understanding, then we are, you know, probably going to be targeting the wrong behavior, because, you know, problem behavior is very subjective overall. And so a behavior analyst will come into a situation and a teacher or a caregiver might describe a behavior, right? And that description may not be culturally responsive or objective, right? Because there's a lot of things going on. And so behavior analysts have to learn how to pull those things out and ask the appropriate questions to get, you know, where they're at. And there may be setting events that have direct impacts on behavior; the great one they use in here is food insecurity. Right. So that probably will have a huge impact on, you know, whether students are stealing food or hoarding their food. And so we really should have trainings around cultural humility and cultural competence.
They call it cultural competence in here because the people we're talking about — we're teaching them the skills they need in order to be culturally humble; they don't know anything yet. So, okay, we'll see how they say that. And they said, you know, previous researchers just said, oh, we should include more readings on diversity, equity, and inclusion. And yeah, I think that's great, but we also need to do more. So they said one major shortcoming in training in this area is the lack of a direct way to measure students' behavior change. And they found a great one — I think what they did is very fascinating. So they looked at students in an online ABA graduate program, and they looked at how the students did on two complex case studies within a functional behavior assessment class, at week two and week eight. And they measured how students did, and whether they asked about specific cultural setting events, based on whether they took the cultural competency class prior to the FBA class. They also embedded an additional module in the FBA class, to see whether, after the cultural competency module within the FBA class, students would respond differently. Isn't that neat? Yeah, yeah, yeah. So, I mean, I thought it was really neat. It is, right?
C
Yeah.
A
I think it's neat. So all the students were in an eight-week class, right? They were mostly white and female — not surprising given the demographics of our field. Data from a total of 77 students were included. They had 43 students who hadn't completed the cultural competency course, so they were the guinea pigs, or the control. And then they had 34 who had previously completed the cultural competency course, earning a grade of B or higher. Right. And isn't that just interesting? They could figure that out because the students didn't have to take the same classes in the same sequence. Yeah. So they're like, oh, look — handy, isn't it? Yeah. And they said that of the 34 students who had previously completed the cultural competency course, 85% had completed the course in the semester immediately preceding the FBA course, and 15% had done it the previous academic year.
C
Okay.
A
Right. So here's where I'm like, wait — to publish this research, they used a non-experimental correlational design. We never use these! No. But you know what I love about this article? They felt like they had to put a graph in. They did not have to, but they put a graph in because they were like, it's behavioral, and, like, we need a graph.
C
Everyone wants a graph. You gotta see a graph.
A
So there's a graph, but you don't really need it. So part one looks at how the students did in class two of the FBA course, and whether they had taken the cultural competency class before or not. And then part two looks at how everyone did after the self-administered cultural competency module within the FBA class. Right. And the class they took was called Issues of Ethics and Cultural Diversity. It was a three-credit course, mandatory in New York State, offered twice a year. Yeah. And they talked about the coursework, too. They said the students made reflections based on readings and articles, and they did complex case activities. The development of the course was guided by the Multicultural Alliance for Behavior Analysts. They had seven standards, and those include self-awareness, making ethical referrals, cross-cultural application of ABA technologies, accommodating language diversity, encouraging a diverse workforce, and engaging in ongoing professional development. These were the central course objectives, and the activities surrounded these — these were the highlights, along with the ethics code. They also had a choose-your-own-adventure module where the entire class engaged in one topic, like, for instance, cross-cultural application. And the module contained examples of potential application and included reading a first-person account in the form of a video and a case example, which sounds really fun. I would love to see that first-person account video and what that looks like.
C
Me too.
A
Yeah. So there we go. So the dependent variable, again, was the student responses to the open-ended components of the FBA course. They specifically had to ask an interview question related to the complex case. And that served two purposes. They wanted to show whether students could show generality in the application of cultural competence — if they had already taken the class, would they then be like, oh, in my last class we talked about cultural...
C
Right?
A
Yeah, variables — maybe I should include it here. And they had two setting events which were related to culture and two which were not. So they wanted the students to choose the setting events that were cultural — and sometimes they didn't. And it was also a requirement for the program for them to do this. So those are the two reasons they did this. So case study one was at the end of class two, and the students were required to ask an interview question based off an indirect assessment, such as a functional assessment interview, when they were given four setting events about a specific client — two, again, were cultural and two were not. And case study two was very similar, but it was in class eight, after they took that module. Data were coded on which setting event was questioned, and they used a chi-square test for independence to look at the correlation. So here's the good news and the bad news. The good news is that after class eight, people who took the cultural competency class before the FBA class actually did better — 67% of them did better than the others. But the sadness here is that there was no difference after class two, with case study one, based on whether they had taken the cultural competency course before.
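As an aside for anyone curious what that chi-square test of independence actually computes, here is a minimal pure-Python sketch. The counts below are made up purely for illustration — they are not the study's data — and the variable names are mine.

```python
# Chi-square test of independence for a 2x2 table, pure Python.
# Hypothetical counts (NOT the study's actual numbers):
#   rows = took the cultural competency course before the FBA class (yes/no)
#   cols = asked about a cultural setting event (yes/no)
table = [[23, 11],
         [14, 29]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Under independence, each cell's expected count is
# (row total * column total) / grand total.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (observed - expected) ** 2 / expected

dof = (len(table) - 1) * (len(table[0]) - 1)  # (rows-1)*(cols-1) = 1
print(f"chi-square = {chi2:.2f}, dof = {dof}")
```

In practice you would hand the table to something like `scipy.stats.chi2_contingency`, which also returns the p-value; the point here is just what the statistic measures — how far the observed cell counts sit from what independence between the two variables would predict.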
C
Huh.
A
Right. So I was very sad about that. And here's where the graph came in, because it was really low, percentage-wise. But after they had that — let's call it a booster for the people who had already taken the cultural competency course and an introduction for the others — they were more likely to ask a culturally competent question. So I love that. Results suggest that embedding recurring themes throughout coursework may be beneficial.
C
Okay.
A
Right. It also might not need to be a standalone component. So you might not need to have a three-credit cultural competency class; maybe you just have to put it in all of your classes.
C
Yeah, yeah, right. Say like everyone's still in school, so they're still learning.
A
Learning, yeah.
C
Too.
B
And kind of to that point, Jackie, going back to the question of, is cultural humility kind of the eighth dimension? To some extent I'm like, well — not that it can't be — but I'd probably lump it under maybe generality: am I paying attention to the environment where this behavior will have value or no...
A
Value or applied or effective.
B
Definitely applied as well right now.
A
Or it's less about the.
B
It's less "this is a separate skill, and once you learn the skill, you're set," and more "this is something you just need to take into account, thinking about it all the time." So it's a part of every class, not its own thing.
C
Right.
B
I mean, it could be its own thing because maybe it's so much content.
A
But.
B
But could it be both? You know? And if you're like, well, that's not something I want to spend as much time on — okay, but you have to spend time on it in the context of using your clinical skills and improving your clinical skills. If you want to then write papers on it and really do deep research, okay, fine. But maybe you don't need that to still use the skill effectively to help others.
A
Right. Yeah. So I just thought that was a really out-of-the-box way to start thinking about how to incorporate culturally responsive skills into graduate programming. Yeah, I'm done.
B
Very cool. All right, I'm wondering if we're going to have a theme with these articles. So let's hear about Castillo et al.
C
Okay. I don't know what the theme, the underlying theme is though.
B
Well, we would be making it up, because these are not related articles.
C
Okay. I was going to say. So my article — just to remind everyone, the title was "Do Persons with Intellectual and Developmental Disabilities Prefer to Save the Best for Last?" in an MSWO, which I think the.
B
Way you read that is do Persons with intellectual and Developmental Disabilities Prefer to Save the Best for Last?
C
Okay.
B
That's how that goes.
C
Thank you. I said it wrong. This is, I think, a preliminary investigation. Yeah, a preliminary investigation. I think this is a very interesting question. So usually when you're doing a preference assessment, the assumption is that the first thing the person picks is their most highly preferred thing, the one that's most likely to function as a reinforcer, and the last thing the person picks is the least preferred thing, least likely to function as a reinforcer. However, there is always the possibility — and I've thought about this before, too — that someone might be, what they call, saving the best for last. So the last thing that you pick is actually the most preferred thing. And I think about this because that's the kind of person I am; I would definitely save the best for last. And I've always thought, well, if that's the case for your client, you just can't really use this type of assessment — you can't use an MSWO — or you should always be aware of that possibility. But I had never really thought about how you would determine whether that were the case, beyond just general observation of their performance on the MSWO compared to their everyday selection of items. So this article kind of gets into that and says, is there a way we can figure this out, and is this a potentially important component? They hearken back to a few other published examples where at least one participant in the study seemed to have the same pattern, where it seemed like they saved the best for last. Call et al. 2012 is an example of this.
A
When you were reading this, did you go, "sometimes the sun goes 'round the moon"?
C
Yes, Rob just made that behind the.
B
Curtain J to go get something.
A
I had to go.
B
Missed my bit.
A
Oh well good. I'm glad that we both feel that.
B
With a new bit. See, that's how it works, the law of consistent bits.
A
Oh good.
C
Right. So if this is happening and individuals are saving the best for last, then we might end up with skewed results — we might be leaving out the most likely reinforcer. There's not a lot of examples of this in the literature, but they also say, well, that's probably because those studies don't demonstrate clear findings, and so they likely aren't getting published. And that is another issue in and of itself. In a kind of similar study, Fritz and colleagues in 2020 did a comparison of the highest- and lowest-ranked items from an MSWO and then did a reinforcer assessment, a concurrent operants arrangement, with those. And in that study they found that for 20 to 30% of the participants, what they thought would be the highest didn't match what happened in the reinforcer assessment. So that's further indication that there might be something afoot in this preference assessment. So that's why they wanted to look at it. In the current study, there were four participants. They all had a diagnosis of autism or developmental disability. They were aged three — nope, they were aged nine to 15. And they did exactly what you think they did. First they did a thing called a delay sensitivity study, which was just —
A
I probably wouldn't have thought of that.
C
No — sorry, that part isn't. Okay, you thought it sounds very fancy. But what it is, is that they showed them a highly preferred thing and said, do you want this now or in three minutes? And if they said now, then they got it now. And if they said in three minutes, then they set a timer, waited three minutes, and then they got the thing. And they did that three times, and everyone wanted it right away, just so you know. So under that set of conditions, these participants want things right away. Right. And that is called — oh, I lost it, sorry — a positive time preference. And if you want to save everything for last, it's a negative time preference. But obviously it could depend on the context. So that part maybe was unexpected, but the next two parts are totally expected. Then they did an MSWO. They did one for edibles and one for leisure items, and they counterbalanced which one came first for each participant. And they did at least three sessions of each, or until they had a measure of stability. And then they took the highest and the lowest item, and they did a reinforcer assessment with those. So in the reinforcer assessment, each participant had a free operant task to complete. For two participants, this was tracing letters. For a third participant, it was single-digit addition and subtraction. And for the last participant, they started out doing addition, but her responding was very low because it didn't really matter what the reinforcer was — she was not really into doing the addition problems. So they switched over to tracing letters, which is why her results are a little bit wonky. And they evaluated this in a single operant arrangement with a progressive ratio schedule — the highest-ranked item in one session and then the lowest-ranked item in the next session — and then did a comparison across the two of the overall rates of responding and the break point.
So as the schedule increased, the number of responses required before producing a reinforcer increased, and at some point participants would no longer continue to respond. And that was the break point measure.
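A quick aside on mechanics: an MSWO produces a rank order from repeated selections without replacement. A toy scoring sketch, purely illustrative — the items and the position-averaging rule here are mine, not the paper's:

```python
def mswo_ranks(sessions):
    """Rank items from MSWO sessions: each session lists items in the
    order selected; average each item's selection position across
    sessions, where a lower mean position means more preferred."""
    positions = {}
    for session in sessions:
        for pos, item in enumerate(session, start=1):
            positions.setdefault(item, []).append(pos)
    mean_pos = {item: sum(p) / len(p) for item, p in positions.items()}
    return sorted(mean_pos, key=mean_pos.get)  # most to least preferred

# Three hypothetical leisure-item sessions where slime is always picked last:
sessions = [
    ["hula hoop", "water game", "slime"],
    ["water game", "hula hoop", "slime"],
    ["hula hoop", "water game", "slime"],
]
ranking = mswo_ranks(sessions)
print(ranking)  # ['hula hoop', 'water game', 'slime']
```

A ranking like this would flag slime as least preferred — yet for someone who saves the best for last, slime might actually be the strongest reinforcer, which is exactly the worry the study is probing.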
A
I like that. Clever.
C
Yeah, yeah, yeah. So there were criteria to end, because you might wonder, well, how long did these go on for? Either if they didn't respond for a minute — they always had a stop picture present, so any time they requested to stop, they could stop — or if it went on for 30 minutes. So they had a long time that they could respond, but they obviously didn't have to. In baseline, it was the set of conditions I just described, except that they were prompted to do one response and then told, you can do more if you want, but you're not going to get anything, and you can stop at any time. And they looked to see what responding looked like under that set of conditions. And then they did the reinforcement sessions, which were the comparison conditions I just described. They ran at least three of each type of session, arranged alternating, and they stopped when stability was reached — so at least three sessions, or no more than three responses different across three consecutive sessions for that particular stimulus. And it was set up the same way: they prompted one response, then said, you can do more if you want, and if you do, you will get X item, and you can stop at any time. And then the progressive ratio schedule was set up. They took the average of the baseline, rounded down — oh, it's like one response lower, basically. And then if the mean was less than 20 responses, they increased by 2, and if it was over 20, they increased by 5. So, for example, Jackson started with one response, and then the progressive ratio step was 2. So after one response, he'd get the purported reinforcer. After two more responses, he would get — what?
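That progressive ratio arithmetic — start roughly one response below the rounded baseline mean, then step by 2 if the baseline mean was under 20 and by 5 otherwise — can be sketched in a few lines. This is my paraphrase of the procedure as described in the episode, not the authors' code, and the function name and example means are mine.

```python
import math

def progressive_ratio(baseline_mean, n_requirements=5):
    """Generate successive fixed-ratio (FR) requirements from a
    baseline response mean: start one response below the floored
    mean (never below 1), then step by 2 if mean < 20, else by 5."""
    start = max(1, math.floor(baseline_mean) - 1)
    step = 2 if baseline_mean < 20 else 5
    return [start + step * k for k in range(n_requirements)]

# A baseline mean around 2 gives Jackson's FR1, FR3, FR5, ... pattern:
print(progressive_ratio(2))   # [1, 3, 5, 7, 9]
# A baseline mean over 20 switches to the +5 step:
print(progressive_ratio(25))  # [24, 29, 34, 39, 44]
```

The break point is then just the largest requirement the participant completes before responding stops.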
A
I just love the word purported. Yeah, I remember when I learned it, I kept being like, well, I think this is a purported field. And you're like, that's not right, Jackie.
C
You can also say putative.
B
Yeah, putative is the one I'm like, constantly like, what sounds like a putative.
C
Okay. So it would be like an FR1, then an FR3, et cetera. Okay. And everyone's was relatively low — the highest anyone started at was 5, and then that was a progressive ratio 5 schedule. And they had good IOA. And then the results that you might want to know about: everyone had clear first and last choices, so they all established clear hierarchies. And I just wanted you to know they had many cool items included. They have a whole table.
B
I think that's a cool item. A table. Wow.
C
No, no, I love tables. No, the table was in the study.
B
Hide under it. Climb on it.
C
The table had the items in it, so you can go check it out. I've listed a few for you here — like chips with French onion dip. That was one item. Oh, right.
B
Regular chips with French onion dip. Now that is.
C
They did. It is the best, right? Slime.
B
No, no. I don't want to dip that little.
C
That water hoop game where you press the button.
A
We're all. I love that. We're all like, yes, right?
C
And then Utz chips — is that how you say that? I didn't say.
B
Yeah, but Utz makes like crazy brands.
A
It's like, yeah, but it's probably the. Probably the regular one.
C
And then I was like, go to.
A
A local restaurant, and they give the kids the free chips. That's Utz chips.
B
I got to tell you, if I don't have my French onion dip with a plain chip, I'd rather have a flavored chip.
C
I think Wavy Lay's. You have to have French onion dip with Wavy Lay's.
A
I agree with you.
B
Crinkle-cut Utz plus French onion dip.
A
I can't believe we're. I can't believe we're unanimous on this table here.
C
Absolutely.
A
I can only have like two of those because it is chock-full of salt. Man, it's so delicious.
B
It's a salt delivery device.
C
I did check to see where this study most likely occurred. It was in Maryland, cuz I thought.
B
So they should have crab chips.
C
That it was just New England. But I guess it's in Maryland too — it's trickled down there, because we didn't have that where I grew up.
B
No.
C
Okay. So just so you know, there was a lot of good-sounding stuff in this. If you're going to do an edible preference assessment, they chose some good items. All right. And now you might want to know what happened, right?
B
No, it's good talking about the food. So I move on.
C
So there were four participants in this study, and for one of them, the thing that they thought might happen, happened. So the last-place item — from both the edible and the leisure assessments — had higher rates of responding and a higher break point than the first-ranked item. That was Jackson. He preferred fruit snacks over the veggie straws and the slime over the hula hoop, just so you know — even though he picked veggie straws and hula hoop first, and fruit snacks and slime last. And then David had kind of similar break points for the leisure item, whether it was first or last place, but a higher break point for the first-place edible item, which was the chips with French onion dip. He was the French onion dip guy, so we're all on board with David's preferences. For Connor, it was equal for the edibles, and he had a higher break point for the first-place item in the leisure assessment. And Laura had higher break points for the first-place leisure item and the first-place edible item. Okay, so there were four participants, and for one of them, it was pretty clear that it was the save-the-best-for-last scenario. Which makes, for this small sample, 25%.
A
Yeah, it's big.
C
Which fits in with those earlier findings from the Call et al., because they had, I think, one out of seven in that study — I might be misspeaking; it was only one guy, but I think there were several participants in that study. And then the Fritz et al. was 20 to 30%. So what that means is that this could be happening more often than previously thought, and it's definitely something to look out for. Although that doesn't solve the problem of the MSWO being an issue. They said one thing you could do is make the reinforcement schedule for responding during the preference assessment more like what the reinforcement schedule is going to look like for responding in real life.
A
That makes sense.
C
So they can sort of, you know, get around this type of issue. But it's always something I thought about, and lo and behold, here it is now, you know, published as an issue. So it's something that you got to think about.
A
I never thought about it because I never save the best for last.
C
Oh, see, no, Yeah.
B
I think for them — I would assume, for the most part, at least in that very specific, controlled environment — if someone's like, here's a bunch of stuff, you'd probably take the one you want first. It's not like Christmas. It's not like, oh, do you want to wait for all your presents — we've got three days till Christmas, that's kind of exciting. It's more like, here's a bunch of candy — well, I'm going to take the one I want first, because this is going to be over in five seconds. That temporal component just never really jumped out as something that I'd think would be salient for anybody, but apparently it is.
A
Yeah, maybe not for you and I. We're just. We're grabbers, I guess.
B
Most people are grabbers. You want. You want what you want now, but not all the time.
C
Yeah. And that's it.
B
All right. Well, look at us — that brings us to the end of Grab Bag Dissemination Station. We don't usually theme our grab bags, because these are three articles about different things. But my theme — which kind of didn't quite work with yours, Diana, but maybe a little bit — was: interesting thought, great experiment to sort of answer a question, but the larger question still remains. So in terms of meetings: we can use virtual technology to teach better meeting behaviors, but does it actually do anything good for the people going to the meetings, consistently enough to be meaningful? For Jackie's, we sort of had the idea of, hey, cultural competence as its own class can be helpful for a number of reasons.
A
But.
B
But is it really the same as teaching the use of those skills in all aspects of practice? Maybe. I'm not sure. And then, Diana, with yours, it's: hey, the MSWO might have some flaws when people save the best for last, and that's a problem I guess we need to resolve.
C
I mean, isn't all research like this?
A
Yes.
B
So, okay, maybe the theme is: isn't it fun to have research that's just — it's like an episode of Lost, you know? It's asking more questions than it actually answers.
A
I never watched Lost.
B
Oh, well, you either will think it is an amazing show or you will hate it.
A
I'm not gonna ever watch it.
B
You better not, because right now this is 100% a pro-Lost podcast. I'm just letting everyone out there know. And anyway, speaking of watching Lost — let's go do that instead. Let's end this episode. Thank you all so much for listening to ABA Inside Track. We really appreciate it. If you could, please leave a review on Apple Podcasts or wherever you like to get your podcasts, if you have not already. You can check out our website, abainsidetrack.com, where you can find links to all of the articles we discussed, as well as purchase CEs. If you want even more ABA Inside Track content, or you want to get your episodes a little ahead of time, you can subscribe on our Patreon page, patreon.com/abainsidetrack. You can subscribe there for free and get the episodes that we put out, just the same as in your regular podcast player. You can also vote in all of our polls on our listener choice episodes and our book club episodes. But if you want the really good stuff, you could subscribe at the $5 or $10 levels. At $5, you're able to get our listener choice episodes ahead of everybody else, with a cool video component, as well as one CE per listener choice episode. That's for a year. At $10 or more, well, you get the book clubs right away — unlike everyone else, who has to wait with bated breath for last year's book clubs to come out a year later, you get them right away — plus two CEs for every book club at no additional charge. That's a lot of CEs, just for saying, "you're a great podcast and we enjoy listening to you." To which we say: thank you so much, that was really kind of you. Again, that's patreon.com/abainsidetrack. A couple other things as we wrap up. Don't forget the second secret code word. It's hard to forget — I haven't said it. I'll say it now: it's bridge. B-R-I-D-G-E. Bridge. It's a structure that connects two places. Maybe there's a river or a chasm and you need to get from one side to the other. You better have a bridge.
C
Maybe there's some troubled water.
B
Oh, there we go — now we're getting back to the title. And some final thanks: big thanks to Dr. Jim Carr for recording our intro and outro music, to Kyle Sturry for interstitial music, and to Indian of The Podcast Doctor for his amazing editing work. We'll be back next week with another fun-filled episode, but until then, keep responding.
C
Bye bye.
B
Sam.
Release Date: October 8, 2025
Hosts: Robert Perry Crews (Rob), Diana Perry Cruz (Diana), Jackie McDonald (Jackie)
Episode Type: Grab Bag – Three unrelated articles, each chosen by a co-host
In this Fall 2025 "Grab Bag" episode, the hosts review three recent research articles in applied behavior analysis (ABA), each focused on a distinct topic. The "grab bag" format lets them discuss interesting new studies that don't fit into a single theme or series. This episode provides listeners with accessible breakdowns of three studies: one on training effective meeting leadership, one on cultural competency in graduate ABA coursework, and one on whether individuals "save the best for last" in MSWO preference assessments.
The tone is informal and engaging, with the hosts sharing personal anecdotes, critical insights, and their signature humor.
Authors: Blackman, de Janaro, Reed, Gunter, and Brerin
Journal: Journal of Applied Behavior Analysis, 2025
Presenter: Rob
Segment Start: [03:51]
Experiment 1:
Results:
Experiment 2:
"It does appear that you can teach new leaders to run more effective meetings... [but] does it actually do anything to run a good meeting? Do you increase productivity... or just make people less miserable?" ([21:53])
"Most people think these are a giant waste of time... And the US alone spends anywhere from 70 to 283 billion dollars on meetings..." ([06:09])
Authors: Petrone, Napolitano, Miles, and Shanahan
Journal: Behavior Analysis in Practice, 2025
Presenter: Jackie
Segment Start: [25:21]
Retrospective evaluation in an online ABA graduate program.
Compared student performance on culturally relevant tasks (interview questions during FBA case studies) based on:
Participants: 77 students (mostly white and female).
Comparison groups:
Jackie:
"We all know... cultural responsiveness is now hopefully becoming more prevalent in our field... And cultural awareness is really understanding your own culture and seeking to understand others’ culture." ([25:29])
Diana:
"Results suggest embedding recurring themes throughout coursework may be beneficial… It also might not need to be a standalone component." ([36:26])
Authors: Castillo, Frank, Crawford, Liesfeld, Doane, Newcomb, Roker, and Barrero
Journal: Behavioral Interventions, 2022
Presenter: Diana
Segment Start: [38:01]
"I think about this because that’s the kind of person I am—I would definitely save the best for last... but I had never thought—how would you determine if that were the case?" ([38:28])
"I never thought about it because I never save the best for last." ([49:52])
"Hello, grab bag, my old friend / It’s time for research to be read again." ([01:09])
"In a different survey, half the people just complained about how they don’t like meetings. And the US alone spends anywhere from 70 to 283 billion dollars on meetings..." ([06:09])
"[About cultural humility becoming the eighth dimension of ABA:] It was like someone was listening to us—the next day, a paper appeared." ([27:54])
Dissemination Station ([50:37])
Rob notes that each article is an example of research opening up new questions rather than solving them—much like the show Lost.
"It's like an episode of Lost. You know, it's asking more questions than it actually answers." ([51:44])
Each article presents a promising intervention or insight but also highlights the need for continued research.
End of Summary