A
Foreign.
B
Hey, everybody. Welcome to ABA Inside Track, the podcast that's like reading in your car, but safer. I'm your host, Rob Parry-Cruwys, and with me, as always, are my fabulous co-hosts. Oh.
A
It's me, Jackie MacDonald. I'm number two for the next 12 months.
C
And it's me, Diana Parry-Cruwys.
B
Hello.
A
Hello.
B
I'm so glad you guys have created this weird rule. I think I've said that I'm tired of it.
C
I said that a lot of times now.
B
Okay, well, you've also said that you're going to make the change repeatedly. All right, this bit, it's done. We're moving on. We're going to talk about behavior analysis and behavior analytic research, which is what we do every week on this here podcast. We talk about relevant research articles around some sort of behavior analytic theme. See, look at that. I tried a new opening.
C
I liked it.
B
You like that one?
C
Yeah.
B
I should have written it down. Oh, well, we'll go back to the old one another time.
A
You can actually re-listen to it because it is a podcast.
B
You know what I don't want to listen to is my own podcast.
C
Your own intro.
B
No, I used to do that. I don't need to do that anymore.
C
You're pretty consistent, honestly.
B
I know I am. Great.
C
Because I've been looking at the transcripts for fun.
B
Yeah.
C
And you're pretty consistent in your intro and your outro.
B
I mean, according to the transcript, the first thing anyone on this podcast says is the word Foreign.
C
I think that was like the. Maybe it's like the sound it makes when it plugs in or something. I don't know.
B
We've never said that to start a podcast.
C
Yeah.
B
And we're not saying it to start this podcast. Well, what are we going to talk about this week, folks? Well, we're going to talk about the changing criterion design.
C
That's right. It's the unicorn of experimental designs.
B
What, in the sense that nobody uses it?
C
Yes.
B
It's rare.
C
It's rare. It's super rare.
B
The only people who want to use changing criterion designs are grad students. It's their first study ever, and they're like, I'm going to do this probably.
A
No, I don't think so. Sometimes. Yeah, I don't think so. This is a very utilized design if you are trying to increase or decrease something that's already happening. Right. The only problem is that the changing criterion gets a bad rap because you're not actually teaching anything. Right. Because the person already has the skill in their repertoire.
B
Yes.
C
And We.
A
And you're just trying to slowly increase it or slowly decrease it.
C
Yeah, that's right. So it has. It can be very useful under a very specific set of conditions.
B
It's a weak experimental design. It's. It's. No, I mean, it's not the strongest.
C
It can be the strongest.
A
No, it's not the strongest, but it can be if you use bidirectionality. Just saying.
B
I mean, it's. It's one of those things.
A
I feel like I'm fighting for the changing criterion against you, too.
B
I don't hate it. It's fine. It's a fine design. But there's a reason we don't see a lot of it in research. It's because usually there are other designs, with stronger experimental control, that are going to provide, you know, a more robust study. Whereas in practice, it may be more utilitarian. But anyway, we'll talk about it.
A
All right.
B
We did. I guess that's it. That's the episode. Thanks for coming, everyone. Diana, what are we going to be discussing today?
C
Oh, sure, let me tell you. We have three articles to talk about in this episode today. They are Best Practices in Utilizing the Changing Criterion Design by Klein, Houlihan, Vincent, and Panahon. And that was in Behavior Analysis in Practice, 2017. Super article. I know. We'll get into it. Really solid stuff.
B
I think they're a little. They oversell it a bit in the title.
A
I again disagree.
C
Best practices.
A
No, I think it's great. It doesn't mean that it is the best practice. They're saying, here's how you can use the Changing criteria design.
B
No, no. It's a very thorough article. But then they. Well, there's not like a. This is the only way to do.
A
It, because there's many ways that you.
B
Can do many ways.
C
Yeah, okay.
A
I love that article. Diana and I actually simultaneously both chose.
C
That's right.
B
I didn't dislike the article. I just think if you're going to call something best practices, I want to read it and be like, oh, my God, I will use these practices every day. And I think when you're like, here are a couple good practices and here's some stuff that people do, you know, use what feels right for you. That's not best practices. That's like strong practices or great practices, but best practices? Try again, Klein et al.
C
Best practices is like a phrase that you use when you're like, this is the best that we know. These are the current best practices. All right, well, here's what's probably not best practice: calling children obese and non-obese. But that's what's in the title of the next article, which is Effects of a Variable Ratio Reinforcement Schedule with Changing Criteria on Exercise in Obese and Nonobese Boys. That was DeLuca and Holborn, all the way back in 1992.
A
I can say they did have an operational definition of what constitutes obese and it was from the American Pediatric Association. So yes, I don't think it's the best term, but at least it is a medical term and it was defined. Yeah, I'm just going to solely sit on these articles.
B
I'm just going to be like antagonistic.
C
I mean, it's a 30-year-old article, so they probably would name it something differently today. And then finally, Using Mnemonics, Remote Coaching, and the Range-Bound Changing Criterion Design to Teach College Students with IDD to Make Employment Decisions by Brady, Hearney, Downey, Torres and McDougall. That was in Education and Training in Autism and Developmental Disabilities, 2022.
B
All right, yeah, great. Let's get started by just doing a quick overview of what constitutes a changing criterion design. So for folks who have not used one, or it's been a while since you've read your Cooper book, the changing criterion design is one of our kind of, you know, four major experimental designs, at least according to Cooper et al.'s textbook, in which we make stepwise changes to manipulate some dimension of a single behavior that's already, like you said, Jackie, in an individual's repertoire. So it's not exactly a new skill acquisition design. It's more of an improvement or deceleration of a behavior already in the repertoire.
C
Yeah.
B
Use it if you've got gradual shifts toward, like, a goal. So again, you need to either make increases over time, like maybe it wouldn't be safe to make a huge increase. For example, if you're using a changing criterion in, like, a weightlifting program, you wouldn't be like, you only lift 10 pounds? What if you lifted 200 pounds? Because you'd hurt yourself. You want to go a little bit slower.
C
Yeah.
B
And you'd probably only exercise one more time. They'd be like, hooray, our reinforcement system worked. But then they broke their arm and said, I'll never exercise again. So not, not so good. Or if it's a behavior that is sort of resistant to change and you're not able to make those, you know, fabulous kind of withdrawal design changes of like behavior on, behavior off, and instead you're changing just some dimension of that behavior slowly over time. Now, the changing criterion design is one of those designs where I kind of loved reading about the history in this article, because there's that sense of, and this is all from the Klein et al. article, it's sort of like the new kid on the block. It only came around in the 70s, as opposed to all the other designs, which feel like they were in the Bible, they're so old, and we can't ever change those designs. So we've got a book by Hall in 1971 and then a follow-up study by Weis and Hall about reducing daily cigarette smoking that was sort of the first use of the changing criterion design. But most people look at Hartmann and Hall 1976. So if you want to say I'm using a changing criterion design, you will put as your citation probably Hartmann and Hall 1976.
C
That's what you should do.
B
I mean, well, if you're using, like, an original changing criterion design. We'll talk about some more recent modifications to the changing criterion design in our examples. And it also sort of referred back to a Sidman design from the 60s. But at the end of the day, they described it as kind of like a multiple baseline design for shaping procedures. Which is sort of true. Though it's a little weird to say that, because you can also use a changing criterion design within your multiple baseline design. So I don't know, it's like multiple baseline squared. So again, you want to accelerate or decelerate some dimension of a behavior. One of the challenges with the changing criterion design is, while it is a very useful design clinically, experimentally you can really mess it up and sort of produce change where, if you were, say, trying to publish, you're going to be told there's no experimental control here. You just did something, and your client might be happy, but you might not get your paper published. When you look at the baseline logic, just like with any other study, you have to have a way to do replication and prediction. And really what you're looking for is, when I have some sort of criterion, I want to see a change in some dimension of behavior. Could be rate, could be accuracy, could be magnitude. And you're looking for stable responding at each step so that you can make predictions, like, see, I made a change in whatever my independent variable is, and it showed this change in accuracy or rate or whatever dimension you're measuring.
C
So it's a really pretty precise.
B
Yes.
C
Type of behavior change or experimental control that you would need to have, because you're not just looking to change behavior in a direction, which is usually kind of like what we're going for, but actually having behavior adhere or cluster around a particular numeric criterion, which is much more difficult to do.
A
And that's why I think it's a really great design. Right. Because you have to follow. You know that if you don't see experimental control, it's because you haven't picked the correct steps.
C
Yep. Yes. So it's not easy to pull this one off. And like you said, Rob, another really important factor is that the behavior should already be within the person's repertoire, ideally, when you're starting this. And it's very often the case that this is going to be most successful if there is some level of rule following or understanding of those types of numeric measures as well. Because if you're saying, hey, you do five problems, you get the reinforcer, and then tomorrow you need to do seven problems, having that verbal component, that instruction-following piece, can really help behavior adhere to the criterion much more quickly.
B
So within the design, if you want to strengthen your experimental control, you need to have lots of different sort of magnitudes of change. And while you usually are trying to have those magnitudes of change go in one direction, you may also want to add what they call mini reversals. I like this. To differentiate from regular reversals, we're going to have mini reversals, where you change your independent variable, you change some aspect of it, so that you see either an acceleration or deceleration of the behavior. So, for example, when you're looking at your changing criterion graphs, you see your stepwise up, up, up, and then all of a sudden it goes back down.
C
Yes.
B
Which again, in clinical practice, most people would say, what the hell do you think you're doing? I want this to go up, up, up. And experimental control is usually not a reason people are excited. Oh, experimental control. I understand now.
C
Thank you.
B
You also want to change the length of phases. So some phases have to be longer than others. Again, you don't have to do this, but if you want to show experimental control, you need to sort of do all of these little steps to have any sort of chance of anyone saying, aha, I see your experimental control.
A
Those mini reversals are also called bidirectionality. Just so.
B
Well, that's, that's what happens. That's what you're demonstrating.
A
Right. But I just want to make sure.
C
That bi directional criterion requirement, because they.
A
Say that on the BACB exam. So any students that are listening, they won't say mini reversals. It'll say demonstrate by Jackie.
B
I was told these are the best practices in the title. So it's mini reversal. You call it, you call the BACB and you complain and say, I want mini reversals on the task list.
C
Get me Jim Carr.
B
Jim, come on, what are you thinking? This article says get Jim on the horn. I know, for the next, the next task list. Let's change that.
C
Yeah. So if we're talking about baseline logic, right, then we're talking about prediction, verification, and replication. So by changing the length of the varied criterion levels, right, so maybe one is three sessions and then the next one requires five sessions, let's say, at that particular criterion. In doing that, we're able to make predictions that behavior would continue at that particular criterion level if we weren't to make any additional changes. And then we verify that prediction by extending that phase out. And then each time that we're changing the criterion level, we. Well, the first time is the initial demonstration, and then after that we're replicating it. But by doing so in the reverse direction, that is a mini reversal. A mini reversal using that bidirectional criterion. That is a further strengthening of our demonstration of replication, because it goes against the expected continued increase. Right.
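The prediction-and-replication logic described here can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the articles; the data layout, function name, and tolerance value are all assumptions made purely for the example:

```python
# Sketch (assumed data layout): each phase is (criterion, list of session values).
# Checks the baseline-logic pieces discussed above: stable responding near each
# criterion, varied phase lengths, and at least one mini reversal in direction.

def summarize_ccd(phases, tolerance=1.0):
    """Return a small report on a changing criterion dataset."""
    adheres = all(
        abs(sum(sessions) / len(sessions) - criterion) <= tolerance
        for criterion, sessions in phases
    )
    lengths = [len(sessions) for _, sessions in phases]
    criteria = [criterion for criterion, _ in phases]
    steps = [b - a for a, b in zip(criteria, criteria[1:])]
    has_mini_reversal = any(s > 0 for s in steps) and any(s < 0 for s in steps)
    return {
        "adheres_to_criteria": adheres,
        "varied_phase_lengths": len(set(lengths)) > 1,
        "has_mini_reversal": has_mini_reversal,
    }

# Example: criterion steps up 3 -> 5 -> 7, then a mini reversal back to 5.
phases = [
    (3, [3, 3, 4]),
    (5, [5, 5, 6, 5, 5]),
    (7, [7, 6, 7, 7]),
    (5, [5, 5, 5]),
]
print(summarize_ccd(phases))
```

A dataset that fails any of these three checks would be exactly the kind of weak demonstration the hosts are criticizing.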
B
And as you change the magnitude, if you make some smaller jumps and maybe one slightly larger jump, and you see that stable responding around the new criterion, so the dependent variable points are sitting on your new changed criterion line, that is another way to demonstrate strengthening. There are two variations on the changing criterion design, both by McDougall. One is the range-bound changing criterion design.
C
More on that coming up.
B
Where instead of just saying here's our new criterion, it's, you know, five or whatever, you have an upper and a lower criterion at each of your different kind of subphases. This is recommended when you're worried that gains coming too quickly could cause injury or impact your long-term change. And it increases experimental control some, because you show that behavior change is sensitive to two criterions, the upper and the lower limit. However, you also are then telling someone, you can't respond any farther than these ranges. Well, to receive the reinforcement, to receive the reinforcer.
C
So, you know, that's how it should be.
B
Yeah. And then they also describe a second one, the distributed criterion design, which I'm just going to kind of quote, because the article didn't care to describe this beyond one paragraph: you can manipulate one behavior across interdependent but discrete contexts, or interdependent behaviors in one context. It's sort of seen as something you'd use in maybe a behavior management program. But the authors then say this design is weak and they don't want to talk about it anymore. So we won't talk about it anymore either, because it's apparently not a best practice.
C
It's a worse practice, and we'll leave it at that.
B
Or a mediocre practice. Maybe, maybe not worse. The range bound changing criterion design. I did like that. And again, like you said, Diana, one of our explanation or example articles will use the range bound criterion.
C
Yeah, yeah. The distributed one, I believe, is trying to get at bringing in multiple behaviors and then sort of attempting to demonstrate them within this model. So the axis is no longer moving along some numeric value, but now moving along a variety of behavioral axes. I don't know if I'm describing this well either, because I also haven't used it. And, you know, the potential challenge here is that you end up doing just a bunch of AB designs where, if you had arranged it differently, you could have made this into a multiple baseline across behaviors or across participants or something like that by staggering those initial baselines. But if you just have it set up where you had the same number of baseline sessions and then you started doing some of these permutations, then you kind of just have a weak AB design. Right. So the strength of the changing criterion design is usually in being able to pinpoint that reinforcement criterion and then demonstrating that behavior adheres to that criterion without going very much below or over it. But another key point here that you kind of brought up is that there has to be the opportunity for the individual to respond in some free-operant manner, such that they could respond above or below the set criterion. Because if it's just stop, you're like.
A
Oh, you meant it. Bye. You don't do that. Right. You have to give them the opportunity.
C
Right. You don't, like, rip away the worksheet when five are done, because then all that's happened is you've taken away the opportunity for additional responding, versus showing that their behavior is contingent upon the particular reinforcement criterion that's in place, or responding to that criterion, I should say. So that's, I think, a really important point of the CCD, and sometimes that can not fully be present. And that would demonstrate to me a weaker representation of the CCD.
B
Yeah.
A
So we should say that CCD is changing criterion design.
C
It is.
B
Yes.
C
It's not like the Catholic.
A
Right.
C
CCD class, Catholic. But it's like, yeah, that's where you're like learning communion.
B
So, like, if somebody maybe like, started this podcast and then ignored it for the first 10 minutes and suddenly hear us talk about CCD, like, Whoa, that's got like, really religious and spiritual, right?
C
Or it could be like, how many Bible verses do you learn at CCD using a CCD design? Verses per week?
B
Wow, that sounds like a socially significant problem that needs to be solved. I mean, I don't know. I don't teach CCD. Maybe they're like, if only there were a better way to teach more verses. How's everyone going to learn Genesis?
C
I really prefer Peter Gabriel on his own.
B
Oh, but all right, let's see. Oh, here are some other best practices to know. One question that always comes up is, what do I choose for my first criterion? And the major ways most researchers will choose that first criterion are either using the mean of the baseline rate, choosing the highest or lowest point at baseline if you want to do a range, or you get someone who knows the behavior you're trying to change, a professional in whatever the change dimension is, and you say, hey, what do you think would be a good target to start? And the authors actually say that's their favorite, because it's the most sensitive to the ability of the individual, versus just, like, I don't know, you were around here, so 10% more seems like a good number, which is not necessarily best practice. You want to have at least two criterion changes to demonstrate control, though some people say three might be a better minimum. And then everything else is up to you. What's the phase magnitude? What's the phase length? Do you want to use mini reversals or not? How many shifts will you need? Oh, the sky's the limit when it comes to setting up your changing criterion design.
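The three starting-point options just listed (baseline mean, a baseline extreme, or an expert-chosen target) can be written out as a tiny helper. The function and rule names here are mine, purely for illustration:

```python
# Sketch of the three first-criterion rules discussed above (names are mine):
# baseline mean, baseline extreme, or a target supplied by someone who knows
# the behavior (the option the article's authors reportedly prefer).

def first_criterion(baseline, rule="mean", expert_target=None):
    if rule == "mean":
        return sum(baseline) / len(baseline)
    if rule == "extreme":          # highest baseline point, for acceleration targets
        return max(baseline)
    if rule == "expert":           # hand-picked by someone who knows the behavior
        return expert_target
    raise ValueError(f"unknown rule: {rule}")

baseline = [2, 4, 3, 3]
print(first_criterion(baseline))               # 3.0
print(first_criterion(baseline, "extreme"))    # 4
print(first_criterion(baseline, "expert", 5))  # 5
```

The point of the "expert" branch is that the number comes from outside the data, which is why it can be more sensitive to the individual than an arbitrary bump over the baseline mean.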
C
One million permutations.
B
You could do everything, all the changes, and that's it. So again, the authors say it's not always perfect in research, but it is a really nice intervention tool. And then they even said, let's do a survey of studies using the changing criterion design before 2014 to determine whether there are any other patterns or any other things that we want to know about the changing criterion design. And this is a summary of 106 articles and 267 changing criterion graphs. And some things to keep in mind, at least as of 2014: about half of the studies were done with children, and 82% were for individuals without a diagnosis, which is very different from most research in our field, where way higher numbers of the studies are participants who have some sort of reported diagnosis like IDD or autism. Schools were pretty prominently used. About 45% of the studies took place in a school setting, which is interesting. Just like every other study, about half were done by an experimenter or a therapist, but teachers and staff ran 27% of the studies. So again, a little bit higher than you sometimes see in our other experimental designs. 64% of studies were about increasing behavior, and 57% were looking at frequency as the key dimension, followed by magnitude. This design is used a lot for targeting academic behaviors, about 19% of the studies. And then a lot of the other targets were things like sports or communication or self-care or safety or habits or compliance or medical. And if you wanted to know what actually was the highest of all of these, it was other.
A
Well, yeah, didn't you. Did you ever read that study where they were looking at trying to decrease smoking, and someone was smoking like six packs a day? You couldn't really have them go cold turkey, right? I can't remember exactly what it was, but that's why the changing criterion is on the list. It's great for decreasing a bad habit like that.
B
Well, in terms of what category of.
A
Study that would be other.
B
One thing they did note, and this is going to come into best practices language, is that very few of these studies actually had any sort of behavior assessment or preference assessment included. Which seems odd because if you're trying to change behavior, you probably want to know the function. Although if we're talking about sports, like, what's the function of trying to get better at throwing your baseball or soccer goals? Like, I don't know, you want to get better. Automatic reinforcement, some sort of socially mediated consequence.
A
Right.
B
But preference assessments seem like they'd be pretty important in terms of helping individuals identify what they want to earn as they reach their criterion. Right. Or what they want to happen. A lot of the studies, about 71%, used some sort of contingency as an intervention, though 22% used no contingencies at all. Usually it was reinforcement, about 73%, though some of these studies, about 22%, used a punishment procedure. What was interesting is that a lot of the studies, like we kind of already mentioned, used some sort of. It says 70%, which means the math doesn't add up. So I think it's more of an addition to the reinforcement.
C
But that could be.
B
I went and double-checked because I was like, these numbers are not. That's 140%, 170%. But they used some sort of non-reinforcement or non-punishment strategy, like feedback or modeling or prompting or self-monitoring or some sort of contract. So there are a good number of these studies that did not use, you know, what we'd think of as just contingency management. They used other strategies. About 10% used the CCD as their shaping procedure. And they looked very closely at the designs. Again, a lot of them used kind of weaker AB designs, about 63%, though 70 of those studies used a multiple baseline design with a CCD within it, usually across some sort of behavior. So they're changing multiple aspects of multiple behaviors, right? Which sounds very hard to keep track of. And 63% of the studies did not do any kind of reversal, either a regular reversal or a mini reversal, which they were not happy about.
C
I know that's the issue.
B
Not happy about.
C
I also have issue.
B
Yeah. And well, what percentage above baseline was used? It's all over the place. It kind of averaged out in a lot of ways. 9% used a criterion 5% above or below baseline, 11% used 5 to 15%, 21% used 15 to 35%, 21% used 35 to 100%, and 35% of the studies actually went 100% over the mean of baseline as their first criterion. But again, that sounds really stupid. Why would you want to go that high? Usually the baseline was zero, so mathematically they came out above 100%.
C
I don't think it's.
B
That doesn't really work that way.
C
Zero is still zero.
B
They didn't. I think what they meant is just the idea of there was no behavior occurring and then there was.
C
I had to pick somewhere to start.
B
Sure, it's more than 100%, but you're right, mathematically it's undefined. So there are lots of ways to use the changing criterion design. But a lot of studies, at least as of 2014, used pretty poor experimental control. In only 25% of the studies they looked at did the authors say, you know what, I think experimental control was solidly demonstrated. And usually the failure was little variation in phase magnitude and no reversals at all. So shame, shame, shame. And they also saw a lot of smaller variations of things like phase length, or maybe only two changes in the criterion. Sometimes they'd use, like you were saying, Diana, restricted responding, which is its own problem. But with all that said, if you want better experimental control, just don't do what 75% of these studies did, in that you want to have a mini reversal, you want to have more variation in your phase magnitude, you want to have more than two changes in criterion, and you don't want to restrict responding. But some other key things.
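The percent-above-baseline arithmetic from the survey, including the zero-baseline problem the hosts just flagged, looks like this in a quick sketch (function names are mine):

```python
# Sketch of percent-above-baseline arithmetic: a criterion set 100% above a
# baseline mean doubles it, and a zero baseline makes the percentage
# undefined, as discussed above.

def criterion_from_percent(baseline_mean, percent_above):
    """Criterion at percent_above percent over the baseline mean."""
    return baseline_mean * (1 + percent_above / 100)

def percent_change(baseline_mean, criterion):
    """Percent change from baseline to criterion; None if undefined."""
    if baseline_mean == 0:
        return None  # can't express a change from zero as a percentage
    return 100 * (criterion - baseline_mean) / baseline_mean

print(criterion_from_percent(10, 100))  # 20.0
print(percent_change(0, 5))             # None
```

The `None` branch is the transcript's point: when baseline responding is zero, "100% above baseline" isn't a meaningful description, and a study really means "the behavior went from not occurring to occurring."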
C
Just do those things, those exact things you mentioned, and you will be doing better than 75% of published research in this area.
B
They also say if you're changing a behavior, it seems worthwhile to maybe do an FBA to determine the function. Definitely use a preference assessment. And it also wasn't super clear how many of the studies were including the individual in determining what they want their criterion to be. So add that component, why don't you? How high do you want to go? Where do you want to be? Why is that a good or bad idea? Let them be a part of the study itself. Right. And then also include more training of the client in using the changing criterion design themselves, so it's not something being done to them; it's something that they are a part of.
C
Absolutely.
B
So not just looking at their consent to being part of your study, actually being active participants in the study and. And using more social validity measures.
C
So many apps that could like do this for you. Yeah.
B
This feels like a very easy one to sort of track and then graph for you and then let you see your. Your stepwise changes. Right. Yeah, but I don't. I'm sure. You know what, I'm sure there's one that actually does this, but I don't know what it is because there's too many apps.
C
I mean more like, build it in. Like, Duolingo should have this model, right? And they'd be like, oh, this is a rest week. You only need to do three days this week. Right. It's like your bidirectional criterion.
A
They kind. Oh, yeah, they don't really go down.
C
Yeah.
A
They usually progressively get harder.
C
Yeah. It'd be so easy to just build it into those things.
A
Until you miss one and then it goes back down to easy again.
C
Yeah, but that's different.
A
That's different.
C
Yeah.
B
Okay, well, now we know some best practices and most of these are great practices. Are they the best? You know, need more research there. But I understand if you wanted to think of what's the best that we know right now.
C
That's. That's what it means. Yeah.
B
And they should put that in the title.
C
When you know better, then you can do better. That's why we need more of these types of studies.
B
All right, well, now that we know what goes into a changing criterion design, the best changing criterion designs. Let's take a break and when we come back, let's describe how they've been used with some example research. We'll be right back.
A
Hi. Do you want to be a BCBA, also known as a Board Certified Behavior Analyst?
C
Sure. We all do.
A
Now, you can come to Regis College in Weston, Massachusetts to get your graduate degree just minutes outside of Boston.
C
Choose from any one of these courses.
A
Masters of Science in Applied Behavior Analysis.
C
Master of Science in Special Education, dual degree in Special Ed and ABA, or.
A
Be eligible for your Postmaster certificate.
C
You can complete your degree and be ready to sit for the exam in two years.
A
And our 2022 graduates had a 92% pass rate on the BACB exam.
C
Come enjoy approved fieldwork placements, ethics mini-handbooks, PhD-level professors, small class sizes, and a service trip to Iceland if interested.
A
And don't forget, our program is accredited by the Association for Behavior Analysis International, or ABAI, as a Tier 1 master's degree program.
C
Don't delay. Supplies are limited. Learn more at regiscollege.edu.
A
Again, that's www.regiscollege.edu.
C
Regiscollege.Edu.
A
One more time, www.regiscollege.edu.
C
See you there.
A
Bye.
B
And we are back talking about the changing criterion design. But before we continue this discussion, I want to remind all of our listeners that ABA Inside Track is ACE and QABA approved. And by listening to this episode, you're able to earn one learning credit. Hooray for you and for us for learning about the changing criterion design.
A
This was my idea because I love this design.
B
Hooray for Jackie.
A
And everyone was like, I don't really know.
C
And I was like, please, my idea.
A
Maybe it was both of ours because I actually love this design. I wish I could do all my research on the changing criterion design.
B
I think there'd be times that it probably wasn't the best fit, though.
A
No, absolutely. Yeah.
B
No, I can absolutely do that. I will. I'll never be published again. And where were we? Oh, yeah, CEs. And all you need to do is finish listening to the episode and then go to our website, ABAInsideTrack.com, to put in some important information. One of the pieces of information is going to be two secret code words that we've hidden in the episode. And I'm going to give you the first of those code words right now. It is gnome. G-N-O-M-E. It ain't Alaska. It's the little mythical creature with a funny pointy hat. A gnome. Because why, Diana?
C
The gnome mobile. The gnome mobile. We're hunting for gnomes in the gnome.
B
That's not. Well, that's not why you told me you said it was gnomevember.
C
That's all that's going through my head right now.
B
Oh, okay.
A
So I don't know. It's January, so.
B
Well, when we recorded, it was Gnomevember.
A
But it's January.
B
You gotta get ahead for the holidays, man. We need some time off.
C
So we went to a local botanical garden nearby. Oh, yeah. Okay. And they have gnomes out, little painted statue gnomes. You have to go around all the places and find them, and they're so cute.
B
Yeah, yeah. So that's your code word.
C
Yep. Gnome.
B
All right. So when we left off, we talked about the best practices for changing criterion designs. Oh. One thing I did love is I always thought the plural of a changing criterion design would be. Oh, you need lots of criteria in your changing criterion design. It is not. It is criterions.
A
Yes, it's criterions.
B
So if you, like me, were like, well, the plural of criterion is criteria. True. But if you are talking about the research design, at least according to the best practices article, you want to refer to them as your different criterions, which grammatically makes no sense, but.
C
Interesting. I didn't realize that.
B
Yes. Unless the whole thing. Unless Klein et al were. Were joking with us and they're like, psych. These aren't the best practices because we should use changing criteria when talking about lots of criterion.
C
Oh my God. But let's drunk in white for CC or for scd.
B
Yep. But let's talk about two examples. The first example is going to be kind of a classic changing criterion design. And then we're going to talk about the range. The range.
C
The RBCC.
B
Changing criterion design. Yes.
A
You're funny.
B
Well, range bound. Range bound. Range bound.
C
I've got it taken care of.
A
Yes. So here's my classic one. When we're looking at inactivity in young children, and I guess in the 90s around this time, they were really focused on making sure that all kids were really fit and healthy.
B
That feels like every time.
C
I know.
B
Since the 80s.
A
Well, they're really wanting to do it.
C
It's probably the President's Fitness Challenge happening.
B
They couldn't pass their President's, by the way.
A
I cannot do the sit and reach. I do yoga every single day.
C
That's the only part of it I'm good at.
A
Oh, I, I cannot, I cannot.
C
I've never done a pull up.
A
Oh, I can put.
C
I, I think I just have arms that are more disproportionately longer than my legs. So I can reach past my toes.
A
Well, I cannot.
B
And pull up or chin up.
C
You had to do both.
A
You had to do both. And then I just climbed the.
C
Hung there, the rope.
A
I would have loved to see that, by the way.
B
Anyway, what about the shuttle run? You do the shuttle run?
C
Yeah, I could do the shuttle run slowly.
A
So the previous research really kind of looked at this as well. They looked at using a fixed ratio, and we all know that fixed ratio has this break-and-run pattern, right? And so one minute of duration counted as one ratio, like one frequency thing. And that was really effective. So they really wanted to look at previous research and extend it using the variable ratio schedule. Because what we know about simple schedules of reinforcement is the variable schedule makes this very high, stable responding on a cumulative record. And so they're like, okay, maybe we can further increase exercise, both the time you're exercising and the amount that you're exercising, to see. And they wanted to use a changing criterion design because it can start at a participant's specific level and we don't have to do it as a group, right? So each participant had their own individual baseline and we moved from there, so they didn't have to do a lot of group stuff. And they could gradually increase via small successive increases. So they had six 11-year-old boys. They were at school, and they were asked if they wanted to participate in this research. They said, do you want to come and exercise on a stationary bike in the nurse's office five days a week? And all of them said yes.
C
Oh, yeah.
A
And so three, whatever was happening in.
C
Class must have been really boring.
B
You change the criterion for interest in class.
A
So three were considered of typical weight, and three were considered at least 20% over their typical body weight. None of the boys were taking medicines, all were considered healthy, and none were doing any weight control management. The boys that they were calling obese were at least in the 97th percentile for weight and ranged around 32% to 161% over what their typical body weight should have been. And the non-obese boys were around the 50th percentile. One was slightly higher, but they said, oh, it fell in the range, so it's fine. All were of average height. And this took place in my favorite place, Winnipeg in Canada. Oh. So they probably had to exercise and it was probably in the winter, and that's why they had to ride bikes in the nurse's office, because it snows non-stop there. I went there once for a job interview and there was a husky walking down the street in a full snowsuit.
C
Oh, my God. It was. That's adorable.
A
Awesome. Yep. It was very cold out.
C
Did he have an owner?
A
Yeah.
C
He was walking on two feet, actually.
A
I guess everyone walks their husky in husky town. All right, so all they needed was a stationary bike. And this, I think, is clever: they counted one wheel rotation as the response.
C
A pedal.
A
Yep, a pedal. And it makes that sound. And on the bike there was a bell and a light that went off on a VR schedule. And each time the bell and the light went off, a point was delivered to the boy, and then the boys could trade in those points for backup reinforcers at the end of the session. They also noted that bikes were set to moderately low resistance, so it wasn't like they were making them trek up the hill. Yeah. So they took baseline for eight sessions until stability was demonstrated. And again, like I said, each boy served as their own baseline. Then they added the VR criteria for eight sessions. That was subphase one. And so what it was is, I.
B
Think it was a VR criterion.
C
Oh, so sorry.
A
So, so sorry. That was subphase one, and that was 15% above baseline. And then the second subphase was 15% above that, and then the third one was 15% above that.
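The stepwise criteria described here can be sketched in a few lines of code. This is a hypothetical illustration, not the study's actual procedure: each subphase target is computed as 15% above the previous level, starting from a participant's own baseline mean. The function name and the example baseline value of 60 revolutions per minute are assumptions for illustration.

```python
def criterion_ladder(baseline_mean, step=0.15, n_phases=3):
    """Return successive criteria, each `step` (15%) above the previous level."""
    criteria = []
    level = baseline_mean
    for _ in range(n_phases):
        level = level * (1 + step)  # raise the requirement by 15%
        criteria.append(round(level, 1))
    return criteria

# e.g., a boy pedaling a mean of 60 revolutions per minute in baseline
print(criterion_ladder(60))  # three criteria, each 15% above the previous
```

Because each criterion is anchored to the individual's own baseline, no group comparison is needed, which is one reason the hosts note the design suits individualized goals.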
B
So they went with the classic, I'm going to pick a number.
A
And then.
B
That's good.
A
Yeah.
C
Big jumps though.
A
Huge jumps. Yep. And then they went. They reverted back to baseline where nothing happened. There was a mini reversal. You say a mini reversal.
C
Yep.
A
And then they went back to that third VR schedule.
B
Oh. They used three different criterions.
A
They did a good job for five sessions at the end to see what happened.
B
Well, except for that 15%. They should have asked a professional stationary bike person.
A
Hi, I'm an expert in stationary cycling.
B
They didn't have Those in the 90s, I guess.
A
I bet they did.
C
Why wouldn't they? That's, like, peak stationary bike time.
B
Gene Fonda. Jet.
C
I was just gonna say Jane.
A
Richard Simmons.
C
Oh, legging. Put on your leggings, then put on your swimsuit, then put on your belt.
B
Yeah.
C
And then get your hair, your headband. Put on your leg warmers, wristbands, and your Reeboks.
A
There you go. Okay, so here's the point: all the boys went in individually and they could exercise as long as they liked. So they could get off at any time, which is nice when you think about assent, because I know we just talked about assent. So I was thinking about that. Yes. Or until 30 minutes had elapsed, because they can't just ride bikes all day.
B
Yeah.
A
Probably much to their sadness, because the boys did ask to ride more. And that was one of their subjective measures of social validity: the boys were like, can we invite our friends, and can we ride more for more points? So that was.
C
What did they trade the points?
A
Toys.
C
Okay.
A
They actually did a reinforcer assessment here. So I'll tell you about that.
B
There we go. That's a best practice.
A
Yeah. So prior to the first session, again, the children were asked if they wanted to exercise, and then they did. So they had a reinforcer survey that each of the boys took for backup reinforcers, and the top 10 items were chosen. Higher preference items were used for higher ratio requirements. And they were, like, typical toys.
C
Right.
A
So they brought a whole like bunch of 10 toys. They laid them out and they just had to earn points for those toys. Yeah. So that's all they really did. And it worked for the most part.
C
Right.
A
All six boys had very low rates of responding in baseline. And then when the VR schedule came in, most of them conformed to the schedule, except one boy went a bit higher.
B
Yeah, they all kind of went a little higher after that first criterion change. I mean, it wasn't like crazy. Well, there was that one who was almost like exponential growth for a little bit early. Reverse exponential growth.
A
Right. And look, they're looking at mean revolutions per minute, so it's like the response per minute. But then with each of the three VR increases, the data conform to that. So each time that they increased the response requirement, the boys followed suit. Right. And they did a great job. When they returned to baseline, you saw behavior decrease.
B
Oh, and I misspoke. I said it was a mini reversal. This was just a reversal.
C
It was back to baseline.
A
Back to baseline. And then when they went back to the third VR, everyone conformed back to where their schedule was. For the most part, there was more increases above the criterion than there were decreases.
C
I wonder how hard it was to know at the time how close you were to the criterion level.
A
I don't think you.
C
Because it's like. And it's a mean revolutions across the entire session.
A
Yeah. So I don't think they know.
B
So I think they just were. They were getting.
C
That might have been why it was more variable than you wanted. Because they were. There was probably a little bit of guesswork involved there for them to know. Am I actually going fast enough? Right.
A
Or not.
C
Or maybe it was reported to them differently based on, like, what was available for the screens. It was like, old.
A
There was no.
C
Just the old kind of.
A
Yeah. It's just the light in the sound.
C
Nothing.
B
They were getting points.
A
Yeah. They just knew they were getting.
B
And all they would know is, I am getting points. I'm doing points.
A
And then they would just bike until.
C
The experimenter was giving out the points as it went.
A
No, they had a bell, a bell light. A bell and a light.
C
Lord.
A
So each time the bell and light went off, they got a point tally.
C
But how. How is the bell going off?
A
Because the. It's on a variable ratio schedule of a revolution of the bike.
C
Okay.
A
Right. So let's say some sort of a.
C
I've read the study multiple times and still never really fully. Okay.
A
Here. So here we go. How about this? So they set it up so, like, maybe the third revolution the bell would go off, right, then the sixth revolution. It was predetermined. The experimenter programmed the bell and light to the revolutions prior to the kids coming in.
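The mechanism described here, predetermining which revolutions trigger the bell, amounts to generating a variable-ratio schedule in advance. A minimal sketch, assuming hypothetical values (a VR 5 schedule with ratios drawn uniformly around the mean); the study does not report how its ratios were actually generated:

```python
import random

def vr_trigger_revolutions(mean_ratio, n_points, spread=2, seed=0):
    """Pre-generate the cumulative revolution counts at which the bell/light
    fires, so each inter-point ratio varies around mean_ratio (a VR schedule)."""
    rng = random.Random(seed)
    triggers, total = [], 0
    for _ in range(n_points):
        # each ratio is drawn uniformly around the mean, never below 1
        ratio = max(1, mean_ratio + rng.randint(-spread, spread))
        total += ratio
        triggers.append(total)  # cumulative count at which the bell rings
    return triggers

# e.g., a VR 5 schedule programmed for 4 points before the session starts
print(vr_trigger_revolutions(5, 4))
```

Pre-generating the list is what lets a simple bell-and-light circuit "know" when to fire without any live computation during the session.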
B
I'm sure it was some sort of a circuit board situation.
A
They explained it.
C
Okay.
A
But it was okay.
C
Great.
A
Yes. And then the other thing that I found interesting was their other figure, where they looked at how much people were exercising. And this. Actually, I feel like the second graph is great clinically, but it kind of undermines the changing criterion.
C
Yeah, totally.
A
Right. Because it is. Oh, look, after we put reinforcement in, Everybody exercised to 30 minutes and wanted to do more, but.
B
Right.
A
We talked about best practice and you have to keep going.
C
Yeah. There's a ceiling. There's a ceiling effect here.
A
So this kind of undermines their last graph a little bit.
B
Yeah.
A
But everyone was okay.
C
Yeah, I think that's fine.
A
Everyone. It is fine.
B
Right.
C
It wasn't the main criterion.
A
No. Everyone was happy with the results. And everyone exercised the same at the end, no matter which boy you were, so that's great. Boys asked if their friends could participate because they found it fun. They asked if they could bicycle longer. And then all the boys actually convinced their parents to buy them bikes, because now they like biking.
C
Yeah.
B
Kudos to the editor of this one for allowing what is like one of the sweetest little like epilogues in the discussion of like one boy's parents said Peter finally loved going outside.
C
Yes.
B
Playing. And he loves playing now. And oh, they never thought it could be true for that. It was really nice.
A
Yeah. Peter went from a size 36 to a size 32. Everyone said he was unrecognizable: a shy boy before, now very happy. The principal even said he made a dramatic change. He's doing better in school. He's swimming. He's doing, like, other activities. Oh, and they all participated in a track and field event at school. They all did well; one did really well and went to the state championships. And the discussion was, like, very heartfelt.
B
But I don't think any of them played hockey after. So Winnipeg would consider this a failure.
A
Yeah, they really absolutely would. No hockey stars because then it was summer vacation.
B
There's no excuse when you play hockey in Canada.
C
And then the funny thing, it's a four season sport. Right.
A
They said future research should look at girls because we have not.
C
Right. Yeah. It's like 51% of the population.
B
Only boys like bikes.
A
Yeah, there you go. But I like this study, not because it talks about these boys and girls, but the.
C
The design. It's unnecessary to break it down by obese and non-obese.
A
I agree.
C
The results are not broken.
B
Down. It just said increasing bike riding in boys.
A
You're right.
C
That's true. You don't need it at all. It doesn't matter. And then the way that they named the boys, also, I find upsetting. We don't even need to get into it. Okay. I don't know if you even noticed what there was.
B
I didn't. It was a theme.
C
Yes.
A
I didn't notice.
C
The non-obese boys' names all start with an S, and the obese boys' names all start with a P. Okay. I feel like it's, like, skinny versus some other P word.
A
Oh. I think.
C
I think there's a lot of things P could be that like I didn't like it. But that might not be what the.
A
Authors meant, I didn't even know.
C
It was also a long time ago. Sure. And maybe they would do something differently now, so.
A
Sure.
C
Okay.
A
Yeah. Good. It's a good. It shows the design. Well, I did not even see that.
C
I know.
A
Oh, there we go.
C
It is a good demonstration of change. Yes.
B
All right, well, let's get out of the just disgusting 90s. Let's move into the modern, enlightened era.
C
My mom totally had one of those stationary bikes.
A
My grandma had one.
B
I bet that it weighed 500 pounds. It was just like, pure. Just metal all over the place.
C
So heavy. No one ever used it. It just. Just hung clothes on.
B
Probably had one of those things. You're like, I want to change the. The amount of, like, weight resistance. And you're like, turn the rusty knob on the bottom.
C
It did have, like, a little odometer type of thing that showed you how fast you were going.
B
Did it light up?
C
Didn't light up or ding? No.
B
Oh, well, you got no points for riding a bike. You just had to do it for fun.
A
For funsies.
B
All right, let's. I want to. I want to go back to the future and talk about the Range Bound.
C
All right. This one is in the distant Future, the year 2022. And I really.
B
By future, I mean past.
C
I really liked this episode. So just as a reminder to everyone, the title here was Using Mnemonics, Remote Coaching, and the Range-Bound Changing Criterion Design.
B
By episode, do you mean this article?
C
Is that what I said?
B
I love the fact that you. I really like this episode. And some people might say, oh, Diana just got a little confused in Diana's notes. I like the focus of this episode.
C
Sorry. Yeah, well, every time I introduce the articles, I also say, we had a couple good episodes. Yeah, that's careful.
B
If you do 10 years of podcasting, everything becomes an episode.
C
Yeah. Yeah.
A
Life is an episode. She's like, man, this episode was great.
C
Somebody should watch this episode.
B
I measure my life. And when I go to bed, did I do what. What research did I read for this episode?
A
I'm talking about her life episode.
B
There's lots of episodes.
A
Each day is a life episode.
C
On my own TV show, I'd watch it. Okay, so I really liked this article in that they, you know, sort of couched their research within what I think they viewed as important work. So they talk about why you want to focus on employment preparedness for students and individuals with intellectual disability. They talk about the value of employment: it contributes to self-worth and quality of life. Only 10% of people with intellectual disabilities are part of the paid integrated workforce, and that really results in poor employment outcomes for this population: underemployment, lack of financial independence, lower quality of life, etc. So they say we can work to improve this by introducing work-based learning opportunities in post-secondary education settings. And so work-based learning means using paid or unpaid work opportunities to, what they called, increase employability by teaching a lot of pre-employment-type skills. And they said a great option is having these programs built into IPSE, which is inclusive post-secondary education. They found that those programs produce more employment opportunities and higher salaries over time. So these work-based learning opportunities can help to build pre-employment skills, and that could include a bunch of different things, but the one that this study focuses on is decision-making skills. And so what they were really focused on here was decision-making skills related to whether one might want to have a particular type of job, because that could influence the choices that you then make in terms of job sampling and doing some of these paid or unpaid work opportunities. So that was, I mean, there's, like, a million things that could be targeted here, but that was just one of them. Yes. And so they said you could work on these decision-making skills, again, a lot of ways.
But one way that we want to look at is using a mnemonic device. And I love saying that word. And they said there's evidence for this in the past and a mnemonic device is just a way to remember something.
B
So like my very educated mother just.
C
Served us nine pizzas, except now it's not. It's just nachos.
B
Okay.
C
There's just no Pluto anymore, right? Or like the Great Lakes are homes. You can remember them that way. I could keep going, but I won't.
B
Nope, that's good.
C
Okay, so they said that mnemonic devices have been used in the past for helping folks remember how to complete job applications or make decisions about job preferences, etc. You could also use covert audio coaching, and they incorporated that here, which is obviously why it is coming up in this one. They specifically did remote audio coaching because it was via Zoom. So the current study is an extension of two previous studies by some of the authors of this study: Kearney and Torres (2022) and Torres et al. (2021). And the current study was combining remote audio coaching with teaching slash using the mnemonic device to teach employment decision-making skills to college students with intellectual disabilities. And the reason why we're talking about it today is because they used a range-bound changing criterion design in order to do this whole thing. So before I tell you about that, there were three participants in this study. They were all in this IPSE program at a local college for students with some type of intellectual disability. These students had that diagnosis as well as an ASD diagnosis. So there was Davita, who was age 20, Kara, age 24, and Hunter, aged 24 as well. And it was all done via Zoom, but their parents had been instructed to not interfere with the sessions, basically. Yeah, I forgot what the.
B
They had a term they kept using. It was another acronym. And I kept being like, I don't remember what that procedure was. I remember else as their, as their mnemonic. And then I realized, oh, they just gave a fancy name to. We used like bug in ear technology over zoom.
C
Oh, that's RAC. RAC: remote audio coaching.
B
I was very annoyed at the addition of an acronym for like. And we just talked to them over Zoom with Bluetooth headset. Like, that doesn't need a name. You just did that. That's what you did.
C
That's what you did. Yeah, yeah, yeah. So McDougall is also an author on this one, and McDougall (2005) is when they introduced the range-bound changing criterion design idea.
B
It's an option just like in every research meeting. I don't know everybody. What about a range bound criterion design? Huh? It's like, fine, whatever, McDougal.
C
Trust me. So the idea here with the range-bound is that rather than there being a single criterion that needs to be met, there is a range instead. So instead of it being like, oh, you have to do five, it's, you could do four to six, and anywhere within the range.
B
And lower.
C
Exactly.
A
Yeah.
C
So anywhere within the range receives the reinforcer. This gives people a little bit more autonomy, right? They talk about it in terms of it being a purposeful stepwise progression. And the advantage here is that you could use it to potentially shape new skills. It also is great because the participant doesn't do too much too soon is how they described it. Because doing too much too soon could lead to careless errors, overly variable responding, or impulsive responding. So the idea here is that we're going to go slow and steady. You have some choice in the matter, but you need to respond within this range in order to receive the reinforcer. So it kind of prevents people from moving too quickly through the steps.
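The range-bound idea just described can be shown in a tiny sketch. This is a hedged illustration, not the study's code: the reinforcer is delivered when responding falls anywhere inside the current phase's inclusive range, rather than on hitting a single exact value. The phase ranges below echo the episode's "tell me two to three reasons" example but are otherwise illustrative.

```python
def meets_range_criterion(responses, low, high):
    """True if the response count falls within the inclusive [low, high] range."""
    return low <= responses <= high

phase_ranges = [(2, 3), (4, 6), (7, 9)]  # e.g., "tell me 2 to 3 reasons"

# A student giving 5 reasons meets the phase-2 range but not phase 1 or 3:
print([meets_range_criterion(5, lo, hi) for lo, hi in phase_ranges])  # [False, True, False]
```

The upper bound is what distinguishes this from an ordinary "at least N" criterion: it caps responding so the participant can't leap ahead of the stepwise progression.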
B
I don't know if they necessarily demonstrate that it does teach better, you know. It's in the discussion: like, it does teach better because the change in range allowed them to focus their answer. No, but we don't know. It sounds good. Like, that sounds like a good reason to use it, whether or not that is actually what is happening. You know, it didn't seem like it hurt.
C
Yeah, that's. It's true that you don't really know if that is an advantage, but it's an option, I guess you could say. And then within the RBCC you can still do the bidirectional criterion, which we've already discussed as being pretty advantageous if you're looking to establish stronger experimental control with the CCD. So that would be an advantage here.
B
You could do a mini reversal.
C
Yeah. And then in this study they also added in a generalization condition and a follow-up condition at the end. So they have a lot going on here, I think, to demonstrate the strength of this design for this purpose. So the DV was. What was the DV?
B
It was like, it was like how.
C
Many of what I wrote down is not correct?
B
I think it was like how many components of the ELSE mnemonic did they do?
C
Yeah, so they used the mnemonic. The mnemonic was available, but that wasn't the DV. I don't know why I wrote that. The DV was how many reasons they gave for whether a particular job would be good for them.
B
Yeah, according to. And they scored it according to like which of the mnemonics in the ELSE acronym.
C
Yeah, kind of. Well, at the end of it all it was just total number of responses.
B
So how many answers did you give?
C
Pretty much, yes. I don't know. Yeah, I don't know, guys. So the mnemonic was ELSE: E, L, S, E. And they actually have the little picture in the article of what that looked like, and they showed it on a PowerPoint screen to the participants at the appropriate time. So E was, is the education level required for this job a good fit for me? So they could respond to that. L, do I like the particular job responsibilities? S, do I have the skills needed for this job? And then E again, am I satisfied with the earnings of this job? And they can answer each of those by thinking about what is needed for the job and then, do I kind of fit those criteria? So there's, like, two components to each of the ELSE parts of the mnemonic. Oh, there it is. The DV was the number of reasons provided by the student on whether they did or did not want the job in question. And then for materials, they used a website called CareerOneStop.org, which has a lot of little video and photo vignettes available on there of different jobs, different vocations, which is kind of cool. So within each session, this was all done via Zoom. Like I said, in the baseline, the experimenter asked them, which job do you want to learn about or consider today, A or B? So they built in choice, which I love. The participant selected one of them. They then watched the video from the CareerOneStop.org website, and then they said, do you think this is a good job for you? Tell me the reasons. They didn't give them any other guidance in the baseline condition. And then in the intervention they did basically what we just described, but they added in some additional components. So they also, in making the decision on where they were going to start the criterion for the changing criterion design, they did what you already suggested, Rob, which is they asked around.
So they asked 12 vocational rehabilitation counselors what they thought was an appropriate number of reasons that a college student with an intellectual disability might give in order to help find a job that's right for them. And they call this their reality base for their target criterion. So the lower boundary was a mean of three reasons and the upper was a mean of seven, with a total range of one to ten depending on the person. And then they used that as a gauge. But then for each individual participant they really chose their own levels anyway; it was just to make sure they weren't, like, totally off base. So in the intervention they set it up the same way. They said, please tell me X range of reasons why you might want this job. And so the ranges were like, tell me two to three reasons, tell me four to six reasons, etc. And then they would provide as many reasons as they could. And then if they needed more assistance, the experimenter pointed them towards the ELSE mnemonic and said, oh, well, you could also think about, you know, what the earnings are, and tried to help them along those ways.
B
Yeah, but they could choose which one from ELSE to use.
C
Yeah, they could choose. And then the final criterion was set at 11 reasons, and they talked about that at length. In phases two and three, they went on to have a few more in the range each time. And then phase four required a lower range. So they did do the bidirectional criterion here.
B
Mini reverse.
C
Yeah, yeah, yeah, that too. And then the only thing I, the only thing that I was like as they Said the session ended after the student provided the requested number of reasons. So did they like totally shut it down at that point or did they say maybe like, is there anything else, you know? Or did they have the student say, I'm all done saying reasons. If they had just had one of those phrases in there, I would be like, I would feel satisfied that they didn't remove the opportunity to respond further. But as it's written, I actually can't quite tell.
B
So it's.
C
Maybe they did and that part just didn't get included.
B
If they said, anything else? That's a prompt. Maybe I know one more. If they say, that's enough, this interview is over. That's how an interview really would end. You know, you do have some sort of response restriction in an actual interview. But you're right, could they just keep going? So, you know, it's a little bit of a weakening of the design. But again, the purpose of this is to have them do better job interviews, not necessarily just to have the strongest experimental design and use a mini reversal.
C
They absolutely did. They varied the length of the phases as well. So they did all those things awesomely. And then they also, like I said, had a generalization and a follow-up. So the generalization was with a job coach, so not the experimenter. They did those throughout and looked to see, when there weren't any of these additional range-boundary criteria in place, what did the individual do? They lined up pretty closely, actually, with the actual data. And then they did a follow-up at the end. Later on, the follow-up levels were a little bit lower, but again, the criteria weren't in place and they were much higher than baseline. So in that sense they achieved their goal.
B
And some of the final targets were higher than the job coaches had suggested.
C
No, it was, like, starkly different from the beginning. And then they also did a percentage of conforming data analysis. This was another acronym, PCD, that was in there. That was from McDougall et al. (2006).
B
Geez, McDougal's just throwing all this stuff in.
C
The guy's killing it. Yeah. And this looked great as well. So it's looking to see, like, how far off of the criteria they were.
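A percentage-of-conforming-data style calculation can be sketched simply. This is an assumption-laden illustration: I'm reading PCD as the share of data points that land within their phase's criterion range, and the McDougall et al. metric may differ in detail. All data values below are made up.

```python
def percent_conforming(data_by_phase):
    """data_by_phase: list of (observations, (low, high)) tuples, one per phase.
    Returns the percentage of observations falling inside their phase's range."""
    total = conforming = 0
    for observations, (low, high) in data_by_phase:
        for x in observations:
            total += 1
            conforming += low <= x <= high  # True counts as 1
    return round(100 * conforming / total)

# Illustrative data: one of the six points (the 7) falls outside its range.
phases = [([2, 3, 3], (2, 3)), ([5, 4, 7], (4, 6))]
print(percent_conforming(phases))
```

A single summary percentage like this is one way to quantify how tightly responding tracked the changing criteria, which is the experimental-control question the hosts keep returning to.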
B
Dougal, stop trying to make fetch happen.
C
He made it happen. It's. I believe it's a man. So it looked good. It was 100% for the intervention and then 67% for the generalization, because there was some variability. But I don't feel like that's an issue; like, you would expect there to be some, and it followed along the ranges. I know, like, one, two.
B
Yeah, maybe two of the generalization points falling outside of those ranges. That's nice.
A
I love the range actually. It gives you.
C
I know it gives you like I will use it.
A
It. It gives you like a hot place like you know, like you.
C
I know. I like the range as well.
B
Yeah, we're teasing McDougall a little but we do love this range bound criterion.
C
Design. And the first author of this study is Brady, just to make sure Brady gets the credit here as well. We don't want anything to be misconstrued there. Everyone did social validity: the participants, the VR counselors. Everybody loved it. And that might be all I need to tell you about this study.
B
It's a great, great example of using the range bound criterion design for a really socially significant behavior. I love the.
C
Yeah, it's cool. Graph is cool.
B
Job interview skill coaching.
C
Yeah.
B
Unfortunately, all of these skills are pointless now, because you're supposed to just write your information into ChatGPT to send it off to a business where they use ChatGPT to not give anybody a job. Unfortunately, that's the new way to do job interviews.
C
So Rob, we're all so tired.
B
Trenchant commentary from Rob over here. Oh, well. But back in the old days before ChatGPT, this was great. 2022, the last time you would actually talk to a human about why you thought you'd be a good fit for a job. I'm going to face. All right, well, thanks, Diana. Well, that's.
C
You bet.
B
Let's move into the dissemination station. Oh, well, I gotta grind the gears of the train a little more over here with commentary. My AI type 5, I guess. What did we learn about the changing criterion design?
A
That it's useful.
B
Let's get, let's. Let's do a little more than just it's useful.
A
I think it's useful, and I think we all love the range-bound version more than just the arbitrary, like, percentage-wise criterion. It gives you more of a set boundary of when it's appropriate versus when it's not. Right. So I think we like that.
B
Yeah. Well, even with the range bound, though, you still have to decide: are you increasing, decreasing?
A
Oh yeah.
C
100%.
A
Yeah.
B
Or you know, with, with professional guidance.
A
Right. But you still have that range where, if it falls anywhere in that range, you know, you're good.
B
Yeah.
A
Whereas you, if you don't have the range, you're like, it needs to fall exactly on this criteria. And like, well, what's acceptable? Three above, four below. Like, you don't really know.
C
Five below.
A
Wink, wink, everyone.
C
What? It's a.
A
It's a store.
C
Oh, I know. You made it sound sexy. Oh, I didn't mean to.
A
I just. I just actually just, you know, you.
B
Should see what you can get for $5. Wow.
C
I've actually.
A
This is the first year I've ever been to Five Below, and I was like, wowed.
B
So it's a fun store.
A
It's a great store.
B
Fun little store.
A
I think the other thing that we came away with is that we need more research, actually.
C
Right.
A
On the changing criterion design. A little bit.
B
I mean, it's not necessarily just more research. We need more research using stringent criteria as to how to effectively use the changing criterion design. It needs to have, like we said, three or so criterion changes as a good minimum. You want to have varying phase lengths. Maybe try your range-bound variant. Maybe have a professional help you talk about what changes would be appropriate to make as you're setting your criterions. Make sure to say... criterions. Yeah, criteria.
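For listeners who graph their own data, here's a minimal sketch of what a range-bound stability check might look like before stepping to the next criterion. This is our illustration, not anything from the episode or the articles: the function name `phase_meets_range`, the three-session stability window, and the chai-latte numbers are all assumptions.

```python
# Hypothetical sketch: decide whether a phase's responding has stabilized
# inside a range-bound criterion before stepping the criterion down.

def phase_meets_range(sessions, lower, upper, stability_n=3):
    """True if the last `stability_n` sessions all fall within [lower, upper]."""
    if len(sessions) < stability_n:
        return False
    return all(lower <= s <= upper for s in sessions[-stability_n:])

# Example: a decreasing criterion (e.g., chai lattes per week), stepping
# only after responding falls inside the current range for 3 sessions.
phases = [
    {"range": (8, 10), "sessions": [10, 9, 8]},  # criterion 1: 8-10 per week
    {"range": (5, 7),  "sessions": [7, 6, 5]},   # criterion 2: 5-7
    {"range": (2, 4),  "sessions": [4, 3, 2]},   # criterion 3: 2-4
]

for i, phase in enumerate(phases, start=1):
    lo, hi = phase["range"]
    met = phase_meets_range(phase["sessions"], lo, hi)
    print(f"Criterion {i} ({lo}-{hi}): {'met' if met else 'not yet met'}")
```

The stability window is the judgment call the hosts keep circling: too loose a range (say, 0 to 100) and meeting it demonstrates nothing, which is the experimental-control worry Rob raises below.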
C
Yeah, yeah.
B
And, you know, change the magnitude of the shifts as you can within those ranges. Kind of a nice. I'm curious, could you combine an increase or decrease in the width of the ranges to capture some of that magnitude shift between the criteria?
C
Probably.
B
I wonder maybe. Or would that just be like not showing any experimental control? My range was 0 to 100. Look at this. Wow. Responding stayed there.
C
Yeah. I don't know.
B
Maybe within. Within reason.
A
Right. And I also think that this is helpful for clinicians when they're thinking about behavior management or behavior change programs: you don't need to, like, go from 0 to 100 all the time.
B
Yeah.
A
Right. Like you can make small successive changes, and that's okay. And sometimes it's necessary.
B
Well, let's hope that folks start using the changing criterion a little bit more now that they have a better sense of some of the limitations that they themselves can control for to make a better design. All right, and that's it for the changing criterion design. Thank you all so much for listening to this episode of ABA Inside Track. If you want even more ABA Inside Track, please subscribe to our show wherever you'd like to get your podcasts. You can also subscribe on our Patreon page, patreon.com/abainsidetrack, to get all of our episodes a week ahead of time. And if you subscribe at the $5 and $10 levels, you're able to get access to all of our polls, including our listener choice and book club polls. Listener choice episodes are chosen by you once a season, and you get a free CE for listening. At the $10 level, you also get access to our book club episodes the minute they come out. Everyone else, you've got to wait a whole year, and you don't get two free CEs for listening to our discussion of books. We've got some upcoming books, or books that came out. We just did Atomic Habits. So if you were like, "I kind of want to read Atomic Habits," maybe you want to listen to the podcast first and you'll hear our detailed review of the book.
C
My life's going to get so much better now.
B
Again, that's patreon.com/abainsidetrack. You can also find us on abainsidetrack.com, where you can find links to all of the articles that we discussed on this and all of our previous episodes, as well as a place to purchase CEs. And speaking of purchasing CEs, if you go there for that purpose, you better know our second secret code word. It's garden. G, A, R, D, E, N. Whether it's a secret garden or a garden that everyone can find, you just need to know it's a garden.
C
Sound garden.
B
Could be a sound garden. Yeah.
C
Do I get to do pairings or no?
B
Oh, yeah, I forgot about pairings. Sorry.
C
Okay, great.
B
Sorry about that.
C
That's okay.
B
It's not in the notes, so I didn't see.
C
It is. No, it is now.
B
It was not there when I looked.
C
Da da da da. It's time for pairings. Pairings is the part of the show where I tell you about past episodes you might want to check out if you thought this one was interesting.
B
I changed the criterion for where the.
C
Secret code word went.
B
It's range bound. It could be before or after pairings.
C
You gotta tell me ahead of time if you want my behavior to conform. Here you go. You could listen to episode 111, Behavior Analytic Language. Episode 113, Visual Inspection of Data. Episode 146, Elopement with Dr. Megan Boyle, and the reason is because that is an excellent example of a changing criterion design. We just didn't talk about it today, because we talked about it then. I meant to say that sooner. Episode 239, Behavioral Instruction with Dr. Kendra Guinness, because we talk about teaching graphing skills. And then episode 256, Celeration Charts Explained with Jared Van. Yeah, check out any of those. And then I also like to recommend a snack to go with this episode. The snack today is chai lattes and cigarettes. My favorite because.
A
I know.
C
You know why? Because the example in like some of our classes is Jackie's decreasing her number of chai lattes per week.
A
Which was true, actually. I used a changing criterion design for this.
C
I know. And.
A
And there was another example of someone smoking six packs a day.
C
Yeah. And then the cigarettes are just the.
B
Classic, classic changing criterion design.
C
All right. And that was pairings. Please enjoy.
B
That they made you do in grad school. Remember that time you did decrease smoking? First you had to take up smoking, then decrease it.
C
Yeah.
B
There's a changing criterion: an increase and then a decrease. You just do a reversal at the end. Anyway, thanks so much for listening to ABA Inside Track. We already did all our plug information in the raw.
C
We did that.
B
It's range bound. It's range bound. You see. But you know, a couple.
C
I think all we gotta say is.
B
Some last big thanks to Dr. Jim Carr for our intro outro music, Kyle Sturry for interstitial music, and Dan Thabit of the podcast Doctors for his amazing editing work. Perhaps he'll edit pairings to go after the other section. Who knows? Then this plug won't make any sense. But he doesn't need to. Wherever it goes, there it is. We'll be back next week with another fun filled episode. But until then, keep responding.
A
Bye bye, bye bye.
Release Date: January 28, 2026
Host(s): Robert Perry Crews, Jackie McDonald, Diana Perry Cruz
This episode of ABA Inside Track dives into the nuances of the Changing Criterion Design (CCD), an experimental design “unicorn” in behavior analysis. The hosts clarify myths, explore the design’s clinical and research applications, and critique contemporary and historical studies using CCD (including the range-bound variant). The discussion balances practical “best practices” with an honest appraisal of the design’s strengths and weaknesses.
| Segment | Timestamp |
|--------------------------------------------|---------------|
| Defining CCD & Rarity | 01:48–04:49 |
| Components & Experimental Logic | 05:35–10:21 |
| Best Practices Discussion | 14:24–25:40 |
| Klein et al. Survey Findings | 18:24–24:39 |
| Practical App Discussion | 25:29–27:33 |
| Classic Cycling Study Review | 32:35–43:02 |
| Range Bound CCD Example | 43:54–59:05 |
| Final Thoughts / Dissemination Station | 59:53–62:49 |
Diana’s Pairings for Further Listening:
- Episode 111, Behavior Analytic Language
- Episode 113, Visual Inspection of Data
- Episode 146, Elopement with Dr. Megan Boyle
- Episode 239, Behavioral Instruction with Dr. Kendra Guinness
- Episode 256, Celeration Charts Explained with Jared Van
Suggested “Snack” Pairing:
“Chai lattes and cigarettes”—inside joke about CCD applications in reducing habitual behaviors (65:48).
The episode provides a comprehensive, insightful, and often humorous look at the changing criterion design’s role in experimental and applied behavior analysis. Listeners are encouraged to experiment with CCD, embrace best practices (as currently defined), and not be afraid to “do better than 75%” of the published literature through careful design and patient-centered application.