
A
Welcome to season two of Derms on Drugs, a video podcast brought to you by Scholars in Medicine, the best educational platform in dermatology, provided at no cost to medical providers. Derms on Drugs is where cutting-edge derm meets comedy. I'm Matt Zirwas from DOCS Dermatology, and each week I'm joined by my residency buddies, Dr. Laura Ferris from the University of North Carolina and Dr. Patton from the University of Pittsburgh, where we use our 60 years of combined derm experience to discuss, debate and dissect the hottest topics in dermatology. It's everything you need to know to be on the cutting edge of derm, and you'll actually have fun listening. New episodes drop every Friday on Scholars in Medicine, Apple Podcasts, Spotify and other major podcast platforms, and a reminder that there is a video component that has some of the key figures and tables from the articles we talk about. So let's go ahead and get into it. This week we have a really interesting topic that sounds boring at the outset: network meta-analyses. These get published all the time in all the journals, and we're going to get into the main question of whether you should believe them. Because a network meta-analysis basically takes a bunch of studies, kind of puts them all together with some fancy statistics, and then tells you which drug works the best, that's the basic idea, but they don't always have consistent results. So we're going to get into it right now. Dr. Ferris, why don't you go ahead and get us started off?
B
Okay, great. Thanks, Matt. So I am going to talk about an NMA for hidradenitis suppurativa. This was Amit Garg's group, and it was published recently in JAMA Dermatology: Efficacy and Safety of Medical Interventions for Moderate to Severe Hidradenitis Suppurativa: A Living Systematic Review and Network Meta-analysis. Okay, so what was this? What they did was basically look at a bunch of clinical trials, 25 trials, 39 treatments, almost 6,000 patients, and look at efficacy and safety. So what did they focus on? Phase 2 and phase 3 trials, looking for 12-to-16-week endpoints, and the main one they were looking at was HiSCR 50, and then in addition to that, safety. So what they're really trying to do...
A
All right, we're going to get into what they were really trying to do as we go forward. Just give us the answer. What did it tell us? What did it say?
B
Okay, what did it say? You know, what it said, I thought, was kind of not what we expected, but it was: adalimumab. It's pretty good and it's hard to beat.
A
That's a good catchword. It'd make a good commercial: adalimumab, it's pretty good.
B
It's pretty good.
C
Okay.
B
That's right, my marketing career, if this whole academic thing doesn't work out for me. So once-weekly adalimumab was kind of the standout here. So what does that mean? Nothing really beat it with indirect comparison. Bimekizumab, which I think we all know, an IL-17A/F dual inhibitor, was also a strong contender. Secukinumab, an IL-17A inhibitor, also showed strong efficacy but did not beat adalimumab. The other one that looked really good here, and they didn't just look at FDA-approved drugs, they looked at ones that are under investigation, was sonelokimab. So you might think, what is that? Well, that is a nanobody that is kind of like bimekizumab in that it targets IL-17A and IL-17F, but it's small. And I think the idea with these small nanobodies is that maybe they can penetrate tissue better. So that one actually numerically beat Humira, but the confidence intervals were wide because these are small phase 2 studies, so you can't really show superiority. The other one that looked good was lutikizumab, an IL-1 alpha and IL-1 beta inhibitor, and then povorcitinib, which is a JAK1 inhibitor. Overall, generally low discontinuation rates, around 5%. So I thought this was interesting. I tend to say I'm going to start with adalimumab and see how patients do with that, and from there I'll move to an IL-17 inhibitor, and I still feel good about doing that. The thing that I thought was interesting was infliximab, which I also think tends to be pretty effective for HS and that we use because we can go to high doses; it didn't do so well here. But I think that was also based on the size, quality and control of those clinical trials. So I think that Humira still holds the crown.
C
So the question I had was, that IFX-1? IFX? I don't think that was infliximab.
B
Oh, okay. Well, then that would explain a lot.
A
It's kind of got to be infliximab. I mean, just when you look at the dosing of it, it's like...
C
But we dose that in mg per kg. And I actually AI'd this, because I'm like, what do they mean by IFX-1 in this paper? And it came up with this antibody that binds to C5a.
B
Oh, I mean, I get it. That is something that is under...
A
And they're probably... I don't know. Have there been any phase 2 or phase 3 clinical trials of infliximab? I don't think they ever really pursued it.
B
No, I don't think so. I assume that this was. Yeah, that's a good point.
C
Yeah, I was confused about that. I meant to text you guys before.
A
Yeah. And then you thought you'd just make us look bad on air. Thanks, man.
B
Yeah, that was good. I like that. I'm gonna put that in my back pocket.
C
It's not making you look bad. It's making me look good. I think there's a distinct difference there.
A
Fair enough. All right, let's move on to our next study. I'll just make the one comment that I believe there's a little bit of uniqueness to the first study in a new indication; it's kind of a unique patient population at that point, and so maybe that makes adalimumab look a little bit different. But we'll get into that with our guest.
B
And I will say one other thing. I think HS studies are really hard to do. PASI scoring, me versus you, this week versus next month, is pretty consistent. I think it's really hard with HS to do these scores. So I kind of attribute some of the variability in outcomes to that, across all HS studies, not just here.
A
Agree. All right, so let's get into our second one. This is what really got me interested in network meta-analyses. In March and April of 2022, two network meta-analyses came out for atopic dermatitis. One was titled Systemic Immunomodulatory Treatments for Atopic Dermatitis: Update of a Living Systematic Review and Network Meta-analysis. The other was Comparative Efficacy of Targeted Systemic Therapies for Moderate to Severe Atopic Dermatitis Without Topical Corticosteroids: Systematic Review and Network Meta-analysis. I assumed naively at that time that if you did two network meta-analyses for the same disease, you should get pretty similar results. And while the results were vaguely close, there were some big differences. The biggest one that jumped out at me at the time was that the one published in JAMA Dermatology said that abrocitinib 100 mg was statistically significantly not as good as dupilumab, whereas the second one, published in a different journal, said that abrocitinib 100 mg was essentially identical to dupilumab. And so it really made me start to ask, well, how do these network meta-analyses work? And as you start thinking about these things, it really does make you go, huh, this is open to a lot, depending on how they design it, what studies they decide to put in and out, what endpoints they use, how they do all of the statistical mumbo jumbo. It really affects the outcomes. And that's actually why we're doing this episode: to try and give people an idea of how to decide whether you should listen to these things or not. So let's go ahead, next to Dr. Patton, who is going to give us an article that sort of touches on this.
C
Well, yeah, the other thing I wanted to say was that the one NMA was funded by AbbVie.
A
Yes, it was. And shockingly, it did show...
C
Well, the other thing that stood out was upadacitinib 15 milligrams. In the AbbVie NMA it was better than dupilumab, and then in the JAMA one it was about the same as dupilumab, the 15 milligram dose. So that was the other thing that jumped out at me, given who funded it.
A
Yes, you are correct that that does sort of jump out at you as well. So that's one of the topics that we'll get into. I've kind of learned that if a network meta-analysis is funded by a drug company, I generally pay no attention to it at this point. And I can't wait for our guest to come on and tell me if that's barking up the right tree. All right, Dr. Patton, go ahead.
C
All right, so my paper was titled The Effect of Methodological Choices and Inclusion Criteria on Network Meta-analysis Results in Psoriasis. This was by Galimi et al. in BMC Medical Research Methodology, April 2025. So we talked about NMAs and what they are, and you talked about that NMA funded by AbbVie. One that always jumped out at me was a meta-analysis on hair loss treatments published in 2023, so dutasteride, finasteride, all those, and it came out with the best thing being this natural product formulation, ALRV5XR. And I'm like, what is that? And it turns out the meta-analysis was done and funded by the company that makes ALRV5XR. So that was one that jumped out at me. I just remember thinking, right, if it's done by a company, you don't trust it. So the authors acknowledge in the background part of the paper that overlapping NMAs can sometimes provide discordant results. They took a bunch of psoriasis studies, specifically from the Cochrane review, so those studies had already been kind of vetted; you could say they did the meta-analysis on previously vetted studies. They changed analytic methods, the outcomes they were looking at, and which studies to include and exclude, and they ran 560 different network meta-analyses on about 20 different psoriasis drugs. I didn't understand a whole hell of a lot of what these guys were talking about. I mean, this is the part that's confusing about meta-analysis: I am a clinician. I don't understand what it means. So it's like, do I trust this? Don't I? Are there things you can look for? Our guest hopefully can help us there, but maybe you don't need to understand it. There were very slight...
A
That's right. I don't understand how AI works, but I can still use it, right? Sometimes it doesn't matter if you know. But yes, I would like to have a way to tell whether the AI I'm using is likely to be hallucinating and lying or not.
C
Yeah. So across all these different analyses, the study showed very, very slight variations in the efficacy of individual drugs, even when different methods were used. And if you were using any of the NMAs to make a treatment algorithm for psoriasis patients, what are going to be my six top choices, what are the six most effective, it didn't change much. Your top picks were bimekizumab, ixekizumab, infliximab, which surprised me a little bit, secukinumab, which surprised me a little bit too, sonelokimab, guselkumab, brodalumab, kind of right in that mix. That's what figure 3 shows. It's looking at PASI 90 from weeks 8 to 24, and there are all these dots showing where each drug ranked. Bimekizumab was always ranked first or second, and the bigger circle in the number 1 ranking means it ranked first more often than it ranked second. The infliximab data is interesting. There were NMAs run where infliximab, and they don't specifically go into the dose, ranked the highest. So that's where you could have a drug company, like Janssen, who made the first one, Remicade, right?
D
Yep.
C
So they could say, all right, we're going to hire a bunch of PhDs, and they're going to say, we did an NMA. They could easily run 560 and cherry-pick the one that showed their drug to be the best. And this is where I think the problem lies in trying to interpret NMAs.
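As a rough illustration of how a handful of analytic choices can multiply into hundreds of distinct analyses, here is a minimal Python sketch; the factor lists below are invented for illustration and are not the actual parameters Galimi et al. varied.

```python
from itertools import product

# Hypothetical analytic choices an NMA team could vary; each combination
# defines one complete network meta-analysis.
outcomes        = ["PASI 75", "PASI 90", "PASI 100"]
timepoints      = ["weeks 8-12", "weeks 13-16", "weeks 17-24"]
models          = ["fixed effect", "random effects"]
effect_measures = ["odds ratio", "risk ratio"]
inclusion       = ["all trials", "low risk of bias only", "licensed doses only",
                   "phase 3 only", "no concomitant topicals"]

analyses = list(product(outcomes, timepoints, models, effect_measures, inclusion))
print(len(analyses), "distinct analyses")  # 3 * 3 * 2 * 2 * 5 = 180
```

Run each one, keep the version where your drug ranks best, and you have a cherry-picked result that is still, technically, "a network meta-analysis."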
A
Yes.
C
And so.
A
Yeah. Well, let's finish up. Ixekizumab was anywhere from 1st to 7th, right?
C
Infliximab.
A
Infliximab, yeah. Anywhere from 1st to 7th. It had the biggest sort of spread.
C
Yeah.
D
Right.
C
Now, they said that a lot of the poor results for infliximab came from one particular study where it was compared against methotrexate. For some reason in the paper they call that versus placebo when they refer to that trial, but there was no placebo; it was infliximab versus methotrexate. Maybe they meant comparator or something. And these were methotrexate-naive patients, and infliximab wasn't much better than methotrexate in that trial. And that was the one study they said was the outlier. That's where it got kind of dinged.
A
Okay.
C
For not being as effective.
D
Okay.
B
That's a pretty small study, right? That wasn't a major study, so you've got sample size issues. I also think infliximab, the real infliximab, not my fake infliximab from my paper, is also hard because there's not a lot of standardization in how they're dosing it, right?
C
Dosing frequency. Yeah.
B
Frequency or constant? Yeah.
A
Yep. All right, well I think with that.
C
Yeah, we can bring in our guest.
A
All right, so we have got our special guest, Dr. Aaron Drucker from the University of Toronto who is the NMA guru of the world as far as I am concerned. He actually was the person who published the, that NMA that I talked about on my part. And we are going to have him on to try and help the derms on drugs figure out how to, how to assess meta analysis. So Dr. Drucker, great to have you on the show.
D
Thanks for having me. It is one of my favorite things to talk about, so I'm looking forward to talking about NMAs with you.
A
All right, so we thought you were.
B
Going to say we were one of your favorite podcasts to listen to.
D
That too.
A
That's obvious. That's obvious. So first let me give, for our listeners, my super simple, non-expert sense of what a network meta-analysis means. It means that you take all the studies, you put in whatever criteria you want to use for which studies you're going to include, and then the network is like, okay, drug versus placebo, but if you've got this drug versus that drug, you plug that in too, and if you've got that drug versus a third drug, now you can kind of link these drugs together in a network. My guess is that they are very robust when you have multiple trials comparing different drugs to one another, but whenever the network is really everything versus placebo without many head-to-heads, my guess is they lose some of their robustness. Is that a generally accurate assessment, Dr. Drucker?
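To make the placebo-bridging idea concrete, here is a minimal sketch of a Bucher-style adjusted indirect comparison in Python; the effect sizes and standard errors are made up for illustration.

```python
import math

def indirect_comparison(d_a_vs_placebo, se_a, d_b_vs_placebo, se_b):
    """Adjusted indirect comparison of drug A vs drug B through a common
    comparator (placebo), on the log-odds-ratio scale. Subtracting the two
    placebo-controlled effects gives the A-vs-B effect; the variances add,
    so the indirect estimate is always noisier than either input."""
    d_ab = d_a_vs_placebo - d_b_vs_placebo
    se_ab = math.sqrt(se_a**2 + se_b**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log-odds ratios vs placebo from two separate trials:
d, se, ci = indirect_comparison(1.8, 0.25, 1.4, 0.30)
print(f"indirect A vs B: {d:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

The widened confidence interval is the statistical price of never having run A against B directly.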
D
I think for a lot of the things we're going to talk about today, the answer is, it's not that simple. A network meta-analysis that is mostly drugs connected to each other through placebo is going to make you question whether it's robust or not, but it's not necessarily no good. So for example, our living network meta-analysis for atopic dermatitis started out with mostly all these new biologics and JAK inhibitors connected to each other through placebo. And we put out results, and we could make some assessments that we thought were pretty robust about how these drugs compared with each other, but we breathed a huge sigh of relief when we started to see some head-to-heads and they lined up with our network meta-analysis. So it ended up that it had been robust all along. But you need those head-to-head studies, those connections that aren't just through one comparator, usually placebo, to really assess that.
A
So that's one of my first questions. Sometimes you've got a network meta-analysis and then you get a head-to-head study, and if they differ in their results, it seems to me like you should put more weight on the head-to-head study than on the network meta-analysis. But we're always told that meta-analyses are the gold standard evidence. So am I right that the head-to-head should trump the network meta-analysis, or what do you do in that situation?
D
So if I'm doing a network meta-analysis and I have a result like that, the term for this is incoherence, where it doesn't cohere.
A
That's also the term for Dr. Patton on Friday nights after 8 p.m.
C
To way before 8.
D
So if the direct evidence doesn't agree with the indirect evidence, that's a big problem. I'm going to have a big problem feeling good about the network meta-analysis that I'm going to try to publish and get out there, and I think people who are reading it are going to have a problem believing it. Because if you have a well-done randomized controlled trial of two drugs head to head, that does trump the results of an indirect network meta-analysis. And it doesn't mean you necessarily did something wrong statistically, but probably there was something different about that trial's population compared to the populations of all the other trials, or the dose of the drugs you're studying was different, or maybe most of the trials in your network go out to eight weeks and that was a 16-week study, something like that. Because really they should line up. If they don't, it suggests there are important differences. And if you can't find any of those important differences, it's still a major problem in terms of believing the overall results of the network.
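For a loop in the network that has both kinds of evidence, the agreement Dr. Drucker describes can be checked with a simple z-test on the difference between the direct and indirect estimates. A sketch, with invented numbers:

```python
import math

def incoherence_check(d_direct, se_direct, d_indirect, se_indirect):
    """Compare the head-to-head (direct) estimate with the indirect estimate
    for the same contrast. A large z / small p flags incoherence in the loop."""
    diff = d_direct - d_indirect
    se_diff = math.sqrt(se_direct**2 + se_indirect**2)
    z = diff / se_diff
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return diff, z, p

# Hypothetical: the direct trial says A beats B by 0.10 log-OR,
# while the rest of the network implies 0.55
diff, z, p = incoherence_check(0.10, 0.20, 0.55, 0.25)
print(f"difference = {diff:.2f}, z = {z:.2f}, p = {p:.3f}")
```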
B
So can I ask you, the endpoint you pick has to really matter, right? So with psoriasis, PASI 100 is going to look different than PASI 75, right? And now, as drugs get better and better, there are lots of things that can get a lot of people to PASI 75, so PASI 100 is a higher bar. So is it better to pick the highest bar or something intermediate, or does it not really matter when you're doing these?
D
I think that's more of a clinical question than a network meta analysis question. Because ultimately, like you guys have been talking about, the end users of these network meta analyses are going to be clinicians and patients who are trying to decide between different treatments and, and if PASI 100 is more important to them when choosing between drugs, then that's the more important outcome that you should be using as a main outcome for your network meta analysis.
B
So we should really be thinking about that when we look at them, like, what were they looking at? And if I really just care about safety, or I really care about, you know, PASI 75, I should be looking at that. And what do you do if they have HiSCR 50 and HiSCR 75? How do you put all that into a model?
D
So you have to treat each outcome separately. I mean, you kind of could figure out some statistical way to combine them, but I'm a bit of a purist there: I like the outcomes to really be the same outcome if I'm going to combine them in one analysis. What we do in our living network meta-analysis is have all these different outcomes that we assess separately. So we run an analysis for EASI score as a continuous outcome and for the DLQI, the quality of life measure, as a continuous outcome. We also run analyses for EASI 75 and EASI 90 and Investigator Global Assessment (IGA) success. So we run all these different outcomes. We decided on them, at least our most important outcomes, before we ran any version of the network meta-analysis. We started out with the continuous outcomes and added on some of those binary outcomes because people were asking for them. But that's another thing that's really important: you want to make sure you're choosing all these outcomes before you go about running the analysis, and not picking which outcome you're prioritizing based on what the results look like.
C
Do you think the drug companies run them like that one paper, the psoriasis paper? Do you think they run hundreds of NMAs and cherry-pick? Or is that something you would never comment on because they'd come after you?
D
I'm not worried about them coming after me. I've never been in those rooms. I don't really know.
A
He's Canadian so he's probably safe.
D
Yeah, I'll certainly be polite about it. I think it's possible that they run multiple versions of it. But what I think is also possible is, let's just take the example in atopic dermatitis of concomitant topical therapy. In some trials they allow concomitant topical therapy; patients are encouraged to use, you know, triamcinolone or whatever once or twice a day while they're using their new biologic. In other trials topicals are not allowed, and if patients do use them, they're considered to be using rescue therapy and they're treated differently in the analysis, considered a treatment failure. So if you have a biologic and you ran one study with topicals and another study without, and you notice that your difference against placebo looked better for the drug in the trial with topical steroids than it did in the trial without topical steroids, you could say, hmm, well, maybe let's do our network meta-analysis just using the trials that include topical steroids. You don't necessarily need to run the NMA beforehand; you might have some idea going in of what the better parameters to choose are. Again, I've not been in those rooms. I don't know how they do this, I don't know how they make these decisions. But that's where a protocol is really important. You want to see a protocol out there before an analysis is done so you can decide, did they settle on these important things before they went about doing the analysis?
A
So how much work is it to do a network meta-analysis? Is it like you've got a computer program where you type in, okay, here was the percent of people who got the EASI 75, here were the number of people in the study, you do that for, you know, 15 studies, and then you hit go and it tells you? I assume it's not that easy, but I have no idea how much goes into it.
D
It can be that easy. There are statistical packages out there where you can just plug in those simple numbers like you said, and you can actually get some pretty good results. Depending on what you're looking at, it can be pretty robust. But if you're doing anything more complicated, then you often need something more than that. So particularly for our analyses, where we're looking at continuous outcomes, not just these yes-or-no, did-they-meet-EASI-50 outcomes: when we're looking at things on a continuous scale, it's a little bit more complicated and needs some more detailed coding. So you could do a network meta-analysis just using some web-based program that you could probably find online with a Google search, but for the more complex ones, you probably need something more.
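In the simplest fixed-effect case, "plug in the numbers and hit go" really is close to the truth: the network estimate is essentially an inverse-variance weighted least-squares fit over the trial contrasts. A toy sketch with made-up numbers:

```python
import numpy as np

# Treatments: 0 = placebo (reference), 1 = drug A, 2 = drug B.
# Each row: (treatment, comparator, observed log-OR, standard error).
trials = [
    (1, 0, 1.8, 0.25),   # A vs placebo
    (2, 0, 1.4, 0.30),   # B vs placebo
    (1, 2, 0.5, 0.35),   # A vs B head-to-head
]

n_treat = 3
X = np.zeros((len(trials), n_treat - 1))   # columns = effects vs placebo
y = np.zeros(len(trials))
w = np.zeros(len(trials))
for i, (t, c, est, se) in enumerate(trials):
    if t != 0:
        X[i, t - 1] += 1.0
    if c != 0:
        X[i, c - 1] -= 1.0
    y[i] = est
    w[i] = 1.0 / se**2                     # inverse-variance weights

W = np.diag(w)
d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # pooled effects vs placebo
cov = np.linalg.inv(X.T @ W @ X)
se_ab = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
print("d(A vs placebo) =", round(d[0], 2), " d(B vs placebo) =", round(d[1], 2))
print("d(A vs B) =", round(d[0] - d[1], 2), "+/-", round(se_ab, 2))
```

Real packages add random effects, continuous outcomes, and incoherence diagnostics on top of this skeleton, which is where the extra coding Dr. Drucker mentions comes in.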
C
When the NMA came out, the one done by AbbVie, we were kind of maligning them earlier, but in fairness, their study showed upadacitinib 30 milligrams as probably the most effective therapy, and your meta-analysis came to the same conclusion, right? Against dupilumab, probably more effective. But it was that 15 milligram dose where their paper said 15 milligrams was better than dupilumab, whereas your paper showed 15 milligrams equivalent to dupilumab. When you saw their NMA and you saw it was AbbVie, could you read through the paper and be like, oh, I know why they showed that better? Or, for somebody who knows all about meta-analysis, is that impossible, you just don't know?
D
So I don't remember the specifics of that paper and the ins and outs of their methodology and how it differs from ours, but a lot of this is interpretation. In network meta-analysis, one of the big advantages, which you guys have been talking about, is that you can rank the treatments. You can say this is one, this is two, this is three, this is four, across whatever efficacy parameter you're looking at. But those ranking statistics are really oversimplified. There's no confidence interval; there's no way to assess how certain you are that that treatment is number one if you just look at those face-value ranking statistics. And off the top of my head, I think upadacitinib 15 milligrams in our network meta-analysis is better in terms of its ranking statistic than dupilumab. But when you look at the actual number, whether you're looking at EASI 75 or the absolute change in EASI, the difference between upadacitinib 15 and dupilumab is so small that even if a ranking statistic says this one's two and this one's three, the difference isn't big enough that I care which one's two and which one's three; they're so similar. So again, it's all about interpretation, some nuance. There's going to be some spin. There was a nice article from that Cochrane psoriasis team, who've done a lot of great methods work in NMA, showing that published network meta-analyses sponsored by industry were more likely to have spin in their abstracts and manuscripts than ones that were either not funded or were funded by academic funders. So that's another thing: even if the statistics are kind of the same, you can spin it one way or the other.
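One way to see why a bare ranking hides the uncertainty is to resample the pooled effects and count how often each drug actually lands in first place. A sketch assuming independent normal estimates, with invented effect sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled effects vs placebo (higher = better) and standard errors;
# correlations between estimates are ignored for simplicity.
drugs = ["drug A", "drug B", "drug C"]
d = np.array([1.55, 1.50, 1.10])
se = np.array([0.15, 0.12, 0.20])

# Resample the effects and tabulate how often each drug ranks first:
draws = rng.normal(d, se, size=(20_000, len(d)))
p_first = (draws.argmax(axis=1)[:, None] == np.arange(len(d))).mean(axis=0)
for name, p in zip(drugs, p_first):
    print(f"P({name} ranks 1st) = {p:.2f}")
```

With effects this close, the "number one" drug wins only a bit more than half the draws, which is exactly the nuance a face-value ranking throws away.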
A
So how would you recommend, for our readers or listeners who maybe are going to be reading some network meta-analyses, as somebody who's got as deep an expertise in this as it is possible to have, how would you recommend trying to decide whether to listen to one? Whether it's one I think I saw a few months ago that said XYZ, or a new one, or one of these I haven't seen before. How would you recommend people go about trying to figure it out?
D
Sure. Well, I think I've talked about a protocol a few times. A protocol is super important. In a randomized trial, we expect there to be a protocol, we expect it to be on ClinicalTrials.gov, and we can follow it along and see what their planned outcomes were and then what they report in their paper. The same thing should happen for a network meta-analysis: you know what they're going to do before they do it, and then the results, in terms of how they report them, should match up with what they said they planned to do. That's one thing that's really important. I think industry sponsorship is a bit of a red flag. It's not necessarily going to be bad, but for network meta-analysis, there's no good reason for an industry group to do one when there are already academic groups doing them. They're not that hard to do, like we talked about before; you don't need tons of industry money to do them. Industry is great at doing clinical trials and has the money to do clinical trials, but they don't need to be the ones doing network meta-analyses. And then I think a third thing is looking for that nuance. The conclusion should not be "this drug is number one, period." There's usually more nuance to it. You want to know about the other drugs; you want to know by how much drug one was better than drug two. And if everything is just focused on "well, this one's the best," I mean, that sounds like spin to me. Even if in fact it is true that that one's the best, there ought to be more nuance in the interpretation.
A
So the idea that these should come more from academia does make a ton of sense to me. And before I forget, will you plug your online living website? It's www.eczematherapy.org, correct?
D
Eczematherapies.com, so very close. And yeah, our network meta-analysis is what we call living, so we run a new search every four months, and even before we publish results in a journal, we often publish results online. We don't necessarily run the statistical analysis every four months if there's nothing really important happening, if no new important trials have been published. But we do put results up there as soon as they're available, because JAMA Dermatology doesn't want to take a new network meta-analysis from us every four months. So we keep things fresh up there.
A
Is there a reason you guys only do it for eczema? Why don't you have www.psoriasistherapies.com and www.hstherapies.com? Why don't you help us all out? Then we could all just go to them and say, this is the guy we trust.
D
I'm flattered. But also I would point out that the Cochrane group doing that psoriasis network meta-analysis is also doing terrific work. Their living psoriasis network meta-analysis I put a ton of trust in, and I think it would be really redundant for us to do something similar. So it doesn't all have to come from me; there are other people doing great work as well.
B
Can I ask you a question from a more academic perspective? I know probably most people who listen to us are not in academia, but I'm always trying to think of ways to engage our residents in clinical research, and I think something like this really would appeal to them. They love treating patients and like to think about how we can do the best for them. If you had, like, a two-minute summary for a dermatology resident, you're smart, but you don't have a PhD, how would they get involved in learning to do this and doing it? I think this would be a great academic area for them.
D
It's a great question and you know, for me I got involved with it completely by chance. I was doing a master's degree at Brown and my mentor was Abrar Qureshi who didn't really do a lot of this network meta analysis or any meta analysis stuff. But the teacher of my meta analysis course I did as part of my master's degree was an international expert in network meta analysis and happened to be just starting one on basal cell skin cancer. And that's how I got my first experience with it. I think the first step, if you're a trainee interested in this area is to read some papers on the basics of network meta analysis, to read some of the really well done network meta analyses in dermatology. And then if you're trying to find your own area in this, the key is to find something that hasn't really been done or that hasn't been done well. So you've got to find a disease state that you're not just going to be doing something redundant to what someone else is doing. Because most of the time the results are going to line up and if you're just replicating something, it's not going to be all that meaningful. So if you have a disease state that you're interested in that doesn't have a network meta analysis, or it has one, but it's old, or it has one, but you read and you're like, well, there's a lot of spin here, or I don't agree with all the trials they put in here. I would have done this differently. Even just thinking about it from a clinical point of view, then that's, I think, where you can make a real contribution.
C
So you did the NMA, and I think upadacitinib ranked well; dupilumab, I think, is competitive because of its safety profile. Are you ever approached by the drug companies to say, hey, you want to talk on our drug? And would you, or would you be like, you know what, I really want to be independent and not have people look at my work and think, oh, he must be biased? Is that hard to do, or have they never asked you, and so it's been easy?
D
No, I get approached to do consulting work, or even sometimes to have discussions like we're having now with some of the drug companies, to explain to their staff what network meta-analysis is. And the easiest thing for me has just been to have a blanket no, because I'm involved with the American Academy of Dermatology guidelines and I'm a non-conflicted member of that, and as soon as you start to do anything, then all of that gets thrown up and is in question. So there are those concrete examples of how it's been important for my career not to be conflicted. But also, reputationally, I think part of the reason people believe our work is because I'm not conflicted, and I certainly wouldn't want to mess that up.
A
All right, I'm going to jump in and say that while that discussion was going on, I googled "living network meta-analysis for psoriasis" and it did take me to the Cochrane review page. And just for anybody who is wondering, according to that meta-analysis, the most effective drug was infliximab, and then it went to bimekizumab. Let me find it again here. So it was infliximab, then, with moderate-certainty evidence, xeligekimab, which is not on the market, then bimekizumab, lebrikizumab, sorry, ixekizumab, and risankizumab. So again: infliximab, a drug that's not on the market, then bimekizumab, ixekizumab, and risankizumab were the results.
B
It's gonna be like your new Billboard top 40.
C
The thing is, infliximab is cheap. I think per PASI point, infliximab is by far the cheapest because of all the biosimilars available. I think we need to seriously reconsider infliximab as a first-line.
A
Not unreasonable. My guess is it's not as safe as the other ones, and it may lose efficacy over time, and you've got to do infusions. I guess you can send people to their house to do the infusions now, but for your regular practicing derm, I think setting up the home infusions is just so outside of what they're used to doing.
D
The other thing I'll say is that one of the assumptions of network meta-analysis is called transitivity; essentially, it's that everyone in one trial could just as easily have been randomized into another trial. And the infliximab trials were so long ago that the patient population going into a biologic trial then is probably not really the same as the patient population going into biologic trials now. So there may be some sort of carryover effect just from the fact that the people in those trials were different. We're going to run into that in eczema too, where the people who were in the original dupilumab trials are not going to be the same kind of people who are in the new OX40 trials now. There are statistical ways to look at that, but that might be part of why infliximab is coming out the way it is.
A
Yeah, it's an interesting thing. I always think about the subsequent head-to-head studies with dupilumab, which basically showed that the JAKs at intermediate dose are no better than dupilumab. My guess has always been that the JAK companies were surprised at those results, because it seems like they likely wouldn't have done those studies if they knew those were going to be the results. But it's an interesting challenge. So I think we've gotten as much clarity as we're going to get into the incredibly murky world of network meta-analyses. I guess the other question I would ask is, do you have any rules of thumb for how many studies should be in a network meta-analysis before it becomes useful? I mean, I've seen network meta-analyses that have four studies in them. I assume the more studies the better and the more patients the better, but is there any kind of rule of thumb there?
D
I don't have a real rule of thumb, but I think generally you like to see at least some head to heads, not everything connected through placebo. Of course, that's not always possible and the results can still be meaningful even without that. And you'd like for at least one of your comparisons to have more than one trial so that it's not just one study connecting everything you have, at least for one of the comparisons that you're making.
A
More than one trial, what does that mean? So you have at least two trials of drug A versus placebo, is that what you mean by that?
D
Yeah, exactly.
A
Okay, is there a metric of the average distance of connection? So if you said to me, okay, for this one, every single comparison goes through placebo, your average distance of connection would be two, right? Drug A to placebo, placebo to drug B. Whereas for every head-to-head, now you've got a one, which is drug A to drug B. So the more head-to-heads you have, the smaller that number gets. Is that a metric, or did I just come up with something that I should trademark?
D
Yeah, I think that sounds like it would be a worthwhile statistic. I've not seen that; it might exist, but I haven't seen it. What I do is just look visually, and we call it hub and spoke, where everything's just connected through that placebo hub with all these spokes coming off of it. So that's how I assess it. But a number would certainly be useful.
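Matt's proposed metric maps neatly onto a standard graph statistic: treat treatments as nodes and direct comparisons as edges, then average the shortest-path lengths between drug pairs. A sketch using networkx, with a made-up evidence network:

```python
import networkx as nx

# Nodes = treatments, edges = direct (within-trial) comparisons.
# A pure hub-and-spoke network scores 2.0; head-to-heads pull it toward 1.0.
G = nx.Graph()
G.add_edges_from([
    ("placebo", "drug A"), ("placebo", "drug B"),
    ("placebo", "drug C"), ("drug A", "drug B"),  # one head-to-head
])

drugs = [n for n in G if n != "placebo"]
pairs = [(a, b) for i, a in enumerate(drugs) for b in drugs[i + 1:]]
avg = sum(nx.shortest_path_length(G, a, b) for a, b in pairs) / len(pairs)
print(f"average connection distance: {avg:.2f}")  # 1.67 for this toy network
```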
A
Okay.
C
Hey, look at you, Matt.
A
Coming up with unanswerable questions, that's my thing. But I would give an answer, right? An unanswerable question where I make up an answer and nobody can really challenge it.
C
Good idea.
B
Speaking of unanswerable questions, you got some trivia for us?
A
Let's go, Patton, what do you got? So, Dr. Drucker, again, the rules are you've got to let Patton finish reading the question. As soon as he finishes reading, you can shout out your answer.
C
All right, so I just went into statistics and interesting historical facts in my little deep dives here.
A
Before you do that, I've got to say one statistical thing that I always thought was a big deal, but it's never come up in real life. The idea that if drug A got 100% of people 50% better, and drug B got 50% of people 100% better and 50% of people 0% better, they would both show the same average improvement but would be very different drugs. I've always thought that'd be a big deal; one would have a tiny standard deviation and one would have a huge standard deviation. But I've never really heard statistics people talk about it, so I guess that isn't something that really comes up, where we get drugs like that. All right, anyway, that's enough of a digression.
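Matt's thought experiment is easy to make concrete: two response distributions with identical means but radically different spreads. A toy simulation with invented data:

```python
import numpy as np

# Two "drugs" with the same mean improvement but very different spreads.
drug_a = np.full(1000, 50.0)                          # everyone improves 50%
drug_b = np.r_[np.full(500, 100.0), np.zeros(500)]    # half improve 100%, half 0%
for name, x in [("A", drug_a), ("B", drug_b)]:
    print(f"drug {name}: mean = {x.mean():.0f}%, SD = {x.std():.0f}%")
# drug A: mean = 50%, SD = 0%
# drug B: mean = 50%, SD = 50%
```

Reporting only the mean improvement would call these equivalent; the standard deviation, or better yet the full responder distribution, is what separates them.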
B
Prescribing them. Yeah.
C
All right. These two men were the only authors of the 1958 paper published in the Journal of the American Statistical Association titled Nonparametric Estimation from Incomplete Observations. Everyone, what's that?
A
Is Bayes one of them?
C
No, but you're on the right track. We've heard these two names thousands of times.
A
Watson and Crick.
C
No, they did statistics, maybe survival curves.
D
Kaplan.
C
I think Drucker got it out before Ferris. You finished at the same time, but Drucker started first. So 1 point to 0.5.
B
It's the Canadian delay coming over the border.
C
Yeah. I should just give him.
B
Yeah, he gets it.
C
Yeah. So it was Kaplan and Meier, Edward Kaplan and Paul Meier.
A
All right, that's the paper that brought Kaplan-Meier curves into the world.
C
Yeah.
A
Okay.
C
They both kind of came up with it independently and then somehow got to talking to each other and they're like, let's do it. We'll publish it.
A
Okay. Interesting.
C
It would be interesting to see how they decided who would go first. Because, man, it could have been Meier-Kaplan.
A
Yeah.
C
It was that close.
B
Alphabetical.
A
That sounds right.
C
Okay. All right, number two. Using statistical graphics, this woman showed that poor hospital conditions caused more deaths than battlefield injuries in the Crimean War.
A
Nightingale.
C
Yeah, Florence Nightingale. So she kind of did her own statistics, and she put together what was called the coxcomb graph, a really cool representation; you can still find it. She paid more money to print it in color because she thought that was a more effective way of presenting her data. And basically the British Army was like, oh my gosh, we need to improve sanitary conditions in our hospitals; our guys are dying from non-battlefield stuff, dysentery, things like that.
A
Okay, all right, one more.
C
This one's way too long, but it was a thing where I was like, huh, that's why we do that. All right, you guys ready?
A
Yep.
D
Yes.
C
William Sealy Gosset was a chemist and statistician who worked for the Guinness Brewery in Ireland, and he developed a test to optimize brewing processes using small samples. Because of a company policy, he was not allowed to publish his methods under his own name. What pseudonym did he use on the publication describing his test, later called the t-test?
A
Cox.
D
No. Wallace. No.
C
You usually see this as the blank T test.
B
Student.
C
Yeah, Student. That's why it's called Student's t-test. He had to publish under a pseudonym, so he just picked Student as the name he would publish under, and that's why it got called Student's t-test.
B
I thought it was because students learned how to do it. I never knew where it got its name.
C
I always thought it was something like that too. Yeah.
B
All right, shoot.
A
I thought I was pretty.
C
That's it.
D
I thought it was a three-way tie.
B
That was.
C
It was a three way tie. You guys did great. That was our best competition ever.
A
Yes, this was pretty good. Well, all right. I want to thank Dr. Drucker for joining us. This has been an interesting conversation that I think has made me better at thinking about network meta-analyses, and I hope it's helped our listeners as well. I want to thank everybody for joining us. We hope you learned a few things, hope you laughed once or twice, and mostly we're hoping you're planning to join us next week. Until then, I'm Matt Zirwas.
C
I'm Tim Patton.
B
And I'm Laura Ferris. And we are Derms on Drugs.
Host(s): Matt Zirwas, Laura Ferris, Tim Patton
Guest: Dr. Aaron Drucker (University of Toronto, NMA expert)
Release Date: October 10, 2025
This episode tackles the often confusing but critical topic of network meta-analyses (NMAs) in dermatology. The hosts, joined by expert Dr. Aaron Drucker, break down what NMAs are, why their results sometimes differ (or conflict), how industry involvement might affect them, and how clinicians can interpret these analyses. Expect lively debate, practical advice, nerdy statistical deep-dives, and the group’s signature humor throughout.
“A network meta-analysis basically takes a bunch of studies, kind of puts them all together with some fancy statistics, and then tells you which drug works the best ... but they don't always have consistent results.” — Matt Zirwas [00:36]
“If you have a well done randomized controlled trial of two drugs head to head, that does trump the results of indirect network meta analysis.” — Dr. Aaron Drucker [18:49]
“For network meta analysis, there's no good reason for an industry group to do a network meta analysis. ... Industry is great at doing clinical trials and they have the money to do clinical trials, but they don't need to be the ones doing network meta analysis necessarily.” — Dr. Aaron Drucker [28:39]
“Adalimumab. It's pretty good and it's hard to beat.”
— Laura Ferris [02:49]
“I always thought that’d be a big ... you know, one would have a tiny standard deviation and one would have a huge standard deviation. But ... statistics people don’t talk about it.”
— Matt Zirwas [41:12]
“Rankings are really oversimplified. There’s no confidence interval ... There’s going to be some spin.”
— Dr. Aaron Drucker [26:21]
“For our listeners ... how would you recommend, like, trying to decide, should I listen to ... [an NMA]?”
— Matt Zirwas [28:00]
“Find a disease state that doesn’t have a network meta analysis, or it has one, but it’s old ... then that’s where you can make a real contribution.”
— Dr. Aaron Drucker [32:25]
Despite their statistical complexity, NMAs are here to stay, and they are critical for drug comparisons in dermatology. But trust is built on transparency, methodology, and independence. As Dr. Drucker notes, "there ought to be more nuance in the interpretation." Dermatologists should arm themselves with healthy skepticism, and always look for the protocol before buying into the rankings.