Transcript
A (0:05)
Daniel, thanks for doing this.
B (0:06)
Happy to be here.
A (0:08)
So give us a sense of the incredibly viral sensation that OpenEvidence has become, in terms of what kind of coverage it has of American doctors today.
B (0:21)
As much as we would like to think that it's going especially well for us specifically, I would say, as a qualifying point, that in all of the sub-industries of AI you're seeing an acceleration and compression. The adoption cycles, even outside of OpenEvidence, in other fields of knowledge work, in coding and so on, are hyper-compressed. It used to take half a decade or a decade for something to become standard, and now it seems to happen in a year or two. The same thing has happened with OpenEvidence: in about 18 months it's become the operating system for clinical knowledge in the United States. It is used something like 20 times more than the next most used platform of any kind in our specific segment, which is high-stakes clinical decision support for doctors. High-stakes clinical decision support is a specific category of medicine. It's distinct from, say, paperwork, and it's distinct from scribing. Those things are part of the workflow of being a doctor, but the stakes and the consequences are different: if you get them wrong, you can go back and do it again. That's not the case with a patient. You have to get it right; you have one shot. And so clinical decision making, which clinical decision support is in service of, is unquestionably the highest-stakes area of medicine. We're probably the only company working at the tip of that spear. Most people have self-selected out of the problem of high-stakes clinical decision making, certainly through an AI lens, because they view it as too ambitious.
C (1:54)
And could you explain it more for our audience? Because I think fundamentally it's about taking information and translating it into specific recommendations or a diagnosis for a patient. Can you tell us more about how that works?
B (2:04)
One way to simplify it down is that at its foundation it's a search problem, but a very semantic search problem. Most search traditionally works with keywords, like "flights to Barcelona" or "hotels in Barcelona"; the query can be captured in a couple of words, certainly in a sentence. That's traditional Google search. Even if you were to think about clinical decision support as a search problem, simply describing your search query usually takes many sentences. An example I like to give: you have a 44-year-old female patient with moderate to severe psoriasis, the red stuff on your skin. You're a dermatologist. So far, so simple; you would just prescribe one of the many creams you see commercials for on television. Except she has MS. Now it gets interesting, because you want to treat her psoriasis, but you don't want to make the MS worse. And you are not a neurologist, you're a dermatologist, so neurology is not your specialty. But you don't want to refer her to a neurologist, because you want to treat her psoriasis, and if you just keep referring people in circles, medicine never happens. You might have heard, as a dermatologist, that the new classes of psoriasis treatments, which are biologics, IL-17 inhibitors and IL-23 inhibitors, might have some interactivity with the neurological dimension of a patient's condition. That's about all you know. You didn't learn this in medical school, because IL-23 inhibitors were FDA approved in 2019. One of the great themes of OpenEvidence is that the golden age of biotechnology is also the dark age of physician burnout, because it's just impossible to keep up with all the new drugs and all the new mechanisms of action. The drug was approved in 2019; you might have graduated medical school in 2005, so you didn't cover it in medical school. And that's it.
That's what you know. So your question is: for a 44-year-old female patient with moderate to severe psoriasis, is an IL-17 inhibitor or an IL-23 inhibitor more appropriate and more safely tolerated with respect to not aggravating the MS? Now, that's not an academic question; that's a very consequential question. IL-17 inhibitors will actually make the MS worse. IL-23 inhibitors are safe and well tolerated in the case of MS. That's an example of where medicine can go wrong, because even five or ten years ago, either you're referring that person to a neurologist, in which case you're just getting referrals in circles and medicine is not happening, or, unfortunately, what would more likely happen is they would just guess 50/50, and the MS might be aggravated. It's well known, and often repeated, that medical error is the third leading cause of death in the United States, after heart disease and cancer. But even that statistic understates it, because it's only looking at death. In my example, this patient is not going to die as a result of taking an IL-17 inhibitor; she's going to have a relapse of MS. So it's not just that medical error historically was a leading cause of death; it's that for every person who died from medical error, probably 10 to 100 times as many people had a comorbidity or condition that became aggravated and got worse. So coming back to your question: that whole string is the search query. You can't just do search in a traditional way, where you type "IL-17," because that's not really what the question is about. Nor does the physician have the time to go read book chapters on this stuff. What you need is a semantic understanding of the query, in the way that another human physician would semantically understand it. And then it's actually quite deterministic and simple after that.
Once you semantically understand the query, you can go to the world of published biomedical literature and find the exact snippets in a Phase 3 RCT (randomized controlled trial) in the New England Journal of Medicine that tested each of these drugs and found that one aggravated MS and the other didn't. So once you have a semantic understanding of the query, the rest is fairly deterministic; it's almost a plain search problem. All of the juice is in connecting the very complex semantic meaning of a medical scenario to the answer, where the answer might be in a Phase 3 RCT in the New England Journal of Medicine, in a snippet, not even in the abstract but in the methodology section or the population section.
