Daniel Oberhaus (23:14)
Yeah. And it comes back to what Paris was speaking to earlier about the psychiatric surveillance economy, so let me close the loop there. The challenge with a lot of these tools right now is that we don't have a good understanding of what data actually matters. Let me unpack that a little. The reason so many psychiatrists and people in the mental health profession find the idea of using AI attractive is that if you go to a mental health provider, assuming you're not in an inpatient facility, you're likely going to see them for maybe one hour a week. Maybe you can call them or text them, but it's relatively little contact. So the therapist or mental health professional, we'll just use psychiatrists as shorthand, has a very limited data set. And when you come to an outpatient meeting, for instance, they're relying on the patient's recall: how was your last weekend, how have you been feeling, et cetera. There are surveys you can send out, ecological momentary assessments, for instance, to get more in-situ data. But for the vast majority of the time the patient is awake, you have no data on how they're feeling, their behaviors, their thought patterns, et cetera.

What's nice about AI is that we all walk around with a pretty sophisticated computer in our pocket, and we're bleeding off digital exhaust all the time. So the thinking behind a lot of the push for AI in psychiatry is that we can use this data we're generating, because most people spend a significant amount of time every single day in front of a networked digital device. And this data doesn't have to be what I'm writing into the computer; it might just be my typing speed or my scrolling speed. These signals, which are generated almost at a subconscious level, might hold clues to my mental state and my mental health. It's a really appealing idea. There's very little data showing that it works. In fact, Thomas Insel, who ran the NIMH for about 13 years, left to pursue this idea at Google, and then helped launch a company called Mindstrong to pursue it further. It was one of the best-capitalized startups in the space, it raised over $100 million, and in 2023 it completely shut down. They never said why. Presumably it's not because of how well it worked. So there was very little data this entire time.

The challenge right now is that this is a very attractive idea. If it works, I'm all for it. But we don't know which data gives the best read on my mental health, which signal to correlate with a diagnosis, or how to tell that I'm about to enter a crisis. So you basically have to monitor everything I'm doing, around the clock, to make sure that, A, I don't have a crisis while you're not watching, and, B, you don't miss the signal, because maybe it turns out my typing speed is the X factor that correlates very highly with major depression. But we don't know that. So right now we're just hoovering up everything, seeing what matches best, and trying to fit the box around the person; a rough sketch of that approach follows. That's the surveillance-economy aspect of it.
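[Editor's note: to make the "measure everything and see what matches best" approach concrete, here is a minimal, hypothetical sketch in Python. The passive features, the synthetic data, and the PHQ-9-style target are all illustrative assumptions, not anything Mindstrong or any real product is known to have used.]

```python
# Hypothetical sketch of the "digital phenotyping" idea described above:
# collect passive signals (typing speed, scroll speed, screen time), then
# rank them by how well each one tracks a clinical score. All data here
# is synthetic and the coupling to the score is contrived so the example runs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_days = 90  # 90 days of passive monitoring for one hypothetical patient

df = pd.DataFrame({
    "typing_speed_wpm": rng.normal(45, 8, n_days),
    "scroll_speed_px_s": rng.normal(900, 200, n_days),
    "screen_time_hours": rng.normal(5, 1.5, n_days),
})
# Synthetic "ground truth": a PHQ-9-style depression score, loosely tied
# to typing speed purely for illustration.
df["phq9_score"] = 20 - 0.25 * df["typing_speed_wpm"] + rng.normal(0, 2, n_days)

# "Hoover up everything and see what matches best": rank every passive
# feature by the strength of its correlation with the clinical score.
correlations = (
    df.drop(columns="phq9_score")
    .corrwith(df["phq9_score"])
    .abs()
    .sort_values(ascending=False)
)
print(correlations)  # the top feature is the candidate "X factor"
```

The failure mode this sketch exposes is the one raised above: with enough passive features, something will always correlate by chance, which is exactly why the absence of validation data matters.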
And then to the asylum piece, which is an interesting historical analogy. Prior to roughly the 19th century, the mentally ill, or what people then would have called the mad, were basically just warehoused. They were put in prisons, or, if they were rich, their families would hire a caretaker to keep them in an outbuilding somewhere, away from polite society. Then in the 19th century, in the wake of the Enlightenment, a group of people with, I think, very good intentions said, hey, maybe we can use the asylum for therapeutic purposes, as a place for rehabilitation. People thinking about prisons were making the same argument: these institutions shouldn't have a custodial function, they should have a healing function. That went really well for a few decades. And then people just kept coming, and the asylums essentially became overwhelmed. The number of patients in US asylums hit its peak in the mid-1950s, pretty late into the 20th century, before the population started declining. And then, of course, came the deinstitutionalization of patients around the Kennedy administration, with disastrous results that we're still dealing with today. The asylum basically became a victim of its own good intentions: it was overwhelmed, it was no longer able to fulfill its therapeutic purpose, and it essentially reverted to a custodial function.

I think something similar is happening now; it's just much harder to see, because it's not physical. These algorithms are beginning to run on any sort of institutional computer. You're seeing suicide-detection algorithms in K-12 schools, in colleges, in government institutions, in offices, because happy workers, it turns out, are more efficient. So you can say it's for their mental health, but really it's because you'll get better products. But judgments aside, these things are being implemented on our computers. And you can say, well, I don't work in an institution, this doesn't affect me. If you use Facebook, it does, because they have a suicide-detection algorithm monitoring you too. Not if you're in the EU, where it was blocked for privacy reasons, but in the U.S. they do. So it's already here. And it's only…