C (3:48)
Yeah. And I think from our own Dimagi experience, obviously the world of AI is moving extremely fast. The massive billion-dollar foundational model companies are moving as quickly as they can. Google, Apple, Microsoft, Facebook, Salesforce, they're all embedding or creating foundational models as quickly as they can. So it's very clear that generative AI at a foundational level is going to be in the core tools we're all used to using as quickly as possible. If you're on WhatsApp, there's already a Meta AI you can talk to now. So it's starting to come out into the mass billion-plus-user consumer products in a very real way that was envisioned six months ago but wasn't quite there yet. And now Microsoft is talking about putting it in their OS, or it already is, and Apple's about to embed something in their OS. So these core capabilities are going to be available to everyone. And for Dimagi, we are still racing as fast as we can toward equitable AI, saying, okay, that's great for the people who are already on the tech curve, already buying MacBook Pros and getting exposed to these tools and technologies; that'll take care of itself. The market's driving that very fast, and you see prospectuses from up-and-coming foundational model companies saying, if we get a 5% market share, that's a trillion-dollar company. They're not claiming they're going to be the foundational model; they're saying even a slice of that core foundational market is going to be a trillion-dollar company. So the market is clearly investing heavily in those companies, and that'll continue to happen. The use cases above those are also racing ahead. We've seen tons of progress in medical AI, in legal AI, in business workflow AI. We're going to continue to see rapid progression. I anticipate they are going to get notably better than humans pretty quickly at some types of tasks.
One of the things that we're doing now, that Brian's leading with our team, is looking at multi-agent use cases. You can make an AI really good at answering a very specific set of questions, but maybe it's not good at answering every question you could throw at it. So if you have a family planning question or you have a TB care question, maybe the same bot can answer both of those well, but maybe you want two different bots or agents that can answer those independently once you detect the question that's being asked. So we're really excited to explore those multi-agent use cases. And then we've really pushed hard into our research and our work on low-resource languages. The models are getting quite good at being useful in languages that you wouldn't think, and Brian can go into some detail on this. That's really exciting, because that's been a huge barrier to testing. There's the user experience and the language experience of interacting with the model, and then there's what it's trying to convey. And what it's trying to convey in English is already crazy impressive in terms of empathy and responsiveness, almost being better in some ways than a human would be at helping you think through problems or talking to you about certain issues. But if it only works in English and major languages, then it's probably not going to reach some of the most important equity use cases. So we're really excited about what we're already seeing, and I think there's going to be a lot more progress on that ahead as well. Lots of huge areas, and we've been very fortunate to receive additional funding across numerous use cases. I divide them into three areas now. We have our direct-to-client work; that's an AI that's exposed directly to end users.
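The multi-agent pattern described here can be sketched in a few lines. This is a minimal illustration, not Dimagi's actual design: all names (`detect_topic`, `Agent`, the keyword lists) are hypothetical, and a real system would likely use an LLM or trained classifier for routing rather than keywords.

```python
# Sketch of multi-agent routing: detect the topic of a question,
# then hand it to a specialist agent (e.g. TB care vs. family planning).
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Agent:
    name: str
    # In practice this would call an LLM with a topic-specific prompt.
    answer: Callable[[str], str]


def detect_topic(question: str) -> str:
    """Toy keyword-based router; a real router might be an LLM or classifier."""
    q = question.lower()
    if any(k in q for k in ("tb", "tuberculosis")):
        return "tb_care"
    if any(k in q for k in ("contraception", "family planning", "pregnancy")):
        return "family_planning"
    return "general"


agents: Dict[str, Agent] = {
    "tb_care": Agent("TB care", lambda q: f"[TB agent] answering: {q}"),
    "family_planning": Agent("Family planning", lambda q: f"[FP agent] answering: {q}"),
    "general": Agent("General", lambda q: f"[General agent] answering: {q}"),
}


def route(question: str) -> str:
    """Send the question to whichever specialist agent matches its topic."""
    return agents[detect_topic(question)].answer(question)
```

The point of the pattern is that each specialist can be tuned, prompted, and evaluated independently, while the router stays a thin, swappable layer on top.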
There are coaching use cases where we're trying to support frontline workers, community health workers, ag workers, with a coach, supervisor, assistant, not to replace the human but to augment them, and we touched a bit on that in episode two. And then there's a new use case that we weren't talking much about that's really been quite popular: a kind of program manager assistant. We break that assistant into three buckets. One is a knowledge assistant, which is what a lot of people picture, a Q&A-type bot that's trained on your data and can answer questions. Then there's a data assistant that can interpret, analyze, and help understand data that you might be dealing with as a program manager, like which county has the highest burden of disease. And then a workflow assistant: if you're reviewing documents or moving documents across teams, how can you do that? Imagine a program team of 10 people that's running an epidemiological program or a surveillance program. An AI that can really help all 10 of those team members is something we're getting increasingly excited about. So there's a ton going on, both within Dimagi and obviously way more outside of Dimagi, but a lot of progress at the same time. I think everybody's holding their breath on when the next jump in foundational models will come. We're recording this on May 22nd. OpenAI just released GPT-4o, and some people were guessing that might be GPT-5 or GPT-4.5. It's not. It's an amazing step in some directions, but not a huge step in terms of the next-level foundational model. And one of the interesting things that's hard to put in context is the jump that Anthropic just made. A month or two back, they released Claude 3, which was a massive difference between Claude 2 and Claude 3, and people now view it as on par with OpenAI's GPT-4. So one, people are excited there's some competition, because OpenAI's GPT-4 was pretty far out ahead.
But two, the fact that other companies are replicating these massive step changes in performance and capabilities is also, I think, giving some credibility to the people who think we're on a very fast curve right now and just have to be patient. There was a podcast, which we can link to in the show notes, from the co-founder of Anthropic with Ezra Klein from the New York Times. One of the interesting things I found on that podcast was him saying, look, if you've been on the inside of this, we've been seeing really impressive step changes for years, so the fact that the next one hasn't come out in months doesn't bother us. But to consumers who just got exposed and saw this huge jump between 3.5 and 4, it now feels weird that you haven't seen that next jump. But they're all like, no, nobody inside this world is worried about the progress. And maybe that's salesmanship or wishful thinking and people are worried. But his view is that to the outside people who have not been doing this for a decade, it maybe feels slow, given how fast 3.5 to 4 was. Because if you're inside and you're testing these, if you're one of the engineers, this is just how R&D works. We're moving forward, and some huge things can come out next.