
A
We published a paper by way of example with one of our big customers, Merck, that illustrated how by making judicious design decisions at the very beginning of the clinical trial, you can save upwards of $100 million in the downstream cost of the trial.
B
What is the budget typically expected pre AI for this clinical trial phase of new development?
A
The overall cost of developing the drug and running the trials and doing everything necessary to bring a drug from conception to market is of the order of 1 to 2 billion dollars.
B
Wow.
A
And these trials oftentimes run 10 years or more.
B
How is Faro Health using generative AI?
A
So we're using AI to greatly accelerate that process from a matter of weeks or even months down to potentially 20 minutes to get the first draft.
B
Patrick Leung is a pioneering force at the intersection of AI and health care. From Google Duplex to hedge fund innovation, he's now driving Faro Health's mission to slash drug trial costs and bring life-saving treatments to patients faster. Welcome to Using AI at Work. I'm your host, Chris Daigle. Each week we'll be learning how today's business owners, entrepreneurs and ambitious professionals are getting more done with smart use of tomorrow's tech. Let's get started. Right now, every business leader is asking the same question: what are we going to do about AI? If this is you, chiefaiofficer.com has the answer. We give you a simple path forward where we provide executive and team training so your people know exactly how to safely use generative AI in their day to day. We also manage the deployment and implementation to make sure tools actually get adopted and deliver results. And we'll also guide company-wide transformation so AI becomes part of your operating system, not just another shiny object. The companies that act now will increase productivity, cut costs and grow faster than their competitors. Those that wait will get left behind. So if you want to make AI work in your business, visit chiefaiofficer.com and see how we're helping companies of all sizes finally get results from AI. Hey everybody, welcome to another episode of Using AI at Work with an amazing executive who is using AI not only in his day to day, but whose business is based on it. I'd like to introduce Patrick Leung from Faro Health. Patrick and I had the opportunity to connect before the episode to discuss what they're doing, but also how he's using generative AI in his day to day. So Patrick, before we get started, if you don't mind, maybe introduce yourself to the community and let us know what brought you here today?
A
Sure. Thanks for having me on the show, Chris. It's great to be here. So I'm a tech entrepreneur and executive. I've been in this industry since the dot com boom way back in the 90s, and I worked at Google for about 11 years. As part of that I was on this team called Google Duplex, which produced a super realistic conversational AI system that could call up businesses on the user's behalf and book restaurant reservations.
B
I heard those demos.
A
Yeah. And at the time it really felt like, wow, we're now in the world of the movie Her. And this was before large language models came along, but it just sounded so real that people were really freaking out about it. And so to me that was in some ways the dawn of the AI revolution that we're in right now, which of course was greatly accelerated by the introduction of large language models, which Google played a big part in with the invention of the transformer architecture. And so since Google, I've been basically applying AI in different ways: for a couple of years at a hedge fund in New York, Two Sigma Investments, and then I started my own company using AI to predict tropical rainforest ecosystem growth. And now I'm at Faro Health, where we're applying large language models and other forms of AI to make clinical trials better and bring drugs to market quicker, and hopefully at lower cost to the user, and so on and so forth.
B
Nice. So tell me a little bit about the impact that Faro Health is having by introducing AI into this process for pharmaceutical companies.
A
Yeah, well, we're tackling this problem right at the beginning, which is when clinical scientists first start conceiving clinical trials and designing them. It's a general engineering principle that the earlier on you're able to make changes and corrections, the higher impact and lower cost those changes are. And so we published a paper by way of example with one of our big customers, Merck, last year that illustrated how, by making judicious design decisions at the very beginning of the clinical trial, you can save upwards of $100 million in the downstream cost of the trial. And so it just shows you: a few clicks and you can make these changes to the design, and it has these profound implications over the course of the trial, potentially up to the next ten years.
B
So what is the budget typically expected, pre AI, for this clinical trial phase of new development?
A
Yeah, well, it's really kind of interesting. I mean, the cost of clinical trials, of bringing drugs to market: the overall cost of developing the drug and running the trials and doing everything necessary to bring a drug from conception to market is of the order of one to two billion dollars. And these trials oftentimes run ten years or more. And this has actually been getting worse over the last 50 years. There's this kind of ironically named Eroom's law, which is Moore's law spelled backwards, which basically says that every nine years the cost and time required to bring drugs to market double. And so there's a lot of concern around this. Obviously it's caused delays in terms of really groundbreaking new treatments making their way into patients' hands. And so we are looking to use AI to bend Moore's law, to bend Eroom's law, I should say, downward, and really start to counteract this trend of ever more expensive, spiraling costs. And so we're doing that in a couple of different ways. One obvious way is by automating clinical writing. There's this whole plethora of documentation required by regulatory authorities, starting with the protocol document, which is a very complex document that involves a lot of structured data as well as lots of verbiage describing certain safety aspects and the goals and rationale of the trial. And there are many, many different sections that need to be drawn up by a team of clinical writers. And so we're using AI to greatly accelerate that process from a matter of weeks or even months down to potentially, you know, 20 minutes to get the first draft. So that's just one obvious example. I mean, there are other things we're doing that I think are in many ways more interesting, in terms of using AI to optimize the very design of the trial and suggest what activities should be used in the trial itself, as well as performing automatic research. But it gives you an idea.
I mean, there's just this whole green field of opportunity where we can apply these language models to really greatly improve the process by which clinical trials are created.
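The nine-year doubling Patrick cites compounds quickly. As a rough sketch (the doubling period is the figure quoted in the conversation, not an exact constant):

```python
# Eroom's law (as quoted): cost and time to bring a drug to market
# double roughly every nine years.
def eroom_multiplier(years: float, doubling_period: float = 9.0) -> float:
    """Return the cost multiplier after `years` of Eroom's-law growth."""
    return 2.0 ** (years / doubling_period)

# Over the ~50 years mentioned, costs would grow by roughly a factor of 47.
print(round(eroom_multiplier(50), 1))  # 47.0
```

That compounding is why even small design-time savings matter so much downstream.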
B
You know, for the listeners, I think this is a good point. This is something that's very complicated, sophisticated, technical, and if they're able to leverage AI to compress that cycle from weeks to days, just think about what it could be doing in your business if you're not doing something as technical or precise as clinical trial documentation and things like that. So I think that's a fantastic opportunity. So Patrick, before we jump in, I didn't know about your work with Google Duplex, and I'd like to ask a few questions specifically about AI voice. I don't know if you are paying much attention to what's happening in the space, but currently, right now, if I wanted to, I could spend 20 to 50 dollars per month to get access to a voice engine that would be able to take human voice input and reply with synthetic voice output pretty quickly. Have you been paying much attention to the developments in that space in particular?
A
Yeah, I mean, this technology has been around for a while, actually. It predates large language models, and at the time we were building Duplex, we were transitioning from using voice recordings to voice synthesis. At the time the quality wasn't quite there, and we really, really wanted that system to be extremely lifelike. And so we were using recordings that we would stitch together in a very intelligent way. But nowadays, and for some time now actually, AI based systems have been able to synthesize voices really convincingly. I remember there was this use case, I can't remember who the celebrity was, was it Matt Damon or something, where they were making Matt Damon say anything, and then they could make him speak French, which to the best of my knowledge he maybe can't. So it's both pretty awe-inspiring as well as slightly scary. Right. We're in this world now, and, you know, now there are videos as well, where you can paste someone's face onto a video and make them do or say anything.
B
Dance.
A
Yeah. So, you know, we're entering this world where it's certainly going to be very disruptive in fields like entertainment and advertising, and maybe even, yeah, the field of acting, who knows? And modeling; we're already seeing cases where there are digital, AI-based models. So this is just one little slice of disruption that all this AI technology is creating.
B
Yeah, that's an area that I'm paying a lot of attention to, because in my usage of it, AI voice isn't quite there. The latency they advertise is supposed to be less than human latency when it comes to a response; however, in practice I'm not seeing that. So I'm always interested. That's not the topic of our conversation, of course, but I want to hear somebody say yes, it's ready, because I've got a lot of applications backed up, waiting for that. So Patrick, let's talk about not specifically what Faro Health is doing right now, but how you guys internally are using AI; let's just stick with generative AI in this conversation. Obviously you're demonstrating to clients that you can compress a process from weeks to days. How are you guys seeing that impact internally with just operating and scaling the business?
A
Yeah, I mean, we've been really aggressively adopting AI all over the place in our organization. There are so many different things it can do for an organization to become more productive, starting with engineering, right? The obvious use case with this whole large language model revolution is coding. And so we've certainly used AI to produce prototypes, to vibe code prototypes and evaluate them. It's a great way of quickly building some kind of feature concept and getting it in front of customers and getting feedback. We're also using it to write unit tests and do other forms of development. When we start encountering a new API or a new system that no one's really used before, we can use AI to very quickly build a proof of concept integration and access that system really easily without having to learn all the necessary API calls. But even beyond engineering: certainly in QA and DevOps, which are adjacent areas to software engineering, we're using AI to accelerate and automate the production of tests and code and IaC, infrastructure as code, and things like that. And even beyond technology, we have UX designers and salespeople using AI to perform research and to put together, maybe even to help craft, slides or emails and things like that. And so the possibilities are really kind of endless. There are just so many different ways in which this technology can be used. And the fact that it's also multimodal, where it can take images as well as text as input and produce images as output, makes this technology so flexible and so usable by non-technical people that it's just different from any other technology revolution I've been involved in. It's really different from the dot com boom or the crypto boom or even the image recognition boom.
There was a sort of smaller AI revolution ten years ago, right, where there were all these new capabilities coming out as far as recognizing people's faces and generating images and things like that. Even that's been completely eclipsed by this technology, because it's so accessible.
B
So you've been around this for a while. What are your thoughts as far as the acceleration of it? Do you think that the acceleration is increasing? Or do you think that we've probably seen kind of maximum velocity when it comes to improvement of the models and accuracy of their output and things like that? What are your thoughts on that?
A
Yeah, I mean, I am not in the camp of people who think that large language models before too long will be AGI, artificial general intelligence, and we'll be having these kinds of philosophical discourses with them and they'll be creating masterworks. I don't believe that, and I think there are a few reasons why. The first is that there's only so far you can get by analyzing web pages and even the books out there. I think we've already gotten to the point where a lot of the useful information has been processed. And so it's not like we can have future generations of these systems ingest ten times more data in order to become smarter. So we are starting to hit the limits of, I think, how much different useful data we can feed these things, in terms of really rich textual input; not in terms of quantitative information, of which there are vast amounts every single day. I think another reason is more inherent to the design of these systems themselves. I don't know if your viewers have heard of ARC-AGI, but it's this kind of test for, quote, artificial general intelligence. And if you go look at the ARC-AGI site and look at the kinds of tests that they're giving these systems, you would probably, on the face of it, think, wow, that's it? That's all it takes? That's how we're defining AGI? Like moving blocks around on a screen, that kind of stuff, in many cases things that a smart kid could do, because all you're doing is pattern matching and this kind of thing. But what it illustrates is that even though these systems are capable of generating these convincing looking text paragraphs or these convincing looking images, when it comes down to it, their ability to creatively think, to solve problems that they haven't seen before, is very, very limited.
And Apple also published a paper earlier this year, I was just sharing it around the office yesterday actually, where they talk about how even though these systems are so smart, they can't even play something like Towers of Hanoi, which is a relatively simple kind of logic puzzle; past about nine or ten disks, it just totally falls down. They said it totally collapses even if you give them the algorithm itself for solving the problem. And so I think, and I know this is not entirely fair, but if you characterize these systems as being super well informed, glorified autocomplete, where they're taking a string of words and predicting the next most likely word, there's only so far that approach can take you, in my opinion. And so I think while, yes, there's going to be disruption in the job market the same way there is with any technology revolution, the chances that humanity is going to be completely usurped by large language model based systems in the next three to five years, which many people out there in my field actually believe, I don't think it's going to happen. I do think there are safety issues for sure that we need to be aware of, but I guess that's my position on the matter.
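The Towers of Hanoi puzzle mentioned here has a textbook recursive solution. This sketch (my own illustration, not code from the Apple paper) shows why the move sequence grows exponentially, which is what makes it a stress test for step-by-step reasoning:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Collect the full move sequence that solves Towers of Hanoi for n disks."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # move n-1 disks out of the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # move n-1 disks back on top
    return moves

# A 10-disk instance already requires 2**10 - 1 = 1023 correct moves in order,
# roughly the regime where the paper reports accuracy collapsing.
print(len(hanoi(10)))  # 1023
```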
B
You mentioned that three to five year timeline. I think that when ChatGPT came out in November of 2022, people were talking about a three to five year timeline then.
A
Yeah, well, there's this kind of joke in the AI world, one that got blown away by large language models, where for the last 50 years everyone's been saying that general intelligence is maybe 20 years away, and that's held over the last 50 years. So now I guess Silicon Valley came along and it's like, well, now it's three to five years because we're faster, but it's still the same kind of effect, right, where it seems around the corner for a long, long time.
B
And it's interesting because when that came out, people were amazed at what was possible with 3.5. When you compare that to what you can do with some of the capabilities of Gemini or Claude with coding, or just GPT with general capabilities, it's a lot better for sure, but it's not what one would expect on a three to five year timeline, like, oh, we're only three years out from AGI. I mean, that's my opinion, and I guess it might be a little biased because I'm immersed in this; I use it and have been an active power user since I got my hands on it, really. So I don't know, just an interesting observation on my part. So I'd like to talk about what you guys have been able to do, because this is an AI native company, right? The thesis, I would imagine, when originally launching, was leveraging AI to compress these cycles of clinical trials. Were you able to find talent that was... I guess, because you're in San Francisco, right?
A
We're actually in San Diego.
B
San Diego.
A
Great.
B
Better weather. Were you struggling to find talent that was AI fluent enough to really contribute to what was happening in the company?
A
Yeah, I mean, I think first of all, it's worth pointing out that we were not an AI startup from inception. We were more of a traditional SaaS, software as a service, company. And so in the last couple of years we've been pivoting, and in some ways actually cannibalizing our own product and team, in order to really heavily pivot towards AI, to the point where most of our new product engineering efforts are centered around AI. And so that necessitated both retraining existing engineers as well as hiring specialists, hiring people who had actually implemented AI features in the past and had a data science background. And to answer the second part of your question, yeah, it was hard to find people actually, despite the fact that everybody has been learning about this and is trying to become an AI expert. What we observed is that there was this kind of effect where people would really dress up their resumes to look like they were AI experts, and in fact had done enough reading to sound kind of convincing. But, you know, I was lucky enough to hire someone really good in the first year that I was at the company. And so together we spent about four months trying to find really good people to form the nucleus of his team. And the clincher was just having an actual coding test where we were asking the candidates to do things that you had to have real experience actually implementing these AI systems in order to do reasonably well on. And it was just surprising, somewhat shocking, the number of candidates that fell down. And I didn't even really consider the coding test to be that hard, but it was kind of like you couldn't fake it. And so it took us months and months to find people, despite all these promising looking candidates with really good resumes.
So I don't know what your listeners can take away from this story, but I think a lot of it is, look, if you want to pivot into this field, really gain some experience. It doesn't have to be professional experience, you can do your own projects, but just know what you're doing, so when people ask you a technical question, you don't fall down.
B
You know, I think one of the points that I take away from this is that, anecdotally and in the media and everything, you're seeing that companies are looking for talent that is AI enabled. Right. However, if you're not already an AI enabled individual, it might be hard for you to vet candidates, just like you said. I mean, I can get on there and say, here's the job interview, here's the company, here's the job description, write up my resume, kind of thing. And I know as a matter of fact that that's happening. So this idea of identifying a few tests, essentially the equivalent of a coding test, but for HR, for sales, whatever it is, when somebody comes in and says that they're an AI capable user in that role, I think that's a fantastic idea. I guess with a tech background it might be hard for you to answer this, but how would you recommend people go about it? Let's say we've got an expert or a professional listening, they like that idea, they want to introduce that into their vetting and hiring process, but they're not actually users themselves. What did you guys do that they might be able to borrow for their own process?
A
Well, I guess overall Faro's strategy was to hire me, and then I had the wherewithal to pick a senior leader to lead our data science and AI initiatives. And then he had the skill set to vet individual contributors on this team. So one strategy is to bring in someone senior who has a proven track record and can come with some decent references, even if you yourself are unable to fully technically vet them. Another strategy is to bring in people you know who can, as an advisor or as a favor or whatever. Right. And so sometimes some of our investors do that with me and say, hey, would you mind interviewing this candidate for another portfolio company of mine? Well, of course, you guys are investors, I'd love to help you out. And so that's another strategy: if you know someone, or if you know someone who knows someone who's really good, ask them for that favor. You can bootstrap that way and bring in someone good, and then they can help hire people from then on.
B
You know, I hadn't considered that, but that's a whole other industry right there. Like the AI expert who supports other companies in vetting candidates for senior positions that require, or would expect, a certain AI literacy level. That's a great idea.
A
Yeah, definitely worth the investment. I mean, hiring is the most important thing you do, really, when you're building a company. So it's worth the extra effort and investment, and maybe even calling in some favors, to do that. And if you are a tech company that's VC funded, then a decent investor will definitely help you with that too.
B
And they'll probably know some people. Yeah, for sure, absolutely. So as far as training of the staff, what are you guys doing to make sure that... Because, as you know, and in reference to Moore's law, maybe somebody went and got a certification.
A
Right.
B
But they got it two years ago. Unless they've been actively on the front lines, paying attention to what's happening with developments and capabilities of the tools, they're not going to be current. So how are you guys making an effort to make sure that your teams that are expected to be using this stuff stay current, like knowing that this tool's not ready yet, but, oh, try it again in a month?
A
Yeah, I mean, I think the biggest thing you can do to keep up is to hire people who are really curious and motivated to keep up. So for instance, we didn't have anyone do all these big courses or certifications or anything. As I mentioned, we did hire a small team that specializes in this. But for the most part, for the existing engineers who are not data scientists, we basically said, look, we have these really challenging problems. For instance, one of them recently was building a system to take an existing PDF, a super complicated multi-hundred-page protocol document, and parse it and figure out what is the structured, digitized representation of this trial. So it's almost like the opposite of taking the digitized representation and generating a big protocol document out of it; it's taking an existing protocol document and parsing it, a really challenging problem. And people basically said, wow, this is really interesting, let's go do some research, let's go read up, let's just learn by doing. And so that's what we did. And luckily we have a team of a caliber that's able to actually do that and figure things out on the job, and take whatever guidance they need or collaborate as needed with the data scientists, but essentially figure it out. Because at the end of the day, using a large language model to solve a problem doesn't require a degree in AI. It just requires experience applying an API of some third party external system, and understanding the nuances of prompting and evaluating results and things like that. And then you can build a system without even really understanding how the AI system actually works.
B
I'm going to chase a little bit of a shiny object here and address that PDF piece, because I've seen that. To me it's like, oh, it's a document, it would be easy to ingest that and understand what's happening. Is the issue with PDFs in particular that they're looked at more like an image that happens to have text? Or what's the challenge with the parsing of the PDFs?
A
So I'm not talking about OCR, which is a well solved problem. Whether it's a PDF or a Word doc doesn't really matter. What I'm talking about is making sense of what you're reading. Okay, so we're getting all these text blobs in from paragraphs, we're getting these tables, even images, references, all this kind of thing. How do you reconstruct the design and the internal structure of a clinical trial, or some other highly structured form of data or representation of something, from all this text? That's the hard part. And that involves things like, for instance: at the center of the clinical trial is this thing called a schedule of activities. It consists of all these different activities that are performed on the patient during the course of the clinical trial. What are those activities? Well, we have a library of those activities, but they're not necessarily worded the same way as things in the clinical trial. And so we have to match them up. This is a process called entity resolution; it's a well-worn problem in data science, normalizing things. And then we can do all sorts of cool things. We have these data sets we've ingested that have the cost and complexity and blood draw and all these different useful metrics, so we can then go and analyze the trial and provide all these metrics and insights and so on. But that's just one example, right? So it's not just about reading the text of the PDF, it's about making sense of it all.
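As a rough illustration of the entity resolution step described here, a minimal sketch using only Python's standard library. The activity names and the `ACTIVITY_LIBRARY` list are invented for the example; a production system would use much more robust matching than simple fuzzy string comparison:

```python
from difflib import get_close_matches

# Hypothetical library of canonical trial activities (illustrative only).
ACTIVITY_LIBRARY = [
    "blood draw",
    "vital signs measurement",
    "electrocardiogram",
    "physical examination",
]

def normalize(name: str) -> str:
    """Lowercase and collapse hyphens/whitespace so variants compare cleanly."""
    return " ".join(name.lower().replace("-", " ").split())

def resolve_activity(raw_name: str, library=ACTIVITY_LIBRARY, cutoff=0.6):
    """Map a free-text activity name from a protocol document to a canonical
    library entry, or None if nothing in the library is close enough."""
    matches = get_close_matches(normalize(raw_name), library, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve_activity("Blood  Draw"))    # blood draw
print(resolve_activity("Physical Exam"))  # physical examination
```

Once every free-text mention resolves to a canonical entry, the ingested cost and complexity data sets can be joined against the trial design.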
B
Okay, that helps. Patrick, do you do a lot of speaking at conferences or industry events and things like that?
A
Yeah, I do. I speak at two or three conferences a year. And, you know, we're a small company, so we still have to be pretty careful about which conferences we choose to make that time and money investment in. But I spoke at the DIA conference earlier on in D.C., and also SCOPE in Orlando. So I enjoy it. I enjoy getting the word out. It's fun.
B
Have you seen, I don't know, an evolution in the sophistication of the questions that you get from people after you come off the stage?
A
Yeah, I mean, definitely over the last couple of years I've observed people coming in more conversant in just how large language models work and what the ins and outs of applying them are. I see other speakers and other vendors starting to use some of the same concepts and terminology, like the critical need to evaluate the results of the large language model in order to prevent hallucinations and omissions and things like that. And so definitely, as you would expect with all the attention and all the capital being poured into this space, people are becoming more mature in terms of the questions that they ask, and so on and so forth. But that said, these large language models are complex, and there are lots of nuances to getting them to work properly. And so it's always a bit of an interesting design challenge to figure out the right level at which to pitch things, like how do I communicate things in my presentations so that they pick it.
B
Up, they can actually understand.
A
So it's valuable.
B
Yeah, yeah. So one of the things, obviously we primarily focus on non-technical use of AI. Like, you've already got operations that are occurring; how do we introduce AI to accelerate or improve the quality of the deliverable the user is responsible for in their role? And we do a lot of training, and one of the things that we noticed is that before the training we would always ask people to self identify: on a scale of one to five, where are you with your understanding of generative AI and how to use it in the business? And in 2024 it was a lot of ones and twos. Starting this year we started seeing a lot more threes and fours. And I thought that was a sign that, okay, great, people are using it. But upon investigation, like, oh, tell me how you're using it, they were using it for basic things such as writing emails or summarizing documents, things that happen a lot in knowledge work, let's say. So I realized that yes, they're using it more, but they don't quite understand what's possible. But because they're using it, they're self identifying as threes or fours, higher up, which I would still say falls into the two category. With the conversations that you're having with people when you're presenting, or when you're at a cocktail party or whatever, people talking about it that say they know this stuff, do you feel that, okay, yeah, they're on parity with some of my people? Or do you think this same self identification experience is happening on the technical side? Does that question make sense?
A
Yeah, I mean, I think it's interesting. You know, there was this study that we found recently where someone had actually examined programmers using large language models to automate their work and so on. And the results of the study were that using AI to code is actually kind of a detriment for the most part until you hit about 50 hours, until you've really taken your lumps and tried a bunch of things out and failed and learned a bunch of lessons the hard way. Then there were a few people, maybe one person really, who was significantly more productive; he had 50 hours under his belt and had a lot of interesting observations about the ins and outs of using AI during the course of the study. And so I think that kind of generally holds. If you're a smart person and you're trying a bunch of things out, then once you hit a certain critical threshold, maybe it's actually saving you time, or maybe the results you're seeing are actually really worth it. But it takes time. Like any new tool, this is sophisticated; you can't naively go in there. I mean, I think there are certain things where you can get some immediate quick wins, like summarizing text or generating a convincing sounding email that you then go and handcraft, especially for people who don't particularly like writing; that can be a quick win. But as far as actually performing really valuable work, there's a lot of research going on around this, around the complexity of tasks that AI is able to reasonably automate and add value to, and it's still relatively low. And so I think there was another study talking about the length of tasks that AI is able to effectively automate, and it was of the order of maybe, I don't know, a couple of days to maybe a week.
And so when you're talking about a longer-term project that requires a lot of strategic thinking, AI still doesn't really help with that. It also dovetails with what I said before about hitting a certain complexity, like the Tower of Hanoi problem, where after 10 turns the LLM starts collapsing. So there is this frontier of complexity that the AI is able to actually comprehend. And probably a lot of people in the audience have tried this, right, where you paste in something increasingly sophisticated, like a whole book, because these days the LLMs can supposedly handle a lot of tokens. So you try throwing in a whole, even a complicated, McKinsey report or something, and does it really understand what's going on? Yes and no. It doesn't always get the nuances. And so I think this is what we're talking about. What we found was that in many cases you have to really divide and conquer. You've got to take the problem, break it down into much simpler sub-problems, then take the results, synthesize them, and build these agentic architectures to do it. And before you know it, it's actually taking quite a lot of human effort to put these agentic systems together, and you've got to be something of an expert. It takes hours, many, many dozens of hours, to become good at that. It doesn't require a degree in computer science; it doesn't require you to do super complicated AI courses with Andrew Ng at Stanford to understand what's going on at all. It requires real-world experience of just trying stuff out and being smart about the results that you find.
B
That's fantastic advice. So for any of you listening who are experiencing some frustration on your learning curve: keep practicing, keep experimenting. That's how it's done. You know, I was going to ask that, Patrick. If you've got this complex operation that you want to agentify in your business, my question was going to be: would it be better to look at that process in chunks and address it one chunk at a time, so that the output from one day- or week-long process, where the AI starts to cap out on its capability, then feeds into the next chunk of the process?
A
Yeah, I mean, what you're describing is the very rationale for agentic systems in the first place. If we could just wave a magic wand, you'd describe the problem at a high level and the large language model would come out with the results, right? But for a number of different reasons that just doesn't work, including the AI's limitations as far as the complexity of problems it can handle at once, but also understandability: you kind of want to know what's going on inside the system so you can debug it if bad results are coming out. And maybe you want to divide and conquer within your team and say, hey, you take this piece, you take that piece. You can't do that if it's one big monolithic prompt. So for a number of different reasons, this agentic architecture is becoming super, super popular out there, with good reason. And in general in life, breaking down your problems into manageable chunks works. With software engineering, the best practice is: if you have a task that you think is going to take more than a week, break it down, ideally into tasks that take a couple of days. Certainly anything over a week, you've got to break down; that's standard agile guidance. And I think the same thing applies in the abstract to many different problems out there: divide and conquer. We've been doing that with really great results at Faro Health with clinical trial design.
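The divide-and-conquer pattern Patrick describes can be sketched in a few lines. This is a minimal illustration, not Faro Health's actual system; `call_model` here is a hypothetical stand-in for whatever LLM client you actually use.

```python
def call_model(prompt: str) -> str:
    # Placeholder: in practice this would be an API call to an LLM.
    return f"[draft for: {prompt}]"

def run_pipeline(task: str, subtasks: list[str]) -> str:
    # Each sub-problem is small enough for the model to handle well,
    # and each intermediate draft is inspectable for debugging.
    drafts = [call_model(f"{task} / {s}") for s in subtasks]
    # Synthesis step: combine the partial results into one deliverable.
    return call_model("Combine these drafts:\n" + "\n".join(drafts))

result = run_pipeline(
    "Draft a clinical trial protocol section",
    ["objectives", "endpoints", "schedule of assessments"],
)
```

The point of the structure is exactly what Patrick notes: because each piece is a separate call, you can assign pieces to different people or models and debug any stage in isolation, which a single monolithic prompt does not allow.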
B
So I know that San Diego is high-density life sciences, pharmaceutical, all that type of thing. In your San Diego ecosystem, as you're interacting at industry events that might be local, or talking to a friend who works at another company that's quasi-competitive, are you seeing certain roles within leadership that tend to be more AI-enabled, and some that are kind of lagging? Obviously you're a CTO, so I would hope other CTOs are as fluent in their understanding of AI's application as you are. But have you noticed any trends on which other executive roles, outside of the technology space, tend to be really grasping the concept, the theory and strategy of AI?
A
Yeah, I mean, a couple of obvious examples are Chief Security Officer type roles.
B
Right.
A
Where now what we're observing is this growing awareness that there are safety issues and privacy issues with these large language models. And so we've really been spending a lot of time putting together an AI governance model, and there are various regulatory standards arising around that as well. But just think about what happens: how do you prevent the large language model from mixing data from different customers? Or how do you ensure that rogue employees or rogue users won't come along and start doing these prompt injection attacks to make the system do bad things? So this is becoming a big thing, and every CISO, Chief Security Officer or Chief Information Security Officer, out there needs to start thinking about it. Another obvious example is product. If you're a chief product officer, you need to be putting together your AI product strategy. Even if it doesn't involve coding, just figuring out how your product is going to incorporate AI. I would also say people involved in strategy. This is a classic technology disruption scenario of the kind Clayton Christensen wrote about in The Innovator's Dilemma, where the laggards who don't adopt the technology get left way behind. And I think it's more intense than the usual disruption, because it's not like hard drives, where it's sort of a faster horse. This is dramatically changing a lot of different industries at once and a lot of different roles at once, so it's going to have more far-reaching consequences than some of the other disruptive innovations he talks about. Anyway, that gives you a sampling. At this point every executive should be thinking about how they can be using AI, whether it's to further the business, cut costs, improve productivity, whatever the case may be. Because if you won't, then your competitors will be.
B
So you mentioned that you guys are paying a lot of attention to governance. What does that look like for you? What are the main concerns, and how are you going about identifying, even thinking about, what the issues could be and addressing them?
A
Yeah, I mean, some of the ones I mentioned before. Every customer, every prospective customer we talk to, is asking: what happens with my data? Is it going to be mixed with our competitors' data? Is it going to be safe? Is it going to be subject to the usual safety regulations and processes? How do you develop your software? How do you ensure that the models you're developing, the systems you're launching, are of high quality? It's different from normal software. You can't just compare the result with an expected result and say, well, is this what I expected or not, because generative AI is non-deterministic. You can't expect the same results from the same inputs. And so there's this whole thing around process: evaluating results, scoring the results, and ensuring you don't regress. And then, increasingly, the same way people can craft really malevolent input into a web form and potentially do a lot of damage on the back end of a system, the same thing holds here. We're giving AI all this power now. We have all these agents out there that can access file systems and do all this stuff, and there are now prompt injection attacks where people craft prompts designed to make the AI do bad things, access data it shouldn't, and say things it shouldn't. So how do you protect against that? That gives you a sampling; there's much more to it. We have a senior person on our team who does nothing but think about this. I think it's just really important for companies to get ahead of this, because there will be disasters. There will be events where, oh, here's an example of some bank that made all these bad transactions as a result of having an AI chatbot on its system, or something like that.
There's going to be some series of events like that that really heightens the perception of risk around this. So best to get ahead of it, especially if you're a company like us that's selling to large enterprises, where the stakes tend to be really high.
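Patrick's point that you "can't just compare the result with an expected result" is usually addressed with rubric-based evaluation: score many samples against checks and gate on the pass rate. The sketch below is a generic illustration under that assumption, not Faro Health's actual pipeline; the rubric checks are invented examples.

```python
def passes_rubric(output: str) -> bool:
    # Example rubric checks (hypothetical): required content is present
    # and no placeholder text leaked into the deliverable.
    return "endpoint" in output.lower() and "TODO" not in output

def pass_rate(outputs: list[str]) -> float:
    # Fraction of sampled generations that satisfy the rubric.
    return sum(passes_rubric(o) for o in outputs) / len(outputs)

# In practice these would be N fresh generations of the same prompt,
# since the same input can yield different outputs each run.
samples = [
    "Primary endpoint: overall survival.",
    "Primary Endpoint: TODO fill in",
    "Secondary endpoints: quality of life.",
]
rate = pass_rate(samples)
# Regression gate: fail the release if quality drops below a threshold.
ok = rate >= 0.5
```

The threshold and the rubric are where the real work lives; the structural idea is simply that you evaluate a distribution of outputs rather than diffing one output against one golden answer.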
B
You know, one of the questions we get a lot, again, we're typically focused on non-technical businesses, is about the models using data to train. How do I know ChatGPT isn't using my stuff? How do I know my information will stay secure? Do you get that question when you're talking to non-technical business peers? And if you do, how do you explain your comfort level with the security of using my IP in a generative AI model, to improve it, to evaluate it, that sort of thing?
A
Yeah, I mean, we've already seen cases where cleverly engineered prompts have been able to reveal some of the underlying data used to train the models. And of course it's very much in the interest of the large language model vendors themselves, OpenAI and Anthropic and companies like this, Google with Gemini, to prevent this from happening, because they're for the most part cloud providers and they understand the stakes here. To answer your question: yeah, every single customer we talk to asks this. Aren't you going to be handing your data over to OpenAI, and therefore anyone using ChatGPT will be able to access it? There are a number of different levels on which you can answer that question. One of them being: wherever possible we use private instances of the large language model that are only accessible to us, so it is therefore impossible for the public version of ChatGPT to have access to any of your data. Then there are various policies and assurances from the large language model vendor itself that they've made public and basically need to abide by. And you can argue by analogy: hey, this is the same level of safety as using, say, Google Docs or Amazon AWS. So there are a number of assurances, arguments, and pieces of evidence you can provide to assure people this is not going to happen. Because obviously for many companies, especially those whose competitive differentiation is their IP, whether it's a drug company or a finance company or whatever, this is the crown jewels we're talking about. And if your system handles those crown jewels, then you have to be really sure that they're going to be safe.
B
Yeah, you know, I had that question. We were sponsoring an event for Vistage, a kind of CEO peer-group network, in Chicago yesterday, and the question came up from every single person I talked to.
A
Yeah.
B
And the way I would position it is that the frontier model environment is extremely competitive, with a lot of money going into it and a lot at risk. Everybody wants to be the recognized champion, the one where this LLM is the Kleenex or the Coca-Cola of LLMs. And everybody kind of rolls their eyes and goes, they say they won't use our data, but how do we know? The way I look at it is that our incentives are aligned. I don't want them using it. And if one case came out where I had the settings configured correctly and followed best practices, yet my results still showed up in somebody else's output, and that happened with OpenAI, then Google would exploit it, Anthropic would exploit it. They would say, see, look what's happening over there? You can't trust them; you can trust us. So I see it as: our incentives are aligned. I don't want you to use my data; you don't want to get caught using it. So you're going to do everything you can to protect it. And I think between your position and how I've been looking at it, that's going to be a great response for me to use when this comes up. Because companies, you know, I saw a meme the other day, like a little comic strip. It was: Who are we? CEOs. What do we want? AI. When do we want it? Now. What do we want it for? We don't know.
A
Right.
B
Like there's just this FOMO in the marketplace, and obviously you and I understand that an AI-enabled company can compete with companies that have 10 times more resources, bigger brains, all that stuff, if those companies aren't using this. Like you said, it's not a faster horse; it's a completely different paradigm.
A
Yeah, I actually have something to say in response to the early part of what you just said, Chris, which is that if your company is really using AI in a way that handles the private information of another customer or person, it's incumbent upon you to really do the research. Look at the providers: AWS, Azure, OpenAI, Anthropic. Those are the main vendors, and there are others, of course, Google and so on. Look at what their policies are, look at what they commit to, and make your own decisions. It's all a matter of public record. You can go to Anthropic's site, for instance, and see: this is our enterprise offering, and here are all the privacy assurances we can offer you. Or look at how Azure gives you this kind of private instance of GPT, and why that's the best solution for you. Just make your own decisions. There are offerings out there, but I think it's incumbent upon companies to really do the research, compare these different vendors, and figure out what's best for them.
B
Yeah. And for those of you listening who say, "I wouldn't even know how to evaluate that": take those policies and review them in a large language model. Ask, here are my concerns; based on this policy, are we in a risky environment or a safe environment for using these models with our stuff? I think that's great advice if you do that.
A
Best to use someone else's large language model to evaluate a given vendor.
B
Yep, eliminate that bias. So within your department, but also across the company, I'm sure other executives look to you for perspective or counsel on how they should be approaching AI in their areas of responsibility within the business. What are you telling your own peers at Faro Health when they come to you for that kind of advice?
A
Yeah, I mean, I think by and large it's been: just be very, very curious and try things out. Because many times our assumptions about the way things work and the best way to do things are challenged and potentially disrupted by this new technology. So we're rethinking the way we do QA, or DevOps, or early-stage prototyping. I think the most important thing is that you learn as you go, failing fast, meaning just trying things out, seeing what works and what doesn't, and learning what the best way is for your organization to use this technology. I don't think there are any hard and fast rules about which model to use. We're using a bunch of different models; they're good at different things, they have different strengths, and there are trade-offs, because as we know, these models can get expensive. The state-of-the-art OpenAI or Anthropic models cost 20-plus times more than the run-of-the-mill standard models. So you have to be really judicious, especially if you're a high-volume business like a consumer-facing service. If you're using it for internal purposes, maybe that's less of an issue. But keeping tabs on the best model to use for each task is really important. And evaluating, not blindly trusting, the output of the model is really important too. If you're using it for vibe coding, which is so tempting, especially if you're not a technical company with software engineers and you're using this technology to build software maybe for the first time: just be a little careful. Try to get someone who knows what they're doing, like a software engineer, to take a look at the code. Because sometimes it can be amazing, and sometimes it can be a real mess, a liability basically.
And so the difference between a really well-designed system and one that's spaghetti code can be the difference between totally worth it versus forget it, rewrite it from scratch.
B
And I know the answer to the question I'm about to ask could change next month, as fast as these things develop. But have you identified one model in particular that you feel is really doing a good job supporting the development of code, the vibe coding?
A
I mean, it's interesting, because in many ways vibe coding, using AI to code, is the ultimate use case for AI. It's consumer-facing: there are millions and millions of programmers all around the world who can benefit from this. You know when the output is right, because you run the program and you can see whether it worked or not, so you get this instant feedback that results in rapid evolution. And it's high value, really high value. Software engineers are highly paid, so making them more productive is super, super valuable. That's where I think a lot of the innovation is going, because of these three factors, and there are probably more I'm not even thinking of right now. So things are evolving really fast, and there's no one model I'd point to. For a while it was, oh, Opus 4, this is the way to go. But then I was playing with Gemini, and I thought that for what I was doing, Gemini was actually better. And now there's the latest OpenAI coding thing, Codex, that people think blows away all the others. It seems like every single month there's a new horse in this race. I think you should just try all of them and see which one works for you. They have pros and cons, cost, speed, capabilities; just figure out what works for you. They're all good, they're all going to help you. I've tried all three, and obviously there are more than just three, but Gemini, GPT, and Anthropic's Claude can all do a great job. I've just found that for the one project I was using it for, Gemini was better, not because I'm an ex-Googler or anything; it just happened to be. So one little tip: if you are into vibe coding, definitely make your system write a decent spec.
I did this thing where I asked Anthropic's Claude to do that, because I was having trouble getting the system to build what I wanted to build. It wrote this really nice spec; I fed the spec right into Gemini and it reconstructed the app very easily. You can very easily cover your bases and try out different models by having a spec that essentially serves as the instructions to build the system from scratch. It's good practice anyway to have a good spec, right? It helps you understand things, especially as your system gets more complex. It helps other people understand things too.
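Patrick's spec-first workflow (one model writes the spec, any model builds from it) can be sketched structurally like this. The `ask` function is a hypothetical placeholder for real API calls to the named models; only the shape of the workflow comes from the conversation.

```python
from pathlib import Path
import tempfile

def ask(model: str, prompt: str) -> str:
    # Placeholder for a real API call to the named model.
    return f"[{model}] response to: {prompt.splitlines()[0]}"

def spec_then_build(requirements: str, spec_model: str, build_model: str) -> str:
    # Step 1: have one model produce a detailed spec.
    spec = ask(spec_model, f"Write a detailed spec for: {requirements}")
    # Keep the spec as an artifact so you can re-run the build with any model.
    spec_path = Path(tempfile.gettempdir()) / "spec.md"
    spec_path.write_text(spec)
    # Step 2: feed the saved spec to a (possibly different) model to build.
    return ask(build_model, "Implement this spec:\n" + spec_path.read_text())

result = spec_then_build("a habit-tracking app", "claude", "gemini")
```

Because the spec is a standalone artifact, swapping the build model is a one-argument change, which is exactly how Patrick moved the same project from Claude to Gemini.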
B
And I think that that would extend outside of just the technology as well. Like any type of strategic work that you're going to be doing with it, like have an understanding of what you would categorize as ideal output.
A
Absolutely. I mean, we're doing all this really complex work analyzing protocols and figuring out how to optimize the design. This has nothing to do with coding, really; it has to do with quantitative analysis and medical expertise and things like that. Our CEO is a clinical expert, not a coder, and he's helping our team build this incredible system that does detailed clinical analysis. How cool is it to have a CEO who can do that?
B
Yeah, right.
A
You know, but it's not like he did a computer science degree to do this. So I think, yeah, for sure, the things I'm saying about coding apply to a lot of other problem domains where you're trying to use the large language model to solve complicated, domain-specific problems.
B
So, Patrick, one of the things that has come up throughout this conversation has been your emphasis on curiosity when it comes to using these tools. For anybody out there who isn't a CTO at a cutting-edge bio-sciences company, this curiosity extends to the domain you do know. If it's executive-level leadership, whatever area you're responsible for, take the existing experience and discretion you have on that topic and use it to guide your usage of the models as you're getting your 50 hours, or your 500, or your 10,000 hours. So, Patrick, do you share your perspective on this anywhere in particular? Are you a LinkedIn guy, or what?
A
Yeah, I do, though I don't actually post enough about this. But what you're saying touches a very deep question, the nature of curiosity. I have given talks about career development, and at least what I've seen over the last 20 or 30 years of being an executive in the technology world is that curiosity is one of the key traits that determines how far you go in your career. So it's not just about large language models; it's about life. In fact, even beyond your career, to make progress as a human being and to become everything you can be in the world, curiosity is really essential. There are various exercises and practices you can do to bring this out if you don't consider yourself a naturally curious person. I think it is a basic human trait, but even just knowing how valuable it is to be curious about things, and just asking questions, is super, super valuable.
B
I love it. Patrick, this has been a fascinating conversation. It was definitely outside of my area of experience when it comes to industries, but I've been able to translate a lot of the points you've shared today into my conversations with non-technical business leaders. So I want to thank you for being here. I really enjoyed the conversation, and I look forward to sending you more business.
A
Yeah, thanks Chris. I really enjoyed the conversation. Thanks for having me on. Absolutely.
B
Thanks everybody. So we'll see you on the next episode. Please check the show notes for further contact information for Patrick and to follow what they're doing at Faro Health. See you on the next episode. Thanks for tuning in to Using AI at Work. Don't forget to subscribe for more conversations about how to use AI at work, and a special thank you to our sponsor, Chief AI Officer, for empowering businesses with AI education and training. Visit their website for a free AI Readiness Assessment and AI Strategy Guide to help you get started using AI at work. That's www.chiefaiofficer.com. Follow us on Twitter at the handle Using AI at Work, and visit www.usingaiatwork.com for free resources to help you harness AI in your role.
Episode 83: Using AI to Scale Marketing and Revenue Teams with Patrick Leung
Date: December 22, 2025
Host: Chris Daigle
Guest: Patrick Leung, CTO of Faro Health
This episode explores how AI—especially generative AI—is transforming the design and execution of clinical trials, with a focus on Faro Health’s work in dramatically reducing cost and time-to-market for breakthrough medical treatments. Chris Daigle and Patrick Leung dive deep into practical enterprise uses of AI, strategies for recruitment and upskilling, governance, and how AI is impacting both technical and non-technical business functions.
Patrick, drawing on his background at Google Duplex and hedge funds, discusses how Faro Health’s pivot toward AI is reshaping operations—from clinical documentation to company-wide productivity—and why curiosity and hands-on experience are now essential traits for AI-ready teams.
"By making judicious design decisions at the very beginning of the clinical trial, you can save upwards of $100 million in the downstream cost of the trial." — Patrick Leung [00:00]
"We're using AI to greatly accelerate that process from a matter of weeks or even months down to potentially, you know, 20 minutes to get the first draft." — Patrick [04:07]
End-to-End AI Integration
"We’ve been really aggressively kind of adopting AI all over the place in our organization." — Patrick [10:02]
AI Accessibility Has Changed the Game
"It’s different from any other technology revolution I’ve been involved in." — Patrick [11:49]
Challenges in Hiring True AI Expertise
"It was just surprising ... the number of candidates that kind of fell down. And I didn't even really consider the coding test to be that hard, but it was kind of like you couldn't fake it." — Patrick [18:16]
Lessons for Non-Technical Hiring
"Identifying a few tests, essentially the equivalent of a coding test, but for HR, for sales, whatever it is ... I think that's a fantastic idea." — Chris [19:10]
Upleveling Through Curiosity and “Learning by Doing”
"We basically said, look, we have these really challenging problems ... let's go do some research, let's just learn by doing." — Patrick [22:35]
"I am not in the camp of people that think that large language models before too long will be AGI...." — Patrick [12:30]
"Using AI to code is kind of a detriment for the most part until you hit about 50 hours." — Patrick [29:01]
Enterprise Concerns: Data Privacy & Model Safety
"Every customer we talk to, is asking what happens with my data? Is it going to be mixed with our competitors data? Is it going to be safe?" — Patrick [37:03]
Advice for Evaluating Providers
"It's incumbent upon you to really research like look at the providers... look at what their policies are, look at what they commit to and make your own decisions." — Patrick [43:09]
"Take those policies and review them in a large language model and ask here's my concerns based on this policy." — Chris [44:08]
AI-Enabled Roles Beyond Tech
"At this point every executive should be thinking about how they can be using AI ... because if you won't, then your competitors will be." — Patrick [36:41]
Evolving Industry Maturity
Foster Curiosity & Experimentation
"Curiosity is one of the key traits that determines how far you go in your career. So it's not just about large language models, it's about life." — Patrick [51:25]
Don’t Blindly Trust Model Output
For more from Patrick Leung, connect via the show notes or reach out to Faro Health. For AI readiness assessments, visit chiefaiofficer.com.