
A
What are the fundamental concepts folks need to know of getting to higher quality products?
B
The most important thing is looking at data. Looking at data has always been a thing, even before AI. There's just a little bit of a twist on it for AI, but really the same thing applies.
A
When you see a real user input like this, when you actually look at what users are prompting your AI with, you realize it's very vague.
B
Absolutely. That's the whole interesting bit. Once you see that people are talking like that, you might actually want to simulate stuff that looks like that, because that's the real distribution of the data; that's what the real world looks like.
A
I'm sure our listeners expect some magical system that does this automatically. And you're like, no, man. Just spend three hours of your afternoon, go through, read some of these chats, look at them with your human eyes, put one-sentence notes on all of them, and then run a quick categorization exercise and get to work. And you see this have actual, real impact on quality and on reducing these errors.
B
Yeah, it has an immense impact on quality. It's so powerful that some of my clients are so happy with just this process that they're like, that's great, Hamel, we're done. And I'm like, no, wait, we can do more.
A
Welcome back to How I AI. I'm Claire Vo, product leader and AI obsessive, here on a mission to help you build better with these new tools. Today I have such an educational episode for people like me who are building AI products. We have Hamel Husain, who is going to demystify debugging errors in your AI product, show us how to write good evals, and show us how he runs his entire business using Claude and a GitHub repo. Let's get to it. This episode is brought to you by GoFundMe Giving Funds, the zero-fee DAF. I want to tell you about a new product GoFundMe has launched called Giving Funds, a smarter, easier way to give, especially during tax season, which is basically here. GoFundMe Giving Funds is the DAF, or donor-advised fund, from the world's number one giving platform, trusted by 200 million people. It's basically your own mini foundation without the lawyers or admin costs. You contribute money or appreciated assets, get the tax deduction right away, potentially reduce capital gains, and then decide later where to donate, from 1.4 million nonprofits. There are zero admin or asset fees, and while the money sits there, you can invest and grow it tax-free so you have more to give later, all from one simple hub with one clean tax receipt. Lock in your deduction now and decide where to give later. Perfect for tax season. Join the GoFundMe community of 200 million and start saving money on your tax bill, all while helping the causes you care about the most. Start your Giving Fund today in just minutes at gofundme.com/howiai. We'll even cover the DAF fees if you transfer your existing DAF over. That's gofundme.com/howiai to start your Giving Fund. Hamel, I'm really excited for this particular episode because I have been building products for a very long time, and this has been one of a few times in my career where the how and what of the products I'm building are so different from what I've built in the past.
They're technically different, they're different from a user experience perspective, and they have these non-deterministic models on the back end that I'm somehow, as a product leader, responsible for turning into high-quality, consistent, reliable, interesting user experiences. It's such a challenging problem. And what I love about what you're going to show us today is how to approach it systematically, the quality of product building in an AI world, and how you use different techniques to get AI products, which are new to all of us, from good to great.
B
Yeah, I'm happy to be here, excited to talk about it.
A
So, you know, this is such a new thing for product managers. I'm curious if you could start with the fundamentals. What are the fundamental concepts or things that you think folks building AI products really need to know about the process of getting to higher-quality products? And then I know you're going to show us a couple of examples of how to do that.
B
So the fundamentals really come down to this: the most important thing is looking at data. And from working with many product managers in the past, I know that looking at data has always been a thing, even before AI. I'm pretty sure that product managers who can write a little bit of SQL and are okay with spreadsheets, looking at numbers, looking at metrics, feel like that's kind of table stakes for being a good product manager nowadays. So there's just a little bit of a twist on it for AI, but really the same thing applies. It's just, okay, how do you do that for AI? That's what we teach, and that's what I'm going to show you today.
A
Great. And I cannot agree more. I think one of the most transformational skills I learned as a young baby chicken product manager was being able to write SQL and actually do my own data analysis and exploration. But I think the surface area is so broad now with AI and the data is different. So why don't you show us what we should be looking at when we're building these AI products?
B
Yeah, so let me share my screen a bit and give you some background first. This is one of my clients; the company is called Nurture Boss, and as you can see, it's an AI assistant for apartment managers, or property managers. You can get an idea from their website, which I'm showing right now. It's a virtual leasing assistant. They help with the whole top of the funnel: helping set up appointments, helping prospective residents find their apartments, answering questions about rent, so on and so forth. They're trying to reduce the toil for property managers while still having humans in the loop. When they came to me, they had already prototyped something out, kind of vibe-checking it just like everyone does, and put everything together. But they wanted to know: okay, how do we actually make it work well? Because it fails in weird ways and doesn't always do the right thing. And it feels like, every time you fix a prompt, we're not really sure; maybe we're breaking something else, or is it really improving things as a whole? We don't really know, we're just guessing. We're just kind of looking at it and going on vibes. And that is a very uncomfortable feeling when you're trying to scale a product. Okay, so the first thing that I'll jump right into is this idea of traces. Traces are a concept from engineering, but it doesn't have to be scary. It's very topical for AI, because with AI you usually have many different events, especially for a chatbot: you have multi-turn conversations where you're going back and forth with an AI, there might be retrieval of information, it might be calling some tools, external tools, internal tools, so on and so forth. So you want to log these traces, and there are many different ways to go about it.
But just to show you exactly what happened at Nurture Boss, let's go into what that looks like. So this is a platform called Braintrust. There are a lot of them; this one is called Phoenix, which has the same exact data in it. It doesn't really matter; you can see they're both the same, right? So what we have here, let me just go into a single trace. This is what I would call a trace. I can make this bigger so you can see it in full screen, and you can see what an AI interaction looks like in this product. So you have, okay, the system prompt: you are an AI assistant working as a leasing team member at some apartment. These are all fictitious because they have all been scrubbed for PII. Your primary role is to respond to text messages. So this is receiving text messages, okay? And you have a whole host of rules: provide accurate information, answer any question for residents, do the following, provide this website if someone asks for a rental application, so on and so forth, all these rules, right? And this is a real user saying, "hello, there's what's up to four month rent." I don't even know what that means.
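For readers following along in code: a trace like this is just structured data. Here is a minimal sketch of logging one to a JSONL file; the field names are illustrative assumptions, not Braintrust's or Phoenix's actual schema, and the message contents are paraphrased from the example above.

```python
import json

# One trace: the full sequence of events for a single AI interaction,
# including the hidden tool call and tool result the user never sees.
# Field names are illustrative, not any particular platform's schema.
trace = {
    "trace_id": "trace-001",
    "messages": [
        {"role": "system", "content": "You are an AI assistant working as a leasing team member..."},
        {"role": "user", "content": "hello, there's what's up to four month rent"},
        {"role": "tool_call", "name": "get_communities_information", "args": {"community": "example"}},
        {"role": "tool_result", "content": "Current specials: up to eight weeks rent free..."},
        {"role": "assistant", "content": "Hello, we are currently offering up to eight weeks rent free..."},
    ],
    "metadata": {"channel": "sms", "annotated": False},
}

# Append each trace as one JSON line so a viewer (or a vibe-coded
# annotation tool) can read them back later.
with open("traces.jsonl", "a") as f:
    f.write(json.dumps(trace) + "\n")
```

Logging plain JSONL like this is enough to get started even before adopting an observability platform.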
A
I got you, I got you. Let me read it. Hello, hello there, what's up? Two four month rent. I thought I had it. I thought I had it.
B
Yeah, it's unclear, but okay. I mean, it's fine. This is real; this is the real world. These are real traces. And then there's a tool call here: Get Communities Information. It's calling this internal tool, and the tool call result comes back with this information. This is all hidden from the user; the user is not seeing this tool call result. It's like, okay, here's information you can use about the community, blah, blah. It's not even clear this is the right tool call; we'll get to that in a moment. And then the assistant goes back to the user. This is what the AI responds to the user with: "Hello, we are currently offering up to eight weeks rent free as a special promotion. Please note that applicable lease specials and concessions can vary," blah, blah, blah. Okay, so is this right? I have a cheat sheet for myself about what is actually right and wrong. The comment here is that the user is probably asking about lease terms and things like that, not about specials. So it's not really clear this is right; this is not what we want. And this is so realistic, right? Everyone has experienced AI like this: it kind of is being helpful, but it's not really doing what you want it to. And it's actually pretty challenging, because it's not really clear what the user wanted, so you could go in a lot of different directions with this.
A
This is such an eye-opening example, because when I'm testing my own AI, I ask it good questions and I spell correctly and I'm very clear. But when you see a real user input like this, when you actually look at what users are prompting your AI with, you realize it's very vague. They say stuff like "what's up?" There's no clear question. And so I really do think looking at real user data can get a developer or PM out of their own mind about how they think users are going to interact with the system.
B
Absolutely. It's very critical that you do this. Now, you might not have this data. I just jumped right into a real example to set things off, and we can go into all the different rabbit holes of what to do if you don't have data. I just want to ground it first: one foundation is that you have to have data, and there are different ways to get it. One is you can log it from your real system, and then you have these things to look at. Another way is synthetic data, where you generate questions like this with an LLM. It might be hard to generate stuff that looks like "hello, there's what's up to four month rent," because we don't even know what it means, and an LLM probably won't generate stuff like that on its own. But that's the whole interesting bit: once you see that people are talking like that, you might actually want to simulate stuff that looks like that, because if that's the real distribution of the data, if that's what the real world looks like, you want to challenge your LLM or your AI system appropriately. Okay, so let's step back here. You have the system, and there's stuff like this happening. We can look at another trace if you want, just to get an idea. And this is not pre-scripted; I didn't memorize what's going on in these traces. We're just looking at them naturally. So this is another apartment complex, Meadowbrook Apartments, same idea. We won't read the whole system prompt again, so we'll scroll down here and get to what the user is asking: "Walk in T O R." So this must be another text message situation. And the assistant says, "Our team tries their best to accommodate walk-ins. Me get you to someone who can help." Now, that's hilarious. Why is the LLM saying "me get you"? That's surprising.
Maybe it's trying to mimic the tone of the user somehow. And then it does. Okay, great. So it seems like this one maybe is okay. Let's see what we ended up annotating. Yeah, we said this one is okay. There's some metadata down here about our labels, which we'll talk about next. But you can see this is a real system; there are many different things that can happen here. So the question becomes: okay, we talked about writing SQL and looking at data, but how do you take that same mindset to this? What do you even do with this, right? You have these crazy interactions. How do you analyze this without getting stuck? Because this seems intractable, right? At first pass.
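One hedged way to simulate messy, real-world inputs like the ones above is to seed a generator prompt with actual user messages you observed. This sketch only assembles the prompt, so it works with whatever LLM client you use; the instruction wording and intent list are illustrative assumptions, and the example messages come from the traces discussed above.

```python
# Build a generation prompt seeded with real, messy user messages, so
# synthetic test inputs match the real distribution instead of the tidy,
# well-spelled questions a developer would write.
REAL_EXAMPLES = [
    "hello, there's what's up to four month rent",
    "Walk in T O R",
]

def build_synthetic_input_prompt(real_examples, n=20):
    examples = "\n".join(f"- {msg}" for msg in real_examples)
    return (
        "You generate test inputs for a leasing-assistant chatbot.\n"
        "Real users send terse, ambiguous texts with typos, like these:\n"
        f"{examples}\n"
        f"Write {n} new messages in the same messy style, one per line, "
        "varying the intent: rent questions, tours, walk-ins, transfers."
    )
```

You would send the returned string to your LLM of choice and log the resulting conversations as traces, just like real traffic.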
A
I was just thinking, what is the SQL query I write to get the first prompt? How do you query for "give me all the first prompts that include typos," or "give me all the first prompts that are ambiguous questions"? It just feels almost insurmountable. And you showed us two examples, and it's two of probably thousands and thousands, so going through it manually is probably not super scalable. So I'm curious, what is the systematic solution here?
B
Okay, so the systematic solution is something called error analysis. Error analysis is a kind of counterintuitive process that's extremely effective. It's dumb, but it's accessible to everybody and it works. And it's not something that I made up; it's been around in machine learning for a really long time, because machine learning has the same problem. Before generative AI, we had these stochastic systems that could do a whole number of things, and how do you actually analyze that, figure out what's going wrong, and improve it? So error analysis has two steps. The first step is writing notes, and it's called open coding. It's basically journaling what is wrong. So if we go back to that other trace we saw, the first one: we would step into this trace, and, well, every observability tool has its own ways to take notes. I already have a note in here: "Assistant should have asked follow-up questions about 'what's up with four month rent' because of unclear user intent." I'm just writing notes about what is going on.
A
Okay.
B
And you do that for like a hundred traces. Randomly sample a hundred traces, do that for each one, and stop at the most upstream error you find. So you read this, you see what's going on, and you're like, okay, the user intent: it seems like we didn't do a good job of clarifying what the hell they need.
A
Yeah.
B
And so I think that's the most upstream problem in this sequence of events. So I'm going to go ahead and just write that as a note.
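The sampling step here is mechanically trivial. Assuming traces are stored one JSON object per line (the filename and the note field below are illustrative), a sketch might look like:

```python
import json
import random

# Error analysis, step one (open coding): randomly sample ~100 traces,
# read each one, and attach a free-form note, stopping at the most
# upstream error you find in the sequence of events.
def sample_for_open_coding(path="traces.jsonl", k=100, seed=0):
    with open(path) as f:
        traces = [json.loads(line) for line in f if line.strip()]
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(traces, min(k, len(traces)))

def annotate(trace, note):
    # e.g. "assistant should have asked a clarifying question: unclear intent"
    trace["open_coding_note"] = note
    return trace
```

The fixed seed means re-running the script revisits the same sample, which helps when several people are annotating the same batch.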
A
Yeah. And you say focus on the most upstream problem because you presume that if you can get early intent, early clarity and correctness, right, the rest of the system is more likely to be correct downstream.
B
Yeah, because it's causal in nature. In the sequence of events, whether it's user prompts, tool calls, retrieval for RAG, whatever it may be, an error at any point along the chain will cause downstream problems. So to simplify our lives, for the purposes of this error analysis heuristic, just focus on the upstream error when you're starting out. Eventually you do want to care about the different downstream errors, but we're trying to make this tractable, and this is the way you're going to get results fast. So basically, you go through and collect a bunch of notes, and then you can download those notes and categorize them. You can even put them into something like ChatGPT: hey, here are all my notes, can you bucket these into categories? And you kind of have to go back and forth with it a little bit: hey, these are my notes, these are the categories, I think you're missing a category, whatever. Now, with Nurture Boss, one of the things we highly recommend a lot of people think about is making your own custom annotation tool. You can see this here in Braintrust, and it's also here in Arize Phoenix; they're very similar. You can see this is a very similar-looking UI, and they even called it error analysis here. You can add your notes and save those notes, same thing. If you're going to be looking at a lot of data, you don't want to slow yourself down, and you want very human-readable output. Sometimes this markdown stuff is not that readable, and you want to make sure it makes sense to you so you can fly through it as fast as possible.
It's really easy to vibe-code this stuff, because ultimately what you're doing is showing data. So, in the Nurture Boss situation: as you might have gathered, they have multiple channels customers can contact them on. They have text message, which we saw; they have email; they have a chatbot on the website; so on and so forth. So they just wanted something they could navigate faster, and they vibe-coded it, essentially. I mean, we're developers, but we're using AI in our process, and we can do this very fast. So: what channel is the trace from? And then some other filters, like, hey, did we already annotate this or not? And then some statistics at the top. This is what the annotation looks like; it's very similar, but dialed into what we wanted. And we just took notes. Then, for Nurture Boss, we had an automated process that would summarize and categorize those notes into the biggest issues. And then we did something very simple: counting. Counting is always powerful. As a product manager, from your experience writing SQL queries, you know how powerful counting is. Counting remains powerful. And so you can count these issues, right? So, okay, for Nurture Boss. I don't know if you can see my screen here, if it's too small.
A
Yeah, yeah, that's great.
B
So: what are the biggest issues after doing that error analysis exercise, which only took a few hours? It's like, okay, we're having a lot of transfer and handoff issues, where we're trying to transfer the customer to a human. We're having a lot of tour scheduling issues: they're trying to schedule a tour, or reschedule one. In this case, we found someone asking to reschedule, but there is no reschedule capability; the AI doesn't know that, so it just keeps scheduling more tours, which is bad. Then follow-up issues: the AI not following up when the user has a question. And sometimes incorrect information provided. Okay, so you see, these are the counts. And now we're not lost. Now we know what we should be working on: okay, we should fix this transfer and handoff issue and this tour scheduling issue. We have confidence. We're not paralyzed anymore. We know, okay, this is what we need to focus on in our AI.
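The counting step described here needs nothing fancier than a frequency count over note categories. A sketch, with made-up labels echoing the categories above:

```python
from collections import Counter

# After bucketing open-coding notes into categories (by hand or with an
# LLM), counting is enough to produce a prioritized list of what to fix.
notes = [
    ("trace-001", "transfer/handoff"),
    ("trace-002", "tour scheduling"),
    ("trace-003", "transfer/handoff"),
    ("trace-004", "follow-up"),
    ("trace-005", "transfer/handoff"),
    ("trace-006", "tour scheduling"),
]

counts = Counter(category for _, category in notes)
for category, n in counts.most_common():
    print(f"{n:>3}  {category}")
```

The output, sorted by frequency, is exactly the "what should we work on first" list Hamel shows on screen.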
A
This episode is brought to you by Persona, the B2B identity platform helping product, fraud, and trust and safety teams protect what they're building in an AI-first world. In 2024, bot traffic officially surpassed human activity online, and with AI agents projected to drive nearly 90% of all traffic by the end of the decade, it's clear that most of the Internet won't be human for much longer. That's why trust and safety matters more than ever. Whether you're building a next-gen AI product or launching a new digital platform, Persona helps ensure it's real humans, not bots or bad actors, accessing your tools. With Persona's building blocks, you can verify users, fight fraud, and meet compliance requirements, all through identity flows tailored to your product and risk needs. You may have already seen Persona in action if you verified your LinkedIn profile or signed up for an Etsy account. It powers identity for the Internet's most trusted platforms, and now it can power yours too. Visit withpersona.com/howiai to learn more. I love this. Just to recap: you're taking these traces of real conversations, and you don't even have to read all of each one; you read until you hit a snag, an obvious incorrect or high-friction part of the experience. You have vibe-coded an app that makes it really easy for the team to go in, annotate these, rate them good quality or bad quality, automatically categorize them, and count them, and then you have a prioritized list: here are the problems I need to go solve. And what I love about this is, I'm sure our listeners expect some magical system that does this automatically. And you're like, no, man, just spend three hours of your afternoon, go through, read some of these chats, look at them with your human eyes, put one-sentence notes on all of them, and then run a quick categorization exercise and get to work. And you see this have actual, real impact on quality and on reducing these errors.
B
Yeah, it has an immense impact on quality. It's so powerful that some of my clients are so happy with just this process that they're like, that's great, Hamel, we're done. And I'm like, no, wait, we can do more; you've paid for more. But they're like, this is so great, I just feel like I know what to do. They find so much value in this process, and it is very important. This is something no one talks about when they talk about evals. How do you write an eval? What eval do you do? What tools should you use? Before you get into all that stuff, you need to have some grounding in what eval you should even write, because there are infinite evals. So in this case, we wrote an eval about tour scheduling issues and an eval about transfer and handoff issues. And we felt really good about that, because we knew those were real problems. We knew how to write the evals because we saw the errors, and we knew how to find data to test those evals because we had already tagged it and seen the errors, which is exactly the way you want to do it.
A
Yeah. And what I also like about this is it takes the burden off your users. So many people try to collect this data by putting up a little thumbs up and thumbs down, or little comments; I even have that on parts of my product. And yes, it is useful, but it only gives you a sliver of the self-identified errors in the app. Users are highly tolerant of systems, so sometimes those errors just don't get escalated by users. They'll either abandon, or they'll work through too many steps to get to the outcome they want and have a poor-quality experience. So I think you take the burden on yourself and say you're responsible for looking at the data; you can create simple ways to categorize it, and then you have a prioritized list. Now, if your client is willing to go the next step and do something about this, and write evals and fix prompts, what are your next steps? Where do we go from here?
B
I just want to dwell on this for a minute. This particular technique is so powerful, and not that many people know about it. I actually recently did a training with OpenAI, showing the people at OpenAI how this works for domain-specific evals. If you want to learn more about this, we had Jacob, the founder of Nurture Boss, walk through this whole process in about two minutes, and you can find it on this page. Okay, so to get to your question: what do you do now? You've done your error analysis and you've prioritized these things, so now you get into writing the evals. You have to decide what kind of evals you want, and there are different kinds. There are reference-based evals, where you know what the right answer is and maybe you can write some code; you don't need an LLM to do the eval for you. Or, if it's more subjective in nature, and maybe this transfer and handoff issue is more subjective in nature, then you need an LLM judge. So you can start to write those evals. I have this blog post here about evals in general, with this diagram. It's honestly really hard to put this whole thing into a diagram, because it's a nonlinear process. But really, what you want to do is: okay, we already covered logging traces, and there are different kinds of evaluators, or evaluations. There's the unit-test kind, which I would call code-based evals, and then there are model-based evals, LLM judges. What kinds of things would be good for a code-based eval? For example, if you have user IDs showing up in the response or something like that, you can test for that in code.
A
I have to say, you're saving my life here, because I was thinking, what is one of these unit tests I need to write? And that is exactly one of them: my tool calls need UUIDs and users definitely do not. So that's a great example for anybody who's writing a chatbot that does a lot of tool calling.
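A minimal sketch of that unit-test-style, code-based eval, assuming the IDs in question are standard-format UUIDs (the regex and function name are illustrative):

```python
import re

# Code-based eval: no LLM needed. Flag any assistant response where a UUID
# (e.g. an internal tool-call identifier) leaks into user-facing text.
UUID_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b",
    re.IGNORECASE,
)

def leaks_uuid(response_text):
    """Return True if a UUID appears in the user-facing response."""
    return bool(UUID_RE.search(response_text))
```

This kind of check is deterministic and cheap, so it can run on every trace rather than a sample.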
B
Yeah, because they can show up by accident. You might have the UUID in the system prompt, and it inadvertently shows up in the output for some reason, and you don't want that. No matter what kinds of tests you write, you want to create test cases. Sometimes you can gather those from your traces; sometimes you might want to generate synthetic data. So this is a prompt for a different real estate assistant, Rechat, which is for residential real estate, and this is a simplified version of their prompt: fifty different instructions that a real estate agent can give to their assistant. It creates contacts in their CRM; contact details can include name, phone, email, whatever. And basically, you can generate synthetic inputs to the system that you can then log traces from. I'm going to jump around a little bit, so we'll come back to that. Okay, we already covered logging traces. This is another custom log annotation tool, yet again, because we really emphasize that it's important to remove all friction in doing this, so I won't linger on it too much. Now, one thing that's usually skipped when we talk about LLM as a judge is that people just use it off the shelf: they write a prompt, they say, okay, judge it, and then they report that. Let me actually go to a different blog post that is a little bit better for LLM judges, which is this one.
Okay, so, LLM as a judge. You often see in LLM eval land a dashboard that looks like this: helpfulness, truthfulness, conciseness score, tone, whatever. What the hell does that mean? Does anyone know what that means? Nobody knows. No one understands concretely, if the helpfulness score is 4.2 and it goes to 4.7, what is wrong or what changed? No. There's a lot of guidance on how to create an LLM as a judge; it's probably too much for this podcast to cover all of it, and this blog post is quite long, enumerating how to do it correctly. But the main things to keep in mind are: one, you need binary outputs, is it good or bad, for a specific problem. So for the handoff problem for Nurture Boss: okay, was there a problem or not? And you want specific evaluators for specific problems. Number two, you need to hand-label some data, which you already kind of do in error analysis, and you want to compare the judge to the hand-labeled data so that you can trust the judge. The last thing you want to do is just throw a judge up on a dashboard like this, where people don't know if they can trust it. The worst thing you can do as a product manager is start showing people evals, and then at some point people's perception of the product, their experience of the product, doesn't match the evals. They're like, hey, it's broken, but the evals are showing that it's good. That's the moment people lose trust in you, and it's going to be really hard to regain that trust. So the way you make sure you can trust these automated LLM evals is to measure their agreement with the hand labels.
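Measuring that agreement is, again, just counting. A sketch with illustrative labels; in practice the human labels come from your error analysis pass and the judge labels from the LLM judge's binary verdicts on the same traces:

```python
# Validate the judge against hand labels before trusting it: a binary
# LLM judge is only useful if it agrees with your human annotations
# on the specific failure mode it evaluates.
def agreement(human_labels, judge_labels):
    """Fraction of traces where the judge's verdict matches the human's."""
    assert len(human_labels) == len(judge_labels)
    matches = sum(h == j for h, j in zip(human_labels, judge_labels))
    return matches / len(human_labels)

# Illustrative labels for one failure mode (e.g. the handoff problem):
human = ["bad", "good", "good", "bad", "good"]
judge = ["bad", "good", "bad", "bad", "good"]
print(f"judge/human agreement: {agreement(human, judge):.0%}")
```

If agreement is low, you refine the judge prompt and re-measure before ever putting its scores on a dashboard.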
A
Yep. So what I'm hearing from you on LLM as judge: these general buckets with arbitrary ratings against them are not useful and will often work against you. You want to write specific, binary-outcome evals for specific tasks. So you want a set of evals like: does this get scheduled correctly, yes or no? You're making a list of evals for the LLM as a judge to evaluate, each giving you a pass/fail, yes/no, true/false binary outcome, very simple. And then you're doing the additional layer of work of validating that the eval itself is valid, by actually looking at the outcome and asking: do I actually agree with this LLM-as-a-judge evaluation of the quality of this output? Those steps together are going to give you a much more comprehensive view of how your product is performing. And that second layer of human evaluation gives you more confidence that either your LLM judge is good and is evaluating your outputs correctly, or you need to tune the judge itself to get higher-quality evaluations. Is that kind of a summary?
B
And the thing that's really important is that it's really difficult to write any LLM judge prompt if you don't do this. There's some research from my co-instructor for the course I'm teaching, a paper called "Who Validates the Validators?", and it shows that people are really bad at writing specifications or requirements until they can react to what an LLM is doing; that's what helps them clarify and externalize what they want. It's only by going through this process of writing detailed notes and critiquing things that you can then start refining the LLM judge.
A
Great. So we've covered traces and errors, annotation, how to build automated unit tests, looking at data manually, and doing LLM as judge the correct way. Now tell me: I've identified all these problems, I have these evals that give me data. How do I write a good prompt? Are there techniques you've found consistently effective in the next step, improving your system instructions and your tools, where you actually have to go solve these problems?
B
Yeah. So you're going to use these evals and deploy them at scale. You're not looking at all your data; you're looking at a sample, scoring your LLM as a judge against a sample of labeled data, deploying that at scale, and then looking at where the errors are. And you have to make a judgment call on how to improve your system based on the errors you're finding. Is it a retrieval problem? Is it a prompting issue? Should you be putting more examples in the prompt? There's not really a silver bullet. I would say retrieval is its own beast; it tends to be the Achilles' heel of a lot of AI products, where things tend to go wrong. But especially in the beginning you're going to find a lot of low-hanging fruit. For example, at Nurture Boss, the system prompt didn't contain today's date. So when a person said, hey, can you schedule something for tomorrow, the AI had no idea what tomorrow was, but it didn't tell the user that; it just guessed. That's really obvious. So there will be obvious things you can fix, and then less obvious things. You can try prompt engineering, so there's a spectrum from prompt engineering all the way to fine tuning. Most people shouldn't get into fine tuning. I will say that if you do all this eval stuff, fine tuning is basically free, because you have all this infrastructure set up to do these measurements and curate high-signal data, and those difficult examples your AI is not getting right are exactly what you want to fine tune on.
That's the very high value stuff for fine tuning. And fine tuning is not so hard; in the Rechat case, we had to fine tune to go the extra mile. But in most cases it's prompt engineering. There are no magic prompt engineering tricks; it's really a lot of experimentation that you should engage in.
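The missing-date bug is a good example of a fix that lives entirely in the system prompt. A minimal sketch (the prompt text and function name here are hypothetical, not the real Nurture Boss prompt):

```python
# Hypothetical sketch of the kind of fix described above: inject today's
# date into the system prompt so the model can resolve relative dates
# like "tomorrow" instead of guessing.
from datetime import date

def build_system_prompt(today: date) -> str:
    # Illustrative wording only; the real prompt would be far longer.
    return (
        "You are a leasing assistant who schedules apartment tours.\n"
        f"Today's date is {today.isoformat()}.\n"
        "If a request is ambiguous, ask the user instead of guessing."
    )

print(build_system_prompt(date(2025, 10, 13)))
```

The point is that the prompt is rebuilt per request, so dynamic facts like the current date are always fresh rather than baked into a static string.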
A
Well, one of the things I've found so interesting as an AI builder coming from a software engineering background is that I now have a natural-language surface for bugs, in my system instructions and prompts. I had this experience recently on ChatPRD where we were really having a hard time with tool calling. One of our tools was intermittently not being called, no matter what the user said, and it was really hard to pin down. We have this monster system prompt, and I went through it, and there were two words in the prompt that were just incorrect. It was about UUIDs, but it was incorrect. As soon as I deleted those two words, which had just been typed in by somebody and pushed to the repo, our quality on that tool calling shot right up. So as product people and engineers, we have to start thinking of the full surface area of our product. It's not just the construction of the agent or the chatbot itself; it really goes down to what words are going in and out of your system. It's a complicated surface area to debug and keep track of because it's unstructured, but it's super high impact in my experience.
B
Yeah, definitely. When it comes to tool calls, actually, let me show you one thing that always comes up: people wonder how you evaluate agents, because there are so many different handoffs. How do you actually do it in real life? Let me see if I can share that. Okay, so I'm sharing the book we give students in our class; let me go to the table of contents. There are all these different areas; we'll skim toward the agent part. There are analytical tools you can use for everything. For agents, you can build these transition matrices: going from one step to another, where are the errors located? Which agent handoffs, which steps handing off to which other steps? In this case, we have this generate-SQL-to-execute-SQL transition; that's where a lot of the errors are happening. And then you can narrow it down. As you get more advanced, evals are a very deep subject, and there are a lot of analytical tools you can use. It's very interesting, and as a product manager you can get really far with AI-assisted notebooks.
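The transition-matrix idea can be sketched roughly as follows (the step names and traces are made up for illustration): for each failing trace, record which step-to-step handoff the first error occurred in, then look for hot spots like the generate-SQL-to-execute-SQL transition mentioned above:

```python
# Illustrative sketch: count where errors occur in agent step handoffs.
from collections import Counter

# Hypothetical traces: each is an ordered list of (step_name, succeeded) pairs.
traces = [
    [("plan", True), ("generate_sql", True), ("execute_sql", False)],
    [("plan", True), ("generate_sql", True), ("execute_sql", False)],
    [("plan", True), ("generate_sql", False)],
    [("plan", True), ("generate_sql", True), ("execute_sql", True)],
]

matrix = Counter()
for trace in traces:
    # Walk consecutive (previous step, current step) pairs; note that a
    # failure at the very first step isn't attributed to any handoff here.
    for (prev, _), (step, ok) in zip(trace, trace[1:]):
        if not ok:  # attribute the first failing step to its incoming handoff
            matrix[(prev, step)] += 1
            break

for (src, dst), n in matrix.most_common():
    print(f"{src} -> {dst}: {n} errors")
# → generate_sql -> execute_sql: 2 errors
#   plan -> generate_sql: 1 errors
```

In a real system the same counting works over hundreds of traces, and the hottest cell in the matrix tells you which handoff to investigate first.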
A
Yeah. What I was going to say from a product manager perspective is that this is framed around errors and evals, but it's also just analytics for agentic systems, figuring out what your users are trying to do. I hadn't thought of this idea of actually mapping out the conversation-to-tool or tool-to-tool handoffs. Even if all of this were working effectively, a product manager's ability to see the data on an agent's behavior from a tool-to-tool handoff perspective, and really identify where users are trying to get value out of the system, can also drive roadmap ideas. If you're seeing that people are just writing SQL and executing SQL, you need to dig into what other things around that you could build for users. So I like it from the error perspective, and I also like it from the product discovery perspective.
B
Yeah, definitely, that's, that's very true. Um, yeah, I like that.
A
Okay. The other thing I like that you've shown us is that there's no way to do this other than to just do it. People want tricks, a hack, an off-the-shelf solution, and you're saying: honestly, look at the data, build yourself a solution, validate it yourself, do the hard work. If you do the hard work, you can create these leaps in product quality and experience; right now you've just got to look at the data, make some decisions, and make things better. So this has been super illuminating for helping people like me who are building AI products make them higher quality. Let's spend just a couple of minutes on a totally different topic. You're running this business, you're running a course, you're clearly an expert in AI. What tools are in your stack for running your day-to-day life, or at least your business life?
B
Yeah, so I do a lot of writing and a lot of communication with clients, and I also want to reduce my own toil. Let me share my screen again; it's probably easiest to show you a Claude project. So I have all these Claude projects: one for copywriting, a legal assistant, consulting proposals. Consulting proposals is pretty interesting. It's basically examples of consulting proposals; it's kind of funny, I have skill level, partner, Palantir, expert, generative AI, blah, blah. I give it some instructions based on the other proposals I have, and I have this prompt: get to the point, write short sentences, whatever. And I have a lot of examples. Basically, anytime I have an intake call with a client who wants a proposal, I give this the transcript, and then it's made; it's basically almost ready. It takes me about a minute to edit it and get it going. So that's proposals. I have one for the course, with a lot of context about my course: the entire book, a very extensive FAQ that I've published, all the transcripts, all the Discord messages, office hours. And again, my prompt is: hey, your job is to help course instructors create standalone, interesting FAQs. This is a writing prompt I have everywhere.
A
Do not add filler words. Don't repeat yourself. Get to the point.
B
Yeah, you have to really insist on that. So there's that one for the course, and there's one that helped me create these things called Lightning Lessons, which are basically lead magnets. There's all kinds of stuff like this.
A
I see you and I share a general counsel here.
B
Oh, okay.
A
With Claude AI.
B
Oh yeah, right, exactly. There you go. So there's that, and I also have my own software. Let me see if I can find it; I'm not really advertising it, but I have YouTube chapter creation, and then I have this thing that creates blog posts out of YouTube videos. Let me show you an example. With this one, basically, I take a YouTube video and it becomes an annotated presentation, so you don't have to watch the video.
A
Yep.
B
Especially if the video has slides, it will screenshot all the slides and put a summary under each slide about what was said, so you can consume a one-hour presentation in five minutes. That's really useful because I teach a lot, have a lot of content, and distribute notes, so a lot of that educational stuff is part of my workflow. This uses Gemini. Essentially it pulls the transcript, it pulls the video, I can put in the slides all at once, I give it a lot of examples, and it produces this.
A
Yeah, I've heard this in a couple of podcasts we've done recently: folks really like Gemini for video ingestion. It seems to be the fan favorite for taking YouTube videos or other video content and turning it into text or other applications you can extract from it. So try the Gemini models for that, folks.
B
Yeah, it's absolutely brilliant. It's amazing.
A
Cool. Okay, so you have Claude projects for every little part of your business. I love the proposal workflow; it's something that those of us who do enterprise sales could probably make some use of. I'm about to start doing blog posts for all the How I AI podcasts, so maybe I will download your repo and give that a little spin. And then you're using Gemini models to extract content and share it as templates. And then you have, oh, look at these prompts. You've got a GitHub repo with prompts.
B
Yeah, a GitHub repo with prompts. This one is private, but just to give you an idea conceptually: it's basically a monorepo of everything. The reason is that I like to have Claude Code, OpenHands, you name it, pointed at it, because all these projects are interrelated. My blog is in here, for example; this is that YouTube thing I just showed you; this Hamel project is something else that fetches Discord; this is about copywriting proposals; whatever. I just point AI at this repo, and there are Claude rules in here that say: okay, here's what this repo is about and where you find stuff. If you need to do writing work, look here, and so on.
A
So, my friend, you have buried the lede here, because we could have done an entire episode on just this repo. What this makes me think of is that five years ago there was this big note-taking, second-brain trend: where do you put all your information so you can have access to it forever? I see this and my little engineering brain goes: obviously it should go in a repo, and it should be a combination of data sources, notes, articles, things I've written, things I like, and prompts and tools to actually do something with all of it. So you have given me a personal project to go work on in the next couple of days, because as somebody who lives with Cursor or Claude Code as copilots for everything I do, this is how I would want to organize my data and my prompts to be able to do something with them.
B
Yeah, I don't want to be locked in to any one provider, and this is how I do that, essentially.
A
Amazing. Okay, we might have to have you back to go through this thing in detail. This has been so great. I have two lightning round questions for you and then I will get you out of here; I know you're a busy guy. My first question: a lot of what you showed us requires a person to go through with their human eyes, read things, and evaluate. Whose role do you think this is? Is it the product manager's role? The engineer's? The subject matter expert's? Who does this?
B
I think the subject matter expert is very central, and a lot of times the product manager is the subject matter expert, the SME, in a lot of organizations. They're the person everyone looks to for taste, for what should be happening with the user. So I would say a lot of times it is the product manager who should be doing that annotation. When it gets into the analysis, it's really interesting: the more a product manager can do, the better, with SQL and the things you know about. At some point, when it gets advanced, you probably do need a data scientist. But the more you learn, the better, and vice versa; the more data scientists learn product skills, the better. There's always this tension: can we collapse the product role and this data-scientist-type AI role? I'm not sure; it's yet to be seen, but I don't think so. There's a lot of surface area. There's something called an AI engineer, there's the AI product manager, and there's still this data scientist aspect. Those three roles are all operating on this problem, and there's definitely a lot of surface area for all of them, especially as you scale.
A
The one other thing I would call out, or my hope, is this: the technical building teams are sort of proxies, in my mind, for the subject matter experts. A lot of times the product manager is a proxy for, say, the leasing agent in this example; they understand that user, they understand what high quality is. But I would really love to see folks in operational or more functional roles come in and actually contribute to the quality of the products, because they know what makes a good user experience, what makes a good leasing agent, how they should speak, and what they should do. There's an opportunity for those folks to lean in and bring that expertise to bear in a way that scales across a company. For folks willing and brave enough to do it, I think product teams would welcome non-technical colleagues into this process to add more user empathy and subject matter expertise.
B
Yeah, definitely. The more you can bring the actual required taste, in the product sense, into the process, the better, because that's essentially what you're doing when you're annotating.
A
Yep.
B
Doing this error analysis. And the error analysis is the foundation for everything.
A
Yep. Okay, and then my final question, which I ask everybody. I know you're very structured, and you'll tell me you look at the data and then figure out exactly what to say. But you have to admit, sometimes AI is very frustrating and doesn't do what you want it to do. Do you have any back-pocket prompting techniques you use? Do you yell? Do you go all caps? What's your strategy?
B
Where AI has frustrated me the most is writing, because I don't want the writing to sound like AI.
A
Yeah.
B
And it's hard. That's the last thing you want in certain situations, for your writing to sound like AI. Not that AI is wrong; it's just that you want to make sure your flavor comes across. So one thing, I showed you a little bit of my writing prompt, and I can share it with you separately, is to provide lots of examples, but then also take it step by step. For writing, I have it write an outline, then have it write the first one or two sections, and I edit those very carefully. One tip is to use something like AI Studio that allows you to edit the output the LLM is giving you. That's really important, because what that ends up doing is creating examples for the LLM right there in context.
A
Yeah, in line. Yep.
B
Yeah. So you want to edit the output, in something like a notebook or AI Studio; there aren't too many tools that let you edit the output. But once you do that hard work of creating those examples, especially for the exact thing you're trying to write, then it starts to work really well.
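The edit-the-output workflow can be sketched in provider-agnostic chat-message form (all strings here are invented for illustration): the assistant turn carries your hand-edited text rather than the model's raw draft, so it serves as an in-context example for every later section:

```python
# Illustrative sketch of "editing the output" as in-context examples.
# No specific provider API is assumed; this is just the generic
# system/user/assistant message shape most chat APIs accept.
draft = "Evals are a super exciting, game-changing paradigm..."  # raw model output
edited = "Evals are how you find out what is actually broken."   # your hand edit

messages = [
    {"role": "system", "content": "Write in my voice. Get to the point."},
    {"role": "user", "content": "Write section 1 of the outline."},
    # The assistant turn holds the *edited* text, not the raw draft, so
    # the model conditions on the writing you actually want going forward.
    {"role": "assistant", "content": edited},
    {"role": "user", "content": "Now write section 2."},
]

# The raw draft never appears in the conversation history.
assert draft not in [m["content"] for m in messages]
```

This is why tools that let you edit model output in place matter: each edit becomes a few-shot example for the rest of the session without any extra prompt engineering.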
A
Yeah, one of the most important things I built into my AI product is that every asset that gets generated has a real-time editor for the user to update it, and those updates go back into the model. If the central value proposition of your product is writing, which mine is, it's one of the hardest stylistic challenges I've seen AI struggle with; it all sounds like slop. I can identify AI writing from a mile away. So I've found this incremental optimization, first outline, then draft, then edit, then refine, takes a while, and there's some latency in the experience, but it ends up netting higher quality. And then use it as a draft, edit it, and get the system to be better. So that's really great feedback.
B
Is this for ChatPRD?
A
This is for ChatPRD. Yep.
B
Yeah, very cool.
A
Yeah, you know, I have high standards for writing too, so it was important to me. Well, this was so great. Where can we find you and how can we be helpful?
B
Yeah, hamel.dev is my website. You can also find me, Hamel Husain, on Twitter. And I'm teaching a course on Maven, as you know, about evals, which goes into all these subjects very deeply. That's where to find me.
A
Great. And for our listeners who don't know, Lenny's list is on Maven, including a How I AI section that I think features your course, so you can check it out there. Thank you so much for the time. It was super educational and very practical; I'm going to take these tips right away and go improve my own product. Have a great day.
B
Yeah. Thank you for having me on.
A
Thanks so much for watching. If you enjoyed this show, please like and subscribe here on YouTube, or even better, leave us a comment with your thoughts. You can also find this podcast on Apple Podcasts, Spotify, or your favorite podcast app. Please consider leaving us a rating and review, which will help others find the show. You can see all our episodes and learn more about the show at howiaipod.com. See you next time.
Podcast: How I AI
Host: Claire Vo
Episode: "Evals, error analysis, and better prompts: A systematic approach to improving your AI products"
Guest: Hamel Husain (ML engineer)
Date: October 13, 2025
In this episode, Claire Vo and Hamel Husain dive deep into the practical, systematic approaches for debugging, evaluating, and improving AI-powered products. The conversation explores foundational concepts for PMs and AI builders, actionable workflows for error analysis, writing impactful evals, and iteratively optimizing prompts and system instructions. Hamel also shares a behind-the-scenes look at his personal AI-enabled workflow for running his business with tools like Claude and GitHub repos.
Identify Key Evals from Error Analysis:
Write evals focused on actual, high-impact problems (e.g., transfer failures or tour scheduling), not generic metrics.
Reference-vs-Subjective Evals:
Validation and Trust:
Always validate LLM-judged evals against a hand-labeled sample to avoid metric “drift” and loss of trust.
| Timestamp | Segment |
|-----------|---------|
| 04:29 | Foundations: Looking at Data for AI Product Quality |
| 05:33 | How to Analyze AI Traces—Real Example Demo |
| 10:37 | Importance of Real User Data vs. Synthetic or "Happy Path" |
| 14:30 | Systematic Error Analysis: Manual Coding & Categorization |
| 17:20 | Counting & Prioritizing Issues |
| 20:10 | Example Results from Nurture Boss Trace Analysis |
| 23:26 | Impact of Manual Error Review—Clients are Delighted |
| 24:33 | Moving to Writing Domain-Specific Evals |
| 27:45 | Automated (code-based) Evals versus LLM-as-Judge Evals |
| 30:20 | Dangers of Untrustworthy Evals and the Need for Validation |
| 33:14 | Research: Who Validates the Validators? |
| 34:39 | How to Actually Improve System Prompts |
| 38:15 | Analytics for Agentic Systems, Transition Matrices |
| 41:34 | Hamel's Workflow: Claude Projects, Gemini for Video, GitHub Repo |
| 47:00 | "Second Brain" Concept Applied to AI Work via GitHub |
| 48:27 | Who Should Be Doing Annotations? Division of Labor |
| 51:25 | Back Pocket: Practical Prompting Tips (Especially for Writing) |
This episode is a must-listen (or must-read) for anyone shipping AI-driven products and looking for a reality-based approach to quality, reliability, and prompt iteration—free from hype and grounded in hands-on, systematic practice.