Transcript
A (0:01)
Creating great products isn't just about product managers and their day-to-day interactions with developers. It's about how an organization supports products as a whole: the systems, the processes, and the cultures in place that help companies deliver value to their customers. With the help of some boundary-pushing guests and inspiration from your most pressing product questions, we'll dive into this system from every angle and help you think like a great product leader. This is the Product Thinking Podcast. Here's your host, Melissa Perri.
B (0:37)
Hello and welcome to another episode of the Product Thinking Podcast. It's time for Dear Melissa. This is the time of the week where I answer all of your burning product management questions. If you have a question for me, go to dearmelissa.com and let me know what it is. I answer them here every single Friday. This week's question is from a junior product manager, and it's all about knowing when a feature is ready for delivery. Let's dive in.

PMs make great investors. If you're a product leader curious about angel investing, check out Angel Squad. It's where over 2,000 operators from Google, Meta, and Apple learn to invest in high-growth startups alongside Hustle Fund. I've been a member for years and highly recommend it. They've given me a few 30-day guest passes to share, so head over to go.angelsquad.co/melissa, and make sure to act fast as the passes are limited.

Dear Melissa, one challenge I often think about is knowing when a feature is well defined enough to move into delivery. Do you usually follow a framework, or rely more on experience and intuition?

This is a great question and a big struggle for product managers when they're first starting out in their career. If you've had any kind of agile training, it usually includes a "ready for dev" type of system: things are specified, and there are acceptance criteria that define when something is done. You can follow that as a framework, but I try to keep it a little bit looser in my area and make it more collaborative with developers. What I've seen as well is that sometimes people will take that acceptance criteria and those specs so far that developers won't start anything unless everything is clear and detailed all the way down into the little nitty-gritty on all of these features, and I don't think that's realistic. So we always want to balance
"we need more clarity" with "we're overthinking this," or "we can't actually start this until it's really ready or very well specified." So for me, the answer isn't a very rigid framework. It's more about understanding what type of confidence you need based on your specific situation. There are actually two gates that you should think about. The first gate is validation, and then you should think about specification. So two different hurdles, right? Is it validated enough to build, and is it defined enough for developers to actually start coding it? There are different questions that we need to answer in both of these phases. And you cannot skip validation and just go straight into delivery. But the level of specification that you can do when you're thinking about delivery can vary dramatically based on your risk.

Let's start with our first gate, which is validation. We want to think about: do we have enough confidence that this is actually worth building? In validation, we should think about de-risking the why and the what, not the how. We should be thinking about: what evidence do we have that users will actually want this solution? What's the bar for validation when we think about risk tolerance? In this area, we want to think about high costs and risk to the users, and we want to make sure that how much validation and how much confidence we need matches both the investment and the impact of what we're building. There are a lot of different types of validation that you can do: user interviews, prototypes, A/B testing, competitive analysis, all of these different things, so that we can make sure that what we're building is actually right. Once we've gone through all that data, then we actually have to think about what kind of confidence we need to say this is ready to go.
So this is the hard part, and this is where a lot of junior product managers get stuck: trying to be 100% confident that this is the right thing to build. You will never be 100% confident unless it's a reported bug that comes back all the time and people are like, just fix this. And that's not what we're really talking about here. So when you think about how confident you need to be, think about the risk. What's the risk profile? And risk happens in a couple of different ways. I think about things like cost risk: how expensive is it to actually build and maintain? The higher the cost, the stronger the validation you need. Then there's technical risk: does this touch sensitive systems, data, integrations? Does it affect the codebase for a very long time? Is it really hard to build? The higher the risk, the more we're going to need clear specifications and to make sure that it's very well validated before we move it into dev. Then there's user impact risk: could this break existing workflows or frustrate users? How many users does it actually impact? Is there sensitive data involved? Again, the higher the impact, the more validation you need. And then we've got business risk, right? Is this the bet the company is making that will put us into the future? Are there implications for the business in the way that we're going to change or evolve in the future? Is it more of a nice-to-have, or is it something that we really need to do to help this business succeed? The higher the risk, the more validation we need. Now, again, you're never going to get to 100%. So what is good enough? That's where this becomes a little more art than science. If you're like, I'm 60% confident in it, but your risk is, let's say, medium to high, that's actually pretty good. 60% confidence is actually pretty high. A lot of times, at 50/50, you could do that. How fast is it to test? Can you actually put this out and see if there's any traction?
If the risk is lower, you can move faster. If it's higher risk and you really have to nail it, maybe we need a little more validation. So again, that's going to come a little bit more with experience than with a hard-and-fast rule. But you should be talking to your team and talking to your leaders to figure out: what's our appetite for risk at different stages?

Now, the second part of this is specification: is it clear enough for development to start? And that's probably what you're talking about here too. When we're thinking about having development start on something that has been validated, so again, it's past validation, this isn't about having every detail figured out, but it is about having enough clarity and context so that they can start intelligently. A core question you can ask yourself: can a developer read the specifications, or what I'm actually writing out here, user story specifications, details about our product, and understand what they're building without guessing? You need things like clarity on user interactions and data requirements. What data do we need to actually pull out of our systems? Digest and understand what data we need from our customers. Those types of things become really important because they shape databases and the underlying architecture for developers. You also need to think about things like integrations, all of that different stuff. So you don't need pixel-perfect designs for a developer to get started, and you don't need every edge case to actually be solved, but you do need enough clarity on what it is that we're building here after it's been validated. I like to think through things like user flow: how does someone actually use the feature? Can you walk through a happy path? Maybe you have a prototype or a concept that the developer can get an idea from. Usually that would be enough, combined with data requirements.
On what information do we need to collect, store, and display? Integration points: does this need to talk to other systems, APIs, or services? All of those might be enough for developers to say, hey, I can get started on the back end of things while you fine-tune the front end. We have enough to go on where we can start exploring, handle this a little bit, and then we'll work through it iteratively. You also want to make sure that before they start, you have things like success criteria figured out: how will we know if this is working as intended? That will be the last hurdle that we go through, which is the acceptance criteria that we think of in agile. Does it work? Are they able to do these things? Are we catching errors? All that stuff will need to be figured out before we can actually ship it. But there is actually a lot of stuff that we can figure out along the way. Detailed UX polish, micro-interactions, those things usually improve through iteration. So maybe the developers are putting together the first iteration of it, showing it to you and the UX designer, and the UX designer is like, move this, do that. You might be clicking through and say, oh, we forgot to include this edge case, or we forgot to have this work like XYZ for these types of customers over here. You can iterate through that. So I don't want you to think that you need to have everything figured out before a developer can touch it, but you do need to largely have the concept available, start to understand the flow, start to understand the functionality, and then you can usually iterate and optimize that way. And that's going to be faster than speccing out every single little detail. So how can you tell where the balance is without a ton of experience? Let's talk about some red flags that show you're just not ready for development.
If developers are asking you the same clarifying questions over and over again, you did a really bad job of building context up front and explaining what they're building to them. They might not have enough definition when it comes to flows. They might not have enough definition on the why to be able to fill in some gaps. So you're going to want to go back to the drawing board: explain the why, explain the concept, explain the product, how it should work, how users are going to use it. Maybe you need some more specified flows and functionalities, how it should actually work. That's a really good sign that it's not quite there yet. And this is a big thing that does happen on a lot of teams. A lot of product owners and product managers, when they're first starting out, will tell me: oh, I'm so frustrated because my developers won't stop asking me questions. That's on you. You did not build that context there. Now, some developers, though, are a little bit more junior, need a little bit more handholding, or are used to that. So you also need to know your team a little bit. In this case, if your developers are a little more, let's say, junior or need more specified requirements, or if they're consultants or third party, this goes for that too, then you're probably going to need more specifications. So you need to get to know your team. Here's another sign that it might not be ready for developers. Let's say they get into it and they're surfacing issues where it does not work for a large group of customers or users. They come back and say, hey, we were using this data to test on and it's just not working, there are way too many edge cases. That's telling me that you didn't think through that product in enough detail to find something that works and scales for most of your customers, and that's what you want to do.
So you either did not scope it correctly and say, hey, it's only going to work for these customers, or you didn't think through a solution that actually satisfies a lot of the cases that you want to cover. Again, that would tell me to go back to the drawing board. If your team is debating fundamental assumptions rather than implementation details, again, that means it's not ready for development. But if your product is validated, you've tested the concept, the users understand it, there seems to be value here, and you feel pretty confident depending on your risk profile, and your developers are asking how are we going to implement this, and they understand the concept, the flows, and how it's going to work, and they're not just asking what is this that we're building, that's telling me that you are ready for development. You do not want to get into analysis paralysis. A lot of stuff can be figured out along the way. When we're building on our team at Product Institute, my developers and I are going back and forth, usually off of a prototype or some concept, as we build it. And that's okay. That's what you want. You want to keep it collaborative, you want to be working together on these things. That's really what's going to keep you agile, and it's going to keep you iterating and making things that are great for your customers. So it does not have to be perfect, but it should be defined enough where we can all understand it and you're building that context. I hope that answers your question. Thank you so much for sending in a Dear Melissa question. Again, if you have a question for me, go to dearmelissa.com and let me know what it is. And next Wednesday we'll be back with another amazing guest. Make sure that you like and subscribe to this podcast so that you never miss an episode. We'll see you next time.
