
Jyoti Nukla
Stop applying for PM jobs until you have a true understanding of the AI fundamentals.
Akash
AI Product Manager is BS. Is it actually real or is it hype?
Jyoti Nukla
When I look at the industry landscape for AI PMs, there are a few critical distinctions that many people miss.
Akash
This is Jyoti Nukla. She's been an AI PM at Meta, Amazon, and Netflix. She's one of the world's most experienced and senior AI PMs, most recently Director of AI PM at Netflix. Feels like a dream job. So why did you leave Netflix?
Jyoti Nukla
There's a lot of opportunity out there, and with AI jobs increasing, I wanted to take the time to go full time into this.
Akash
What is the roadmap to becoming an AI PM?
Jyoti Nukla
The first is understanding the difference between what an AI PM does versus what a regular PM does. And the second would be.
Akash
Today's episode is a masterclass in AI product management. If there's only one video you're going to watch, this is it. Do I need to learn technical concepts like RAG and fine-tuning to become an AI PM?
Jyoti Nukla
I'm going to teach you everything, everything that you need to become an AI PM.
Akash
So with all these rounds of layoffs, is it really good to work at companies like Meta and Amazon?
Akash
Really quickly: I think a crazy stat is that more than 50% of you listening are not subscribed. If you can, subscribe on YouTube, or follow on Apple or Spotify podcasts. My commitment to you is that we'll continue to make this content better and better. And now, on to today's episode. A quick note for my audio listeners: there are some things we showed which you can see on Spotify or YouTube, but we've edited this audio so that it's a really good listening experience nonetheless. Jyoti, welcome to the podcast.
Jyoti Nukla
Thank you. I'm so excited to be here.
Akash
So I want to start with the hard questions. Okay. You know, you had this title, AI Product Manager, but I keep hearing that AI Product Manager is BS. Is it actually real or is it hype?
Jyoti Nukla
Yeah. So let me give you a data-driven answer, because I've been on both sides of this: hiring AI PMs at Meta, Netflix, and Etsy, and now talking to dozens of companies about their AI strategies. When I look at the industry landscape for AI PMs, there are a few critical distinctions that many people miss. The kinds of roles that exist are twofold. One is a traditional PM with AI features added on. This is probably 80% of what's labeled as AI PM jobs out there right now. These are roles where PMs are leveraging LLM capabilities, adding AI features to existing products. Think of a chatbot that you're adding to your customer service portal, or AI summarization you're adding to your documents. The core product existed even before you added, or bolted, an LLM onto it. So that's the traditional PM with AI features. The other type is the AI-native PM. This is a new category of PM roles that is opening up, and I would say about 20% of the roles out there are AI-native PM roles. Here, the product is AI. It's not a feature, not something that you just bolt onto the product. Think of things like ChatGPT, GitHub Copilot, Claude, Cursor, Perplexity.
Akash
Yep.
Jyoti Nukla
The key characteristic is that the product is fundamentally probabilistic, and so the value proposition is literally impossible without AI. You can't build your ChatGPT without an LLM. So AI here is not just enhancing the product; it is the product.
Akash
Okay, so two different types: 80% in traditional products, 20% in AI native. So there are basically 4x more open roles in those traditional companies. And we heard about some of the companies you worked for; those products existed before AI, but you were working on AI within them. So if somebody wants to become an AI PM, what is the roadmap to becoming an AI PM?
Jyoti Nukla
Yeah, and before I jump into the roadmap, I do want to talk about what types of AI PM roles exist along the stack. At the top are what I call the application PMs. Here the PMs own the end-to-end user experience. They're thinking about how users interact with AI: how do you build trust, how do you make AI reliable enough for everyday use? They need to understand the AI-human interface and interaction patterns. This is probably the easiest path for someone converting from a traditional PM role to an AI PM role, because it encompasses a lot of existing product management skills along with AI knowledge. The second is the platform PM. Here the PMs are building tools that other teams, the ones building application products, are using. Think developer platforms, model orchestration systems, evaluation frameworks, or observability tools. Here the PM needs to understand both the technical infrastructure and the developer experience. You're not building directly for end users; you're building for other builders. And the last is the infra PMs, who are building the foundational systems that power all of these AI products: vector databases, GPU orchestration, kernel-level compilation, or optimizing model serving. As you can see, the lower you get into the stack, the deeper your expertise needs to be, and the harder the role: application is the easiest, infra the hardest.
Akash
Okay, this makes sense. And what are roughly the percentages of roles in each of these three buckets?
Jyoti Nukla
I would say you'd see about 60% of roles as application PMs, about 30% as platform PMs, and maybe 10% as infra PMs.
Akash
Okay, makes sense. So the hardest roles are actually the smallest bucket, which is kind of the good news.
Jyoti Nukla
Yeah.
Akash
So can you walk us through: let's say somebody has their sights set on infra. What are the key concepts to know?
Jyoti Nukla
Yeah, whether it is application, platform, or infra, some of the key concepts are the same across, and that's what we are going to talk about today. We'll cover five things. The first is understanding the difference between what an AI PM does versus what a regular PM does. The second is determining when to use AI, because there's so much hype around AI that it seems to be the technique everybody wants to reach for, and knowing when to say yes and when to say no is a very powerful skill for a PM. The third is looking at the AI techniques we can choose from, the menu of options. The fourth is, if we then decide that yes, we need to use GenAI for this product, learning a few core concepts around AI agents, prompt engineering, context, RAG, and evaluations. And last but not least, we will learn about delivering AI products, all the way through deployment.
Akash
Let's do it.
Jyoti Nukla
Perfect. Let's get started. So what, or who, is a product manager? A product manager is essentially the CEO of the product. A PM owns the product and the decisions associated with it. What product managers do is balance three domains: the UX, the tech, and the business. And remember, PMs lead all of these functions without authority; those teams don't report to them. So PMs need to be able to influence these teams and make hard calls. That's true irrespective of whether it's an AI PM or a regular PM; this is the baseline of what a PM does.
Akash
Yep. Of course varies a lot between companies.
Jyoti Nukla
Absolutely, absolutely. And here are some traits of a good PM, just to give us a shared baseline of what we mean when we say a good PM. Defining a clear vision for your team. Being customer obsessed, which means understanding what the pain point really is and understanding the market landscape. Aligning with your stakeholders around the vision and building the vision for the product. The fifth one, the bread and butter of a product manager, is prioritizing product features and capabilities. And last but not least, creating a shared brain for your product managers and your team to enable independent decision making. So what is the core skill that differentiates a PM from an AI PM? The core difference is that traditional products are deterministic, whereas AI products are probabilistic. Traditional products have predictable behaviors; AI products are inherently probabilistic, meaning the same input can produce different outputs. If I have a button in a traditional product and I click on it, every single time it will open the next page, for example. But every time you ask an AI product something, because it's probabilistic, it can produce different outputs. So as an AI PM, you must think in terms of quality distributions and acceptable error rates; it's no longer binary success versus failure. As an AI PM, you tackle questions like: what error rate can our users tolerate before their trust breaks? How do we handle edge cases that occur, say, 5% of the time? Do we need a fallback deterministic system to begin with? The other difference is that data is a first-class citizen. Where traditional PMs can focus on features and user flows, as an AI PM you must treat data as part of the product experience, because poor data will create poor experiences.
And so having a good data strategy is a prerequisite before you even start implementing your AI product.
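The "think in quality distributions, not binary pass/fail" idea can be sketched in a few lines of JavaScript. The graded outputs below are invented for illustration; in practice they would come from grading a real eval set:

```javascript
// Sketch: framing an AI feature's quality as an error-rate threshold rather
// than a binary unit test. 1 = acceptable output, 0 = failure on an eval set.
function errorRate(graded) {
  const failures = graded.filter((ok) => ok === 0).length;
  return failures / graded.length;
}

function meetsQualityBar(graded, maxErrorRate) {
  // The ship decision is a tolerance on the distribution of outcomes.
  return errorRate(graded) <= maxErrorRate;
}

// 100 graded samples with 4 failures -> 4% error rate.
const sampleOutputs = Array.from({ length: 100 }, (_, i) => (i % 25 === 0 ? 0 : 1));
console.log(errorRate(sampleOutputs));             // 0.04
console.log(meetsQualityBar(sampleOutputs, 0.05)); // true: within a 5% tolerance
console.log(meetsQualityBar(sampleOutputs, 0.01)); // false: too unreliable for a 1% bar
```

The same structure extends naturally to per-segment thresholds, or to a fallback rule that routes to a deterministic path when the bar is missed.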
Akash
Today's episode is brought to you by Jira Product Discovery. If you're like most product managers, you're probably in Jira tracking tickets and managing the backlog. But what about everything that happens before delivery? Jira Product Discovery helps you move your discovery, prioritization, and even roadmapping work out of spreadsheets and into a purpose-built tool designed for product teams. Capture insights, prioritize what matters, and create roadmaps you can easily tailor for any audience. And because it's built to work with Jira, everything stays connected from idea to delivery. Used by product teams at Canva, Deliveroo, and even The Economist. Check out why and try it for free today at atlassian.com/product-discovery. That's A-T-L-A-S-S-I-A-N dot com slash product-discovery. Jira Product Discovery: build the right thing. Today's episode is also brought to you by Vanta. As a founder, you're moving fast toward product-market fit, your next round, or your first big enterprise deal. But with how quickly startups build and ship accelerating, security expectations come earlier than ever. Getting security and compliance right can unlock growth, or stall it if you wait too long. With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit-ready fast and keeps you secure with continuous monitoring as your models, infra, and customers evolve. Fast-growing startups like LangChain, Writer, and Cursor trust Vanta to build a scalable foundation from the start. So go to vanta.com/aakash, that's V-A-N-T-A dot com slash A-A-K-A-S-H, to save $1,000 and join over 10,000 ambitious companies already scaling with Vanta.
Jyoti Nukla
And also because. Go ahead. Sorry.
Akash
Yeah, the data piece is actually one of the underrated ones, because people often hear data and kind of shrug their shoulders and say, oh, I understand basic statistics. But there's a lot more to it: data pipelines, how the data is being cleaned, how it's being loaded, what's being used as training data. There's a lot of nuance that people need to realize exists under each one of these topics.
Jyoti Nukla
Yeah, and like I always say, garbage in will lead to garbage out. If your data is not of the quality you expect, then your model outputs will also not be as close to reality as you would expect. Similarly, model behavior with AI products is iterative, versus the fixed features of a traditional product. With the earlier button example, every time you click that button it will do something similar, whereas here you're iterating with your model: any new change means you need to retest your model and understand what's changing in its behavior. It's a very iterative process. Then your unit economics are also very different. Traditional products have predictable cost structures, but because of the probabilistic nature of AI products, their unit economics are variable; it depends on how long or short an answer your LLM gives you. And last but not least, you now need to put a lot more emphasis on responsible AI and guardrails. With traditional products it's easier to focus on bugs and edge cases, but AI products need to handle potential harms, bias, misuse, and emergent behaviors that weren't explicitly programmed into the model itself.
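The variable unit economics point can be made concrete with a toy cost function. The per-token prices here are invented placeholders, not any real provider's pricing:

```javascript
// Sketch: the same feature costs different amounts per request, because
// output length varies run to run. Pricing numbers are hypothetical.
function requestCost(inputTokens, outputTokens, pricing) {
  return (inputTokens / 1000) * pricing.inputPer1k +
         (outputTokens / 1000) * pricing.outputPer1k;
}

const pricing = { inputPer1k: 0.0005, outputPer1k: 0.0015 }; // hypothetical $ per 1k tokens

console.log(requestCost(800, 100, pricing)); // short answer: about $0.00055
console.log(requestCost(800, 900, pricing)); // long answer: about $0.00175, over 3x the cost
```

Same input, same feature, a 3x cost swing purely from output length: that's why AI margins are modeled as distributions, just like quality.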
Akash
Yeah.
Jyoti Nukla
So moving on, now that we've understood what an AI PM does, let's look at how you determine when to use AI and when not to. There's this report from MIT that many people are aware of, on why AI pilots fail. There could be several reasons, but one of the key factors the paper called out was picking the right opportunities to apply AI to. It seems like common sense, but it's not that common: several teams are choosing the wrong problems to apply AI to, because choosing good problems for AI is difficult. And that's what we'll learn now: when do you use AI in a product?
Akash
Because apparently 19 out of 20 people are choosing the wrong one.
Jyoti Nukla
Absolutely. Yeah. So here's when AI makes sense. AI is well suited for some specific patterns. The first is pattern recognition in complex data: when patterns exist in your data but they're too complex for humans to define manually. For example, in products like YouTube, machine learning is used to identify the viewing patterns of users watching videos, which would be impossible to capture with simple rules. The relationships between user behavior and content preferences are too multi-dimensional for a rule-based system to capture. So pattern recognition is a very good use case for AI. The second place AI really excels is when you have historical data over several years to predict future outcomes. For example, at Amazon we used AI to forecast inventory needs based on a complex mix of seasonal trends, upcoming promotions, and even weather patterns. These models could consider hundreds of different variables in ways that humans simply cannot process effectively. So in prediction use cases, AI is a great choice. And when you need to create personalized, individualized experiences for thousands or millions of users at scale, AI becomes incredibly valuable. That ties back to pattern recognition, because there are several patterns and variables at play. Content recommendation engines are classic examples of where AI thrives. So if your use case is about personalization, that's a good place to look at applying AI.
Akash
And what are the bad places?
Jyoti Nukla
Yeah, and I wouldn't say they're bad, but here's where heuristics, meaning rules-based approaches, are probably sufficient, and you don't have to apply AI. Before I dive into this: what are heuristics? Heuristics are nothing but a simple set of rules, like your if-else: if this happens, then do that. These are usually based on past experience or on something that works in your industry. Heuristics, or rules, are probably sufficient when explainability is non-negotiable in your industry, because it's really hard for AI models to have high explainability; there are interpretability tools, but explainability is still low. Or when there are clear rules in your domain, like tax calculation. We're at that time of year when everyone's thinking about year-end taxes, and tax calculation software is a very good example: tax codes are complex, but they're explicit, making them perfect for a rules-based implementation. So if there are clear, comprehensive rules in your domain, it's probably sufficient to start with heuristics. Also when data is limited, because AI needs lots of data to be effective. If you're launching a new feature or entering a new market where historical data doesn't exist, starting with heuristics and a rules-based approach is probably better than force-fitting AI onto it. And also where your development speed is critical: AI systems generally take longer to build and implement, so for MVPs or time-sensitive features, starting with traditional methods could be the right business decision.
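The tax example is a good illustration of rules that are complex but explicit. Here is a toy sketch, with invented brackets rather than any real tax code, to show why such logic is fully explainable without AI:

```javascript
// Sketch: an explicit rules-based calculation of the kind the text argues
// doesn't need AI. Every output is traceable to a written rule.
// The brackets below are made up for illustration.
function taxOwed(income) {
  const brackets = [
    { upTo: 10000, rate: 0.10 },
    { upTo: 40000, rate: 0.20 },
    { upTo: Infinity, rate: 0.30 },
  ];
  let owed = 0;
  let lower = 0;
  for (const { upTo, rate } of brackets) {
    if (income <= lower) break;
    const taxable = Math.min(income, upTo) - lower; // slice of income in this bracket
    owed += taxable * rate;
    lower = upTo;
  }
  return owed;
}

console.log(taxOwed(50000)); // 10000*0.10 + 30000*0.20 + 10000*0.30 = 10000
```

Every dollar of the answer can be audited against a bracket, which is exactly the explainability property a probabilistic model can't offer.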
Akash
So you've determined your AI usage; it's not one of those cases where explainability is non-negotiable or speed to market is critical. How do you select the right AI techniques?
Jyoti Nukla
Yeah, so let's dive into it. What are some AI techniques we could look at? When people look at AI these days, they jump straight to, oh, let's use ChatGPT, or let's build with an LLM. But honestly, a simple machine learning model might have solved the problem in a week and at a fraction of the cost. So let's break this down in a way that's useful for product decisions. When I think of AI techniques, I think of them in three buckets. The first is traditional ML: your regression models, your random forests, your XGBoost, the stuff that's been around for years. It's mature, it's reliable, and honestly, it still powers most of the AI that you interact with daily. The second is deep learning: your neural networks, your computer vision, your speech recognition. This thrives when you're dealing with image, video, audio, any form of unstructured data with sophisticated patterns to recognize. And the third is where we get to GenAI: your LLMs, your diffusion models, your Stable Diffusion, your ChatGPT, your Claude. Now here's what's interesting: these aren't competitors, they are tools in your toolkit, and the best AI products usually combine multiple approaches. So I would say choose ML when you have structured data and you need to predict or classify something. Think in terms of spreadsheets. The sweet spot for ML is when you're predicting a number or a category, you have historical data with clear patterns, you need the model to explain its decisions, or speed and cost really matter. Some examples of where traditional ML techniques still thrive are fraud detection or predicting customer churn for your website. So as a PM, a question you should ask is: can I put this problem in a spreadsheet with clear input columns and an output I want to predict? If the answer is yes, start with ML and don't overcomplicate it.
Looking into deep learning: use deep learning when you're dealing with perception tasks like image, video, and audio, or when the patterns are too complex for traditional machine learning to capture. Think about it this way: deep learning shines when humans can do the task easily but it's extremely hard to write explicit rules for it. For example, when I see your face, I know this is Akash; it's easy for me. But if you ask me to write in code what the features are that make me think this is Akash, it's impossible to translate that problem into if-then statements. That's where deep learning comes in. Some examples: medical image diagnosis, or manufacturing defect detection, where computer vision can scan products on an assembly line and figure out, is this widget cracked, is this label misaligned. Classic examples of where computer vision is used. Voice assistants, which convert your speech to text, are also great with deep learning. So the question that you as a PM should ask is: is this a perception problem? Am I dealing with images, audio, video? If yes, and you need to understand what's in that media, deep learning is probably your friend. Now here's the catch: deep learning needs more data, more compute, and is less explainable than traditional machine learning.
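The PM questions from the last few sections can be summarized as a rough decision helper. This is just the episode's rubric encoded as code; the function and its field names are illustrative, not a real framework:

```javascript
// Sketch: the "which bucket?" questions as a simple router.
// Checks run from most specific to most general, ending at heuristics.
function suggestTechnique(problem) {
  if (problem.needsNaturalLanguage || problem.generatesContent) {
    return "genai";          // reading/writing language, reasoning, synthesis
  }
  if (problem.perceptionTask) {
    return "deep-learning";  // images, audio, video
  }
  if (problem.fitsInSpreadsheet) {
    return "traditional-ml"; // structured data, predict a number or category
  }
  return "heuristics";       // clear rules or little data: don't force AI
}

console.log(suggestTechnique({ fitsInSpreadsheet: true }));    // "traditional-ml"
console.log(suggestTechnique({ perceptionTask: true }));       // "deep-learning"
console.log(suggestTechnique({ needsNaturalLanguage: true })); // "genai"
console.log(suggestTechnique({}));                             // "heuristics"
```

The ordering matters: a language problem may also fit in a spreadsheet, so the more specific question is asked first.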
Akash
Right.
Jyoti Nukla
A trade-off that you as a PM need to be aware of.
Akash
And then Gen AI.
Jyoti Nukla
Yeah, the hot topic. Use GenAI when you want to understand, generate, or reason over natural language or images. The breakthrough with LLMs isn't just that they can write; it's that they can read, comprehend context, reason across information, and respond appropriately, which is fundamentally different from any traditional AI system. So GenAI is the right choice when you are dealing with a natural language interface, where your users need to interact with your product using conversational language, not just clicking buttons or filling forms. GenAI is a good starting point there. Content generation is the other use case: you want to write copy, product descriptions, email drafts. If you're creating net-new text or images, GenAI is a good fit. And when you need reasoning and synthesis: LLMs can take information from multiple sources, understand context, and make judgments. For unstructured reasoning and synthesis, GenAI is your friend. So as a PM, the questions to ask are: does this task require reading or writing in natural language? Do I need common-sense reasoning, not just pattern matching? Are my users going to interact conversationally with this product? If the answer is yes to any of these, GenAI probably belongs in your solution.
Akash
Yeah. And then there's the whole angle around AI agents as well, right? Most people are building AI agents into their products, or they're building MCPs into their products so that agents can interact with their products. So you also probably need to be thinking about whether there are agents that can take all those generative skills you just talked about, generating and planning, into a product.
Jyoti Nukla
Yeah. And that's a great segue for us to get into the core building blocks that you have to know, starting with AI agents.
Akash
Let's do it.
Jyoti Nukla
So the first concept we want to touch on is: what are agents, or what is agentic AI? Agentic AI is a system that can make decisions and take actions on your behalf, or on its own, to achieve some goal. You're not explicitly telling it what order of steps to follow; it understands your goal and tries to reason and find the path to achieve that goal.
Jyoti Nukla
The thing that truly differentiates AI agents is that they are goal-oriented. Now, looking at the core building blocks of an AI agent, the first is perception. Perception is how the agent perceives information: text input, images, sensor data, or API connections. This is basically how the agent receives input. The second building block is reasoning. This is how the agent processes information and makes decisions. This is where your models live: your LLMs, your classification models, your planning algorithms. The third building block is execution, or action systems. This is how the agent affects its environment, whether by generating text, making API calls, or controlling hardware; this is how the agent actually takes action. And the fourth is learning. This is the feedback mechanism through which the agent evaluates outcomes and improves.
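The four building blocks can be sketched as a minimal loop in JavaScript. Everything here is stubbed for illustration: in a real agent, the "reason" step would be an LLM call and the "act" step would hit real tools:

```javascript
// Sketch: perception -> reasoning -> execution -> learning as a loop.
function runAgent(goal, { perceive, reason, act, learn }, maxSteps = 5) {
  const memory = [];
  for (let step = 0; step < maxSteps; step++) {
    const observation = perceive();                       // 1. perception: receive input
    const decision = reason(goal, observation, memory);   // 2. reasoning: pick an action
    if (decision.done) return { done: true, memory };
    const outcome = act(decision.action);                 // 3. execution: affect the environment
    memory.push(learn(decision.action, outcome));         // 4. learning: record feedback
  }
  return { done: false, memory };
}

// Toy run: the agent "counts to 3" by repeatedly choosing an increment action.
let counter = 0;
const result = runAgent(3, {
  perceive: () => counter,
  reason: (goal, obs) => (obs >= goal ? { done: true } : { action: "increment" }),
  act: () => { counter++; return counter; },
  learn: (action, outcome) => `${action} -> ${outcome}`,
});
console.log(result.done);   // true
console.log(counter);       // 3
console.log(result.memory); // ["increment -> 1", "increment -> 2", "increment -> 3"]
```

Note that nobody told the agent "increment three times"; it was given a goal and chose actions until the goal was met, which is the goal-oriented property described above.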
Akash
So when do you use a workflow versus an agent?
Jyoti Nukla
Yeah, so there's a huge difference between workflows and agents; both, by the way, are AI systems. Workflows are predetermined sequences of tasks where everything is defined in terms of how the process will execute. Think of them as automation pipelines where AI serves as one powerful component within the overall workflow. An example would be an invoice processing workflow: step one, extract data from the PDF; step two, validate against rules; step three, have the AI component evaluate it; step four, update the downstream systems. Whereas agents are goal-oriented systems that can independently decide how to accomplish their objectives. The key characteristics of workflows are predictable execution paths, human-defined decision trees of how things have to go, deterministic outcomes, and standard expectations of what the output will look like from one node to the next. The architecture of an agent is very different; let me walk you through what that looks like. In an agent architecture, we have the agent, the model, the memory, and the tools. The agent is the brain, or the orchestrator: it controls the entire workflow, deciding what needs to be done and which tool needs to be called. The model could be a language model or a machine learning model, so you could have GPT or Claude or any other model here. Memory is where the agent stores context and historical information; this is what allows your agent to be stateful, to remember past conversations and previous actions. And tools are the general utilities your agent can use to extend its capabilities beyond what the model alone could do: a weather API, a booking system API, a search API, a code execution engine, any of these.
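The invoice example can be sketched as a fixed pipeline, to make the "predetermined sequence" point concrete. Every function below is a stand-in for a real extraction, validation, or API step:

```javascript
// Sketch: a workflow is a human-defined list of steps, run in order, every time.
// The "AI" step (classification) is just one component in the pipeline.
const invoiceWorkflow = [
  (inv) => ({ ...inv, fields: { vendor: "Acme", total: 120 } }), // 1. extract from PDF (stubbed)
  (inv) => ({ ...inv, valid: inv.fields.total > 0 }),            // 2. validate against rules
  (inv) => ({ ...inv, category: "office-supplies" }),            // 3. AI component classifies (stubbed)
  (inv) => ({ ...inv, synced: true }),                           // 4. update downstream systems
];

function runWorkflow(steps, input) {
  // Deterministic orchestration: same steps, same order, no step chooses the next one.
  return steps.reduce((state, step) => step(state), input);
}

console.log(runWorkflow(invoiceWorkflow, { file: "invoice.pdf" }));
// -> { file: 'invoice.pdf', fields: {...}, valid: true, category: 'office-supplies', synced: true }
```

Contrast this with the agent architecture just described, where the orchestrator decides at runtime which tool to call next.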
Akash
Today's podcast is brought to you by Pendo, the leading software experience management platform. McKinsey found that 78% of companies are using GenAI, but just as many have reported no bottom-line improvements. So how do you know if your AI agents are actually working? Are they giving users the wrong answers? Creating more work instead of less? Improving retention, or hurting it? When your software data and AI data are disconnected, you can't answer these questions. But when you bring all your usage data together in one place, you can see what users do before, during, and after they use AI, showing you when agents work, how they help you grow, and what to prioritize on your roadmap. Pendo Agent Analytics is the only solution built to do this for product teams. Start measuring your AI's performance with Agent Analytics at pendo.io/aakash. That's P-E-N-D-O dot io slash A-A-K-A-S-H. Before we dive deeper, let's talk about something every PM faces: getting alignment on product decisions. You know that feeling when you're trying to explain a user flow to engineering or justify a design choice to leadership, and you're just describing it with your hands? That's where Mobbin comes in. Mobbin is the world's largest library of real-world mobile and web app designs from industry-leading apps like Airbnb, Uber, and Pinterest. Instead of spending hours taking screenshots or hunting for inspiration, you can instantly find exactly how successful products handle onboarding, paywalls, checkout flows, whatever you're facing. Over 1.7 million product builders use Mobbin to benchmark against best-in-class products and show their teams proven solutions. Whether you need to convince stakeholders there's a better way to handle user activation or research how top apps approach feature discovery, Mobbin gives you the visual proof to back up your product decisions. Check out mobbin.com/aakash. That's M-O-B-B-I-N dot com slash A-A-K-A-S-H, and get 20% off your first year.
Today's episode is brought to you by the AI PM Certification on Maven, run by Mikdad Jaffer, who is a product leader at OpenAI. This is not your typical course: it's eight weeks of live, cohort-based learning with a leader at one of the top companies in tech. As you know by now, the future of PM is AI, and this certificate will give you the learnings plus the hardware to show you are ready for an AI PM role. I myself took the course and recommend it. It's put on by the amazing team at Product Faculty, including Mo Ali and Pavel Hearn; it's worth it. Former students come from companies like OpenAI, Shopify, Stripe, Google, and Meta. The best part: your company can probably cover the cost. So if you want to get $550 off, use my code AKASH550C7, that's A-K-A-S-H-5-5-0-C-7, and head to maven.com/productfaculty. That's M-A-V-E-N dot com slash P-R-O-D-U-C-T-F-A-C-U-L-T-Y.
Jyoti Nukla
Now let's do a hands-on exercise to build a workflow, and then we'll also build an agent.
Akash
Love this. Let's see it.
Jyoti Nukla
Yeah, and for that I will use n8n.
Akash
And why n8n?
Jyoti Nukla
It's low-code/no-code, so it's easy for anyone to go and build workflows or agents. It also has a very strong community, so you'll always find forums where, if you're stuck, you can ask questions and quickly get answers. It really allows anyone to actually go build agents and workflows.
Akash
Got it.
Jyoti Nukla
So today, first we are going to build a workflow, just so we all understand what a workflow looks like, and then when we do the agent we can see the difference. In n8n there are different types of nodes. There are trigger nodes, the nodes that start your automations: for example, trigger manually, on a schedule, or on other events. So first we'll add our trigger, which is to trigger manually. Next, on triggering, I want this to go and make an HTTP request for weather data. We're going to use something called Open-Meteo, a free weather API, to get the information.
Akash
And how do you find good APIs?
Jyoti Nukla
Good old plain Google search. I start off with: all right, I want to build this, so what do I need? I need a weather API, let's go and search. There are APIs for almost everything these days, so it's easiest to just search.
Akash
Got it.
Jyoti Nukla
So from here I can search for the area I want weather details for. I live in Los Angeles, so I'm going to use Los Angeles. I'll hit "Try API", set the latitude and longitude, which I already have for Los Angeles, and choose what I need. It gives you a lot of options: temperature, rain, cloud cover, a lot of things. I'm okay with just temperature for now. So I'm going to go ahead and copy the API URL that I need to use, and go back to my n8n. Now I'll create a node called HTTP Request. This node is what will go and make an HTTP request to a URL. I'm going to use the GET method, which calls this URL and gets the information back, with the API URL that we copied. And if I run "Execute step" here, let's see what information we get. You see how we are able to capture it: the right side is the output, and you can see the information it captures from that API. I'm just going to pin it so we can use it later. Now, the information that we get is not in a form that's easy for a workflow to use, so we're going to add a code node to do some modifications. I'm running the code in JavaScript, and you may say, all right, Jyoti, but I don't know how to code, which is fine. What I did is I went to ChatGPT and said: this is what I'm trying to do, I want to code it in JavaScript, and I'm going to paste it into n8n. ChatGPT generated this code for me that I'm going to use now.
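For readers curious what such a ChatGPT-generated code node might look like, here is a hedged sketch: a plain function that turns an Open-Meteo-style response (its `hourly.temperature_2m` field) into a readable message. In n8n, this logic would sit inside a Code node operating on the previous node's output; the sample numbers are invented:

```javascript
// Sketch: summarize hourly temperatures (in °C) into a high/low message.
function weatherMessage(city, apiResponse) {
  const temps = apiResponse.hourly.temperature_2m; // Open-Meteo's hourly temperature array
  const high = Math.max(...temps);
  const low = Math.min(...temps);
  return `In ${city}, the high today is ${high}°C and the low today is ${low}°C.`;
}

// Simplified stand-in for the real API response:
const response = { hourly: { temperature_2m: [14.1, 16.3, 21.7, 19.2, 15.5] } };
console.log(weatherMessage("Los Angeles", response));
// "In Los Angeles, the high today is 21.7°C and the low today is 14.1°C."
```

This is exactly the kind of glue code where asking an LLM to write it, then reading and testing the result, is usually faster than writing it by hand.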
Akash
Easy enough?
Jyoti Nukla
Very.
Akash
So that's your normal go-to workflow: pretty much use ChatGPT for all the coding that you need in N8N.
Jyoti Nukla
Yeah, it's very easy that way. So it gave me this whole block of code to use, and I pretty much just took that and said Execute Step so we can see how the node executes. You see how it captured that information and, because I gave it this message, it's saying: in Los Angeles, the high today is this temperature and the low today is this temperature. By the way, I'm still a Celsius person, so I do everything in Celsius. All right, so now we have this code, and what I want next is for it to send me an email. You see, there's no intelligence here; it is basically step after step after step. Now, one of the steps here could be an agent, which does something and then hands it over to the rest of the workflow. N8N has great integrations, so there's an integration for Gmail. I can click on the Gmail integration, and there are a lot of actions it could take. I'm going to choose the Send Message node. I already have a credential, but if you don't have one, you just have to say Create New Credential and do a Google auth, and that's about it; it'll create a credential for you. Easy peasy. Now let's say I want to send it to jyoti@nextgenproductmanager.com, the subject should be "Weather report for today," and the email says hi. We'll just keep it text, because HTML sometimes goes into spam, so we'll avoid all of that. For the message, we'll just copy the message that we have from before; I just dragged that message and put it into this field here. And now, if I execute this step, it has sent the email.
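The ChatGPT-generated Code node isn't reproduced on screen, but its formatting logic would look something like this sketch. The response field names (`daily.temperature_2m_max` / `temperature_2m_min`) are assumptions about Open-Meteo's daily output; inside n8n you would read the API response from the previous node's output rather than a plain function argument.

```javascript
// Hedged sketch of what the ChatGPT-generated n8n Code node might do:
// pull today's high and low out of the API response and build a sentence.
function buildWeatherMessage(apiResponse, city = "Los Angeles") {
  const high = apiResponse.daily.temperature_2m_max[0];
  const low = apiResponse.daily.temperature_2m_min[0];
  return `In ${city}, the high today is ${high}°C and the low today is ${low}°C.`;
}

// Example with made-up numbers:
const sample = { daily: { temperature_2m_max: [17.2], temperature_2m_min: [11.4] } };
const message = buildWeatherMessage(sample);
console.log(message);
```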
Akash
That was easy. Wow.
Jyoti Nukla
Very easy.
Akash
So now we have a basic workflow.
Jyoti Nukla
So you see, I got this message saying Los Angeles, the high is this. So this is a basic workflow, but it has no intelligence, so we'll go and add some intelligence to it. Let's use the same N8N and now create an agentic workflow instead of a plain workflow. All right, we're starting from scratch again. We're going to add an On Chat Message trigger, and now we'll add an agent. By default it gives me an agent that I can get started with: you can see how it has a model node, a memory node, and a tool node that we could add. I'm going to add the model node and choose the OpenAI Chat Model. Now, this requires me to connect it to the OpenAI API. I have already done that, but if you have not, you could just create a new credential and add your OpenAI API key, and that will connect it. You can also choose from the list what model you want; I'm okay with GPT-4.1 mini. So I've connected the agent to the model. I'll also give it a simple memory so that it remembers things and has a place to store them. And now let's add tools. One of the tools is a get-weather tool, which is an HTTP request. I'm going to create an HTTP Request tool; the same things that we did before, we are going to do again. The method is still GET, and I'm going to paste the same URL that we got from our weather API. Then I'm going to add one more tool, which is the email. I'm going to say Gmail and add the same information, weather today. And unlike our workflow, where we had to define the message, here we can just let the model define it automatically: based on whatever message it's getting, the agent can decide what that message should be. I could also add a description and say unique weather information. And that's it. You can see how we are not defining any code. We are not saying how to convert that into a particular phrase. We're not writing any of those steps.
We are going straight to saying: here is the HTTP request, and here's a tool to send an email. And now we let the agent do all of these tasks. Okay, so now let's run this. I can type a message and say, what is the weather today in Los Angeles? You can see it's running: it went and called the HTTP Request tool and gave me information. The weather today in Los Angeles has temperatures from 14.5 °C early in the day, rising to about 17 °C in the morning. All right, so it gave me information about the weather, and you can see it didn't execute Gmail. I never said don't execute this, don't execute that. It didn't execute it because the agent determined that get-weather is the only tool it needs to use. But now if I say send the message, it uses my Gmail tool to send me a message. So let's look at that: I got this message from the agent. This is a classic example of how we are not telling the agent which tool to use. The agent determined that based on the question we asked and based on the task.
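To make the "agent picks the tool" idea concrete, here is a toy sketch. The keyword rule below is a stand-in for the LLM's decision (a real n8n agent asks the model which registered tool to call), and the tool outputs are invented for illustration.

```javascript
// Toy illustration (not n8n's actual mechanism): the agent, not the
// workflow, decides which registered tool to run for a given message.
const tools = {
  get_weather: () => "High 17°C, low 14.5°C in Los Angeles",
  send_email: (body) => `emailed: ${body}`,
};

// Stand-in for the LLM's tool choice; a real agent would ask the model.
function chooseTool(message) {
  return /email|send/i.test(message) ? "send_email" : "get_weather";
}

const picked = chooseTool("What is the weather today in Los Angeles?");
const result =
  picked === "send_email"
    ? tools.send_email(tools.get_weather())
    : tools[picked]();
console.log(picked, "->", result);
```

The point mirrors the demo: asking about the weather never touches the email tool, while "send the message" routes to Gmail without the workflow hard-coding that sequence.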
Akash
That's what makes it actually AI: an actual AI workflow, not just a regular no-code workflow.
Jyoti Nukla
Yeah.
Akash
Awesome. So how do we go further here? What's next? Are we going to learn RAG systems?
Jyoti Nukla
Yeah, but before we get to RAG, I want to talk about a critical concept that every product manager working with AI agents needs to know, and that's prompt engineering and context engineering.
Akash
Yes.
Jyoti Nukla
So let me start with prompt engineering, because this is where most of us begin our journey with AI agents. Think of prompts as the primary interface between you and the AI system. And when I say primary interface, it literally is that: the prompt is how you tell the agent what to do, how to behave, and what outcomes to expect. First, there are system prompts. These set the overall behavior and personality of your agent. For example, if you're building a customer service agent, your system prompt might establish an empathetic personality, that the agent has to be professional, and that it must always verify customer identity. The second is the user prompt. These are the prompts an actual user inputs to an LLM or a chat product. Now, that's simple enough, but here's where it gets interesting: how you design your system to handle the unpredictable nature of user inputs is what determines how your agent responds, because users won't always ask questions the way you expect them to. And that's where the power of prompt engineering techniques like few-shot prompting comes into the picture. Few-shot examples are where you show the AI what good responses look like by providing sample responses.
Akash
This is the really underrated one where you actually put in an example. This is a good response. This is a bad response. People think this is a lot of work, but when you're engineering the system prompt for an agent, it's actually worth it.
Jyoti Nukla
Yeah. And I found this to be incredibly powerful in production systems: provide your AI with examples of what good responses look like. So instead of trying to describe what you want in abstract terms, you show the AI concrete examples of ideal interactions. Now that you know prompt engineering, let's move on to context engineering. Context engineering is where the magic happens in production systems, because it's about managing everything the AI needs to know to do its job effectively, and doing it within the constraints of context windows. AI models have context windows; that's a limit on how much information they can process at once. Claude Sonnet, for example, has a 200K-token context window. That might sound like a lot, until you start loading in your company's knowledge base, the conversation history, the real-time data, the user prompt. Suddenly you're making hard decisions about what stays and what goes. I think about context engineering in three layers. First, there's immediate context: the current conversation or task the user is having. The second is session context: the user's recent interactions in this session. And the third is knowledge context: the broader information your agent needs to reference. And here's something I have learned the hard way. Context window management directly impacts your cost, because every token you process costs money. So if you're carelessly loading your entire knowledge base into every interaction, you're burning through your budget really fast. Context engineering is more like the art of knowing what to load and when.
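The few-shot technique discussed above can be sketched as a message list in the OpenAI chat format. The customer-service examples below are invented purely for this sketch.

```javascript
// Illustrative only: a few-shot layout in the OpenAI chat-message format.
// The system prompt sets behavior; the example pair shows the model what
// a good response looks like; the real user input comes last.
const messages = [
  {
    role: "system",
    content:
      "You are an empathetic, professional support agent. " +
      "Always verify the customer's identity before discussing account details.",
  },
  // Few-shot pair: an ideal interaction the model should imitate.
  { role: "user", content: "My order never arrived and I'm furious." },
  {
    role: "assistant",
    content:
      "I'm really sorry about that. Could you confirm the email on your " +
      "account so I can look up the order?",
  },
  // The actual, unpredictable user input.
  { role: "user", content: "hey wheres my package" },
];
console.log(messages.length, "messages");
```

Note that every example pair you add also consumes context-window tokens on every call, which is exactly the cost trade-off described above.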
Akash
And what's an example of that that you learned the hard way? Did you guys overspend at one of these companies because you just had engineered way too much context into it?
Jyoti Nukla
Yeah. And that's when we started figuring out that there's probably a way to dynamically decide, based on the request, what context is needed: what information from your knowledge base should be loaded in. That's where prompt flows and orchestration patterns come in. So when people say prompt engineering is dead, it is not dead; it is part of this holistic context engineering that encompasses prompting strategies as well.
Akash
So how do we dynamically pull in this information quickly?
Jyoti Nukla
So that's where we're going to learn about RAG, which helps you, based on the prompt, retrieve the right information and load it. So let's dive into RAG.
Akash
So for my money, this is the most important skill, guys. This is the point to look up from your phone and think deeply about how you're going to learn this concept, because every enterprise that's implementing AI internally for their workflows, every product, is generally using a RAG system.
Jyoti Nukla
Absolutely. And like I say, RAG is nothing but retrieval-augmented generation. It's very simple, but it provides a tremendous amount of value. So when people say, oh, should I go and fine-tune my model, I'll say no, let's start with RAG, because RAG might solve 80% of your problems. Now, like the name says, it retrieves information from the knowledge base and then augments the user input with it before passing everything to the LLM. That allows the LLM to have the context to generate an output that is deeply rooted in the knowledge base of that company. So let's look at RAG systems. Let's say you have a document, or in a company, of course, several documents. You chunk them, and chunking is nothing but breaking them down into smaller pieces. Think of it like a storybook that you rip after every 20 pages: that's one chunking strategy, a fixed chunking strategy. So you take the document, break it down into smaller chunks, pass them through an embedding model, and store them in a vector database. Now, when a user queries, the user query is also passed to this embedding model and converted into a vector. This vector goes into the vector database to find the nearest neighbors: similar passages that would answer the user's question. It retrieves that information from the database, adds it alongside the user input, and passes this to the LLM. The LLM now has the relevant documents from the vector database and the user input, and can generate a response that is fundamentally rooted in that information. And something to talk about here is fine-tuning. Many people reach out to me and ask, can we take an LLM and fine-tune it to our use case? Fine-tuning should never be your first option. Maybe not even your second option.
It's something you consider only after you have exhausted prompt engineering, context engineering, and RAG approaches. The practical hierarchy that I recommend before you start fine-tuning is: first, start with prompt optimizations to see if you can get better results. Then optimize your context engineering to figure out what context is being loaded into the LLM's context window. Then implement RAG. In most of the cases I see, almost 80% of use cases get solved with RAG rather than fine-tuning.
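The whole RAG loop described above (chunk, embed, nearest-neighbor retrieval, augment the prompt) can be compressed into a toy sketch. The bag-of-words "embedding" and the tiny vocabulary are purely illustrative; a real system would call an embedding model and a vector database like the ones in the demos.

```javascript
// Toy RAG sketch: fixed-size chunking + nearest-neighbor retrieval.
// Step 1: fixed chunking ("rip the storybook every N characters").
function chunk(text, size = 40) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

// Step 2: a naive stand-in for an embedding model (word counts over a vocab).
function embed(text, vocab) {
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  return vocab.map((w) => words.filter((x) => x === w).length);
}

// Step 3: cosine similarity, the usual nearest-neighbor measure.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

const vocab = ["refund", "shipping", "password", "reset"];
const doc =
  "Refunds are issued within 5 days. Shipping takes 3 days. To reset a password, use the reset link.";
const chunks = chunk(doc);
const query = "how do I reset my password?";
const qVec = embed(query, vocab);

// Step 4: retrieve the nearest chunk and augment the prompt with it.
const best = chunks
  .map((c) => ({ c, score: cosine(embed(c, vocab), qVec) }))
  .sort((a, b) => b.score - a.score)[0].c;
const prompt = `Context: ${best}\n\nQuestion: ${query}`;
console.log(prompt);
```

Only the chunk about password resets reaches the LLM, which is the whole cost and grounding benefit of RAG over stuffing the full knowledge base into the context window.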
Akash
Especially if you've done really good prompt engineering at the top.
Jyoti Nukla
Absolutely. And only if these three don't suffice, then you should go for fine tuning.
Akash
I think because fine-tuning is there in the API documentation, people immediately jump to it. But first follow this sequence.
Jyoti Nukla
So let's see how to build a RAG system.
Akash
I'm excited for this one.
Jyoti Nukla
So we are going to use something called Langflow. Langflow is a no-code tool that allows you to build RAG systems with just blocks and connections.
Akash
And why Langflow? Why not N8N or other tools?
Jyoti Nukla
You could use N8N too. What I have seen is that Langflow takes a more agent-first approach, and therefore it's easier to build RAG systems in Langflow. But you can build RAG systems in N8N as well; I would say this is just another tool that I'm introducing, so anyone who is comfortable with N8N could try it there too. Langflow also sits very nicely within the LangChain and LangSmith community, so some of your tracing capabilities and so on come through easily as well.
Akash
Got it.
Jyoti Nukla
So, starting with a blank slate, first we are going to build the flow where we take a document and chunk it into pieces. For that we'll do the load-data flow, which starts with a file. Given a File node, I can select a file and add it; in this case I'm adding the State of AI in Business 2025 report. Then I need a Split Text node. This is where I'm chunking my document; you can see it gives me options like the chunk overlap and the chunk size. I'm just going to keep them as is, and I'm going to connect the file to this input, so the file will go into Split Text and get chunked up. I'm also going to add an OpenAI Embeddings node, because I'm going to use OpenAI's embedding model, text-embedding-3-small. Again, I've already given my API key, but if you're using it for the first time, you'll have to provide your OpenAI API key here. Next, you need a vector database. There are lots of options in Langflow: you could use Pinecone, you could use Chroma DB. I'm just going to use Astra DB, and Astra DB also has an API key, which I have already provided here. In terms of the database, I've created one called Rag Demo, but you can also create a database by clicking plus, new database, and creating a fresh one. Once you've selected the database, you have to choose a collection where these chunks will get stored. I am choosing langflow as my collection, which I've created; you can go and create any new collection from here. With that, I'm ready to make my connections. The chunks that come out of Split Text I'll connect to Ingest Data, and my embedding model I'll connect to the embedding input on the Astra DB node. This is our load-data flow. Let's run it. The flow was built successfully. We don't have much to see here, because everything is being saved.
All it did was take the file, chunk it up, and save it into our database in Astra DB. So now let's build a retriever flow, which is where a user can actually ask a question and retrieve answers from the document. We'll start with a Chat Input, because a user needs a way to ask a question. We are building a retriever flow. We'll also have our embedding model; remember, even the input gets vectorized, so we'll add the same embedding model we used in the load-data flow. That vectorized input goes to your Astra DB to search across the documents, so here I'm choosing the same database and the same collection, where it has 136 records. Now I'm connecting my embedding model, and I am connecting my Chat Input as the search query. The data that comes back needs to be parsed, so we'll add a Parser and connect a DataFrame: in the Astra DB output options, instead of search results I choose DataFrame, and I connect it to the DataFrame input of my Parser. From here I need to create a prompt template where I can give instructions. So I choose a Prompt Template, where I take the context from before; you can see that I've created this prompt variable, context, and I say: given the context above, answer the question as best as possible. Then we'll add our language model, do our connections in a second, and then we'll have an output from the language model. All right, we have built all of these pieces. Now let's just start connecting them. The prompt from here goes into the model's input. We have the context here, so we're
Akash
making that a prompt variable, correct.
Jyoti Nukla
So that way I can add the question too. I can take my Chat Input and connect it to the question, so the prompt receives that input. It also takes the context, which is connected to its input, and now we just have to connect the model response to the output. Okay, that's about it. Now let's run it: the flow was built successfully. Then, if I go to the Playground, I can ask a question. So this is where we go back to what I built before, and I'll show you.
Akash
Yep.
Jyoti Nukla
And so if I say, what is this document about, here's what I get: the document is a report titled The GenAI Divide: State of AI in Business 2025. So it gives me more information about what this document is.
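The prompt-template step from the retriever flow amounts to variable substitution. Here is a minimal sketch, assuming the two variables, `{context}` and `{question}`, created in the demo; the exact template text inside Langflow may differ.

```javascript
// Minimal sketch of the Langflow prompt template: substitute the retrieved
// context and the user's question into a fixed instruction template.
function renderPrompt(context, question) {
  const template =
    "Context:\n{context}\n\n" +
    "Given the context above, answer the question as best as possible.\n\n" +
    "Question: {question}";
  return template.replace("{context}", context).replace("{question}", question);
}

const prompt = renderPrompt(
  "The GenAI Divide: State of AI in Business 2025 ...",
  "What is this document about?"
);
console.log(prompt);
```

This filled-in string is what actually reaches the language model; the retrieval pipeline's only job is to decide what goes into the context slot.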
Akash
Nice. We've built a RAG system. People have got to see the basics of RAG. So this about covers all of the building blocks we went through: when do you even build with AI, what AI techniques do you choose, what are the key building blocks of AI, prompt engineering, context engineering, workflows versus agentic workflows, and finally RAG systems. This is the roadmap. You want to go ahead and start learning, not just watching these podcasts but going out and implementing on your own, so that you know these cold, and so that when you land your AI PM job you can help engineering teams actually build them. You're not going to be using a no-code tool to build an actual product, but by using a no-code tool you learn the ins and outs and build intuition. Some of the intuition we talked about, where we say, hey, always do RAG before fine-tuning: if you practice that, if you try fine-tuning for a problem and then try RAG for the same problem, you'll very quickly build that intuition on your own, and it will allow you to be an effective PM in these situations. So, you have cracked into AI PM. A lot of people want to crack in. What is the right roadmap? What are the best hands-on projects to build to become an AI PM?
Jyoti Nukla
I would always say: don't think of them as projects, think of them as building products. You could pick a use case, a pain point that you have, and build for that use case. Then, rather than finishing the build and saying I'm done with it, actually take it forward. Think of it as a product: launch it, have your friends and family use it. Now, all of a sudden, you have real users and you'll have things that break, so you are doing things a real product manager would do. Taking it from a project to a product will give you the confidence to put it on your resume. You can give a lot more information and clarity to recruiters when you talk to them, rather than saying, oh, I attended this course or I did this project. Now, all of a sudden, you have much richer detail: it breaks in these use cases, and here were the challenges I had to overcome. That gives you much richer information and data to go with.
Akash
So products, not projects. Should people be creating a portfolio, and if so, what does a good portfolio look like?
Jyoti Nukla
Yeah, your portfolio should definitely have some sort of app that solves a real problem you have built for. There are a lot of no-code prototyping tools today that you could use. Build an agent; we just built a simple agent. Build an agent that solves a problem. Build a RAG system; we just saw how to build one. So look for problems within your area and try to build examples as your portfolio, and then don't just call them projects: get real users and convert them into products. Now, all of a sudden, your resume has three products that you have orchestrated.
Akash
How important are certificates? What does an AWS ML certification get you?
Jyoti Nukla
Yeah, certificates are a great way to signal to recruiters and hiring managers that what you have learned is not just theoretical, but also credible and accredited. For example, I offer an AI Product Management course at Next Gen Product Manager, where we teach everything you need to learn about AI product management. You could learn those concepts, do a lot of hands-on projects, and then go take the AWS AI Practitioner certificate. That gives you a credible certificate that tells your hiring managers what you've learned is not just self-taught but also accredited by AWS.
Akash
Speaking of AWS, I want to talk about you and your career a little bit. You worked at AWS, you worked at Meta, you worked at Netflix. How do the AI PM cultures differ at those three companies?
Jyoti Nukla
Yeah, very different. Let me start with Amazon, or AWS, which is where I started my career. It's a very customer-obsessed, document-writing culture. I think Amazon invented the PRFAQ, or the six-pager, where everything at Amazon starts with a press release and a frequently-asked-questions document, even before engineering writes a single line of code or a design mockup is created. The philosophy is: we work backwards from the customer. You start from the customer problem, articulate why existing solutions don't work, and then explain how your product solves it. That's the PRFAQ, or the six-pager, used for strategic reviews. And it's not just a document for the sake of a document; it's taken very seriously. This PRFAQ is reviewed all the way up by your VP, or sometimes even Andy Jassy. It's a very writing-heavy culture, where I think Amazon PMs spend probably 40 to 50% of their time writing documents.
Akash
Whoa.
Jyoti Nukla
So you become an exceptional writer at Amazon; there's just no way around it. At Meta, I think it's the complete opposite in terms of process. If Amazon is about rigorous upfront planning, Meta is all about experimentation and iteration, and the cultural ethos reflects that: move fast. At Meta, product managers are expected to be deeply technical: you're able to understand the codebase, dig into how something was implemented, and talk about how you'll ship multiple variants and test them against your control groups, and you let the data tell you what works. Of all the companies I've worked with, I've seen Meta having the most sophisticated experimentation infrastructure in the industry, and as a PM there you live and breathe statistical significance. At Netflix, the philosophy is context over control, which is perhaps the most unique approach to product management among big tech. Instead of rigid processes, documentation requirements, or approval hierarchies, Netflix invests heavily in making sure everyone understands the strategic context, and then they trust you to operate independently within that context. So you're expected to be an exceptional communicator. You don't always have to be formal with documents the way Amazon expects; it's all about building alignment through conversation, presentation, and shared understanding. So you need to be very comfortable with ambiguity and be able to define your own swim lane.
Akash
All three of those companies, Meta, Amazon, and Netflix, are kind of notorious for having hard, performance-oriented cultures. Amazon just laid off 30,000 people, Meta has the rolling layoffs, and Netflix is known for it too; even Reed Hastings has since stepped back from his own role. People get let go; there's pressure everywhere. How is it working at these companies? Would you recommend it to other people?
Jyoti Nukla
It's an absolutely phenomenal experience. I think I've learned a lot working at these companies. I built documentation and customer-thinking rigor working at Amazon; the first thing you learn, and it gets ingrained in you, is working backwards from a customer pain point. With Meta, it's all about how to test quickly: once I know what I want to build, how do I test it quickly, and what should that experimentation culture look like? And at Netflix, I truly learned what autonomy means: the power of autonomy and the shared experience of building consensus and working together towards a shared vision. It shapes who you are as a person and the kind of insight you get as a product manager. And the scale is phenomenal: across Amazon, Meta, and Netflix, every feature you build is probably used by millions of users. So the scale you get to work with is amazing, and that's an experience I would encourage everyone to have at some point in their career.
Akash
So why did you leave Netflix? Director of AI PM at Netflix feels like a dream job. What was the story?
Jyoti Nukla
Yeah, so I have been an AI PM for the past 13 and a half years. Believe it or not, AI existed before LLMs. I've been in the field of AI for so long that I have about 12 patents in it. And with so much AI growth happening, I thought to myself: I really derive a lot of satisfaction from teaching product professionals how to transition into being an AI PM. I've been teaching for the past two and a half years, and the greatest satisfaction I get is when someone says, your experience and your insights were so powerful that I was able to crack that interview, and now my pay is 2x what I used to get, in a career-changing way. Immense satisfaction. And so I said, you know what, with a lot of opportunity out there and AI jobs increasing, I wanted to take the time to go full time into this: to spend my time teaching, and consulting companies on drafting their AI strategy, applying the learnings I have from leading scaled AI businesses and products to help their portfolios. So I took the jump.
Akash
So I asked this question. You can share as much as you want or as little as you want, but obviously Netflix is known to pay well. If you've worked at places like Meta and Amazon, they're known to pay well. So people would assume you're raking in the dough. As a teacher, what can you share with us? How is the business of Jyoti doing now that she's no longer a full time pm?
Jyoti Nukla
So I am a newcomer to doing this full time. Although I've been teaching for the past two and a half years, I did that part time, and I used to only teach AI PM because I just didn't have the bandwidth back then. Now that I'm going full time, I've added two new courses. One dives deep on agentic AI: this is for someone who already knows the AI PM fundamentals and is now looking to lead and build agentic AI products. And I also introduced a PM accelerator, specifically helping professionals crack product interviews, be it product sense, product execution, or behavioral. I'm seeing great interest across all three from different groups. Most of the folks I work with are getting into AI for the first time. I don't advertise my agentic AI course or the accelerator outside, but the courses run full, just because all my previous students who took AI PM continue down the funnel to agentic AI and the PM accelerator. So it's been a great experience going full time. I'm just two months in, so it's probably too early to tell how things will go, but I'm really excited about it.
Akash
If I'm reading between the lines, you might not have hit Director-of-AI-PM-at-Netflix levels yet, but you clearly see a path to getting to more. Is that fair to say?
Jyoti Nukla
Absolutely, yes.
Akash
All right. That is the potential you guys can reach as a course instructor. Jyoti, thank you for sharing your knowledge so in-depth and so freely with all of us. Really appreciate having you on the podcast.
Jyoti Nukla
Thank you so much for having me. I am thrilled to be here.
Akash
All right everyone, we'll catch you later. So if you want to learn more about how to shift to this way of working, check out our full conversation on Apple or Spotify podcasts. And if you want the actual documents that we showed, the tools and frameworks and public links, be sure to check out my newsletter post with all of the details. Finally, thank you so much for watching. It would really mean a lot if you could make sure you are subscribed on YouTube, following on Apple or Spotify podcasts and leave us a review on those platforms that really helps grow the podcast and support our work so that we can do bigger and better productions. I'll see you in the next one.
Host: Aakash Gupta
Guest: Jyoti Nukla (ex-Director of AI Product Management at Netflix; previously AI PM at Meta and Amazon)
Date: March 23, 2026
This episode serves as a definitive masterclass on becoming an AI Product Manager (AI PM). Aakash Gupta interviews Jyoti Nukla, whose career has spanned AI PM leadership at top tech companies. Together, they debunk myths about the AI PM role, discuss the essential skills and distinctions between traditional and AI PMs, and provide both a conceptual framework and practical hands-on guidance for breaking into and excelling in AI product management. The episode also covers differences in AI PM culture at Meta, Amazon, and Netflix, and concludes with actionable job search and portfolio-building tips.
When not to use AI (from the episode):
- Explainability is crucial (e.g., taxes).
- Rules are clearly defined and data is limited.
- Development speed is more pressing than AI’s value.
| Timestamp | Segment / Topic |
|-----------|-----------------|
| 01:46 | Is AI PM a real role? Overview of AI PM distinctions |
| 04:35 | Three types of AI PM roles (Application, Platform, Infra) |
| 08:27 | Core differences: AI PM vs. traditional PM |
| 15:36 | When should you use AI—and when not to? |
| 20:58 | Choosing the right ML/AI technique (ML, DL, GenAI) |
| 26:49 | Workflows vs. AI agents; hands-on N8N demo |
| 44:21 | Prompt engineering & context engineering explained |
| 49:03 | RAG (Retrieval Augmented Generation) demystified |
| 58:44 | How to build a portfolio to crack AI PM roles |
| 62:40 | Company culture: Amazon, Meta, Netflix |
| 66:44 | Is big tech still worth it? |
| 68:11 | Jyoti’s move from Netflix to full-time teaching |
| 70:07 | How Jyoti’s education business is doing |
This episode is a treasure trove for anyone aiming to break into or advance in AI Product Management. Jyoti Nukla brings battle-tested wisdom from the front lines of Meta, Amazon, and Netflix, while Aakash grounds the conversation in practical, actionable steps.
The journey to AI PM is mapped with clarity: master fundamentals, build real products, focus on practical techniques like RAG and prompt/context engineering, and internalize the difference between AI-native and traditional feature work.
A must-listen (or read!) for any PM, technologist, or builder for the AI age.