
Explore UX research for AI and how to build trust in AI experiences with insights on usability, emotion, and human-centered design.
A
Welcome back to Insights Unlocked. In this episode we explore how AI is transforming UX research and what it takes to build experiences users actually trust. Leah Hogan talks with Priyanka Kuvalekar from Microsoft, who shares how researchers can go beyond usability to measure trust, emotion and impact while staying strategic in a rapidly evolving landscape. Enjoy the show.
B
Welcome to Insights Unlocked, an original podcast from UserTesting, where we bring you candid conversations and stories with the thinkers, doers and builders behind some of the most successful digital products and experiences in the world, from concept to execution.
A
Welcome to the Insights Unlocked podcast. I'm Nathan Isaacs, Principal Content Marketing Manager at UserTesting. Joining us today as host is Leah Hogan, a Principal Experience Research Consultant here at UserTesting. Welcome back to the show, Leah.
C
Hi. Thanks for having me.
A
Our guest today is Priyanka Kuvalekar. Priyanka is a senior UX researcher at Microsoft, leading research for Teams Calling and agentic AI experiences for enterprise collaboration. Beyond her work at Microsoft, she's a content creator on Instagram, sharing insights on UX, AI and career growth, a speaker at conferences like Grace Hopper Celebration and Women in Tech, a mentor to professionals on ADP List and at the University of Washington, and an accessibility champion dedicated to keeping empathy at the center of AI innovation. Welcome to the show, Priyanka.
D
Thank you so much for having me. Nathan and Leah, it's a pleasure being a part of your podcast and sharing my insights.
C
I am really excited to have this conversation partially because I myself am a very passionate, I guess, researcher and speaker and I would call myself just below expert level when it comes to accessibility, but very passionate about helping everybody to leverage the tools and technologies that are available to us these days. But actually I want to start
D
with
C
how you got here because you are really at the forefront of helping to understand where we go with regard to AI empowered experiences. And I'd love to hear about how you got here.
D
Of course. I always enjoy talking about my journey into UX research because it really wasn't a conventional path. My bachelor's was actually in architecture, so I'm an architect and a UX researcher. I was researching and designing physical spaces for people before I ever touched the digital world, understanding how they move through the space, what they need, how design shapes their experiences. And around 2016, 2017, when I was just completing my bachelor's, I was considering grad school and I asked myself, do I want to take this knowledge of user-centric thinking and apply it to the digital world? And that's when I actually came across the concept of user experience and it immediately resonated, honestly. So I just decided to jump in and I pursued a master's in user experience and interaction design. And the program was also heavily research driven. And that's where I discovered my calling for research. And I think across my master's, to really unlock and explore that calling, I took up multiple opportunities to sharpen my research skills and learn more about the users and the diverse audience I was designing for. For example, one of the eye-opening campus projects that I did was studying the interface of an EKG machine. I was a fly on the wall in an operating room, which was an amazing opportunity. So I was watching how the doctors prepped in that process and how they interacted with the EKG machine interface, which, I feel, is deeply complex and can be complicated for both doctors and patients. And I think that moment really hit home for me in terms of why user centricity is so important: when we skip understanding the users that we are designing for, the design fails them, even in high-stakes rooms such as this one. I think that was literally my first instinct for research and why I really wanted to pursue it as a career. So from there I worked across different industries. My first job was at Korn Ferry. I was the sole researcher on their HR products, which gave me a strong foundation of owning the entire practice and understanding what qualitative and quantitative research is in the corporate world and actually practicing it on products. And after that I moved into Cisco. I was the lead researcher for WebEx Meetings and Messaging and I was also working on AI initiatives. And eventually that led me to Microsoft, where I am now. So I now lead research for Teams Calling and agentic experiences. I'm also an accessibility champion, conducting research with people with disabilities. And I've had the privilege of working on AI across two of the largest enterprise collaboration platforms. And the questions I study now around trust and value and empowering users with AI rather than replacing them, that I think really drives me in this moment.
C
Yeah, and I think, you know, a lot of conversations have really been focused around, you know, how do we create experiences that are supportive of people in an appropriate way in this new space where instead of interacting with screens, we're having conversations, sometimes literally conversations with voice. And obviously the last several years have been quite challenging with regard to how we've evolved communications and communications platforms. So I'm sure that is interesting. But that leads me to my next question, which is really around, kind of, in your everyday, you are seeing how AI is changing how it is that we work. And when I say we, I really do think about the broadest range of human perspectives being represented. And it's really challenging, I think, to define what a great experience looks like in that space because it could actually manifest in a lot of ways, as you well know. So how do you think of AI and its impact on what the definition of a great experience is?
D
Definitely, it's such an interesting question because, yes, things have changed a little in terms of, like, how do we define great? Is it just from the product and user interface standpoint, or is it also trust, which is, I think, a huge factor in driving successful experiences? So a great experience from a UX standpoint has always been rooted in, like, are we solving the right problem for the right users? And especially, is the product or the feature usable, findable, discoverable and accessible? That's what all great experiences were. They felt right, they added value, they were delightful to use. And now with AI in all of that, I think there's this additional layer that has become crucial, as I mentioned before, which is trust. And trust has many dimensions. I'm learning it every single day, how exactly we define trust, especially from a research standpoint. Trust is built through an AI experience that is transparent, that keeps the human in the loop and in control. It's built when the AI drives accurate and consistent results without hallucinating. And it's about being responsible, recognizing its own errors and recovering from them, asking before assuming, and learning over time. All of this is packaged under trust at the moment. So as a researcher, as somebody in this space, and I do a lot of research on voice AI, I've been understanding that we are now studying this to make sure that what we are building isn't just a great experience, but something that users genuinely trust, and because they trust it, they would want to use it again and again. So kind of creating that hook: if it's an experience that is building on the user's trust and actually truly adding transformative value, that's what they would want to come back to again and again. So what makes your experience the greatest? That's where the whole concept of, I think, market fit is now really emerging. Does your AI really have the product market fit? Is it truly adding value to a workflow and is it transformative? And at minimum, is it trustworthy enough for users to actually want to use it? I think that's the foundation of what a great experience in the age of AI is at the moment.
A
If you enjoy Insights Unlocked, you need to be at Crafted 2026. It's UserTesting's live experience for the people who are shaping what comes next: builders, creators and insight leaders. Join us May 28th in Seattle for hands-on workshops, thought-provoking panels and inspiring keynotes covering everything from AI to research that drives strategy. It's not just another tech conference, it's designed to push your craft further. Learn more at usertesting.com/crafted26; the link is in the show notes.
C
You know, it's interesting that you bring up the concept of market fit, because I think a lot of the technologies were developed before we had figured out what the fit was going to be. So in a certain sense we're kind of reverse engineering what that fit is now that we've got the technology, like, how does that work? And actually that kind of leads up to the next question that I have, because, you know, you've talked about how UX research is really evolving now because trust is really at the center of a lot of what it is that you're doing, and that product market fit is something that you have to explore. You know, there is an element, or there traditionally has been an element, of does it work? So really, does the feature or function deliver the value that users expect it to and that the business requires in order to drive value for itself as well? And I'm curious, how do you, in a very nuts-and-bolts way, evaluate whether something is trustworthy and credible and whether people are having the appropriate emotional response to that whole experience?
D
Definitely. I love this question. In this age of AI we're constantly so heads down studying different things, and I really love that this question kind of makes me think back on, okay, how exactly am I doing this and how do I articulate it? So this is really at the heart of what I do at the moment. Evaluating AI experiences requires, I think, going beyond just traditional usability metrics from a UX standpoint. So task completion and satisfaction scores of course matter, but they don't tell you whether someone trusts the AI, whether they felt in control, or how the experience actually made them feel. A big part of my approach is intentional research design. So with voice AI, for example, I start from, like, an inclusive recruit: diverse participants, diverse in how they speak, the language that they use, their interaction styles, and then evaluating how the AI responds to each one of them at different times. Is it consistent across different users, across different tasks and different sequences? Because inconsistency directly erodes that trust. For the emotional response, that's where I think AI from a research lens becomes really important. One approach is behavioral coding of recordings. So going through the interactions that your participant has had with the AI and coding specific moments of hesitation, confusion, frustration and disengagement. So literally this could be you recording the whole session that your participant has with the AI, and you run through those recordings to actually code them and identify these different moments. So not just relying on what people say about the experience, but also what actually happened in that moment, and evaluating that experience. And I also look at how trust evolves within the session. So did someone start cautiously in their interaction with the AI and warm up, or did they start confident and then something broke it? That arc is far more telling than a single rating. And then there's evaluating the AI's outputs themselves: systematically assessing the AI's responses for tone, for accuracy, appropriateness, consistency, almost like a rubric-based evaluation. A research-led AI evaluation is what I usually call it. And accessibility is of course a critical lens to it. As I've mentioned previously, I'm really passionate about accessibility research and making sure that we are studying our experiences with people with disabilities as well, to ensure that we are designing for every type of user regardless of their ability or disability. I think those are a couple of methods I have been leveraging to evaluate beyond just whether something works, to whether it builds trust and credibility. And you're also factoring in emotional response as a distinct dimension.
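For readers who want to picture what a rubric-based, research-led AI evaluation like the one described above could look like, here is a minimal Python sketch. The dimensions (tone, accuracy, appropriateness, consistency) come straight from the conversation; the 1-to-5 scale, the function names and the sample data are illustrative assumptions, not a tool mentioned in the episode.

```python
from dataclasses import dataclass
from statistics import mean

# Rubric dimensions named in the conversation; the scale and the rest of
# this sketch are illustrative assumptions, not a specific internal tool.
DIMENSIONS = ["tone", "accuracy", "appropriateness", "consistency"]

@dataclass
class ResponseScore:
    participant_id: str
    task_id: str
    scores: dict[str, int]  # dimension -> 1..5 rating assigned by the researcher
    notes: str = ""

    def overall(self) -> float:
        return mean(self.scores[d] for d in DIMENSIONS)

def flag_inconsistent_tasks(scores: list[ResponseScore], threshold: float = 1.5) -> list[str]:
    """Flag tasks where the same AI behaved very differently across participants,
    since inconsistency is one of the fastest ways to erode trust."""
    by_task: dict[str, list[float]] = {}
    for s in scores:
        by_task.setdefault(s.task_id, []).append(s.overall())
    return [task for task, vals in by_task.items() if max(vals) - min(vals) >= threshold]

# Example usage with made-up session data
session = [
    ResponseScore("p01", "join-call", {"tone": 5, "accuracy": 4, "appropriateness": 5, "consistency": 4}),
    ResponseScore("p02", "join-call", {"tone": 3, "accuracy": 2, "appropriateness": 3, "consistency": 2},
                  notes="voice AI misheard the meeting name twice"),
]
print(flag_inconsistent_tasks(session))  # -> ['join-call']
```

The point of a rubric like this is less the numbers themselves than forcing a consistent, comparable pass over every AI response before drawing conclusions.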
C
It's fascinating to me that you say that, because when you're describing it, it sounds like what we've traditionally done in qualitative research, right? Where you're coding things and, I think, looking for those moments where there could be discrepancies between what people are saying and what they're doing, but also really deepening that evaluation of the emotional piece. So that's interesting, because, just as a little aside, I'm curious: do you prefer to do that yourself, or do you potentially have AI take a first pass and then check?
D
Oh, interesting. So kind of using AI to even evaluate the experience that I'm talking about, that's interesting. At least for now, I have been really using the human researcher as the first touch for evaluating these experiences. So let's say for the voice AI that I was talking about, I'm evaluating it task by task with participants and then actually going back to the recording to identify these moments of what caused confusion. Another thing that I've been doing is letting the participant keep doing these tasks and interacting with the voice AI and immediately connecting with them after every single task to actually get that contextual feedback: hey, you just had a conversation with the AI 30 seconds back, what happened and what changed? And doing that across a different set of tasks so you can actually evaluate what's been changing. So for now, I have been taking the first stab at evaluating, but at least in terms of analysis, yes, there are different AI tools that I have in fact been using to speed up the research that I do.
C
Yeah. So I love hearing that, and I bet a lot of our listeners and watchers will love to hear that too, because I think that there's some push to defer some of that first pass of analysis to systems, because it is so labor intensive and time intensive. But, you know, from what you're telling me, it really is important, for now at least, to stay close as a researcher to that.
D
Definitely. I think I take it as: it's not about AI coming in and replacing what the researchers analyze. I think of it as humans partnering with AI to become as efficient and productive as possible while still keeping your human judgment and your user researcher judgment at the forefront. Because the AI can definitely help you generate these themes. Previously, before I used AI, I'd always been kind of spinning up my Miro board or my Figma to literally start with digital stickies and cluster themes that I'm seeing and draw different affinity maps. But now, when I have, let's say, 10 sets of foundational interviews, I have been leveraging different tools. Even within being at Microsoft, I'm heavily using Copilot agents and spinning up agents that help me with this initial thematic stab. But since I have actually done these interviews, I'm using my own judgment: hey, okay, the AI is telling me this is a particular theme, but here is an additional layer, this is what the users actually meant, because I observed them in the interview, in the way they navigated the interface and the way they shared their friction points.
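As a minimal sketch of the hand-off described above, where an AI tool takes the first thematic pass and the researcher keeps final judgment, the Python example below uses a placeholder propose_theme function standing in for whatever approved AI tool is in use (Copilot agents, in her case). The function names and structure are assumptions for illustration; the researcher's relabels always win over the AI suggestion.

```python
from collections import defaultdict

def propose_theme(excerpt: str) -> str:
    """Placeholder for the AI first pass (a Copilot agent, an LLM call, etc.)
    that suggests a theme label for an interview excerpt."""
    raise NotImplementedError("wire this up to your organization's approved AI tool")

def cluster_with_review(excerpts: list[str], researcher_overrides: dict[str, str]) -> dict[str, list[str]]:
    """AI proposes, the researcher disposes: excerpts the researcher has relabeled
    (because they observed the session and know what the participant actually meant)
    keep the human label; everything else gets the AI's suggested theme."""
    themes: dict[str, list[str]] = defaultdict(list)
    for excerpt in excerpts:
        label = researcher_overrides.get(excerpt) or propose_theme(excerpt)
        themes[label].append(excerpt)
    return dict(themes)
```

The design choice worth noting is the override map: the AI-generated label is only a starting point, and the researcher's observation from the live session remains the source of truth in the final readout.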
C
Yeah, I love that answer, because really being very judicious about how it is that you maintain that actual connection to the people that you're working with is so critical. And I think a lot of people are trying to figure out: where are those moments of intervention where I can add the most value and remain close to users, to people, in a way that makes sense? And that's, I think, a really great way to think about it.
D
For sure. For sure. I think we are kind of in this age where, with AI fluency and usage of AI tools, there's so much pressure around this whole thing that you have to use AI or you'll fall behind. So I think of it as a great opportunity to really build on that lens of, okay, what are some tools that I can really upskill on? Not just the tools, but how I can also be fluent in terms of the whole concept of AI. How do you evaluate LLMs? How do you understand hallucinations? How exactly do you study these experiences to make them future-proof and trustworthy enough? Along with, of course, using these different tools that help you stay efficient and productive and generate insights faster so you can make impact within your product teams at the right time and at the right moment. So just a few thoughts.
C
Yeah, yeah. So shifting gears just a little bit, I'm curious. There's been over the last couple of years a lot of conversation about researchers and how they sometimes have tended to be more order takers or tactical in their research strategy than more strategic in research. And I'm curious about what your approach is to making sure that you're not just validating. I mean, and that's actually even a tough word to use because, you know, we don't necessarily want to validate things because that closes off exploration. But how do you make sure that you're influencing the roadmap and how it is that AI powered experiences get built?
D
Of course, I think this is something that every researcher working across different organizations can relate to. There's always this battle of: wait, I just don't want to evaluate or validate things, I really want to do research from the ground up. In the age of AI, where prompt-to-prototype and coming up with and building these different ideas has become so easy, it's more important than ever for researchers to be able to build from the ground up and actually provide research that develops that foundation of your AI innovation. And to be very honest, for me there are different layers of how I do it, and one of the biggest things is partnership. So it starts with how you structure your partnerships with your team. I personally work with a core squad: my PMs, engineers, designers, data science. We are literally close-knit and we are working together every single day. And I am really intentional about being a part of these strategic discussions and not waiting to be invited. So if that means me building up these weekly syncs and taking out extra time to just build that partnership and really stay in the loop of what my team is doing, I think that really helps me find these different moments where I can actually pause and make the team really think: hey, are we just building something and going and validating our thinking, or do we want to take a step back to really make sure we have the right data to make these decisions to begin with? So that partnership is really crucial. And I'll be honest, there's always this little fear that comes with it, especially if you're, like, in an opinionated cross-disciplinary team: should I say this? This happens to me a lot. Like, what if someone does not agree? And I think there's an art of overcoming that. The worst that can happen is your team doesn't align. And that's okay, because it becomes like a data-against-data battle. Right? So as researchers, we have the power of knowing what users actually want and need and what the problem is that they're facing. So we're not just pushing for building on assumptions. And how I usually do it can be as tactical and specific as, in every single readout, putting together key insights and opportunities and tying them back to product and engineering milestones. So I think when you build that visibility on research, your partners equally start to see you as, like, a strategic contributor. That's exactly how it should be, but they start noticing that and you're invited into these conversations. So that's really how I'm trying to be strategic in my research and not just tactical. And to be honest, if I can say, I don't see it as two separate things, especially in AI, because something as tactical as literally changing how your AI presence is shown, something like an icon that indicates your AI is present: if the tactical research that you're doing changes that, that's actually going to have a huge strategic impact in terms of building transparency and keeping the humans aware of where your AI lives and what it is doing. So that's just a few thoughts.
C
Yeah, you know, it's really interesting that you pull together the thought that there isn't as much of a distance between strategic and tactical insights, because even subtle things can make a huge impact with folks. Kind of to follow up on that point, and then the point that you made around milestones: a lot of researchers aren't quite sure how to best tell their story in alignment with those critical milestones. How do you do that? How do you get people to understand, hey, this insight drove this outcome or was associated with this milestone?
D
Definitely. I think, again, there are multiple different things to it, especially if you're talking in terms of generating these insights quickly enough to make sure that you're actually making product impact. One part is making sure you have these multiple, as I said before, multiple syncs with your team. An example of this could be literally these stand-ups that engineers and PMs have on a week-to-week basis, being an equal contributor to that. And I know that's traditionally not how research works, especially when you're doing things like foundational research, which can be four to six weeks long; you don't have insights to share every single week, but it's kind of building that presence. And even if you're able to share smaller insights or the trend of things that you see, I think that itself can make impact. So that's what I've been typically doing: being present in these meetings, kind of sharing initial insights, being very cautious in terms of labeling that, hey, these are initial insights, this could change over time, but contributing those insights so that your team is equally interested and really excited to see how things are being uncovered. That's one way I'm doing it. The second thing, as I mentioned before, is literally being very prescriptive in how you're showing your insights, and actually partnering with your engineers and PMs to track the ADO, like Azure DevOps, or Jira tickets of the milestones that they have, how your research is impacting those milestones and what needle it is moving. The third thing would be to evangelize and cross-reference your research. Right? So once you package your research readout and share it with your team, I think of it as a "STAR dust" moment. It's like the situation, task, action, result: you delivered your research, but what happens after the dust settles down? Once you deliver your research and make that impact, what happened next? So kind of creating that loop of, hey, I did a share-out a month back, here were the results, can you actually tell me what has changed, and if there are certain things that haven't changed, why? Like, what is the dependency, or what decisions have changed because of this? Just having that sort of a loop, following through, using your research, evangelizing it across different disciplines and creating that visibility is how I'm really trying to make sure I'm generating insights that move the needle with my team.
C
Yeah, and I think the moral of the story is that you are there alongside the team and just as accountable as everyone for delivering value and being able to attach to outcomes. That's powerful. So we all know, and you just alluded to this, that product cycles move fast, and especially with AI, because the barriers to prototyping and building are swiftly going away. So how do you keep up?
D
Right, so I love this topic, because I think the past couple of months have put us in the thick of researchers becoming AI builders as well. We can build things for our own workflow, and that is such a cool moment that we are all in. So for me personally, I have integrated AI across my research workflow and I think it's really been transformative in terms of speed and, as I mentioned before, just generating insights quickly. So I build these different agents depending on what I really need. If I'm building a survey, I use the Microsoft Copilot survey agent to spin it up quickly. Of course I would work on the plan and come up with a rough idea of what I want to study, and I'll partner with this AI agent to put it together in a form quickly so that I can share it with my team and get feedback. When that data comes back, specifically, let's say, for this survey, then I'm letting AI take the first pass of analysis just to give me an idea of what are some themes that are emerging, quantitatively and even qualitatively. So just kind of using these tools has been super, super interesting. There's this other tool that I've been relying on called Marvin AI for research analysis. Again, it's approved by our organization, so that's something that we use, and that has been interesting in terms of thematic clustering and pulling reports together. So the way I see AI right now is helping you take that first pass on data so that you have more bandwidth and time and energy to really focus on that strategic impact and storytelling. I think that's the shift that I'm seeing every single day. And as I said before, I think everybody is an AI builder now, so that includes researchers. And what it means to be an AI builder as a researcher is not just generating insights at speed, but even experimenting with new methods and finding different ways of evaluating experiences and rethinking approaches that we have. Like, just the other day, I was doing this foundational study with a participant and I was like, I'm hearing about these different ideas this participant is sharing with me, and I was wondering, wait, what if I literally could use Claude Code to prototype this whole idea and kind of co-create right in the session? So it's really interesting how these tools are helping us ideate at speed and come up with these different methods while still keeping our strategic insight really in place and making us strategic contributors.
C
Yeah, so I love that you're actually talking through the tools and platforms that you're using, because I think that pretty much everyone, I say everyone, all the platforms out there have some technology that's integrated that leverages machine learning or large language models in some way. Also, I think embedded in what you just said is: you're going to have to learn how to code, because Claude Code works great, but there comes a point where you have to be able to get in there and actually manipulate the code yourself. So you're going to have to learn Python, you're going to have to really expand your skill set to be able to support this new motion. So that can be intimidating. But what advice do you have for folks to get over some of that intimidation?
D
Yeah, I'm going to be very transparent. I definitely feel that intimidation day to day as well. Because, hey, I'm a researcher. I've done a master's in research and design and my background is in architecture. So coding for me is definitely something I never imagined; 10 years back I never imagined I would be in this place, working with tools like Claude Code and coding up different things, which is so cool. But yes, as you said, there is this whole part of really understanding what exactly is happening on the back end. Right? Like, the front end is looking pretty, but what about the rest? What exactly is Claude Code doing? So my advice for this is, of course, it's not that I or somebody has to learn Python, but it's more so even just using these tools to understand what's happening behind the scenes. So typically, if I am pulling something together in Claude Code, even on a personal front, really just building a quick agent or a quick app for anything, I literally, and this is very specific, but I use it in Claude Code plan mode. The plan mode specifically walks you through the thinking and asks you about things that need to be done and designed before the AI actually does it for you. So I'm using Claude Code and Claude research simultaneously, so just the Claude desktop apps, to kind of brainstorm and let Claude walk me through that process and explain to me in simple language what exactly it is doing. So a researcher like me, who does not come from a developer or coding background, at least gets the thinking behind it, if not the language itself. That's one way how I'm doing it. And apart from that, I think we're in the age where everybody needs to be AI fluent. Right? As I said before, really building on that AI fluency. For someone who's just breaking into UX, I think dedicating time to understand concepts like hallucinations, human in the loop, grounding, prompting, prompt engineering, model evaluation, bias, guardrails. All of these fancy words, really literally doing your homework to get familiar with them. I think once you have that base ready, then you feel much more confident when you're interacting with these tools. And I'm no expert, I'm honestly learning every single day. But I like that we have this option of learning as we go, because that's not how it was before.
C
Yeah. And I think just keeping a growth mindset, knowing that things are evolving quickly, and so just being curious and understanding that we're all learning, is a great way to lower some of those barriers and fears about learning more.
D
Definitely, definitely.
C
So I'm curious. In the introduction, we learned that you are very passionate about your content creation and mentoring folks. And, you know, a lot of researchers tend to be more behind-the-scenes folks. And so I'm curious, like, what drives your philosophy around being actually a public researcher, being out there more in the public as a researcher?
D
Honestly, what really drives me is just sharing what I learn as I learn it and making it more accessible to people. So I'll be real: it's also a challenge that I like giving myself. So every time I apply for a conference or sit on a career panel, or even if I'm recording content for my Instagram channel, I'm the one there to answer questions and I feel that anxiety. Right? So even for this podcast, as I mentioned before, this is such an amazing opportunity and I'm so glad that I'm here, but there's always this voice of, like, wait, am I truly adding value? So this is honestly as much a push for myself, putting myself out there, out of my comfort zone, as it is giving back. So my energy isn't just about making impact, but it's about making the right impact, keeping it really authentic, being approachable and sharing something that's thought provoking. And I think that's kind of how I've always been. I really enjoy giving back to the community, to my friends, to people that I know, and really honestly being there for people and helping out. That's just how my personality has been, and I think I've taken that to content creation now. So that's really my intention behind it. And I'm constantly, I feel, in this bubble of building with AI and learning something new, and I don't want to just keep it to myself. So whether it's content on Instagram or it's mentoring and speaking at conferences or delivering any webinar, I'm trying to cover that whole spectrum: research, product sense, career, code, building with AI. I really enjoy talking about it and I want to make it accessible to everybody, whether you're just breaking into tech, you're a junior, a senior, or you're just curious, like, do I have what it takes? Can I break into this field? And if I'm able to make that impact on someone's professional journey, whether that's through coaching them for interviews and looking at their portfolio or helping them think through how to use AI, I think that really honestly fills me with a lot of joy and fills my cup. So my philosophy on building in public, if I'm being very honest, is that I want my work to mean something beyond just my day-to-day nine to five. I want to contribute, I want to share a perspective, and I equally want to learn from folks. So that's what keeps me going.
C
Well, I love hearing that, because it's a really great way to approach just finding the place that fits for you in the world. And actually, in that vein, I'm curious. You know, one of the things that I do very frequently is talk to students that I work with about where it is that I see things going. And I'm curious about what you tell your mentees about how to focus their energy and develop their skill set, especially in this landscape where things are moving so quickly.
D
Of course, of course. And as I touched on before, I think the biggest thing I would say is to be AI fluent; it cannot be an afterthought at the moment. And I know it can be overwhelming, but it's more so just learning. If you fail while you learn, that's completely fine, because no one's watching. Everybody, I think, is on the same journey. Every single day there's something new happening in the world. So it's more so being really open and embracing this learning journey and being AI fluent. And as I mentioned before, it's not just the tools, but even the foundation of how AI works and how AI products are built, and familiarizing yourself with the concepts. The reason I say this is so important is because when you're sitting with your engineering, data science and PM partners, you want to make sure you're following the conversation, you are speaking their language equally, and you're asking the right questions. Right? So for that, I feel this foundational knowledge is super crucial. I know that researchers are increasingly expected to be builders, as I mentioned before, and I find it overwhelming as well. But my advice would be to be creative, like exploring and building agents and skills with Claude Code that can help you in your workflow, or even help you in terms of different methods that you're experimenting with from a research standpoint. For example, the co-creation exercise that I talked about previously. And I feel we have the opportunity to explore these low-code design tools and build something within a couple of minutes. So just exploring that, building that for your portfolio. If you're just breaking into UX, you have, I think, the opportunity to leverage these different tools available to build something for your portfolio and then bring your human judgment to tell the story behind why you took certain decisions and what impact you are trying to make. Because at the end of the day, I think the goal isn't to use AI for every single thing, but it is to kind of free yourself from doing the repetitive, mundane tasks and pour your energy into what actually matters, so that you can drive impactful, strategic outcomes.
C
Yeah, well, thinking about it from the opposite end of the spectrum. So, you know, you have leaders on your team, and of course leaders are having that same conversation from a different framing. And obviously there's a lot of pressure on budgets. What are the things that you would advise leaders to do in this space, where, you know, there are a lot of potential ways to focus your effort, time and resources? How do you choose?
D
For sure. I think, from the perspective of someone who is working on the ground as a researcher and doing hands-on research, I would love to answer from that lens. I would say that this is more of an opportunity and not just pressure. Definitely, budgets are tight, but with AI changing products so fundamentally, building without a user lens is actually the real risk. So I would really encourage research leadership to recognize that. And the first thing I would actually love to see is just creating that space for the team to upskill and become fluent in AI, and also understanding what are some moments where research insight, like the human judgment, comes in versus what are some things where AI can actually help you become more productive and more efficient. So having that kind of leadership push, creating that space to really experiment with AI tools, sharing with your team what you're learning and leaning on your partners to learn together, is something that I would really suggest as somebody doing that hands-on research. Second, and this is something that I see firsthand, is leaning into questions that only researchers can answer. In AI, those questions are around, as I said before, trust and transparency and evaluating that, and really finding that market fit for what you're building. Right? I think really working on these questions and identifying how you bring that human judgment in is crucial. And thirdly, I think empowering your team to use AI in their own workflow. I'm sure there are many individuals that feel overwhelmed with how things are moving and the pressure to use these different tools, and sometimes you just need that additional push and that sounding board of, like, hey, it's okay to explore, here's what you can learn, and, you know, maybe as a group we can discuss what we have learned and how we can actually use this in your workflow versus what we cannot. I think just having that sort of open field and a group of people that you can rely on is something that leadership should really focus on building. Yeah, yeah.
C
And I've absolutely heard from other leaders, especially at Microsoft, that a culture of experimentation has been really powerful in letting people very consciously get exposure to training and resources, to just experiment and see what works, and also understand that not everything is going to work. So yeah, that's awesome. So I just wanted to thank you so much for this conversation. I know I've learned a lot and I hope that our listeners have too, and I invite you to share: how does someone learn more about you and what it is that you are creating from a thought leadership perspective?
D
Yeah, definitely. Again, I just want to take this moment to thank you so much again for this amazing conversation. I think it really helped me reflect on everything that I do. And again, thank you, Nathan, as well, for inviting me and giving me this opportunity. And if anyone wants to connect with me, I'm always up for a coffee and conversation, so you can find me on LinkedIn. I'm starting to get pretty active there and always happy to hear from people and help wherever I can. As I mentioned before, I also share content on Instagram under UXR Pri, so that's where you can find me. Just message me there and I'll be more than happy to chat. And if you're specifically looking for mentorship, I'm actually on ADP List; I'm a mentor there. So I would love to hear from anyone who found this conversation helpful and wants to continue the discussion, needs any guidance or just wants to chat. I would always love to connect, so those are some ways to connect with me.
C
Great. Thanks again. I appreciate the conversation.
D
Thank you so much.
B
Want to keep the conversation going? You can find the show notes at usertesting.com/podcast. If you haven't already, don't forget to follow us on Apple Podcasts, Spotify, Overcast or Google Play so you never miss an episode. And if you enjoyed today's show, please share it with a friend or leave us a rating and review on Apple Podcasts. And until next time, this is Insights Unlocked, an original podcast from UserTesting.
Guest: Priyanka Kuvalekar, Senior UX Researcher at Microsoft
Host: Leah Hogan, Principal Experience Research Consultant at UserTesting
Date: March 30, 2026
Producer: Nathan Isaacs
This episode explores how AI is fundamentally changing the landscape of UX research, emphasizing that measuring usability alone isn’t enough. The conversation with Priyanka Kuvalekar reveals how researchers must now evaluate experiences for trust, emotional resonance, and strategic business impact, and how to stay ahead in an era where AI is embedded into everything we build. Through candid insights and practical examples, Priyanka shares actionable advice for researchers, teams, and leaders navigating AI’s rapidly evolving role in customer experience.
From Architecture to UX Research
[02:32 – 06:04]
Defining a Great Experience in the Age of AI
[06:04 – 10:07]
Evaluating Trust, Emotion, and Credibility
[12:21 – 17:21]
Partnering with AI on Analysis
[17:21 – 20:41]
Staying Strategic, Not Just Tactical
[20:41 – 24:55]
Tying Insights to Milestones and Outcomes
[24:55 – 28:08]
Keeping Up: Researchers as AI Builders
[28:47 – 34:53]
Getting Over the Intimidation of Coding
[32:27 – 34:53]
Building in Public: Content Creation and Mentorship
[35:11 – 39:12]
For Early-Career Researchers and Mentees
[39:12 – 41:25]
For Leaders Under Budget Pressure
[41:25 – 44:19]
“When we skip understanding the users that we are designing for, the design fails them.”
— Priyanka Kuvalekar, [05:18]
“Trust ... is built through an AI experience that is transparent, that keeps human in the loop and in control.”
— Priyanka Kuvalekar, [08:28]
“The AI can definitely help you generate these themes... but here is an additional layer—this is what the users actually meant, because I observed them.”
— Priyanka Kuvalekar, [17:55]
“We are kind of in this age where AI fluency...there’s so much pressure... I think it’s a great opportunity... Not just the tools, but how I can also be fluent in terms of the whole concept of AI.”
— Priyanka Kuvalekar, [19:43]
“As researchers, we have the power of knowing what users actually want and need. We’re not just pushing for building on assumptions.”
— Priyanka Kuvalekar, [22:44]
“The goal isn’t to use AI for every single thing, but it is to kind of free yourself from doing the repetitive mundane tasks and pour your energy into what actually matters.”
— Priyanka Kuvalekar, [41:19]
For show notes, curated clips, and more resources, visit usertesting.com/podcast.