
Benjamin Shapiro
The Martech Podcast is a proud member of the I Hear Everything Podcast Network. Looking to launch or scale your podcast? I Hear Everything delivers podcast production, growth, and monetization solutions that transform your words into profit. Ready to give your brand a voice? Then visit iheareverything.com.
From advertising to software as a service to data, across all of our programs and clients, we've seen a 55 to 65% open rate.
Joyce Gordon
Getting brands authentically integrated into content performs better than TV advertising.
Benjamin Shapiro
Typical life span of an article is about 24 to 36 hours. We're reaching out to the right person with the right message and a clear call to action. Then it's just a matter of timing.
Welcome to the Martech Podcast, a member of the I Hear Everything Podcast Network. In this podcast, you'll hear the stories of world class marketers that use technology to drive business results and achieve career success. Here's the host of the Martech podcast, Benjamin Shapiro.
69% of marketers say that they're unable to consistently deliver personalized experiences, according to Salesforce's 2024 State of Marketing Report. That's almost 7 out of 10 of us that can't figure out how to turn an abundance of customer data into a tailored experience. Here's why: your customer data is in silos. Your tools need clean and unified data to deliver results. So how can you turn your data chaos into the type of customer experiences your prospects and your customers expect? I'm Benjamin Shapiro, and joining me today is Joyce Gordon, the head of AI at Amperity, which helps unify fragmented customer data to power personalized marketing. And Joyce is going to explain how you can combine AI and a unified data foundation to transform your customer experience strategy. Joyce, welcome to the Martech Podcast.
Joyce Gordon
Thanks so much, Ben. It's really great to be here.
Benjamin Shapiro
Excited to have you on the show. Excited to dig into this combination of data and artificial intelligence and personalization. I feel like it's the topic of the year.
Joyce Gordon
It's been the topic of several years.
Benjamin Shapiro
Well, the ball bounced your way here. You're an AI expert, and all of a sudden that makes you the belle of the ball. So before we get into talking about AI and personalization, you're one of the people that actually knows how this stuff works. Give me a quick bio on what it means to be an AI expert. What are the skills you have and the experience you've had that make you actually an expert in artificial intelligence, not just someone using ChatGPT?
Joyce Gordon
Absolutely. Well, I've been working in AI and ML for the last decade. I started my product management career at a really small startup called Custora, which was focused on customer analytics. I joined right after the seed round, helped build the product team there, and grew it from seed stage through Amperity's acquisition. We grew from about seven to 70 people during that time, and in my time at Amperity I've focused on everything from building a platform for marketers to building our predictive analytics and machine learning products to now AI. I also studied statistics and math in college, so this is a space I've been in and a deep interest I've had for a really long time.
Benjamin Shapiro
All right, so when we're talking to you, we're not just getting the perspective of somebody that knows how to use ChatGPT and Claude. You actually understand how they work. Explain to me how AI has changed personalization in the past year.
Joyce Gordon
Brands have been talking about one to one personalization forever, basically since the dawn of time. But it's kind of been a dream that's been super challenging to realize in the past. And the reason is one of creative constraints. Creative teams are continuously underwater, and there never really was enough creative available to make one to one personalization a reality. Generative AI takes the cost of content creation and drives it to zero. So what that means over time is the creative constraints we've faced in the past, they're going to start to lessen. Now, it's not so easy. It's not as if you can take Gen AI, throw it at your data, and create one to one personalized versions for everyone. There are still a lot of challenges that stand in the way. And so brands are starting to move in the direction of one to one personalization, but they're not totally there yet. Where we typically see brands start is with a segmented approach. So in the old world before Gen AI, maybe you had enough creative for five different segments when you were launching a campaign. Now, with Gen AI and using AI to create the content, like the imagery, some of the email subject lines, some of the actual copy, maybe you can actually launch a campaign with 20 segments. And what most brands are doing now is they've got a human in the loop. So humans are reviewing the output. They're making sure the AI does things like match the brand voice, is sending the message they want, and doesn't have any legal liabilities. So over the past year we've seen brands move from broader segments to more narrow segments. And I think over the next year we'll see even more movement towards one to one personalization. Those segments will get smaller and smaller as brands get better at using the models, as they lay their data foundations, as they think about how to introduce Gen AI into the workflow, and then also as they build systems to actually evaluate the Gen AI content at scale, enabling them to figure out where to focus.
Benjamin Shapiro
Let's talk a little bit about building the structure, because it seems like that's the topic of the month right now. I've been reading and trying to research the concept of MCP servers, but it seems like everything we've been doing is human in the loop. I am writing a prompt, I am dropping some data in, I'm analyzing it, and then I'm trying to figure out how to apply that. Now we have this notion of being able to tap into your data sources and have LLMs be able to unify everything, make decisions, and then actually take actions. We're sort of getting into the agentic part of the agentic AI era. Am I right? How big a deal are these MCP servers? Help me boil this down.
Joyce Gordon
Let's start with what is an MCP server? So MCP stands for Model Context Protocol.
Benjamin Shapiro
I knew that one.
Joyce Gordon
Well, you're already ahead of the game. This was a standard rolled out by Anthropic. And really what it is, is super simply is it's kind of like an API for agents. It's how different agents talk to one another. So if you've got your customer data on one MCP server, you might want to get that data to your conversational AI agent on your website. Because if you think about conversational AI, for example, if I'm shopping for a dress to wear to my friend's wedding in Hawaii, you really want that conversational experience to understand what products I've purchased in the past, whether I'm in the loyalty program, what items I'm interested in. So the MCP server is a protocol that makes that data available to other agents, like that conversational agent.
Benjamin Shapiro
In my research, I watched a bunch of YouTube videos basically on what an MCP server is. It seems like this is really a hot topic right now. And the best description that I found is: all of these different tools, they can all speak, but they all speak different languages. Your Salesforce database speaks English. Your Marketo database speaks French. And the MCP server is essentially the translator. You feed all of the data into it and it understands every language, so then you can have it all combined and read back in a single language. So it is the layer that helps take all of the disparate data and unifies it into kind of a common language that you can then interpret through an LLM.
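To make the translator analogy concrete, here is a minimal, purely illustrative Python sketch. The class and tool names are invented for the demo, and this is not the real MCP wire format (which is a JSON-RPC-based protocol); the point is only the shape of the idea: every data source is wrapped behind the same two operations, so an agent can merge sources without speaking any source's native language.

```python
# Hypothetical sketch of the "translator" idea behind MCP-style servers.
# Each data source answers the same two requests -- list_tools and
# call_tool -- so any agent can query it uniformly. Names are illustrative.

class CustomerDataServer:
    """Wraps one fragmented data source behind a uniform tool interface."""

    def __init__(self, records):
        self.records = records  # e.g. rows pulled from a CRM or loyalty system

    def list_tools(self):
        # Advertise capabilities in a common shape every agent understands.
        return [{"name": "get_customer", "params": ["customer_id"]}]

    def call_tool(self, name, params):
        if name == "get_customer":
            return self.records.get(params["customer_id"], {})
        raise ValueError(f"unknown tool: {name}")

# Two different "dialects" (a CRM and a loyalty program) exposed the same way:
crm = CustomerDataServer({"c1": {"email": "ben@example.com"}})
loyalty = CustomerDataServer({"c1": {"tier": "gold"}})

# A conversational agent merges them without knowing either native API.
profile = {}
for server in (crm, loyalty):
    profile.update(server.call_tool("get_customer", {"customer_id": "c1"}))
```

In the real protocol, the uniform interface is what lets one conversational agent pull customer context from many vendors' systems without bespoke integrations for each.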
Joyce Gordon
Yeah, that's exactly right. I absolutely love that description. To your point, Ben, it's a common language where all of your different agents can talk to one another. And I think one thing that's interesting to think about is where things are and how they might evolve. So I would say as we think about conversational AI experiences today, there's a lot of challenges there. Like there was the Air Canada example.
Benjamin Shapiro
Give me the Air Canada example. What happened?
Joyce Gordon
So Air Canada provided a fare to someone. I don't remember the exact details, but I think the AI incorrectly offered a customer a discount that should not have been offered.
Benjamin Shapiro
All right, well, at least the door didn't blow off.
Joyce Gordon
Yes, exactly. But there are challenges around the AI providing offers or experiences that might not exist. So hallucinations. There are legal challenges, and obviously there are brand voice challenges. So where we typically see brands start with these conversational AIs is to really constrain the use case. If you're a ski slope, for example, maybe your AI to start is only going to answer questions about ski school, and not what meals you could buy. It's not going to let you book lift tickets. And that's super important, because if you've got a constrained use case, it limits the risk and enables you to really deliver a resonant customer experience, and then you can expand beyond that over time.
Benjamin Shapiro
I interviewed Jordan Crawford, who's a go-to-market specialist, and he's a next-level AI guy too. He always uses the example of, I think it was a car company like Toyota, where you can go onto their customer service chatbot and ask it to write a Python script for you, and it'll actually do a pretty good job. So you need some parameters to be able to deliver these personalized experiences. We can feed real-time customer data, in theory, using an MCP server. Who's doing this well? Who's actually able to create customer experiences that are personalized and tailored, and where are they kind of falling flat?
Joyce Gordon
I would say the brands that are doing this well have kind of constrained the use cases to start. We're all still learning about this technology. It's hard to believe that ChatGPT was only launched in November 2022. So we're still very early. I would say the companies that are doing things well today, and many of them are Amperity customers, are focused primarily on how we can move from bigger segments to more micro segments. So you might not even notice as the customer, but you are getting a more customized email based off your loyalty status, your churn status, your price point than you were before. And then we've started to see conversational AI take off when it's focused on a specific use case, as opposed to saying we can answer any question for the brand. The example with the ski slope I mentioned before is an actual customer. So there are a lot of examples like that out on the market.
Benjamin Shapiro
Okay, so we've got this notion that artificial intelligence is helping us move from sort of our static, we've got a couple of segments into very micro segments. Eventually we're going to work our way down into one to one personalization. Give me the framework to utilize customer data for that one to one AI personalization.
Joyce Gordon
So before the framework, I think my overarching feedback, probably as a product manager this is just life philosophy, but always start with the use case. If you've got kind of a loyalty chatbot experience, the data you might need and the complexity of that is going to be different than if you are starting with a use case where you have a chatbot that needs to answer anything. So start with the use case. But once you've got the use case, I'll talk through a couple items that are just really helpful from a data perspective. The first is, if you have a conversational AI and you really want that AI to be able to answer a lot of questions about your brand. You want it to be able to answer questions about loyalty. You want it to be able to help find a dress to wear to your friend's wedding. You want it to understand how to recommend credit card offers to people so they can sign up for your credit card. It's going to need a lot of data across different sources. And to be able to deliver a comprehensive experience there, it really starts with identity. So the first piece is developing an identity spine. That's how you know that the same customer shopped online versus in store versus transacted with your loyalty program.
Benjamin Shapiro
An identity spine. I've never heard that one before.
Joyce Gordon
That's a little bit of an Amperity-ism. It's the idea that we can understand who Ben is across all of your different data sources and touchpoints. And that's so critical for a conversational AI experience, particularly if these experiences seem human-like; customers' expectations are even higher than if the AI seemed a little bit more robotic. So that's really where to start. The second piece is making sure you've got the right attributes on the customer. What are the things about Ben that are really going to play a role in curating that experience? These could be things like your predicted customer lifetime value, what products you've purchased in the past, even embeddings containing maybe which types of creative you've responded to. So all of that data is going to be really key.
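The identity spine idea can be sketched in a few lines of Python. Real identity resolution is far more sophisticated than this; the sketch below (with made-up data and field names) only shows the core mechanic of stitching records from different systems into one customer whenever they share an identifier like an email or loyalty number, here via a tiny union-find.

```python
# Toy "identity spine": resolve the same customer across an e-commerce
# system, an in-store POS, and a loyalty program. All data is fabricated;
# real systems match on many more signals (address, device, fuzzy name, etc.).

sources = {
    "ecommerce": [{"email": "ben@example.com", "purchases": ["tee"]}],
    "in_store":  [{"loyalty_id": "L-42", "purchases": ["hat"]}],
    "loyalty":   [{"email": "ben@example.com", "loyalty_id": "L-42"}],
}

def build_spine(sources):
    parent = {}  # union-find over identifiers

    def find(k):
        parent.setdefault(k, k)
        while parent[k] != k:
            parent[k] = parent[parent[k]]  # path compression
            k = parent[k]
        return k

    records = [r for recs in sources.values() for r in recs]

    # Any record carrying two identifiers links them to one identity.
    for rec in records:
        keys = [v for v in (rec.get("email"), rec.get("loyalty_id")) if v]
        for k in keys[1:]:
            parent[find(keys[0])] = find(k)

    # Group purchases under each resolved identity's root identifier.
    spine = {}
    for rec in records:
        keys = [v for v in (rec.get("email"), rec.get("loyalty_id")) if v]
        spine.setdefault(find(keys[0]), []).extend(rec.get("purchases", []))
    return spine

spine = build_spine(sources)
```

Here the loyalty record bridges the email and the loyalty ID, so three records across three systems collapse into a single customer profile, which is exactly the precondition for the one-to-one experiences discussed above.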
Benjamin Shapiro
Before you go on, I'm sorry to interrupt you. I want to ask a question about that. How much is too much? So if I'm feeding customer data to an LLM and we're giving. Let's take a use case of I'm selling retail, I have a T shirt company and I'm selling gender, name, location, size, creative they've responded to, maybe even questions they've asked. But I also might be able to map to, like pages they visited all sorts of other data and information. When you start giving the more nuanced information to an LLM, is there a sense where you can give too much information and it can't parse what's important? As opposed to like, hey, I'm going to prioritize that this person once visited a women's T shirt section because I was buying something for my wife and now all of a sudden it's like he likes men's and women's T shirts. But I'm a man and I want to buy T shirts that are large or extra large for men primarily. How do you give the context of what information is priority as opposed to just giving everything and letting the LLM decide?
Joyce Gordon
If you were to rewind, let's say a year ago, context windows, that's the number of tokens you can pass to an LLM.
Benjamin Shapiro
How much information could you...
Joyce Gordon
Exactly, how much information you could pass to the LLM. They were much smaller than they are today. Now they're often millions of tokens that you're able to input into a context window.
Benjamin Shapiro
Hundreds of pages. Go on.
Joyce Gordon
That said, Ben, you're bringing up a really important point. Just because you can pass all that information, it might be too much for the LLM to really reason over, and you might start to get some responses that are not as curated as you want. One very core part of agents is this idea of tool calling. These are different tools the agent has that it's able to access. Let's say we take your question. We use a lot of this at Amperity in our AI experiences, but let's say we take a question like, returning to this one, I'm shopping for a dress to wear to my friend's wedding in Hawaii. You might feed that question to the LLM and actually ask, hey, what types of data do you think would be helpful here? And then you might actually have a tool call that retrieves that data specifically and passes it to the context. So having as much data as you can, as long as it's high quality, is great, but then you're going to need to have some sort of retrieval step that looks up what the important pieces of data are for that question and passes those in. Now, all of this is trial and error, so for some brands, maybe it's fine to actually pass the full customer profile to the context. But if you're starting to get answers that seem irrelevant, using more of a tool-calling or RAG-like approach could be really helpful at honing in on the responses you're looking for.
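The two-pass pattern described here can be sketched directly. In the sketch below, `ask_llm` is a stub standing in for any chat-completion call, and the profile fields are invented; the point is the flow: first ask the model which attributes matter for the question, then retrieve only those and build a focused context instead of dumping the full profile in.

```python
# Sketch of retrieval via tool calling: plan first, then fetch, then answer.
# `ask_llm` is a placeholder for a real model call; all field names are
# fabricated for illustration.

FULL_PROFILE = {
    "past_purchases": ["sundress", "sandals"],
    "loyalty_tier": "gold",
    "churn_risk": 0.12,
    "credit_card_offers_seen": 7,
}

def ask_llm(prompt):
    # Stub: pretend the model picked the attributes relevant to
    # occasion shopping. A real system would call a hosted LLM here.
    return "past_purchases, loyalty_tier"

def focused_context(question):
    # Pass 1 (plan): which fields does the model want for this question?
    wanted = ask_llm(f"Which customer attributes help answer: {question}?")
    fields = [f.strip() for f in wanted.split(",")]
    # Pass 2 (retrieve): pull just those fields into the final context.
    return {f: FULL_PROFILE[f] for f in fields if f in FULL_PROFILE}

context = focused_context("Find me a dress for a friend's wedding in Hawaii")
```

Only two of the four profile fields reach the final prompt, which keeps the model reasoning over what's relevant rather than everything you happen to store.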
Benjamin Shapiro
If I ever go through a career change and decide to go down the coaching route, I feel like my ethos and tagline would be make a plan to make a plan. And what you're talking about is akin to that, but for AI, which is you are giving information to have AI write a prompt so AI can execute the prompt with the right information. And I know it sounds a little circular, but I cannot stress how much. In everything that I've done in building our AI agents, like the production of our podcasts and our research and our guest sourcing, it's always, here's a bunch of information, help me write a prompt that I can use to distill this down and execute this task, and then I take the prompt that's returned and run it. And it does a much better job than if I just try to write the prompt myself. And essentially you're saying the same thing is you can't just feed the raw data in and say, give me the answer. It's feed the raw data in and say, refine it to figure out what I need. And then you execute based on that middle prompt.
Joyce Gordon
That's exactly right. To take things back to Amperity for one sec, we have a product called Amp AI where you can ask and answer any question about all of your customer data, which could be hundreds of tables with hundreds of fields each. It's really complex. And when you ask a question, actually the first thing Amp AI does is formulate a plan for the steps it needs to take to answer that question: what research it needs to do along the way, what research queries it needs to run, and what context it needs to load in order to answer your question. So, Ben, that's very similar to the way you're thinking about things.
Benjamin Shapiro
I did this yesterday, I swear to God. I built one of our first AI agents, or what I'm calling an AI agent, for figuring out who should be on the podcast. And I have this big context document: here's everything you need to know about the Martech Podcast. Here's a second document that's a scoring rubric. So here's how we evaluate our guests, here's the one-out-of-100-point scale, and then here are all the searches you have to do to be able to evaluate and effectively figure out what the scores should be. And then I take all that information and I'm like, all right, here's the context of a guest application. Read through the context document, understand the scoring rubric, conduct all these searches, now assign the scores. And it's exactly what you're talking about with Amp AI. It needs to think of how to execute the task, what research it has to do, before it tries to just go and do it. Again: make a plan to make a plan. Write a prompt to write a prompt. There's this repetitive task of refinement and strategy that you and the AI need to do to effectively execute.
Joyce Gordon
That's totally right. Yeah. When I think about success with agents, I really think about three things. Number one, do you have the right underlying data? Number two, it's all about the tools. So that's making the plan and making sure the plan has access to the right tools to pull in the right data and also modify the agent's responses. And then the third piece, which we haven't touched on as much, is the evaluation. The evaluation is really, really important. How do you judge, on an always-on basis, if the agent's responses are good or not good? And more importantly, for the situations where things are not going as well as you'd like, how do you actually root-cause what those challenges are? Because that feedback cycle is imperative for taking that information and actually using it to refine the outputs.
Benjamin Shapiro
In your approach, the AI can learn and has memory. And when you're executing a prompt, you need to be able to feed back whether it was a successful execution or not, and the AI will start to refine itself. Let me ask you a question, because I've been spending a lot of time, like I said, building our ranking agents and our topic, sourcing things like that, trying to make each of our interviews a personalized experience. It's a ton of work right now. So what are the ways that you can implement an AI personalized experience without taking on truckloads of dev work?
Joyce Gordon
I would say a conversational AI is probably not the place to start, especially if you're new. I would start with something less risky and human in the loop. And you can actually get started really, really quickly here. You need some data, you need an LLM, and you need some people to review the output. You can even do it in ChatGPT; you could do it in Databricks. So I would start with a really constrained use case. Maybe, for example, you want to personalize your loyalty welcome email. And for that you might need to know information on someone's product preferences, maybe their gender, maybe their age, maybe what their signup source was. And then you can take that, with a couple examples of how you would actually want to personalize the email, pass it to the LLM, and have the LLM produce the output. And one of my biggest tips: people will often try to use LLMs to personalize, and they're like, this doesn't match our brand voice, it's not what we're looking for. If you can pass 5 to 10 examples of how you would actually personalize the output based off different pieces of customer data, you'll get far better results than if you just leave it all up to the LLM.
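That few-shot tip amounts to assembling worked examples into the prompt itself. The sketch below is hypothetical end to end: the example emails, field names, and `build_prompt` helper are all invented, and the resulting string would be sent to whatever LLM you're using.

```python
# Sketch of few-shot prompting for a personalized loyalty welcome email.
# Examples and fields are fabricated; in practice you'd include 5-10
# examples covering your key segments, as suggested above.

EXAMPLES = [
    ({"first_name": "Ana", "favorite_category": "running"},
     "Welcome, Ana! Your first perk: early access to our new running line."),
    ({"first_name": "Sam", "favorite_category": "yoga"},
     "Welcome, Sam! Members get first pick of fresh yoga arrivals."),
]

def build_prompt(customer):
    lines = ["Write a loyalty welcome email. Follow these examples:"]
    for data, email in EXAMPLES:
        lines.append(f"Customer: {data}\nEmail: {email}")
    # The new customer's data goes last; the model completes the email.
    lines.append(f"Customer: {customer}\nEmail:")
    return "\n\n".join(lines)

prompt = build_prompt({"first_name": "Ben", "favorite_category": "ski"})
```

Because the examples demonstrate how each customer attribute should shape the copy, the model has something concrete to imitate rather than reverse-engineering your brand voice from a style guide alone.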
Benjamin Shapiro
So what matters more? Does it matter more that you give a tone of voice or some structure and rules for it to try to interpret, or is it better to give less? Like, here is our style guide for the chatbot and more. Here's other examples of copy. Is it better to try to define it up front or just give examples and let it interpret and figure out the style guide?
Joyce Gordon
This is anecdotal experience; there's no hard and fast rule here. I find that if you've got high quality examples, that's typically the best route. And then typically what I'll do is pass the high quality examples, look at the outputs, and if there are things I don't like about the outputs, I'll say, don't do that. For example, I was prototyping an AI that would produce personalized emails, and one of the inputs was whether it was a family shopper or an individual shopper, so whether it was just a person shopping for themselves. And in the personalized email it would say something like, oh, are you shopping for you and your kids? Here are things you might be interested in. And we all know that just because you've bought kids' items in the past, it doesn't necessarily mean that you have children or you're shopping for them. So in addition to the examples, I put in a rule and said, if you see a family shopper, you can show some kids' items, but don't specifically call out in the copy that they're shopping for their kids. So that iterative approach is really helpful, and it ends up being a mix of examples and also rules that you're going to hard-code into the prompt.
Benjamin Shapiro
Yeah. Whenever I am writing LinkedIn posts, if I give examples to an LLM, it always tends to replicate the examples too closely. I had this problem where, in every LinkedIn post I was using an LLM to help me edit, it always put in the term "game changer." Because in two of the five examples that I had, it said something like, "this is a total game changer." So it always used that term, interpreting it as my tone. And I'm like, all right, but it's too formulaic. I want something that is unique and changes around how we do it. So I kind of lean towards rules and style guides with key phrases to use, but don't be too repetitive, instead of giving a ton of examples. It's chicken or the egg; you could probably do it both ways.
Joyce Gordon
Yeah. And I think in this example, one thing I may have tried, and to your point there's no hard and fast rule here, is going back and saying, use the examples as inspiration, but don't replicate them too closely, and do not use the following phrases, or something like that.
Benjamin Shapiro
All right, so when you're doing personalization, you're basically saying start with human in the loop. You're taking your data manually, uploading it, giving it to an LLM and having somebody publish it. You're not automating the process first to figure out if it works. Talk to me about when you're putting things into production. How are you taking that manual process and then building a personalized experience that doesn't need a person?
Joyce Gordon
I think this is really the billion dollar question. Most brands are still in the human-in-the-loop phase. If you want to move more towards an always-on, one to one personalized experience, obviously you need to figure out how all of this fits in your workflow. You don't want to be manually uploading creative or email subject lines to your tool; it needs to all flow through. But the really big challenge here is one of evaluations. You're going to need a system that continuously evaluates the quality of the LLM outputs before they actually go out in market. And a really common approach here is what's called LLM as a judge. So you actually ask an LLM to review the work of another LLM, and it's up to your brand to define what the criteria are. Some things I've seen in the past are relevance: if there are specific pieces of customer data that you definitely want to personalize on, does the output actually include them? Some other things are around safety: is there anything offensive in the output that should be flagged? And then there are also things around offers or other pieces of information that could be dynamically added, making sure they're in line with your brand. So the brand needs to sit down and think about what the key checks are that they always want done on the creative. And then certain creative will probably get flagged for a human to review. But as I mentioned, most brands are really moving in this direction while still using a segmented approach today, just because it's so important to get those evals right. And it's quite a challenge.
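The LLM-as-a-judge pattern can be sketched as a simple gate in front of publishing. In the sketch below, `judge_llm` is a stub (a real judge would be a second model call returning a pass/fail verdict with a reason), and the criteria are examples rather than a complete checklist; anything that fails a check is routed to human review instead of going out.

```python
# Minimal sketch of "LLM as a judge": a second model scores each generated
# email against brand-defined checks before it ships. `judge_llm` is a stub;
# criteria are illustrative, not exhaustive.

CRITERIA = [
    "Does the output use the customer data it was asked to personalize on?",
    "Is the output free of offensive or off-brand language?",
    "Are any offers mentioned actually valid?",
]

def judge_llm(output, criterion):
    # Stub standing in for a real LLM call. For the demo it only flags
    # one obviously invalid offer.
    if "offer" in criterion.lower() and "90% off" in output:
        return False
    return True

def evaluate(output):
    failures = [c for c in CRITERIA if not judge_llm(output, c)]
    return {"passed": not failures, "flag_for_human_review": failures}

ok = evaluate("Welcome back, Ben! Enjoy 10% off your next ski jacket.")
bad = evaluate("Welcome back, Ben! Everything is 90% off forever!")
```

The design choice worth noting is that the gate is always on: every generated message passes through it, and only the failures consume human attention, which is what lets the segments shrink without the review burden exploding.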
Benjamin Shapiro
We're still using AI to execute manual processes for segmentation and building personalized experience. It's doing the work and we're the editor and publisher. Give me your forecast for what's the next step. What does the future look like when it comes to these personalized experiences?
Joyce Gordon
Well, maybe I'll take this in a slightly different direction relative to our conversation so far. I've seen a couple announcements over the past several months that I think are really interesting. Stripe announced an Order Intents API where it can purchase on behalf of users. Perplexity and Amazon announced something similar, where they're also able to purchase on behalf of users. So I think what this points to is that we're going to see more aggregator platforms, the Amazons of the world, the Perplexities, the OpenAIs, have their own shopping agents. So instead of going to a brand directly to shop, you might go to one of these agents and say, hey, I'm going skiing in Vail this weekend, can you help me identify a jacket? And what those shopping agents might do, using MCP or potentially a different approach, we'll have to see, is reach out to different retailers or different brands in general, pass along some information on the user's context and question, and ask for recommendations. So with that Vail skiing example, they're going to send along information about the user's question and demographics and try to get recommendations back, and then they'll kind of be the aggregator for those recommendations, and the user will be able to purchase directly through those experiences. What that means for brands is that maintaining the first-party connection is going to become a little more challenging and really important. So there are a couple things that I think are really important there. Obviously product quality, differentiation, and brand are important, but also things like making sure you're delivering a fantastic personalized experience on your own website or property, and loyalty, are going to be really important for maintaining that first-party relationship as well.
So I think we'll see more personalization within the context of brands and their experiences and also more agentic personalization with aggregators as well.
Benjamin Shapiro
Yeah, it's funny, my interpretation of what you're saying is your website's not going to matter anymore. And I know that you're saying you have to build a better personalized experience. There will still be the value of brand and probably loyalty programs and that's how companies will drive sort of first purchase conversions. But it sounds like I'm looking for a ski jacket. Rank all the ski jackets for me. Here's the ones that I liked in the past. Ooh, I like this color. No, I don't like that one. You know what, send me all three and then you can execute in the LLM to actually conduct the purchase and have the product shipped. So now we don't actually have to go from I did my Google search, I clicked to the link, I'm in the first party website, I place the transaction. You also don't necessarily have to go onto Amazon to execute. So there's this sort of interesting change. First party data is going to be much harder to come by if everybody is executing their transactions in the LLMs, even if it means there is more volume, more velocity, less friction, less data. It's going to be a brand new world.
Joyce Gordon
An analogy that I think about often is Amazon. I think there's a lot of parallels. So some brands, they basically make their entire business on selling on Amazon and that works for them. There are some brands that don't sell on Amazon at all. And then there's some brands that do some of their transactions on Amazon, but not all of them. And for the brands in the middle in particular, I think we've all gotten the experience where you get a package in the mail and there's a little piece of paper in the package inviting you to sign up for loyalty or offering you a special offer so they can tie that transaction back to you. I think brands are really going to be focused on ways to maintain that customer relationship in light of this new world.
Benjamin Shapiro
All right, and that wraps up this episode of the Martech Podcast. Thanks to Joyce Gordon, the head of AI at Amperity, for joining us. If you'd like to contact Joyce, you can find a link to her LinkedIn profile in our show notes, or visit martechpod.com. You can also visit her company's website, amperity.com. And if you haven't subscribed yet and you want a daily stream of marketing and technology knowledge in your podcast feed, hit the subscribe button in your podcast app or over on YouTube and we'll be back in your feed every week. All right, that's it for today, but until next time, my advice is to just focus on keeping your customers happy.
Thanks for listening to the Martech Podcast, an I Hear Everything production. Looking to launch or scale a podcast like this one for your brand? Then visit iheareverything.com.
MarTech Podcast ™ // Marketing + Technology = Business Growth
Episode: Mastering AI Personalization with Customer Identity Data
Release Date: June 30, 2025
Host: Benjamin Shapiro
Guest: Joyce Gordon, Head of AI at Amperity
In this insightful episode, host Benjamin Shapiro welcomes Joyce Gordon, an AI expert from Amperity, to discuss the evolving landscape of AI-driven personalization in marketing. The conversation delves into the complexities marketers face in leveraging customer identity data to deliver tailored experiences.
Benjamin Shapiro [01:15]: "69% of marketers say that they're unable to consistently deliver personalized experience according to Salesforce's 2024 State of Marketing Report."
Joyce emphasizes the critical issue of fragmented customer data residing in silos, which hampers the ability to create cohesive personalized experiences. She underscores the necessity of clean and unified data as the foundation for effective AI-driven marketing strategies.
Benjamin Shapiro [01:55]: "Your tools need clean and unified data to deliver results. So how can you turn your data chaos into the type of customer experiences your prospects and your customers expect?"
The discussion highlights how Generative AI is revolutionizing personalization by reducing the costs associated with content creation, thereby enabling brands to move closer to one-to-one personalization.
Joyce Gordon [03:39]: "Generative AI takes the cost of content creation and drives it to zero. So what that means over time is the creative constraints we've faced in the past, they're going to start to lessen now."
A significant portion of the conversation is dedicated to explaining MCP (Model Context Protocol) servers. Joyce describes MCP servers as the translators that enable different AI agents to communicate seamlessly by unifying disparate data sources into a common language.
Joyce Gordon [06:14]: "MCP server is a protocol that makes that data available to other agents, like that conversational agent."
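To make the "translator" idea concrete, here is a minimal, MCP-inspired sketch in Python. It is illustrative only: the real Model Context Protocol defines a richer JSON-RPC message format, and every class, tool, and field name below is hypothetical. The point is simply that a server advertises named tools over a common interface, and any agent (such as a conversational agent) can discover and call them without knowing where the underlying customer data lives.

```python
import json

class ToyMCPServer:
    """A toy, MCP-inspired tool registry (names are hypothetical)."""

    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a callable tool."""
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # What a conversational agent would see when it connects.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, arguments):
        result = self._tools[name]["fn"](**arguments)
        return json.dumps(result)  # results travel as plain JSON

server = ToyMCPServer()

# Stand-in for the unified customer-data platform behind the server.
CUSTOMER_PROFILES = {"cust_123": {"name": "Ada", "loyalty_tier": "gold"}}

@server.tool("get_customer_profile", "Look up a unified customer profile by ID")
def get_customer_profile(customer_id):
    return CUSTOMER_PROFILES.get(customer_id, {})

# An agent discovers the available tools, then calls one with structured args.
print([t["name"] for t in server.list_tools()])
print(server.call_tool("get_customer_profile", {"customer_id": "cust_123"}))
```

Because the data is exposed as tools with descriptions rather than as a raw database, any agent that speaks the protocol gets the same unified view, which is the interoperability Joyce is describing.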
Joyce outlines a strategic framework for marketers aiming to achieve one-to-one personalization. She stresses the importance of starting with a clear use case, developing an identity spine to recognize customers across various touchpoints, and ensuring the right customer attributes are identified and utilized.
Joyce Gordon [10:49]: "Always start with the use case. If you've got a loyalty chatbot experience, the data you might need and the complexity of that is going to be different than if you are starting with a use case where you have a chatbot that needs to answer anything."
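The "identity spine" step can be sketched with a few lines of Python. This is a simplified illustration under an assumed model (record IDs stitched together whenever they share an email or phone identifier); real identity resolution platforms use probabilistic matching and many more signals. Union-find keeps the stitching transitive: if a web visit and a store purchase share an email, and the store purchase and an app session share a phone, all three belong to one customer.

```python
class IdentitySpine:
    """Minimal identity stitching via union-find (illustrative only)."""

    def __init__(self):
        self.parent = {}   # union-find over record IDs
        self.by_key = {}   # identifier value -> first record ID seen with it

    def _find(self, r):
        while self.parent[r] != r:
            self.parent[r] = self.parent[self.parent[r]]  # path compression
            r = self.parent[r]
        return r

    def _union(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[rb] = ra

    def add(self, record_id, identifiers):
        """Register a touchpoint record with its known identifiers."""
        self.parent.setdefault(record_id, record_id)
        for key in identifiers:
            if key in self.by_key:
                self._union(self.by_key[key], record_id)
            else:
                self.by_key[key] = record_id

    def same_customer(self, a, b):
        return self._find(a) == self._find(b)

spine = IdentitySpine()
spine.add("web_1",   ["email:ada@example.com"])
spine.add("store_7", ["phone:555-0100", "email:ada@example.com"])
spine.add("app_3",   ["phone:555-0100"])
print(spine.same_customer("web_1", "app_3"))  # linked transitively via store_7
```

Once records resolve to one customer, the attributes Joyce mentions (loyalty tier, purchase history, preferences) can be attached to that single spine ID rather than scattered across touchpoints.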
Addressing concerns about overloading AI with excessive information, Joyce discusses the balance between providing sufficient data for personalization and maintaining AI efficiency. She introduces the concept of "tool calling" as a method to selectively retrieve and utilize relevant data.
Joyce Gordon [14:06]: "If you're starting to get answers that seem irrelevant, using more of a tool calling or a RAG-like approach could be really helpful at honing in on the responses you're looking for."
The conversation explores the gradual shift from human-in-the-loop systems to more autonomous AI-driven personalization. Joyce outlines the necessary steps, including robust evaluation systems and continuous feedback loops, to ensure quality and reliability in automated personalized experiences.
Joyce Gordon [18:37]: "Most brands are still in the human in the loop phase. If you wanted to move more towards like an always on one to one personalized experience, obviously you need to figure out how all of this fits in your workflow."
Looking ahead, Joyce forecasts a landscape where AI aggregators, akin to Amazon or Perplexity, serve as personalized shopping agents. These agents will interact with various brands to curate and execute personalized purchasing experiences directly for consumers, posing new challenges and opportunities for maintaining first-party customer relationships.
Joyce Gordon [25:00]: "We're going to see more aggregator platforms like the Amazons of the world, the Perplexities, the OpenAIs have their own shopping agents... what that means for brands is maintaining the first party connection is going to become a little more challenging and really important."
The episode concludes with a reflection on the necessity for brands to adapt their strategies to maintain strong customer relationships amidst the rise of AI-driven personalization. Emphasizing the importance of quality, differentiation, and seamless personalized experiences, Joyce and Benjamin highlight the transformative potential of AI in shaping the future of marketing.
Benjamin Shapiro [27:55]: "First party data is going to be much harder to come by if everybody is executing their transactions in the LLMs, even if it means there is more volume, more velocity, less friction, less data. It's going to be a brand new world."
Key Takeaways:
- Fragmented, siloed customer data is the main obstacle to personalization; clean, unified data is the foundation for AI-driven marketing.
- Generative AI drives the cost of content creation toward zero, loosening the creative constraints that have kept one-to-one personalization out of reach.
- Start with a clear use case, build an identity spine to recognize customers across touchpoints, and select the right customer attributes for that use case.
- Tool calling and RAG-style retrieval keep AI responses relevant by fetching only the data a question actually needs.
- Most brands are still human-in-the-loop; robust evaluation systems and feedback loops are prerequisites for more autonomous personalization.
- AI shopping aggregators will make first-party customer relationships harder to maintain, and more valuable to protect.
For more insights and to explore the tools discussed, visit martechpod.com or connect with Joyce Gordon on LinkedIn.