
A
Hello, I'm Andrew Mayne and this is the OpenAI podcast. Today we're talking to Asad Awan about ads in ChatGPT, how they'll look, who will see them and how the company will preserve user trust.
A
So from a consumer point of view, why ads? Why now?
B
It goes back to our mission, which is to bring AGI to all of humanity and to benefit all of humanity. So when you have a consumer product which, you know, 800 million plus people are using, then how do you take the best version of that product to everyone? And ads is one of the most proven models to be able to do that for consumer products. And I think the other part of that mission is how do you benefit all of humanity, which is: you want to take the best model, you want to give the highest usage limits to people. You want the ads to be actually helpful, both to the users and to the businesses as well. So I think it's a very natural fit for a company whose ambition is actually to take the best AI to all of humanity.
A
It's a very interesting decision, because on one hand you could say, hey, we're going to take what we perceive as the high road and say we're not going to do ads. But then we're also not going to give a really good amount of usage on the free tier, and maybe not the most capable models. You could take that approach versus embracing ads.
B
Yeah, yeah. I think if the goal is to truly democratize access, I think ads is a good model. I think maybe what is hidden in that statement is: can ads be bad? And the reality is, how do we think about the principles of ads? How do we actually set a really high bar for what ads should be on this platform? How do we make them actually useful? So when we were starting off, we thought, hey, what would be the core principles that we would announce to the world, that we would be proud of, that we would stand behind, and that as a result create a really great product? So just to give examples of the principles: number one, the answers need to be independent from the ads, both visually but also in how the models are trained and how the system works, so that you can always trust the answer. Like, the whole product is based on trust, so ads actually need to feed into that. The second is your conversations are private. If you have a sensitive conversation, that will never have ads in it, and the conversations are never shared with advertisers. So while we do the matching between the conversation and the best ad that can be a useful thing in it, the advertisers don't get to see that; we do that matching internally. And then of course, as you introduce ads, a big question is, how did you know about this data? That's the difference between user trust and just doing something which is relevant to the user. And our goal was: how do we make something which users can transparently understand, and how can they control it? And we can go into that, because I do think there is a high bar to set over there. You could have some transparency, some control, which most products have, but what would be a really good version of that is something that we've been thinking about. And finally, once you add ads, I think you have to set the incentives for the teams, for the company, in a way that actually continues to focus on user value.
So you don't want to just get empty-calorie time spent on the platform. You want to build a very useful product, and then one good ad is actually good enough. So we don't optimize for time spent on the platform; we focus on the user. So these are the principles. Connecting this back to your question of why we should add ads and how to do it: a part of that is taking it to all of humanity, and ads are the best business model to do that, while preventing all the negative things that can happen if you're not doing it thoughtfully. And I think being upfront with our principles, being very clear that this is how we're starting, and then how we will test, how we will improve, and how we become kind of a learning organization with respect to this. I think that's our goal.
A
So you said basically there's going to be a separation. So if I'm talking to ChatGPT about, like, hey, I want to start drinking smoothies and stuff, it's not going to all of a sudden blurt out, like, well, here's a blender you should buy.
B
Absolutely, yeah. Both in terms of what the model knows: the model doesn't know whether an ad is there or not. If you ask it, like, hey, what's this ad saying? It'll say, I actually don't know. But you can press a button to add it to the model if you want to ask questions about it. Totally, totally.
A
Whatever is being displayed in the ad space, the model has no idea. That's it.
B
That's right. And both visually as well, so that the user can very quickly say, hey, this is the answer that I got from the model, and then there is a bottom banner which has the ad in it, which is very clearly distinct. So visually also you don't confuse the two. Of course, we will learn how that experience evolves, but the goal is to keep the system, the models, very, very separate, and the ads kind of downstream of that.
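The separation described here can be sketched as a toy pipeline: the answer is generated from the conversation alone, and ad matching runs as a separate, downstream step whose output never feeds back into the model's prompt. This is purely illustrative; the names (`Ad`, `generate_answer`, `match_ad`) and the keyword-overlap matching are my own stand-ins, not OpenAI's actual system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Ad:
    advertiser: str
    text: str


def generate_answer(conversation: list[str]) -> str:
    # Stand-in for the model call: the prompt contains only the
    # conversation -- no ad inventory, no ad copy.
    return f"answer to: {conversation[-1]}"


def match_ad(conversation: list[str], inventory: list[Ad]) -> Optional[Ad]:
    # Matching runs internally, downstream of the answer; advertisers
    # never see the conversation, only whether their ad was served.
    for ad in inventory:
        if any(word in turn.lower()
               for turn in conversation
               for word in ad.text.lower().split()):
            return ad
    return None  # no good match -> no ad shown


def render(conversation: list[str], inventory: list[Ad]):
    answer = generate_answer(conversation)    # independent of ads
    ad = match_ad(conversation, inventory)    # separate, downstream
    banner = f"[Ad] {ad.text}" if ad else ""  # visually distinct slot
    return answer, banner
```

Because `generate_answer` never receives the `Ad` objects, the model in this sketch genuinely cannot know what is in the banner, which mirrors the behavior described in the conversation.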
A
Okay, yeah. I think that's a very important distinction, because some people have sort of tried to spin that there's some sort of collusion between the ad part and the model part. But you're saying the model is completely separate.
B
Yeah.
A
So it's interesting. So as I'm having a conversation, something might come up and I can then click and say, okay, tell the model, hey, I saw this. Then it knows what's going on.
B
That's right. Yeah. In the first experience, you explicitly have to press a button that says, ask ChatGPT about this ad. And that would be as if you took a link from the Internet and asked a question about it. So it's almost the same; we don't want to make that experience harder. But if you just say, hey, what is this ad talking about? It'll say, I don't know.
A
It's easy to start now and say, oh yeah, we're going to do the great thing, we'll do it right. But 10 years later, when there's an entire division in charge of ad revenue, you might say, well, do we need the wall between the model and the ads?
B
Yeah, yeah, I think maybe there are multiple angles to this. So one, we are in the business of trust. If we have to say what our core business is, it's to win users' trust and give amazing answers to the questions that they're asking. That's on the consumer product side, and of course on the enterprise side, trust is everything: you're entrusting us with your most important data, and we need to maintain that. Because the ambition and the vision are so expansive, I think trust is the central point of it. We want to have devices which are helpful for you. If we truly want to be your best personal assistant, then you need to be able to share your most important information, but know that it will be dealt with in a way which is how you would treat it yourself. So I think our business model is trust. This is very different than many other scenarios. If you're just doing kind of transactional stuff, like a search query, where you give the question and the answer comes back and that's the end of it, I think that's okay, but it is not a long-term relationship. And if you think of content discovery, it is just pushing things, and trust is not a core component of that. For us, the whole product, whether it's enterprise, whether it's consumer, whether it's devices in the future, they're all centered around trust. So for us it's imperative; for others it could be optional. And I think different companies are known for different things, and we do want to be known for trust. So connecting this to the question, which is: will we drift? I think you can't drift when the incentive is set up to be the best at this, and this is the goal that we want to achieve. Everything else is there to support that vision. But the uber principle is trust.
A
OpenAI has a very huge number of people using the free tier, and then there are also paid subscribers, Pro users, and so on. How are ads going to play out across the platform?
B
Yeah, so ads are shown to people who are on the free and the Go tier, and for Pro and Plus and for enterprise there are no ads. And I think that's an important thing. The context in which the company operates is actually multiple missions which all come together to bring AI to everyone. When enterprises use it, that's a very specific context; there are no ads over there, and there is a specific business model around that. For subscribers who want the best, highest limits and very advanced features, I think that also works very powerfully. But for a lot of consumers, the best way to do that is to have high limits and free usage, and then add ads which are actually useful.
A
Yeah, I've heard people talk about, you know, part of the goal of this is to avoid making the free tier just like the most limited thing available.
B
Absolutely. I think that is the most frustrating thing for a lot of real users: you ask five questions and then it just stops, like in other businesses. Right. I think we want to grow that a lot more, and I think it fits with our overall goal that higher usage limits are better, and how do we fund that and be practical about it.
A
So going a little behind the scenes, how are these decisions made? Like who's in the room talking about this?
B
Yeah, I think this is a good opportunity also to talk a little bit about company culture overall. Different companies have different cultures, which results in different products, and our company has the DNA of a research team. So we have much more rigorous debates, rigorous understanding of how we should make these principles, how incentives work, how the model of this is going to work in a way that doesn't get corrupted later on. We have had a lot of debates on that, which actually resulted in these principles, which resulted in this rubric, with hundreds of roundtables with folks around the company in different areas, not just people working on ads, but everyone in every different part of the company giving feedback to create these principles. Then we convert those into a very simple rubric. The rubric is: user trust is the most important thing. User trust matters more than user value, which is then more important than advertiser value, which is more important than revenue. And while this seems very straightforward, it's actually a very, very in-depth decision. So we can go into it a little bit, just user trust over user value. A good example of that is if I showed you a really good ad and you liked it, you clicked on it, you bought something, but later on you ask the question, was this app listening to me? Is the mic on? That's not user trust. We probably did provide some value, but for us, our goal is that we cannot have that. The users need to believe and understand and control what's happening. So that's just one example. But once you set that right rubric up, then even bottom-up, the team thinks like that. And of course, as we have different decisions at different levels, we have a pretty rigorous process: how we discuss privacy within the company, how we discuss safety within the company, and there are very clear forums for that.
And then of course, as we make decisions at the leadership level, we first go back to the simple rubric, because although the rubric is simple, it's actually pretty in-depth and very discriminating. If you think about these kinds of questions, it's like: should we show an ad that's so good, but the users don't know where this data came from? That's creepy. And okay, if it is creepy, even a good ad is not good. So I think maybe it follows from there.
A
What am I going to see on my end? What kind of controls do I have? How is personalization going to work?
B
So I think a big part of actually delivering really good ads is allowing personalization. So when I say I want to go on a trip to Yosemite, it then shows me camping gear, because that's what I like to do. But of course, the flip side of that, how do you gain user trust, is: how did you know about this? How did you learn about this? So one aspect is transparency: you can see what data we have on you which is being used for ads. The second is the controls, which let you say which part of the data from your past chats can be used. Of course, sensitive chats are never used. But you can clear your data, which actually nobody else does, which is kind of a crazy concept: you can clear your data, so we don't know it and we won't use it. You could say, don't use my past chats, if that's what you care about. Or you could say, turn off personalization fully. Of course, there is the other extreme, which is: I don't want ads. That's a form of control, and that's where upgrading to the Pro or Plus version to completely stop ads comes in. So there's a whole spectrum. If it's, I really care about this, I don't think this is the right business model, then Pro and Plus is the right business model for you. If it's, hey, I don't want what we were talking about yesterday used, I'll just clear my history, great, do that. Or it's, hey, I'm more comfortable with clicks on the ads being used, but not my past conversations, and you could actually do that as well. Hopefully people will learn through experience how it improves their experience. We have a very high bar in how we use these things. But in the end, the users need to know and be able to control that.
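The control surface described above can be summarized as a small settings object: per-user toggles for past-chat usage and personalization, a visible list of learned signals, and a hard rule that sensitive chats are never recorded regardless of settings. This is a hypothetical sketch; the class and field names are mine, not a real API.

```python
from dataclasses import dataclass, field


@dataclass
class AdPersonalization:
    # Hypothetical control surface mirroring what's described in the episode.
    use_past_chats: bool = True   # "don't use my past chats"
    personalize: bool = True      # "turn off personalization fully"
    signals: list = field(default_factory=list)  # data visible to the user

    def record(self, signal: str, sensitive: bool) -> None:
        # Sensitive chats are never used for ads, regardless of settings,
        # and nothing is recorded when personalization is switched off.
        if sensitive or not (self.personalize and self.use_past_chats):
            return
        self.signals.append(signal)

    def clear_data(self) -> None:
        # "You can clear your data" -- wipe everything learned so far.
        self.signals.clear()
```

The key design point is that sensitivity overrides user settings: even a fully opted-in user never contributes sensitive conversations to the signal store.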
A
What will be kind of the expectation for how many ads I'm going to see or how often this would come up?
B
Maybe the uber principle still goes back to: in that context, is there a good ad to show which is useful? If there is not, we'd rather not show you anything. In fact, as we roll out this test, you'll see that there will be very few ads, because we want to be conservative and we want to learn where to insert them. But the principle is really around: is it useful? Is it helpful? Does it add to what the user is doing? And can we actually show a really good product as well? So keep the quality of the content very high, keep the quality of the ad very high, keep the relevance very high. If we can't find a good match, it's fine; we don't need to show an ad.
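The "one good ad or no ad" policy amounts to a selection rule with an abstain option: pick the single best candidate only if it clears a high relevance bar, otherwise show nothing. A minimal sketch, assuming a hypothetical per-candidate relevance score (the threshold value is arbitrary):

```python
from typing import Optional


def select_ad(candidates: list, quality_bar: float = 0.8) -> Optional[dict]:
    """Return the single best ad only if it clears a high relevance bar;
    otherwise return None (one good ad, or no ad at all)."""
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c["relevance"])
    return best if best["relevance"] >= quality_bar else None
```

The abstain branch is the point: unlike a system optimizing impressions, the default outcome when relevance is mediocre is to serve nothing.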
A
You mentioned sensitive conversations. How do you know whether something is sensitive or not?
B
So that's actually one of the big strengths of OpenAI. Both for our organic work and beyond, a lot of research in the company has gone into defining what's sensitive: health, politics, violence, many different kinds of verticals, with very, very in-depth definitions of each. And then of course we use some of the best models to actually predict and understand the conversation and mark it as sensitive or not. I've actually never seen such high precision in any product so far in my career as what we have been able to build here by taking in those policies. There's a team which works on defining those policies very rigorously and then sharing them with internal and external partners for review. And then of course there's the enforcement that comes from the prediction system, which says, hey, this matches this policy, so don't do that.
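Structurally, the flow described is: policy definitions per vertical, a classifier that maps a conversation to a vertical (or none), and an enforcement check that gates ads on the result. The sketch below uses a toy keyword lookup where the real system uses model-based prediction; the topic lists are invented examples, not the actual policies.

```python
from typing import Optional

# Toy stand-in for the rigorously defined policy verticals.
SENSITIVE_TOPICS = {
    "health": {"diagnosis", "symptom", "medication"},
    "politics": {"election", "candidate", "vote"},
    "violence": {"weapon", "assault"},
}


def classify_sensitive(conversation_text: str) -> Optional[str]:
    # Real enforcement uses model prediction against the written policies;
    # here a word-set intersection plays that role for illustration.
    words = set(conversation_text.lower().split())
    for vertical, terms in SENSITIVE_TOPICS.items():
        if words & terms:
            return vertical
    return None


def ads_allowed(conversation_text: str) -> bool:
    # Sensitive conversations never get ads and are also excluded
    # from ad matching and personalization.
    return classify_sensitive(conversation_text) is None
```

Note that the gate is binary and unconditional: a sensitive classification suppresses ads entirely rather than lowering a score.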
A
We've talked a little bit about this, but I'd like to touch on it again: design. So where is that going to head? What are the ads going to look like?
B
As we were designing this product, of course we set up a very clear principle that the answers are separate from the ads. And then the question was, how does that actually look in the product? On that spectrum, on one side is: how do you make it look native so that it's not jarring? And on the other side there is the question: how can it be very clearly separated out? You can debate both of them, and there is value in both. We wanted to set up the experiment in a way that lets us learn as we go, so we take the conservative option and still keep that principle in mind, and as we learn, through building the product and getting the data, we evolve it. But the idea still is: how can we maintain that principle of the answers being very clearly separated from the ads, with very clear understandability and visual distinction? I do think we will evolve the formats, and I think they will get even more useful and better over time. But that principle is constant, and within the options that we had, we started with something which is clearly separated out and conservative.
A
So you explained, kind of at a technical level, how there's a separation, how the model doesn't see it. But also, for guardrails and stuff, and I think you mentioned this before: if I'm talking about, you know, saying, hey, I'm afraid of this trip, and it's like, well, hey, how about some life insurance? That's not going to happen. But how do you put in guardrails? And how do you decide what ads are appropriate and what aren't?
B
So maybe there are two questions in there: what's an appropriate ad or not, and in which context, which is a reasonable one; and second, what are the controls in place so that, over time, this doesn't dissolve? I think part of announcing our principles and being very clear internally about our rubric was to actually set that up in the first place. Then automatically a lot of the governance within the company, how we make decisions, follows from that. So it's like, hey, I want to make this change to the product: does it fit with these principles? Does it fit with this rubric that we have already set up? That's the first pass. The sensitive context is something that we take very seriously as well, very simple things like conversations around health or politics or other contexts where ads don't fit. And that data is not going to be used for making ads, or even matching ads; you just filter it out. So the first layer is really: does an ad belong here? If it does, and it can be helpful and additive, then add it. This goes back to the principles: you don't ruin the experience, for users or for businesses, by showing many ads, because you don't want advertisers to pay randomly for impressions, and you don't want users to see too many ads. You want to show the one right ad. And being one of the best AI companies, I think that's hopefully something we'll do really well.
A
Every time we do an episode, we get a few people who go in, the comments are like, no ads, no ads, no ads. Now's your chance to talk to those people directly.
B
Yeah, I think in some sense, when people say no ads, there is a perception, and it's not wrong, because of maybe how the industry has evolved, that there is some suspicion around how this works. So I do think it is incumbent on us to come up with better principles, better clarity, better rules on how we are going to do this. This whole online ad industry is maybe 20 years old, compared to many other industries which are hundreds of years old. So maybe we are in the third inning of this, where we are saying, okay, we have learned from all of these questions and problems that people have. When people say no ads, I do believe that they have valid questions and concerns around privacy, and it's on us to do a really good job to earn their trust through better transparency, through better control, through building something that is also delightful. I think there will still be skeptics, and then we have a way to upgrade, because I think that's a valid choice as well. But enabling really good ads with good principles, I think it's possible. A big part of it is having really strong AI to power these ads too, so that they are actually useful, and as a result bringing this product to so many people with higher limits.
A
Some of your competitors have been having a little bit of fun at the idea of ads.
B
Yeah. I think different companies have different missions. Our mission is to take AI to all of humanity, and of course we have different contexts. We have the enterprise business, we have our subscription business, and we have a very, very huge consumer base using our product. So within that context, we need to serve each one of them. We will have a really robust enterprise business, and there will be no ads over there, and then we'll have a very robust consumer business, and ads will help us grow within that. So if that's not your mission, maybe it doesn't make sense, but our mission is to actually build in all of these contexts, and we believe they're all related in how we build the best AI and then actually take it to everyone. And the good part is that we have different verticals in the business. It's not just an ads company. There are some companies which are purely ads companies, and their incentives are actually different. But I think we have a much more holistic view on this.
A
And also, when you're not serving hundreds of millions of free users, it's easier to sort of say, eh, we don't have to do this.
B
I don't think it's a vision set in the abstract. This is truly a vision of how AI actually helps people. And if there is this elitist view that some people get to use it and some don't based on who can pay, I think that itself is a pretty big fork in the road in terms of how AI can be valuable to people. Our position is pretty much that everybody needs to have access to the best AI.
A
I have friends at small businesses and they are always trying to figure out how to promote themselves and do that. Could you explain from that point of view, like, what it's going to be like for people who are actually trying to reach new audiences?
B
Yeah, I think that's such a good question. Literally, I have a few friends who started an e-commerce company selling shoes, and they did almost everything on their own as founders: go to the factory, get this done, get the logistics done. But when it came to ads, they actually had to hire three performance marketers to do the work, because it's so cumbersome, so analytical, and if you don't do it right, you could end up wasting a lot of money. So I do think the vision has to be that it's almost as easy as prompting is today. You could say, my goal is to sell these shoes more in the Midwest, and go. And then it comes back with, hey, I tried some experiments and I think this is the right bid given your price point, this is the right way to think about it, do you want to spend more money on this? And then you continue that conversation, and it almost becomes an agent for that. But today, literally, a small business has to hire performance marketers, which could be one of the biggest costs in some sense; just that cost of running ads is one of the biggest costs in there, which of course makes things more expensive. So the vision would be that it is as easy as steering and telling it what you need from your business, describing the what, but not having to think about how it will work, how many campaigns, how many dollars, and everything else. It's like, hey, I want to spend this much, I want to grow my business this much, these are the constraints, and ads are created and run to match your constraints.
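The "describe the what, not the how" idea above can be made concrete as a campaign brief that an agent turns into a plan. Everything here is hypothetical: the brief fields, the fixed 10% experiment split, and the plan shape are invented to illustrate the division of labor, not any real product.

```python
from dataclasses import dataclass


@dataclass
class CampaignBrief:
    # The business describes the "what"...
    goal: str              # e.g. "sell these shoes more in the Midwest"
    budget_usd: float
    constraints: list      # e.g. ["max cost per sale $20"]


def plan_campaign(brief: CampaignBrief) -> dict:
    # ...and an agent works out the "how": bids, targeting, creatives.
    # Toy stand-in: reserve a slice of budget for experiments ("I tried
    # some experiments and I think this is the right bid"), spend the rest.
    experiment = round(brief.budget_usd * 0.1, 2)
    return {
        "goal": brief.goal,
        "experiment_budget": experiment,
        "main_budget": round(brief.budget_usd - experiment, 2),
        "constraints": brief.constraints,
    }
```

The point of the shape is that the small business never specifies bids or campaign mechanics; those live entirely on the agent's side of the interface.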
A
Yeah, it's a very interesting way to think about it, because auctions were revolutionary: the idea that you could just go in there and say, I want to put these words out there, and pay to do it. But that created an entire ecosystem of expertise and work that you have to do, and it's really hard for small businesses to try to play in that space.
B
Yeah, I think as the competition on that increased, the people who had more time and money to spend on optimizing it, analyzing the data, and then running the best possible ad got the benefit from that. An example of an actual brand is Allbirds. It competed with really big brands on shoes, but somehow they found that every designer in a tech company was going to love their shoe. Finding that niche and then actually being able to craft your creatives, your message, to focus on that made them win. If you go to Silicon Valley, you'll see Allbirds everywhere because of that. But that is not accessible to everyone. If you are very analytical and you have a whole team of people who think about that, you could do it. But theoretically, the best products can come to life if we can find where the right niche distribution for them is and really go for that. Another story on this: there is a company which makes a vegan instant ramen, which I love because I don't have to feel bad about eating it. But it's such a weird concept, vegan instant ramen. If I was just thinking about it without knowing about this company, I'd say, this can't exist, who would want this? But a really good product can help you find and discover those niche audiences, and then you build a really good product. So I think it enriches everyone's lives from that perspective, by enabling the creation and selling of those products. Maybe it's not a multibillion-dollar company, but that's great; it's still a really big SMB which is growing, and it serves the people who care about that very specific problem. So I think it really enriches people's lives if you are able to create products for these niches.
A
What does this look like in the future, where we're using things in a more agentic way? How do ads even work 10 years from now?
B
I think a next step would be more actual conversational ads, where you could truly understand what the product is about. The next version would be: can it work behind the scenes and actually aggregate the best discounts, the best deals, and the best version of the product? For example, if I like ramen, and somehow ChatGPT has understood that preference of mine, then it could find that for me. I didn't even know that that product exists. Then behind the scenes it could actually say, oh, actually, I found a vegan ramen, maybe that's something that's valuable. And of course there is a marketplace where somebody could say, hey, help people who are like this discover it, because discovery goes in both directions, right? I'm searching for something, and people want me to discover something, and there's a match between those. So I think it will be more agentic in the future, but at least with the current modalities, I think we start from there, improve it, and make it relevant, controllable, understandable, trustworthy. And as the systems evolve, as the native, organic products evolve, this will evolve as well.
A
Excellent. Well, Asad, thank you for explaining this and look forward to seeing what's going to happen next.
B
Awesome. Thanks for having me.
Date: February 9, 2026
Host: Andrew Mayne
Guest: Asad Awan
This episode explores OpenAI's decision to introduce ads into ChatGPT, focusing on the reasoning, guiding principles, user experience, and future directions. Andrew Mayne interviews Asad Awan, who provides an insider perspective on how ads will be implemented, how they'll look and feel to users, the controls in place to protect privacy, and how OpenAI is orienting its business model around user trust and broad accessibility.
On Trust as Core Value:
“Our business model is trust. This is very different than many other scenarios.” – Asad ([05:39])
On User Control:
“The users need to believe and understand and control what’s happening.” – Asad ([08:57])
On Ad-Model Separation:
“The model doesn’t know whether an ad is there or not. If you ask it, ‘Hey, what’s this ad saying?’ It’ll say, ‘I actually don’t know ...’” – Asad ([04:12])
On Advertising for Small Businesses:
“The vision has to be that it's almost as easy as prompting is today ... Do you want to spend more money on this? ... It almost becomes an agent for that.” – Asad ([20:38])
On Future of Ads:
“The next version would be can it work behind the scenes and actually aggregate the best discounts … Like, for example, if I know that I like ramen ...” – Asad ([24:23])
On Responding to "No Ads" Feedback:
“I do believe that they have valid questions and concerns around privacy; it’s on us to do a really good job to earn their trust through better transparency, through better control, through building something that is also delightful.” – Asad ([17:41])
The episode spotlights OpenAI’s determination to expand access to advanced AI while addressing user trust and privacy. Asad Awan explains the company's measured and user-centric approach to deploying ads, the meaningful controls and transparency offered to users, and how high standards underpin every step—promising an advertising model that could mark a significant break from industry norms. As OpenAI tests these ideas in the real world, it remains committed to adjusting and evolving based on feedback, striving to prove that ads and user trust can coexist in service of making AI truly universal and beneficial.