
CEO Prashanth Chandrasekar on running the most popular developer forum in a post-ChatGPT world.
Sponsor Announcer
Support for this show comes from MongoDB. You're a developer who wants to innovate. Instead, you're stuck fixing bottlenecks and fighting legacy code. MongoDB can help. It's a flexible, unified platform that's built for developers by developers. MongoDB is ACID-compliant and enterprise-ready, with the capabilities you need to ship AI apps fast. That's why so many of the Fortune 500 trust MongoDB with their most critical workloads. Ready to think outside rows and columns? Start building at mongodb.com/build.

Support for this show comes from Amazon Ads. It's time for some fresh tech consumer insights, and Amazon Ads just released a study that's challenging what we thought we knew about electronics shoppers. Remember when we assumed everyone bought headphones out of necessity? Turns out, according to their new data, only 54% do. The rest? They're impulse buying or chasing the latest new product launch. For brands, this means rethinking their approach to reaching customers throughout the purchase journey, from building awareness to capturing those crucial purchase moments. Ready to rethink your strategy? Head to advertising.amazon.com to learn more.
Sponsor Announcer 2
Support for this show comes from SC Johnson. We've all been there. Choosing not to wear your new white shoes because there's a 10% chance of rain. Bending awkwardly over the tiny coffee table to enjoy a sip of your latte. Not ordering the red sauce. Those feelings of dread are what we call stainxiety. But now you can break free from your stainxiety with Shout's Triple Acting Spray, which has stain-fighting ingredients to remove a huge variety of stains so you can live in the moment and clean up later. Just breathe and Shout with Shout Triple Acting Spray. Learn more at shoutitout.com.
Host (Nilay Patel)
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today I'm talking with Prashanth Chandrasekhar, who is the CEO of Stack Overflow. I last had Prashanth on the show in 2022, one month before ChatGPT launched. And while the generative AI boom has had tons of impact on all sorts of companies, it immediately upended everything about Stack Overflow, and in an existential way. Stack Overflow, if you're not familiar with it, is the question-and-answer site for developers writing code. Before the AI explosion, it was a thriving, major community where developers asked for and received help with complicated problems. But if there's one thing AI is good at, it's helping developers write code. Actually, not just write code, but develop entire working apps. On top of that, Stack Overflow's forums themselves were flooded with AI-generated answers, bringing down the quality of the community as a whole. You'll hear Prashanth explain that it was more or less immediately clear how big a deal ChatGPT was going to be, and his response was pure Decoder bait: he declared a company emergency, reallocated about 10% of the staff to figure out solutions to the ChatGPT problem, and made some pretty huge decisions about structure and organization to navigate that change. Three years later, Prashanth says Stack Overflow is now very comfortable primarily as an enterprise SaaS business, one that provides AI-based solutions tailored to different companies' internal systems. Stack Overflow also operates a big data licensing business, selling data from its community back to all those AI companies, large and small. That's a pretty big pivot from being seen as a place where everyone can go to just get help with their code. So I had to ask him: does Stack Overflow even attract new users anymore in 2025, when ChatGPT or Claude Code can just do it all for you? Prashanth said yes, of course.
And you'll hear him explain that while AI can handle simple problems, for thorny, complex problems you really want to talk to a real person, which is where Stack Overflow still brings people together. You'll hear us come back to a single stat in particular: more than 80% of Stack Overflow users want to use AI or are already using AI for code-related tasks, but only 29% of them actually trust AI to do useful work. That's a huge split, and it's one that I see all over in AI right now. AI is everywhere, in everything, and yet huge numbers of people say they hate it. We hear this feedback in the Decoder inbox, in the comments on The Verge, and on our videos on YouTube. Everyone says they hate AI, but the numbers don't lie about how many millions of people are using it and apparently deriving some benefit. It's a big contradiction and it's hard to unpack, but Prashanth was willing to get into it with me, and I think you'll find his answers and his insight very interesting. Okay: Prashanth Chandrasekhar, CEO of Stack Overflow. Here we go.
Interviewer (Nilay Patel)
Prashanth Chandrasekar, CEO of Stack Overflow, welcome to Decoder.
Prashanth Chandrasekhar
Wonderful to see you again. It's been, as you said, a hot minute. Three years, I think, since we last spoke. So great to see you again.
Interviewer (Nilay Patel)
I should have said, welcome back to Decoder. You were last on the show in October 2022. One month later, ChatGPT launched.
Prashanth Chandrasekhar
That was an interestingly timed interview. Right before the world changed.
Interviewer (Nilay Patel)
Right before the world changed. Software development is certainly the thing that has maybe changed the most since the AI models hit. There are a lot of new products in your universe to talk about, and there's what Stack Overflow itself is doing in the world of AI. So I want to talk about all of that, but first, just take me back to that moment. We had spent an entire conversation in 2022 talking about the community and moderation, how you were going to build a funnel of people learning to code, learning to use Stack Overflow. That was a big part of our conversation. The pipeline of engineers, both learning to write software and then being a part of the software development community, that was very much on your mind. And then all of software development changed because of the AI tools. So just describe that moment for me, because I think it contextualizes everything that happened afterwards.
Prashanth Chandrasekhar
It was definitely a very, very surprising moment, though not, in many ways, an unexpected one, because here comes this technology that obviously some people knew about, but not in a way that captured everybody's imagination using this beautiful interface. We were in the middle of wrapping up our calendar year, and at that point it was, here we go. We were thinking about our priorities for the next year, and it became very clear what we needed to focus on, because this was obviously going to be this very, very huge change to how people consume technology. And this is, you know, welcome to technology: it's constantly changing. And this wave especially, I think, is completely unprecedented. I don't think there was any analogy or any prior wave that I could look to, including the cloud and maybe the Internet. We're still fully digesting what it is at the moment. But I would say, yeah, we went into what is the equivalent of a code red situation inside the company. It was an existential moment, especially for our public platform, because the primary job to be done, if you will, is all around making sure people got answers to their questions. And here you go, you have this really, really slick natural language interface that allows you to do that on a moment's notice. So we had to organize our thoughts, and what I ended up doing was carving out 10% of the company's resources to very specifically focus on a response to this. And we set a very specific date to respond by in a meaningful fashion. We said, hey, in the summer of 2023, I was going to go speak at the We Are Developers conference in Berlin.
And I effectively told the company, hey, we've got six months to go and produce our response, at least our initial response, because obviously this is going to keep iterating and so on. And that's how we mobilized the company. We acknowledged it was a code red moment, and we carved out a team of 10%. That was about 40 people or so; we're somewhat of a medium-sized company. And then we got to work. And that was the moment.
Interviewer (Nilay Patel)
Take me inside that room. Very few people ever get to send the memo that says, it's a code red. Right. This is not a thing most people ever get to do. I mean, maybe you think about doing it, but maybe no one's going to read your memo. Everyone has to read your memo. You're the CEO.
Prashanth Chandrasekhar
Yeah.
Interviewer (Nilay Patel)
Take me inside that room where you say, okay, I have identified an existential threat to our company. People have come to us for answers to software development questions. Again, the last time you were on the show, you were talking about the idea that there were objective right answers to software development questions.
Prashanth Chandrasekhar
Yeah.
Interviewer (Nilay Patel)
And that the community could provide them and vote on them. Well, now you've got a robot that can do it, and can do it as much as you want, as long as you want. And now, with tools like Cursor, it can maybe just do it for you, right? With tools like Claude Code, it can maybe just run off and do it for you. Okay, so you've got that, and you say, I need to take 10% of the company. I'm curious how big the company is. I know there have been some changes, but if 10% of the company is 40, 50 people, how did you identify, okay, this is the moment I need to pull these people in the room? I'm making this decision, and the right answer is that 40 or 50 people are going to set aside their time to deliver me a plan by the time I give my next keynote.
Prashanth Chandrasekhar
The instinct to do that came from a couple of different experiences. My experience right before this was at Rackspace, in the cloud services space, and the business I was running at Rackspace was all around how you respond to Amazon Web Services as a cloud technology threat. I was on the team that built that business from the ground up, and it was effectively 10% of Rackspace's population that went and created it. So I had some practice in what it means to see and respond to a disruptive threat that you're encountering. This was my turn to put that into motion at Stack, by appointing somebody to do exactly what I had done at Rackspace. The other data point: if I go all the way back a couple of decades ago, or more than a couple of decades, to when I was in business school, my professor was Clayton Christensen, who wrote The Innovator's Dilemma. I go back to that because I have always thought about it in the context of technology, because it's a very consistent theme in technology, leave alone other industries, that every so often you will have disruptive threats, and there's a very specific way in which you need to respond. History suggests that you should carve out an autonomous team that has very different incentives and can pursue things in a very different way relative to the rest of your business. And remember, our company, Stack Overflow, really has two parts. We have our public platform, which faced this big disruption, which we should talk about more broadly, from the Internet. But the other side of our business is the enterprise business, where we're serving large companies with our private version of Stack Overflow inside those companies. Thankfully, people there continue to see value in having a knowledge base that's very accurate.
And increasingly over the past few years, it's actually become even more valuable, because you need really great context for AI agents and assistants to work. And I've got plenty of examples; we could talk about that. So that's where that response came from, Nilay. Instinctively, I had been through it in a couple of different dimensions prior to that. And in terms of how I communicated to the team, the memo was actually a series of memos. Every Friday I send a company email; I just sent one right before I got on here. And I am pretty transparent in that: here's what's on my mind, here's what we should be doing, here are some great things that happened, here are some people who demonstrated core values. I've done that religiously for the six years I've been at the company. I do it every Friday, so the team basically knows what's on my mind. It wasn't one big memo that activated everything; it was a series of emails leading up to this moment, saying, here's what we've got to respond to, here's what we're thinking about now, and so on and so forth, until I could put the flagpole down and say, hey, by the We Are Developers conference, we have to produce a meaningful response, both on the public platform as well as on the enterprise front. Because there was obviously this great opportunity to integrate AI into our SaaS application, and that's a different vector also. So hopefully that helps.
Interviewer (Nilay Patel)
Did you actually type the words "code red"?
Prashanth Chandrasekhar
The equivalent. I think I used "disruptive," I used "existential moment," I used all those things, but I don't know if I used exactly the words "code red."
Interviewer (Nilay Patel)
I just think about that moment where you're like, all right, I'm going to hit the C and the O. I'm saying these words. It's happening.
Prashanth Chandrasekhar
Yeah. We have a very specific communication cadence with the company, obviously, like others, and the tone and the seriousness of what we were working on was very obvious to people. Especially when you carve out resources and you take people away from certain teams, people are going to ask, wow, what about my stuff? And here you go, this is the reason, right? So it becomes very obvious.
Interviewer (Nilay Patel)
How did you make those decisions to pull people away? How did you decide which people? How did you decide which teams?
Prashanth Chandrasekhar
Certainly, this is a hard problem to solve, so you want very talented people, but also certain types of people: people who are willing to break glass or go against the grain and not be encumbered by historical norms. So we very specifically picked a combination of people. The people leading it were newer people who had come from outside the company, because, remember, we were going through a transformation. I joined a company that was engineering-led in 2019, all about this public platform, and we were transforming into a product-led organization. So we appointed somebody who was very specifically a newer person from the outside, who was interested in building highly innovative, fast-iterating products and had that sort of DNA and that sort of drive to do it. I also personally stayed much closer to it; in fact, I ran product for an interim period of time myself, with that person reporting directly to me. That was another way to stay very, very close to what was happening on the ground until the actual launch. And the rest of the team was a combination of very talented engineers, designers, and some people who had context on how the site worked in the past and could provide us with all the unlocks we needed.
Interviewer (Nilay Patel)
I think about Stack Overflow in probably too-reductive terms in this context. You have inputs and you have outputs, right? The inputs are users answering questions; the outputs are the answers to those questions when people come and search for them. There's a whole community that makes that system run. The software platform sort of manages that community, and then you've got moderators, but it's really inputs and outputs, right? There are people who are asking questions and people who are answering questions. Both sides of that are deeply affected by AI. And I think this comes to the open web part of the conversation, where the input side is being flooded by AI-generated slop. In 2022 you had to ban AI-generated answers on Stack Overflow. And then on the output side, the ability for AI tools to just supply the answers is overwhelming. So let's just break it into two parts. How did you think about the input side, where there's going to be a flood of people saying, oh, I can answer these questions faster than ever by just asking ChatGPT and pasting the answer in? And maybe that's not good enough, but I can just do it. And then how did you think about the output side?
Prashanth Chandrasekhar
We noticed two things right out of the gate. One was that the number of questions being asked and answered on Stack went through the roof, because people started using, to your point, ChatGPT to answer these questions, and that fueled this kind of spike, which is kind of counterintuitive. But I think people just felt like, hey, wow, I can game the system, so let me just go do it. And very quickly, because our community members are extremely shrewd and amazing at figuring out what's real and what's not, they were able to call out that these posts were actually ChatGPT-generated. And that's what initiated the ban, which we completely supported and still support, by the way. So you still cannot answer questions on Stack Overflow with AI-generated content. And the reason for that, Nilay, is that our proposition is effectively to be the trusted, vital source for technologists. That's our vision for the company. So for us, it's all about making sure that there are a few places you can go where you're not dealing with AI slop, where a community of experts has actually voted up and curated the content in a way that you can trust for various purposes. So on the input side, it made sense to do that, and we continue to do it. Fast-forward a little bit now: even though we've had high standards to ask a question on Stack Overflow, we've created all sorts of new entry points into the site. There's our AI Assist feature, which we just launched in GA earlier this week and which has been super exciting to watch users use; it's effectively an AI conversational interface grounded on our 90 million questions and answers. Then there's the ability for people to ask subjective questions, going back to our last conversation three years ago.
Now people are able to ask just open-ended questions, because there's a place for Q&A, which is the canonical answer to a question, and so on, but there's also a place for discussion and conversation, because there's so much changing. It's not like all the answers have been figured out, so let's actually make sure people have the ability to do that. And that's aligned with our mission of cultivating community, which is one of the three parts of our mission, the other two being powering learning and unlocking growth. So we have done all these things to make sure that we're not restrictive on the entry point and the question-asking experience. The other thing we realized, on the answer side, is that it's very important to go wherever the user is spending time. So now that the world has changed and people are in fact using Cursor and GitHub Copilot and anything else to write their code, and given that our goal is to be the vital source for technologists, let's show up wherever our users are. So we've actually become a lot more headless more recently. We've launched, for example, MCP servers for both our public platform and our enterprise product. And what people are using our platform to do now is not only invoke those MCP servers, let's say from Cursor when they're writing code, and ask, what's the difference between version one and version two, but also to write back, which is very unique in the industry: to write back to our platform straight from Cursor if they want to engage on getting a deeper answer, and so on. And so that's been our product principle: just go anywhere the user is. But ultimately we just want to be the source, whether it's inside companies or outside companies, the trustworthy, vital source for technologists.
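For readers curious about the mechanics behind this exchange: MCP clients such as editors talk to MCP servers using JSON-RPC 2.0 messages, so the "ask a question from inside Cursor" flow boils down to frames like the one sketched below. This is a minimal illustration of the protocol's shape; the tool name `search_questions` and its arguments are hypothetical, not Stack Overflow's actual MCP schema.

```python
import json

def make_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request. MCP messages are JSON-RPC 2.0,
    so a client sends frames shaped like this to the server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name: ask the knowledge base to compare two versions.
req = make_tool_call(1, "search_questions", {
    "query": "difference between version 1 and version 2",
})
frame = json.loads(req)
```

The server would answer with a JSON-RPC `result` carrying the matching questions and answers; the "write back" direction Prashanth mentions would be another tool call going the other way.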
Interviewer (Nilay Patel)
How do you monetize in a world where you're headless, right? Where you're just another database that someone's querying from Cursor. How does that make you money?
Prashanth Chandrasekhar
Our enterprise business, as I mentioned, is what we call Stack Internal, which is now used by 25,000 companies around the world. Some of the world's largest organizations, banks, tech companies, retail companies, use this product to share knowledge internally. And now, increasingly, they're able to use that trustworthy knowledge to power their AI assistants and AI agents to go do various things. A good example of this is Uber, which is a customer of ours and has something called Uber Genie. With Stack Overflow Internal, they have thousands of questions and answers on our platform. Uber Genie plugs into that content through our APIs, and then it's able to go into things like Slack channels and automatically answer questions and drive productivity that way, so you're not bothering people, right? It's rooted in the context that's in the organization's knowledge on our platform. That's our primary business, the enterprise business. The second business, which we actually built only over the past couple of years, is our data licensing business. One of the things we noticed was that a lot of the AI labs were obviously leveraging our data for LLM pre-training and post-training needs, and for retrieval-augmented generation and indexing. We put up a whole bunch of anti-scraping measures, we worked with third-party companies, and very quickly we got calls from a lot of them saying, hey, we need access to your data, let's work together to formally get access.
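The Uber Genie pattern described here, an assistant that answers chat questions only from a vetted internal knowledge base rather than guessing, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not Uber's or Stack Overflow's actual implementation: the matching is naive word overlap, and every name in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QA:
    question: str
    answer: str
    score: int  # community votes: the trust signal behind the answer

def answer_from_knowledge_base(kb: list[QA], incoming: str, min_overlap: int = 2):
    """Return a knowledge-base answer only when an entry matches the incoming
    chat question well enough; otherwise return None so the bot stays quiet
    and a human can be pinged instead. Matching is naive word overlap,
    purely for illustration."""
    words = set(incoming.lower().split())
    best, best_overlap = None, 0
    for qa in kb:
        overlap = len(words & set(qa.question.lower().split()))
        # Prefer higher overlap; break ties by community score.
        if overlap > best_overlap or (best and overlap == best_overlap and qa.score > best.score):
            best, best_overlap = qa, overlap
    if best is not None and best_overlap >= min_overlap:
        return best.answer
    return None

# Hypothetical internal knowledge base and an incoming Slack-style question.
kb = [
    QA("how do I rotate service credentials", "Use the vault CLI to rotate.", 12),
    QA("how do I deploy to staging", "Trigger the staging pipeline.", 5),
]
reply = answer_from_knowledge_base(kb, "how do I rotate credentials for my service")
```

The key design choice the pattern captures is declining to answer when the curated knowledge base has no good match, which is what keeps the bot grounded in vetted content instead of producing slop.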
And so we did that, and now we've struck partnership agreements with effectively every single AI lab you can think of, and for the most part every cloud hyperscaler you can think of, companies like Google, OpenAI, all these folks, and even partnerships with a long tail of companies like Databricks and Snowflake, which are more on the secondary side, even though they're not doing LLM pre-training. That's been our second business more recently. And the third one, which is the smallest part of our company, is advertising. I think most people assume that Stack Overflow is supported entirely by advertising, but it's only about 20% of our company's revenues. We have, again, a very captive, very important audience of developers who do spend time on the site, and so we have large advertisers that want to get their attention for various products. And in fact, now is a time when there's a lot of competition, so they want to do that increasingly. So that's how we make money. In the context of becoming headless: our enterprise product just works on a subscription and kind of hybrid pricing, so that's how we make money there. Data licensing is similar, in that if they want access, they've got to pay for it. And then, yes, advertising is limited to some of the largest companies, and they pay us for that. But it's always going to be an "and" versus an "or," right? We're not going to be completely headless; I think we just want to give the user the option to be headless. Plenty of people still come to the site, and so we're able to balance that out.
Host (Nilay Patel)
We have to take a short break.
Interviewer (Nilay Patel)
We'll be right back.
Sponsor Announcer
Support for this show comes from Vanta. Customer trust can make or break your business. And the more your business grows, the more complex your security and compliance tools get. It can turn into chaos, and chaos isn't a security strategy. That's where Vanta comes in. Think of Vanta as your always-on, AI-powered security expert who scales with you. Vanta automates compliance, continuously monitors your controls, and gives you a single source of truth for compliance and risk. So whether you're a fast-growing startup like Cursor or an enterprise like Snowflake, Vanta fits easily into your existing workflows so you can keep growing a company your customers can trust. Get started at vanta.com/decoder. That's v-a-n-t-a dot com slash decoder.

Support for Decoder comes from Quo. It's hard to overestimate how important customer communication is for a business. A quick response can be the difference between making a sale and not making a sale. A quick and personalized response can potentially win you lifelong loyalty. Quo, spelled Q-U-O, is a business phone system that makes sure you never miss an opportunity to connect with your customers. Quo works right from an app on your phone or computer. Your whole team can share one number and collaborate on calls and texts like a shared inbox. And Quo is not just a phone system, it's a smart system. Their built-in AI logs calls, writes summaries, and even sets up next steps. And if you can't answer the phone, Quo's AI agent can qualify leads, route calls to the right person, and make sure no customer is ever left hanging. That's why over 90,000 businesses, from solo operators to growing teams, are using Quo to stay connected and look professional. Try it for free when you go to quo.com/decoder. That's Q-U-O dot com slash decoder. You can even keep your existing number. Quo: no missed calls, no missed customers.
Prashanth Chandrasekhar
Fox Creative.
Interviewer (Nilay Patel)
This is advertiser content from snapdragon now.
Sponsor Announcer 2
Brandon flight 247. Maria, hi. I'm almost to my gate. Yep, I'm flying home for the holiday. You need me to hop on a client call right now?
Sponsor Announcer
Sure.
Sponsor Announcer 2
I mean, yes, yes, not a problem. Give me one sec. I'll grab my laptop. Mom, hi. I really can't talk. No, mom, I have to get on a call with a client. Yes, the scary one. I love you too. Okay, bye. Bye.
Interviewer (Nilay Patel)
Come on, open, open.
Host (Nilay Patel)
Okay.
Sponsor Announcer 2
Good morning. Yes, hi everyone.
Interviewer (Nilay Patel)
Oh, no, no, no, no, no, no.
Sponsor Announcer 2
Did the battery just die?
Interviewer (Nilay Patel)
When you're on the go, you need.
Prashanth Chandrasekhar
A PC that can actually keep up with you.
Sponsor Announcer 2
Where is my charger?
Interviewer (Nilay Patel)
PCs powered by Snapdragon X series processors.
Prashanth Chandrasekhar
Provide multi day battery life so that you decide when you're finished, not your PC. Snapdragon.
Interviewer (Nilay Patel)
Less plug time, more go time.
Prashanth Chandrasekhar
Learn more at snapdragon.com/laptops. Battery life varies significantly based on device settings, usage, and other factors.
Sponsor Announcer
Support for the show comes from Crucible Moments, a podcast from Sequoia Capital. Every startup, no matter how brilliant the tech or how committed the founders, will eventually face an unthinkable obstacle. Whether they choose to pivot or to double down is often the difference between success and failure. Like the autonomous drone delivery company Zipline, which originally produced a robotic toy before landing on its current business. Or Bolt, now one of the largest rideshare and food delivery platforms in the world, which started as an Estonian transportation company. Those are the kinds of stories Crucible Moments is all about: deep diving into the make-or-break moments that set the course for some of the most important tech companies of our time, with interviews from some of the key players who made these companies successful. Hosted by Sequoia Capital's Roelof Botha, you'll hear firsthand stories about the triumphs, setbacks, and often unexpected decisions that shaped these tech giants. Subscribe to Crucible Moments today. You can listen at cruciblemoments.com or wherever you get your podcasts.
Interviewer (Nilay Patel)
Welcome back.
Host (Nilay Patel)
I'm talking with Stack Overflow CEO Prashanth Chandrasekhar about what an existential crisis the launch of ChatGPT was for the company. And of course, that led me to ask: why would anyone new come to Stack Overflow now?
Interviewer (Nilay Patel)
Do you think new users are going to come to Stack Overflow? Stack Overflow is a product of the mobile era, right? There was an explosion of software development, an explosion of community, a culture around the value of building apps and services, and there were new tools. Stack Overflow was one of the central gathering points for that community in that era. New developers today might just open Claude Code or Cursor or GitHub or whatever and just talk to that, and never actually venture out into a community in that way. Do you think you can get people to come to Stack Overflow directly and seek out answers from other people, or are they just going to talk to the AIs?
Prashanth Chandrasekhar
By the way, when we saw the decline in questions early on in 2023, what we realized is that pretty much all of the decline had to do with very simple questions. Complex questions absolutely still get asked on Stack, because there's no other place. The LLMs are only as good as the data, which is typically human-curated, and we're one of the best places for that, if not the best, for technology. It's still a very active site, with a lot of engagement, a lot of monthly active usage, and the questions being asked are, I would say, advanced questions. But to answer your question, we want to give people other reasons to come to the site beyond just getting their answers, so we've had to broaden the site's purpose, and hence the mission: cultivate community, power learning, and unlock growth. What we've done with those is open up new entry points, new ways for people to engage. For example, Nilay, we unlocked the ability for humans to chat with each other to get directional guidance. That's been a very, very popular feature on the site, where people are engaging with other experts. We have, for example, people asking OpenAI API questions, and they can go into the OpenAI chat room and engage with other people who have similar questions, or with Python experts, as an example. We also opened up the ability for people to demonstrate their knowledge through challenges, effectively like hackathons. That's another idea we've got, where we've opened up a whole series of challenges, a very popular feature. Now people spend time going and solving these challenges that we post, and that accrues to their ability to showcase their understanding of the fundamentals, which I think is very important in terms of where the world is going.
Because if people are just using vibe coding tools and code generation tools, companies that are bringing in young talent ultimately have to know that they're relying on people who not only took the shortcut but also understand the fundamentals. And we're one of the few places where you can actually prove that you've learned the fundamentals. So that's the other reason we've opened up these new mechanisms. And then the third part of the mission, which is unlocking growth: we also want to enable people. There's going to be a lot of job disruption as a function of all this, and people's jobs are going to change quite dramatically. Take junior developer jobs: even though I think it's a short-sighted move by many companies to stop hiring them, considering you need a pipeline, those people are going to need a home, and they're going to need to connect with other people to progress, learn, and get jobs. So jobs are a very important part. We struck a partnership with Indeed this past year to partner on tech jobs. It's all about broadening the scope of our site so that there are many other reasons to come beyond asking a question, which people will still do, but we want to give them more reasons to come to the site, too.
Interviewer (Nilay Patel)
This comes, I think, to the big tension in all of this, and I see it playing out on all kinds of different communities. I see it playing out in our own comments. In a lot of ways, you want to build a community of people who are helping other people get better, and that is being disrupted on every side by AI. And communities that are built around people are pretty resistant to the incursion of AI. This has definitely happened on Stack Overflow. Your moderators essentially revolted over the ability to remove AI-generated answers as fast as they want to. When you partnered with OpenAI, a bunch of users started deleting content so it wouldn't be fed into OpenAI for training, and you had to ban a bunch of them. How are you managing that balance? Because if you build communities around people, I would say, right now anyway, the culture is that those communities will push back against AI very hard.
Prashanth Chandrasekhar
I would say one of the most important things that we're focused on, and that I've spent time on over the past few years, is this whole push and pull, as you describe it, of how we think about AI in the context of our site. Because it's pretty clear to us, and to me, that if we don't modernize the site by leveraging AI as an entry point, et cetera, it's going to be less relevant over time. And that's not good. So we've taken a very aggressive stance by incorporating AI into the public platform now with AI Assist, as I mentioned, which has been fantastic to see, and I'll walk you through the decision on why we did that. And then the same thing on the enterprise side. If I think about the user base at Stack Overflow, it's kind of like a big nation, right? We've got 100 million people, and there are definitely people on both sides of the spectrum. What's interesting is that we have something called a 1-9-90 rule: 1 percent are the hardcore users who have spent a lot of time, with their blood, sweat, and tears, curating knowledge and contributing; 9 percent do it in a medium way; and 90 percent mostly consume and lurk on the site. When we ask people whether they're using AI, our own surveys basically say, if you look at the Stack Overflow 2025 survey, that over 80 percent of our community members are using AI or intend to use AI. Eighty percent, right? But the trust level when they're using AI is only about 29 percent. Only 29 percent of our user base actually trusts what's coming out of AI, which is actually quite appropriate considering where we are, because there should be skepticism of this new technology. So there's enthusiasm to try it, but not to fully trust it. 
And with this 1-9-90 rule, I think what we have is a core group of users that are always going to be the protectors of the original mission of the company, which was to create this kind of knowledge base that was completely accurate, and do nothing more than just that. And then we have a very large number of people, let's say younger developers, the next generation of developers, who are looking to leverage the latest and greatest tools. And it's very clear to us, based on surveys that we've done and additional research, that they want to use natural language as the interface. This is the most meaningful sea change in computer science development. If you look back all the way to even object-oriented programming, many decades ago, that wasn't such a huge boom, actually; it didn't create a step change. But now we're in this moment where everything's been unlocked. So I think it's a huge change effort, and we've had to decide that, hey, we've got to respect the original mission and keep accuracy at the heart of it. So we're not comfortable with using AI for answers, for example, because it will generate slop, because it hallucinates, which is why the trust score is low. But why don't we incorporate natural language interfaces, since that's the preferred way to engage? So we ended up doing that both on the public side as well as on the enterprise side of the house. And that's been really well received by the vast majority of users. But there will always be a vocal minority that will push back against incorporating this, because beyond just the site, there's a level of broader concern about what all this does to jobs, and if we let the cat out of the bag, then what's going to happen? So there's that concern also, which is understandable.
Interviewer (Nilay Patel)
Let me put a pretty fine point on that. I think I understand that in a maybe sharper way. If I am somebody in your 1 percent who spent a lot of time on Stack Overflow helping other people, the reason I answer questions for free on your platform, which you monetize in lots of ways, is because I can directly see that my effort helps other people grow and solve problems. That is one very self-contained dynamic.
Prashanth Chandrasekhar
Yeah.
Interviewer (Nilay Patel)
The last time you were on the show, our entire conversation was about that dynamic.
Prashanth Chandrasekhar
Yes, right.
Interviewer (Nilay Patel)
And how you got people to participate in that dynamic, and the value of that dynamic. And then suddenly there's a very clear economic benefit to the company that owns the database, because they're selling my effort to OpenAI.
Prashanth Chandrasekhar
Right.
Interviewer (Nilay Patel)
Which is a thing that is happening across the board.
Prashanth Chandrasekhar
Right.
Interviewer (Nilay Patel)
We're gonna do all these data licensing deals with all these AI providers, they're gonna train on the answers that I have painstakingly entered into this database to help other people, and now the next generation of software engineers is gonna get autocomplete that's based on my work, and I've gotten nothing. I mean, I've heard that from lots and lots of people, right? I've heard it in our own community. I think I have felt it as various media companies have made these deals. How do you respond to that? Because it feels like what you were providing was a database that you had to monetize in some ways, but the interaction that people had was the value. And now there's another kind of economic value that is maybe overshadowing it, or recasting or recharacterizing the interaction that people have.
Prashanth Chandrasekhar
There are a couple of points there. One is, I think, about the original DNA of this company and why people came together to do this thing. When I joined the company, I asked the question: what's people's incentive to spend time doing this? And I asked the founders, specifically Joel Spolsky, about this, and his point is that the software development community is effectively very altruistic. People just want to help each other out, because people understand how frustrating it is. I used to write code many years ago; I recently picked it back up with some of the codegen tools, which is an interesting compare and contrast. But I just remember how frustrating it was to get stuck on something. And so Stack was obviously a huge boon when it was created to unlock this. So it was truly out of that. We also asked the question, even before ChatGPT, should we incentivize users by paying them? Should we give them a monetary benefit? And that wasn't a big ask from our user base; we went and researched it, and people were not in it for the money. Plus, it complicates things, because how do you judge the payment for a particular JavaScript question relative to a particular Python question? It goes down a rabbit hole, which I think is an untenable rabbit hole. So that's one: the original reason why people got together was the mission. Secondly, in terms of why we have to do this, and whether it's unfair and so on: the primary reason why we've had to go down this data licensing route is that the model of the internet has literally been turned upside down. And I know you talk about this, Nilay, with the DoorDash problem. The model of the internet, where people go to search engines, then go to websites, and you monetize off of ads, has been completely upended. 
I really empathize with content sites that are heavily dependent on advertising, because I think most content sites' traffic is down 30, 40 percent, something like that. So with this huge sea change, companies that support these platforms have to respond; we're a business, ultimately. We have to do what is necessary to adopt a new business model to survive and thrive. And thankfully for us, we've had an enterprise business that is independent of all of this. Thankfully for us, we still had the advertising business, and large advertisers still cared about our community. And so data licensing felt right in terms of making sure that we can effectively capitalize on the moment, plus be able to invest back into our community, so that people who are there for the right reasons saw the benefits of that. All these new features I just mentioned, whether that's the new content types or challenges or chat or AI Assist, all take resources to go and build. And so we've leveraged the funds that we received to be able to go do that. Now, in the future, we may consider other ways. For example, should we pay our users, give them a piece of the data licensing revenues? Perhaps. We'll always ask that question; there are always ways for us to continue. But right now, this is the current setup that we have.
Interviewer (Nilay Patel)
You mentioned that to get to the data licensing deals, you had to put up a bunch of anti-scraper tools. You had to go into secondary and tertiary layers of the stack to get deals from Databricks and other kinds of providers. The AI companies were just scraping your site before, right? And whether or not they're paying you, they're probably still just going through the front door, because all of them appear to be doing that. Did you have to say, "We're stopping you," and then go get the deal? Or did you say, "Hey, we know you're doing this, but you have to pay us or we're going to start litigating"?
Prashanth Chandrasekhar
Somewhere in between, I would say. We put up the anti-scrapers very quickly. We even changed the way in which people received our data dumps. And we weighed it as a balance, because we never wanted to prevent our community users from grabbing our data for their legitimate needs, for them to do their school projects or PhD theses or anything like that. So we've continued to be open about our data for our community members, but they have to be community members, not companies looking to commercialize off the data. So we were very specific about the policy terms and put up technology that prevented people from grabbing it. So we know exactly who is scraping and who's not scraping. Part of that was outreach to those folks to say, look, stand down, because you're obviously putting a lot of pressure on the servers by doing what you're doing, so take it easy. Or…
Interviewer (Nilay Patel)
But I think my characterization of those companies is: some of them care and want to be good citizens, and some of them absolutely do not care and would prefer the smoke. You can just categorize them. There's a reason Amazon is suing Perplexity: they told Perplexity to stop it, and Perplexity won't. As we're speaking, just this morning, The New York Times is suing Perplexity. Then there are other players reacting in different ways, and they're striking different kinds of deals. Walk me through one of those deals. When you went and struck your deal with OpenAI, was it, "We're going to stop you, and if you want the door to be open again, you have to pay us"? Or was it, "You know this is wrong, we can take all the technical and legal measures, but we should actually just get to the deal"? Walk me through that conversation with some of those folks.
Prashanth Chandrasekhar
With folks like OpenAI, we were already incorporating their technology into our product. Remember the code red situation, where we were about to announce our AI response? We were actually using that technology to incorporate AI into the public platform as well as our enterprise product. So we had a relationship with them, and we said, look, this is not going to work, it's not tenable, and this is the new way of working. Meaning, we need a new arrangement, a business arrangement, for you to use the data, so let's actually have a conversation. And to their credit, they were very partner-centric around that. I was very impressed by both OpenAI and companies like Google, who were all very open to engaging on this topic and wanted to be responsible AI partners. They got it immediately, even before we asked. It wasn't this big conversation from the ground up to justify why it had to be done. We just said, look, this is what needs to happen, because this is a new business model. And we got into the conversation pretty quickly: okay, let's actually be constructive about this. What exactly are you looking for? Which format of data do you want? Do you want to scrape the content, do you want bulk uploads, do you want API calls? So we got into that whole mix. And of course, there's the conversation around the fact that these are recurring revenue deals, mind you, Nilay; these are not one-time payments. So yes, they were very collaborative partners. But you're right, there are players who are contradictory. They say something, and their actions, I think, prove other things in terms of how they've engaged. So there are holdouts for sure, people that are not exactly consistent with their word, and that's unfortunate. 
And I think every company like us has to decide what to do about that. We're in various stages of these conversations with various people on how to make sure that we ostensibly get them to do the right thing.
Interviewer (Nilay Patel)
Now you have to name one of those companies. Who do you think is holding out differently than their public posture?
Prashanth Chandrasekhar
I'd rather not be specific. The usual suspects that you're covering are the usual suspects that we are encountering, is how I would put it. Yes.
Interviewer (Nilay Patel)
Let me ask you about the recurring revenue piece, and then I want to get into the Decoder questions, because I think they'll be illuminating after this conversation. There's a sense, right, that we've done all the pre-training that we're going to do, that scraping the internet is not the future of these models, that there needs to be some other leap. Stack Overflow's existing corpus of information is the valuable thing, right? There's a lot of information there; there's 20 years of stuff in that database. What's the value of "you have to pay us again to train the next version of Gemini or GPT" versus the value of the incremental information being added to the existing database? Because that seems like a clear split to me.
Prashanth Chandrasekhar
The way that we have thought about this is that every model being trained is trained on some corpus of information. You're going from GPT X to Y. And in the new model that you're training, if you're leveraging our original data or some derivative of it from a prior model, then you have to pay us for it. That's effectively the legal requirement to be able to do that. So it's a cumulative aspect, right? Let's not forget that. People have to pay for the cumulative data; it's not just the fact that it was used back in the day. And yes, obviously, relative to 20 years, one year's worth of information is going to be less, but that's why you're getting 20 plus one. That's the idea, and that's just the way the legal agreement has been set up.
Interviewer (Nilay Patel)
Is it per year? Is every year's worth of data a chunk of money? How does that work?
Prashanth Chandrasekhar
No, it's cumulative. It's the whole corpus: the historical data as well as anything going forward for the following year. All of that is one accumulated data set, and it's charged as one, effectively.
Interviewer (Nilay Patel)
But this year's data doesn't get pulled into the training set for Gemini 3, which just came out. Right? Every new question and answer on Stack Overflow since Gemini 3 came out is not incorporated in Gemini 3's training. So you're kind of betting that they're just going to train ever bigger models.
Prashanth Chandrasekhar
That's right.
Interviewer (Nilay Patel)
Is that how it's structured in your mind?
Prashanth Chandrasekhar
Yeah. And some companies have asked for very, very wide use cases. There are pre-training use cases, and even beyond that, you can leverage the data in many different ways, for AI and non-AI use cases, search use cases, and so on. But correct, I think there may be scenarios where larger models are built and our data is going to be useful for those, but there's also going to be RAG indexing, there are going to be post-training needs, there are going to be all sorts of scenarios. And it's quite interesting to see some of the frontier labs ask for very specific slices of data that they find very useful. And remember, we've got a lot more than just questions and answers: we've got the comment history, the metadata history, the voting history, the "user A has gone down this path" history. So it's a lot of excellent context for things like reasoning, to really be useful and mimic the human brain. It's effectively one human brain that's been documented, almost.
Host (Nilay Patel)
We have to pause here for another quick break. We'll be back in just a minute.
Sponsor Announcer
Support for Decoder comes from Superhuman. AI promises a lot, but in practice it often ends up being yet another tab you have to keep track of, slowing you down. I don't know about you, but I need fewer tabs and apps in my life, not more. Say hello to Superhuman, the AI productivity suite that gives you superpowers everywhere you work. With Grammarly, Mail, and Coda working together, you get proactive help across your workflow, from writing to preparing for meetings, presentations, and so much more. Unlike chatbots that live in separate windows, Superhuman's AI is in the apps and tabs where you already are, like your email, docs, and everywhere you work. Think of Superhuman as your AI dream team, proactively helping you go from to-do to done faster. Superhuman knows what you might need and offers suggestions. It guides you in the moment so you sound like your best self and stay focused on what matters. It doesn't make you superhuman; it gives you the tools to prove you always were. Unleash your superhuman potential with AI that meets you where you work. Learn more at superhuman.com/podcast. That's superhuman.com/podcast. Support for this show comes from LinkedIn. Imagine if any of the movies that included the line "I need the right person for the job" settled for "I'll just take about anyone." How many heists would have failed? How many deals would have fallen through? How many secret spy missions would have ended in disaster? So why would you accept just anyone when hiring for your business? When you need the right person for the job, you can turn to LinkedIn Jobs. And now LinkedIn Jobs is stepping things up with their new AI assistant, so you can feel confident you're finding top talent that you can't find anywhere else. With LinkedIn Jobs' AI assistant, you can skip the confusing steps and recruiting jargon. It filters through applicants based on criteria you've set for your role and surfaces only the best matches, so you're not stuck sorting through a mountain of resumes. 
Hire right the first time. Post your job for free at LinkedIn.com/partner, then promote it to use LinkedIn Jobs' new AI assistant, making it easier and faster to find top candidates. That's LinkedIn.com/partner to post your job for free. Terms and conditions apply.
Sponsor Announcer 2
Support for this show comes from Adobe, which is introducing the all-new Adobe Acrobat Studio, now with AI-powered PDF Spaces. Look, I'm sure when I say PDF you have a very specific thing in mind, and I'm guessing it's an email attachment, certainly not a dynamic asset that can help elevate your business. But Adobe Acrobat is changing that. It's time to do more with PDFs than you ever thought possible. Need AI to turn 100 pages of market research into five insights with a click? Do that with Acrobat. Need templates for a sales proposal that'll close that deal? Do that with Acrobat. Need an AI specialist to tailor the tone of your market report to sound real smart in real time? Do that with the all-new Adobe Acrobat Studio. It's time to reimagine and rethink what a PDF can actually do. Learn more at adobe.com/dothatwithacrobat. That's adobe.com/dothatwithacrobat.
Sponsor Announcer
Support for this show comes from Amazon Ads. It's time for some fresh tech consumer insights, and Amazon Ads just released a study that's seriously challenging what we thought we knew about electronics shoppers. Remember when we assumed everyone bought headphones out of necessity? Turns out, according to their study, only 54 percent do. The rest? They're impulse buying or chasing the latest drop. They also found that 45 percent of gamers buy completely spontaneously. And despite all the digital options, they're reporting that 38 percent of games are still bought in physical stores, not exactly what you'd expect in the age of downloads. For brands, this means rethinking their approach to reaching customers throughout the purchase journey, from building awareness with streaming TV ads to driving consideration through display advertising, all the way to capturing those crucial purchase moments and more. Ready to rethink your strategy? Head to advertising.amazon.com to learn more. That's advertising.amazon.com.
Interviewer (Nilay Patel)
Welcome back.
Host (Nilay Patel)
I'm talking with Prashanth Chandrasekhar, CEO of Stack Overflow. We've talked a lot about AI and models and disruption. These are all big issues. So how do they fit into the fundamental work of being a CEO, which is to make decisions and keep the company running?
Interviewer (Nilay Patel)
This, I think, brings me to the Decoder questions. You've restructured the company. There have been some rounds of layoffs. You've obviously refocused on the SaaS business in a real way, and I think we should talk about that. But consider the possibilities: we're going to train ever bigger models, and that will be the growing part of the business; versus, buyers might just want some slices of data; versus, it's actually retrieval-augmented generation, or RAG, that's going to be the future for a lot of these other businesses. You would make different decisions based on which one of those is going to grow faster, and I don't think anybody knows.
Prashanth Chandrasekhar
Correct.
Interviewer (Nilay Patel)
Maybe you know; you can tell me if you know, or if you know someone who knows. But we're in a very nascent period for all of this kind of development. How have you structured the company to even out all of that risk and be prepared for how people will actually need the data in the future? Because maybe you know, but I don't think anybody knows. I don't think I know.
Prashanth Chandrasekhar
Yeah, it's hard to predict, clearly. And also, who knows? We've got some brilliant minds, like the Demis Hassabises at Google and others, coming up with the next generational leap, whatever the equivalent of transformer technology is, to keep going toward this ultimate goal that they're all pursuing, which is AGI. So yes, you're right, it's hard to know exactly what shows up and when. However, the way we're structured as a company is effectively two parts. One part is the enterprise business. We have a product team, an engineering team, and obviously a go-to-market team focused on that. That's very clear: the enterprise products business. The other side of the house is what we call the community products side. The community products team focuses on the public platform and all the features we've talked about so far: AI Assist, the subjective new questions, chat, and all these things. That's the community site, and the data licensing business sits in that group, so they're obviously tied to the engagement on the site. There's kind of a virtuous cycle there. So that's how we're split, and that side again includes product resources, a small go-to-market team, engineering folks, et cetera, and also a community management team, which spends a lot of time with the moderators. So it's split down the middle, and of course our other functions support both.
Interviewer (Nilay Patel)
How big is Stack Overflow today? I know you laid off almost a quarter of the company in 2023 because of traffic declines. You've built other businesses. How big are you today?
Prashanth Chandrasekhar
We are about 300 people or so.
Interviewer (Nilay Patel)
And do you think the revenue you're going to see from data licensing or your SaaS business is going to allow you to grow again?
Prashanth Chandrasekhar
We believe so, yeah. We're a growing company, and we're profitable, so financially we're thankfully in a very good spot. And now it's all about placing bets on the highest-growth opportunities. We believe creating this knowledge intelligence layer inside the enterprise through our Stack Internal product is a phenomenal growth option, because customers are pulling us in that direction, which is fantastic to see.
Interviewer (Nilay Patel)
So that's where you're headed; those are some of the announcements, and I do want to wrap up by talking about them. You're focused on the future: the SaaS business for the enterprise, and data licensing. Are you still seeing declines on the public website?
Prashanth Chandrasekhar
I would say it's been stable for a few months. I think the engagement and the activity on the site are actually pretty stable. The drop in questions that I was mentioning previously was all in the simple questions, and it seems to have now come to a place where the complex questions are being asked. We have a consistent number of people on the site every day. We have something called a heartbeat; in fact, anybody can go check it out. If you go to stackoverflow.com, you'll see it at the bottom: how many users are online at the moment. And you'll always see a very consistent number there. It's hard to predict the future, but certainly I think the worst of it was back in '23, '24, for sure.
Interviewer (Nilay Patel)
The question I ask everybody on Decoder, as you well know, is: how do you make decisions? The last time you were on the show, you said that you wanted to be on the front lines as much as possible, and you wanted to be informed by people who are on the ground.
Host (Nilay Patel)
Has that changed in the past three years?
Interviewer (Nilay Patel)
What's your decision-making process?
Prashanth Chandrasekhar
Not really. I think it's very important for leaders, people like CEOs, to have the full context, because these decisions cannot be delegated; you can't have filtered information. So I spend a lot of time with users and a lot of time with customers, really understanding what they care about. That's how we decided on even something as controversial as the AI Assist feature from our conversation. It wasn't obvious, if you were not close enough to the ground, to say, let's go build that. Because if you just listened to the headline statements, it seemed like, hey, people don't want AI incorporated into Stack Overflow. But the reality is that many, many users, the 90 percent I was mentioning, wanted a natural language interface. That's what they're comfortable using these days, and that's what they wanted. So that's why we decided to do it.
Interviewer (Nilay Patel)
One of the things that I see everywhere is that split. You mentioned it before, the 1-9-90 rule, right? There's a very vocal minority. We see it in our own traffic on The Verge; we cover AI deeply. We are told that everyone hates it. I understand why; I understand the comments, that's what I'll say. I get it. And then I see the usage numbers, I see the traffic on our coverage of AI tools, I see companies like yours saying everyone's using it, and there's some gigantic split there that is unlike any other split I've encountered in covering technology for the last 15 years, where everyone says they don't like it and then they're using the hell out of it. The only other one I can think of that is lightly comparable is how people feel about Adobe: everyone uses the tools and everyone's mad at the Creative Cloud subscription fee. That's basically the only comparison I have. It's not a good one, it doesn't map one to one, but it's as close as I've come to that split. What in your mind accounts for that split with AI, where people don't like it, they're very vocal about not liking it, and then we see the numbers and everyone's kind of using it anyway?
Prashanth Chandrasekhar
I think it comes down to that data point I shared earlier, which is that 80-plus percent of our user base wants to use AI or is already using AI for code-related topics, but only 29 percent of that population trusts AI. And trust is a very deep word. Why don't you trust something? You don't trust it because you don't think it's producing high-integrity, accurate answers. You may not trust it because it may replace you one day, and you don't like that either. But at the same time, you're obviously going to be curious about this thing that's going to be such a force. So you want to keep trying it and using it as it gets better, and ultimately, hopefully, leverage it to your benefit in a way that lets you stay relevant as an individual developer, being able to go a lot faster and so on. So I think that's probably the reason, because the developer audience especially is very discerning, a very analytical audience, and they can be prickly if things are not deterministic, the way it has been for a very long time. This is a very probabilistic sort of technology. It's almost like going to a casino and spinning a roulette wheel: you're going to get a different answer every time, and that's not necessarily comforting for somebody who's writing very specific code and looking for very specific outcomes. I think people will get used to that over time; it is a mindset change for people writing software. And that may be the reason why people are intrigued, because it is so powerful as a technology. Don't get me wrong, we use vibe coding all over the place at Stack, right? All the features I mentioned to you, our designers and product managers vibe coded them first to show them and get user feedback before we went and built them. So we've embraced these tools internally for their benefit. 
So there will be ways in which you can feel comfortable using it. But still, I think that's the core reason.
Interviewer (Nilay Patel)
I actually want to talk about that dependency, right? You know that it's not trustworthy, but you are building products with it. You are building products to enable it. The big announcement this week is Stack Overflow AI Assist; you've talked about it several times throughout this conversation. You're betting that this is what people want, right? You're betting that an AI-powered tool on Stack Overflow will help more people, and then maybe that thing is going to hallucinate like crazy and give people the wrong answer. How do you make that bet when you know that the users don't trust it, but you still have to roll out the tools because that's where the industry is going?
Prashanth Chandrasekhar
We believe we've actually unlocked a very important aspect of that trust issue, and we're responding to it with our AI Assist feature, which is a RAG-plus-LLM solution. Effectively, it provides an answer that first goes out into our corpus of tens of millions of questions and answers; we have 80 to 90 million, and those are used first to produce a response. If they can't, then there's a fallback option where it leverages OpenAI, for example, who's our partner, to go and produce trustworthy knowledge from other parts of the web. So it's first searching through our trusted, attributed knowledge base. It produces the links so people can go down that path and learn more about it, which is very important to us, right? Attribution and so on. And so that's how we are navigating this element of hallucinations. We're constantly testing it, and it's not perfect. There will always be improvements. But we're also looking at where the world's headed, and if these models continue to get better, then we should benefit from those improvements. And ultimately we should have the best solution, because you've got grounded human context plus the LLM strengths as well.
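The "corpus first, LLM fallback" pattern Prashanth describes can be sketched in a few lines. This is a minimal illustration, not Stack Overflow's implementation: the corpus, the word-overlap similarity score, and the confidence threshold are all stand-in assumptions (a production RAG system would use embeddings, a vector index, and a real model call for the fallback).

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str   # "corpus" (trusted, attributed) or "llm" (fallback)
    links: list   # attribution links when the corpus answered

# Hypothetical mini-corpus standing in for the Q&A archive.
CORPUS = [
    {"question": "how to reverse a list in python",
     "answer": "Use list.reverse() in place, or reversed()/slicing for a copy.",
     "url": "https://example.com/q/1"},
    {"question": "what does a java nullpointerexception mean",
     "answer": "You dereferenced a null reference; check it before use.",
     "url": "https://example.com/q/2"},
]

def score(query: str, question: str) -> float:
    """Crude word-overlap similarity; real systems use embeddings."""
    q, c = set(query.lower().split()), set(question.lower().split())
    return len(q & c) / max(len(q), 1)

def ask(query: str, threshold: float = 0.5) -> Answer:
    """Answer from the trusted corpus first; fall back to an LLM otherwise."""
    best = max(CORPUS, key=lambda row: score(query, row["question"]))
    if score(query, best["question"]) >= threshold:
        # Grounded answer, with attribution links preserved.
        return Answer(best["answer"], "corpus", [best["url"]])
    # Fallback path: call an external model (stubbed here, no network).
    return Answer(f"[LLM draft answer for: {query}]", "llm", [])
```

The key design point is the ordering: attribution and grounding come for free whenever the corpus can answer, and the less trusted generative path is only taken when retrieval confidence falls below the threshold.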
Interviewer (Nilay Patel)
I think the thing that I'm most interested in is the faith that the models will continue to get better. I'm not 100% sure that's true. I'm not sure that LLM technology as a core technology can actually be intelligent. As you're saying, people are very attracted to the natural language component of these models and the interface shift that's happening, the platform shift: that we're all going to develop software with natural language, or let the LLMs reason, basically self-prompt themselves into an answer. There's something there that seems risky. Are you perceiving that risk today? Are you factoring that in? Or are you saying, this is where we're at now and we have to continue until something changes?
Prashanth Chandrasekhar
On the first thing, the improvement on the LLM plane: I'm with you. It's hard to know how things are going to improve. Just when you think that, okay, it's plateaued, as people thought about the past six months, boom, here comes Gemini 3. And again, we're proud partners of Google. And here we go: it is a step change. It blows every other model out of the water. And now we've got a Code Red situation at other companies, other LLM competitors.
Interviewer (Nilay Patel)
Sam Altman did type the words Code Red, by the way.
Prashanth Chandrasekhar
I want to be.
Interviewer (Nilay Patel)
That's very good.
Prashanth Chandrasekhar
Perhaps that was the way for me to go, back in the day. But the point being, it is surprising that you're able to produce that sort of a leap when seemingly things have plateaued. So I don't know, I can't predict that, because these folks are deep, deep in the subject, Demis and others. And there are also going to be other innovations, I'm sure, that we are not even privy to. Like I was explaining previously, transformers were obviously a huge development in this space, and there may be something that these AI research labs come up with that we're not even aware of that's ultimately going to push things forward. Ultimately, we know that the compounding effects are very real. You've got unlimited compute, you've got extremely powerful chips and GPUs that are now even lowering their costs. I was at AWS re:Invent this week, where they talked about the Trainium chips, Trainium 3 and Trainium 4, being built out. So there's going to be a proliferation of these. And then you've got access to data, which we've already talked about. When these things combine and compound together, it's going to produce very magical outcomes. So I think that's the belief, and why my own assumptions are rooted in the fact that it's going to improve.
Interviewer (Nilay Patel)
And the reason I'm asking about this in the context of the tools you're building, and everyone using it, and only 29% of people trusting it, is that you've got to bring that number way up to reach the returns that every company investing in AI is trying to reach, including yours. I don't know if the core technology can do that. I don't know if you can stack a bunch of technologies to do that. But I do know that one version of the future of software development looks like everything is vibe coding all the time. Another version looks like writing intensely long prompts for models, prompts that are pages and pages themselves, which seems ridiculous to me, but maybe that is the future. And another version looks like, oh, we return to humans at the cutting edge of software development, and they are co-developing with an AI model and then maybe asking Stack. That feels like a richer, more interesting future. But it's unclear to me where we are on that spectrum or how that even plays out.
Prashanth Chandrasekhar
We obviously have not only a bird's-eye view in the context of our public community, with this 29% data point as an example, but our enterprise customers give us a clear view on where they are, because the ROI question is being asked very heavily inside companies. I think that 2026 is going to be the year of rationalization, if 2025 was the year of agents, where every tool is being tried out inside these companies. There's a very open landscape for CTOs to go and buy various tools, test various tools. So I think it's been a tremendous time for some of the companies building these tools. But in '26 comes the CFO pressure: hey, look, these productivity improvements have to come from these tools; we're going to hire fewer people. As annual planning happens, there's going to be tremendous pressure in the system to prove out what the real value is. Everybody at a senior level that I've talked to acknowledges that this is a big shift, and they're all leaning into it pretty hard. They're all waiting for the improvements. Most companies will say that they have seen improvements in the small groups where they've tested these tools, but that's sort of a self-selecting group, because they're the enthusiasts and so on, and they will see great productivity gains, which is probably true. But there's an absolute drop-off in productivity as you think about adoption across the enterprise, and probably for the same reasons, by the way: you're telling employees to use tools which may put them out of a job, so why would they want to do it? Or, more fundamentally, if these tools are not perfect and they're hallucinating and employees are going to be held accountable, that's not good either. And then of course there's the process change, the mindset change, having to completely change the workflows of how you work, all the enterprise change-management work.
Which is why our solution, Stack Internal, is building this human-curation layer, a knowledge-intelligence layer: the MCP server on top, the knowledge base in the middle, and then our ability to ingest knowledge from other parts of the company to create these atomic Q&As that are extremely helpful for rooting your enterprise knowledge through these AI agents. That's why we've gone really hard at producing that, and we've seen a really, really strong response from our customers. We have some of the world's largest companies leveraging and testing and building this with us: HP and Eli Lilly and Xerox, all these companies. It's been amazing to see them gravitate toward it, because they want to fulfill that ROI point that you're making. And what are the gaps? It's trust again. So they want this trust layer, through a company like Stack, that they can actually insert in between their data and their AI tools.
Interviewer (Nilay Patel)
When you say the age of rationalization, what I hear is you think the bubble is going to pop in 2026.
Prashanth Chandrasekhar
Yeah, I think certainly the exuberance of just trying out various tools, and the unlimited budgets on this AI line item, will ultimately come home to roost. I'm not sure about the bubble bursting. I think there are definitely going to be corrections along the way; there's no question about that if you look at history. But what I'm very surprised by is the number of vendors selling similar functionality into these companies; there are four of them being tested inside one company. Certainly these vendors may have gotten to whatever they've got, 100 million in ARR, but at some point there's going to be churn, when the CTO decides, you know what, I'm only going to use maybe one, and maybe a second one as a backup. No different from the cloud: if you think of the multi-cloud world back in the day, people didn't have three clouds out of the gate. Now you have maybe one primary cloud and one secondary cloud. This is different, but at the same time I don't think you're going to have four different vibe coding tools, in my opinion.
Interviewer (Nilay Patel)
When you think about Stack Overflow as being that trust layer, right? That's the value add, that's what maybe you can charge a premium for over time. You're still dependent on a tool that only 29% of your users trust. I know you're talking about RAG and your other systems for doing that, but how do you think you bring that number up with Stack Overflow? Is that possible for you to do, or does the ecosystem have to do it for you?
Prashanth Chandrasekhar
I think it's an ecosystem point, more generally speaking, because that number is more a reflection of what people have access to beyond Stack, right? They've got access to all these other options. So what we can focus on is being the most vital source for technologists. For us it's about making sure this content is excellent, it's high quality, but also that it's a great place for people to cultivate community, connect with each other, and learn and grow in their careers. But I think the way we can do it is through the fact that we are working with all these big AI labs, and the fact that our trustworthy knowledge, which has been painstakingly human curated, is going to flow into these LLMs, which ultimately produce trustworthy answers. So we're sort of one layer behind. That's where we operate: in that trust layer, or the data layer, if you will, in the context of LLMs. So that's our indirect contribution to that 29%.
Interviewer (Nilay Patel)
Prashanth, this has been a great conversation. You're going to have to come back sooner than three years next time. What's next for Stack Overflow? What should people be looking for?
Prashanth Chandrasekhar
The biggest focus is going to be making sure that we build this enterprise knowledge-intelligence layer for companies to truly use AI agents in a trustworthy way. That's our Stack Internal product, which we launched a couple of weeks ago at Microsoft Ignite, in fact; we're very, very excited about that on the enterprise side. And of course, on the public platform, as I've mentioned throughout, we want to help our community users connect with each other, really learn, and grow their careers. There are going to be so many avenues and new entry points, like our AI Assist and our subjective content and chat and other things that people will hopefully find very useful as things change around them very rapidly. And they can be part of this amazing community and help each other out. So those are the two focuses: enterprise, as well as our public community.
Interviewer (Nilay Patel)
All right, well, when the bubble pops next year, we're going to have you come back, and we'll say you predicted it.
Prashanth Chandrasekhar
Thank you, Nilay. I appreciate it.
Host (Nilay Patel)
I'd like to thank Prashanth for joining me on Decoder, and thank you for listening. I hope you enjoyed it. If you'd like to let us know what you thought about this episode, or really anything else at all, drop us a line. You can email us at decoder@theverge.com; we really do read all the emails. You can also hit me up on Threads or Bluesky, and you can leave a comment on our fancy new YouTube channel, where you can watch full episodes; we're @decoderpod. We also have a TikTok and an Instagram; they're also @decoderpod. They're a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. Decoder is a production of The Verge and part of the Vox Media Podcast Network. The show is produced by Kate Cox and Nick Statt. It's edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
Sponsor Announcer
Support for this show comes from Amazon Ads. There's a lot of folksy wisdom out there regarding advertising: truisms that people like to say. But that's not the same thing as hard data. Luckily, Amazon Ads just released a study that challenges what we thought we knew about electronics shoppers. Remember when popular opinion assumed everyone bought headphones out of necessity? Turns out only 54% do; the rest are impulse buying or chasing the latest new product launch. For brands, this means rethinking their approach to reaching customers throughout their purchase journey, from building awareness to capturing those crucial purchase moments. Ready to rethink your strategy? Head to advertising.amazon.com to learn more. That's advertising.amazon.com.
Sponsor Announcer 3
Support for this show comes from Amazon Ads. Every business owner has been there: you put a significant amount of money into an ad buy and then wonder, did those ads actually have an effect? Luckily, there's Omnichannel Metrics from Amazon Ads. Omnichannel Metrics helps advertisers understand how their Amazon Ads campaigns drive sales both on and beyond Amazon, while campaigns are still mid-flight. OCM measures performance across streaming TV, video, audio, and display, helping you understand what's driving results across the full funnel. Using Amazon Shopper Panel data plus third-party signals, you'll be able to see beyond Amazon product sales, units sold, and ROAS. Whether customers buy on Amazon or at a brick-and-mortar store, you'll understand the full impact of your campaign. Measure comprehensive sales impact to better understand purchase behavior and drive greater efficiency, effectiveness, and ROI. Tired of guessing where your ads are actually driving sales? Capture the full impact of your media spend with Amazon Ads Omnichannel Metrics. Head to advertising.amazon.com to learn more. That's advertising.amazon.com.
Date: December 15, 2025
Guest: Prashanth Chandrasekhar, CEO of Stack Overflow
Host: Nilay Patel, The Verge Editor-in-Chief
This episode explores the post-ChatGPT world for Stack Overflow—a platform historically at the center of software development communities. Nilay Patel discusses with CEO Prashanth Chandrasekhar how generative AI has upended Stack Overflow’s business, community, and technology, and how the company is shifting its focus toward enterprise SaaS and data licensing. The episode underscores the major tension: AI is everywhere, millions use it, but very few trust it.
| Timestamp | Topic/Quote |
|-----------|-------------|
| 05:59 | Prashanth describes the "Code Red" moment after ChatGPT launches |
| 08:14 | How the "Code Red" was communicated and organized internally |
| 14:32 | Impact of AI on Stack Overflow's input/output: banning AI answers and fighting "AI slop" |
| 19:06 | Business pivots: SaaS, data licensing, and ad revenues |
| 27:23 | Will new generations of coders use Stack Overflow at all? |
| 31:36 | Backlash from community moderators/users and the 80%/29% usage vs. trust data |
| 34:57 | Managing the tension between community ideals and business evolution |
| 36:27 | On the necessity of data licensing and changes in the internet's business model |
| 41:53 | Negotiating data licensing with AI companies like OpenAI |
| 44:48 | Structure and logic of licensing recurring revenue from AI labs |
| 54:44 | Stack Overflow's headcount and financial state in 2025 |
| 57:12 | Nilay on the unique usage vs. trust split with AI ("unlike any other split" in tech) |
| 58:13 | Prashanth explains developers' skepticism, but widespread adoption, of AI |
| 60:42 | Launch of Stack Overflow's AI Assist and addressing trust in AI's output |
| 62:37 | Will LLMs keep getting better? Prashanth bets "the compounding effects are very real." |
| 68:02 | Predicting the "age of rationalization" (AI bubble correction) in 2026 |
| 70:29 | Stack Overflow's future focus: enterprise AI agent trust layers and public community |
End of Summary