
A
Is the near-uniform move of AI companies to agent super apps going to pay off? Let's ask Perplexity's Chief Business Officer right after this. This week I'm live at Knowledge 2026, ServiceNow's annual conference in Las Vegas, where enterprise AI moves from promise to production. I'm sitting down with ServiceNow's president and CPO Amit Zavery on the platform strategy powering it, all the people and technology leaders on what AI means for the workforce, the engineering team behind ServiceNow's Nvidia partnership and what it really takes to ship AI at scale, and Ulta Beauty on deploying AI across 1,300 stores. These are the conversations you won't hear anywhere else, and new episodes are dropping this week on my YouTube page. We've all heard the stat: 95% of AI initiatives fail. It's not because the technology isn't ready, it's because you don't have the right process or the right partner. Meet Aboard. Aboard is your partner for AI transformation, which means they listen, use their very own powerful software tools, and deliver exactly what your company needs to thrive in the age of AI. Working with big and small clients, Aboard always delivers in weeks, not months. Your AI revolution is just beginning. Visit aboard.com to get your AI rollout right. Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. We have the Chief Business Officer of Perplexity here with us. Dmitry Shevelenko is here with us in studio, and Perplexity, as you may know, is one of the many companies moving toward this agentic super app style product with Perplexity Computer. They are joining OpenAI with Codex and Anthropic with Claude Code as one of the many companies moving toward an agent that can control your computer and get stuff done for you. Today we'll talk about where that's going and whether it's going to be a real business. Dmitry, great to see you. Welcome to the show.
B
Thanks for having me. Looking forward to the conversation.
A
So we're here in mid-2026 and I got to be honest, I thought at this point you would be a subsidiary of Apple. Hasn't happened yet.
B
Well, sorry your Polymarket bet there didn't pan out.
A
Just to be clear, there was no Polymarket bet. I just thought it was a good idea, but it hasn't happened.
B
We have a great blossoming partnership with Apple. They actually are really excited about what we're doing with Perplexity Computer and how it uses Mac Minis.
A
It's a nice growth area for them.
B
Yeah, we found a way to work together there. But we're having too much fun being independent, and a lot of the world is realizing the power of multi-model orchestration, mass multi-model orchestration. What was first a wrapper is now a harness. We're really excited about the future ahead.
A
To me, the main criticism, and I was obviously very vocal saying Apple should buy Perplexity. I think they actually gave you a call. I'm not taking credit, but maybe I contributed. The reason I thought it would be a good tie-up is that all the criticism was, oh, Perplexity is just a wrapper company. And I was like, these guys actually know how to build AI products. Obviously the search engine, the browser Comet, looked pretty cool. And then this new Computer application, where Perplexity will take over your computer on your request and do things for you, is really where AI is heading. And as you mentioned, it accesses multiple models as opposed to being tied to just one. I thought that would be a good acquisition for Apple, which has clearly struggled to take these models and translate them into working products, at least so far. Maybe they'll figure it out with Gemini. What do you think about their incoming CEO, John Ternus?
B
Well, Apple's always been an incredible hardware company. I think this is an era where hardware will matter even more because software is going to face waves of commoditization pressure. I actually think it's a really smart pick and we're excited to see what they build and we want to build really powerful solutions that work well with Apple hardware.
A
Okay. You have a partnership with Samsung, so we'll get to that in a bit. Let's not bury the lede here, though, which is that Perplexity gained mass awareness, at least in the tech industry, because of the search engine that you built. Aravind, the Perplexity CEO, was very vocal in saying: we're going to take on Google, we have this new way of doing search, and look out. And when we look at the usage of consumer AI, something very interesting has happened over the past six months or so, which is that use has pretty much flatlined. If you look at the DAU numbers for generative AI apps from Apptopia, for instance, there is a flattening that starts in late 2025. Even looking at Perplexity's market share of AI search, it was close to 20% in mid-2025, again according to Apptopia, and it has decreased and stayed kind of flat over the past month or so. According to Similarweb, your traffic is about 5.2 million average daily visits, up 2% over the past month, compared to 182 million for ChatGPT, which also isn't growing too significantly; that's up 5%. The question for you: everyone is now pivoting to this super app, this app that can control your computer. You guys, OpenAI, Anthropic. I'm wondering, is this happening from a position of strength, which is that, okay, we're just going to move here because the technology is so strong, or is it potentially a reaction to the fact that consumer AI hit a ceiling and you need something else?
B
Well, I'll tell you that I don't know those metrics that you shared, but the stat I look at every morning is our revenue. We started the year at under $250 million ARR, and Aravind recently shared that as of a month ago we crossed $500 million ARR. Clearly we're creating value for our users. When we actually go back and understand who was using Perplexity, even when it was more focused on, let's say, consumer AI as you define it, people were actually using Perplexity for work and knowledge-related tasks. Even as we were talking up "this is the Google search killer," people were using Perplexity to get ahead at work, even when they weren't using the enterprise version. This was their secret weapon to be more productive and have greater leverage as they build businesses. So in some ways we haven't shifted our focus; we're really just meeting our users where they always were. And what's possible now is new: you couldn't have built something like Computer before November or December of last year, because model capabilities advanced to where you can have longer time horizons for running tasks, where you're not just answering a question but actually doing work as an agent on behalf of the user. One thing Perplexity has always prided itself on is being the best at understanding what the new emergent capabilities are and finding ways to make them accessible and useful for a broader population. That's where we focus. But I think revenue is a much more honest metric than top-line MAUs, which can include a lot of hype and exploratory activity that isn't as tightly coupled with value.
A
Okay, but I'm going to give the alternative perspective here, which is that the MAUs matter. MAU, of course, is monthly active users. Typically, every tech company grows users until they have this big user base, and then when the growth slows, you start hearing about average revenue per user. You need more users to have a bigger revenue base, don't you?
B
Well, we're not talking about average revenue, we're talking about total revenue.
A
I guess that's the next step, but go ahead.
B
I would say historically that's been true for consumer Internet companies because MAU is a proxy for ad revenue, and has been reported that way. We're not focused on advertising-based monetization. When a core value prop of Perplexity is accuracy, it's really hard to reinforce that to users when you also have ads running alongside the answer. So I think some of why MAU matters less, at least for us, is that we're not trying to go to advertisers and say, look at all these users across all these demographics that you can show ads to. That might be part of the shift in focus as well.
A
Yeah. I mean, to support your argument, Anthropic does not have the lead in users whatsoever and is doing crazy amounts of revenue. So if you figure out this enterprise use case, you could be a massive company. We're looking at Anthropic and OpenAI both heading toward trillion-dollar IPOs, and I think many large companies will follow them in the generative AI world. But let me get your take on the consumer side of things, and then we'll move more to the enterprise side. Even if people are using these products for work, they're such powerful tools. ChatGPT was the fastest-growing consumer product ever, and I guess it still is, but that growth has tailed off. What do you think is behind this flattening of consumer AI product growth overall? Let's just take the whole industry, because it's certainly happening. Is it just that they kind of hit saturation? Or, we know there are fears about AI, are people just too afraid of it? What's your best diagnosis there?
B
I think some of the use cases got ahead of where people were. People were curious to explore what this AI thing is, but their behaviors didn't change. But I also think there's a fusion of consumer and prosumer that we find very interesting. A lot of people are now empowered to explore launching a side business, or to explore doing that project they never had the activation energy for. Now, because you have these super powerful tools at your disposal, you're more than happy to spend money behind that, because you feel like you get leverage there. So to us, consumer is not just people using Perplexity to look up the weather; you don't need AI for that. Part of what the broader industry needs to do is educate users on what is possible now. People refer to this as the capabilities overhang: the models got a lot more powerful, especially in the last six months, and people are still using them in a very web 1.0 way. It's just going to take time for that discovery to catch up. I'd say this is less relevant for Perplexity, but I'm confident that everyone will prefer to have a more intelligent set of software to help run their life.
A
Web 1.0 meaning like information retrieval.
B
Yeah, just the most basic stuff: sports scores, weather, basic news like that. That's where a lot of people still are, and you don't necessarily need these new agentic capabilities for that. There are all kinds of other things people can be doing. The thing we're going to realize is that the constraint on making the most of AI is our own curiosity. That's the bottleneck. That's why at Perplexity we design our products to spark curiosity, to activate it; that's a big part of our brand. Because when we think through what gets commoditized and what doesn't, the uniquely human ingredients to taking advantage of all this will be curiosity and agency.
A
Let me give you my belief on why we're seeing this slowdown, because this leads right into the agentic use cases. When we've seen the biggest spikes, they've been around some of these multimodal use cases, not text. ChatGPT got to 200 million users because of text; people were interested to see what AI could do. So that novelty and that interest built the foundation. But I'll just use OpenAI as an example. Where OpenAI saw the biggest surges was after voice hit. Remember that demo where it sounded so much like Scarlett Johansson that she threatened to sue OpenAI? You see an inflection point in growth there. And then images. The Studio Ghibli moment was another one; I know somebody who created like seven OpenAI accounts because they kept getting rate-limited on usage. Of course you'll see a user spike there, even if it's not individual users. As companies have shifted away from those things, and we know that Sora is going away at OpenAI, though obviously they're still doing images and just released a great second generation of their latest image product, there is going to be this moment of adjustment. People are going from what the AI companies were initially telling them, chat and images and voice, to this new use case, which is: we think the model, through a harness, should take your computer over and do stuff for you. And that will naturally lead to a divot.
B
Yeah, I agree with the thesis. A lot of those spikes in usage were novelty-driven. Your friend who created the seven OpenAI accounts, I bet they haven't created any Studio Ghibli images in the last 30 days. I don't see those around anymore.
A
It's fully gone from the family chat.
B
Yeah, although you still see that some people's profile pictures are Studio Ghibli, and that is a warm reminder of that era of AI. I think the novelty spikes are great because they raise broad awareness and bring people in, and then people have to discover their own habitual use cases. But novelty is what it is. Nano Banana had a similar moment for Gemini, and I think you can see now there's been a reduction there too. Ultimately we see value in the most economically productive aspects of AI, and that's why a core foundational investment for us has been accuracy. You can almost think of search and accuracy as two sides of the same coin: you need best-in-class search so that whatever you're doing with AI is grounded in the most up-to-date, highest-quality sources, with the best snippets of that information working for you. I don't think it's fair to call what we're doing a pivot, but we are mapping our investments toward the most economically productive uses of AI that have the most enduring value. And you're probably a great example of this: you're running an independent business where previously, if you were not using AI, which I'm sure you're using in many big and small ways, you'd probably need to hire a lot of people.
A
A marketing agency, maybe a software developer. It is crazy, having been so heavily invested in learning the tools, what you can do with them.
B
Yeah, we should do a case study on you, because you're exactly what we see as the future of the economy: someone with high agency. You had a vision of running your own media business that hopefully one day becomes a media empire, and you're able to make very rapid progress on it because you have a team. I think of it like we all just got 100 employees. The shift we're seeing, in both prosumers and the workforce, is that everyone now gets to operate as an executive, because your job is to wake up in the morning and think about what useful tasks to deploy to the 100 agents on standby to grow this thing. That's very different from casual chat and generating images. But those things feed into each other, because sometimes the spark of curiosity requires the quick question and answer. You want to make that delightful, easy, low-friction, so that people are then inspired to go after the longer-horizon tasks. So we see them working well together. But the future of AI is what you're doing.
A
Yeah. And it is interesting, because I do use these tools; I just cited the groups I wouldn't need to hire because I'm using this stuff well. But by having access to the tools, I'm actually able to do a lot more economically productive activity than I would have been able to otherwise. For instance, because I'll have a little extra margin since I don't have that marketing agency, maybe I can use that to host an event. Which, by the way, folks, is what we're going to be doing on June 18: Aravind Srinivas, CEO of Perplexity, is going to come speak with us. I'll link it in the show notes; if there are still tickets, you should definitely join. But that's something that exists because there's a little higher margin and we can invest in doing an event because of it. So I think we'll see a very interesting transformation of the economy if this stuff works the way that many anticipate it will. And I've never really bought into the gloom-and-doom hypothesis around it, but I guess that's a different discussion. Let me just ask the natural follow-up to what you just said, though, which is: if chat, images, and voice were part novelty, causing this explosion of interest in generative AI,
why are you sure that this computer-style use, or super agent use case, is not going to be similar? For instance, just to make the bear case: maybe it is also a lot of people trying out these apps and saying, oh, that might be useful, but then there could be a pullback. I'll just give one example, then I'll turn it over to you. Sinking my teeth into Perplexity Computer, which is Perplexity's agent, or super agent, I guess, is the best way to describe it: at its suggestion, I created a daily digest email for myself. It connected to my Gmail, it's connected to my calendar, it tells me which emails I need to respond to, what's going on today, what I should be thinking about in the headlines. It's pretty cool. But is there also a chance that could just be, oh, that was kind of a cool new use case, but not a revolutionary one? Because you could have said the same thing about chat, images, and voice: they were cool use cases, potentially revolutionary, maybe not. So why is this not another one of those novelty use cases?
B
Yeah, so what we're seeing with Computer is that people are generally using it the way you were describing, the way you're running your business: you now don't need to hire dedicated staff or a dedicated agency to do your marketing or your event production. You're gaining leverage from these tools. What we're seeing is that the longer people have had access to Computer, and this stuff is still brand new, they're consuming more Computer credits every week than the previous week. We're actually just on the steep upward part of the ramp, and it's a big part of why revenue is ramping as well. We're certainly not seeing that kind of novelty drop-off. The mental model is no longer "I'm spending on software." People are thinking about this as part of their payroll budget: I have a team of digital agents, digital workers. And sure, the workers have to show up and do a good job to earn their paycheck, just like people do, but their capabilities are increasing, and we're getting better every day at connecting the models to different tools and improving the virtual machine they run on. So none of the Computer usage we're seeing right now has a novelty effect; what people are willing to pay for is all tied into those economically productive scenarios. We're incredibly bullish on it. And as people in AI like to say, the models are only going to get better from here, so the capabilities will increase. I think consumer is really hard to get right if you don't have network effects. So again, the Studio Ghibli moment, the voice, those early video-gen examples, I think those are very different from what we're seeing with Computer now.
A
So, I mean, you mentioned that as people use it, they use more credits.
B
Yeah.
A
What are some of the use cases that you're seeing? I mean, my email one, I think, is pretty fun. I let that run. But I also see taxes.
B
Yeah, it's a range. We're actually launching 36 different workflows this week that go on top of Computer. This is everything from building a financial model of a company, to filing your taxes, to, if you're a wealth manager, prepping for a meeting with a client. Again, this takes advantage of connecting to your internal data systems: your Snowflake, your Databricks. Just last night I ran an analysis of which models are being used inside of Perplexity right now, like what's the distribution between Opus 4.7 and GPT and Gemini, and I got a very elaborate result back. I know zero SQL; I can't code if my life depended on it. And I didn't bug a single data scientist at Perplexity. I was able to do this because we connected Perplexity Computer to our Snowflake, and I pulled in that analysis within a few minutes. In a previous world that would have been 10 emails, and I certainly would not have been able to get it at midnight, when I wanted to dive in. What we're seeing people do is operate with much greater velocity, whether they're accomplishing marketing objectives, analytical objectives, or building product. We're now able to prototype new features instantly. We have people on our content team who submit pull requests, basically ship code that goes live into production, without engineers being in the loop. That's all run through Perplexity Computer.
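The Snowflake anecdote implies a guardrail worth making explicit: before an agent executes model-generated SQL against a warehouse, you would typically restrict it to read-only statements. This is a minimal sketch of that idea, not Perplexity's actual implementation; the function name and allowed-keyword list are assumptions.

```python
# Illustrative sketch (not Perplexity's code): allow an agent to run only
# read-only SQL that a model generated from a natural-language question.

READ_ONLY_KEYWORDS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN", "WITH"}

def is_read_only(sql: str) -> bool:
    """Crude check: a single statement that starts with a read-only keyword
    (rejects piggybacked writes hidden after a ';')."""
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    first_word = statements[0].lstrip().split(None, 1)[0].upper()
    return first_word in READ_ONLY_KEYWORDS

# A model-usage breakdown query passes; destructive statements do not.
assert is_read_only("SELECT model, COUNT(*) FROM usage GROUP BY model")
assert not is_read_only("DROP TABLE usage")
assert not is_read_only("SELECT 1; DELETE FROM usage")
```

In production you would pair a check like this with a warehouse role that lacks write grants, so the database enforces the same boundary.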
A
How much can you trust this stuff? Going back to this taxes example, I don't trust it to do my taxes. Am I just a Luddite, or is there legitimacy to the worry that if it gets something wrong, I could get a letter from the IRS?
B
Well, actually I would flip it the other way. The way people are using Computer is to double-check the work done by their accountant, and they're finding significant errors there. One of the workflows we're most excited about is called Final Pass. You submit a PDF, a presentation, a spreadsheet, and it does a detailed fact check on every assertion and claim in that document, both fact-checking against the outside world and checking for internal consistency. We actually ran it on a Gartner press release about their earnings and found four glaring mistakes where they misstated the earnings. We're going to have a fun marketing exercise where we go through public companies' press releases, run Final Pass on them, and show just how much error lives in the world right now. But to get to the heart of your question, I think there are always going to be three fundamentally human activities when it comes to using AI. One is curiosity, which we talked about: you have to give it the spark. We say we're shifting from an era of instructions to an era of objectives. You have to define the objectives, what marketing success you want to see, and then the AI will accomplish it for you. So you need the agency. The second part is that, just as you would error-correct and double-check the work of a human, we need to get really good at understanding where AI might go sideways and do validation testing. That's going to mean different things in different use cases. And the third piece is good taste. Only humans are going to deeply know what other humans will find interesting and cool. AI can be a great brainstorming partner, but ultimately that requires discretion. So yeah, fact-checking and error correction are going to be essential skills. But it goes both ways.
As I said with taxes, there are plenty of errors that humans are making right now. Let's use AI to catch those.
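The internal-consistency half of a Final-Pass-style check can be pictured with a toy example. Everything below is hypothetical and far simpler than a real fact-checking pipeline; it only illustrates the kind of check being described, flagging a stated total that disagrees with its own line items.

```python
# Toy sketch of an internal-consistency check on a financial document:
# does the stated total match the sum of the reported line items?

def check_totals(line_items: dict, stated_total: float) -> list:
    """Return a list of issues (empty list means the numbers reconcile)."""
    issues = []
    computed = round(sum(line_items.values()), 2)
    if abs(computed - stated_total) > 0.005:
        issues.append(
            f"stated total {stated_total} != sum of line items {computed}"
        )
    return issues

# A release claiming 2.6 while its quarters sum to 2.5 gets flagged.
issues = check_totals({"Q1": 1.2, "Q2": 1.3}, stated_total=2.6)
assert len(issues) == 1
```

External fact-checking (the other half described above) would instead compare each claim against retrieved sources, which is where the search grounding comes in.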
A
The question is whether people will use these tools the way that you intend, or whether they will just say, all right, screw it, I'm going to replace my accountant entirely. But I guess you're responsible for the outcome if you do that.
B
Yeah, just like you're responsible if you hire a cheap accountant and they mess up. Ultimately that's going to create a headache for you. If you use a bad AI, or don't use it properly, that's also on you. So accountability doesn't go away with AI. And we need to develop a good sense of how to check the work. I have a good way of spot-testing when I get an output from AI: what are the things I'm going to double-click on to make sure there were no silly mistakes?
A
Yeah, and I love the Final Pass idea. I've been doing that for all my stories: I'll upload the interviews and then upload my draft and ask, what did I miss? What outside context is there that I should be considering? It's just natural that that type of approach would be applied to other things like taxes, financial projections, even marketing presentations; you could throw one in and say, just triple-check the numbers, which I've been doing. And it's quite good at that.
B
Yeah, the really fun one was when I presented to the senior leadership of Bain, the management consultancy. They publish all kinds of reports, and we had a lot of fun showing them errors it found in some of their public reports. The people who worked on those reports were in the room, so they were giving each other some trouble for it. But yeah, I think there's still a lot of value to unlock in using AI to fact-check humans.
A
Okay. But to get this to work right, you have to trust a company like yours tremendously. Actually, let me just read you some of the permissions I had to enable for just my daily email. I can't believe I actually went through with this, by the way: see and download contact info automatically saved in your other contacts; see and download your contacts; see the list of Google Calendars you're subscribed to; see, add, and remove Google Calendars you're subscribed to; view and edit events on all your calendars; view availability in your calendars; see and download any calendar you can access on your Google Calendar; read, compose, and send emails from your Gmail account; see and download your organization's Google Workspace directory. I guess I see now why people are running this on a Mac Mini. Because this is enabled for me right now, as we speak: Perplexity has all this access to all my mission-critical technological infrastructure. I mean, maybe Computer right now is writing up client emails and sending them. I don't know.
B
Well, you do know, right? Because you're ultimately choosing to initiate the task; nothing is happening autonomously. The agency is still human-triggered; you're ultimately still directing it. And you don't need to give all those permissions to get a lot of value out of Perplexity Computer. This is a conversation I have with many businesses: start with zero connectors and just see the value there, because there's a lot you can do just interfacing with all the outside world's data and making more sense of it. But ultimately, to unlock the full value, think about this as a digital worker: when you hire people, you also give them access to even greater permissions. And people make mistakes too.
A
They tend to work slower than the AI does.
B
Yeah. Another crawl-walk-run step I would suggest: we have the capability for businesses to allow read access without granting write access, meaning it can create the daily digest, but it won't send the emails on your behalf. That's the part where people say, well, what if it goes and spams 1,000 folks with the wrong confidential information? So that's the read/write split. And with our business versions we offer very granular controls; I think that's the path forward there. We spend a lot of time getting the engineering on this right. One of our advantages in the space is that the only thing we do is build the product. We don't pre-train foundation models, which means all our locus of effort is exactly on making those interactions, first of all, transparent to the user, so you know exactly what you're giving us permissions for, and then making sure the system is error-proof in terms of adhering to those permissions.
A
So do you think the technology today is trustworthy enough that what I did is not crazy? And if so, why do you think so many people are running this on a Mac Mini? I mean, there was a Mac Mini in your ad for Perplexity Computer.
B
So the Mac Mini is actually the other way around: it lets you get even more. With the Mac Mini you can get access to your iMessages, which you can't with the permissions you granted there. With the Mac Mini, the agent can also run 24/7; even when your laptop is closed, it can run those long-horizon tasks. So I wouldn't necessarily interpret the Mac Mini as a way to wall the agent off, because the inference is not yet happening locally. It's still happening...
A
Do you think it will?
B
Well, I certainly think that as models get more powerful, and as local CPUs get more powerful as well, you're going to be able to distill powerful reasoning models to a size where they can run on a Mac Mini. Now, I'm not going to offer you a timeline on when you get the 80/20, where some of these workflows can shift toward local inference. But hybrid compute, where certain tasks run in the cloud and certain tasks run locally, I think that's a pretty safe bet for how these systems will work in the near future.
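The hybrid-compute bet can be stated as a tiny routing heuristic. The thresholds and model names below are made up purely for illustration of the idea: cheap, latency-sensitive work to a distilled on-device model, heavy reasoning to the cloud.

```python
# Sketch of hybrid local/cloud routing. Model identifiers and the token
# threshold are hypothetical, not any vendor's real configuration.

LOCAL_MODEL = "distilled-on-device"   # small model distilled from a larger one
CLOUD_MODEL = "frontier-in-cloud"     # full-size reasoning model

def choose_backend(estimated_tokens: int, needs_deep_reasoning: bool) -> str:
    """Route a task to local or cloud inference based on its demands."""
    if needs_deep_reasoning or estimated_tokens > 8_000:
        return CLOUD_MODEL
    return LOCAL_MODEL

assert choose_backend(500, needs_deep_reasoning=False) == LOCAL_MODEL
assert choose_backend(500, needs_deep_reasoning=True) == CLOUD_MODEL
```

Real systems would also weigh privacy, battery, and connectivity, but the shape of the decision is the same.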
A
Yeah, that's the bear case for the data center build-out: that eventually you do all the training in these massive data centers, and then you distill the models and run them locally on a Mac Mini.
B
Well, again, I didn't say 100%. And if the work that you're doing in the cloud is so computationally intensive, you might still need all that data center build-out. I think we're under-anticipating all of the broad types of computation that more powerful models will bring to bear. From the Perplexity point of view, we don't have strong opinions on the data center build-out, but there's nothing I see that indicates it's a bubble or anything like that.
A
Yeah. Okay, so just to sort of wrap this part of our discussion, the Mac Mini is not a way to ensconce the agent away. It's to give it access to more and let it work harder.
B
Yeah. And again, with even more granular control and more access to your local files. Obviously you're giving those granular permissions. But yeah, currently those systems don't support local inference.
A
Obviously you're doing this. We've just heard at length from OpenAI on this show about their ambitions to build this super app with Codex at the heart of it, one that will take your computer over; they call it a new way of using a computer. And of course Anthropic has done this with Claude Code and Claude Cowork, which, I'm still stunned at how much permission I've given these things. But the payoff is pretty intense, in a good way, when you do. I guess you've got to take risks in life. Why is Perplexity going to be able to compete with these two giant companies in the same product arena?
B
Yeah. So when we first set about building Perplexity, we made a very intentional decision to be model-agnostic. That was very contrarian at the time, because the easiest way to raise capital in 2022 was to say you're training a model. Especially with our founders' backgrounds, that could have been a very easy story for them. They believed back then, and it's proven to be the case, that models would end up specializing. And that is actually one of the most powerful things about Computer: on a single given task, it will use different models for different parts of that task. I have little kids, and whenever I'm trying to get them to learn about things, I'll create mini podcasts for them; they're very personalized. When I do that, Computer will use, and this changes week to week, Opus for planning the task. It'll use GPT models for writing the script, because GPT is a good writer. It'll then use Gemini models for generating the audio. It will sometimes use Grok for fast research, because Grok is a very fast model. It will use Sonnet for writing the Python code to stitch together all the audio clips. That's just in one single deliverable task, and it used four different models. The one thing that Codex is never going to be able to support is running Gemini models. It will always be in the GPT family. Same thing for Claude: they're not going to have GPT models. Gemini is not going to have Grok models. Our value as a multi-model orchestrator and aggregator is that we can tell a user: whatever is the best intelligence that exists in the world today that can help you accomplish your task, we're going to use it, and we're not going to discriminate because of the models we happen to train or the ones we have a special relationship with. That is a very powerful value prop, and it's something that endures over time. I think the second piece that is foundational, which I spoke to briefly earlier, is accuracy.
When we were focused on the V1 of Perplexity, which was ushering in this transition from links to answers, the core technology investment we made in our own tooling was search. You need the most accurate grounding, so that whatever the intelligence is processing, the source input is as high quality as it can be. That's something where we have a very powerful data flywheel that's been compounding for over three years. As people use the product, we see which snippets the models use and which ones they don't, and that reinforces the intelligence of the index and what we do on search. So accuracy is another thing that is very differentiated in Perplexity Computer compared to some of those other products. I'd say the third structural differentiator, and you might say this one is soft and fuzzy, but I think it matters, is usability. When I talk to businesses, something comes up often: the alpha for a company that is not an AI company is not necessarily in building their own internal tools with AI. It is in the depth of their adoption. How do they, culturally, through training, through the right type of management, actually get everyone to use these superpowers the way you're using them? Right. And you're doing it because you have to. Right. You're seeing the necessity.
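The per-subtask model routing described above can be illustrated with a minimal sketch. This is purely hypothetical: the route table, model names, and `route` function are the editor's illustration of the idea, not Perplexity's actual implementation, and real routing reportedly changes week to week.

```python
# Hypothetical sketch of per-subtask model routing, as described in the
# conversation. The route table and names are illustrative assumptions.

ROUTES = {
    "planning": "opus",          # long-horizon task planning
    "scriptwriting": "gpt",      # strong prose writing
    "audio": "gemini",           # audio generation
    "fast_research": "grok",     # low-latency research
    "code": "sonnet",            # glue code to stitch audio clips
}

def route(subtask: str) -> str:
    """Return the model family assigned to a subtask, with a fallback."""
    return ROUTES.get(subtask, "default-general")

# One deliverable (a personalized mini podcast) fans out across models:
task_plan = ["planning", "fast_research", "scriptwriting", "code", "audio"]
assignments = {step: route(step) for step in task_plan}
```

The design point is that a single deliverable decomposes into steps, each sent to whichever model is currently strongest at that step, which a single-vendor harness cannot do across competitors' families.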
A
Yeah, I'm a psycho who likes to pressure test these things.
B
No, but you're seeing the necessity. I mean, I don't think your type of business model would necessarily work otherwise. The margins would be much harder, it would be smaller, you wouldn't be able to grow this fast. Right. And if you're part of a 5,000-person organization, you don't necessarily feel that same pressure that you feel. Right. So I think organizations need to figure out how to create that pressure for that middle-line worker, so they feel it. And we need to do our part in that by making Perplexity Computer super easy to use. That's why we're launching workflows. Take the example you had of how to prompt AI to do the fact-checking on your articles. Right. You probably have a certain process that you use there that you repeat. For a lot of folks, they look at the open prompt and it's terrifying.
A
Yeah.
B
They don't know.
A
Blank page for a writer.
B
It's a blank page. Exactly. It's the new writer's block. It's the scariest thing you could ever look at. And you hear in all your reporting: oh my God, AI is changing everything, you need to be ahead, you're going to get disrupted. That's again why we need something like workflows, which takes all these complicated scenarios and use cases of AI and breaks them down into a simple UI, where you don't need to provide open-ended instructions, just objectives. So, summing it up, the reason we're going to continue thriving in a very competitive space is: we're the best orchestrator and aggregator of all the intelligence; we're the only AI company fundamentally committed to accuracy as a core principle, and that's where we've made our big technology investments, along with orchestration; and usability, which is really a design problem as much as an engineering problem. It matters, and it's something we've always had an edge in and are going to keep innovating on.
A
Yeah, well, the question is whether these AI providers will allow you to continue to use the models, because they have shut down competing companies. So I want to take a break, and I want to go over that with you and then talk a little bit about the variety of models you do orchestrate, including the Chinese models. You have Kimi K2 in there, so let's do that right after this. Most leaders know how work is supposed to happen, but when it comes to how it actually gets done day to day, across tools, teams, and handoffs, they're mostly guessing. That's exactly the problem Scribe Optimize was built to solve. Trusted by over 80,000 enterprises, including nearly half of the Fortune 500, it gives leaders a live view into how work is really happening across approved business apps, without interviews, manual process mapping, or extra effort from the team. And because it's continuously analyzing real workflow activity, the insights stay current instead of going stale the moment a process changes. You can see which workflows are happening, where time is going, and which tools are involved. It automatically surfaces top issues, explains why they're happening, and even recommends ways to fix them, with estimated time savings. And importantly, it's built with privacy in mind: activity is only captured in admin-approved business apps, and user-level data is anonymized by default. The kind of visibility that used to take months is now just always on. If you're ready to stop guessing and start seeing, visit scribe.how/bigtech. That's S-C-R-I-B-E dot how slash bigtech. Look, if you have a kid in school right now, you know the drill. What should take 20 minutes of homework ends up taking two hours, and usually ends in tears. And every good tutor? Well, they're fully booked for months. This episode is brought to you by Brainly. Brainly is an AI-powered personal tutor built by educators, not a general-purpose chatbot.
It doesn't just give your kid the answer; it walks them through step-by-step explanations so they actually understand the material. It learns how your child learns, diagnoses when they're struggling, and builds a personalized learning path in under three minutes. Available 24/7, there are no scheduling headaches, and it's just a fraction of the cost of a private tutor. Finals are coming. Build your teen's study plan now; it only takes minutes. Go to brainly.com/bigtech to get 50% off your first Brainly subscription with my code BIGTECH. That's B-R-A-I-N-L-Y dot com slash bigtech.
C
Insurance isn't one-size-fits-all, and shopping for it shouldn't feel like squeezing into something that just doesn't fit. That's why drivers have enjoyed Progressive's Name Your Price tool for years. With the Name Your Price tool, you tell them what you want to pay, and they show you options that fit your budget. Enough hunting for discounts, trying to calculate rates, and tinkering with coverages. Maybe you're picking out your very first policy, or maybe you're just looking for something that works better for you and your family. Either way, they make it simple to see your options. No guesswork, no surprises. Ready to see how easy and fun shopping for car insurance can be? Visit progressive.com and give the Name Your Price tool a try. Take the stress out of shopping and find coverage that fits your life, on your terms. Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law.
A
And we're back here on Big Technology Podcast with Dmitry Shevelenko. He's the Chief Business Officer of Perplexity. Dmitry, this is a really great, rich conversation. I appreciate it. I've written about this: one of the big problems with all these AI use cases converging is that it used to be that these big AI model providers had the demo products, like ChatGPT. That was the previous way of operating: they'd offer their model, you pay for intelligence, and you build whatever you want on top of it. But as we get to this style of agentic use case, where everybody wants to build this stuff, now some of them will be competing, because there's interest in having their own products, like Claude Cowork, like Codex, be the sort of system or agent of record, so to speak, that handles all this stuff. And I think they might even prefer a world where there would just be a single app to rule them all. You're orchestrating their models. So long term, aren't you at least dependent on their benevolence to allow you to use these models, even as you compete with their core products?
B
Yeah, I think ultimately all these companies are platform businesses in addition to product businesses, and they aggressively petition us to use their models. They give us early access; they want us to run evals. We have the exact opposite dynamic right now, where they're more than happy to take revenue from us. They're the beneficiary of more consumption of Computer credits as well. They are all competing with each other on their platform businesses too, and there's open source, which is continuing to push at the frontier. Not necessarily at the frontier, but pushing at it. All those competitive dynamics are very healthy for us. Now, I agree with you: if we lived in a world where there was just one frontier model that was twice as good as the next best model, that would be a bad scenario for Perplexity. I wouldn't deny that. But since this industry kicked off, there's never been a moment where the delta between the best model and the second-best model was more than maybe a 10 or 15% gap. And I shouldn't even be using the phrase "best model," because it's best model at what? Right. The specialization is also a hedge against those sorts of competitive dynamics. I lose more sleep about us preserving our execution velocity and continuing to build our culture and our company through the intensity of the space than about a scenario where we get cut off, because I'm not seeing indicators of that.
A
Your example of the models' competitiveness is very interesting. We're at this point where the models are very smart. Anthropic, for instance, won't release Mythos because it believes it's too intense for cybersecurity.
B
Great marketing, by the way.
A
You think it's marketing?
B
No, I'm saying regardless of whether it is or isn't, it is great marketing.
A
Do you think it's mostly marketing or truth about the product? I ask everybody this, so I'm curious.
B
I think everyone will have their own view. We don't have access to Mythos, so I can't speak to it from firsthand exposure.
A
Yeah, but the people you speak with in the industry, are they believers, or mostly not?
B
I think what is a real concern is that models will be better at exploiting cyber vulnerabilities than they are at fixing them.
A
Just like you can find these problems in the consultant presentations.
B
Yeah. So that arbitrage, I think that's a real concern, and I think it has already materialized. But I don't know if there's been some new capability that didn't already exist. I mean, you've been noticing there have been more hacks and things over the last few years, before Mythos. This has been building up for a while.
A
I guess that was a long wind-up to my question: isn't there going to come a point where these models are all just kind of smart enough and compute becomes a commodity? Right now we're in this buildup, and eventually we just see parity among models, even though they're unbelievably smart, and a lot of compute infrastructure, and then a price war that brings the price of all this stuff way down.
B
Well, that would be good for us. Yeah, that'd be good, because in that scenario, again, open source would catch up too. Right. And if we reach some kind of plateau, then you'll actually see local inference become even more relevant, because there'll be more investment there. I think it's really hard to make long-term predictions in this space. I'm fond of saying that the thing I'm most confident in is that six months from now, I'm personally going to have a top-three priority at Perplexity that today I don't know what it is. The model companies themselves, when they're baking the cake of a new model, don't know what it's going to taste like until it comes out, meaning the capabilities. When you train a model, you're making improvements, but you don't know exactly what the new capabilities are until it's out there and people start using it. In some ways, that's a core skill we've developed at Perplexity: zeroing in, when a new model becomes available, on where the actionable value is for a user.
A
Yeah, I mentioned this before the break, but you use the Chinese models. Kimi K2 is in Perplexity. I don't see DeepSeek in there anymore.
B
So to clarify, we never integrate into Perplexity any product or API that is hosted in China. We have ourselves post-trained the weights of open-source models developed by Chinese labs. We run those in U.S. data centers. We post-train them for accuracy, removing things that are not accurate from them. Different countries might have certain political agendas that they try to integrate into models.
A
Can you find those in the models?
B
I mean, we've published some research on that with DeepSeek.
A
If you go back, it wouldn't answer questions on Tiananmen Square.
B
Yeah, there are those sorts of things. Now again, we also solve for that with grounding, with accurate search. Right. And if you're using the model fundamentally for reasoning, that becomes less of an issue. But it's really impressive what the Chinese labs are doing and the progress they're able to make. I think open source is good overall for users; it ensures that pricing remains competitive. And obviously there's more we can do in the post-training space on an open model than a closed model. That lets us accelerate our work around accuracy, conciseness, and adhering to certain task workflows.
A
When Jensen says it's important for the entire world to have their AI built on a Western or US AI infrastructure stack: if you can do what you just told me you did with Kimi K2, which is download the weights and post-train them the way you want, why does it matter where the models are developed? What does it matter if, let's say, China has the lead in open source?
B
What would be a bad scenario is if the best open-source models' architecture were done in such a way that they don't run on Nvidia chips, they only run on Huawei chips. I think the scenario Jensen is concerned about, rightfully so, is one where software drives the hardware cycle. Right. Imagine the flip of the current scenario, where right now Chinese companies are trying to get access to Nvidia chips because that's where the model architecture is, and they need the Nvidia chips to run the models in an efficient way. What if it were flipped the other way around, where the Huawei chips are the ones that US companies would need to get? Right.
A
That makes a lot of sense.
B
Yeah.
A
So then China can export control the US and control AI.
B
Yeah. So I think that's the concern.
A
Why didn't he just say that in that Dwarkesh interview? It's a very straightforward answer. Anyway.
B
Well, Jensen is very good at comms, so I wouldn't read too much into it. There are probably certain things he can't say, too; certain names he can't say.
A
Yeah, we can say it here on the show. But the Chinese models are good.
B
They are, you know, they're pushing the frontier. They're not at the frontier, but they're pushing it.
A
Yeah. All right, I want to end here. There is this interesting argument, and I think you have a perspective on it at Perplexity. There's a great article from CNBC that Deirdre Bosa wrote: AI demand is inflated, and only Anthropic is being realistic. I think the crux of the argument is that people have been running massive amounts of workflows on these $20- or $200-a-month plans, and there's a lack of ability to serve them. So these AI companies are showing immense demand and raising money based off of it, and everything's going to change once you have to actually charge per token as opposed to unlimited. You wouldn't do an unlimited electricity plan or an unlimited fuel plan, but for some reason a lot of these companies have been doing this. Do you think this is a legitimate issue she's pointing out, that basically we don't really know what AI demand is because it's been subsidized so heavily for so long? And if so, what's the answer here?
B
So we at Perplexity, we've never subsidized paying users. If you're on a pro or max plan. Thank you. You're contributing to our success.
A
You're welcome.
B
And we see great retention, so clearly folks are finding value there. That's actually why Computer credits are so important: a certain Computer task can cost you $50, say if it's video generation running over a long horizon; one task can cost up to that much, and then you have certain tasks that cost 5 cents. There's no way to encapsulate all of that in a subscription product. The mental model I would have is that AI is going to become a lot like Costco, where you pay for the membership and that gets you in the store. That's actually the part of Costco's business that is the highest margin. And then for everything you're buying in the Costco, you have confidence that there's a max margin. Those are kind of like Computer credits. Some people go to Costco and just buy the hot dog, and then there are people who go and spend thousands of dollars every trip. That depends on their needs. I think she's reacting to, I think it was Cursor that advanced this data point, that Claude Code was subsidizing a subscription tier. I think that will normalize over time.
A
But.
B
But the behavior we're seeing with Computer credits, where people are paying for usage, right, there's no subsidization, there's no kind of breakage driving it; people are finding value and paying more every month as they use it more. I think it's a safe investment in all the compute and data centers.
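The Costco analogy above amounts to a flat membership fee plus metered per-task costs. A minimal sketch of that billing arithmetic, with an assumed $20 membership and the task prices mentioned in the conversation ($50 for a long-horizon job, 5 cents for a cheap one); all numbers are illustrative, not Perplexity's actual pricing:

```python
# Illustrative "membership plus metered credits" billing sketch.
# The $20 fee and task prices are assumptions for demonstration only.

MEMBERSHIP_FEE = 20.00  # flat monthly fee, in dollars

def monthly_bill(task_costs: list[float]) -> float:
    """Flat membership fee plus the sum of metered per-task costs."""
    return round(MEMBERSHIP_FEE + sum(task_costs), 2)

# A mixed month: one $50 long-horizon video task plus forty 5-cent tasks.
tasks = [50.00] + [0.05] * 40
bill = monthly_bill(tasks)  # 20 + 50 + 2 = 72.00
```

The point the analogy makes is that the membership captures the high-margin recurring piece, while metered credits let a $50 task and a 5-cent task coexist without either being subsidized.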
A
Okay, really the final question: how do you keep up? Perplexity has been, I would say, early on three trends: AI search, AI browsers, and now computer use. It must be tough to set strategy as a company with things changing as quickly as they do. So what is the process that Perplexity uses to make decisions about strategic direction and product plans, with all these capabilities just kind of blasting all the time?
B
Yeah. I think part of it is keeping a very lean team. As we've increased our ARR 5x, from 100 million to 500 million, we've only grown headcount 34%.
A
You only have 300 people.
B
Yeah.
A
Crazy.
B
This is what I try to share with companies outside our walls: the world will keep changing faster, and so your only way to adapt to that is to be quick at making decisions and not tie yourself to one path. That's also, not to bring it back to why Perplexity Computer is great, but you don't want to be tied to one model if another model is going to be better three weeks from now. Right. The world is very unpredictable, so you want to have agility, you want to make quick decisions, and you want to be willing to revisit your decisions. Right. And I think having the humility of not knowing what the world's going to look like two years from now is a big part of being successful in that world.
A
Yeah, I mean, I wrote a book with this title, but it is always day one. It really sort of felt that way before, but in this world, you can't be tied to any legacy. You have to see what's new today, see how it works, and take charge. And you guys have been good at doing that. So thank you, Dmitry. It's great to see you again, and thank you again for coming on the show. Hopefully we can do this again soon.
B
My pleasure. Thank you.
A
All right, folks, definitely check out the link in the show notes for the 618 event. Would love to see you there. And until then, we'll see you next time on Big Technology Podcast.
Host: Alex Kantrowitz
Date: May 7, 2026
This episode explores the shift in the AI industry from simple text/voice/image chatbots to sophisticated "agentic" super apps—tools that autonomously orchestrate tasks for users across devices and data sources. With guest Dmitry Shevelenko of Perplexity, a leading AI search and productivity company, the conversation covers the commercial promise and practical realities of AI agents, their adoption in the enterprise and consumer sectors, and how competition, trust, and business models are shaking out in this rapidly-evolving landscape.
| Timestamp | Segment |
|-----------|---------|
| 01:10 | Perplexity’s journey from “wrapper” to agentic harness |
| 05:02 | Industry-wide plateau in AI user growth |
| 06:22 | Perplexity’s revenue growth vs. user growth |
| 15:25 | AI adoption: novelty vs. productivity value |
| 17:38 | AI as “100 digital employees” for creative users |
| 23:39 | Launching 36 new agent workflows in Perplexity Computer |
| 25:50 | Trust and reliability of AI agents for high-stakes work |
| 30:12 | On data permissions and user trust in AI agents |
| 33:45 | Why run Perplexity Computer on a Mac Mini? |
| 37:00 | Multi-model orchestration: unique value proposition |
| 47:57 | Will model providers someday block Perplexity’s access? |
| 55:35 | Why hardware lock-in is the real geopolitical threat |
| 58:49 | AI as a Costco-like usage/membership business model |
| 61:14 | How Perplexity stays nimble and adapts to rapid change |
This episode offers an insightful and candid look into the future of AI productivity agents. Dmitry Shevelenko discusses how Perplexity is shifting from a novel AI search company to the architect of agentic super apps that increase human “leverage” in real economic activity. The conversation goes deep on what drives sustained use versus hype, how agentic apps earn long-term trust amidst privacy concerns, why a model-agnostic strategy is vital, and the competitive dynamics shaping the entire industry.
For anyone seeking to understand the next stage of AI—where digital agents act as genuine workforce multipliers rather than mere novelty chatbots—this is an essential listen.
Note: Advertisements, intros, and outros have been omitted from this summary.