
A
Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.
B
Hello.
C
Hello.
B
And today we're finally joined by the epic return of Deedy Das. Welcome back.
C
Thank you for having me again, guys. I'm so glad to see you. All of us have different jobs now.
B
All different jobs. All different jobs. Classic Bay Area. It's been two years, right? So last time it was April 2023. You joined us remotely, and you were still at Glean back then. I was actually also looking at the Claude timeline: Claude 1 was March 2023 and Claude 2 was July 2023. It just feels like so long ago.
C
Man, I remember the time. I don't know what your first experience using Claude was, but mine was at early Glean. Somebody from the company was like, hey, there's this interesting new LLM that's not OpenAI, and the only way you can talk to it is by tagging Claude in a Slack channel. And I had some bizarre interactions with the model for a whole new product.
B
It's the best model.
C
And now, fast forward to today, and I'm like, okay, we've come quite a way.
B
Yeah. I think actually they only recently reintroduced Claude in Slack, right? Or like, publicly came back.
C
The comeback.
B
Yeah, yeah, yeah.
C
It's like how it started, how it's going. Now Claude is ChatGPT in Slack.
B
Claude and Slack. And so since then... I wanted to start with Glean, obviously, because we're going to cover a lot of startups in this episode. So Glean was valued at like a billion dollars, I think, based on my research, and now it's at $7 billion. So your options are good. What's your take on how Glean's going and the market in general?
C
I would say that now, being on the venture side, I have a bit of a different take than I would have had at Glean. But broadly, one of the things that I love about Glean is it's such a boring, unsexy company that became sexy later. From 2019, I remember going to parties in the Bay Area, and I would say "enterprise search," and it would shut down the conversation right there. Nobody would ever ask a follow-up question if you said enterprise search. They're like, that sounds boring as hell, leave me alone. Fast forward to 2022, and enterprise search got more conversations. It was like, interesting, tell me how you're doing this search. I think what was nice about that observation is that in those three years we did a lot of work and didn't take shortcuts on a lot of things that ended up generating a lot of value for us now. And I can go into what all of those things are. But if you look at Glean as a high-level business, it's top-down enterprise sales. It's very hard to rip and replace. We expand contracts very easily because the TAM is so large; every knowledge worker could use a version of enterprise search, and then the AI on top. I still call it search, but it's information retrieval in the enterprise. And we solved a lot of critical problems to get there; I can go into that too. Then comes December 2022, the ChatGPT moment, and everything that's happened since. And now when I look at Glean, it's a different world. We were very quick, and we correctly prioritized LLMs early on. It did a lot of good for our business and the company. But now there's fire from a lot of angles. Everyone wants to be a part of the enterprise search story, and it makes sense. It's a large, unconstrained TAM. LLMs are particularly useful for gathering information; obviously that's interesting for consumers, and enterprise is therefore interesting too. How do you do this in the enterprise? Well, gather all the knowledge and then put an LLM on top.
So that being said, I'm still very happy with Glean stock. You know, Glean's also valued at 7 billion, not 100 billion, so I think the company has a lot of growth left. I think it's done a lot of the hard work that nobody's willing to do. And I also think VCs have a tendency, including myself now, to trivialize a problem into a one-sentence sort of narrative. And with Glean, that narrative was often: oh well, you guys built this enterprise search thing, which never worked, and then AI came along and it started becoming a thing. Which I think is not the story at all. I really think we did all the hard work to build search, and AI happened to accelerate our go-to-market motion at the right time. And now I see companies trying to tack on search. It's not easy. I know the kind of last-mile stuff we did for some of our customers, and when I think about other companies, I'm like, would you really go all that distance? It's not a moat. The moat is just that we did the hard work. And so I'm pretty happy. I mean, things can go any direction, but I'm pretty happy with the way Glean's going right now.
A
And just to spell out the two main challenges. One is obviously Claude; I think they launched Enterprise Search today.
B
I was going to say, I have a screenshot. Did you see like, hey, we're introducing Enterprise Search.
A
I'm like, yeah, son of a gun. And then on the other side, you have the data providers adding these rate limits, kind of like Salesforce has done with Slack. And it feels like that part is more challenging than the competition from other companies. How do you think about that?
C
Two questions, I guess: competition and the rate limits. On the rate-limiting side, it's happened with several of the SaaS tools. I think one advantage that Glean has is... well, first, let me address the premise of the argument. When I think about why SaaS tools would limit API access, inherently it never made sense to me. I can see why you'd do it for business reasons. Maybe you want to launch a competing product, but Glean doesn't eat into your revenue. If you are Slack and you've sold, call it, a hundred seats at a company, and you have Glean at that company, Glean only shows Slack results to the hundred seats that you've sold. So we aren't eating into your business. From first-principles business logic, I don't see why you'd do it. If Glean is on Slack and more people are searching through Slack, it actually lets you sell more seats, not fewer, because we don't reveal content to people who don't have access permissions. If we were to do that, then I could see a business case: oh, you're taking the Slack data that I've only sold one license for and showing it to a thousand people. That's problematic. But we're only showing it to the licenses that you've sold. So that's my first point. The second thing is we have thousands of integrations, and in a lot of enterprise customers Slack is really important, a critical data source, but we also have many, many more. So it's just the law of large numbers. Maybe if everyone decides to shut it down, it could be more problematic, but if one provider does, then less so. And the third thing I'll say is, if you talk to the customers, they're also super unhappy about this, because they're like, look, we bought your product, we own the data, you don't own the data. So if we want to buy another product to use our data in Slack, why can't we do that? Why are you blocking the API? So those are the three prongs of the argument.
I don't know how this will all end up, but I don't think it's that sensible that it is like this. And I'm still optimistic that we'll clear out some of those issues.
B
Yeah. Anything else you want to say? Obviously we're about to move to Anthropic, and Anthropic just launched Enterprise Search. So what would you say, as a veteran of enterprise search, Anthropic should take note of?
C
The question of the labs competing with Glean has always been a thing since 2022. Like we were just discussing earlier, Sam Altman once came out and said, if you're an investor in OpenAI and one of these five companies, including Glean, we don't want you as an investor.
B
That's just fact.
C
Yeah, but yet, here's what I see. Look at the revenue of Anthropic and OpenAI right now. These are billion-dollar-revenue-scale businesses. Glean is a several-hundred-million-dollar-revenue-scale business. So the way I think about it, and this even alludes to how I think about startups, right to compete and right to win: for Anthropic and OpenAI to build a deep enterprise search system doesn't make them that much money. They have to put in all this effort to make what, an incremental 100k sale, 200k sale, maybe even a seven-figure sale. Is that moving the needle on your five-plus billion dollars in revenue, or ten-plus by the end of the year for OpenAI? Not really. And the amount of effort it takes to get there is big sales teams, huge FTE teams, tons and tons of customization. My question is, in the long, long run, you could build a semi-reasonable enterprise search tool, but if you really want to go deep, I don't think you will ever dedicate the people to do it. And the last thing I'll say is, think about it from an Anthropic engineer's perspective: you joined a big AI lab to work on models, not to build Google Drive connectors.
A
Right.
B
It's a meme like, you know, I build the fucking integrations, build the integrations.
C
I think I'm still very bullish. But yeah, competition happens.
B
So yeah, actually I wasn't asking about competition. It was more about, what are the hard problems that people don't appreciate?
C
Oh, okay, we can talk about that.
B
That was probably a safer category for you, you know. Basically, and I'm in this boat as well, I've joined an enterprise AI company that has to worry about and build for these issues. And I'll just give you one very simple example: until this point, we never had to deal with two Slacks. An enterprise has... like, when you acquire another company, you have different systems, and they all duplicate and they all overlap.
C
Yep. Oh man. I have some great stories about Devin.
B
I'm sure there's some pro user version of this, but I still haven't figured out how to use Devin properly with two slacks.
C
Wow.
B
Because Devin also has one Slack.
C
That's funny. That's funny. So, Slack workspaces. That reminds me, that was a thing we had to address at Glean. I feel like every enterprise company has the same sort of hurdles.
B
No, no, no. We looked at each other. We're like oh yeah, we're a real enterprise now. We have two of everything.
C
That's funny. Okay, Glean: a bunch of interesting problems. I'll talk about some of them; if you want to prod, feel free. Number one, most interesting to me when I joined the company: consumer search was largely regarded to be a solved problem. Not really, but largely. The way most consumer search systems work is by aggregating feedback data on how users use search: whether they click, hover, how long they stay on a website. That's what powers ranking systems to get better over time, a very powerful, critical part of how Google, Bing and all the others work. In the enterprise, if you take a 10,000-person company, even if every user issues two search queries a day, which is quite a lot, say even five, that's just not enough volume to get any meaningful quantity of feedback for this to be relevant. On top of that, add the fact that freshness is way more critical in the enterprise in certain ways than it is in consumer; there are more freshness-seeking queries in enterprise than in consumer. And number two, the distribution of queries in consumer is very head-heavy. It's not in enterprise. In enterprise, maybe the query everyone searches for is "benefits" or "payroll," but that's just not that useful. Every person is doing a job, and they have different needs and different things they want to look up. So given all of that, the techniques under the hood that work for consumer don't translate to enterprise. You have to invent a whole new set of signals that actually makes enterprise search work. And evaluation becomes very, very difficult too. In consumer, you have tons of data to pick and choose how you want to evaluate what the right result is for a query. In enterprise, and I tell this story a lot, we would look at some of our customers' data, then look at each other and go, we don't really understand what this query means.
We don't really understand what these results are. We don't know what the right ranking is. We have actually no idea what we're doing here, which happens; it's so out of domain, even for us. Some of our customers are working on very specific problems. So that's one huge, huge challenge: how do you make ranking work in the enterprise in a great way? There are many. I'll touch on the second interesting one. Selling productivity tools to enterprises is challenging because, no matter what ROI argument you make, people aren't actually buying tools for ROI. People buy productivity tools because their users like using them. For example, when people buy Slack, I don't think any buyer goes, let's measure how much more productive our team is getting by using Slack. It's probably not even getting that much more productive. That's not what they're looking at. They're kind of saying, everyone uses Slack, it's pretty useful, I'm going to keep Slack, I don't think we're going to churn off that one. If you take that analogy to search, the issue is search systems aren't inherently viral or growthy. Slack has a very clear virality moment: everyone's talking to everybody else, so that's just how you end up communicating. Search is kind of a one-player game. You're not really sharing things, you're not really talking to everybody else. So the challenge for us was, how do you sell a productivity tool by getting everyone to love it on day one? For a product like search, that's not easy. If you look at how Google did it, they had Chrome. Great: have a great source of distribution, get everyone to query, and then hopefully they'll learn to love it. So we had to figure out what that meant in the enterprise as well, and how to get everyone to adopt and embrace and love this new tool.
B
Yeah.
C
So, two of the many pointers.
A
Yeah. Just a question on that.
C
Yeah.
A
Was there any...? Because, you know, oh, you have a new search tool, go search. And it's like, what am I searching? What was that blank-canvas onboarding for people?
C
Several different things worked well for us. I can think of two at the moment, but I'm sure there were many, many more. One of them was, for a handful of companies, many companies actually, we would say, we want to take over your new tab page. And then the critical part was: tell us what we need to do to earn the right to do that. No one wants to give away their new tab page.
B
So.
C
So we went the last mile, and there were companies who were like, well, we have a new tab page, we're pretty happy with it. So we'd ask, do you have a search bar on it? They'd be like, well, yes. I'm like, okay, what is that using? And they'd be like, well, it's using our internal thing. I'm like, do you like it? Clearly not, that's why you're talking to us. So let's just rip and replace that. Going that extra mile was pretty important. So that's one: new tab. The second one that we liked was the Chrome extension. And then doing the... I forget what we called this, but when you were on a native product and you were issuing a search query... we ran a lot of evals, and we thought we were better than every product at their own search. So if you were searching on Google Drive, we would do a Glean replace of the search bar and the page pretty natively. And it would teach people to use Glean and be like, okay, that's pretty useful, I think these results are great. And it automatically filters to Google Drive anyway, so functionality is not lost, and we would slowly get people into the ecosystem that way.
B
Yeah, superset adoption, something that OpenRouter also does. Okay, so Anthropic. We have to obviously address the elephant in the room: you guys are huge, huge Anthropic investors. I think right after you got promoted, or became a partner, you guys led the Series D. What was the chronology? I think we did part...
C
Of the C, and then the D, and then every single round. We did more than pro rata.
B
Yeah, obviously one of the greatest companies in AI. I honestly had no idea that we would be sitting here. Anthropic has 10x'd in the time that you've been at Menlo. What's it like being an Anthropic investor? What were the considerations back then versus now?
C
Anthropic is the fastest-growing software company of all time. I think I can say that fairly. I haven't been disproven yet, so...
B
I think people say that. But everyone says, we're first to 1 billion, first to 100 million. I don't know, it's hard to tell.
C
But I do believe the numbers are 0 to 100 million in one year, 100 million to a billion in one year, and this year would be 1 billion to, the projection that is public, 9. But even to this point, we've all seen the graphs on Twitter, and some of that is bullshit, some of that is GMV, all this other stuff. In Anthropic's case, I think it's fairly legit revenue, and I do think that makes it the fastest, definitely at the 1-billion-plus scale. I can't think of too many examples. So the company has clearly outdone itself. I would say that when we invested in the company, it had no revenue. I mean, that's just fact. Our first investment, it had no revenue.
B
Was it at $18 billion, or 4 billion? Four billion, right.
C
It's been fascinating to see this company succeed. I couldn't have predicted it; for all of us, this was beyond our wildest expectations. Whether or not it continues to perform at this rate, I believe it will, but it is already somewhat of a generational company in many ways. And kudos to the team for delivering these awesome results. One of the risks, taking a tangent, with a company like Anthropic is you essentially had a team of extremely idealistic researchers, and very often the standard deviation of outcomes when you have teams like that is quite large. There was a world where maybe they would not have worked at all and would have absolutely fizzled to the ground. But I think the same qualities that gave them a high propensity to fail gave them a high propensity to succeed. There are many other things they did right, but if you just look at a product like Claude Code, there are not many product innovations in AI that I can think of that are as critical as something like that. We had the whole chat era of RAG systems, and ChatGPT was a critical innovation, but since then there were a lot of followers. A lot of deep research, which is kind of, I would say, an addendum; a couple of other things happening here and there; agents, cool. But if you think about agents that actual end consumers use and gain value from, in my mind at least, Claude Code was the first time I saw that. In a terminal, in a weird interface. It was just weird. It was every PM's nightmare. No PM would have thought of that. And so it's...
A
Except for Cat Wu.
C
Yes, except for Cat Wu. And so, you know, it kind of goes to show how Anthropic is able to function as a company, to be able to innovate like that, which is quite rare, especially at that scale.
B
To some extent, I think you just hire good talent and then let them loose with a lot of tokens and see what they come up with. They tend to build good stuff.
C
It's interesting to talk about, right? Take OpenAI and DeepMind as comparison points. I think we'd all agree they all have great talent, but they don't all innovate the same way. And it's always been interesting, just as an academic exercise, to think about different leadership styles. Maybe from the outside looking in, you'd be surprised how little I actually know, from an investor standpoint, about how Anthropic actually operates. But it seems like it's a company that has such high retention numbers on employees because they are very free-spirited in how they let the employees guide the direction of the product, versus other companies which are much more top-down or prescriptive, like, hey, we need to go after this and we need to go after that. It's more like, hey, let's see what happens. Try it.
B
Yeah, at my last conference, SignalFire had some stats. They track the LinkedIn pages of everyone, and Anthropic has the best retention. It's a net gainer, whereas everyone else is a net donor of employees to Anthropic, or something like that.
C
I'm referring to the exact same article, where I think their one-year retention on employees is 80%, which in the AI world is quite wild.
A
Yeah, I mean, Anthropic does not have image generation. They do not have an IMO gold-winning model. I feel like they just do their own thing, and they do it great.
B
They have nice hats. Yeah, thinking caps. So actually, I really want to discuss this, but I don't know how to. I think I need to get some marketing PR agency person, because people actually forget that in 2024 they had out-of-home advertising campaigns which sucked. Everyone was dogpiling on them. And then this year it's slightly changed. It's still Anthropic, but they just decided to focus on thinking, and suddenly everyone loves them, and they have the cafes and all that. It's a very interesting public-image rebrand. And I don't know if it's because the models are just better, or it was actually the PR. Which one comes first, chicken or egg? Models or PR?
C
It's a good question. Yeah, it's a good question. I would say though, like ignoring the model side, I do think this one is aesthetically better.
B
Yeah. Purely. It looks nicer.
A
Yeah.
C
And the vibes and I don't know.
B
I have sat in those meetings, and it's like, someone's pitching you an idea and you're like, I don't know, looks good, okay. And then it becomes one of the most hated campaigns of all time. And then one year later, someone else comes in with a slightly different-looking idea. The words are different in four ways; they chose slightly different words, but it's not that many words. And suddenly that one is the one that works.
C
Well, as somebody who writes online a lot, I can relate to a couple of things. A small difference can be the difference between something people care about and something they don't. At Glean, I had such run-ins with marketing, because the first campaign we actually did, I was just like, really? "AI for work that works."
B
Okay, Is that a hit?
C
No. I mean, in enterprise, how does one even measure what's a hit and what's not? No one really cares enough, I feel, one way or the other. But we've all seen really cringe AI ads. If you've seen the Cisco ad in the airport... I hated that one for a while. All kind of generic. Anyway, I like the Anthropic one.
B
Okay, I'm going to sprinkle in some of your tweets. So you had one about the billboard that read, my boss really wants you to know that we're an AI company. I thought that was the single most honest billboard I've seen in San Francisco.
C
Absolutely. I think it's a testament: all the comments of people going, yeah, I relate. We've all heard it. It feels like even on the technical side, people are struggling to catch up and gain a sense of meaning again. I've had developers go, fuck, man, is this it? What do I do anymore? And that's happening on the technical side, with people who semi-understand what's going on. On the non-technical side, people are like, so there's this new thing, it's AI, and genuinely, my boss literally just wants me to do something with it, and I don't really understand, other than that ChatGPT is quite helpful.
B
Yeah, I have some charts. I don't know if you have any of these in mind, but I'm just going to bring up some of the Anthropic charts. I want to put it on the record for people who are not paying attention. In 2023, according to the Menlo numbers, market share for OpenAI was 50%, and by mid-2025 you guys have OpenAI at 25% market share; Anthropic was at 12, now at 32.
A
It's like API, enterprise API market share.
C
Correct. So I should clarify that that is enterprise LLM API spend, the market that...
B
Anthropic happens to focus on.
C
And critically, it's also spend numbers, not token numbers. So I think those clarifications are important. And the methodology is going and surveying vast numbers of enterprise users on how they are doing their spend. But that being said, yes, the point...
B
The point remains: market share of OpenAI has gone down. It's not a negative; obviously OpenAI has done super well. It's just that diversity has gone up. It used to be there was basically only one choice, and now there are like three or four legit frontier labs, maybe more than that if you count all the open models as well. But I think it's just super interesting and under-discussed that you can actually build a sustainable advantage as a frontier lab.
C
I'm sure you guys remember there was a lot of conversation at some point about the commoditization of models, and to an extent maybe it's happened. A lot of the frontier models are neck and neck on a lot of things. But in practice, and this data was in that market survey as well, once people like something and get used to it, they don't really churn off it once it fits their needs. We've seen a lot of that. There's a lot of churn in the hobbyist developer category, but with enterprises, often what will happen is they'll buy up large chunks of long-term compute and dedicated instances, in which case you just don't churn, right? This is what you use. So I think that's part of the effect. And to commend OpenAI, OpenAI was just focused on something else: they've launched the most incredible consumer product that we've seen since God knows when. So they probably weren't going to focus on enterprise until now.
B
Yeah.
A
How do you re-underwrite the company internally as you invest? I mean, even since we talked about Claude Code, right? I think that was a pivotal moment in the trajectory of Anthropic. What are the things that matter to you when you're looking at a company like Anthropic? Does this market share number matter? How do you evaluate the opportunity, and what are the numbers that you really care about versus, sure, higher market share, but that's not what we cared about?
C
I don't think the market share number is... the market share number is more critical to understanding the TAM at that stage, to be very honest with you. At the stage that we invest in Anthropic now, the only things that would really move the needle on the decision are: here's the revenue, here's the margin, here's the trajectory, and here are the other markets we may be able to underwrite that they want to go into, that they may be early in or planning on going into. I think it's really difficult to underwrite on market share, other than knowing what the potential cap of the TAM might look like. The pie will also expand, potentially. But other than that, it's a nice vanity metric more than anything else.
B
Yeah.
A
In your mind, is it kind of like crypto, where people are always talking about Ethereum flipping Bitcoin? Is this something that matters? Can Anthropic go to 50%, or was OpenAI only at 50% at a moment in time, in a new market? I'm curious how you think about that.
C
I don't want to color the way Anthropic, or the way all of us, think about this, but I just don't think it matters that much, in my view. I'm a very paranoid person with startups and companies and technology. So in my view, I'm like, great, now let's make it last. Or, great, but what's next? So to me it's a nice-to-have. I mean, look, if we're investing in a round right now which is north of $170 billion, sure, some of the numbers matter. But all the value is really in what we underwrite as the future of the company. And the future means I'm more concerned about what's happening next. What are the new models? How do you gain market share? What has to be done? What are the new products that are going to be built? I'm less concerned about where it's at right now in terms of market share. But that's just me; I don't want to speak for others.
B
Yeah, I think the new models are really good. I mean, Opus 4.1, Sonnet 4.5, Haiku 4.5, all released in the last few months. And it's really interesting. I think OpenAI and Gemini are in a bit of a price war, on the Pareto frontier that I track in terms of LMSYS scores versus pricing. And Claude can still charge a premium but still have a lot of market share, obviously. I think that's just because they have a better model and people naturally gravitate to it, especially for coding, but also other things. And articulating what makes a model good is just very, very difficult. Obviously there are benchmarks and evals, and everyone has, okay, today it's your turn to be best at SWE-bench, and tomorrow it's my turn. But it's really stupid; we're just talking about fractions of a point of difference on SWE-bench. I wonder, if you're talking about, okay, I am investing $13 billion in Anthropic at the Series F to underwrite Claude 5, what does it have to do? What does that conversation look like? I have no idea. I'm not saying that, you know, but...
C
I would say that, despite what you said about the premium, and I think everything you said is true, I still do worry. I think cost is a concern for a lot of people, and so the Pareto frontier does still matter. I'm glad Anthropic's where it's at right now, but who knows how that changes when it comes to Claude 5 and thinking about the future. One thing I think about, actually, that's really nice: I think we can take for granted right now that furthering the intelligence of models in ChatGPT, a consumer product, does not lead to more users or more retention. It's only really applicable to us, the thin slice of users who care about very smart queries.
A
Right.
C
And I would say maybe, like, under 10 million. That's just a random estimate, but most of the 800 million users on ChatGPT are asking, how do I fix my dishwasher? How do I rephrase this email that I'm sending to somebody? And that's done; we know how to do that. So what's interesting is that means we're at a point in consumer where, maybe it's too early to say, but OpenAI has kind of won, right? How do you catch up to something where model quality is not going to be differentiated? You already have the users, you already have the retention, you already have a great product, and people are paying. But the interesting thing about Anthropic is, if you look at coding, that's probably never going to be the case. There's always an increasing frontier of how good you could be at a task like that, and we're nowhere close to that frontier. So it's more possible to underwrite the quality of the future models, versus an OpenAI, where it wouldn't be as much of a revenue driver on their consumer business as it would be for Anthropic.
B
Yeah, talking about coding. Let's just talk about it, because I think this is also a very fun discussion. One, what are the margins of Claude Code? There's some numbers; I don't want you to get yourself in trouble. But then there's also, how do you think about the Claude wrappers, right? We've talked to Bolt and Lovable, but I'll also put Cognition and Cursor in there as well. How do you think about this market of basically a whole ecosystem of startups, all of which have done really well, built on top of Claude?
C
I think it's great.
B
Is it sustainable?
C
I don't see why not. I mean, I'll kind of allude to the margin question, which is, can Anthropic continue to do this strategy? I'm not going to comment on the margins, but if you're trying to build out an enterprise-friendly business, there are two broad approaches, right? High customization and high price, which is usually less scalable. And then low customization, low price, which is very, very scalable in a SaaS world. I guess it's a Slack-Palantir continuum. And so this is kind of different, but generally Anthropic wants to play here: scale fast, keep it cheap, get everybody on it. If we trust that most people, or a significant number of people, will stay on Claude if they continue to build products on top of it, then I think that's a win for the ecosystem and a win for Anthropic. I don't see why they would care. I think the interesting thing, and again, I don't know what Anthropic's future plans are, but Ben Thompson obviously talks about this, is classic strategy, which is every time you own the means of production, you will end up getting into the markets that your users use you for. And so the classic Amazon example: first you are the market where people sell, you find all the places where you can sell commodity things at high volume, and then you start creating Amazon-branded batteries, and then you push out a bunch of the people who sell batteries. So that's a risk, I think, for those companies that use Claude heavily and rely on Claude to think about. But at this point in time we're too early. I don't think Anthropic is anywhere near thinking about that, because you're still very much competing with other models on that layer.
B
Yeah, playing a different game. It's interesting. Would you rather be an investor? This is basically model layer versus app layer. So far the model layer has won, and I think there was kind of an app layer summer, and now it's very much back to models again.
C
I mean, I like the discussion, because I was at a dinner where somebody was talking about this kind of question, and I was thinking about it more at that dinner. And maybe this is an ill-formed thought, so feel free to push back.
B
Yeah, we're riffing. Yeah.
C
But when I think about moats, it's classic VC-startup banter. In my mind, the moat is whatever is hardest to do in any part of the stack. There are other aspects to it too, but people tend to dismiss it like, oh, you know, the app layer will capture all the value. Well, if the app layer is easier to build, then I think the model layer is harder and therefore will naturally capture all the value, net of competition from other model providers. So said a different way, it is far easier for Anthropic to try to go into one of the spaces of the apps than for an app to try to go into the space of Anthropic, which makes me feel like one is more defensible than the other, all else equal. So I think both can thrive, and that's ideally what everybody wants.
B
But yeah, I think very brutally, as an investor and as a human with my own limited time on Earth: if Anthropic can go from $4 billion to $183 billion in two years, then everything else is a waste of time. You know what I mean? You kind of do want to really get this right. You can't just be like, oh, everyone's great, and sort of hedge your bets. Sometimes you have to go all in on the right thing, and you spend a lot of time and effort identifying the right thing. And so, yeah, that's what I'm trying to do more of these days.
A
I think the means of production thing is interesting, because Claude Code only makes sense to build if it's the best thing, right? Because if Claude Code is mid, they're better off promoting Devin and Cognition to sell more tokens. So I'm curious: as the market gets more competitive, on one hand it's like, well, we don't want you to use Devin because Devin supports all the models, and so we end up losing some of the revenue. Right now, Claude Code is obviously the best way to use the Claude models, so it drives the most usage. But I'm curious whether in the future there's going to be more pressure of, hey, this product actually needs to be great for it to make sense for us to keep investing our resources into building it.
B
Yeah. So going from model lab to model lab plus product company, which is what OpenAI has done, I would push back on.
C
Well, A, I don't think everyone would agree that Claude Code is the best way to use Claude. I've heard multiple people, even in the last few months, say, I'm a Cursor guy, I'm a Devin guy. People have their preferences, so I don't think it's set in stone. However, Claude Code is a great way to use Claude. And there are nice flywheel effects, obviously, because once you capture the way people are using Claude Code, you also get so much data to then make Claude Code better over time. So I think those are the two main reasons. But at this point in time, and maybe this is oversimplifying, I can't think of too many apps that have a very meaty layer on top of the model that's very impressive. There are somewhat meaty layers, and it's getting there. It's a time thing as well, right? Most of these companies haven't existed for more than two years. So I think it gets there, but I don't think we're at a point where we're like, holy shit, that app has so many interesting things and so much technology built on top of the model that it becomes so difficult for the model company to go and try to compete. I think tomorrow, if Anthropic or OpenAI decided to take on another app, given their distribution and their engineering, and the fact that these layers are still not as thick as you'd like them to be, technically they could. Whether they should or not is different, but they could. And that's something I do think about.
B
Thank you for engaging in all these very meaty discussions. Yeah.
A
You don't even work at Anthropic, so I know we put you on the spot.
B
Yeah. But this is what I want to get on the podcast, because a lot of people don't get the chance to talk about this. This is like a normal SF dinner. The last hit on Anthropic I'll point out, which is more fun, is that there was a new CTO joining Anthropic, from PESIT. And, you know, you're like the king of Indian posting. What's the significance of this for you? Last time you were on the podcast, you talked a lot about the Indian university system and all that. And to see this guy rise up.
C
In India, academics largely holds the same sort of prominence as sport holds in America. Everyone talks about it. It's Asian culture; it is top of everybody's mind. It is something a lot of people aspire to. And it's an extremely competitive society with a very large population, and on average people are quite poor. So education is seen as the means to social mobility by a large number of people. In India, the way it works is similar to countries like China and some others: you take a big exam and you get ranked. A million people take the core engineering exam, the top 10,000 get in, and the top 200 get into computer science. That's how hard it is. And those top 10,000 get into IIT, which everyone's heard of. That's where a lot of the great Silicon Valley people, from Sundar to many others, come from. And in India, often, what I've seen, and this is something I'm generally very curious about, is: what is the motivation of humans, and what dictates the outcomes in their life and career? One thing I've noticed a lot is, A, there are some societies that are inherently less meritocratic, where you get so judged for what you have in the past that you're not allowed to prosper later. And I think many work environments in India and other places in Asia can be like that. Number one: you're not judged on the merits of your work, you're judged on the merits of what you've done before. And number two, there's a very strong self-fulfilling prophecy effect. I've seen people who underrate themselves because they think they couldn't be number one at something.
B
It's like your own mental, it's your.
C
Own mental block. People in the Bay Area are also like this; the Bay Area is kind of like Asia. In the Bay Area, I know people who grew up thinking, I couldn't get into a good college, therefore I am stupid, and therefore I should not work that hard. It's entirely possible that they're smart; they just believe they're not. And that has a psychological effect on your long-term prospects. Then you look at a guy like Rahul Patil, who's become the CTO of Anthropic, and he's not from a top university in India. Some people obviously debate that, but in general I don't think it's a really well known university in India. And he's come to a society that is quite meritocratic and worked his way up to a position of such prominence. I don't know him, and I don't know everything else he's done, but it's a testament to the fact that, and I think this is why it resonated with so many people, even though you didn't have the opportunities early, and even though you might not believe you could do it, if you work hard enough in certain environments for a long time on things you care about, anything can happen. And I think that's why I wanted to share it.
B
And you choose to work at Stripe and Glean and do well. I think choosing the right company is also very important. If you're not going to take the credentials path, you have to be lucky and selective and work at good places. And a lot of people make that mistake. I definitely did: I had good credentials and I worked at bad places. And yeah, that's very interesting.
C
You work at a pretty good place right now.
B
Yeah, but I took a long time to get there.
A
I mean, this is funny: I have this automated podcast research, and when they sent me the email about you, it said, you know, Deedy has a strong presence in AI and immigration, as the top two topics. Yeah, let's talk about the Anthology Fund. So it's a $100 million fund in close partnership with Anthropic. Talk a bit about that. I think people are really curious about how close that actually is.
B
Yeah.
C
So the Anthology Fund we set up when we invested in Anthropic, around the beginning of last year. And the idea was: Anthropic, again, it's so hard to think about; Anthropic was a very different, much smaller company back then. And they were like, look, there's incentive for us to run our own fund. OpenAI runs their own fund. There's a developer ecosystem that we want to create around this. It's really nice to have great startups that are using Anthropic, close to Anthropic, building around Anthropic. And we had a discussion about, do you want to have it inside Anthropic or outside Anthropic? Because inside Anthropic would mean a corporate venture fund. You'd have to hire for that; you'd have to have a whole role. And typically, if you look at corporate venture funds in history, obviously with OpenAI as a notable exception, they tend to not be very good, because all they prioritize is who uses my stuff the most. And that's not a good way to invest in companies. The incentives on corporate venture funds are a little bit misaligned. So we thought this would be better, and we did that. And now we look back at this fund, and obviously Anthropic is in a very different place. We've funded about 40 companies. It's kind of a hard thing to calculate, but the rate at which companies graduate from when we invested in them to the next round is significantly higher for Anthology Fund companies. And we write both small and lead checks. Several notable companies from the Anthology program have been OpenRouter, Goodfire, a company called Nia, Prime Intellect, Wispr Flow. So there's quite a handful of pretty interesting things here.
And yeah, the other really nice thing about it is it allows us to move fast on companies where we may not feel immediately comfortable or ready to write the full check. So we can participate in a round, get closer, hopefully build a relationship, and lead the next round in the company in the future. It also lets them get really close to the Anthropic ecosystem. So we have all these events with the founders and Anthropic execs and things like that, and people really enjoy hearing it from the horse's mouth. Now, I would say Anthropic is in such a different place. It's no longer an unknown entity. So the program gets a lot of demand, but people kind of know what they need to know. And so we're still working on how we make this program more useful and more beneficial for founders and Anthropic alike.
B
Yeah, also congrats on all this. I think it's pretty successful. One reason I'm trying to highlight this for Latent Space is also: how does AI change venture? That's something Alessio was exploring as well. And that's why I don't really know how to categorize the Anthology Fund, because it looks kind of like what Conviction is doing, what YC is doing, maybe, but later stage, right? Some of these already have their Series C, some of these already have their Series A. Abacus is in there. Is that our Abacus? No, that's a different Abacus. But what's the model? What are the predecessors you draw inspiration from for setting up this fund? Or do you just not? It's like a corporate venture fund managed by Menlo, somewhat funded by Anthropic, I.
C
would say you can think of the companies that go into Anthology in three categories. One is strategically important to Anthropic, and those could typically be somewhat later-round, somewhat bigger companies. Two are companies that are using Claude heavily and are just great companies to be in. And three is just very, very early-stage founders that are very high potential and may potentially be using Claude models and Anthropic and so on. We don't require people to use one model or the other, so we keep it pretty open, and we do everything from a $100K check to a $20 million check. So it's really broad in terms of what we can do, and we wanted to intentionally keep it that way. When it comes to where we drew inspiration, there are some old, old examples, but I don't think they're really relevant. There was a fund called iFund that Kleiner did with Apple way back in the day. It was kind of similar.
B
How did that turn out?
C
I don't remember. I don't actually have enough data on that. But that's one example.
B
You know the answer?
C
No, I'm sure there are some great companies that came out of it. I just don't know the details about what was in it. So, yeah, that's kind of how it's been for us, and I think it's been a really great program. I mean, we were excited about the companies that we could lead the rounds in as well.
B
Yeah, I wanted to get quick hits for people who maybe have never heard of Goodfire. I know them because I've invited Mark to my conference, and I've actually been to a bunch of their events. I'll just give you that list, right: Goodfire and Prime Intellect are in your research category. There are others doing diffusion-based language generation, novel architectures. It's all over the place. Research is the most wild west part of this. How do you view research investing?
C
I can talk about any of those companies briefly as well, but the way I view research investing is: it is extremely hard to pull off, but when you pull it off, the results can be very remarkable. One of the hard parts is the tension between, do you keep investing in research hoping for something that yields a better result, that leads to a better product, or do you try to monetize and scale what you have already? That's a really tough decision to make. When you're working with those founders, when you're on that board, it's somewhat anxiety-inducing. Even from an investor standpoint: do I just get to a couple million ARR and start selling something, or do I keep the research bet strong? The way I think about research investing overall is, honestly, follow where the talented people have the most competence, and then have an idea of how this could be useful in what I call a top-down way. It's not really top down, but the way I frame it is: if I fast forward 10 years into the future, what do I think is very likely to exist, and what are the ways to get there? If I believe strongly that something like that will exist, and I believe this team is headed very strongly in that direction, I can sort of draw a dotted line and go, okay, maybe we can see something here. So that's how I broadly think about it.
B
So, concrete example: Goodfire is the most interesting one. Mechanistic interpretability. I didn't even think that was a market worth investing in, but obviously Anthropic does, and they seem like they have good vibes. What's, I guess, the summary of your take on the company?
C
The way I think about the company is: right now almost all frontier, and many non-frontier, AI models are complete black boxes. You don't understand why they produce the outputs they produce. All of the evals and studies on them are empirical studies, not intrinsic to the model. It's like, hey, here are the outputs we saw, and therefore this is the benchmark score, or this is how we think it did. If we believe as a society that 5 and 10 years in the future these models are going to be critically important for making pretty heavy decisions, anything from whether somebody should get a loan or insurance to a legal decision, then I don't think the black-box approach is scalable long term. It's just not how society can function, where you throw your hands up and say, well, this is what the model said. And then I asked it to explain itself, and it said this other stuff. Great. That's kind of what we have today; that's the best we have. Mechanistic interpretability is really going into the weights of the model and trying to figure out why the model did what it did. One of the more concrete and relatable examples of this that you guys may be aware of is that GPT-4o had this phase of sycophancy that a lot of users really liked. It's one of those things that's not easily detectable in an eval unless you know you're specifically testing for it, and even then it's quite hard. It's very personalized; it's not like particular keywords will arise, obviously. But it is something that is quite easy to tell with even current interpretability methods. You can tell when a model is being sycophantic. You can tell when a model is trying to lie. You can tell when a model is trying to steal or persuade you of something. And so I think if we further that research direction two, three years into the future, we will be able to understand why models say what they say.
It's brain surgery for LLMs, is my catchphrase. But it doesn't apply only to LLMs, it applies to all models. And that is a pretty important insight into deploying AI at scale.
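An aside for readers: the idea of "going into the weights" rather than only judging outputs can be illustrated with a toy sketch. Everything below is synthetic, a four-unit linear "model" and an invented "sycophancy direction"; real interpretability work like Goodfire's operates on actual LLM internals, but the basic move of projecting hidden activations onto a labeled concept direction looks like this:

```python
# Toy illustration of inspecting a model's internal activations
# instead of only its outputs. The "model", its weights, and the
# concept direction are all made up for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hidden_activation(weights, x):
    """One linear 'layer' of a toy model: h = W x."""
    return [dot(row, x) for row in weights]

# A direction in activation space that (in this toy) encodes a concept,
# e.g. "sycophancy". Real work would discover such directions empirically.
concept_direction = [0.0, 1.0, 0.0, 0.0]

weights = [
    [1.0, 0.0],
    [0.5, 2.0],   # the second unit fires strongly on the second input
    [0.0, 0.0],
    [0.0, 1.0],
]

def concept_score(x):
    """How strongly the hidden state aligns with the concept direction."""
    return dot(hidden_activation(weights, x), concept_direction)

print(concept_score([1.0, 0.0]))  # 0.5
print(concept_score([1.0, 2.0]))  # 4.5
```

The point of the sketch: the score is read off internal state, so a behavior like sycophancy can be flagged even when the surface output looks fine, which is exactly why this is hard to catch with output-only evals.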
B
Yeah. And you don't know the business model yet. Don't need to, as long as we figure it out.
C
There are some ideas that we have but aren't ready to talk about publicly, and some that are working where it's also not right to be public.
A
Does it feel worthwhile to do this on such small models? Because I think most of the work is done on the open source releases. How much of a gap is there between what they're able to do there and translating that into doing it at scale?
C
They've shown that even for the biggest open source models, even for DeepSeek's big models, they can do it. And in general, scaling is not the bottleneck. Obviously access to the weights would be a bottleneck, but they're in.
A
The anthology fund, so they can work with Anthropic.
C
They can work with Anthropic, but they don't have Claude access. Claude weight access.
B
So for listeners who want to hear more about mech interp, we did a podcast with the mech interp team, Emmanuel from Anthropic. So that's your 101 there. We'll do something with Goodfire at some point. Prime Intellect, another very hypey company. You don't have to say it, but I know it's very much in the water that they have raised a very large round. So I ignored distributed AI for a long time. It's usually crypto people coming over saying, hey, we have these GPUs all over the place, we will somehow ignore the speed of light, and you can use our GPUs to train models. That's why I ignored Prime Intellect. I was wrong. Tell me why I was wrong.
C
You may not be wrong. I mean, look, I could be the kind of person who goes and shills all of their companies and says this is the best thing ever, and if you don't think it's going to be a $10 billion company, you're wrong. Every company has risks at this stage, and Prime Intellect has their fair share of risks. Whatever went through your mind went through my mind when I was looking at that company. But I do strongly believe in, I'm sure you've seen this quote too, the quote that pessimists are often right, but they rarely change things. It's an easy thing to say, but when you're investing, it's something to think about. There's a lot that could be potentially wrong with Prime Intellect, for sure. But the thing that I really liked, that drew me to them, is: wait, if they were right about a couple of things, what could go fantastically right? Distributed training is one of them. Access to talent, I think, is one of the things that I underwrote for them. The ability to hire fairly great people away from other labs is really hard, and I think they can do that. And the third thing is there's a broader vision to Prime Intellect that is not yet realized, where the first step was distributed compute. And we'll see if they realize that.
B
Yeah, well, Will Brown's been on the podcast multiple times, and they've launched kind of like a verifiers SaaS platform or a marketplace. I'm not really sure what exactly; I should probably try it out, but it's very interesting.
C
The other thing I'll just say is everything in AI changes every three, four weeks. So I'd be a fool to say I could tell what this company is going to do.
B
Yeah. Well, all I'm trying to do is capture, for people who are not in the loop, the companies people are talking about. Right. Okay. So let's at least hit on OpenRouter, and maybe one more of your choice that's less known but you want people to know about. OpenRouter we have to cover, big deal. Obviously, I do think with this one I was relatively early, in terms of I saw the product, I saw what he was trying to do, and it clearly has done really well. I did not know he was taking investment, or I would have invested.
C
He wasn't.
B
Okay, say more, say more.
C
OpenRouter was sort of my, and I don't want to make this about me, it's really about them, but in my mind it was my darling, because I'm just like, man, I entered venture and that is the company I want to build.
B
Yeah. I think we're skipping a bit. Let's explain who Alex is, what he did before.
C
Right, so let me give you the background. Alex is a phenomenal, phenomenal founder. He started a company called OpenSea before, which was the NFT company. At its peak that was, I think, a $14 billion, or at least more than $10 billion, company that did not meet that valuation's expectations. But look, there are many things out of your control in your life. Then Alex started this company called OpenRouter, and what gravitated me towards it initially was two things. One, it was very clear from my time at Glean that this is a perfect problem where engineers all think it's easy until it becomes so annoying to keep maintaining. That's the sweet spot, because no other person, no other company will gravitate towards it. Yet it is kind of thorny to maintain a portal that accesses a bunch of models; the nuances are quite tricky and annoying and boring. So that's one thing I liked. The second thing I liked is I was pretty convinced that if there was a market for anything like this, it would have to be a PLG motion. I'd go so far as to say in any SaaS market, if there can be a PLG motion, the PLG motion will win. What I mean by that, for people who are not familiar with venture words like PLG: all users have to be able to access and self-serve the product and try it in order.
B
For that to be without talking to.
C
Anyone, without talking to somebody, like the classic get-on-the-phone on a SaaS website. So those two things really drew me to the business. And then of course the third one is just quality. There are these small details at OpenRouter: a beautiful website, a beautiful landing page. It's not some SaaS trash of, here's what we do, and Products, Solutions, About Us. I am so sick of that. You land on the page, it's a developer page. It's like, here's how many people are using what models. Love it. I'm like, this guy knows what his users really want. All of those were compelling. I went out to New York to talk to Alex. He ignored me a bunch of times forever. I'd write him what I call love letters: hey man, love it, dude. It's so cool. I don't even want to invest, just talk to me. I don't really care, I just want to meet you. I have so many ideas and interesting things. It was one of those companies where I genuinely felt that way. So when I did meet him, we started jamming on things, and I don't know the VC motions of how to sell, so I wasn't really even trying to do that. But I told him, look, if you are ever going to raise, I will make it happen. I just love everything about this. So that's how we ended up doing the round. I think the company is interesting from a business model perspective. I get this question a lot: how does this business model scale? And I think right now the business is doing fairly well.
B
Volume. Well, it takes like 5% of everything.
C
There's that business model, but then there is a reasonable threat vector: what if net spend goes down over time even as token volume goes up? So you do carry some risk of LLM prices falling to a point where the business stops working, and I know many other companies take that risk as well. So that's one risk of the business, on just pure consumer spend. The second risk would be retention: a lot of hobbyists use OpenRouter and they tend to churn, and a lot of enterprises will use OpenRouter to evaluate and then go pick a model they want to settle with later. So that's a problem to fix. Those are two of the risks, but overall I think they've just been executing phenomenally.
A
Yeah. How do you think about the Vercel AI Gateway, for example? I think that's been, I mean, I'm a fan of OpenRouter.
B
We'll also do Vercel.
A
Yeah, I'm interested in the case where you already have, like, I use Next.js, right? And it's like, well, I just use the AI SDK. The AI SDK comes with the AI Gateway, so it kind of makes sense to use it. How do you think about this market, and how tied do you need to be to the actual application development, versus just being this layer? OpenRouter doesn't have a developer framework, for example. If we were in a partners meeting, that's maybe what I would ask.
C
My simple answer is I don't think the AI gateways of other products are ever going to be their first priority. And the other simple answer is I think OpenRouter has mind share and momentum that just doesn't go away overnight. It would be similar to asking, hey, I'm OpenAI in 2020, what if somebody else does this? I mean, they could, or in 2022, they could, but we are so far ahead in some ways already. I think the last thing is that they have built a lot of smaller things that are non-obviously useful, that other people probably won't sweat the details to go out and build. It's everything from, here's something that nobody ever even cares about, but OpenRouter has a feature flag where you can choose to only go to certain LLM providers that do not retain your data. They go to that level of granularity in thinking about what users actually want. And that's one example. Another example is their detail at the provider level. Almost nobody has provider insights. There was a very interesting side study when Kimi K2 came out, this whole study of different.
B
The verifiers.
C
The verifiers.
B
Okay.
C
But I think that's interesting. People don't really acknowledge this, but the same open source model, or the same closed source model, can be served by different providers and have different context windows, different quality, different latency, different throughput. Where would you go to see all that information? Well, you see it on OpenRouter, and there are some elements of scale there, where there are enough people using the different providers to get that data. So all of those things I think are somewhat defensible for OpenRouter, and hopefully more over time.
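An aside for readers: the provider-routing flag described above corresponds to the `provider` preferences object in OpenRouter's chat completions API, which includes a documented `data_collection` option. The sketch below just builds such a request body; field names follow the public docs as of this writing, but treat the exact shape as something to verify against the current documentation before relying on it.

```python
# Sketch of OpenRouter's provider-routing controls: restrict a request
# to providers that do not retain/train on your prompts. The payload
# shape follows OpenRouter's documented API; verify against current docs.
import json

def build_request(model, prompt, deny_data_collection=True):
    """Build an OpenRouter chat request body with provider preferences."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "provider": {
            # Only route to providers that do not collect prompt data.
            "data_collection": "deny" if deny_data_collection else "allow",
            # Fail rather than silently fall back to another provider.
            "allow_fallbacks": False,
        },
    }
    return json.dumps(body)

payload = build_request("deepseek/deepseek-chat", "hello")
print(payload)
# In real use this JSON would be POSTed to
# https://openrouter.ai/api/v1/chat/completions with an API key header.
```

This is the kind of small, unglamorous detail the discussion is pointing at: one field in a request body, but it requires tracking each provider's retention policy behind the scenes.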
A
Yeah. And I think their leaderboard charts are one of the best growth hacks.
B
Because, very good graphics.
A
Yeah. Especially people who are into open source AI are always posting these charts, saying, hey, open source is up, we're back.
C
One thing I used to joke about is that OpenRouter is the non-Elon company that Elon has tweeted the most about, for obvious reasons.
A
Grok Code has the number one spot right now. Yeah. I'm sure that's the Grok Code Fast free plan.
C
There was a good week where every day it was like, OpenRouter, OpenRouter, OpenRouter. I'm like, yeah.
B
And so for those who don't know, that's because Grok Code Fast is like a top model.
A
Yeah.
C
Because it's free.
B
Yeah, because it's free. There's a lot of gaming of this stuff, right, where it's like, oh, we'll give it to you for free, but then we'll say we're very popular. I'm like, yeah, you're not free because you're popular, you're popular because you're free. The other way around. Okay, very cool. And okay, so there's a bunch of others; we're not going to go through all 40. What comes to mind? What do you want to talk about? What do you think is a very interesting company in your portfolio that more people should know about?
C
I'll talk about Wispr and Inception, those are the two. I want to talk about Inception.
B
Inception is not even here.
C
That's why I was. Huh?
B
Yeah. We can talk about the company without saying the name.
C
Yeah. Okay, let's try that. Let me try that. And then.
A
But I mean, also, like, Inception, if I Google Inception, it's not like I'm finding it.
C
Anyway, let's talk about these two things. Wispr I can talk about first; that's a clear one. So Wispr is a company that does, in many people's eyes, something very commodity, which is voice dictation on your phone and laptop. The things that I really liked and that stood out to us about Wispr were that in that quote-unquote commodity market, they are, in my mind, the fastest, best, and most delightful product, one that in many ways set the frontier on the nuances of how to make this easy. Press your function key on your Mac, talk to it. It's always on. It has fantastic accuracy as you're dictating. If you ever stutter and go, oh no, I didn't mean that, I actually meant this, it knows what you meant and it goes and corrects it. They have this metric they use internally called zero-edit rate, which is, you know, the number of times.
B
You don't need to edit.
C
Correct. And their zero-edit rate, I think, is north of 80%, which is insane for a voice dictation product. There are many other risks to that business too, but one thing I love is users love it. Users stay, the retention is great, and it might make voice suddenly work. Because if you think about computing, people type slower than they talk. So it is unlocking this new, faster way that people feel comfortable talking to their computers, which really didn't happen in voice dictation before. And it's not just a Whisper model, which is a common question I get.
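[Editor's note: for concreteness, "zero-edit rate" as described would be computed roughly like this. The function and sample data are hypothetical illustrations, not Wispr's actual implementation.]

```python
def zero_edit_rate(dictations: list[tuple[str, str]]) -> float:
    """Fraction of dictations the user accepted without any edit.

    dictations: (raw_transcript, final_accepted_text) pairs.
    """
    if not dictations:
        return 0.0
    untouched = sum(1 for raw, final in dictations if raw == final)
    return untouched / len(dictations)

# Five hypothetical dictation sessions; three needed no edits.
sessions = [
    ("ship it today", "ship it today"),
    ("ship it to day", "ship it today"),
    ("send the invoice", "send the invoice"),
    ("cemetery search", "semantic search"),
    ("press the function key", "press the function key"),
]
print(zero_edit_rate(sessions))  # → 0.6
```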
B
Yeah. For people who don't know, it's spelled Wispr; you've got to spell it somehow. I mean, the question here is always the same thing: voice is very commodity. I actually happen to use Superwhisper, mostly influenced by Jeremy, actually. And then Granola is very popular. Notion has this Notion speech thing. Like, what's the plan?
A
This is every.
B
This is why I'm not an investor. How do you survive?
C
Basically trying to reason about why you should be the winner.
B
Even ChatGPT Desktop has some shortcuts for stuff. I don't know if it does exactly the same thing, but it's not that far away. Anyway, you're excited about it. I do see a lot of tweets about Wispr, and it's one of those things where, yeah, the PLG is getting me, man. I'm like, should I switch? I don't know. My thing's fine. But what if it feels better on the other side? I don't know.
C
Well, we'll see how that plays out. There are some interesting plans to get it to be a cooler product. But we'll see. The other company, and again, okay.
B
We'll call this stealth company StealthCo.
C
One thing I find very interesting about StealthCo is that it comes in the purview of research. We talk about different architectures all the time. One of the most compelling alternate architectures for AI is diffusion models. One thing that I think is really interesting: you talk a lot, Sean, about the Pareto frontier of latency, cost, and quality. Diffusion models today are, I would say, 80 to 90% of the quality at one tenth the cost and latency. So it has huge implications, obviously on the stock market, which is kind of Nvidia, and many other things. But there are also clear examples you can show of use cases where that might be very valuable, because there are many applications that work in volume that do not require high quality but definitely require better latency, and everyone could use some cheaper models. So I think there's an interesting area of research there. Maybe it gets to frontier, maybe it doesn't. The one thing I want to draw attention to with diffusion that I think is particularly interesting is that left-to-right reasoning for code doesn't actually make sense. We might sometimes write code left to right, but after you write code, you go up and down and figure out, hey, is this variable set? Did I do this? There are many bidirectional dependencies in code, so it naturally lends itself to diffusion models, where you can imagine that as you are denoising, you fix partial issues in different parts of the code at once, versus this reasoning paradigm where you kind of have to figure everything out and then go give your final answer.
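[Editor's note: a purely schematic toy of the denoising idea, not a real diffusion model. It shows the control-flow contrast with left-to-right decoding: every position starts masked and is re-decided against the whole current sequence on each step, so fixes are not constrained to flow left to right. The scorer, target, and vocabulary are all made up for illustration.]

```python
TARGET = ["the", "cat", "sat", "down"]          # toy ground truth
VOCAB = ["the", "cat", "sat", "down", "dog", "ran"]

def score(seq, i, tok):
    """Toy scorer: how well `tok` fits position i given the WHOLE
    current sequence (bidirectional context), unlike an autoregressive
    decoder, which could only look at positions < i."""
    s = 2.0 if tok == TARGET[i] else 0.0
    for j in (i - 1, i + 1):                    # neighbor-agreement bonus
        if 0 <= j < len(seq) and seq[j] == TARGET[j]:
            s += 0.5
    return s

def diffusion_style_decode(length, steps=3):
    seq = ["<mask>"] * length                   # start from pure "noise"
    for _ in range(steps):
        # Re-decide every position against the full sequence;
        # any position may change on any step, in any order.
        seq = [max(VOCAB, key=lambda t, i=i: score(seq, i, t))
               for i in range(length)]
    return seq

print(diffusion_style_decode(len(TARGET)))
```

The point of the sketch is only the loop structure: each denoising pass conditions on all positions at once, which is the property the bidirectional-dependencies-in-code argument appeals to.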
B
Yeah, I like that a lot, especially for syntax structures like C-like languages, where you need to open and close a bracket and hold that state. I think the question is always the hardware lottery of Transformers. Transformers is all you need, and diffusion is kind of a different branch off of that research tree. They are related, but we might be too far gone down the Transformers tech tree to come back and then go down diffusion, to the point where they might never be frontier because we've just had four extra years of Transformers LLM research.
C
Yeah, it's true. I think about this all the time thinking about in the course of history, what are the significant moments where if only something forked off a different way that maybe there would be a completely different paradigm of outcome.
A
Yeah.
B
And usually the worse tech wins, like Blu-ray versus HD DVD or something like that. I think there are a lot of variations of this. Even, I think, there was discussion about AC versus DC current back in Edison's day. There was this big fight between Tesla and Edison. I don't know if you.
C
I'm aware of the very, very basic details. But it's so interesting, right, because you take something like this and then the question becomes like, okay, do we bet on it or is the timing just off because something took off and we can't pull this rocket ship back to Earth and so we've lost that fight. I don't know. I'm not a purist scientist anymore where I believe the best ideas and things win. I think in markets it's very often obvious that that's not true. I think a lot of things go into winning and sometimes it's out of your control.
B
Yeah, it's very true. And speaking of anthropic and things that happened this year, MCP happened this year and when MCP came out, I was sleeping and then when they came in and did the workshop with me and I think as you see, a lot more noise and I was like, okay, there's something to this. And now it's basically kind of de facto one and as the interop layer for all the labs and all the models. And there's no reason why this could have won versus anything else. Apart from it was well specced out, it was backed by anthropic. It's kind of a similar thing. I don't know if it's the best, but it was good enough.
C
Yeah, it happens so often. It makes it tricky, not just for investing but in general, to think about ideas. We see this with startups as well. It's very heartbreaking. Every once in a while, you know, you'll meet a founder where I'm like, your idea is fantastic, your execution is great, I just don't see it working because the market dynamics are not in your favor. And maybe I'm wrong about some of them. But, you know, when you say market.
B
Dynamics, is it TAM or something else?
C
No. Sometimes it's like, I don't see it. You are a small group of people trying to wedge something into a market. We know how long that takes and we know the other forces at play. Imagine a single person running in a tunnel with a light at the end, and the tunnel is closing in on you. You could be the fastest runner in the world and you might not make it out of the tunnel. That's the analogy. So you might be doing everything right; it's just that the window is not there, or at least I might not think that window is there. I do think a lot of companies fall into this bucket of ideas.
A
And so to me, I almost think of companies like MosaicML that way, which is like, hey, we've got this amazing team, we can help you fine-tune models. But the market dynamic was that there was really nobody fine-tuning models. Part of it is the open models are not that good, and part of it is people don't really have good data; they don't have the expertise. And if you go back now, there are RL environments, and RFT is like the next wave of that, and maybe they'll be able to get in the window. But it's just interesting how, you know.
C
Now then the other flip side of that is, and yet they get acquired for this.
A
Amazing. But yeah, because the market is just so big. I mean, even if you think about something like diffusion models for text, it's a bet. If you sell it for a billion dollars, that's like 0.01% of Nvidia's market cap. So the amount of money being spent in the space is large enough to justify the bet, the same way Instagram was like 1% of Facebook's market cap. This is similar, where it's like, man.
B
Databricks is rich enough.
A
Exactly. It's like, you know, they really want.
B
You to know that they're an AI company.
C
Exactly.
A
And now they're worth a hundred billion. Without MosaicML, maybe they're not on the same trajectory. I don't know, maybe they are. Because, you know, have you guys ever.
C
Talked about the roll-up companies? Which is my favorite little pet topic. Yeah, yeah.
B
I didn't know that was a topic of yours.
C
It's not really a topic of mine. I just find it quite interesting to see. Speaking of AI companies and markups, there are companies, obviously I'm not going to name them, who go like, hey, here's a small company that does a million of ARR completely with humans. I'll buy it for 2 million and then do some of it with AI. But now I'm an AI company, and a million of ARR in AI-company world is a $100 million valuation. So it's pure multiple arbitrage on the category that you're in.
B
That's the cynical take, haha. But then, what if it actually works? Because the hard part is getting the customers; the hard part is getting the domain expertise. Drop a bunch of software engineers in there, automate it, make it scalable, make it cheaper, and yeah, maybe it works.
C
No, you're right. Yeah, absolutely right.
A
I think it just.
B
He funded a company that bought a tax firm. Yeah, yeah. No, look, an accounting firm, a law firm.
A
Law firm, yeah.
C
If it works, it works. I just think what was interesting to me is like you can 50x the value of the company before you actually landed anything with AI yet.
B
Yes. But then you use that funding and the equity to hire the people. It's weird. So there's this concept I always talk about, which I'm surprised people don't really understand, which is reflexivity: the belief that something can be true can make it true, even though it's not true at the time that you believed it.
A
Yeah, that's venture capital. Yeah, just give money and everybody's like, oh, they raised 300 million. It's a great company. I love that company. It's like, yeah, I'm an investor in it, so I love it too. And it's like all the employees are like, I love this company. My stock is worth a lot of money.
C
There's also that effect that's very clear in venture capital, where, not just what you said, which I agree also happens, but there are times where people funnel so much money into a company before it's really prime time that it dissuades anybody else from entering that market. And then they become the de facto winner of the market because they cancel the competition with funding. I'm not going to name the categories, but you can think of innumerable categories in this market, in this software paradigm, where that's already happened.
A
Yeah. And I feel like even in AI, maybe two and a half years ago when ChatGPT came out, it was like, this is cool, but a lot of enterprises were maybe skeptical of whether the trend would continue. But then once you start seeing tens of billions of dollars being put into OpenAI and Anthropic, it's got to.
B
Work. Especially once you can deploy it in hardware, at which point you're building infrastructure, and infrastructure is very capital intensive and you actually can do the math. It's not humans anymore. It's machines and land.
A
Yeah, exactly.
B
Power.
A
Amazon is building all these training chips and all this infrastructure for Anthropic. It's like, do you really think they're dumb? You know what I mean? At some point, same with Stargate, do you think all these people are dumb? And you're saying the models are not that good?
B
It's like the podcast we released today with Kyle. He was still kind of skeptical that they had the 500 billion for Stargate. And I'm like, not only do they have the 500 billion, they have the next trillion lined up, mostly because of the projections. I've been talking about this a lot, and I'm very out of my depth because I'm not Dylan Patel, but I think it's probably the biggest story of the year beyond the models, just the infra build. And I think people don't understand: the roadmap is very, very strong for the rest of this decade, at least for OpenAI, to go from 2 gigawatts of compute this year to 30 with everything they've already announced. And then there's a plan for the next 125. The United States uses 300. It's crazy ambitious.
C
Do you think, and I guess this is a question for you guys too, because I don't have a good answer yet: the belief is always, obviously, bitter-lesson-pilled, right? You buy more compute, therefore you get the best models.
B
And is it, by the way, is the anthropic relevant thing?
A
Right.
C
But is that necessarily true? There could also be a world where that's just not true. So, you know, you are kind of.
B
This is what makes it bitter. It's like, what if it doesn't apply to me this time?
C
Right, right. And I think, being in Sam Altman's place, that's absolutely the right chess move to play. But I do wonder what happens if all this investment in compute doesn't actually lead to economic gain, slash better models, slash everything else.
A
But I feel like we've reached the point where the models are good enough that even if the next generation is not 10x better, we'll be able to use the compute. I mean, and again, the data center.
B
Is like, that's the cope.
A
They're writing it down over like 30 years. So it's like, can you run GPT-5 Pro over the next 10.
C
15 years, given the amount they're spending on compute? This is a general question; I'm not criticizing at all. Even if everyone was using Claude, like, whatever, Codex, Claude Code, all the time, inference demand is not that big globally, right?
B
Not yet.
C
You would have to believe. So what would you have to believe for that to be true? Because there are 800 million weekly active users.
B
This is what Greg Brockman says: a GPU for every human on Earth. I'm somewhat shitposting. But they actually say this in their official comms, so I'm just repeating him.
C
I don't necessarily disagree. I'm just trying to work backwards to what we need to believe to get there. Because ChatGPT's compute is not that much.
B
Correct.
C
Right. So they're not doing agentic stuff. Maybe they will be in the future. Most people are doing basic Q&A-type queries. By the way, I.
B
Put it up on chat. So if people watching on YouTube, they can see this, which is this year, OpenAI spent $7 billion on compute. Only 2 of that was for all of their inference. The remaining five was R and D. So all of ChatGPT, all 800 million users, all of Sora, all of like all the sort of API volume, 2 billion. And they have two and a half times that for R and D. Right.
C
And so my point being, yeah, inference is one thing; I don't know how that will scale to that volume. But then you'd have to believe that the rest of it goes into R&D and therefore produces models that are so much better that they have more demand, et cetera. But if, in any case, the incremental marginal gain is not that big, then that's the risk of the bet.
B
Yeah. So to disrupt OpenAI, you need to have more efficient research, because right now it's pretty inefficient: spend five to get two. So what OpenAI did to Google is what the next OpenAI has to do to OpenAI. You know what I mean? Google was spending a lot of money, Facebook was spending a lot of money, and they didn't come up with anything. OpenAI did. And it was a small, tiny little startup, and they had GPT. And they had Radford, but someone else may or may not have come up with that.
C
It's like that classic quote, your margin is my opportunity. Google was milking those margins and they didn't want to spend the compute for every search query. And so now OpenAI is willing to.
B
So we've covered a lot of topics. Thanks for indulging. For me, this is like a survey episode: here's everything. We're also catching up with a former guest, and it's always nice. Maybe we can end on this coding interview thing, which you literally tweeted about today. What is the situation that engineers should be aware of? And I think this maybe ties into LLM psychosis a little bit.
C
So I'll just cover the tweet first. I tweeted about this guy who wrote a blog post about an interview. He got a LinkedIn message he thought was legit, from what turned out not to be a legit account, where he was interviewing for a company. They sent him a coding interview: clone this repo, run this code, make this edit. Not untraditional; pretty run-of-the-mill type interview. It happens. And in that interview, he claims he went to Cursor and asked whether the code had any vulnerabilities or anything he should be aware of. And it revealed that it had a byte array that compiled into a link that would go and take a bunch of private information. So that was the TLDR. And I tweeted about that, saying, interestingly enough, it was caught by vibe coding, but the world of vibe coders who don't really look at code, I imagine, is more susceptible to attacks like this in the future. And it got me thinking about a lot of things, like, what do attack vectors even look like if people aren't looking at code? There's so much that can go wrong, I think. And what are the implications for model safety and how models behave in those environments? So that's one. But the broader thing, and I'm curious what you guys think about this: what I've been noticing more and more, and I was having this conversation yesterday with some of my close friends, is that some of the joy of coding used to really be that you're stuck on this annoyingly hard problem, and you bang your head against a wall and you want to kill yourself, and then eventually you're like, I've figured it out, and you solve it. That's the muscle you build when you improve and get better. And now I find myself even doing this. It's so hard to do if you just have a constant slot machine that might give you the right answer.
And who knows if it will, who knows if it won't, but you just pull it all day long: please fix, please fix, please fix. And what does that mean for the craft of software engineering in the future? I don't know. This vibe coding stuff is great for the rest of the world that was not engineers, but I'm now seeing how it's affecting trained software engineers, and it's kind of like a drug for them, and it stops them from exercising their own craft, which is doing the engineering.
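[Editor's note: the attack shape described above, a byte array that only becomes a URL at runtime, is easy to illustrate. The byte values and domain below are made-up placeholders, not the ones from the actual incident.]

```python
# A constant like this reads as inert binary data if you never look closely...
SUSPICIOUS_BYTES = bytes([
    104, 116, 116, 112, 115, 58, 47, 47,    # "https://"
    101, 118, 105, 108, 46, 101, 120, 97,   # "evil.exa"
    109, 112, 108, 101, 46, 99, 111, 109,   # "mple.com"
])

# ...but decodes at runtime into an exfiltration endpoint.
decoded = SUSPICIOUS_BYTES.decode("ascii")
print(decoded)  # → https://evil.example.com

def looks_like_url(b: bytes) -> bool:
    """Cheap review heuristic: does a byte constant decode to a URL?"""
    try:
        s = b.decode("ascii")
    except UnicodeDecodeError:
        return False
    return s.startswith(("http://", "https://"))

print(looks_like_url(SUSPICIOUS_BYTES))  # → True
```

A heuristic like this obviously doesn't catch obfuscation beyond plain ASCII (XOR masks, base64, split arrays), which is the point being made: if nobody reads the code, the hiding doesn't even need to be clever.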
B
Because it turns your brain off.
C
Because it turns your brain off.
B
Yeah. I think with self-driving cars, people thought about this first. This is why, when you drive your Tesla, you have to keep your eyes on the road: they don't want you to turn your brain off. And we don't have that equivalent in developer environments yet. Maybe we should watch your eyes.
A
We remove one word in the code. Which one was it? Write it back.
B
So my answer: I happen to have shipped a model today, two models. And part of that is actually what I've been calling semi-async. A lot of it, I think, is my reflection on coding agents. We started with Copilot, which was tab autocomplete, and then we went all the way to Claude Code, which is very async. It could take 30 minutes, could take 30 hours, I don't know; it just runs. Something that Cognition is very interested in, and something I've been writing about more, is fast agents: under a certain latency you actually want to just be in a mind meld between the human and the AI, with fast responses, so that you can get helpful assistance if it helps, and it can get out of the way if it doesn't. And that is actually where you do your hardest problems. The async agent is where you do the commoditized, dumb, boring labor stuff that you know how to do; you just don't need to do it. But when you are actually in deep work and focus and you're working on a hard problem, you should be applying your human intelligence, augmented by AI, in an unintrusive fashion. Which, obviously, I think is a pro-human message, but it's also a really interesting area of research for us.
C
But to play devil's advocate, that's almost like telling somebody, well, I'm going to put the cigarettes right here. I know you love smoking, but please don't do it.
B
It's not a cigarette.
C
It's right here. It kind of is. There's an analogy to be made here. It's a cigarette for your brain, because you do not think anymore when you pull that lever. And over time, I feel like the brain will get weaker if you don't use it for that task. And I like your message. If I had a team of engineers, I would tell them the same thing. But I worry about the reality, which is that that's not what they do in many cases.
A
But I mean, you've got to ship the thing, right? I agree, but at some point you've got to close the ticket and merge a PR. So how are you going to get that code done? They are doing it, or they're going to get fired if they're.
B
Just generating, the enterprise way or the other. B2B SaaS. Yeah, it's interesting. Okay, so maybe I'll put it this way, and I want to see how you respond. We have the fundamental formula for coding agent performance: find the right files, then write the right files. That's it. Read and write. Read the right files and write the right files. So actually, what fast agents can do, or what I just shipped today, is basically the equivalent of a heads-up display: give you more info, but you still take all the actions. We help you read faster, read more efficiently, read with more focus, but you still write. And so I think that's not a cigarette so much as we try to be helpful, and we're evaluated on the helpfulness of the reading and the comprehension, so that you can hold everything in your head. That would be the pitch.
C
It's true. I don't know how the product looks. I would love to eventually play with it, with the SWE-grep and all of that stuff. But there's a world where I think the product decision also goes a long way toward how people use it. So if it's like that, then maybe. And even, for example, when someone uses Cursor, a lot of people like the fact that they can see the code and then they have to hit the final accept.
B
Yeah. So human in the loop.
C
Human in the loop. But, you know, I still worry. And I worry the most about the younger kids, right? Think about the people growing up in college. How would you ever get yourself to think if you just had this clearly more intelligent thing than you?
B
Yeah.
C
At least, and I don't want to rate myself too highly, but if I'm working in a domain that I understand, I can at least tell the AI model: you're doing the wrong stuff, definitely don't do that, don't write that at all, that's a terrible file, why are you creating four files for this? But if you think about what it looks like to an 18-year-old CS-major freshman, they're probably just like, I guess that's how you do things. And they can't call it out like that. Their training is just a little bit different.
B
Cool.
C
Yeah.
A
Deedy, thanks for indulging us, and welcome back. Thanks for coming back.
C
Thank you guys. Always fun chatting with you guys.
Episode: Anthropic, Glean & OpenRouter: How AI Moats Are Built with Deedy Das (Menlo Ventures)
Date: November 14, 2025
Host(s): Alessio (Kernel Labs), swix (Editor of Latent Space)
Guest: Deedy Das (Menlo Ventures, ex-Glean)
This episode centers on how foundational AI companies build and maintain moats, focusing on Glean (enterprise search), Anthropic (frontier AI models), and OpenRouter (model routing/integration), through the lens of engineer and investor Deedy Das. The discussion spans startup battles, AI model commoditization, infrastructure arms race, and the rise of new application and research layers in the AI ecosystem. Along the way, the hosts and Deedy dissect the challenges and opportunities facing AI-first companies—both model labs and app builders—while offering insiders’ perspectives on venture, talent, and shifting market dynamics.
[01:19–07:22]
Notable Quotes:
"It's such a boring, unsexy company that became sexy later." (C, 01:38)
"The moat is just—we did the hard work." (C, 04:35)
[09:09–15:15]
[15:32–27:56]
Notable Quotes:
“Anthropic is the fastest growing software company of all time... 0 to 100 in one year, 100 to a billion in one year...I couldn’t have predicted it.” (C, 15:54)
"You hire good talent and let them loose with a lot of tokens, see what they come up with." (B, 18:53)
[30:44–37:22]
Notable Quotes:
“The moat is what is the hardest to do in any part of the stack.” (C, 33:33)
"As an investor and a human with my own limited time on Earth, if Anthropic can go from $4B to $183B in two years, then everything else is a waste of time.” (B, 34:28)
[41:06–48:01]
[53:14–65:41]
Notable Quotes:
“OpenRouter is the only non-Elon company that Elon has tweeted the most about.” (C, 60:18)
“Diffusion models today are, I would say, 80-90% the quality at one-tenth the cost and latency.” (C, 64:08)
[69:06–73:18]
[73:20–76:10]
On the Joy (and Future Risk) of Coding with AI:
On Cultural/Global Talent:
On Venture Capital Reflexivity:
On Model Lab Moats:
The conversation is candid, energetic, and sprinkled with Bay Area/SF “insider” self-deprecation, market banter, and references to both technical and cultural elements in the AI world. Deedy’s perspective bridges hands-on engineering, “hard” startup experience, and now high-stakes venture. The hosts prod for specifics but also riff, making the show both accessible and relevant to practitioners, founders, and investors.
For deeper dives, exclusive charts, and full company list, head to latent.space.