Loading summary
A
Tomas, welcome to the show.
B
Pleasure to be here. Turner, thanks for having me on.
A
I know it's kind of funny we had never met before last week and then we were at the Sameil's thing and then we're recording this podcast and then next week we're going to be at the Beyond Summit. So it's like three. Three times right in a row.
B
Three-peat, let's go. Let's do it.
A
And so really quick, for people who don't know what is Theory Ventures, we
B
are an early stage AI focused venture firm. We invest anywhere from 1 to 45 million typically in B2B software and infrastructure companies.
A
How would you kind of, I guess summarize sort of the state of AI today? A little bit of an open ended question, but how do you kind of think about everything that's going on?
B
All-out sprint. That's the way it feels. Okay, so why do I say that? The first is there aren't enough GPUs for anybody, so people are sprinting to buy GPUs or rent them. I think the second thing is model improvements. A model only remains state of the art for about 41 days, even though it costs several hundred million or a billion to train, maybe less. And then there's also an all-out sprint for customer acquisition. Buyers are the most open they've ever been to trying new things. And so if you can capture many of them, you'll have a big business. And then the businesses themselves are growing at unprecedented rates. So I think everybody is sprinting.
A
Yeah, I guess maybe the first one you mentioned the GPU prices. What does that even mean? Like for somebody who is not super familiar with, I guess, any of this, like how would you just explain that to a smart person who is hearing this for the first time?
B
Yeah, so to run a machine learning or an AI model, you need a GPU, which is a particular kind of chip that does lots of calculations in parallel at the same time. Matrix math, it's called. And if you have a MacBook, you have one. In fact, you have an excellent one, maybe one of the best that you can buy for your computer. And you can run small models, which are effective there. But if you're a company that offers an AI product, you can't just buy a whole bunch of MacBooks. You need to buy servers that have lots of these GPUs in them. And those GPUs are hundreds of thousands of dollars most of the time. And the prices increase every week because there are not enough of them. And so you can either buy them and run them in your own data center, or you can rent them from other people. And there aren't enough because, one, the models that we're building are much bigger than we thought. They're now trillions of parameters. The demand for those models is much bigger than we thought, primarily because of things like OpenClaw and agentic tool calling and coding. And there's not enough memory, there aren't enough CPUs, which is really important. And that's mainly because most of these chips are produced by a company in Taiwan called the Taiwan Semiconductor Manufacturing Company, or TSMC.
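The "matrix math" here is worth making concrete: in a matrix multiply, every output cell is an independent dot product, which is exactly why a chip with thousands of small cores can compute them all simultaneously. A toy Python sketch of the computation (real GPU kernels are far more sophisticated; this only shows why the work parallelizes):

```python
# Toy matrix multiply: every output cell C[i][j] is an independent
# dot product of row i of A with column j of B, so a GPU can
# compute all of them at the same time.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [
        [sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Each output cell depends only on one row and one column of the inputs, so nothing forces them to be computed one after another.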
A
Very creative name.
B
Yeah. But you know, it's coming back like, you know, like Nabisco. You know Nabisco is an acronym.
A
No.
B
National Biscuit company.
A
Really? Okay.
B
Yeah. And there was this wave of, like, you know, American Motors. Right. I remember I met this gaming company. It's a total tangent, but they were called the Brooklyn Packet Company. It's like, that is an awesome name. Anyway, we're starting to see some of these locality-based names come back again.
A
Well, it's either that or you make up a word. Like, we were at a point where people's names would be like Computify.io, because you had to make something up to come up with the name.
B
Yeah. To buy the domain name at some reasonable price. Yeah.
A
And now the same thing's happening with GPUs. The prices are going up.
B
That's right. And then the other dynamic is just power and land. Can you find a place to build a data center? It takes three to five years, maybe seven years, to build a new power plant or to buy a jet turbine to power your data center. And then you need to build the data center itself, which takes 18 to 24 months. So there's all these lead times. Right. Atoms finally are starting to be really important in the world of software.
A
This episode is brought to you by Numeral. Numeral is the fastest, easiest way to stay compliant with US sales tax and global VAT. It's easy to set up, and they automatically handle all registrations and ongoing filings, and their API provides sales tax rates wherever you need them, with all the integrations you need. Numeral supports over 2,000 customers in both the US and globally, and they pride themselves on white-glove, high-touch customer service. Plus, they guarantee their work, and they'll cover the difference if they mess anything up. They're fresh off a fundraise, closing a $35 million Series B from Mayfield, which they're going to reinvest into building an even better product. If you want to put your sales tax on autopilot, check out Numeral at their new domain, numeral.com, that's n-u-m-e-r-a-l dot com, for the end-to-end platform for sales tax and VAT compliance. This episode is brought to you by Flex. It's the AI-native private bank for business owners. I use Flex personally and I love it, because they use AI to underwrite the cash flow of your business, giving you a real credit line. The best part is 60 days of float, double the industry standard. Flex has all the features you'd expect from a modern financial platform, like unlimited cards, expense management, bill pay that syncs with your credit line, and their new consumer card, Flex Elite. Flex Elite is a brand new Ramp-like experience for your personal life. A credit card with points, premium perks, concierge services, personal banking, cards and expense management for your family, net worth tracking across public and private assets, and a whole lot more, fully integrated with your business spend. One card for your businesses, one card for your personal life. One card for everything. To skip the waitlist, head to flex.one and use my code TURNER to get an additional 100,000 points, worth $1,000, after spending your first $10,000 with Flex. That's flex.one and code TURNER for $1,000 on your first $10,000 of spend. Thank you, Flex. 
And now let's jump in. I know a lot of people are making these projections, like, we need 10x more next year, and then the following year we need 10x more from that, whatever the number is. Can we even keep up? What do you think is going to happen?
B
No, we won't keep up, and we will overbuild. But okay, so capex spending, those are dollars that are spent to build out data centers, will be at like 1.2 to 1.4 trillion this year. And as a share of US GDP, that's low to mid twos on a percentage basis. So it'll be one of the largest infrastructure build-outs ever. Right now there's World War I, World War II, just in terms of large infrastructure projects. The railroads peaked at 7.7%, and then there was the Eisenhower development of the national highway system. And I think this year we will exceed that. So the question is, will we beat 7.7% and get to around 2.1 to 2.4 trillion in a year or two? I mean, yeah, it's definitely very possible. And you can see Google is outspending Microsoft in GPUs, even though GCP is significantly smaller than Microsoft. And that tells me that there's some very sophisticated math to justify those build-outs. And as long as that math works, we will continue to build. And then the electric grid will have to be reimagined, because it was never conceived to handle these kinds of volumes.
A
Do you ever reach a point though where this kind of plateaus, like with the railroads and with the national highway, like it's just kind of, we built the highway, it's like it's there.
B
Yeah, but I mean, the highways continue to grow, right? I mean, I don't know, the 101 in San Francisco, they keep adding lanes, and the I-5 in LA. But yes, we will keep going, keep going, keep going, and very likely overbuild, because it's impossible to determine when the economics change or the demand changes. Why would the demand change? Well, I think Google for the last two years has said they generate 80% more tokens per GPU hour than they did the year before, which is close to doubling productivity. So that's a really big deal. Then you have segmentation. So you can say, I don't really need a super fancy model to update my CRM. I can use a model that's running on my computer. So you could actually have a lot of the workloads going on your MacBook, and that will happen. But for now, I don't know what AI penetration is as a percentage of ultimate penetration, but I have to estimate it's less than 2%, probably something like that, which means we can grow 50 to 100x from here.
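That 80%-more-tokens-per-GPU-hour figure compounds quickly: two years of it is roughly a 3.2x improvement. A quick back-of-the-envelope check in Python (the 80% annual rate is the figure quoted above, not an official statistic):

```python
# 80% more tokens per GPU-hour each year, compounded over two years.
annual_gain = 0.80
multiplier = (1 + annual_gain) ** 2
print(round(multiplier, 2))  # 3.24
```

So the same fleet of GPUs serves more than three times the tokens after two such years, which is one reason demand forecasts are so hard to pin down.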
A
So do you think what's kind of going to happen then? It sounds like is we're going to sort of hit a wall and how much we can expand the infrastructure capacity. So we'll have to just get more efficient essentially with what we have.
B
I think that will happen. I mean, you have this token maxing era. Token maxing is putting a leaderboard in your company and seeing how many tokens you can use. And that's a lot of fun. I hit 250 million one day, literally did everything through an AI. And after burning a couple thousand dollars, you're like, okay, that was fun once.
A
What did you do? You were telling me about that before the token max. What did you make it do? And then what did you accomplish from spending the thousand dollars?
B
You can't do it just by querying ChatGPT. There's no way; you might get to a million tokens that way. Parallelization is absolutely essential. So you have to create a plan for what you want an AI to do that particular day. Coding is huge, because it's reading large existing code bases. And I think the best technique is: anytime you thought to do something with a computer, do not start with the browser, do not start with your email client. Go to the AI and try to figure out how to do it with the AI. The challenge now is many of the clouds will stop you from doing this. They'll hit you with a rate-limiting error, a 429 or a 529, and they'll say, too much for you, Tomasz.
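The 429 and 529 he mentions are the HTTP status codes providers return when you exceed your rate limit, and the standard client-side answer is retrying with exponential backoff. A minimal sketch, where `RateLimitError` and `call_model` are hypothetical stand-ins rather than any real SDK's names:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429/529 'too many requests' response."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    # Retry fn(), sleeping base_delay, then 2x, 4x, ... after each
    # rate-limit error, and re-raising once retries are exhausted.
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical flaky call: rate-limited twice, then succeeds.
calls = {"n": 0}
def call_model():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: too many requests")
    return "ok"

result = with_backoff(call_model, base_delay=0.01)
print(result)  # ok
```

In practice the backoff keeps a fan-out of parallel tasks from hammering the API the moment the limiter trips.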
A
So this could be like you could say, go and read the entirety of Twitter and just give me a summary of the best tweets or all of Reddit or something. Just give it an insane task.
B
Read all my emails. Listen to these 50 podcasts and transcribe all of them. Download these 10 GitHub repositories, install them, and see if they work. Benchmark these four local models: which one's faster? Download a bunch of startup presentations and analyze them, extract all kinds of stuff. Find for me the 10 most important academic papers in the last week or two. Download all of the earnings transcripts of every public technology company. Draft 10 blog posts, pick the best one, and then critique it like an editor. Actually, the harder part is not getting the AI to do it. The harder part is creating a workflow where you anticipate multiple steps and forks. If you can do that effectively, you can go through a tremendous amount. For those who are coders, there's this beta feature within OpenAI's Codex called Goal, where you just tell it, this is what I want you to achieve, and then it will just continue going. I was chatting with a friend who said he had his going for 18 hours and it worked. And I did this on Wednesday: I was frustrated with a dictation app, and so I just told Codex, just replicate this app so it works. And then, whatever, 45 minutes later it said, here you go. And so that cost $15, but I'll only pay that $15 once.
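The parallelization he describes, fanning a day's plan out into many independent tasks instead of one chat at a time, looks roughly like this in Python (`run_task` is a hypothetical placeholder for a real model call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(task):
    # Placeholder for a real model call; here we just echo the task.
    return f"done: {task}"

tasks = [
    "transcribe podcast 1",
    "summarize earnings call",
    "benchmark local model",
    "draft blog post",
]

# Each task is independent, so they can all be in flight at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_task, tasks))

print(len(results))  # 4
```

Because the tasks don't depend on each other, total wall-clock time is set by the slowest task rather than the sum of all of them, which is how a single person's token count can explode.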
A
Now you have this dictation app that you can use whenever.
B
Yeah, dictation is another way of driving a lot of tokens. Yeah.
A
Interesting. And yeah, because I do feel like we've all seen those headlines at this point. Like, Meta actually had to get rid of the leaderboard, I think, was one of the most recent things, because people were probably doing something similar, where you're gaming the leaderboard.
B
Yeah. I mean, you know, you can, you can use jet fuel in your car and it'll go faster, but like, you
A
know, will it actually go faster?
B
Yeah, yeah. I mean, you know, racing car fuel. Octane is the fuel rating. So hydrocarbons form all of the propellants, right? And there's hexane, which I think is C6H14, and then there's heptane, and then there's octane. When you get gas, there's 87, 89, 91, and that's the amount of octane that exists within the gas. And then if you get racing fuel, the octane ratings are much higher: 91, 95, 105. And so the amount of energy per unit volume is significantly higher, and your car will produce a lot more horsepower if you put racing car fuel in it, because the explosion is stronger.
A
So you actually will go faster.
B
Yeah. I mean, at the top end, sure. Provided that everything in your engine holds together. This is not an endorsement of putting nitrous into your Prius and seeing if you can break 200 miles an hour.
A
Yeah. But I guess it's kind of the same thing with AI. It's like you may have a super powered model or you're running all these different parallel tasks, but are you even doing them properly and have you set things up to make that even worth it?
B
Yeah, exactly. So let's ground this in some numbers, right? So a very small model might be a few hundred million to 2, 3, 4 billion parameters. Those are great for dictation, they're great for grammar cleanup, transcription, those kinds of things. Then you have the next range of models, which I'd put at the 25 to 35 billion parameter models. And almost anything that you can do with a computer, aside from coding, you can now achieve with one of those models. And they'll be faster on your laptop than they will be talking to Claude or OpenAI. They're just faster at it. And then there's another class of models that's like 120 to 150 billion. Those are not that often used. And then you have the state-of-the-art models, which are trillions of parameters. And they can do architecture and implementation of very sophisticated code, or novel math
A
discoveries. And then those are the ones. So basically, I think I've seen, if you follow some of Dario and Anthropic's positioning, it's like: it costs a ton to train these models, and as the revenue starts ramping, you start getting profitable on the older models. But then they're training the new ones, which are even more expensive to make, which makes it look like they're losing money, but the revenue gets even bigger. And it's kind of like these stacking, super-expensive-to-train models work way better. So eventually we get to a point where they just start making a ton of money. Is that kind of how this is going to go?
B
I think it resembles pharmaceuticals more than it resembles software, where you might spend three years researching a drug and then, I think in pharma, I'm not deep in pharma, but I think you have 17 or 20 years with an exclusive patent on whatever the next statin to reduce cholesterol is. But with AI models, like we said, you have 41 days to be state of the art. You can see there's a company called OpenRouter, which is an open source router of model calls. You can see the share shifting pretty significantly. In November of last year, Grok, which is the xAI model, had pretty significant share, above 15 percentage points. Today it has a lot less. And then you can see the share shift as a result of the subsidies from OpenAI or Anthropic on their different products, or new models, GPT-5, 6. And so you don't have 17 years to recoup your investment costs. You have to keep running faster. The other dynamic that's really important is, as these models and the training data become larger and larger, there's a great paper that talks about how the model performance will ultimately converge. And we're seeing this. At the beginning you could see GPT-4 was significantly better at agentic tool calling. And then, I don't remember exactly which Claude model it was that kind of caught up. And then Gemini was really strong in a particular domain, maybe it was math. And another one was great at Humanity's Last Exam, which is a knowledge retrieval benchmark. And now they're adding more and more benchmarks, and they're all more complete, and the differences between them are increasingly subtle.
A
And so ultimately then the advantage is just, do you have people using it? Like, it doesn't even matter how good the model is, sort of, because they're all the same. And it's more: is it the behavior, or have you captured the workflow in some capacity?
B
Yeah, I think we're going to get to a place where you reach a minimum viable intelligence. Where, if you work at any company with a computer, there's some minimum. I don't know what it will be. Let's say it's a 30 billion parameter model in late 2026, and if you have a computer that can run it, that's good enough. It's just like we're not giving everybody inside of a large company a state-of-the-art laptop. You might have an IBM PC that's pretty good, you might have a MacBook Air that's pretty good, but you don't have an M3 Ultra with 512 gigs of RAM for everybody. There's a minimum level of performance that's good enough, and then you kind of upgrade every two years or three years, depending on your company's policy. You can imagine we get to a very similar place with models, where you say, okay, I have the current Gemma model from Google, it's 31 billion parameters, and I can do most of my things on my laptop, and that's fine for me. And then the frontier models push into the domains of high performance computing, math research, materials research, chemistry, really pushing PhD-level analysis further and further. And you have some companies, Dow Corning, pharmaceutical companies, who are willing to pay a huge premium for that, but everybody else will use sort of a mid-range model.
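One way to see why those parameter tiers map onto laptops versus frontier clusters is a rough memory estimate: at 16-bit precision a model needs about 2 bytes per parameter for the weights alone, less if quantized, more once you add activations and context. A back-of-the-envelope sketch (illustrative arithmetic, not any specific model's actual footprint):

```python
def model_memory_gb(params_billion, bytes_per_param=2.0):
    # Weights only: one billion params at 2 bytes each is 2 GB,
    # so GB is simply params (in billions) times bytes per param.
    return params_billion * bytes_per_param

print(model_memory_gb(3))        # 6.0 GB: small model, fits most laptops
print(model_memory_gb(30))       # 60.0 GB: needs a big-RAM machine
print(model_memory_gb(30, 0.5))  # 15.0 GB: same model quantized to 4-bit
print(model_memory_gb(2000))     # 4000.0 GB: trillions of params, data center territory
```

The quantized line is why a 30-billion-parameter model can be "good enough" on a well-equipped laptop while trillion-parameter models stay in the data center.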
A
When you can build anything, Amplitude lets you know how to build the right thing. Use human language to get complex answers about your products. No more manually selecting events or building charts or dashboards. Just ask. Use agents to sense changes in customer behavior, decide what's causing them, and ask you if it's okay to fix it, continuously in the background while you work. Get the answers you need while building, directly in the tools you are already in, like Claude, Cursor, Lovable, and more. And for the first time, understand if your agents actually work. Measure quality, debug failures, experiment, and measure their ROI with agent analytics. Amplitude: with AI analytics, all you have to do is ask. 
You had a post recently, AI at a Discount is the name of the post. And I guess essentially the premise is: Anthropic, if you actually look at it, depending on how fast it's growing, if you take the rate of growth, it's actually trading at quite a bit of a discount. I think on the other side, though, everyone's like, oh, these AI companies are so overvalued, blah, blah, blah. So what is actually going on with how these companies are being valued by the markets?
B
Yes. Okay, so when a company is growing really fast in the public markets, many people, not everyone, but many people value it on what's called a forward revenue multiple basis, which is a fancy way to say: estimate the revenue in the next 12 months, then take the market cap and divide it by that revenue estimate. It's called an EV to forward revenue multiple. Anyway, the fastest growing software company today at scale, aside from a pure model company, is Palantir. The projected growth rate is 68%, which is
A
good for their size.
B
Yeah, yeah, that's like a mid-sized venture-scale business five years ago. But this is a publicly traded, billion-dollar-plus revenue business growing at that rate, and they trade at a very elevated multiple. And then if you look at Anthropic: year over year, it grew 30x, and now it's closer to 43x. It went from a billion in run rate to 43 billion in a year. Right. It's just absurd.
A
I mean, it just breaks all laws of business, like, ever. It's just impossible for that to happen. You'd think they were committing fraud, or a scam, or it's fake.
B
Yeah. In the month of April, they added all of Snowflake's revenue plus all of Palantir's revenue. In a month. I mean, just monstrous. So anyway, let's say they grew 43x. Okay, what do you think they'll do next year? It's almost absurd to ask. But even if they're at 43 and then they get to 100, and they're valued at 900, well, they're valued at around single-digit forward revenue multiples. And then you look at Palantir, and it's valued at, I mean, 30x, 35x. And so Anthropic is actually trading at a discount, which is kind of wild, because the growth rate is 80% versus 4,300%.
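The multiple arithmetic in this exchange is simple to check: a forward revenue multiple is just valuation divided by projected next-twelve-month revenue. Using the round numbers from the conversation (illustrative figures as spoken, not audited financials):

```python
def forward_multiple(valuation_b, forward_revenue_b):
    # EV / forward revenue: market value over projected NTM revenue,
    # both in billions of dollars.
    return valuation_b / forward_revenue_b

# A ~$900B valuation against ~$100B of projected revenue, as discussed,
# versus the 30-35x forward multiple quoted for a slower grower.
print(forward_multiple(900, 100))  # 9.0, a single-digit multiple
```

Same formula, wildly different growth rates, which is the whole "trading at a discount" argument.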
A
Yeah. And is that because the market doesn't expect Anthropic to continue to grow that fast? Is it just people saying, okay, this is not sustainable; it's still growing really fast, whatever, but we're going to assume this slows down? Or is it a private market thing, just because it's harder to get access, and technically Anthropic can price it however they want? Really, is it just not a fair price, just out of whack?
B
One, it's very difficult to project forward revenue. Two, the revenue is non-recurring. I mean, some of it is contracted, but it's unclear. The third part of it is, at some point the revenue growth will be limited by just the total amount of GPUs. So Anthropic and xAI signed an agreement so that 25% of the Colossus data center, which is focused on training, will now be allocated to Anthropic. But at some point there just aren't enough GPUs. And so what happens to a business that's growing 43x in a year that starts to grow at, say, 30%? Which is still, I mean, on a $43 billion revenue base, you're talking about adding 12 to 15 billion of revenue a year.
A
It's like they just added that in a month and now they're going to add it in a year. It's almost unrealistic to think it's going to slow down that much.
B
Right. So what are you underwriting? What do you think it'll be? I don't know. And then there's also the capital intensity: you need to build out these data centers. Do they need to raise debt? What does that look like? How much dilution are you taking as an investor? So there's a lot of unknowns. And then there's this trope: trees only grow so big. Have you ever heard that?
A
No, but it makes me think of the law of large numbers. Or just, this company could never get that big. If you look at textbooks, they'd say the laws of physics say you cannot go from 1 to 43 billion in a year. Like, it's just impossible.
B
Right. And so I remember when we had the first trillion-dollar market cap company, and that seemed staggering. And now we have four companies that are around 3, 4 trillion, maybe 5. And one interesting question to ask, to break the ice at a dinner party of people who really care about finance, is: when do we have the first $10 trillion company? I mean, I don't know. It's definitely within our lifetimes. Is it 2030? Is it 2035? You have the devaluation of the dollar, and then clearly these companies are growing really fast. From my perspective, it's inevitable. And so, will Anthropic be the first $10 trillion company? It's kind of hard to imagine, right? Like, who's going to take the other side of that bet? I don't know.
A
Yeah, well, especially when you consider that two years ago they arguably had no business. Like, the thing that exists right now just wasn't there, right? And now it's suddenly the fastest growing of all time. But I think the interesting thing, and I think you also wrote about this publicly recently, is the strategy they're taking is sort of similar to what Google did, where you're kind of commoditizing the complements, I think is how you described it. How do you think about the strategy Anthropic's taking with all the products?
B
Yeah. So Jason, I think it was Jason from A Smart Bear, wrote this blog post in the early 2000s called Commoditize the Complements. And the idea is, if you have a really good business, what you want to do is look at all the people who have businesses around you and make all of those products free, so that more people end up using your product. That's called commoditizing the complement. You commoditize everything that's complementary to you. Okay, so let's make this concrete. If you're Google and you make money when people click on search ads, you want to make it so that people click on as many ads as possible. And I was at Google from '05 to '08, so I saw a little bit of this from the outside. Okay, so what did they make free? Well, it used to be that you paid for email. Okay, email was free. And then it used to be that you paid for video hosting; video hosting was really expensive. But then they bought YouTube and made that free. And it used to be that you would pay a license to have an operating system on a mobile phone. Then they bought Android, and then they made that free. And it used to be that you would buy a dedicated GPS device to navigate your car from one place to another. And then they ended up buying Keyhole and making Google Maps and Google Earth free. And then they bought all these books, chopped them up, scanned all of them, and put them in the index. So it was just driving more and more searches. Google Docs, same thing. So you're just using the Internet more, and by virtue of the fact that you're using the Internet more, and it was free, so there was less friction, you would go to Google more, and then you would get more ads. So if you're Anthropic, you can run a very similar strategy. You are Anthropic. You are selling inference, you are selling a prediction of an AI system. And then what you want to do is, okay, well, there was all this workflow software the previous decade. 
Maybe it's legal software or finance software or accounting. I'm just picking categories at random here. But you don't really want to charge per seat anymore. That's silly, because the amount of money people will pay per seat, maybe it's $500 a seat per month, compared to the amount of inference they'll buy, $2,000 a month. Just give away the $500 seat and have them buy more inference. You'll make a whole lot more money, and then you have less competition. And so, I don't know, I'm just observing from the outside, but that's a very game-theoretically optimal way of maximizing. When you have a really phenomenal business, you just want to make sure everything else is free, so there are as many queries as possible.
A
Yeah. And then why does that become so important then? Paying for the inference that you mentioned? What does that even mean for somebody who doesn't know what inference is?
B
Inference is like when you ask AI a question or the AI does something
A
for you, it's like the process of them doing the retrieval and doing whatever they do with the GPUs that they then give to you, essentially.
B
Yeah, that's right. So all these systems are basically word prediction machines. And so when you ask, what is the capital of Italy, it's creating a sentence where it's predicting the next word. And Anthropic and the other companies charge by the word. It's called a token, but it's effectively by the word. And so the longer the answer, or the greater the amount of information you give the model, the more expensive the query. So if you have a really large code base, or if you have a really large legal case, or if you have lots of PDFs and you want the AI system to analyze them, that's a very expensive query, because it turns out that the input tokens, the data that you give the model, are somewhere around 90% of the overall cost of asking that question most of the time, or more, 90 to 95%. And so, anyway, inference is what the model is predicting to answer your question. If I can just get the system to ask more questions. And I don't want to give people the impression it's somebody sitting there typing and asking about a particular case. It's: let me create a workflow. So to analyze a startup, it's like, okay, find the backgrounds of the founders, create a bottoms-up sizing of the market, map the market, help me understand the backgrounds of the team, compare this to other companies. And then all of a sudden, the tokens you use, the amount of information you're feeding to the model, the number of words being analyzed and predicted, explode.
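The 90-95% input-share claim is easy to sanity-check with a toy cost model: input and output tokens are priced separately, usually per million tokens. The per-million prices below are invented for illustration; real providers publish their own rate cards:

```python
def query_cost(input_tokens, output_tokens,
               input_price_per_m=3.0, output_price_per_m=15.0):
    # Price each side separately; prices are dollars per million tokens.
    cost_in = input_tokens / 1e6 * input_price_per_m
    cost_out = output_tokens / 1e6 * output_price_per_m
    return cost_in, cost_out

# A big-context query: 500k tokens of code base in, 5k tokens out.
cost_in, cost_out = query_cost(500_000, 5_000)
total = cost_in + cost_out
print(round(cost_in / total, 2))  # 0.95: input dominates the bill
```

Even with output priced five times higher per token, the sheer volume of context you feed in is what drives the bill, which is the point being made here.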
A
So really Anthropic's business model in their strategy is just get people to do as much as possible in Anthropic products. Just use it for things.
B
Yes. And this is why OpenClaw is so strategic. So what do I do? OpenClaw is a little assistant that lives on your computer, and you can create a task list for an AI. You can say, find for me the best place to visit in Italy, go and schedule this with this person.
A
It's kind of all the things you described earlier that you can do with Claude.
B
Yeah, but instead of doing them synchronously, back and forth, you can create a huge long list, and then those tasks can take 30 seconds or they can take three hours. And that's how you token max, which we were talking about. That's how you jump from a million tokens a day to 100 million or 500 million tokens per day.
A
And Anthropic and OpenAI, it sounds like, want people token maxing, technically the highest-margin version of token maxing, which is probably a B2B workflow in some capacity.
B
Exactly. And you want people to feel they no longer want to interact with a computer without AI, which I think many people in the Valley already do. Because you can just do so much more: I can just enumerate this list of tasks, and then Claude, or some other model, will just burn through that backlog.
A
Is there anything that you're not using AI for right now on a computer?
B
There are some tasks. I mean, I'm on an Android, and so it won't answer SMS messages because that pipeline's broken. But no, you really want to stay in it. I mean, there's this great book called Flow, which talked about how you get into a place where, when you're working, you're just directly connected. And it's kind of tied to this: there was a philosopher named Heidegger who talked about the design of tools. If you think about using a fork: once you learn how to use a fork, the fork becomes an extension of your hand and you don't feel a difference. And I think working with an AI is like that, in the sense that I can just tell it what to do in the most native way. I don't have to learn to type. It's probably a dying skill.
A
Yeah, you can literally voice dictate it.
B
Yeah, I can just dictate what I want it to do. And then, if it has enough information about the way that I work, and it has access to my systems, and I've helped it fashion its own tools, then it can work as if it were me. I'll give you an example. I was on a plane going to Atlanta, and they told us in the waiting area there's no Wi-Fi, right? And so half of the people are relieved, because they can watch a movie guilt-free, and the other half, the workaholics, are like, oh gosh, what am I going to do for three and a half hours? And so I sat there and I tried to find a really fast Internet connection so I could download a local AI model, because now I look at the laptop and I'm like, wait, you're so dumb, right? Or it's the same feeling when you get into a self-driving car and start operating it, and then you get into a regular car because you're someplace else. It's like, why won't you drive yourself?
A
Yeah, right. So I have one more question on this inference topic. I actually don't know on a tactical level how this works, because Anthropic's business — you could say they basically sit on top of a cloud provider, they're this layer on the cloud provider. How does that actually play out, in the sense of how that business model works? Do they need to build their own cloud provider eventually? Because they're just kind of like a GCP wrapper or an AWS wrapper, really, at the end of the day.
B
They can decide, right? So you can own the buildings and the chips inside — they're called data centers — which Google does. Right. So let's think about this three layer cake. There's the data center. Then there's the chip inside the data center, the GPU, the chip that's doing the analysis. And then there's the model. So let's look at those three layers. Google has all three. Google manages its own data centers. Google designs and manufactures its own chips, called TPUs, tensor processing units. And then Google makes its own models, called Gemini and Gemma. And that is a great business. Then you can say, okay, Anthropic does not own the data center, it does not design its own chips, it just makes a really great model. And that looks a lot like Netflix. Netflix competes with Amazon — Amazon has Prime Video — but Netflix runs a lot of their infrastructure on AWS. Both businesses can succeed. There are pros and cons to each. A great segue is, okay, let's look at xAI. xAI has a data center. They don't have their own chips, so they're missing that middle layer. And they have a model. So there, it's an Oreo — well, an Oreo with nothing in the middle. An Oreo with a vacuum.
A
It's Oreo when you take it apart and lick the icing and then you stick it back together. There you go.
B
That's right. Or you put somebody else's icing in it.
A
Yeah.
B
So there'll be different strategies, and you need different amounts of capital in order to do that, and you'll have very different margin structures. If you can vertically integrate, which means own each layer, I think you will ultimately be significantly better off, because you can design the chips and the data centers for your algorithms. Whereas if you're an algorithmic or a model company, you will definitely have a say in how those chips and those systems are designed, but you are not the only customer.
A
Fair. And you probably need to have enough scale to justify the investment into all your own stuff. Because it's not easy, and it's not fast, and it's not cheap.
B
No, it takes. I mean, it might take you.
A
I don't.
B
I mean, Google has been developing the TPU since 2012. Right. Amazon has been developing their own chips, called Trainium and Inferentia, I think for the last five or six years. And it probably takes seven to ten years to get to a place where you are at the state of the art — where you have executed enough cycles to really be there. And so at some level of scale, sure. If you're one of the five most valuable companies in the world — Apple has its own chips. All the M1 to M5 silicon that you and I run on our computers, that's proprietary, and it's a big advantage.
A
So then the play is probably, if you're anthropic right now, maybe at some point you need to start doing that. But it's really just get as much adoption as you can, get as much usage, get as much revenue to have cash to work with, to now fund all this stuff. And to your point that you started talking with this, it's just a sprint. Go as fast as you can to get there.
B
Yeah. If you have a significantly better model, you will win share. And the opportunity cost is so huge and the willingness to spend is enormous because if your model is meaningfully better, you might add 100 billion to your market cap in a quarter.
A
So I think it begs the question: where do you think is a good place to be investing in AI today? Is it over because Google is vertically integrated and will win everything, and maybe Anthropic and OpenAI win on the edges? Is it wide open for startups? Obviously you're investing in startups, so maybe this is a loaded question, a biased question. But what do you think the opportunity is today, investing in AI?
B
There are certain markets that are uninvestable because they are on the direct roadmap for the large companies that are incredibly well capitalized. Right. So agentic coding. I think if you were to start an agentic coding company today, it'd be very difficult because it is probably the most important market and you have so many businesses whose roadmaps are pointed in that direction.
A
Is it the most important because it's so tied to that inference thing we talked about, where there's just so much inference flowing through it?
B
Yeah — great question. Okay, so why is agentic coding such a phenomenal product market fit with AI? The first is there is a lot of spend in software, so the market today is really big. The second reason is software engineers are largely very expensive, so there's a lot of labor spend as well. So there's technology spend and there's labor spend, and both are very large. The third is the demand for software, I would argue, is infinite. You and I as we age — all of us — will only use more software, not less, and it will become increasingly sophisticated, building on the previous software. So you have labor spend, software spend, and a very fast growing market with infinite demand. And then the last thing is it's a set of tasks where an AI can test whether or not its own answer is correct.
A
Because it's just so objective, rule based — you know if you got it correct or not.
B
Yeah, it's like math. Either the equation resolves or it doesn't. And if an AI system can test that itself, well then, sure, you can just let it spin overnight until it has satisfied all the different equations, or all the different parameters that you've defined for the piece of software. That's called a closed loop problem. You can just have the machine spin faster and faster and faster. And the combination of all those four makes these systems perform exceptionally well in software. Where is that not the case? Well, let's say we asked it to paint impressionist art. I mean, you and I can debate whether Monet's lilies are the zenith of impressionist art; you could say no, Pissarro is the bee's knees. It's subjective, it's open loop. It's the blog post — when we summarize this great episode that we did together, there's no objectively best blog post. So that's not a closed loop problem, and there AI has a much harder time, because you can't just let it spin. You have to apply judgment as a person and say, that's enough.
A
And the reason all the biggest AI companies are going after agentic coding is because it's ultimately the biggest TAM and the biggest opportunity. So then as a startup you're almost accepting that you're maybe settling for smaller, less interesting markets — but then there's an opinion to be had that these are actually still very big markets, or that they may be very strategic for other reasons.
B
Right. Yeah, it's like, you know, I mean, after Google in 2006, would you have started a search company?
A
Probably not. Maybe DuckDuckGo did? I don't know — I think DuckDuckGo maybe started around then, and it's still alive. But yeah, I don't know if I would have invested in it.
B
No, it's just really tough because you, you don't attack your opponent in the area they are strongest.
A
Do you think there may be some jockeying where, I don't know, a company that's not in agentic coding — that we all know of and hear of every day — suddenly kind of emerges, having created a position to ladder themselves in there or something?
B
Well, you know, Cursor is right there — all the dynamics around Cursor and the brilliant business that they have built. So that's definitely an interesting one to watch. And you have Poolside, which is releasing US open source models. So now sovereign AI — AI that is limited to a particular country — has become a critical geopolitical issue, and so you have companies that are building models for India, and companies that are building models for Japan and the United Arab Emirates. So maybe there's a market segmentation. You say, I want to be the best agentic coding system for India. Okay, there may be a market segmentation there that makes sense, just the way that you might have a vertical search engine to compete with Google — one focused on travel was a standalone vertical for a long time. So it's not to say that you can't segment and then compete within that segment. I don't think you can just go and say, okay, I want to win the United States agentic coding market as a model provider. That would be tough unless you really have a meaningful scientific advance, a mathematical advance.
A
How do you think, then, about which opportunities are interesting? How do you figure out — is it this side market, this other market, this market the incumbents haven't already captured? How do you figure out what's worth going after?
B
Well, okay, so let's think about the markets where the incumbents have clearly demonstrated an interest. Agentic coding is one. The second one is health — OpenAI has a great team pushing health products. You have Anthropic launching a collection of skills on Monday of this week tied to finance and the automation of finance; that'll be important. There's legal work that's associated, so the legal market is definitely in scope for them. Anything around infrastructure and software automation is definitely core. And so those are some of the markets — I'm sure there are more. Security, clearly, they will push, though I doubt the model companies will dominate that market in its entirety; they will be a supplier more than an individual competitor. But there you have six markets where you must consider the direct competitive dynamics of the largest AI companies. And you can either invest and say, I believe a company is sufficiently far ahead that one of the incumbents must buy or partner with them — viable investment strategy. Or you can say, okay, there are 10 markets they really care about, and I'm not investing in any of those; I'm going to go pursue markets 11 through 100, and then I'm going to analyze each of those markets' dynamics. How many competitors are there? How many venture backed competitors are there? How likely is it that the customer population adopts software? If you're a longshoreman, the odds of you adopting AI, I think, are pretty low. But if you're, say, in the business of back office automation and you're an insurance company or a third party logistics company — pretty high. And then the question is, do the model companies care about that market or not?
A
So then what's your lens for thinking through this whole SaaS apocalypse? I feel like we've gone through these waves where people are like, oh, every software company's dead. And I don't know now if it's flipped, or if it's like they're not all dead — I'm not sure where we're at; it's hard to keep track. But on that side, if you're a mature software company, how do you think about the defensibility?
B
Yeah. Okay. So the public markets value growth — it remains the most important factor as an input to valuation. That 50 to 60% correlation.
A
You're saying the growth rate of a public company — 50% of its valuation is just depending on how fast it's growing?
B
Yeah, a 50% correlation — it's explained by it. Which are the three fastest growing segments in the public markets? The first is security, the second one is data, and the third is core systems infrastructure. All of those have tailwinds from AI. The slowest growing ones are vertical software companies, and then productivity apps, where some of them are seeing negative growth — and I forget the third. So there really is a distribution. You can't look at it as all publicly traded software companies; there's a distribution. The faster growing ones are doing fine, and the ones that are slowing or contracting will be punished. The one really interesting question — actually, this will be fun with you, Turner — is: okay, imagine it's 2001 and the dot com crash has just happened, and you're looking at all the venture backed and publicly traded software companies. They were building on prem software. You would get a box of software with a CD at a store, and then you would install it.
A
Right.
B
You're the head of IT for your company. And then after 2002, some number of companies moved to the cloud. Which companies that were big during the boxed software era transitioned to the cloud and survived, maybe even thrived?
A
So I was born in 1991, so I was about 10 or 11. I'm trying to give you the perspective I would have had as a public market investor in 2001, 2002. I'm just trying to think of how I'd even size that up. Looking back in hindsight — maybe Adobe.
B
Yes, great. Yes.
A
But that is not really what they did in 2001, right? They slowly transitioned to the cloud over the past 25 years. It probably didn't start in 2001 — it probably started in like 2005 or something.
B
Yeah, no, that's right. Okay, so Adobe is a great case.
A
Maybe Salesforce.
B
Salesforce is post cloud. So they launched directly on the cloud and then their banner was no software, which meant no on prem software.
A
Okay, maybe Oracle, but I don't know how fair that would be to count.
B
Very fair. Okay, yeah, so you're on it. So you have Adobe, clear market leader with Photoshop and InDesign and all those things. You have Intuit.
A
Yeah, that's a good one.
B
TurboTax and all that stuff. They made the transition dominant in their category. You have SAP, 50, 60 year old software company.
A
That's a common one. Though the AI stuff is all going hard at SAP now, I feel like.
B
Yeah, that's right, yeah.
A
So can they survive again? We'll see.
B
Anyway, you keep going through this exercise and we were able to name about seven to eight companies that navigated that transition. Out of how many?
A
I have no idea. Do you know how many there were?
B
I mean hundreds. I think it's order of hundreds.
A
So are there characteristics of some of these? Is it that they had a very specific customer that they served? Did they have management teams that took the cloud seriously? That feels like a big component of it.
B
I think the characteristic is that they were near monopolists.
A
So it almost didn't matter what they did, whether they made the change in three months or 10 years, they just would eventually manage it.
B
You think about Oracle — transactional databases inside of banks; who's ripping that out? Right. It still hasn't happened. Intuit — there's nobody else even close. Adobe — before Figma, name a competitor that mattered to Adobe. SAP — can you name another enterprise ERP system? And I'm being a bit glib here, but I do think they just had tremendous control, tremendous presence within their markets, which bought them time, and they clearly had the resources to be able to figure out how to make the transition. And as a result, customers couldn't leave for a better alternative — maybe there were better alternatives, or maybe there weren't. But I think it really is a dominant market position that buys you the time and gives you the resources to learn how to transition — and maybe affords you the opportunity to buy a market leader and then integrate that DNA, plus the product, into the next evolution of the business.
A
So it'd probably just be paying attention, in pretty much all these categories — there's probably a bunch of these AI native companies, and it's seeing how the incumbent publicly traded companies' products seem to be evolving relative to these new companies founded in the past couple of years. Are they able to make these changes fast enough to keep their dominant position? There's probably a couple that will, and a lot more that won't do it properly.
B
It's really hard. Yeah. I mean, ServiceNow has acquired about three or four different AI companies, right? They've definitely been aggressive — that would be an example. But there are many companies that really have not yet responded and will need to.
A
Yeah. In terms of — I can't remember if we were talking about this before we started recording or not — but the impact of AI on the economy, compared to some of these other economic cycles. Did we hit on this a little bit? I think railroads was the peak — I think you said it was like 7.6% of GDP or something like that.
B
That's right, yeah. So I think we'll be at about mid-2% of GDP within this year. And in Q1, 75% of GDP growth
A
is AI and a lot of this is data center build out.
B
Yeah. So there's the construction, the manufacturing, the assembly, the chips, the networking associated with it, and then all the labor that's associated with that, and then the revenue that's generated from it — which is the fastest growing market. So yeah, I mean, I think 75% of all US GDP growth. If it continues to grow at this rate, the US overall GDP will continue to grow much faster, and then it will go from 31 to whatever it is, 33 or 35. And if we can get to 7 or 8 or 10% of that, you're talking about three and a half trillion a year of investment going into AI in the intermediate future. This is a big business; it's a big industry.
A
Well, and you think about the scale of it. I mean, cloud's maybe an interesting example, or mobile — did they make the economy grow faster? I'm not actually sure. I feel like they had to have.
B
Oh yeah, of course. I mean, the networking build out — this was when you were 10 and I was 18 — but before the Internet was broadly adopted, everything needed to be connected. Every house needed to be connected, every building needed to be connected, fiber and copper. And so you had huge GDP — I mean, not nearly close to this scale, but significant GDP — when you had Nortel Networks and Qwest and all the initial Internet service providers, and the telephone companies adding new telephone lines that were ultimately replaced by fiber. That drove a lot of the '99 boom. That big network build out — Juniper Networks and Cisco and all those businesses — they were explosive. Very similar to this era.
A
Yeah, well, and then, I mean, that begs the question — it didn't end that well in 2001. Do you feel like there's a bear case to be made, in terms of just being careful or cognizant of where we're at in the technological or economic cycle, or the capital and debt cycle related to all this stuff? Is there any kind of thing you keep top of mind when thinking through that?
B
Yeah, I mean, it is a lot different than 2001, because in 2001 the revenue models of many of the businesses were not known. Like Amazon — okay, fine, in the fullness of time. But like Peapod, which was yesterday's Instacart —
A
we didn't have phones. Like it was literally, you're placing your food delivery order on the computer or whatever.
B
Yes, right.
A
With dial up, like it takes three minutes to load.
B
Yeah. So it's a different era. I think you can legitimately say AI converts electricity into work, just the way that gasoline is converted into work if you use a lawnmower. And it can meaningfully improve productivity. Right. You have Boris Cherny from Anthropic, who talks about how he can ship 30 to 50 times as much code with AI as without. Okay — we're turning electricity into real work. So what are the things to worry about? The first is, yeah, the credit markets. Many of these data center buildouts are built with 80% credit, and, you know, OpenAI — I think SoftBank limited the size of the debt; I think they dropped it by 40% this morning. So we'll see what happens there. When you borrow money — like you borrow money for a house with a mortgage — you are providing the house as collateral to that mortgage. And the lender looks at the house and says, okay, what does the inspection say? How long will the roof last? How much investment? In the very same way, people who are lending to data centers have to look at the GPUs. How long will those GPUs last? Are they productive? Will they fail at some point? And there's a debate about how long those inference GPUs stay productive. So the credit market is definitely one. The second, I think, is the token maxing wave — in the back half of this year, I think it will wane, and everyone will say, yeah, you're burning a lot of electricity and you're buying a lot of intelligence, but what did it do for the company? I think that's definitely coming at some point. But overall, it's hard to paint a negative picture.
A
I mean, part of the negative, just the general perception, is there's going to be all this job loss or whatever. I think the other one is water usage in data centers, or contaminating the land, noise pollution maybe — whatever the argument is against the data centers. But I know the other side, with jobs, is that it's actually not going to cause job loss — if you look at every technological revolution, it always ends up actually creating more jobs. What do you think will be — or at least what are you seeing — maybe it's still pretty early, but what kind of new jobs do you think we'll see from a lot of the AI build out?
B
Yes. Okay, so let's talk about why there are more jobs. There's not a finite amount of work to be done. There's this thing called the lump of work fallacy, which says there's a total amount of work to be done every day across the globe, and there are a certain number of workers, and they have to allocate their share, and once they're done, they go home.
A
I've never heard this before, but makes sense. Yeah.
B
Yeah. But if you're a workaholic or you're married to a workaholic, you know, there's always more work to be done.
A
My wife's like listening to this, like, yes, let's.
B
Right. And so, okay, what ends up happening? Well, you know, you used to like write Java code, and then somebody used to review that Java code.
A
Well, great.
B
Now you no longer have to write or review that Java code. You have to architect the system, and then you have to make sure that system is resilient. And it turns out, in order to compete, you can no longer just offer a point solution — you need to offer six times the breadth of product. Okay, get to work. Right? And I think that happens across the board. We looked at the automobile industry in the United States: before interchangeable parts and Taylorism, you had 80,000 people who were artisans building different components of combustion engines.
A
This is like before the assembly line too.
B
Yes, right.
A
Before The Model T. So just some dudes just sitting in a room in a circle banging parts together.
B
Yeah, making a piston, right, or a camshaft. And it worked — they sold cars. And then all of a sudden the price of a Model T collapsed, the price of automobiles collapsed, and everybody was driving one. And the number of people working in the US automobile industry within five years went from 80,000 to 500,000. There were a few people working on the line, but there were people marketing the cars, people designing the cars, people building dealerships, people building roads. So the overall employment exploded. And around that time, we were looking — there were about a million manual dishwashers, people who washed dishes for a living.
A
Wow, that's crazy. This is in the U.S.? I mean — yeah.
B
Every restaurant in, you know, in the United States needed one.
A
1% of the population, 2% of the population. That's crazy.
B
Yeah, or farming. Right. Think about the shift from agrarian farming — people moved to the cities and they found all kinds of new work, and now we have all these incredible industries. So there are other benefits. You look at the Waymo statistics, how much safer these cars are. 50,000 people die on US roads, unfortunately. And once we get to a place where we have significant volumes of these cars, think about it — the longevity of the average American will increase as a result of the safety.
A
Yeah, that's a pretty big one. Where I hear that a lot is, there's millions of people who drive, and this is a significant displacement. We were at dinner probably last week, and I overheard the women beside me talking about this, and the massive concern with them was that they don't trust them, but then also — oh, what about all these drivers who are going to lose their jobs from these self driving cars, I can't support that. I think the argument, though, is there's probably still going to be people in these vehicles in a decent amount of cases. Like long haul trucking — you may still need people; maybe the number is flat or something, and there's just more trucks on the road enabled by this, and you'll still have people in the warehouses who are unloading them. Or maybe you sleep in the trailer or whatever — you'd have a nice bed, and you're kind of maintaining the truck while it's self driving across the country, or something like that.
B
So long haul trucking — the average age of a long haul trucker, I was just looking at this, is 46 to 47. It's not an industry lots of young people are gravitating to. Maybe the tastes of new job seekers have shifted and they don't love that lifestyle. So I think there's two parts to it, where ideally we are automating the jobs where there's not a tremendous amount of labor supply. That's one. I mean, one of the ways of looking at AI is: the great place for AI, while it's not perfect, is where you have a labor market shortage. You have a hiring manager who needs that job to be done, and therefore they're willing to accept, like, a 70% solution. Right — like electric pole inspections, long haul trucking, anything to do with sewer inspection. Those kinds of things AI is phenomenal at, and they can be very unappealing jobs. So maybe there's this generational shift where people's preferences for different kinds of work evolve, and the machines take the work that people are no longer interested in. I mean, tilling a farm — I'm sure there's some fraction of the population that likes that. Great. But you don't have 10% of the population who wants to go and yoke some oxen and then plow behind them. Right. The preferences change.
A
Yeah. And I think too about accounting, or finance, right? Back in the day, an accounting department was just a big building — maybe it was next to the factory — with people literally writing the debits and credits on paper, manual invoices. And some people still do some of this kind of stuff, but now it's literally a spreadsheet: you type it in and it automatically calculates. And QuickBooks — we were just talking about Intuit — the software just does it for you. It calculates the financials. You can literally press a button and get the final financials.
B
We all know what a calculator is, but when I say the word calculator, you imagine, I don't know, like a TI-82 or an HP calculator. But before that was invented, there was
A
a title like a human person that was a calculator.
B
Yeah — I mean, the Apollo missions. All the math was done by — much of it by — women, and their job title was senior calculator.
A
They literally had — I think I've seen those pictures where there's a woman standing next to a stack of papers she had calculated that was literally taller than her or something. It was just calculations of, you know, the route or whatever they had to calculate.
B
Yes, the trajectories and the orbits. Yeah, that's right. And so, okay, what happened to all those calculators? Well, you know, we found other work for them at a higher level. They didn't have to look up logarithms in big books.
A
Well, maybe instead of spending literally weeks hand calculating the equations, it's just done by the computer — and, oh, this calculation was wrong, let's see what we need to change about this route. And you get into more strategic work around the calculation, the stuff that's really a rule based thing at the end of the day. And to the point about investment firms, or finance, right — back in maybe the 50s, when they were selling junk bonds or whatever, doing the early LBOs, companies had this army of people that tried to punch out all the calculations. Versus now, a lot of them are doing more sales, more marketing. It enables more people to do LBOs, more people to take out credit, more people to raise venture capital, because the AI is doing all the analysis of whether this is a good investment — so it's just all these VCs going out and giving money to founders and enabling them to start companies. Maybe we're exaggerating this a little bit, but I feel like the productive work shifts towards things that grow a business — add more sales, do more things for customers.
B
Yeah, I mean those calculators got into the business of aerodynamics, computational fluid dynamics, quantitative stress modeling on different elements. There's always more work and it's increasingly sophisticated. Yeah.
A
Are there areas that you think AI is still kind of underrated today? Or maybe you're expecting it to get really good in the next couple years and people are maybe not thinking about it? I know you invested in an advertising company recently.
B
Yeah, we're really keen on online ads. I think Google generates something like $120 per user in the United States in ads. And you know the online ad market in the US is about 450, 460 global, 460 billion. And if you think about what ads can do for offsetting the cost of GPUs and also helping consumers find things that they might like. I think it's an absolutely huge market and so we're very keen in that space. It's been tough, I think, for startups as a whole within the online ads ecosystem, but AI is such a disruptive force that I think there's an opportunity to build a great business and we're lucky to work with a fantastic team there.
A
Is there anything that's not in the data that you are kind of waiting for or looking at? Like maybe there is data, but it's not well known data or it's not matured data. Like it's just kind of early signs
B
of things within the online ads ecosystem
A
or just in general, like in AI adoption or in usage or.
B
I don't think we're seeing the productivity gains yet. So why haven't we seen them? Well, before, say, October or November of last year, AI systems were great search engines. And then in November of last year, the models started to be really great at executing workflows — multi step processes — and that's where you really get time compression in work. Because I can write up a workflow in English, and then I can say, here's a list of 100 entries in a file, and I want you to run each one of these workflows in parallel, and boom — in 15 minutes I have the work that would have taken me four days. And we're not really seeing that in the productivity statistics yet, or in the earnings per share of publicly traded companies, but it will be significant and sustained. I think it will be tremendous. So maybe early next year, or mid to late next year, we'll see that.
A
So what will that show up as? I think I saw — I didn't actually look at this — that Datadog, the day we recorded this, is up like 30%. I saw someone joke that this is the AI productivity we were expecting. Maybe it is or maybe it isn't related, but is it just that companies are getting more efficient per employee, essentially? Is that probably what shows up?
B
Yeah, they can do more work per hour. Whatever that unit of work is, whether it's lines of code for a software engineer, or customer support cases solved for a customer support rep, or companies reviewed by a venture capitalist, the throughput of whatever factory you are operating has just gone up, because the conveyor belts and the machines can now operate at twice the speed. That's a really great mental model for it. There's a whole discipline called operations research, which asks: I have a factory with a production line, and as I change different components, how many more chocolate boxes can I make? And with AI, could you see a 30% gain? I mean, we just talked about Boris, who's at like 50x. He's clearly, I don't know how many standard deviations out. But can you see a 3x to 5x productivity gain for a software engineer on average? Maybe 3x. All of a sudden your software factory is operating at 3x the throughput.
A
Is this kind of related to — you put out a study, I think it was about a year ago, where you ran a survey with a bunch of go to market and sales teams, and basically you found that using AI had zero impact on revenue growth, or something like that. So what was the study? And has this changed in the past year or two?
B
We're just about to launch the new go to market survey, so we will know this year. And I think in retrospect that's the answer we should have expected, because again, everyone had access to a fancy search engine instead of a system that could actually parallelize work. So I think even this year we will see a modest positive response, and then next year I would expect to see a very significant response.
A
Interesting. Okay, so then how do you think people are actually buying AI today? In terms of companies you've invested in, surveys that you've done, what seems to be getting purchased? What are the obvious things, what are the less obvious things, and what's the general decision making framework you see people using?
B
So there are buying committees: there's the line of business owner, the VP Eng, VP product, VP marketing, VP customer support; there's the head of technology, VP Eng, CIO, head of security; and then oftentimes general counsel, because there are lots of different data, information, and security questions around AI. I would say the sales cycles were extremely fast from November until March, and now, as a result of some of these buying committees becoming more sophisticated, they're slowing a little bit, but they're still much faster than classic software sales cycles. And one mental model, which is not universally true but is useful, is that every leader within an organization will pick a platform that they trust to deliver the vast majority of their agents. If you're the head of data, you'll pick a company like Monte Carlo and say, great, I trust you to deliver all of these data agents. If you're a VP of engineering, you've already done that, either with OpenAI or Anthropic or Cursor, one of the three. And the same for sales. In many of those categories it's still TBD who that brand is, but that's what will end up happening. As a leader, you'll make a career decision and say, I trust this particular company to deliver for me all the different sales agents I could need.
A
So then there might be a sort of jump-ball opportunity in some of these categories. Like in sales with Salesforce: do you make a bet on Agentforce, or whatever all the Salesforce AI stuff is, or is there a newer, more emerging product or company out there that you make that bet on? My guess is it probably depends on who the actual decision maker is, whether they use the product, and then probably a bet on the slope of improvement. You might say, the startup has added all these new features, they've gotten so much better. Like you said, you're probably making almost a career bet: this roadmap seems like it's actually going to be super useful for us and will actually move the needle versus the existing option we're using. I don't know, is that a fair way to think about it?
B
Yes, right. And today you have general purpose tools, you have low-code and code workflow builders that are growing very fast, because there has been no specialization. The most valuable tool now is a stem cell that I can play around with and have it specialize until I see it germinate and blossom into a workflow that I then crystallize. Which is what happened in software: everyone was building a whole bunch of custom stuff, and then you had Salesforce, which said, this is the right way to run a modern sales organization with software. HubSpot did the same thing for the SMB, and then Marketo came around, and then the workflows — I don't want to say they calcified, but they definitely crystallized around best in class, and everybody copied that until there was a new platform shift and everything had to be reinvented.
A
Do you think a lot of those companies are probably already founded?
B
No, I think it's wide open really.
A
Okay. Any areas that you're most interested in at Theory? For people listening, if they're like, oh, I'm working on this.
B
Yeah, so we're really interested in online ads. So if you do anything in the online advertising ecosystem, please look us up. We're also very interested in inference, which we think will be the biggest market. And there are many different kinds of inference: really fast or real-time inference, inference for images or video, inference for very long-running background tasks. And just the way that if you had 1% of the database market you could become a public company, if you have 1% of the inference market, you'll be able to be a public company. So specialized inference is fascinating to us. And then another category we're really keen on is email, and the automation of email with AI.
A
Interesting. And is this because agents are going to start reading most of the email?
B
Yeah, Turner. I mean, what are the odds that in two years you're logging into Gmail five times a day?
A
I've been thinking more and more about how much of my time is just deleting these stupid AI emails I get. It's always the same format, like three follow-ups and whatever. I don't know.
B
Yeah, there's no way, there is no way you're logging into an email account five or six times a day in two years.
A
So what do you think is going to happen? Am I just sitting in Claude and it's pinging me when I get the best ones or something? Or am I only texting or slack?
B
Yeah, I mean, it will learn what you care about, right? It will learn who you care about and everything else. It'll either summarize and prioritize, or... there's just no way. Look at the volume of email you and I both receive, and millions of other people do. I don't want to spend my time, and neither do you, archiving this and archiving that. And now it's text messages too. I don't know how much credit you were offered today, but I can tell you I
A
get the calls every day. Every day I get a call, on average, about, like, "I know a guy..."
B
can just borrow some money.
A
Oh yeah, great. Part of me has thought, should I just do one of these and just get, like, a hundred thousand dollar loan or whatever? Should I just see what happens if I actually say yes to this?
B
It's kind of funny.
A
So you do these predictions every year. With the ones you put out for 2026, I feel like we've actually kind of hit on some of them.
B
Oh my gosh. It's depressing, isn't it? Half of them are already there.
A
But one of the things you predicted was a lot of liquidity in the late stage ecosystem. I forget which ones you said, but I think there's SpaceX, OpenAI, Anthropic, Databricks. I don't know if Anduril is considered big enough or close enough to IPOing, but there are just a lot of these companies — I don't know, a couple trillion dollars of liquidity. Do you still think that's going to happen? We're a couple months in now, I guess. And what do you think the impact of that is going to be?
B
Yeah, I mean, I think SpaceX, OpenAI, and Anthropic definitely go public. Stripe and Databricks — I'm just looking at the blog post now — I don't think either one of those happens. If those three go public at $50 billion each, they will raise more money from the public markets than the sum total of all IPOs in the previous decade.
A
You're saying if each of them raises $50 billion when they go public, between the three of them it'll be more money than the last decade of IPOs raised.
B
Yeah. I mean, when Facebook went public it was a $15 billion IPO, and it was unconscionably large. And there was one. Now we're talking about three $50 billion IPOs. Sure, inflation, okay: let's say there are 40% more US dollars today than back then. You're still talking about like $16 to $18 billion compared to $150 billion. It's still roughly 10x larger. So it is bending the public markets in a very real way, and they will go public. I think there's a real question of how people become liquid and sell those positions, but the only thing it can be is positive for the ecosystem.
A
What do you think happens with the late stage venture market? Because for a lot of people, the business model is just like asset management firms: getting their cut of these rounds when they happen. Do we basically just have new companies that grow into it and take their place, like new trillion dollar private companies that then IPO in another five or ten years?
B
Okay, so when I started in venture in 2008, there was one $1 billion outcome in enterprise software in the whole year. I mean, up until that point, aside from, like, Microsoft.
A
Oh, so there had only been one venture-backed outcome of over a billion dollars.
B
And I remember being in awe of the venture capitalists who had that billion dollar outcome and everybody was like, wow,
A
now you can start a company and raise over a billion.
B
Yeah, well, this is exactly the point. To go public you needed about $50 to $75 million in revenue, you would raise $35 to $50 million in an IPO, and there was a bank that would underwrite you, take you to market, and charge a fee for it. Today many Series A's are larger than those IPOs. Every Series B of a significant company is larger. So the private market has basically taken over that, and I think that's fine. It's because it's so expensive to go public, but there's plenty of business there. So my point is, the IPOs of 15 to 20 years ago are today's mid-size Series B's and Series C's. And there's plenty. Yeah.
A
Well, so then does the average Series B or Series C in 10 years have, like, a trillion dollar valuation? I hope it doesn't continue in that direction.
B
No, that would mean we're in hyperinflation, like pre-war Germany. No, no, I hope not. I think it's cyclical, right? The oil industry went through a huge boom, the railroad industry went through a huge boom, the textile industry, the automobile industry. So we will have a cycle. When that cycle or that downdraft happens, no one can predict, but it will happen. And then the levels of overinvestment will be exposed. But that's healthy. It's really important for us to have recessions and corrections.
A
So then I guess one question. Let's say I'm a founder meeting you for the first time. Maybe you do or don't know much about my business and the market I'm in. What kinds of things are you going to be asking me and looking at when you're making a decision about what you want to invest in, what you want to be exposed to today as a fund? What are the things that are most important to you?
B
Yeah. What does the company look like in seven to ten years? I think that's probably the hardest question, but the most germane question in this era.
A
And is it ultimately that you're thinking about inference, owning some inference spend? It sounds like you're thinking about advertising, which you mentioned. I'm trying to remember the other two; I feel like you mentioned two other things.
B
Email is one.
A
Oh yeah, email is one. So you think a lot about how these slide into the sort of future where AI continues to eat more software.
B
Yeah. And then, you know, what does the business look like in seven years? You can say, well, we have a technology advantage, some awesome piece of kit that gives us 18 months, and we will sustain an 18-month advantage in our market. Very valuable. Or you can say, we can sell better than anybody else and build a brand. And brand is probably the only enduring strategic advantage of any company at scale, so that's also a very viable strategy. But you need a booster rocket, a way of getting a head start relative to the market. So what is the answer there?
A
Yeah, I feel like that's kind of related to, I guess, going back to Theory Ventures. I think the name is kind of related to the sort of disconnect in technology. Can you explain the name?
B
Yeah, the website says we craft theories about the future and then help them become a reality. And so we research a lot of different categories and try to understand the history of the category, which we've talked a lot about. And then if we know the history and we can understand the technology innovations that are occurring within it, then maybe what does the future look like? We try to find founders where we are similarly aligned in that vision and then work really hard to help them achieve their dream.
A
I know you were at Redpoint for about 14 years before you started Theory. What were the seeds of starting to do this? Did you always kind of know you wanted to, or was there a moment where you were like, this is it, I'm doing my own thing?
B
Yeah. I had a wonderful time at Redpoint, many wonderful people there who taught me the business, and I'm extremely grateful for it. And then I decided to launch our own venture. Now we're three years into Theory and we're 10 people. I think we'll be 15 people here by the end of the year. So we're off to the races.
A
And was it just you when you started, or have you ended up teaming up with more people who have joined you?
B
Yeah, yeah. So the team is 10. It's always been a "we." And I'm really grateful, as anybody who starts a company is, for the people who join and believe when, you know, we don't even have an office and we're all building it together. I'm grateful for their confidence and all their hard work.
A
Yeah. And I was realizing this when I was asking Claude all my questions trying to prepare for this episode: I have not had a lot of people on the podcast who have spun out from existing, pretty big funds. I've had a lot of people who are like, I started my own thing, I raised from founders I invested in, blah, blah, blah. So how did you go about it? You probably got to meet a ton of LPs over a long period of time. What was the process of putting the first fund together?
B
Yeah, it's enterprise sales, right? I think raising a venture capital fund is an 18 to 36 month sales cycle, because you are selling a 10 to 15 year contract.
A
Pretty long contract.
B
Yeah. You're asking somebody to entrust you with their capital for 10 to 15 years. And best in class enterprise sales is 15 to 25% conversion with that sort of sales cycle. So you need to build a funnel, build relationships, and run it. Understand: how do we map the account? How do decision makers feel? How do we get references? All those kinds of things. Anyway, that's the mental model we applied, and I'm grateful to the limited partners and our investors who took a bet on us when it was just a pitch and a dream. But it really is just working a funnel and building trust.
A
And I think you actually did a Monte Carlo analysis to figure out what the portfolio was going to look like. I don't actually know exactly what you did. So why did you do that? What was the process like?
B
Somebody tweeted this recently: your fund size is your strategy. What does that mean? Well, for us the opposite is true: our strategy determines our fund size, and that's true at every raise. This is how many companies we want to invest in, these are the kinds of ownership targets we want, this is how many of them we want in a fund, and this is how much money we want in order to continue to support them over different rounds. Given what's happening in the market and what 75th percentile Series A's go for, you can calculate what that fund size should be. That's the way we think about fund size at Theory: you have to first pick your strategy and then capitalize the business to be able to execute that strategy, which used to be the case for startups and is no longer, but I think it's really essential. And then you want to put the probabilities on the side of you winning. What is the 75th percentile exit and the 90th percentile exit for a startup? What is that worth? And then what does that mean for our expected value for fund multiples? Putting together all that math is an absolutely essential task for early funds, because the greater the confidence you can have in that business model, the more confidence limited partners will have in your ability to execute it.
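The kind of Monte Carlo exercise described here can be sketched in a few lines of Python. Every number below is illustrative only — the check size, portfolio count, and exit-multiple distribution are assumptions for the sketch, not Theory's actual underwriting.

```python
import random

def simulate_fund(n_companies=30, check=3.0, trials=10_000, seed=7):
    """Monte Carlo sketch of a seed fund's return distribution.

    Each company's exit multiple is drawn from a power-law-ish table
    (most fail, a few return 20-50x). All parameters are hypothetical.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    fund_size = n_companies * check
    multiples = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n_companies):
            r = rng.random()
            if r < 0.50:
                exit_multiple = 0.0   # write-off
            elif r < 0.80:
                exit_multiple = 1.0   # capital back
            elif r < 0.95:
                exit_multiple = 5.0   # solid outcome
            elif r < 0.99:
                exit_multiple = 20.0  # big win
            else:
                exit_multiple = 50.0  # fund-returner
            total += check * exit_multiple
        multiples.append(total / fund_size)
    multiples.sort()
    return {
        "median": multiples[trials // 2],
        "p75": multiples[int(trials * 0.75)],
        "p90": multiples[int(trials * 0.90)],
    }

stats = simulate_fund()
```

Running many trials like this turns "what are our ownership targets and how many companies do we want?" into a distribution of fund multiples, which is exactly the confidence-building math described above.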
A
If you're willing to share, what do you assume the average outcome looks like for an investment that you're making?
B
I don't want to be too public about some of those numbers, but many of them are public and you can pull them from PitchBook and those kinds of things. We have our own very special way of underwriting. And then I think a key part of running a fund is that you have to make exceptions. There are exceptional companies that don't fit the mold, and you can't have a portfolio full of them, but you can have some.
A
Yeah, that's fair. And I know you make a lot of things with AI personally; you're just always messing around. What would you say is the coolest or most interesting thing that you've built or done, whether it's related to Theory or just for fun?
B
I mean, the thing that's useful daily is a podcast processor that listens to 50 podcasts and then pulls out all kinds of interesting statistics and facts. That's really a lot of fun.
A
What do you get from that? Does it take this episode and give you the five most interesting bullet points mentioned, or something?
B
What are some interesting statistics? What are some counterintuitive perspectives? What's the overall narrative? And that's again, parallelization. I don't have the time to listen to 50 podcasts in a day. I'd run out of hours about halfway
A
through and not get any sleep.
B
Right. So that's not possible. But that's really useful. I think the most effective uses of AI all boil down to parallelization: you have a big, long list of something to do and you don't have enough time. How can you parallelize it? And the crazy part is that the GPU, the chip that powers all of AI, is itself amazing at parallelization.
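A podcast processor like the one described is another fan-out problem, and an async sketch shows why 50 shows fit in one sitting. This is a minimal illustration, not the actual tool; `summarize` is a hypothetical placeholder for a real LLM call.

```python
import asyncio

# Hypothetical stand-in for sending one transcript to an LLM for analysis.
async def summarize(transcript: str) -> str:
    await asyncio.sleep(0)  # placeholder for a real network call
    return f"summary of {transcript}"

async def process_feed(transcripts: list[str]) -> list[str]:
    # All 50 summarization calls are in flight at once, so total wall time
    # is roughly one call's latency rather than fifty in sequence.
    return await asyncio.gather(*(summarize(t) for t in transcripts))

summaries = asyncio.run(process_feed([f"podcast-{i}" for i in range(50)]))
```

Swapping `asyncio.sleep` for a real API call changes nothing structurally: `gather` keeps the results in input order, so statistics and counterintuitive takes can be collated per show afterward.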
A
Yeah. And you do a lot of that for your content too. People have probably come across your blog, which I think you write for a couple times a week. I don't think it's quite daily, but sometimes you post multiple times a day, if I'm remembering the timestamps right. You post quite a bit. Do you use AI in the process of creating those and coming up with ideas?
B
AI is an incredible editor. When I first started writing, I hired an editor. I had this AP English teacher who taught me to love to write, a guy named Mr. Dunn, and so when I started writing, I really wanted to be graded like an AP English student. So I hired a wonderful person who did that. And now AI will do it for you exceptionally well. The number of revisions with AI has exploded: most blog posts 10 years ago might have had two or three revisions; blog posts today have 10 or 15 or 25 revisions.
A
So do you write something that's maybe long-winded or not fully fleshed out, and then you have AI edit it? Do you have a series of prompts, like Claude skills you've made or something, where you're banging through it: you read it and you're like, fix this, fix this?
B
The most important thing is to create an outline, figure out the lede — the real story — and then the data points or the supporting arguments. It takes multiple versions. Even after 20 years of writing, Claude Code, or whatever, Kimi K2, will say: you buried the lede, you buried the most important part, like in paragraph 14.
A
You're saying you'll say that to the AI?
B
No, no, it will tell me. I'll say, hey, critique this post, and it will say, you buried the lede. I mean, I feel like a freshman in high school. It's so basic, but it's that consistent discipline.
A
So thanks again for coming on the show. This was awesome. I know you have to run. Where can people follow you? Twitter, LinkedIn, blog, all three?
B
@ttunguz. You can find me on LinkedIn and Twitter, and then tomtunguz.com is the blog.
A
We'll throw links in the show notes for people to check them out.
B
Thanks for the conversation, Turner.
A
Really enjoyed it, and I hope you enjoyed it. Thanks again to this episode's sponsors. Upgrade to Flex with one of the two links in the description and get $1,000 off your first $10,000 of spend. Put your sales tax on autopilot at numerl.com, and for AI analytics, just ask Amplitude. If you enjoyed this conversation, please like, comment, subscribe, and share this episode with a friend who loves talking AI and data. Make sure to check out the back catalog of over 100 episodes with the founders of companies like Robinhood, Sweetgreen, and Mercury, and investors like Garry Tan at YC and Chetan and Eric at Benchmark. Tune in next week for a podcast recording live from Allocate's Beyond Summit, featuring conversations with a dozen early stage investors on everything they're seeing on the ground today. If you don't want to miss it, subscribe to my newsletter, The Split, linked in the description, to get each episode plus a transcript emailed directly to your inbox every week. Thanks again for listening. See you next time.
Episode: [Title not provided]
Date: May 14, 2026
Guest: Tomasz Tunguz, Founder of Theory Ventures
Theme: The current state and future of AI startups, infrastructure, investment trends, and the playbooks emerging at the bleeding edge of tech.
Turner Novak welcomes Tomasz Tunguz, founder of Theory Ventures, for a deep-dive into the mechanics, economics, and future of artificial intelligence. The conversation unpacks the breakneck pace of AI infrastructure build-out, startup strategies for riding this wave, how business models are evolving, sectors of high opportunity and risk, and what the new age of AI-native tooling means for investors, founders, and the broader economy.
| Topic | Timestamp |
|-------|-----------|
| Guest intro + Theory Ventures explained | 00:03–00:26 |
| State of AI, infrastructure demand | 00:35–01:45 |
| GPU bottleneck & economics | 01:31–03:57 |
| Data center & power buildout | 03:57–04:23 |
| Capex in AI infrastructure | 06:36–07:58 |
| AI model obsolescence & parallel to pharma | 15:07–15:48 |
| Anthropic’s strategy: commoditize complements | 25:56–28:37 |
| Inference economics explained | 28:37–30:29 |
| Owning the stack: data center, chip, model | 34:36–37:58 |
| Avoiding big company roadmaps, startup tactics | 38:40–43:48 |
| SaaS incumbents that survived cloud shift | 47:38–51:53 |
| Macro impact: AI as GDP driver | 52:08–53:27 |
| Productivity gains in AI, parallelization | 67:18–68:39, 88:19–89:06 |
| How AI will change email | 74:57–75:26 |
| Brand and defensibility for AI startups | 81:17–82:31 |
| Fund strategy & building Theory Ventures | 84:13–87:18 |
The overall tone was candid, analytical, and laced with both historical perspective and practical advice. Both Turner and Tomasz use vivid analogies (from gasoline octane to "market #11 through #100"), concrete numbers, and a balanced view of where opportunity and risk exist as AI eats the world.
This summary is designed to equip listeners and non-listeners alike with both the key facts and the strategic depth discussed in the episode.