
In this closing keynote from a16z's Runtime conference, General Partner Erik Torenberg speaks with the firm's cofounders, Marc Andreessen and Ben Horowitz, about highlights from the conference, the current state of LLM capabilities, and why, despite huge capex, AI is not a bubble.
Marc Andreessen
I think we don't yet know the shape and form of the ultimate products. One just obvious historical analogy is the personal computer, which, from its invention in 1975 through to basically 1992, was a text-prompt system. Seventeen years in, the whole industry took a left turn into GUIs and never looked back. And then, by the way, five years after that, the industry took a left turn into web browsers and never looked back.
Ben Horowitz
Right.
Marc Andreessen
And look, I'm sure there will be chatbots 20 years from now, but I'm pretty confident that both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different, that we don't even know yet.
Podcast Host / Narrator
Every major technology shift brings new capabilities, new pressures, and new questions about how progress unfolds. At a16z's Runtime conference, I sat down with Marc Andreessen and Ben Horowitz to discuss the current state of AI, how reasoning and creativity are evolving, how markets adjust to new technology, and what this moment means for founders and institutions shaping what comes next. Now to Marc and Ben.
Announcer
Please join me in welcoming Marc Andreessen and Ben Horowitz with general partner Erik Torenberg.
[Walk-on music: Rakim's "Follow the Leader" plays.]
Ben Horowitz
Thank you to Rakim, who did that.
Marc Andreessen
Ben picked the music.
Erik Torenberg
Marc, there's been a lot of talk lately about the limitations of LLMs: that they can't do true invention of, say, new science; that they can't do true creative genius, that it's just combining or packaging. Do you have thoughts here? What say you?
Marc Andreessen
Yeah, so you get all these questions, and they usually come in one of two forms. One is, are language models intelligent, in the sense of can they actually process information and have conceptual breakthroughs the way that people can? And then there's, are language models or video models creative? Can they create new art, actually have genuine creative breakthroughs? And of course, my answer to both of those is, well, can people do those things? There's two questions there. One is, okay, even if some people are quote-unquote intelligent, as in having original conceptual breakthroughs and not just, let's say, regurgitating the training set or following scripts, what percentage of people can actually do that? I've only met a few. Some of them are here in the room, but not that many; most people never do. And then creativity: I mean, how many people are actually genuinely creative?
Ben Horowitz
Right?
Marc Andreessen
And so you point to a Beethoven or a Van Gogh or something like that, and you're like, okay, that's creativity. And then how many Beethovens and Van Goghs are there? Obviously not very many. So one answer is just, okay, if these things clear the bar of 99.99% of humanity, that's pretty interesting in and of itself.
Ben Horowitz
But.
Marc Andreessen
But then you dig into it further and you're like, okay, how many actual real conceptual breakthroughs have there ever been in human history, as compared to sort of remixing ideas? If you look at the history of technology, it's almost always the case that the big breakthroughs are the result of usually at least 40 years of work ahead of time. Four decades, right. In fact, language models themselves are the culmination of eight decades of previous work. So there's remixing, and in the arts it's the exact same thing: novels and music and everything. There are clearly creative leaps, but there are just tremendous amounts of influence that come in from the people who came before. Even somebody with the creativity of a Beethoven: there was a lot of Mozart and Haydn, and of the composers who came before, in Beethoven. So there are just tremendous amounts of remixing and combination. It's a little bit of an angels-dancing-on-the-head-of-a-pin question, which is, if you can get within 0.01% of world-beating generational creativity and intelligence, you're probably all the way there. Emotionally, I want to hold out hope that there is still something special about human creativity. I certainly believe that, and I very much want to believe that, but I don't know. When I use these things, I'm like, wow, they seem to be awfully smart and awfully creative. So I'm pretty convinced that they're going to clear the bar.
Erik Torenberg
Yeah, that seems to be a common theme in your analysis. When people talk about the limitations of LLMs, whether they can do transfer learning or just learning in general, you seem to ask: can people do this?
Marc Andreessen
Yes. Can people do these things? It's like lateral thinking, right? Or reasoning in or out of distribution.
Ben Horowitz
Right.
Marc Andreessen
And so, okay: I know a lot of people who are very good at reasoning inside distribution. How many people do I actually know who are good at reasoning outside of distribution and doing transfer learning? The answer is, I know a handful. I know a few people where, whenever you ask them a question, you get an extremely original answer, and usually that answer involves bringing in some idea from some adjacent space, basically being able to bridge domains. You'll ask them a question about, I don't know, finance, and they'll bring you an answer from psychology; or you ask them a question about psychology and they'll bring you an answer from biology, or whatever it is. Sitting here today, I probably know three people who can do that reliably. I've got 10,000 in my address book, and three out of 10,000 is not that high a percentage. By the way, I find this very encouraging. Yeah, immediately the mood in the room has gone completely to hell. I find this very encouraging because look at what humanity has been able to build despite all of our limitations, right? Look at all the creativity we've been able to exhibit: all the amazing art and movies and novels, all the amazing technical inventions and scientific breakthroughs. We've been able to do everything we've done with the limitations that we have. So do you need to get to the point where you are 100% positive that it's actually doing original thinking? I don't think so. I think it'd be great if you did, and I think ultimately we'll probably conclude that that's what's happening, but it's not even necessary for tremendous amounts of improvement.
Erik Torenberg
Ben, we were just celebrating some hip-hop legends at your Paid in Full event last week, and so you think a lot about creative genius. How do you think about this question?
Ben Horowitz
Yeah, I agree with Marc that whatever it is, it's very useful, even if it isn't all the way to that level. I think there's something about the actual real-time human experience that humans are very into, at least in art, where, with the current state of the technology, the pretraining doesn't have quite the right data to get to what you really want to see. But it's pretty good.
Marc Andreessen
One of Ben's nonprofit activities is something called the Paid in Full Foundation, which honors and actually provides essentially a pension for the great innovators in rap and hip-hop. He knows many of the leading lights of that field from the last 50 years; we were just at the event, where many of them performed, and it's really fun to meet them and talk to them. But how many people in that entire field, over the course of the last 50 years, would you classify as a true conceptual innovator?
Ben Horowitz
Yeah, well, it's interesting. It depends how broadly you define it, but there were several of them there on Saturday. Rakim, you'd certainly put in that category. Dr. Dre, you'd certainly put in that category. George Clinton, you'd certainly put in that category. In a narrower sense, Kool G Rap certainly had a new idea. But if you mean a fundamental kind of musical breakthrough, you'd probably just say Rakim and George Clinton.
Marc Andreessen
So two out of.
Ben Horowitz
Well, I mean, those are the guys who were there.
Marc Andreessen
Oh, yeah, yeah, yeah.
Ben Horowitz
But, yeah, it's a tiny percentage. Tiny, tiny, tiny, tiny, tiny.
Erik Torenberg
We had a fireside last night with Jared Leto, and he was talking about how many people in Hollywood are really scared of or against what's happening here. What do you see when you talk to the Dr. Dres, the Nases, the Kanyes? Are they excited? Are they using it?
Ben Horowitz
So, among everybody I speak to, there are definitely people in music who are scared, but there are a lot of people who are very, very interested in it. The hip-hop guys in particular are interested, because it's almost like a replay of what they did, right? They took other music and built new music out of it. And I think AI is a fantastic creative tool for them; it way opens up the palette. And then a lot of what hip-hop is, is telling a very specific story of a specific time and place, where having intimate knowledge and being trained just on that thing is actually an advantage, as opposed to being a generally smart music model.
Erik Torenberg
At the same time, people also use the same logic of, hey, whatever is more intelligent will rule whatever is less intelligent. And Marc, you recently...
Ben Horowitz
Not said by anybody who owns a cat.
Erik Torenberg
Marc, you recently tweeted, a supreme shape rotator can only rotate shapes, but a supreme wordcel can rotate shape rotators. And also, someone's clapping here. And also, high-IQ experts work for mid-IQ generalists. What means?
Marc Andreessen
What means. Yeah, so the PhDs all work for the MBAs, right? Okay, well, I'll just take it up a level. When you look at the world today, do you think we're being ruled by the smart ones? Is that your big conclusion from current events, current affairs? Right. Okay. We put the geniuses in charge.
Erik Torenberg
You mean Kamala and Trump aren't the best?
Marc Andreessen
Well, let's not even be specific to the U.S.; let's just look all over the world. I think two things are true. One is, we probably all kind of underrate the importance of intelligence. And there's a whole backstory here: intelligence actually turns out to be this incredibly inflammatory topic, for lots of reasons, over the last hundred years, which we could talk about in great detail. Even the very idea that some people are smarter than other people really freaks people out. People don't like to talk about it; we really struggle with that as a society. And it is true that, in humans, intelligence is correlated to almost every kind of positive life outcome. What the social sciences will tell you is that fluid intelligence, the g factor, or IQ, is sort of 0.4-correlated to basically everything: 0.4 correlation to educational outcomes and professional outcomes and income, and, by the way, also to life satisfaction, and to nonviolence, being able to solve problems without physical violence, and so forth. So on the one hand, we probably all underrate intelligence. On the other hand, the people who are in the fields that involve intelligence probably overrate it. You might even coin a term like "intelligence supremacist": intelligence is very important, and therefore maybe it's the most important thing or the only thing. But then you look at reality and you're like, okay, that's clearly not the case.
Ben Horowitz
Yeah, it's still only 0.4, right? Yeah.
Marc Andreessen
Well, so to start with, it's only 0.4. Now, in the social sciences, 0.4 is a giant correlation factor; most things you can correlate, whether it's genes or observed behavior or whatever, have much smaller correlations than that. So 0.4 is huge, but it's still only 0.4. Even if you're a full-on genetic determinist, and you're just like, genetic IQ drives all these outcomes, it still leaves 0.6 of the correlation unexplained. But that's just on the individual level. Then you look at the collective level, and a famous observation is, you take any group of people, you put them in a mob, and the mob is dumber than the average. You put a bunch of smart people in a mob and they definitely turn dumber; you see that all the time. So you put people in groups and they behave very differently. And then you get these questions around who's in charge, whether it's who's in charge at a company or who's in charge of a country, and whatever the filtration process is, it's certainly not only on IQ, and it may not even be primarily on IQ. So this assumption that you hear in some of the AI circles, that inevitably the smart thing is going to govern the dumb thing, I just think that's very easily and obviously falsified. Intelligence is insufficient. We're all in this room lucky enough to know a lot of smart people, and you just observe: some smart people really figure out how to get their stuff together and become very successful, and a lot of smart people never do.
And so there must be, and there obviously are, many other factors that have to do with success, and with who's in charge, than just raw intelligence.
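A side note on what that 0.4 figure means quantitatively: the conventional measure of "explained" is the square of the correlation, so r = 0.4 accounts for only 16% of the variance in an outcome, leaving 84% to everything else. A minimal sketch of the arithmetic (just standard r-squared, not a claim about any particular study):

```python
def variance_explained(r: float) -> float:
    """Fraction of variance in an outcome accounted for by a correlation r."""
    return r * r

r = 0.4
print(f"r = {r}: explains {variance_explained(r):.0%} of the variance, "
      f"leaving {1 - variance_explained(r):.0%} to everything else")
```

This is why "huge for social science" and "still only 0.4" can both be true: even a standout correlation leaves most of the variation unaccounted for.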
Erik Torenberg
It begs the follow-up question: what are some examples of what that might be, skills outside of intelligence? And more specifically, why couldn't AI systems learn them?
Marc Andreessen
Yeah, so Ben, other than intelligence, what in your experience determines, for example, success in leadership or in entrepreneurship or in solving complex problems or organizing people?
Ben Horowitz
Yeah, there are many things. A lot of it is being able to have a confrontation in the correct way. There's some intelligence in that, but a lot of it is really understanding who you're talking to, being able to interpret everything about how they're thinking about it, and generally seeing decisions through the eyes of the people working in the company, not through your eyes. That's a skill you develop by talking to people all the time, understanding what they're saying, and so forth. It's certainly not an IQ thing. Now, I could imagine an AI training on any individual and figuring it all out and knowing what to say and so forth. But then you also need that integrated with whatever the business ought to be doing, so you're not trying to do what's popular; you're trying to get people to do what's correct, even if they don't like it. And that's a lot of management. It's not a problem anybody's working on in AI currently, but maybe they will.
Marc Andreessen
Some combination of courage, some combination of motivation, some combination of emotional understanding. Theory of mind.
Ben Horowitz
Yeah. What do people want, married to what needs to be done? And then, how talented are they? Which ones can you afford to lose, where if they jump out the window, it's fine, and which ones are not fine? This kind of thing. There are a lot of weird subtleties to it, and it's very situational. I think the hardest thing about it, and why management books are so bad, is that it's situational. Your company, your product, your people, your org chart are very, very different from "here are the five steps to building a strategy." It's like, well, that's the most useless fucking thing I ever read, because it has nothing to do with you.
Marc Andreessen
So one of the interesting things on this: the concept of theory of mind is really important. Theory of mind is, can you, in your head, model what's happening in the other person's head? You would think that, obviously, people who are smarter should be better at that. It turns out that may not be true; there's reason to believe it's not, which is as follows. The U.S. military was the early adopter, and has continued to be the leading adopter in U.S. society, of actual IQ testing. They basically launder it through something called the ASVAB, the Armed Services Vocational Aptitude Battery, but it's essentially an IQ test. So they still use basically explicit IQ tests, and they slot people into different specialties and roles, in part according to IQ, including in leadership roles. They know what everybody's IQ is, and they organize around that. And one of the things they found over the years is that if the leader is more than one standard deviation of IQ away from the followers, it's a real problem. And that's true in both directions, right? For somebody who is less smart to model the mental behavior of somebody who's more smart is, of course, inherently very challenging and maybe impossible. But it turns out the reverse is also true: if the leader is two standard deviations above the norm of the organization he's running, he also loses theory of mind. It's actually very hard for very smart people to model the internal thought processes of even moderately smart people. So there's a real need to have a level of connection there.
And therefore, by inference, if you had a person or a machine with a 1,000 IQ or something, its understanding of reality would be so alien to the people or the things it was managing that it wouldn't even be able to connect in any sort of realistic way. So again, this is a very good argument that the world is going to be far from organized by IQ for centuries to come.
Ben Horowitz
Yeah. And Zuckerberg had a great line, which is, intelligence is not life; life has a lot of dimensionality to it that is independent of intelligence. I think if you spend all your time working on intelligence, you lose track of that.
Erik Torenberg
We sometimes say about some specific people that they're too smart to properly model.
Marc Andreessen
Or.
Erik Torenberg
They sort of assume too much rationality in other people, or they just overthink things or over-rationalize them. Yeah, just to your point that it's not everything.
Ben Horowitz
Yeah. People seldom do what's in their best interest, I should say.
Marc Andreessen
You know, I also suspect, and this gets more into the biology side of things, there's more and more scientific evidence that human cognition, or whatever you want to call it, self-awareness, information processing, decision making, experience, is not purely a brain phenomenon. The famous mind-body dualism is just not correct. And this is another argument against IQ supremacism, or intelligence supremacism: human beings don't experience existence just through rational thought, and specifically not just through the rational thought of the brain. Rather, it's a whole-body experience. There are aspects of our nervous system, everything from our gut biome to smells, the olfactory senses, hormones, all kinds of biochemical aspects to life. If you track the research, I suspect we're going to find that human cognition is a full-body experience, much more than people thought. And this is one of the big fundamental challenges in the AI field right now: the form of AI that we have working is the fully mind-body-dual version of it; it's just a disembodied brain. The robotics revolution for sure is coming. When that happens, when we put AI in physical objects that move around the world, you're going to get closer to that kind of integrated intellectual-physical experience. You're going to have sensors, and the robots are going to be able to gather a lot more real-world data, and so maybe you can start to actually think about synthesizing a more advanced model of cognition. Maybe we'll discover more both about how the human version of that works and about how the machine version works. But to me, at least reading the research, all those ideas feel very nascent, and we have a lot of work to do to figure that out.
Erik Torenberg
Do you have a sense for how good they are at theory of mind today, or where the limitations are? You like to talk to them a lot. Are there any things that are particularly surprising to you as you do?
Marc Andreessen
Yeah, I would say generally they're really good. One of the more fascinating ways to work with language models is to have them create personas. I like Socratic dialogues; I like when things are argued out. So tell any advanced LLM today to create a Socratic dialogue, and it'll either make up the personas or you can tell it what they are, and it does a good job. But it has this very, very annoying property, which is that it wants everybody to be happy, and so it wants all of its personas to agree. By default, it will have a briefly interesting discussion, and then, basically like you're watching a PBS special or something, it'll figure out how to bring everybody into agreement, and everybody's happy at the end of the discussion. And of course, I fucking hate that. It drives me nuts. I don't want that. So instead I tell it: make the conversation more tense, fraught with anger; people are going to get increasingly upset throughout the conversation. And then it starts to get really interesting. And then I tell it, introduce a lot more cursing, really have them go at it, all the gloves come off, they're going for full reputational destruction of each other.
Erik Torenberg
You do a lot of these skits.
Marc Andreessen
Yeah. And then I get carried away, and I'm like, it turns out they're all secret ninjas, and then they all start fighting, and you've got Einstein hitting Niels Bohr with nunchucks. And by the way, it's happy to do that too. So you do have to control yourself. But it is very good at theory of mind, and I'll give you another example. There's a startup in the UK in the world of politics, and what they found is that language models now are good enough, specifically for politics, which is a subcategory where this idea matters. In politics, you do focus groups of voters all the time, and by the way, many businesses also do that: you get a bunch of people together from different backgrounds in a room, guide them through a discussion, and try to get their points of view on things. Focus groups are often surprising; politicians who do focus groups are often surprised that the things they thought voters cared about are actually not the things voters care about. So you can learn a lot by doing this. But focus groups are very expensive to run, and there's a long lag time, because they have to be physically organized and you have to recruit and vet people and so forth. It turns out that the state-of-the-art models are now good enough at this that they can accurately reproduce a focus group of real people inside the model. In other words, you can have a focus group actually happening in the model, where you create personas in the model and it accurately represents how a college student from Kentucky contrasts with a housewife from Tennessee, contrasts with whatever; you just specify this.
And so they're good enough to clear that bar, and we'll see how far they get.
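The persona approach described above comes down to prompting a model with distinct character specifications and explicitly instructing it not to collapse into agreement. A minimal sketch of how such a prompt might be assembled; the helper function, persona names, and wording are illustrative assumptions, not from any specific product:

```python
def build_focus_group_prompt(topic: str, personas: list[dict]) -> str:
    """Assemble a single prompt asking an LLM to simulate a focus group.

    Each persona dict needs a 'name' and a 'background'. The final
    instruction counters the tendency of chat models to steer every
    discussion toward consensus.
    """
    lines = [f"Simulate a focus group discussing: {topic}", "Participants:"]
    for p in personas:
        lines.append(f"- {p['name']}: {p['background']}")
    lines.append(
        "Stay in character for each participant. Preserve genuine "
        "disagreement; do not steer the group toward consensus."
    )
    return "\n".join(lines)

personas = [
    {"name": "Jordan", "background": "college student from Kentucky"},
    {"name": "Dana", "background": "housewife from Tennessee"},
]
prompt = build_focus_group_prompt("a proposed state transit tax", personas)
print(prompt)
```

The resulting string would then be sent to whatever chat model is in use; the interesting design choice is the explicit anti-consensus instruction, without which the simulated participants tend to converge.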
Erik Torenberg
I want to segue to the bubble conversation. Amin and Jeetu, Jensen and Matt spoke about the enormous scale of physical infrastructure being built out; AI capex is 1% of GDP. How should we understand and think about this bubble question?
Ben Horowitz
Well, I think the fact that it's a question means we're not in a bubble. That's the first thing to understand. A bubble is a psychological phenomenon as much as anything, and in order to get to a bubble, everybody has to believe it's not a bubble. That's sort of the core mechanic of it, and we call that capitulation: everybody just gives up. Okay, I'm not going to short these stocks anymore, I'm tired of losing all my money, I'm going to go long. And we saw that actually, and there's a little bit of a question of, really, what was the tech bubble? In the dot-com era, right as the prices went through the roof, Warren Buffett started investing in tech, and he had sworn he would never invest in tech because he didn't understand it. So even he capitulated. Nobody was saying it was a bubble when it became the quote-unquote bubble. Now, if you look at that phenomenon, the Internet clearly was not a bubble; it was a real thing. In the short term, there was a kind of price dislocation, because there were just not enough people on the network to make those products go at the time, and the prices outran the market. In AI, it's much harder to see that, because there's so much demand in the short term.
Marc Andreessen
Right.
Ben Horowitz
Like, we don't have a demand problem right now, and the idea that we're going to have a demand problem five years from now seems quite absurd to me. Could there be weird bottlenecks that appear, like at some point we just don't have enough cooling or something like that? Maybe. But right now, if you look at demand and supply, what's going on, and multiples against growth, it doesn't look like a bubble at all to me. But I don't know.
Ben Horowitz
Do you think it's a bubble, Marc?
Marc Andreessen
Yeah, look, I would just say this: nobody knows, in the sense that the experts, if you're talking to anybody at a hedge fund or a bank or whatever, they definitely don't know. Generally the CEOs don't know.
Ben Horowitz
By the way, a lot of VCs don't know. They just get upset; VCs get emotionally upset when you guys have higher valuations. It makes them angry, and I get it all the time. And I'm like, what are you mad about? The shit is working, man. Be happy. Come on. So there's a lot of emotion around people wanting it to be a bubble.
Marc Andreessen
Yeah. Nothing's worse than passing on a deal because "that valuation is outrageous" and then having the company become a great success. You can be furious about that for 30 years in our business; it's amazing. And you come up with all kinds of reasons to cope, to explain why it wasn't your mistake: it's the world that's wrong, not me. So there's a lot of that. I would always say, bring the conversation back to ground-truth fundamentals. And the two big ground-truth fundamentals are, number one, does the technology actually work, can it deliver on its promise? And number two, are customers paying for it? As long as those two things stay true, I think things are generally going to be on track.
Erik Torenberg
When Gavin was up here with DG, he said ChatGPT was a Pearl Harbor moment for Google, the moment when the giant wakes up. When we look at history and platform shifts, what determines whether the incumbent actually wins the next wave versus new entrants? How should we think about that in AI?
Ben Horowitz
Well, reacting to it is important, and I think Google got their head out of their ass; that was the sound of it. So they're not going to get completely run over. But nonetheless, I don't think OpenAI is going away, so they definitely let that happen, some of it down to speed. And then, look, it's execution over a long period of time, and some of these very large companies have, to varying degrees, lost their ability to execute. If you're talking about a brand-new platform, you're talking about building for a long time. Microsoft got caught with their pants down on Google; Microsoft is still very strong, but they missed that whole opportunity. They also missed mobile: Apple was nothing, and Microsoft fully believed they were going to own mobile computing, and they completely missed that one. But they were still so big from their Windows monopoly that they could build into other things. So, generally, the new companies have won the new markets. That doesn't mean the biggest companies go away; the biggest monopolies from the prior generation just last a long time, is the way I would look at it.
Marc Andreessen
I also think, it's all happened so fast, that we don't yet know the shape and form of the ultimate products.
Ben Horowitz
Right.
Marc Andreessen
And so, because it's tempting, and this is what always happens, I'm not saying that's what these guys did on stage, but you sometimes hear the reductive version of this, which is basically: the competition is between a chatbot and a search engine. The problem Google has is the classic problem of disruption: are you going to disrupt the ten-blue-links model, swap in AI answers, and potentially disrupt the advertising model? And the problem OpenAI has is they have the full chat product, but they don't have the advertising yet, and they don't have Google-scale distribution. So you say, okay, that'd be straight out of The Innovator's Dilemma, a business-textbook, very clear one-versus-one dynamic. But the mistake you could make in thinking that way is that it assumes the forms of the product in 5, 10, 15, 20 years, the main things people use, are going to be either a search engine or a chatbot.
Ben Horowitz
Right.
Marc Andreessen
And there's an obvious historical analogy: the personal computer, from its invention in 1975 through to basically 1992, was a text prompt system. And at the time, by the way, an interactive text prompt was a big advance over the previous generation of punch card and timesharing systems. And then in 1992, so about 17 years in, the whole industry took a left turn into GUIs and never looked back. And then, by the way, five years after that, the industry took a left turn into web browsers and never looked back. And so the very shape and form and nature of the user experience, and how it fits into our lives, is, I think, still unformed. I'm sure there will be chatbots 20 years from now, but I'm pretty confident that both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different, that we don't even know yet.
Ben Horowitz
Yep.
Marc Andreessen
And by the way, that's one of the things that keeps the tech industry fun, which is, especially on the software side, it's not obvious what the shape and form of the products are. And I think there's just tremendous headroom for invention.
Eric Tornberg
As you're coaching entrepreneurs, including the entrepreneurs in this room, what else feels different about this era? Whether it's the talent wars that are going on or other aspects that feel unique, what advice do you want to leave our entrepreneurs with that's specific to this era?
Ben Horowitz
Well, I actually think you said the right thing, which is that this is a unique era. And so trying to learn the organizational design lessons of the past, or trying to learn too much from the last generation, can be deceptive, because things really are different. The way your companies are getting built is quite different in many aspects. And just our observation on PhD AI researchers is that they're very different from a traditional engineer, a full stack engineer or something like that. I think you do have to think through a lot of things from first principles, because it is different, and observing from the outside, it's really different.
Marc Andreessen
Yeah. And I would just offer, I do think things are going to change. I already talked about how I think the shape and form of products is going to change, so I think there's still a lot of creativity there. I also think, in a world of supply and demand, the thing that creates gluts is shortages. When something becomes too scarce, there's a massive economic incentive to figure out how to unlock new supply. And the current generation of AI companies are really struggling with a shortage of truly talented AI researchers and engineers, and then they're really challenged with a shortage of infrastructure capacity: chips and data centers and power. I don't want to call the timing on this, but there will come a time when both of those things become gluts. I don't know that we can plan for that, although I would just say the following. Number one, on the researcher and engineer side: it is striking the degree to which there are excellent, outstanding models coming out of China now from multiple companies, specifically DeepSeek and Qwen and Kimi. It is striking how the teams making those are, for the most part, not the name brand people with their names on all the papers. And so China is successfully decoding how to take young people and train them up in the field.
Ben Horowitz
Well, and xAI to a large extent too.
Marc Andreessen
Yeah. And look, it makes sense that for a while this is going to be a super esoteric skill set and people are going to pay through the nose for it. But there's no question the information is being transferred into the environment. People are learning how to do this; college kids are figuring it out. I don't know that there's ever going to be a talent glut per se, but for sure there are going to be a lot more people in the future who know how to build these things. And then, by the way, there's also AI building AI, right? The tools themselves are going to be better at contributing to that. And I think this is good, because the current shortage of engineers and researchers is too constraining. And then on the chip side, I'm not a chip guy and I don't want to call it specifically, but every shortage in the chip industry has always resulted in a glut, because the profit pool of a shortage, the margins, get too big; the incentive for other people to come in and figure out how to commoditize the function gets too big. Nvidia has the best position probably anybody's ever had in chips, but notwithstanding that, I find it hard to believe that there's going to be this level of pressure on infrastructure in five years.
Ben Horowitz
Yeah. Even if the bottleneck within the infrastructure moves; if it becomes power, if it becomes cooling or anything else, then you'll have a chip glut for sure.
Marc Andreessen
So then I would just say this: it's likely the challenges that we all have five years from now are going to be different challenges.
Ben Horowitz
Yeah, definitely. This industry, of all industries, don't look at it as static. The positions could change very, very fast.
Eric Tornberg
Let's actually close on more of a macro note. Marc, you mentioned China. Last month we were in D.C., and one of the big questions senators had is: how should we make sense of the state of the AI race vis-a-vis China? Do you want to share the high-level summary of what you shared with them?
Marc Andreessen
Yeah. So my sense of things, if you just observe what's currently coming out of China, specifically DeepSeek, Qwen, Kimi, and these models: the conceptual innovations, the big conceptual breakthroughs, have been coming out of the West generally, and more and more specifically the US. China is extremely good at picking up ideas and implementing them and scaling them and commoditizing them. They do that throughout the manufacturing world, and they're doing it now, I think very successfully, in AI. So I would say they're running the catch-up game really well. And then there's always this question of how much of that is being done, let's say, authentically through hard work and smart people, and how much is being done with maybe a little bit of help, maybe a little USB stick in the middle of the night. So there's always a little bit of a question, but either way they're doing a great job. Obviously they aspire to more than that, and there are many very smart and creative people in China. So it will be interesting now to see the degree to which the conceptual breakthroughs start to come from there, and whether they pull ahead.
And so what we tell people in Washington is: look, this is now a full-on race. It's a foot race, it's a game of inches. We're not going to have a five-year lead; we're going to have maybe a six-month lead. We have to run fast, we have to win. And we can't put constraints on our companies that the Chinese government isn't putting on their own companies, or we'll just lose. And do you really want to wake up in the morning and live in a world controlled and run by Chinese AI? Most of us would say no, we don't want to live in that world. So there's that. And I would say I feel moderately good about that, just because I think we're really good at software. The minute this goes into embodied AI, in the form of robotics, I think things get a lot scarier. And this is the thing I'm now spending time in D.C. trying to really educate people on: because the US and the West have chosen to deindustrialize to the extent that we have over the last 40 years, China specifically now has this giant industrial ecosystem for building mechanical, electrical, semiconductor, and now software devices of all kinds, including phones and drones and cars and robots. And so there's going to be a phase two to the AI revolution. It's going to be robotics, and it's going to happen pretty quickly here, I think. And when it does, even if the US stays ahead in software, the robot's got to get built. And that's not an easy thing, and it's not just one company that does that; it's got to be an entire ecosystem. The car industry was not three car companies; it was thousands and thousands of component suppliers building all the parts. And it's been the same thing for airplanes, and the same thing for computers, and everything else.
It's going to be the same thing for robotics. And by default, sitting here today, that's all going to happen in China. And so even if they never quite catch us in software, they might just lap us in hardware and that'll be that.
The good news is, I think there's a growing awareness. There's a growing awareness, I would say, across the political spectrum in the US that deindustrialization went too far. And there's a growing desire to kind of figure out how to reverse that. And I say I'm guardedly optimistic that we'll be making progress on that, but I think there's a lot of work.
Eric Tornberg
To be done. On that call to arms, let's wrap. Thank you, Marc and Ben.
Ben Horowitz
Thank you.
Marc Andreessen
Thank you everybody.
Podcast Host / Narrator
Thanks for listening to this episode of the a16z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Date: October 31, 2025
Guests: Marc Andreessen, Ben Horowitz (Andreessen Horowitz); moderated by General Partner Erik Torenberg
Main Theme:
A candid deep-dive into the current state and future of artificial intelligence, examining creativity, intelligence, market shifts, talent challenges, tech bubbles, and the global AI race—especially the competition between the U.S. and China.
This episode revolves around the transformative impact of AI beyond today’s chatbots, looking at its creative and reasoning capacities, potential economic and organizational disruption, and the evolving global landscape. The conversation—recorded at a16z’s runtime conference—features both technical insights and reflective anecdotes, especially on how AI’s trajectory mirrors past tech revolutions.
Timestamps: 00:19–06:42
Human vs. Machine Creativity: Both Marc and Ben argue that very few humans (the “Beethovens and Van Goghs”) actually achieve groundbreaking creativity or conceptual leaps—most innovation is incremental or remix-based.
Originality in Humans: Genuine “transfer learning” (bridging abstract ideas across domains) is rare among people. Marc estimates he knows only “three out of 10,000” contacts truly excel at it. [03:46]
Remixing and Influence: All creative work, even at the highest level, is built on remixing past influences, both in technology and the arts.
Timestamps: 05:05–07:37
Hip Hop as an Analogy: Ben Horowitz draws parallels between the early days of hip hop (which built on re-sampling prior music) and AI’s ability to remix and generate novelty.
Human Experience vs. AI: Ben believes there’s still something special about “real-time human experience” in art, which current AI lacks due to the way it’s trained.
Industry Sentiment: In creative communities (music, Hollywood), there’s both excitement and fear about AI, but hip hop pioneers especially see it as a creative accelerator, not a threat.
Timestamps: 07:37–16:09
Misplaced “Intelligence Supremacism”: Intelligence correlates with success, but the correlation with key life outcomes is only about 0.4 (“it’s still only 0.4,” per Ben [09:40]).
Mob Mentality and Leadership: Even groups of smart people can be collectively “dumber.” Who ends up in charge, whether in companies or countries, is not determined by IQ alone.
Skills Beyond Intelligence:
Theory of Mind: Military research shows that leaders much smarter or much less intelligent than their teams struggle to empathize and coordinate. There needs to be a cognitive “fit.” [13:54]
Timestamps: 16:25–18:17
Mind-Body Dualism Flaw: Human cognition is a full-body experience, not just “brain-in-a-vat.” Factors from hormones to the gut biome impact thinking and decision-making.
AI Robotics: The next phase is integrating AI into embodied systems, which will open new frontiers and challenges—especially as sensors and physical interaction come into play.
Timestamps: 18:17–21:15
LLMs & Theory of Mind:
Practical Use: Political & business focus groups can now be simulated by LLMs, decreasing cost and time to insights.
Timestamps: 21:15–24:57
Bubble Psychology: By the time everyone wonders if there’s a bubble, we’re likely not in one; bubbles require universal euphoria and denial.
Current State: AI’s demand outpaces supply (infrastructure, chips, talent), and customers are paying. This suggests a fundamentally healthy market rather than a speculative bubble.
VC “Bubble Paranoia”: Many VCs resent high valuations and rationalize missing big winners by calling the market a “bubble.”
Timestamps: 24:57–28:45
Historical Analogies:
Search Engines vs. Chatbots: Today’s competition (e.g., Google vs. OpenAI) may ultimately be irrelevant: future “AI products” could look radically different than chatbots/search.
Timestamps: 28:58–32:58
Unique Challenges:
Talent and Infrastructure Shortages:
AI Building AI: The tools themselves will improve, enabling wider participation beyond elite researchers.
Timestamps: 33:16–37:01
China’s Catch-Up:
Phase Two—Robotics: The real danger is China’s manufacturing dominance as the AI revolution moves from software to embodied robotics.
Policy Implications: The U.S. must avoid restricting its own companies more than China does, or risk losing the lead.
Tone Note:
The conversation is frank, often humorous (with inside references to music, tech, and pop culture), and optimistic but tempered with direct warnings—especially around global competition and the risks of narrowly focusing on intelligence as the ultimate trait.
End of Summary