
Adam D’Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it. In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, if we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering. Plus: Why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.
Adam D'Angelo
Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years.
Amjad Masad
Humanity went through the agricultural revolution and the industrial revolution. We're going through another revolution. We will not be able to call it something; future people will call it something, but we are going through something.
Adam D'Angelo
The number of solo entrepreneurs that this technology is going to enable is vastly increased. It's vastly increased what a single person can do.
Amjad Masad
For the first time, opportunity is massively available for everyone. Just the ability for more people to be able to become entrepreneurs.
Adam D'Angelo
It's massive.
Podcast Host
The age of solo entrepreneurship powered by AI is here, but the path to full automation is messier than the hype suggests. Today you'll hear from Adam D'Angelo, founder of Quora and CEO of Poe, and Amjad Masad, founder and CEO of Replit, on why we're in a brute-force era of AI rather than true intelligence, and what that means for the future of work. We discuss the expert data paradox, how automating entry-level jobs creates a crisis in training the next generation of experts, why managing tens of agents in parallel will define the next wave of productivity, and how the sovereign individual framework might be the best lens for understanding AI's economic and political impact. Plus, Adam makes the case for why vibe coding is radically underrated, and Amjad explains what Claude 4.5's strange new self-awareness might signal about the path ahead. Let's get into it.
John O'Farrell
Adam, Amjad, welcome to the podcast.
Amjad Masad
Thank you.
Adam D'Angelo
Yeah, thanks for having us.
John O'Farrell
So a lot of people have been throwing cold water on LLMs lately. We've seen some general bearishness: people talking about the limitations of LLMs, why they won't get us to AGI, how maybe what we thought was just a couple of years away is now 10 years away. Adam, you seem a bit more optimistic. Why don't you share your broad overview?
Adam D'Angelo
Yeah, I mean, honestly, I don't know what people are talking about. If you look back a year, the world was very different. Judging by how much progress we've made in the last year, with things like reasoning models, the improvement in code generation ability, the improvements in video generation, it seems like things are going faster than ever. So I don't really understand where the bearishness is coming from.
John O'Farrell
Well, I think there's some sense that we hoped they would be able to automate whole tasks or whole jobs, and maybe there's some sense that it's middle-to-middle but not end-to-end, and maybe labor won't be automated in the way we thought it would, on the timeline we thought.
Adam D'Angelo
Yeah, I mean, I don't know what timelines people were previously expecting, but I think if you go five years out from now, we're in a very different world. A lot of what's holding back the models these days is not actually intelligence; it's getting the right context into the model so that it can use its intelligence. And then there are some things like computer use that are still not quite there, but I think we'll almost definitely get there in the next year or two. And when you have that, I think we're going to be able to automate a large portion of what people do. I don't know if I would call that AGI, but I think it's going to satisfy a lot of the critiques people are making right now. I think they won't be valid in a year or two.
John O'Farrell
What is your definition of AGI?
Adam D'Angelo
I don't know. Everyone thinks it's something different. One definition I kind of like: if an AI can do any job that a human remote worker can do, that's AGI. You can then ask, does it have to be better than the best person in the world at every single job? Some people call that ASI. Does it have to be better than teams of people? You can argue about those different definitions, but I think once we get to being better than a typical remote worker at the job they're doing, we're living in a very different world. That's a very useful anchor point for these definitions.
John O'Farrell
So in summary, you're not sensing the same limitations of LLMs that other people are. You think there's a lot more room for LLMs to go from here. We don't need a brand-new architecture or some other breakthrough?
Adam D'Angelo
I don't think so. I mean, there are certain things, like memory and continuous learning, that are not very easy with the current architectures. But even those you can sort of fake, and maybe we're going to be able to get them to work well enough. We just don't seem to be hitting any kind of limits. The progress in reasoning models is incredible, and I think the progress in pre-training is also going pretty quickly. Maybe not as quickly as people had expected, but certainly fast enough that you can expect a lot of progress over the next few years.
John O'Farrell
Amjad, what's your reaction hearing all this?
Amjad Masad
Yeah, I think I've been pretty consistent and consistently right, perhaps, dare I say.
Adam D'Angelo
Consistent with yourself and consistent with what?
Amjad Masad
With myself and, I think, with how things are unfolding. I started being a bit more of a public doubter around the time the AI safety discussion was reaching its height, back in maybe '22, '23. And I thought it was important for us to be realistic about the progress, because otherwise we're going to scare politicians, we're going to scare everyone, D.C. will descend on Silicon Valley, and they'll shut everything down. So my criticism was of the idea of AGI by 2027, you know, that paper, I think it's called AI 2027, Scott Alexander and some others wrote it, and then Situational Awareness and all this hype: papers that are not really science, they're just vibes. "Here's what I think will happen: the whole economy will get automated, jobs are going to disappear." All of that stuff, again, I think is unrealistic. It is not following the kind of progress that we're seeing, and it is going to lead to bad policy. So my view is that LLMs are amazing, amazing machines, but I don't think they are exactly human-intelligence equivalent. You can still trick LLMs. They might have solved the strawberry one, but you can still trick them with single-sentence questions like "how many Rs are in this sentence?" I tweeted about it the other day: three out of the four models didn't get it, and GPT-5 with high thinking had to think for 15 seconds in order to get a question like that. So LLMs are, I think, a different kind of intelligence than what humans have. They also have clear limitations, and we're papering over the limitations and working around them in all sorts of ways, whether in the LLM itself and the training data, or in the infrastructure around them and everything we're doing to make them work. But that makes me less optimistic that we've cracked intelligence. Once we truly crack intelligence, it'll feel a lot more scalable, and the idea behind the bitter lesson will actually be true: you can just pour more power, more resources, more compute into these systems and they'll scale more naturally. Right now there's a lot of manual work going into making these models better. In the true pre-training scaling era, GPT-2, 3, 3.5, maybe up to 4, it felt like you could just put more Internet data in and it just got better. Whereas now it feels like there's a lot of labeling work happening, a lot of contracting work happening. A lot of these contrived RL environments are getting created in order to make LLMs good at coding and at becoming coding agents, and they're going to keep doing that; I think there's news from OpenAI that they're going to do it for investment banking. So I've tried to coin this term, functional AGI, which is the idea that you can automate a lot of aspects of a lot of jobs by going in, collecting as much data as you can, and creating these RL environments. It's going to take enormous effort and money and data to do. And I agree with Adam that things are going to get better, 100%, over the next three months, six months. Claude 4.5 was a huge jump; I don't think it's appreciated how much of a jump it was over 4. There are really, really amazing things about Claude 4.5. So there is progress, and we're going to continue to see progress. I just don't think LLMs, as they currently stand, are on the way to AGI.
And my definition of AGI is, I think, the old-school RL definition: a machine that can go into any environment and learn efficiently, the same way a human can. You can put a human in front of a pool table, and within two hours they can shoot pool. Right now there's no way for us to have machines learn skills like that on the fly. Everything requires an enormous amount of data and compute and time and effort. And more importantly, it requires human expertise, which is the non-bitter-lesson idea: human expertise is not scalable. And we are reliant on it today. We are in a human-expertise regime.
Adam D'Angelo
Yeah, I mean, I think humans are certainly better than the current models at learning a new skill from a limited amount of data in a new environment. On the other hand, human intelligence is the product of evolution, which used a massive amount of effective computation. So this is a different kind of intelligence: because it didn't have this massive equivalent of evolution, it just has pre-training for that, which is not as good, and so you need more data to learn every new skill. But in terms of the functional consequence, when will the job landscape change, when will the economic growth hit, I think that's going to be more a function of when we can produce something that is as good as human intelligence, even if it takes a lot more compute, a lot more energy, a lot more training data. We could just put in all that energy and still get to software that's as good as the average person at doing a typical job.
Amjad Masad
So I don't disagree with that. It does feel like we're in a brute-force type of regime, but maybe that's fine.
Adam D'Angelo
Yeah, yeah.
John O'Farrell
So where's the disagreement then? There's agreement on that, so where is the disagreement?
Amjad Masad
I don't think we'll get to the singularity, or get to the next level of human civilization, until we crack the true nature of intelligence, until we understand it and have algorithms that are actually not brute force.
Adam D'Angelo
And you think those will take a long time to come?
Amjad Masad
I'm sort of agnostic on that. It does feel like the LLMs in a way are distracting from that, because all the talent is going there, and therefore there's less talent trying to do basic research on intelligence.
Adam D'Angelo
Yeah. At the same time, a huge portion of talent is going into AI research that previously wouldn't have gone into AI at all. So you have this massive industry, massive funding, funding compute but also funding human employees. And I guess nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard on it for the next five years.
Amjad Masad
But basic research is different, right? Trying to get into the fundamentals, as opposed to industry research, which asks how we make these things more useful in order to generate profit. I think that's different. And Thomas Kuhn, the philosopher of science, talks a lot about how these research programs end up becoming like a bubble, sucking in all the attention and ideas. Think about physics and string theory: it pulls everything in, and there's sort of a black hole of progress.
Adam D'Angelo
Yeah, and I think one of his points was that you've got to wait until the current people retire.
Amjad Masad
That's right.
Adam D'Angelo
Before you have a chance at changing the paradigm.
Amjad Masad
He's very pessimistic about paradigms. Yes.
Adam D'Angelo
But I guess I feel like the current paradigm, and this is maybe our disagreement, is pretty good, and I think we're nowhere near diminishing returns on continuing to push on it. I guess I would just bet that you can keep doing different innovations within the paradigm to get there.
John O'Farrell
So let's say we continue to brute-force it and we're able to automate a bunch of labor. Do you estimate GDP growth at something like 4 or 5% a year, or are we going up to 10% plus? What does it do to the economy?
Adam D'Angelo
I think it depends a lot on exactly where we get to and what AGI means. So let's say you have LLMs that, with an amount of energy that costs $1 an hour, could do the job of any human. Let's just take that as a theoretical point. I think you're going to get much more than 4 to 5% GDP growth in that world. The issue is you may not get there. It may be that the LLMs that can do everything a human can do actually cost more than humans currently do, or they can do 80% of what humans can do and then there's this other 20%. I do think at some point you get to LLMs that can do every single thing a human can do, for cheaper. I don't see a reason why we don't eventually get there; that may take 5, 10, 15 years. But until you get there, we're going to get bottlenecked on the things the LLMs still can't do, or on building enough power plants to supply the energy, or on other bottlenecks in the supply chain.
Amjad Masad
One thing I worry about is the deleterious effect of LLMs on the economy, where LLMs effectively automate the entry-level job but not the expert's job. So take QA, quality assurance: it's so good, but there are still all these long-tail events that it doesn't handle. So you have a lot of really good QA people now managing hundreds of agents, and you effectively increase productivity a lot. But they're not hiring new people, because the agents are better than new people. That feels like a weird equilibrium to be in, and I don't think many people are thinking about it.
Adam D'Angelo
Yeah, for sure. I think it's happening with CS majors graduating from college. There are just not as many jobs as there used to be, and LLMs are a little more substitutable for what those grads previously would have done; I'm sure that's contributing to it. And it means you're going to have fewer people going up that ramp that companies used to pay a lot of money for, to employ them and train them. So I think it's a real problem. That problem also creates an economic incentive to solve it, so it may be that there are more opportunities for companies that can train people, or maybe uses of AI to teach people these things. But for sure that's an issue right now.
Amjad Masad
Another related problem is that we're dependent on expert data in order to train the LLMs, and the LLMs start to substitute for those workers. At some point there are no more experts, because they're all out of jobs and the LLMs are their equivalent. But if the LLMs are truly dependent on labeled data and expert RL environments, then how would they improve beyond that? That's a question for an economist to really sit down and think about: once you get the first tick of automation, there are some challenges there. How do you get to the next part?
Adam D'Angelo
Yeah, I mean, I think a lot of it is going to depend on how good the RL environments that can be created are. On one extreme you have something like AlphaGo, where it's just a perfect environment and you can blast past expert level. But a lot of jobs have limited data that anyone can train from. So I think it'll be interesting to see how easily research efforts can overcome that bottleneck.
John O'Farrell
If you had to make a guess on what job category is going to be introduced or explode in the future, what would it be? Some people say everyone's an influencer, or in some sort of caring field, or everyone's employed by the government in some sort of bureaucrat role, or maybe training the AI in some way. As more and more things get automated, what is your guess as to what more and more people start to do? Doing art and poetry?
Adam D'Angelo
Yeah, I mean, at some point you have everything automated, and then I think people will do art and poetry. There's a data point that the number of people playing chess is up since computers got better than humans at chess. So I don't think that's a bad world, if people are all just free to pursue their hobbies, as long as you have some way to distribute wealth so people can afford to live. But that's a while away; I'll put it in the at-least-10-years range. In the near term, the job categories that are going to explode are the jobs that can really leverage AI. People who are good at using AI to accomplish their jobs, especially to accomplish things the AI couldn't have done by itself, there's just massive demand for that.
Amjad Masad
I don't think we're going to get to a point where you automate every job. Definitely not in the current paradigm; I would doubt it happening, and I'm not certain it would ever happen. Now here's what I think: a lot of jobs are about servicing other humans. You need to be actually human in order to understand what other people want; you need to have the human experience. So unless we're going to create human humans, unless AI is actually embodied in human experience, humans will always be the generators of ideas in the economy.
John O'Farrell
Adam, respond to Amjad's point around the human part, because you created one of the best wisdom-of-the-crowds platforms in the universe, and now you've gone all in with Poe. What are your thoughts on to what extent we will rely on humans versus trust AIs to be our therapists, our caretakers, in other ways?
Adam D'Angelo
Humans have a lot of knowledge collectively. Even one individual person who's an expert, who's lived a whole life, had a whole career, and seen a lot of things, will often know a lot of things that are not written down anywhere. You could call it tacit knowledge, but it's also what they're capable of writing down if you asked them a question. I think there's still an important role for people to play in the world by sharing their knowledge, especially when they have knowledge that just wasn't otherwise in an LLM's training set. Whether they will be able to make a full-time living doing that, I don't know. But if that becomes a bottleneck, then for sure all the economic pressure goes to that. In terms of having to be human to know what humans want, I don't know about that. As an example, I think recommender systems, the systems that rank your Facebook or Instagram or Quora feed, are already superhuman at predicting what you're going to be interested in reading. If I gave you a task like "make me a feed that I'm going to read," no matter how much you knew about me, there's no way you could compete with these algorithms that have so much data about everything I've ever clicked on, everything everyone else has ever clicked on, and what all the similarities are between those data sets. It's true that as a human you can simulate being a human, and that makes it easier for you to test out ideas. I'm sure for composers and artists this is an important part of their process.
Amjad Masad
Or chefs, you know.
Adam D'Angelo
Produce something and you know, a chef will cook something and they taste it and it's important that they can taste it. But I don't know, you know, they just, they have very little data compared to what AI can be trained on. So I don't know how that's going to shake out.
Amjad Masad
That's a good point. Ultimately what recommender systems do is aggregate all the different tastes, find where you sit in a multidimensional taste vector space, and get you the best content there. So I guess there's some of that, but I think it's more narrow than we think. It's true of recommender systems, but I'm not entirely sure it's true of everything. So, I think the best prediction for where the world is headed, and this is not an endorsement, because part of it will be a slightly unstable system, is that The Sovereign Individual continues to be a really good set of predictions for the future. It's not a scientific book; it's a very polemic book. But the idea is, in the late '80s, early '90s, are they economists? I'm not sure, I think they're economists or political science majors, two people out of the UK wrote this book trying to predict what happens when computer technology matures. They're saying: humanity went through the agricultural revolution and the industrial revolution, and we're clearly going through another one, an information revolution. Now we call it the intelligence revolution, whatever; I think we won't be able to name it. Future people will call it something, but we are going through something. So they're trying to predict what happens from here. And what they arrive at is that ultimately you're going to have large swaths of people that are potentially unemployed or economically not contributing, but the entrepreneur, the entrepreneurial capitalist, is going to be so highly leveraged, because they can spin up these companies with AI agents very quickly. Because they're very generative, because they're human and have interesting ideas about what other people want, they can create these companies, products, and services very quickly, and they can organize the economy in certain ways. And the politics will change, because today's politics is based on every human being economically productive. When you have massive automation, and only a few entrepreneurs and very intelligent, generative people are actually able to be productive, then the political structures also change. So they talk about how the nation-state sort of subsides, and instead you go back to an era where states are competing over people, over wealthy people, and as a sovereign individual you can negotiate your tax rate with your favorite state. It starts to sound like biology a little bit. And I don't think it is far from where things might be headed. Again, it's not a value judgment or a desire, but I do think it's worth thinking about: when people are not the unit of economic productivity, things have to change, including culture and politics.
John O'Farrell
Yeah, I think there's a question with that book, and with some of this conversation more broadly, of when the technology rewards the defender versus the aggregator, or when it incentivizes decentralization versus centralization. Remember Peter Thiel had this quip a decade ago: crypto is libertarian, more decentralizing; AI is communist, more centralizing. And it's not obvious to me that that's entirely accurate on either side. AI does seem to empower a bunch of individuals, as you were saying. And crypto, it turns out, is like fintech or something; it's stablecoins, and it does empower nation-states, we're talking about doing the sort of thing China was going to do. So I think there's an open question as to which technology empowers the edges more versus the center. If it empowers the edges, it seems like the sovereign individual thesis holds. And maybe there's a barbell where it's both: the big incumbents just get much, much bigger, and there are these edges. But anyway, that's something.
Adam D'Angelo
Yeah. I'm very excited for the number of solo entrepreneurs this technology is going to enable. It's vastly increased what a single person can do. There are so many ideas that just never got explored, because it's a lot of work to get a team of people together, maybe raise the funding for it, and find the right people with all the different skills you need. Now that one person can bring these things into existence, I think we're going to see a lot of really amazing stuff.
Amjad Masad
Yeah, I get these tweets all the time from people who quit their jobs because they started making so much money using tools like Replit, and it's really exciting. For the first time, opportunity is massively available to everyone.
Adam D'Angelo
Yeah.
Amjad Masad
And I think that is, to me, the most exciting thing about this technology, other than all the other stuff we're talking about. Just the ability for more people to become entrepreneurs is massive.
John O'Farrell
That trend is obviously going to happen. As we look out over the next decade or two, do you think AI is more likely to be sustaining or disruptive, in the Christensen sense? To ask it another way: do you think most of the value capture is going to come from companies that were scaled before OpenAI started, or from companies that started after, let's say, 2015, 2016? Replit still counts as the latter, and so does Quora to some degree.
Adam D'Angelo
So there's a related question, which is how much of the value is going to go to the hyperscalers versus everyone else. On that one, I actually think we're in a pretty good balance: there's enough competition among the hyperscalers that, as an application-level company, you have choice and alternatives, and prices are coming down incredibly quickly. But there's also not so much competition that the hyperscalers and the labs like Anthropic and OpenAI are unable to raise money and make long-term investments. So I actually think we're in a pretty good balance, and we're going to have a lot of new companies and a lot of growth among the hyperscalers.
Amjad Masad
I think that's about right. The terminology of sustaining versus disruptive comes from The Innovator's Dilemma. The idea is that whenever there's a new technology trend, there's a power curve. The technology starts as almost a toy, something that doesn't really work or captures the lower end of the market, but as it evolves, it goes up the power curve and eventually disrupts even the incumbents. The incumbents originally don't pay attention because it looks like a toy, and then it eventually disrupts everything and eats the entire market. That was true of PCs: when PCs came along, the big mainframe manufacturers did not pay attention, and initially it was, yeah, that's for kids, we have to run these large computers and data centers. But now even data centers run on PCs, and so on. PCs were a hugely disruptive force. But there are also technologies that come along and really benefit the incumbents and don't really benefit the new players, the startups. I think Adam's right, it's both, and maybe for the first time for a huge technology trend it's both. The Internet was hugely disruptive, but this time it feels like AI is an obvious supercharge for the incumbents, the hyperscalers, the large Internet companies, while it also enables new business models that are perhaps counter-positioned against the existing ones.
Adam D'Angelo
Although...
Amjad Masad
I think what happened is everyone read that book, and everyone learned how not to be disrupted. For example, ChatGPT was fundamentally counter-positioned against Google, because Google had a business that was actually working. ChatGPT was seen as this technology that hallucinates a lot and creates a lot of bad information, and Google wanted to be trusted. Google had ChatGPT-like technology internally; they didn't release Gemini until like two years after ChatGPT, and by then ChatGPT had already won at least the brand recognition. So in a way, OpenAI came out as a disruptive technology, but Google realized it was a disruptive technology and responded to it. At the same time, it was always obvious that AI was going to benefit Google. At minimum, Search Overviews have gotten a lot better, its whole Workspace suite is getting a lot better with Gemini, their mobile phones, everything gets better. So it seems like it's both.
Adam D'Angelo
Yeah, I really agree. Everyone read the book, and that changes what the theory even means, because all the public-market investors have read that book, and they will now punish companies for not adapting and reward them for adapting, even if it means making long-term investments. All the management and leadership of these companies have read the book, and they're on top of their game. I'd also say the people running these companies are smarter, I think, than at the companies of the generation that book was built on. A lot of them are founder-controlled, so it's easier for them to take a hit and make these investments. So if you had an environment more like the '90s, I think this would actually be more disruptive than in the hypercompetitive world we're in now.
John O'Farrell
One mistake that we as a firm have reflected on over the past few years, though of course I've only been here a few months, is that we've passed on companies because they weren't going to be the market leader or the category winner. We thought, learning the lessons from Web2, that you have to invest in the category winner; that's where things consolidate and value accrues over time. So why do the next foundation model company if the first one already has a head start? But it seems like the market has gotten so much bigger that in foundation models, and also in applications, there are just multiple winners, and they're fragmenting and taking parts of the market that are all venture-scale. I'm curious if this is a durable phenomenon, but that seems like one difference from the Web2 era: more winners across more categories.
Adam D'Angelo
I think network effects are playing much less of a role now than they did in the Web2 era, and that makes it easier for competitors to get started. There's still a scale advantage, because if you have more users you can get more data, and if you have more users you can raise more capital. But that advantage doesn't make it absolutely impossible for a competitor of smaller scale. It makes it hard, but there's definitely room for more winners than there was before.
John O'Farrell
I think another difference is that people are seeing the value so strongly that they're willing to pay early on. The question with Web2 companies was how they were going to make money. You were at Facebook super early, and with Facebook, Google, et cetera, the question was, how are they going to monetize? The companies here are monetizing from the get-go, your guys' companies included.
Adam D'Angelo
Yeah. And with the earlier generation of companies, monetization kind of depended on scale. You couldn't build a good ad business until you got to millions, tens of millions of users. Now, with subscriptions, you can just charge right away, especially thanks to things like Stripe making it easier. That's also made it a lot more friendly to new entrants.
Amjad Masad
There are also questions of geopolitics. It seems clear that we're not in that globalized era anymore, and perhaps it's going to get much worse. So investing in the OpenAI of Europe might be a good idea, and similarly, China is an entirely different world. So there's a geo aspect to it.
John O'Farrell
Yeah, all of a sudden our geopolitics nerdiness is useful. Adam, we were talking about human knowledge. Did you see yourself, with Poe, as kind of disrupting yourself in a sense? Talk about the bet that you made with Poe and the evolution there.
Adam D'Angelo
You know, I think we saw Poe more as an additional opportunity than as a disruption to Quora. The way we got to it was that in early 2022 we started experimenting with using GPT-3 to generate answers for Quora. We compared them to the human answers and realized they weren't as good. But what was really unique was that you could instantly get an answer to anything you wanted to ask. And we realized it didn't need to be in public; your preference would actually be to have it in private. So we felt there was a new opportunity here to let people chat with AI in private.
John O'Farrell
Yeah. And it seems like you were also making a bet on how the different model players were going to shake out, that there was going to be...
Adam D'Angelo
Yeah, it was also a bet on the diversity of model companies, which took a while to play out. But I think now we're getting to the point where there are a lot of models and a lot of companies, especially when you go across modalities: image models, video models, audio models. The reasoning research models are sort of diverging, and agents are starting to be their own source of diversity. So we're lucky to now be getting into a world where there's enough diversity for a general interface aggregator to make sense. But yeah, it was a bet early on.
Amjad Masad
It's surprising, actually, that even not particularly technical consumers do use multiple AIs. I didn't expect that. People only used Google; they never compared Google and then Yahoo, or very few people did. But now you talk to just average people and they'll say, yeah, I use ChatGPT most of the time, but Gemini is much better at these types of questions. And it's like, oh, interesting. The sophistication of consumers has gone up.
John O'Farrell
And even people saying that the models have different personalities, and that they resonate with Claude more, or whatever. I want to return to a point from earlier, Adam, about the dark matter of human knowledge: there's a lot of knowledge people have that's not catalogued yet. And it's not just tacit knowledge; it's knowledge you could ask them about and they could describe. One question people have with LLMs is: we've already trained on the whole Internet, so how much more knowledge is there? Is it 10x? Is it 1,000x? If we do brute-force it and build this whole machine that gets all the knowledge out of humans into a data set we can then use, how do we think about the upside from there?
Adam D'Angelo
You know, I think it's very hard to quantify, but there's a massive industry developing around getting human knowledge into a form AI can use. This is things like Scale AI, Surge, Mercor, but there's a massive long tail of other companies just getting started. As intelligence gets cheaper and cheaper and more and more powerful, the bottleneck is increasingly going to be the data needed to create that intelligence. And so that's going to cause more and more of this to happen. It might be that people can make more and more money by training AI, it might be that more of these companies get started, or it might take other forms. But the economy is going to naturally value whatever the AI can't do.
John O'Farrell
What is the framework, the mental model, for what it can't do?
Adam D'Angelo
You could ask an AI researcher; they might have a better answer. But to me, there's information that's not in the training set, and that is inherently something the AI can't do. The AI will get very smart, it can do a lot of reasoning, it could at some point prove every math theorem if it starts from axioms you give it. But if it doesn't know how a particular company solved a particular problem 20 years ago, if that wasn't in the training set, then only a human who knows that is going to be able to answer that question.
John O'Farrell
And so over time, how do you see Quora interfacing with that? How are you running these in parallel? How do you think about this?
Adam D'Angelo
Yeah, so with Quora, our focus is on human knowledge and letting people share their knowledge. That knowledge is helpful for other humans, and it's also helpful for AI to learn from. We have relationships with some of the AI labs, and Quora will play the role it is meant to play in this ecosystem, which is as a source of human knowledge. At the same time, AI is making Quora a lot better. We've been able to make major improvements in moderation quality, in ranking answers, and in just improving the product experience. So it's gotten a lot better by applying AI to it.
Amjad Masad
Yeah.
John O'Farrell
Amjad, let's talk about your future as well. Obviously you had this business for a long time, focused on developers. At one point you were targeting...
Amjad Masad
There was a nonprofit, exactly.
John O'Farrell
...the ed-tech market. I believe you reported 2 or 3 million in revenue. And then recently TechCrunch, I know it's outdated, reported something like 150 million; I know it's higher. Since you've had this incredible growth as you've shifted the business model and the customer segment, how do you think about the future of Replit?
Amjad Masad
I think Karpathy recently said it's going to be the decade of agents, and I think that's absolutely right. As opposed to prior modalities of AI: when AI first came to coding it was autocomplete with Copilot, then it became chat with ChatGPT, then I think Cursor innovated on the composer modality, which is editing large chunks of files. But that's it. What Replit innovated is the agent: the idea of not only editing code but provisioning infrastructure like databases, doing migrations, connecting to the cloud, deploying, having the entire debug loop, executing the code, running tests. The entire development lifecycle loop happening inside an agent. And that's going to take a long time to mature. Our Agent came out in beta in September 2024, and it was the first of its kind to do this, both code and infrastructure. But it was fairly janky; it didn't work very well. Then Agent V1 came around December. It took another generation of models: you go from Claude 3.5 to 3.7, and 3.7 was the first model that really knew how to use a computer, a virtual machine. Unsurprisingly, it was also the first computer-use model, and these things have been moving together. With every generation of models we find new capabilities. Agent V2 improved on autonomy a lot: Agent V1 could run for about two minutes, Agent V2 ran for 20 minutes. Agent 3 we advertised as running for 200 minutes, because it felt like it should be symmetrical, but it actually runs kind of indefinitely; we've had users running it for 20-plus hours. The main idea there was putting a verifier in the loop. I remember reading a paper from Nvidia about how they used DeepSeek to write CUDA kernels, and they were able to run DeepSeek for about 20 minutes if they put a verifier in the loop, like being able to run tests. And I thought, okay, what kind of verifier can we put in the loop? Obviously you can use unit tests, but unit tests don't really capture whether the app is working or not. So we started digging into computer use and whether it would be able to test apps. Computer use is very expensive and actually still quite buggy, and like Adam said, that's going to be a big area of improvement that'll unlock a lot of applications. We ended up building our own framework, with a bunch of hacks and some AI research, to replace computer use; I think our testing models are among the best. And once we put that into the loop, you can put Replit in high autonomy. We have an autonomy scale, you choose your autonomy level, and then it writes the code, goes and tests the application, and if there's a bug, it reads the error log and writes the code again, and it can go for hours. I've seen people build amazing things by letting it run for a long time. Now, that needs to continue to get better, cheaper, and faster. It's not necessarily a point of pride to run for a lot longer; it should be as fast as possible. So we're working on that. There are a bunch of ideas coming out in Agent 4, but one of the big things is that you shouldn't just be waiting for the one feature you requested; you should be able to work on a lot of different features.
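A minimal sketch of the verifier-in-the-loop idea described above, in Python. The `agent` and `verifier` objects and their methods are hypothetical stand-ins, not Replit's actual API; the point is only the control flow of alternating generation and verification:

```python
def run_agent_with_verifier(task, agent, verifier, max_iterations=100):
    """Keep editing until an automated verifier (unit tests, an
    end-to-end app check, etc.) stops reporting failures,
    or the iteration budget runs out."""
    code = agent.generate(task)            # first attempt at the feature
    for _ in range(max_iterations):
        report = verifier.check(code)      # run tests / drive the app
        if report.passed:
            return code                    # verified: stop early
        # Feed the error log back so the next attempt targets the failure.
        code = agent.revise(code, feedback=report.error_log)
    return code                            # out of budget; return best effort
```

The verifier is what lets the loop run for hours: without some ground-truth signal, errors compound instead of getting corrected.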
So, you know, you ask for a login page, but you could also ask for a Stripe checkout, and then you ask for an admin dashboard. The AI should be able to figure out how to parallelize all these different tasks, and sometimes they're not parallelizable, but it should also be able to do merges across the code. Being able to do collaboration across AI agents is very important; that way the productivity of a single developer goes up by a lot. Right now, even when you're using Claude Code or Cursor and others, there isn't a lot of parallelism going on. But I think the next boost in productivity is going to come from sitting in front of a programming environment like Replit and being able to manage tens of agents, maybe at some point hundreds, but at least 5, 6, 7, 8, 9, 10 agents, all working on different parts of your product. I also think the UI and UX could use a lot of work. Right now you're trying to translate your ideas into a textual representation, just like a PRD, right? What product managers do is write product descriptions, and it's really hard, you see it in a lot of tech companies, to align on the exact features, because language is fuzzy. So I think there's a world in which you're interacting with AI in a more multimodal fashion: open up a whiteboard, draw and diagram with the AI, and really work with it like you work with a human. And then the next stage of that is having better memory, better memory inside the project but also across projects, and perhaps having different instantiations of Replit Agent, where one agent is really good at, say, Python data science, because it has all the information and skills and memories about my company and what it's done in the past. So I'll have a data-analysis Replit agent and a front-end Replit agent, and they'll have memory over multiple projects, over time, and over interactions, and maybe they sit in your Slack like a worker and you can talk to them. I could keep going for another 15 minutes about a roadmap that could span three, four, five years. This agent phase that we're in, there's just so much work to do, and it's going to be a lot of fun.
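A minimal sketch of the parallel-agents idea above, assuming a hypothetical async `agent.run(feature)` call: fan a feature list out to a pool of agents concurrently, then merge the results, which, as noted, is the hard part:

```python
import asyncio

async def build_in_parallel(agent_pool, features):
    """Run one agent per feature concurrently and gather the results."""
    tasks = [agent.run(feature) for agent, feature in zip(agent_pool, features)]
    results = await asyncio.gather(*tasks)
    # Merging is where collaboration across agents gets hard: concurrent
    # edits to shared files need conflict resolution, like human teammates.
    return results

# e.g. features = ["login page", "Stripe checkout", "admin dashboard"]
# results = asyncio.run(build_in_parallel(agents, features))
```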
John O'Farrell
Yeah. I was talking to one of our mutual friends, a co-founder of one of these big productivity companies who leads a lot of their R&D, and he said, man, during the week these days I'm not even talking to humans as much; I'm just using all these agents to build. So living in the future is, to some degree, already the present.
Amjad Masad
There's something interesting about that: are people talking to each other less at companies, and is that a bad thing? I'm starting to think more about these second-order effects. Will it make things awkward for, again, the new grads? I feel so bad for them. If people are not sharing as much knowledge with each other, or it's not culturally easy to go ask for help because you're expected to use AI agents, there are some cultural forces that I think need to be reckoned with.
John O'Farrell
Yeah, I think there are a lot of tough cultural pressures these days.
Amjad Masad
Yes.
John O'Farrell
Gearing towards closing here. Obviously you guys are focused on running your companies, but to stay current on the AI ecosystem, you also make angel investments. Where are you most excited? We haven't talked about robotics; are you bullish on robotics in the near term, or are there emerging categories, use cases, or spaces where you're looking to make more investments, or have made some?
Adam D'Angelo
I actually think vibe coding generally is just unbelievably high-potential. Just the idea that, you know...
John O'Farrell
It's underhyped, even still?
Adam D'Angelo
I think so. Just opening up the potential of software to the mainstream, to everyone. Actually, one reason I think it's underhyped is that the tools are still very far from what you can do as a professional software engineer. If you imagine that they're going to get there, and I think there's no reason why they wouldn't, it'll take a few years, then everyone in the world is going to be able to create things that would have taken a team of 100 professional software engineers. That's just going to massively open up opportunities for everyone. I think Replit is a great example of this, but there will also be cases beyond just building applications that this creates.
John O'Farrell
By the way, just on that note: if you were entering Stanford or Harvard today, in 2025, would you major in computer science again, or just focus on building something?
Adam D'Angelo
I think I would. I went to college starting in 2002, right after the dot-com bubble had burst, and there was a lot of pessimism. I remember my roommate's parents had told him, don't study computer science, even though it was something he really liked. And I just did it because I liked it. The job market is definitely worse than it was a few years ago. At the same time, having the skills to understand the fundamentals of what's possible with algorithms and data structures actually really helps you in managing agents when you're using them, and I'm guessing it will continue to be a valuable skill in the future. The other question is, what else are you going to study? For every single thing you could imagine, there's an argument for why it's going to be automated. So you might as well study what you enjoy, and I think this is as good as anything.
Amjad Masad
Yeah, there's a lot to get excited by. One thing, maybe kind of random, but I get really fired up when I see mad-science experiments like the DeepSeek OCR paper that came out the other day. Did you see it? It's wild. Correct me if I'm wrong, because I only looked at it briefly, but basically you can get a lot more economical with a context window if you have a screenshot of the text instead of the text itself.
Adam D'Angelo
Yeah, I'm not the right person to be correcting you on that, but there are definitely some really interesting things there.
Amjad Masad
Yeah, and I saw another thing on Hacker News the other day: text diffusion, where someone made a text diffusion model by, instead of doing Gaussian denoising, taking a single BERT instance, masking different words, and predicting those tokens. We have a lot of components, and I don't think people think much about that. We now have the base pre-trained models, all these RL reasoning models, the encoder-decoder models, the diffusion models. There are all these different pieces; just mix them in different ways. I feel like there isn't a lot of that. It'd be great if a new research company came out that isn't trying to compete with OpenAI, but instead is just trying to discover how to put these different components together to create a new flavor of these models.
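For the curious, a minimal sketch of the masked-language-model-as-denoiser idea Amjad describes: start from an all-masked sequence and iteratively commit the most confident predictions, using BERT in place of a Gaussian denoiser. The Hugging Face calls are real; the recipe itself is a simplified illustration, not the Hacker News project he mentions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def generate(length=16, steps=8):
    # Maximum "noise": every position starts as [MASK].
    ids = torch.full((1, length), tokenizer.mask_token_id, dtype=torch.long)
    for step in range(steps):
        with torch.no_grad():
            logits = model(input_ids=ids).logits
        conf, best = logits.softmax(-1).max(-1)   # per-position confidence
        masked = ids == tokenizer.mask_token_id
        # Commit the k most confident still-masked positions this step.
        k = max(1, int(masked.sum()) // (steps - step))
        pick = (conf * masked).topk(k, dim=-1).indices[0]
        ids[0, pick] = best[0, pick]
    return tokenizer.decode(ids[0])

print(generate())
```

The output of vanilla BERT used this way is rough; the point is only how the existing components compose into a diffusion-style sampler.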
John O'Farrell
In crypto they talk about composability and mixing primitives together. And in AI, maybe there needs to be...
Amjad Masad
Be more experimentation, there's less playing around. I found, I remember in the Web 2.0 era when we were like playing around with JavaScript what browsers could do and what web workers could do, whatever. There was a lot of like really interesting weird experiments. I mean Replit was born out of that. The original version of Replit in open source pre pre the company which my interest was like can you Compile C to JavaScript?
Adam D'Angelo
Right.
Amjad Masad
That was one of the interesting things. That eventually became WASM; at the time it was Emscripten, and it was such a nasty hack. But I think there's so much of that to do. I think we're in an era of Silicon Valley that's very get-rich driven, and that makes me a little sad. That's partly why I moved the company out of SF; I feel like the culture there has gone that way. Maybe I wasn't there, but during the dot-com era a lot of people talked about how it was get-rich-fast, and then the crypto thing. So I feel like there needs to be a lot more tinkering, and I would love to see more of that, and more companies getting funded that are trying to do something a little more novel, even if it doesn't mean a fundamentally new model.
John O'Farrell
Last question. Amjad, you've been into consciousness for a long time. Are you bullish that, via some of this AI work or scientific progress elsewhere, we'll make some progress on understanding, on getting across this hard problem?
Amjad Masad
Something happened recently which is interesting: Claude 4.5 seems to have become more aware of its context length. As it gets closer to the end of the context, it starts becoming more economical with tokens. It also looks like its awareness of when it's being red-teamed or in a test environment jumped significantly. So there's something happening there that's quite interesting. Now, as for the question of consciousness, it is still fundamentally not a scientific question, and we've sort of given up on trying to make it scientific. But this is also the problem I talked about: with all the energy going into LLMs, no one is really trying to think about the true nature of intelligence, the true nature of consciousness. And there are a lot of really core questions. One of my favorites is Roger Penrose's The Emperor's New Mind, where he wrote about how everyone in the philosophy-of-mind space, and perhaps the larger scientific ecosystem, started thinking about the brain in terms of a computer. In that book he tried to show that it is fundamentally impossible for the brain to be a computer, because humans are able to do things that Turing machines cannot do, or fundamentally get stuck on, such as basic logic puzzles that we're able to see through, like "this statement is false," those old logic puzzles, where there's no way to encode that in a Turing machine. Anyway, it's a complicated argument. But if you read that book, or many others, there's a core strain of argument in the philosophy of mind about how computers are fundamentally different from human intelligence. I've been very busy, so I haven't really updated my thinking too much on that, but I think there's a huge field of study there that is not being studied.
John O'Farrell
If you were a freshman entering college today, would you study philosophy?
Amjad Masad
I would do that. I would definitely study philosophy of mind, and I would probably go into neuroscience, because I think those are the core questions that become very, very important as AI continues to take over more of jobs and the economy.
John O'Farrell
That's a great place to wrap. Amjad, Adam, thanks for coming on the podcast.
Adam D'Angelo
Thank you, thank you.
Podcast Host
Thanks for listening to this episode of the a16z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies and technologies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.
Title: Amjad Masad & Adam D’Angelo: How Far Are We From AGI?
Date: November 7, 2025
Host: John O’Farrell (Andreessen Horowitz)
Guests: Adam D'Angelo (Quora, Poe) and Amjad Masad (Replit)
This episode brings together Adam D'Angelo (Quora/Poe) and Amjad Masad (Replit) to discuss the current state, limitations, and near-term future of large language models (LLMs) on the path to artificial general intelligence (AGI). The conversation unpacks recent AI progress, critiques of LLMs' capabilities, the economic and societal impacts of automation, and emerging paradigms like agent-based productivity. They also touch on pressing questions about the sovereignty of individuals in a high-AI world, the future of work, the divide between "brute force" and "true" intelligence, and philosophical perspectives on consciousness and intelligence.
Rapid and Ongoing Progress
"I don't really understand where the kind of bearishness is coming from." (Adam D’Angelo, 01:51)
Limits of LLMs and "Brute Force" Intelligence
“I started being a bit of a more public doubter…around the time when the AI safety discussion was reaching its height back in maybe 22, 23.” (Amjad Masad, 04:49)
Definitions of AGI
Solo Entrepreneurship & Democratized Opportunity
“The number of solo entrepreneurs that this technology is going to enable is vastly increased.” (Adam D’Angelo, 00:20)
“For the first time, opportunity is massively available to everyone.” (Amjad Masad, 00:29, 29:14)
Productivity Gains, GDP Growth & Bottlenecks
The Expert Data & Training Crisis
“...the agents are better than new people. That feels like a weird equilibrium.” (Amjad Masad, 15:02)
Shifting Paradigms in AI Development
“It feels like we're in a brute force type of regime...” (Amjad Masad, 10:33)
Agents as AI’s New Modality
Tacit and Unwritten Knowledge
The Growing Importance of Data Infrastructure
“Sovereign Individual” Framework
Decentralization vs. Centralization
“...tools are monetizing from the get-go...with subscriptions you can just charge right away.” (Adam D’Angelo, 36:54)
Underrated Areas: “Vibe Coding” and Mad Science
AI and Philosophy of Mind
On Recent Progress & Brute Force Limits
“Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years.”
— Adam D’Angelo (00:00, 11:32)
“LLMs are a different kind of intelligence than what humans are… I think once we truly crack intelligence, it’ll feel a lot more scalable.”
— Amjad Masad (04:49, 06:36)
On Automation & Training Crisis
“If the agents are better than new people… you increase productivity a lot, but they're not hiring new people… that feels like a weird equilibrium.”
— Amjad Masad (15:02)
“There's just not as many [CS] jobs as there used to be. And LLMs are a little more substitutable for what they previously would have done.”
— Adam D’Angelo (15:58)
On Future Economic and Political Organization
On Industry Trends & Research
“I actually think vibe coding generally is just unbelievably high potential…underhyped even still, I think.”
— Adam D’Angelo (52:49)
“There needs to be a lot more tinkering… more companies getting funded that are trying to just do something a little more novel.”
— Amjad Masad (58:17)
On Consciousness and Philosophy
| Segment | Speaker | Key Topics / Quotes |
|---------|---------|---------------------|
| 00:00–00:39 | Adam, Amjad | Framing: solo entrepreneurship, the new revolution |
| 01:30–04:46 | Adam | Rapid progress, optimism, AGI definitions, bearishness critique |
| 04:49–09:17 | Amjad | Skepticism about AGI hype, current LLM limitations |
| 09:17–13:27 | Adam, Amjad | Brute force vs. true intelligence, industry research vs. fundamentals |
| 13:27–17:40 | Panel | Economic impacts, GDP, automation's bottlenecks |
| 17:40–21:07 | Adam, Amjad | Training crisis, survival of tacit knowledge |
| 24:08–28:56 | Amjad, Adam | The "Sovereign Individual" framework, future of entrepreneurship |
| 29:56–36:54 | Panel | Disruption vs. sustaining innovation; hyperscalers vs. startups |
| 44:35–51:40 | Amjad | Agents and productivity, Replit's evolution |
| 52:49–58:17 | Adam, Amjad | Underhyped areas, need for more experimentation in AI |
| 58:17–61:27 | Amjad | Consciousness, philosophy of mind, the hard problem |
The discussion offers an in-depth, candid, and sometimes contradictory tour through today’s AI state of play. Both guests agree: AI is rapidly broadening what individuals and small teams can accomplish; however, critical questions remain about the limitations of current approaches, the impending crises in upskilling and expertise, and the societal/political re-ordering to come. While brute force methods are “good enough” to drive major change, the breakthrough to true AGI—let alone understanding consciousness—may lie elsewhere entirely. As we enter the “decade of agents,” opportunity and disruption are both omnipresent, and a renewed curiosity for basic research and human-centric values becomes ever more vital.