There's been a marked shift in the market in the last year, and especially in the last few months, most pronounced by Claude Code, which is a coding agent with a model behind it that is so good that I think you now have vibe coders: people who didn't really code much, or hadn't coded in a long time, who are using essentially English as a programming language as an input into this code bot, which can do end-to-end coding instead of just helping you debug things in the middle. You can describe an application that you want, you can have it lay out a plan, you can have it interview you for the plan, you can give it feedback along the way, and then it'll chunk it up and it'll build all the scaffolding, it'll download all the libraries and all the connectors and all the hooks, and it'll start building your app, building test harnesses, and testing it. And you can keep giving it feedback and debugging it by voice, saying, this doesn't work, that works, change this, change that, and have it build you an entire working application without your having written a single line of code. For a large group of people who either don't code anymore or never did, this is mind-blowing. This is taking them from idea space and opinion space and from taste directly into product. So vibe coding is the new product management. Instead of trying to manage a product or a bunch of engineers by telling them what to do, you're now telling the computer what to do. And the computer is tireless. The computer is egoless, and it'll just keep working. It'll take feedback without getting offended. You can spin up multiple instances, it'll work 24/7, and you can have it produce working output. What does that mean? Just like now anybody can make a video, or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don't have one already in the App Store, but it doesn't even begin to compare to what we're going to see. 
However, when you start drowning in these applications, does that necessarily mean that these are all going to get used? No. I think it's going to break into two kinds of things. First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there's no demand for average. Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal. So there will be more of the best. There will be a lot more niches getting filled. You might have wanted an application for a very specific thing like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn't large enough to justify the cost of an engineer coding away for a year or two. But now the best vibe-coded app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled. And as that happens, the tide will rise. For the best applications, those engineers themselves are going to be much more leveraged. They'll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better. A lot more niches will get filled. And even individual niches, such as an app that's just for your own very specific health tracking needs, or for your own very specific architecture, layout, or design: apps that could never have existed will now exist. We should expect the same thing that happened on the Internet with Amazon, which replaced a bunch of bookstores with one super-bookstore and a zillion long-tail sellers, or YouTube, which replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator called YouTube, maybe a second one called Netflix, and then a whole long tail of content producers. 
So in the same way, the app store model will become even more extreme, where you will have one or two giant app stores helping you filter through all of the AI slop apps out there. And then at the very head, there'll be a few huge apps that will become even bigger because now they can address a lot more use cases or just be a lot more polished. And then there'll be a long tail of tiny little apps filling every niche imaginable. As the Internet reminds us, the real power and wealth, super wealth, goes to the aggregator. But there's also a huge distribution of resources into the long tail. It's the medium-sized firms that get blown apart. The five-, ten-, twenty-person software companies that were filling a niche for an enterprise use case can now either be vibe coded away, or the lead app in the space can now encompass that use case. So if anyone can code, then what is coding? Coding still exists in a couple of areas. The most obvious place that coding exists is in training these models themselves. There are many different kinds of models. There are new ones coming out every day. There are different ones for different domains. We're going to see different models for biology, for programming. We're going to see pointed, focused models for sensors. We're going to see models for CAD, for design. We're going to see models for 3D and graphics and games, models for video. You're going to see many different kinds of models. The people who are creating these models are essentially programming them, but they're programmed in a very different way than classic computers. Classic computing is you have to specify in great detail every step, every action the computer is going to take. You have to formally reason about every piece and write it in a highly structured language that allows you to express yourself extremely precisely. The computer can only do what you tell it to do. 
And then once you've got this very structured program, you run data through it, and the computer runs the data and gives you an output. It's basically an incredibly fancy, very complicated, meticulously programmed calculator. Now, when it comes to AI, you're doing something very different, but you are nevertheless programming it. What you're doing is you're taking giant data sets that have been produced by humanity thanks to the Internet, or aggregated in other ways, and you're pouring those data sets into a structure that you've defined and tuned. And that structure tries to find a program that can produce more of that data set, or manipulate that data set, or create things off that data set. So you're searching for a program inside this construct that you've designed. You've set up a model. You've tuned the number of parameters, you've tuned the learning rate, you've tuned the batch size, you've tokenized the data that's coming in, broken it into pieces, and you're pouring it inside the system you've designed, almost like a giant pachinko machine. And now the system is trying to find a program, and it could find many different programs. So your tuning really influences how good the program that you found is. And that program can now suddenly be expressive in different kinds of domains. So it can do things that computers before were traditionally very bad at. Traditional computers are very good when you program them to give you precise output, specific answers to specific questions, things you can rely on and repeat over and over again. But sometimes you're operating in the real world and you're okay with fuzzy answers. You're even okay with wrong answers. For example, in creative writing, what's the wrong answer? If you're writing a piece of poetry or fiction, what's the wrong answer? If you're searching on the web, there are many right answers. There are many versions of the right answer, but they're not all quite perfectly right. 
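For the technically curious, that search-for-a-program loop can be sketched at toy scale. Everything below is illustrative, a made-up hidden rule and a two-parameter model, nothing like how a real foundation model is actually trained, but the knobs are the same ones just mentioned: parameters, a learning rate, a batch size, and data poured through a structure you've defined.

```python
# Toy sketch of "searching for a program" by tuning a structure.
# The hidden rule, the numbers, and the model are all illustrative.
import random

random.seed(0)

# The "data set we pour in": points generated by a hidden rule,
# y = 3x + 2, plus a little noise.
xs = [random.uniform(-1, 1) for _ in range(1000)]
data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in xs]

# The "structure we've defined and tuned": two parameters, a learning
# rate, and a batch size. Training searches this structure for a
# program that reproduces the data.
w, b = 0.0, 0.0      # the model's parameters ("weights")
learning_rate = 0.1
batch_size = 32

for step in range(500):
    batch = random.sample(data, batch_size)
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / batch_size
    grad_b = sum(2 * (w * x + b - y) for x, y in batch) / batch_size
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 1), round(b, 1))  # lands near the hidden rule, 3 and 2
```

Tune the learning rate or the batch size badly and the search finds a worse program; tune them well and the structure recovers the rule hiding in the data. That's the whole game, played at a vastly larger scale.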
And real life sort of works that way. There are variations of right answers, or mostly right answers. When you're drawing a picture of a cat, there are many different cats you could draw. There are many different levels of detail. There are many different styles you could use. When these semi-wrong or fuzzy answers are acceptable, then these programs discovered through AI are much more interesting and much more adapted to the problem than ones that you coded up from scratch, where you had to be super precise. Fundamentally, what we're doing is a new kind of programming. This is the forefront of programming. This is now the art of programming. These people are the new programmers. And that's why you can see AI researchers getting paid gargantuan amounts, because they've essentially taken over programming. Does this mean that traditional software engineering is dead? Absolutely not. Software engineers, even the ones who are not necessarily tuning or training AI models, are now among the most leveraged people on earth. Sure, the people who are training and tuning models are even more leveraged, because they're building the tool set that software engineers are using. But software engineers still have two massive advantages over everyone else. First, they think in code, so they actually know what's going on underneath. And all abstractions are leaky. So when you have a computer programming for you, when you have Claude Code or an equivalent programming for you, it's going to make mistakes, it's going to have bugs, it's going to have suboptimal architecture, so it's not going to be quite right. And someone who understands what's going on underneath will be able to plug the leaks as they occur. 
So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you're going to want to have a software engineering background. The traditional software engineer is going to be able to use these tools much better. And there are still many kinds of problems in software engineering that are out of scope for these AI programs today. The easiest way to think about those is problems that are outside of their data distribution. For example, if they need to do a binary search or reverse a linked list, they've seen countless examples of that, so they're extremely good at it. But when you start getting out of their domain, when you have to write very high-performance code, when you're running on architectures that are novel or brand new, when you're actually creating new things or solving new problems, then you still need to get in there and hand-code it. At least until either there are so many of those examples that new models can be trained on them, or until these models can sufficiently reason at even higher levels of abstraction and crack it on their own. Because given enough data points, there is some evidence that these AIs actually learn. They learn to a higher level of abstraction, because the act of forcing them to compress the data forces them to learn higher-level representations. If I show an AI five circles, it can just memorize exactly what the sizes and the radii and the thicknesses and so on of those circles are. If I show it 50,000 circles or 5 billion circles, and I give it a very small number of parameter weights, which are its equivalent of neurons, to memorize that, it's going to be much better off figuring out pi and how to draw a circle and what thickness means, and forming an algorithmic representation of that circle rather than memorizing circles. 
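To make the in-distribution point concrete, here's the canonical reverse-a-linked-list exercise, a minimal Python sketch of the kind of task these models have seen countless times and are therefore extremely good at:

```python
# Reversing a singly linked list: a textbook, in-distribution task.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list iteratively; return the new head."""
    prev = None
    while head is not None:
        nxt = head.next    # remember the rest of the list
        head.next = prev   # point this node backward
        prev = head        # advance prev
        head = nxt         # advance head
    return prev

# Build 1 -> 2 -> 3, reverse it, and read it back out.
head = reverse(Node(1, Node(2, Node(3))))
values = []
while head:
    values.append(head.value)
    head = head.next
print(values)  # [3, 2, 1]
```

Contrast this with writing a novel lock-free data structure for brand-new hardware: there's no mountain of training examples for that, which is exactly where the hand-coding engineer still earns their keep.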
Given all that, these things are learning at an accelerated rate, and you can see them starting to cover more of the edge cases I've talked about. But at least as of today, those edge cases are prevalent enough that a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders. And remember, there is no demand for average. The average app, nobody wants it, at least as long as it's not filling some niche. The app that is better will win essentially 100% of the market. Maybe there's some small percentage that will bleed off to the second-best app, because it does some little niche feature better than the main app, or it's cheaper, or something of the sort. But generally speaking, people only want the best of anything. So the bad news is there's no point in being number two or number three. Like in the famous Glengarry Glen Ross scene where Alec Baldwin says first place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place, you're fired. That's absolutely true in these winner-take-all markets. That's the bad news. You have to be the best at something if you want to win. However, the set of things you can be best at is infinite. You can always find some niche that is perfect for you, and you can be the best at that thing. This goes back to an old tweet of mine where I said: become the best in the world at what you do. Keep redefining what you do until this is true. And I think that still applies in this age of AI.
B (23:11)
That one's glib in multiple ways. First of all, being an entrepreneur isn't a job. It's literally the opposite of a job. And in the long run, everyone's an entrepreneur. Careers got destroyed first, jobs get destroyed second. But all of it gets replaced by people doing what they want and creating something useful that other people want. So no entrepreneur is worried about an AI taking their job, because entrepreneurs are trying to do impossible things. They're trying to do very difficult things. Any AI that shows up is their ally and can help them tackle this really hard problem. They don't even have a job to steal. They have a product to build, they have a market to serve, they have a customer to support. They have creativity to realize. They have a thing that they want to instantiate in the world, and they want to build a repeatable and scalable process around getting it out into the world. This is so difficult that any AI that shows up that can do any of that work is their ally. If the AIs themselves are entrepreneurs, they're likely going to just be entrepreneurs serving other AIs, or they're under the control of an entrepreneur. The thing that the AI itself is missing, at the end of the day, is its own creative agency. It's missing its own desires. And they have to be authentic, genuine desires. Unless you can pull the plug on an AI and turn it off, and unless it lives in mortal fear of being turned off, and unless it can actually take its own actions for its own reasons, from its own instincts, its own emotions, its own survival, its own replication, it's not quite alive. And even then, people will ask: is it alive? Because consciousness is one of those things. As a qualia, it's like a color. It's like if you say red, I don't know if you're actually seeing red. You might be seeing what I see as green, and I might be seeing what you see as red. But we'll never know, because we can't get into each other's minds. 
So the same way, even an AI that's completely imitating everything that humans do, to some people, it'll always be an imitation machine, and to others it'll be conscious, but there'll be no way of distinguishing the two. We're still pretty far from that, though, right now. The AIs are not embodied. They don't have agency, they don't have their own desires, they don't have their own survival instinct. They don't have their own replication. Therefore, they don't have their own agency. And because they don't have their own agency, they cannot do the entrepreneur's job. In fact, I would summarize this by saying the key thing that distinguishes entrepreneurs from everybody else right now in the economy is entrepreneurs have extreme agency. That's why it's diametrically opposed to the idea of a job. A job implies that you're working for somebody else or you're filling a slot, but they're operating in an unknown domain with extreme agency. There are other examples of roles like this in society. An explorer also does the same thing, right? If you're landing on Mars or you're sailing a ship to an unknown land, you're also exercising extreme agency to solve an unsolved problem. A scientist exploring an unknown domain does this. A true artist is trying to create something that does not exist and has never existed, yet somehow fits into the set of things that can explain human nature, allow them to express themselves and create something new. So in all of these roles, whether you're a scientist or whether you're a true artist, or whether you are an entrepreneur, what you're trying to do is so difficult and is so self directed that anything like an AI that can help you is a welcome ally. You're not doing it because it's a job. You're not trying to fill a slot that somebody else can show up and fill. 
In fact, if the AI can create your artwork, or if the AI can crack your scientific theory, or if the AI can create the object or the product that you're trying to make, then all it does is it levels you up. Now it's the AI plus you. The AI is a springboard from which you can jump to a further height. We're going to see some incredible art created that's AI assisted. We will see movies that we couldn't have imagined created by people using AI tools. There's an analogy here in art that's interesting. For a long time in art, the rough direction was trying to paint things that were more and more realistic. Paint the human body, paint the fruit, paint proper lighting, et cetera. Eventually photography came along and then you could replicate things very precisely. And so that selection pressure went away. And then art got weird. Art went in many different directions. Art became all about, well, can I be surreal? Can I create something that expresses me? A lot of art schools spun out of that, that got really weird, including modern art and postmodernism. But also, I would argue, some of the greatest creativity came at that time. We were freed up. Photography got democratized, but photography itself became a form of art. And there were great photographers taking many different kinds of photographs. And now everyone's a photographer. There are still artists who are photographers, but it's not the pure domain of just a few people. So the same way, because AI makes it so easy to create the basic thing, everybody will create the basic thing. It'll have value to them individually. A few will still stand out that will create variations of it that are good for everyone. And it would be very hard to argue that society is worse off because of photography, although it may have certainly felt like that to some of the artists who were maybe making a living painting portraits of people and got displaced. 
Similar things will happen with AI, where there are people who are making a very specific living doing very specific jobs, which the AI can do, that will get displaced. But in exchange, everyone in society will have the AI. You'll have incredible things that were created with AI that couldn't have been created otherwise. And within a few decades, it'll be unimaginable that you could roll back the clock and get rid of AI, or any kind of software, any kind of technology for that matter, just to keep a few jobs that were obsolete. The goal here is not to have a job. The goal is not to have to get up at 9 in the morning and come back at 7pm exhausted, doing soulless work for somebody else. The goal is to have your material needs solvable by robots, to have your intellectual capabilities leveraged through computers, and for anybody to be able to create. I used to do this thought exercise, I think I talked about it in a podcast that you and I did literally 10 years ago: imagine if everybody were a software engineer, or everybody were a hardware engineer, and they could have robots and they could write code. Imagine the world of abundance we would live in. Actually, that world is now becoming real thanks to AI. Everybody can be a software engineer. In fact, if you think you can't be, you can go fire up Claude right now, or any of your favorite chatbots, and you can go start talking to it. You'd be amazed how quickly you could build an app. It'll blow your mind. And once we can instantiate AI through robotics, which is a hard problem, I'm not saying we're that close to having solved it yet, but once we have robots, everyone can also do a little bit of hardware engineering. And so I think we're getting closer and closer to that vision.
B (33:28)
Yeah, humans are universal explainers. Anything that is possible with the current laws of physics as we know them, a human can model in their own heads. Therefore, just by enough digging, enough questioning, we could figure anything out related to that. We should discuss AI as a learning tool, because I think the other place where it's incredibly powerful is as the most patient tutor, one that can meet you at your level and explain anything to your satisfaction a hundred different ways, a hundred different times, until you finally get it. I don't think the AIs are going to be figuring things out that humans cannot understand. But intelligence is poorly defined. What is the definition of intelligence? There's the G factor, which predicts a lot of human outcomes. But the best evidence for the G factor is its predictive power. It's that you measure this one thing and then you see people get much better life outcomes along the way, in things that seem even somewhat unrelated to G. So I would argue, and I think this is one of our more popular tweets, the only true test of intelligence is if you get what you want out of life. This triggers a lot of people, because they go to school, they get their master's degrees, they think they're super smart, and then they don't have great lives, they aren't super happy, or they have relationship problems, or they don't make the money that they want, or they become unhealthy, and this sort of triggers them. But that really is the purpose of intelligence for you as a biological creature: to get what you want out of life, whether it's a good relationship or a mate or money or success or wealth or health or whatever it is. So there are people who I think are quite intelligent, because you can tell they have high-quality functioning lives and minds and bodies, and they've just managed to navigate themselves into that situation. 
It doesn't matter what your starting point is, because the world is so large now, and you can navigate it in so many different ways, that every little choice you make compounds and demonstrates your ability to understand how the world works, until you finally get to the place that you want. Now, the interesting thing about this definition, that the only true test of intelligence is if you get what you want out of life, is that an AI fails it instantly. Because an AI doesn't want anything out of life. The AI doesn't even have a life, let alone that, but it doesn't want anything. An AI's desires are programmed by the human controlling it. But let's give it that for a second. Let's say the human wants something and programs the AI to go get it. Then the AI is acting as a proxy for the human, and the intelligence of the AI can be measured as: did it get that person that thing? Most of the things that we want in life are adversarial or zero-sum games. So, for example, if you want to seduce a girl or get a husband, you're competing with all the other people who are out there seducing girls or trying to get husbands. So now you're in a competitive situation, and the AI has to outmaneuver the other people. Or if you say, hey AI, go trade on the stock market for me and make me a bunch of money, that AI is trading against other humans and other trading bots. It's an adversarial situation. It has to outmaneuver them. Or if you say, hey AI, make me famous, write me incredible tweets, write me great blog posts, record me great podcasts in my own voice and make me famous, now it's competing against all the other AIs. So in that sense, intelligence is measured in a battlefield arena. It's a relative construction. I think the AIs are actually mostly going to fail in those regards. Or, to the extent that they succeed, because they are freely available, they will get competed away, and the alpha that remains will be entirely human. 
As a thought exercise, imagine that every guy had a little earpiece where an AI was whispering to him, a Cyrano de Bergerac kind of earpiece telling him what to say on the date. Well, then every woman would have an earpiece telling her to ignore what he said, or which part was AI-generated and which part was real. If you have a trading bot out there, it's going to be nullified or canceled out by every other trading bot, until all the remaining gain goes to the person with the human edge, with the extra creativity. Now, that's not to say that the technology is completely evenly distributed. Most people still aren't using AI, or aren't using it properly, or aren't using it all the way to the max, or it's not available in all domains or all contexts, or they're not using the latest models. So you can always have an edge, like people who early-adopt technology always do if you adopt the latest technology first. This is why I always say to invest in the future. You want to live in the future, you want to actually be an avid consumer of technology, because it's going to give you the best insight on how to use it, and it will give you an edge against the people who are slower adopters or laggards. Most people hate technology. They're scared of it. It's intimidating. You press the wrong button, the computer crashes. You lose your data, you do the wrong thing, you look like an idiot. Most people do not have a positive relationship with complex technology. Simple technology, embedded technology, they're fine with. You flip a light switch, the light turns on. That used to be technology. It's so simple now, you don't think of it as technology anymore. You get in a car, you turn the steering wheel left. To a caveman, that would be a miracle: the car turns left. It's no longer technology to you. But computer technology in particular has had very complex interfaces and been very inaccessible and very intimidating to people in the past. 
Now with the AIs, we're getting the chatbot interface, which is: you just talk to it or you type to it. And one of the great things about these foundational models, what truly makes them foundational, is you can ask them anything and they'll always give you a plausible answer. It's not going to say, oh, sorry, I don't do math, or I don't do poetry, or I don't understand what you're talking about, or I can't give relationship advice, or anything like that. Its domain is everything that people have ever talked about. In that sense, it's less intimidating. It can be more intimidating because we've anthropomorphized it so much. If you think Claude or ChatGPT is a real person, then it can be a little scary. Am I talking to God? This guy seems to know so much. He knows everything. He's got an opinion on everything. He's got every piece of data. Oh my God, I'm useless. Let me start talking to it and asking it what to do. And you can reverse the relationship and fool yourself very quickly. That can be intimidating. Overall, I think these AIs are going to help a lot of people get over the tech fear. But if you're an early adopter of these tools, like with any other tool, but even more so with these, you just have a huge edge on everybody else. I remember early on when Google first came out, I used to use it a lot. In my social circle, people would ask me basic questions and I would just go Google it for them and look like a genius. Eventually this hilarious website came along, something like lmgtfy.com, which stood for Let Me Google That For You. When somebody asked you a question, you would go type the question into this website, and it would create a tiny little inline video showing the question being typed into Google, along with the Google results. And I feel like AI is in a similar domain right now, where I will sit around in social contexts and people will be debating some point that can be easily looked up by AI. Now, you do have to be very careful with AI. 
They do hallucinate, and they do have biases in how they're trained. Most of them are extremely politically correct and taught not to take sides, or to only take a particular side. I actually run most of my queries, almost all, actually, through four AIs, and I always fact-check them against each other. And even then, I have my own sense of when they're bullshitting or when they're saying something politically correct, and I'll ask for the underlying data or the underlying evidence. And in some cases I'll dismiss it outright, because I know the pressures that the people who trained it were under and what the training sets were. However, overall it is a great tool to just get ahead. And in domains that are technical, scientific, mathematical, that don't have a political context to them, the AI is much more likely to give you something close to a correct answer. And in those domains, they are absolute beasts for learning. I will now routinely have AI generate graphs, figures, charts, diagrams, analogies, illustrations for me. I'll go through them in detail, and then I'll say, wait, I don't understand that. I can ask it super basic questions, and I can really make sure that I understand the thing I'm trying to understand at its simplest, most fundamental level. I just want to establish a great foundation on the basics. And I don't care about the overly complicated, jargon-heavy stuff. I can always look that up later. But now, for the first time, nothing is beyond me. Any math textbook, any physics textbook, any difficult concept, any scientific principle, any paper that just came out, I can have the AI break it down and then break it down again, and illustrate it, analogize it, until I get the gist and I understand it at the level that I want. So these are incredible tools for self-directed learning. The means of learning are abundant. It's the desire to learn that's scarce. 
But the means of learning have just gotten even more abundant, and, more importantly than more abundant, because we had abundance before, it's at the right level. AI can meet you at exactly the level that you are at. So if you have an eighth-grade vocabulary but fifth-grade mathematics, it can talk to you at exactly that level. You will not feel like a dummy. You just have to tune it a little bit until it's presenting you the concepts at the exact edge of your knowledge. So rather than feeling stupid because it's incomprehensible, which happens with a lot of lessons and a lot of textbooks and a lot of teachers, or feeling bored because it's too obvious, which also happens, it can instead meet you exactly where you are: oh yeah, I understood A and I understood B, but I never understood how A and B were connected together. Now I can see how they're connected, so now I can go to the next piece. That kind of learning is magical. You can have that aha moment where two things come together, over and over again.