B (33:03)
One prediction for 2026. There's so much talk of agents right now, and there has been for a while, but no one has truly created a mass-scale consumer agentic AI. I think the models are there today for this to be possible, and in 2026 we will see the group that figures out the right interface, system, and product that creates as big a step function in the overall experience as chat did when it first came out. And I think this area is not nearly as ceded to the labs as people assume. It really is anyone's ballgame.

Hello, Aaron here. First of all, I get quite awkward doing selfie videos. This is my ninth take of this video, so I hope it goes okay. My 2026 prediction is that this is certainly going to be the continued year number two of AI agents, but in particular AI agents in the enterprise, in deep vertical or domain-specific areas. I think this is going to be the main way that we take all of the progress we're seeing in AI models and actually deliver it into the enterprise. You have to be able to tie into the workflow of the organization, you have to get access to the data they have, you have to have the right context engineering to make the agents actually work, and then you have to do the change management that makes the agents effective. So this is going to be a year where we start to see this pattern emerge more and more, which equally means we need a lot more happening on agent harnesses. Shout out to Dex for that one. It's definitely going to be the year of the agent harness: seeing how you start to get an order-of-magnitude improvement on a model's capabilities by having all the right scaffolding around it. And finally, it will be the year of economically useful evals, really starting to figure out how these models end up doing a lot more knowledge-worker tasks in the economy. We're going to see a lot more of that in 2026.
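The "scaffolding around the model" idea above can be made concrete with a minimal sketch of an agent harness: a loop that routes a model's tool requests through real tool implementations until it produces a final answer. Everything here is illustrative; `fake_model`, `run_harness`, and the message format are hypothetical stand-ins, not any particular lab's API.

```python
# Minimal agent-harness sketch (illustrative only).
# `fake_model` stands in for a real LLM call; a production harness would
# also handle retries, context management, and tool permissioning.

def fake_model(messages):
    # Stand-in model: requests the calculator once, then answers
    # with whatever the tool returned.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expr": "6*7"}}
    return {"answer": messages[-1]["content"]}

TOOLS = {"calculator": lambda args: str(eval(args["expr"]))}

def run_harness(model, user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and feed the result back in.
        result = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(run_harness(fake_model, "What is 6 times 7?"))  # -> 42
```

The harness, not the model, owns the loop: which tools exist, how results re-enter context, and when to stop. That is where the claimed order-of-magnitude capability gains would come from.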
We saw some previews of that this year with APEX and GDPval and a handful of others; we're going to see way more of that. So those are the predictions, and we'll see you in 2026.

I think 2026 is going to be a very interesting year for American open models. Over the last year, the frontier of open intelligence shifted from America to China, starting with the release of DeepSeek at the end of 2024. American institutions were slow to notice this erosion of American leadership in open intelligence, but I think they've noticed in a big way over the last half year, both at the government level and at the enterprise level. There are some really interesting neolabs starting to come out with open intelligence as their directive, and there are a few of these, not just Reflection. These companies are starting to produce some very interesting small open models. Next year, I think we'll see the US regaining leadership at the open-weight frontier at the largest scale, and I'm really excited to see that.

Hey folks, my prediction for 2026 is that we will see AI become much more politicized. I think it will become a major point of discussion for the 2026 midterm elections. Some people will come out strongly against it, some will come out strongly supportive of it, and I'm not sure which side is going to win out.

2025 has marked an incredible year in AI drug discovery. In the past year alone, we've gone from being able to design simple molecules on the computer, to designing simple antibodies, and now, most recently, full-length antibodies with drug-like properties, zero-shot, on the computer. If 2025 has been the year of research in AI drug discovery, 2026 will be the year of deployment. The models have finally entered an era where they're becoming really useful for drug discovery.
Not only do they make things faster, but they're also allowing us to go after really challenging targets that have traditionally been very difficult with conventional techniques. I'm really excited to see what comes next, because the models show no signs of slowing down.

Okay, my prediction for 2026 is that it will be the year that YOLO dies. We will begin transforming ourselves from "you only live once" to "don't die." I think right now we're kind of a suicidal species. We do very primitive things. We poison ourselves with what we eat. We design our lives so that we slowly kill ourselves. Companies make profits by making us addicted and miserable. We destroy the only home we have. And somehow we celebrate these things as virtues. I think it's all backwards, and I think one day we'll look back and be pretty astonished that we behaved like this. The shift coming is going to be simple and radical: we say yes to life and no to death. It's simple, but I think it could come in response to AI's progress, and we do it defiantly, as a form of unification. It does require a lot of courage: courage to say we recognize how sacred our existence is, that we don't want to throw it away, and that we want to defend it with every bit of courage and strength we have, because it is so precious. I think it's going to be the year we end YOLO and begin "don't die."

The most striking thing about next year is that other forms of knowledge work are going to experience what software engineers are feeling right now, where they went from typing most of their lines of code at the beginning of the year to typing barely any of them at the end of the year. I think of this as the Claude Code experience, but for all forms of knowledge work. I also think that continual learning probably gets solved in a satisfying way, that we see the first test deployments of home robots, and that software engineering itself goes utterly wild next year.
My prediction for 2026 is that it's the year everyone's perceptions get flipped. Currently everyone believes that outside of Google you can only use Nvidia; it will become obvious that that's not the case. Currently about a third of Americans hate AI and think it's really bad; that number will increase. Currently most Americans think AI is not useful; that will flip as well. So everyone's priors will be flipped. That's because the transformative use of AI will be so prevalent, and its obvious utility so high, that no one's priors can survive it: the cognitive dissonance will be wiped away.

Hey, I'm Benjamin Spector. I'm Ash Inspector. And our prediction is that 2026 is the year of energy-efficient AI. Data center buildouts are primarily constrained by energy: power availability, grid interconnects, high-voltage equipment, things like that, which is why xAI's Colossus was initially powered by on-site gas turbines. The thing is, the demand for compute is continuing to grow. Labs, neolabs like us, and startups like Krishna have pretty remarkably insatiable demand for both training and inference, and this demand is currently outstripping our ability to bring watts onto the grid. This means that in 2026 it will be really important to squeeze every available bit of intelligence out of every watt. That said, in the long term, chips probably matter more than power, because chips depreciate much more quickly than the underlying power infrastructure. For example, at typical data center power prices per kilowatt-hour, the chips cost an order of magnitude more than the power over a five-year depreciation cycle. So in 2026 we think intelligence per watt is really important: squeeze as much intelligence as you can out of every unit of energy. But in the long term, we think it's the chips that matter more. Happy Holidays. Happy New Year.
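The chips-versus-power claim above is back-of-the-envelope checkable. All numbers below are illustrative assumptions of mine (the speakers' exact figures didn't survive transcription), not quoted prices.

```python
# Rough check of "chips cost roughly an order of magnitude more than
# power over a five-year depreciation cycle." Assumed numbers only.

chip_cost_usd = 30_000   # assumed accelerator purchase price
power_draw_kw = 1.2      # assumed sustained draw incl. cooling overhead
price_per_kwh = 0.08     # assumed industrial electricity price, $/kWh
years = 5                # depreciation cycle from the transcript

hours = years * 365 * 24                      # 43,800 hours
energy_cost = power_draw_kw * hours * price_per_kwh
print(f"5-year power bill: ${energy_cost:,.0f}")          # $4,205
print(f"chip/power cost ratio: {chip_cost_usd / energy_cost:.1f}x")  # 7.1x
```

Under these assumptions the chip costs roughly 7x its lifetime electricity, which is consistent with the speakers' "order of magnitude" framing; cheaper power or hungrier chips push the ratio higher still.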