Transcript
Sarah Wang (0:01)
I recently chatted with a head of IT who told me that, for the first time in his two-decade career, he believed IT support was fundamentally going to change.
Mark Andrusko (0:10)
If all of us want this software to be doing work for us, ideally it's doing work with at least as much competency as a human, if not more.
Stephanie Zhang (0:19)
We're no longer designing for humans, but for agents. The new optimization isn't visual hierarchy, but machine legibility. And that will change the way we create and the tools that we use to do it.
Podcast Host / Narrator (0:32)
Every year we step back and ask a simple question: what will builders focus on next? Our 2026 Big Ideas bring together the themes our investing teams believe will shape the coming year in tech. This episode is built around three big ideas that together explain where AI products are actually heading next. The shift is not just that models are getting smarter; it's that software is changing shape. AI is moving from a tool you consult to a system that can understand intent and take action. You're going to hear three different perspectives on that: what it means for the interface, what it means for how we design software and information, and what it means for how work gets executed inside organizations. The first big idea is that the prompt box is not the final interface for AI. Mark Andrusko argues that the winning products will feel less like chat and more like proactive teammates. They'll notice what you're doing, anticipate what you need, and propose actions you can approve. Here's Mark.
Mark Andrusko (1:28)
I'm Mark Andrusko, a partner on our AI apps investing team. My big idea for 2026 is the death of the prompt box as the primary user interface for AI applications. The next wave of apps will require way less prompting. They'll observe what you're doing and intervene proactively with actions for you to review. The opportunity we're attacking used to be the $300 to $400 billion of software spend annually in the world. Now what we're excited about is the $13 trillion of labor spend that exists in the US alone. That's made the market opportunity, or the TAM, for software about 30 times bigger. If you start from there and then you think about, okay, if all of us want this software to be doing work for us, ideally it's doing work with at least as much competency as a human, if not more. Right? And so I like to think about, well, what do the best employees do? What do the best human employees do? I've recently been talking about this graphic that was floating around on Twitter. It's a pyramid of the five types of employees, ranked by agency, and why the ones with the most agency are the best. If you start at the bottom rung of the pyramid, it's people who identify a problem and then come to you, ask for help, and ask what to do. That's the lowest-agency employee. But if you go to the S tier, the highest-agency employee you could possibly have, they identify a problem, they do the research necessary to diagnose where the problem came from, they look into a number of possible solutions, they implement one of those solutions, and then they keep you in the loop. Or they come to you at the very last minute and say, do you approve of this solution I found? And that's what I think the future of AI apps will be. I think that's what everyone wants and what we're all working towards. So I feel pretty confident that we're almost there.
I think LLMs have continued to get better and faster and cheaper, and I think there's a world in which user behavior will still necessitate a human in the loop at the very end to approve things, certainly in high-stakes contexts. But I think the models are more than capable of getting to a point where they're suggesting something really smart on your behalf and you basically just have to click accept. As you guys know, I'm pretty obsessed with the notion of an AI-native CRM, and I think this is a perfect example of what these proactive applications could look like. In today's universe, a salesperson might open their CRM, explore all the open opportunities they have, look at their calendar for that day, and try to think about, okay, what are the actions I can take right now to have the greatest impact on my funnel and my ability to close deals? With the CRM of tomorrow, your AI agent, or your AI CRM, should be doing all these things on your behalf in perpetuity: identifying not only the most obvious opportunities in your pipeline, but going through your emails from the last two years and harvesting insights like, this was once a warm lead and you kind of let it die; maybe we should send them this email to drum them back up into your process. So between drafting an email, harvesting your calendar, and going through your old call notes, the opportunities are just endless. The ordinary user will still want that last-mile approval almost 100% of the time. They will want the human part of the human in the loop to be the final decision maker. And that's great; I think that's the natural way in which this will evolve. I can also imagine a world in which the power user takes a lot of extra effort to train whichever AI app they're using to have as much context about their behavior and how they perform their work as humanly possible. These apps will utilize larger context windows.
They'll utilize the memory that's been baked into a lot of these LLMs, making it such that the power user can really trust the application to do 99.9% of the work, or maybe even 100%. And they'll pride themselves on the number of tasks that get done without a human needing to approve them.
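Editor's note: the propose-then-approve loop Mark describes can be sketched in a few lines of code. This is a hypothetical illustration, not any real CRM product's API; the record fields, the 90-day staleness threshold, and all names here are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Lead:
    """A hypothetical CRM record: who the lead is and when we last spoke."""
    name: str
    last_contact: date


@dataclass
class Suggestion:
    """A proposed action the agent drafts for the human to accept or reject."""
    lead: Lead
    action: str


def propose_follow_ups(leads, today, stale_after_days=90):
    """Scan the pipeline proactively and propose re-engagement for stale leads.

    The agent does the scanning and drafting; nothing is sent yet.
    """
    return [
        Suggestion(lead=lead, action=f"Draft re-engagement email to {lead.name}")
        for lead in leads
        if (today - lead.last_contact).days > stale_after_days
    ]


def review(suggestions, approve):
    """Last-mile human-in-the-loop: only approved suggestions get executed."""
    return [s for s in suggestions if approve(s)]


# Example pipeline: one lead went cold, one was contacted recently.
leads = [
    Lead("Acme Corp", date(2025, 1, 10)),
    Lead("Globex", date(2025, 11, 1)),
]
proposed = propose_follow_ups(leads, today=date(2025, 11, 20))
approved = review(proposed, approve=lambda s: True)  # user clicks "accept"
```

Only the cold lead ("Acme Corp") generates a suggestion, and even then the action runs only after the `review` step, which is the "human part of the human in the loop" that the ordinary user keeps and the power user gradually delegates away.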
