
A
Hey listeners, welcome back to No Priors. Today you've just got me and Elad again.
B
It's a favorite type of episode. Sarah Habibi, how you doing?
A
I'm great. I'm so excited. Everything is adorable: cartoons that are also slightly nostalgic and sensitive. Tell me about how you react to Studio Ghibli, and also just better image generation.
B
I mean, I'm a long-standing anime fan, so I think converting the world into everything anime or manga is a very positive step for humanity. So I view this as something I've been waiting for for a while. I feel like every year or two there's this moment in the image gen world where people have a "wow, that's amazing" moment again. I think maybe the GAN wave was the first. There was a GAN artwork in 2018 or 2019 that went to Sotheby's for auction, one of the first pieces of AI-generated art, back when people were doing these adversarial-network-based approaches to generating artwork with kind of kludgy tool chains. But even then people were like, whoa, look at what AI can do right now. And it was super bad in comparison to what you can do today. Then there was the Midjourney and early Stable Diffusion wave, where those models came out and people were like, oh my gosh, this thing is amazing. Everybody has seven fingers in the images, but oh my God, it's amazing, and look at all the things we can do with it, and it's going to transform society, et cetera, et cetera. I feel like we've periodically had these, and this is the latest version. Part of it is that we're just on this amazing curve of quality and fidelity in this artwork. Even back in the GAN world there was style transfer, "do this in the style of Van Gogh" and so on, but the degree to which it does it so well now, so cohesively, in so many styles, and with so much aesthetic beauty is really striking. I think we're just hitting another one of those moments where people are like, wow, this can really do it for forms of animation and other things. And all of this is obviously in the context of ChatGPT and OpenAI and the 4o models incorporating a lot of this stuff directly.
So I think it's fantastic. We're going to see another thing like this in another year, I think. And then there'll be the very commercial versions of this, which are already sort of happening: look, we can use it for graphic design completely seamlessly, versus it kind of works, and we can use it for all these different use cases. So I feel like we're doing the horizontal version of it, and soon we'll have the vertical versions all come out. Obviously there are companies like Recraft and others working on the vertical versions directly. I just view this as a super interesting evolution of the technology, so I think it's super exciting. What do you think?
A
I think it is funny how much the world reacts to it, at least in our little niche of the technology ecosystem, though anime and manga are pretty popular overall. They want more cute, they want more beauty. I think it's really exciting. One of the interesting things this exposes is that users, people overall, are not very good at projecting where we are in terms of quality and controllability and how much more room we have. Right? Going from eight bits of grayscale to images that might be perceived as photos of real people was a huge jump, to your point of people being shocked at some point, you know, two generations of image generation ago. And then I think one of the things that Midjourney did was really have an aesthetic point of view and take a bunch of user feedback into account in terms of what was preferred. I actually feel like a lot of people, end users, not researchers, thought of image generation as a little bit more of a solved problem. And I think this is just another data point of how much more we're going to get and of what people want, never mind in video and everything.
B
Yeah. Also text and logos. There's just a lot coming that people haven't done: sort of these truly integrative things where you can start truly clicking into the images and modifying pieces. There are apps doing that, and there are things like Krea that do these real-time modifications as you're working on things. But I do think there's so much room still. We're very early, but it's still so striking. So it's a very exciting area.
A
I think ease of controllability is also going to give people a lot more creative power. One of the things that HeyGen has demonstrated and is going to come out with in product very recently is the ability to use natural language to describe emotion and voice. Right? So you can, like, whisper ASMR, and just say, I want the whole video with this person in this way, with three words of text description. I think that kind of controllability is going to be really powerful.
B
You could incorporate it into augmented reality devices, and then I would just be walking through a manga world, that's all. I would just live in that.
A
Is that the ideal?
B
No. Maybe the manga part, but the rest, not so much.
A
Are you freaked out about the macro?
B
You mean the Nasdaq or what? The markets?
A
Yeah, the markets.
B
Tariffs, inflation. Which part of it?
A
You know, consumer confidence is at a multi-year low. The Nasdaq's down 8%. Tariffs on Chinese imports and on autos. I think there are investors and companies in the market talking about how stressed they are about that.
B
Yeah, I'm not very stressed about it. I feel like there's a degree of uncertainty in the world right now, for sure. But from the perspective of people building technology companies, barring something truly existential happening, it's kind of business as usual. I've been through a few of these cycles now, where markets are way up and everybody's freaking out in one direction, and markets are way down and people freak out in the other. The main place where it impacts the venture world or the startup world is if it soaks money out of the venture capital ecosystem, and therefore valuations come down, or there's less funding for the marginal startup, or things like that. But other than that, these sorts of cycles tend to really wash away, unless you're a super-late-stage company that's about to go public and there's some issue with your valuation in terms of expectations versus where you'd want to go out. For day-to-day technology startups, particularly ones that are not doing hardware (which would be impacted by the tariffs), people who are just writing software, it should really have minimal actual day-to-day impact. Especially if your startup's working: you'll be able to get customers to pay you, or find funding, or whatever it may be. I've been through a few of these, and every time it's been a bit of a shrug. I actually remember I went to the "R.I.P. Good Times" presentation that Sequoia did in 2008. Back then there was the Great Financial Crisis, and I was running a startup at the time, CEO of this small company, and Sequoia did this big all-hands where they pulled together all their founders and had people come in and tell war stories from when the dot-com bubble collapsed: how it's time to batten down the hatches and do layoffs, and the world will never be the same again, and everything's over. And they were doing this as a service to the startup community, right?
They were trying to help their founders figure this stuff out. And I remember talking to one of the Sequoia partners during it. I'm like, we're a six-person startup, who cares? And he's like, yeah, you're right, you shouldn't worry about this at all. And that's as all these financial institutions were collapsing around us. So this strikes me as very small in comparison to that. And I think back then it didn't have that much of a real impact on tech. Maybe Google did its first layoff ever, but other than that tech just kept humming along, and if anything the biggest tech companies in the world are now 20 times bigger than they were back then. So I think this is an even more minor blip that, from a long-term tech perspective, who really cares about? But again, that's barring some unexpected path splitting off of this. I don't know, what do you think?
A
It has almost no impact on me. Right? At the early end of the market especially, there's plenty of capital for the really high-quality opportunities. I keep discovering that the capital markets are much deeper than I thought (and we should talk about this). For very expensive plays, for example foundation models, I still expect capital availability and a lot of inflow there. I think it's probably a little different for investors who have more public-equities exposure. I bet pre-IPO crossover investors are getting more cautious. You have those much longer-term issues of liquidity having been starved for several years now. But I think, you know, the return of M&A, and several companies being ready to go public, will help that somewhat.
B
The place where the tariffs kind of matter, which I think is interesting, is very specific industries where to some extent it's useful for America or the West to protect themselves. I think automotive would be a good example: some of the Chinese car companies seem to be getting so good that if I were Europe, for example, given an industrial base that is so automotive-dependent, I would probably be pushing for tariffs on Chinese imports of cars, because the internal car industry may not be as competitive. So I do think there are some areas where tariffs may be useful, some areas where they're probably being used as a negotiation tool, and some areas where they may be net beneficial or net harmful in terms of actual costs passed on, and things like that. There may be a few areas where we should make sure we actually have some in place, some areas where they're going to be net negative or destructive, and some areas where they're just good for negotiating broader policy or relationships with certain external parties. People are kind of using a catch-all for all of them versus looking item by item.
A
Yeah, I agree with that. And I think the productive version of tariffs is as part of a broader industrial policy that is more supportive of the industries that we care about. And that's going to be a big investment. Right? If we want to make key components for defense or automotive in the United States, we are quite behind in many domains in terms of getting competitive from a skill and cost perspective. And some of those things are worth investing in, on both the positive side and the protection side.
B
Yeah. I guess you mentioned depth of funding for models as part of all this. What do you think is happening in the foundation model world?
A
You and I were just talking about these Artificial Analysis charts showing convergence, a kind of monotonically more competitive market for capabilities, and amazing improvement over the last 18 to 24 months. But you just had the most recent Gemini release from Google; they're clearly still in the game. I don't know who was doubting that, given they have infra, they have researchers (not just researchers, but very smart people at the helm) competing here as well. I think one of the more interesting things is that you have convergence not just on capability, but also in the product surface areas: most people have search, they have a research product, they have reasoning in the models. I think a lot of it is going to end up with consumer surplus and distribution being the question.
B
There's actually a really great website called ArtificialAnalysis.ai that shows different benchmarks they've run against these various models: for reasoning, for different aspects of how you test a model for knowledge, or for other forms of performance, like speed in tokens per unit time, et cetera. So I think that's really worth taking a look at. And you see that for certain areas there is really strong convergence, and there's almost a cluster of models that seem reasonably within ballpark of each other. Certain things spike dramatically in one form or another, around coding or reasoning or other areas, and then you have a longer tail of other models. So at least for the core language model world, which those benchmarks are for, there definitely seems to be some form of convergence happening. And then there are outliers: Grok, with xAI coming out of nowhere with a roughly SOTA model in about nine months, was super impressive, as are some of the things DeepSeek and others have been doing. They don't really have benchmarks for image gen, although those obviously exist on a variety of sites and other places. But then there's a whole other suite of models that are discussed a lot less. Part of that is just the economic value; part of it is what's in the market today. But that's things like physics, materials, robotics, certain types of science, maybe things that are more specialized in terms of post-training, like health-related data on top of some of these core models. So I do think there are a lot of other types of models that people spend a lot less time on, some of which are becoming quite interesting. Probably the place that gets the most attention outside of the core LLM world, the language models, is actually biology. I feel like there's a new biology model every week.
But there are all these other fields and disciplines where I actually think there are some very big opportunities. And the opportunities are obviously societal in terms of impact, but in some cases I actually think there are very big markets behind them. I think often the interest level of people working in the industry to build models is divorced from the economic value of those models. Sometimes that's rightfully so: there may be really interesting scientific applications that aren't very commercially applicable. And sometimes it's really misaligned, where you're like, why are all these things getting funded when there are these wide-open spaces for certain types of models that just nobody's working on? So at least I've been looking a lot at which of these alternative models are interesting from a market perspective but maybe getting a little bit ignored right now. And then I guess there's the other question, and I'd like to hear your thoughts on this: how many things get subsumed into these core LLMs versus remaining their own standalone thing? Do you think it's all one ring to rule them all, or do you think it's going to be a fragmented landscape, and where do you think that fragmentation happens?
A
It's somewhat too binary a distinction to say it's a model company versus not a model company. Actually, even many of the companies that you and I and the industry would consider model research companies are starting with some base of pre-training on existing knowledge and reasoning that is more and more readily available. In the case of robotics, you start with video pre-training. In the case of other domains, if you were going to start separately focusing on code (and we can talk about whether or not that's a good idea), you want both language and code in terms of being able to interact with the model. I 100% believe that there are big opportunities in some of these domains. But one of the biggest distinctions to me is: what does the data collection engine for this look like? If you are thinking about physics, chemistry, biology, robotics, and maybe even some more near-term commercial applications, the data you would want the model to learn from often doesn't exist yet. So an interesting theory for many of these companies is: our job is to go collect or generate it efficiently and use that to train the model. And in that case, on the question of whether it will be a single model to rule them all, is it reasonable to expect one of the existing large labs to go do that data generation? If you have to set up a physical lab with robotics to do experimentation on new chemicals, that feels more far afield than code-generation RL environments.
B
For example, anytime you go into the physical world, it's always harder to generate data. That's one of the reasons the language models, where you effectively just collect the wisdom of the Internet digitally, are the first places where we've really seen breakthroughs at this scale in recent times. And coding is a great example where not only is a lot of the data resident online or digitally, but you also have very clear utility functions, things you can test against in terms of code and its performance: is it doing what you think it's going to do? So those are always going to be the easiest areas. It's kind of funny, and this is an odd pet peeve of mine, but it always annoys me when people who did really well as founders in traditional software and tech start telling everybody else to go do the hard stuff in biology and materials and physics, like, oh, you need to go be hardcore. And you're like, well, you made all your money in fucking software, what are you talking about? I feel like there's been a long history of that. I remember interviews with Bill Gates from 20 years ago where he's like, if I were to start today, I'd go into biology. So I feel like sometimes there are the model versions of this.
A
You're so funny. I feel like you're the opposite. You're like, I actually have a PhD in biology.
B
That's why I know, that's why I know reality.
A
I think the other distinction I would draw is: is there some orthogonal, totally different technical thesis? Do I think there's a research advance that is just very different architecturally? I'll describe categories of companies that could be relevant here. We had Karan and Albert from Cartesia on the podcast; I think state space models are an interesting direction that are highly efficient for certain types of data that are compressible. There are several plays on formalism, translating problems into Lean and taking that as a path to increasing reasoning capability for math and code. And there are a number of companies trying to train models that are better at taking actions in software and on the web. This is clearly also right in the line of the large foundation model labs, but I think they're at least trying to work on a question that doesn't feel fully answered, in terms of consistent, generalizable RL environments for agents. So there are spaces where there is a theory of why the company should exist, if true, versus being straight in the line of the OpenAI, Anthropic, and xAI steamrollers, and of course the Google steamroller. What did I miss? What else do you draw as a distinction, or where do you think there is opportunity?
B
To your point on state space models, there may be advantages in terms of the speed and size of some of those models on a relative basis for very specialized tasks. Usually I think of it as a 2x2 matrix, where one axis is speed, performance, and cost (those are roughly the same thing for many of these models: inference time, effectively) and the other is reasoning fidelity, or whatever you want to call it. Depending on where you are in those quadrants, you get very different things. There's one quadrant that is slow and expensive and not very smart, and obviously nobody wants to use those models. There's the quadrant that's very slow and expensive but very smart and very capable, and that's where you're like, I'm going to upload a hundred documents for a Supreme Court brief, and it'll give me this amazing analysis I can use to argue a case, or whatever. High value, and it'll take a while to process and do it. Then there's the super-fast, super-performant quadrant, which tends to be these very specialized niche models for specific applications; I think some of the state space models, some of the SSMs, tend to work very well for those very specific application areas. And then there's the last quadrant. Based on which of those quadrants you're in, it really determines the type of things you can build. Some of the really fast, high-performance models tend to be more vertically focused, or focused on very specific types of tasks. And for the really slow, expensive ones that are actually very performant, you could imagine verticalized versions, but it seems like the backbone for a lot of those is actually these very generalizable models, where a big chunk of what you're getting is the reasoning and broader linguistic capabilities that you then apply to a domain.
And then of course there's the stuff people build on top of it, in terms of orchestration layers and specialized bespoke things that route requests to different models depending on your use case. It seems like everything that's quote-unquote agentic right now is basically doing that, across customer success and code; you go through every domain that has a specialized approach and they always have this sort of orchestration layer built on top. So I think it's super exciting to watch all this stuff. And I do think some of the applications, and some of the less purely linguistic domains, may be interesting in the short run.
A
I think, going back to the question of whether the macro is stressing you out, there's such a virtuous cycle in technology happening right now. That is actually quite dominated by the fact that M&A is alive again, and so we're going to have outcomes. But to your point, there's exploding surface area of stuff these models can attack. You have research progress, people making different technical bets; you mentioned DeepSeek. Model development, and the continued, more aggressive use of reasoning and test-time compute, is quite expensive, and training continues to get more expensive. So I think the fact that there are now people trying to solve data, scale, and latency problems will help everybody too.
B
Do you know if it's true that the DeepSeek researchers are not allowed to leave China?
A
I do not know if that is true. I think any country should want to hang on to its best talent, but perhaps not restrict people's movement. I think we should be trying to attract great talent here.
B
We should keep all the AI researchers in the Mission District and just not let them leave.
A
Somewhere between the Mission and Dogpatch. Yeah, like actually we could just draw a line between our offices.
B
They all have to go to Atlas Cafe every day.
A
Let's talk through the talent categories, actually, for anybody who is not thinking about their kids and 10 years from now, but just thinking about the next two or three years: what type of expertise is valued, and where you should stay, between my office and Elad's office in the Mission and Dogpatch. Okay. You have researchers. You have infrastructure, scaling, and efficiency; we welcome all of you. Hardware-software co-design, right, like designing the next-generation TPU or whatever.
B
There's a special visa for you to move into that region.
A
Yes, we're here to sponsor your visa program. Yes. If you are ready to design chips to better handle sparsity or massive MoE models or something, I've got a visa campaign for you. And kind of what you said, right: anybody who has deep domain and user understanding combined with the product engineering sense (it's not basic, but the product engineering sense) for this orchestration and applied ML area. Evals for agents, setting up RL environments. It's still a very nascent area: gather context, plan, make a bunch of model calls, parallelize, verify, retry, this orchestration. I already described all of that. We've got a visa program for you. We're thinking about naming it. We'll hire somebody to run it. It'll be great.
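The loop Sarah lists (gather context, plan, make a bunch of model calls, parallelize, verify, retry) can be sketched generically. Every function below is a hypothetical stub for illustration, not any real product's API; it only shows the shape of the orchestration.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stubs: in a real stack these wrap an LLM API, a retrieval
# layer, and an eval harness. The names here are assumptions, not an SDK.
def call_model(prompt):
    return f"answer({prompt})"          # stand-in for an LLM call

def gather_context(task):
    return [f"doc about {task}"]        # stand-in for retrieval

def plan(task, context):
    # Break the task into independent subtasks (trivially, here).
    return [f"{task}: part {i}" for i in range(3)]

def verify(result):
    # Stand-in for an eval / verifier step.
    return result.startswith("answer(")

def run_agent(task, max_retries=2):
    context = gather_context(task)
    subtasks = plan(task, context)
    # Parallelize the independent model calls.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(call_model, subtasks))
    # Verify each result and retry failures a bounded number of times.
    for _ in range(max_retries):
        failed = [i for i, r in enumerate(results) if not verify(r)]
        if not failed:
            break
        for i in failed:
            results[i] = call_model(subtasks[i])
    return results

out = run_agent("summarize Q3 incidents")
```

Real agent stacks differ mainly in how `plan` and `verify` are implemented; the fan-out, verify, retry skeleton stays the same.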
B
We'll call it Gil and Guo.
A
We're going to work on the marketing.
B
I feel like we're in the business-as-usual phase of AI. I think the stack is reasonably well defined, and obviously it'll change and there'll be new things in it. But I feel like, if anything, the last couple of months have been very clarifying in terms of the consolidation of the things that are short-term crucial. There's the model layer of that, and all the various accoutrements around agentic stuff and reasoning, et cetera, and obviously that will only accelerate and get dramatically better; it's on its own scaling curve. And then the infrastructure layer, I think, has solidified a bit. I remember when RAG was a big deal as a new thing. All these things I feel like are falling into place, evals and how you do them, and I think things are solidifying there with companies like Braintrust and others. And then on the application-layer side, I think I've bought into a notion we've been discussing for a year or two now, around AI really starting to impact different services-related industries and vertical applications and different use cases. And then I'm starting to finally see some inkling of consumer stuff again. I think it's nascent and early, but at least people are trying; I feel like there were two or three years where nobody was really trying to do anything consumer. Although one could argue that Perplexity and ChatGPT and Midjourney and all these sort of prosumer-y things were early consumer forays, right? And so maybe ChatGPT is the world's biggest new AI consumer product. I mean, Google was really the original one in some sense. It feels like a period of brief consolidation, and in a handful of verticals I think we're starting to see some of the winners emerge. So I think it's an interesting, clarifying time. And of course the thing I say about AI is that the more I learn, the less I know. It's the only industry where the more I learn about the market, the more confused I am.
I feel like there's this brief moment of clarity, and I'm guessing in a year all bets are off and all sorts of things will scramble again. But at least for now, it feels to me like a few things have fallen into place, at least temporarily.
A
This actually feels like a very comfortable time to invest for me, because to your point it feels more like, I don't know, maybe inning three instead of inning one, where there's a little bit of stability in the ecosystem. There's real goodness around standardization, some standardization of integration with different tools; MCP, I think, is going to accelerate a bunch of development for people. I'm meeting companies where they set up a data source that is useful to the enterprise in some way, that these models can interact with well, and they're like, oh, MCP server.
B
Do you want to quickly explain to people Model Context Protocol, MCP, what that is and how it works?
A
I'm going to fudge this, but I will try to describe it. This is an attempt by Anthropic; it came from Ben Mann's group in Labs. It's called Model Context Protocol, and it's an attempt to spec out a standard interface for connecting model capabilities to systems where you already have useful data. That could be documents, it could be logging, it could be business tools, it could be the IDE, whatever. Sam from OpenAI said they're going to support it as well. I think this is not a complete solution, but it has gotten a lot of popularity with developers over a very brief period of time. It's just how you expose your data to the model, and it's...
B
An open standard, so it's not proprietary; anybody can use it. And it's a two-way connection between data sources and AI-powered tools.
A
And big companies have done it, yeah. I think there's still a bunch of work for developers to do in terms of describing their tools and how to use them, very specifically and cleanly. But it does make things much easier, and I think it will accelerate agent development a lot. Going back to the idea of what this means for the ecosystem: you're accelerating the ways for models to interact with existing systems, we expect agents to get better, and you have a bunch of choices around model availability. As you said, there's a clear pathway to automating certain types of work through orchestration of these capabilities, and I think that's going to be super fertile. I do think it's very unclear what types of winning consumer experiences are possible here. I haven't yet seen consumer agents in the large model products that are really working and don't just look like search or research, but I expect to see them this year. I'm excited about it.
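The shape Sarah describes can be made concrete with a toy example. To be clear, this is not the real MCP SDK or wire protocol; the `ToyToolServer` class and the `lookup_order` tool are illustrative assumptions. It only mirrors MCP's basic idea: a server advertises tools with JSON schemas (`tools/list`) and executes them on request (`tools/call`) over a JSON-RPC-style interface.

```python
class ToyToolServer:
    """Hypothetical MCP-style server: advertises and executes tools."""

    def __init__(self):
        self._tools = {}

    def tool(self, name, description, schema):
        # Decorator that registers a function as a callable tool.
        def register(fn):
            self._tools[name] = {"description": description,
                                 "inputSchema": schema, "fn": fn}
            return fn
        return register

    def handle(self, request):
        # Dispatch a JSON-RPC-shaped request dict.
        if request["method"] == "tools/list":
            return [{"name": n, "description": t["description"],
                     "inputSchema": t["inputSchema"]}
                    for n, t in self._tools.items()]
        if request["method"] == "tools/call":
            params = request["params"]
            return self._tools[params["name"]]["fn"](**params["arguments"])
        raise ValueError(f"unknown method: {request['method']}")

server = ToyToolServer()

@server.tool("lookup_order", "Fetch an order record by id",
             {"type": "object",
              "properties": {"order_id": {"type": "string"}}})
def lookup_order(order_id):
    # Stand-in for the enterprise data source Sarah mentions.
    return {"order_id": order_id, "status": "shipped"}

# A model-side client first lists the available tools, then calls one:
tools = server.handle({"method": "tools/list"})
result = server.handle({"method": "tools/call",
                        "params": {"name": "lookup_order",
                                   "arguments": {"order_id": "A123"}}})
```

Real MCP servers are built with the official SDKs and speak JSON-RPC over stdio or HTTP, but the list-then-call flow is the same.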
B
Yeah, I think there's cool stuff coming.
A
When everything destabilizes, Elad and I will be back on No Priors. We'll talk to you all then.
B
It's going to get unstable again. But I think it's a moment of calm, and calm is all relative, right? There's enormous innovation, huge changes coming, big technology waves, new things every week. But at least there's a little bit more of a view of, okay, who are going to be some of the main players in some of these areas, and how do all these things fit together. So I think we should enjoy the calm while it lasts, for the next week or whatever it is, the next few hours before the next thing drops.
A
All right, signing off, y'all.
B
Good to see ya.
A
Find us on Twitter @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen; that way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
Co-hosts: Sarah Guo & Elad Gil
Date: April 3, 2025
In this episode, Sarah Guo and Elad Gil dissect current trends and inflection points in AI, focusing on three major topics: the evolution and impact of image generation technology, the state of public markets (including venture capital, tariffs, and macroeconomic effects), and the landscape of specialized vs. general AI models. They blend industry anecdotes, personal insights, and playful humor to offer perspective for founders, researchers, and investors tracking the pace of the AI revolution.
(00:09–04:42) Elad's Anime Optimism: Elad expresses delight at the mainstreaming of anime-style and nostalgic art, noting the regular “wow” moments in image generation, from GAN auctions at Sotheby's to the current era of Midjourney and Stable Diffusion.
(00:23, Elad) Trajectory of Progress: Both hosts describe how technologies like GANs, style transfer, and now text-to-image models have rapidly improved. Early issues (like the infamous "seven-fingered hands") are giving way to genuine utility in creative fields.
(01:39) User Expectations & Controllability: Sarah notes that end-users often perceive image generation as already “solved,” but each leap (especially in video) reveals more headroom for improvement. The conversation notes advances like HeyGen’s natural language controllability for video emotion and voice (04:05).
Next Horizons: Both hosts discuss a coming wave of more vertically-integrated tools (e.g., in graphic design, animation, logo generation), with companies like Recraft and Kriya building deeper creative workflows.
Market Stress & Startup Resilience (04:42–09:36): Sarah raises concerns about consumer confidence, tariffs, and a recent Nasdaq dip. Elad, however, remains unworried for early-stage tech companies.
Comparison to Previous Downturns (05:38): Elad recounts being a startup CEO during the 2008 Great Financial Crisis and attending Sequoia's infamous "RIP Good Times" meeting. For most software startups, he argues, even major shocks have limited operational impact unless the company is at the pre-IPO late stage.
Depth of Capital Markets (06:15): Sarah points out that ample funding remains for high-quality deals and especially foundation model startups, with only some caution among crossover (pre-IPO) investors.
Tariffs and Industrial Policy: Elad sees justified tariffs in sectors like automotive to protect Western industry, but cautions against blanket approaches. Sarah emphasizes the need for updated industrial policy to support strategic sectors such as defense and automotive component manufacturing.
Benchmark Convergence (09:36–19:52): Sarah and Elad discuss market "convergence," where many major foundation models achieve similar capabilities, measured on benchmarks like those tracked by ArtificialAnalysis.AI. The remaining differentiators are product surfaces, user experience, distribution, and specializations.
Specialized and Vertical Models (10:38): Beyond LLMs, the hosts highlight growing investment in biology, materials, physics, robotics, and health models. Sarah argues opportunities abound in domains where training data must be collected or generated, such as scientific research or real-world robotics.
“Single Model to Rule Them All?”: They debate whether all capabilities will be subsumed into massive LLMs with domain-specific adaptation, or whether standalone specialized models will dominate via efficiency or unique data. Both agree hybrid approaches are likely, depending on the relative cost, speed, and data generation challenges in each domain.
Practicality of Specialization: Elad wryly notes a pattern of successful software founders urging new entrepreneurs into highly ambitious technical areas (e.g. biology, materials), despite having made their own fortunes in “easier” software.
Model Orchestration & Agentic Systems (19:52–26:49): The hosts describe the growing prevalence of orchestration layers: tools that route requests between general and specialized models for efficiency. Current "agentic" stacks build on this, especially in customer support, coding, and other verticals.
Market Timing and Stability: Sarah observes that the AI ecosystem feels more stable now (more like “inning three”), with nascent standardization and a clearer sense of which layers matter.
Emerging Standards (MCP): Sarah introduces the Model Context Protocol (MCP), an open standard from Anthropic for seamless data-model integration. It's gaining traction for enterprise tools and may accelerate real-world agent deployment.
Stack Consolidation (22:22–26:55): Elad breaks down the AI value chain: the model, infrastructure, and application layers are clarifying, with heavyweights like OpenAI, Google, and Anthropic consolidating at the foundational level, while vertical solutions and new consumer applications are starting to emerge.
What's Next for Consumers? (22:23): Sarah notes that, beyond research-like search interfaces, the winning consumer agent experiences haven't yet landed, but she expects material progress this year.
Outlook (A Momentary Calm): Both agree this is a rare lull before another AI acceleration, with the "main players" clearer and the main technical and commercial challenges better defined, but warn that the clarity may dissolve quickly ("the calm before the next storm").
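The orchestration idea the hosts describe, a routing layer that sends each request to a cheap general model by default and escalates to a specialized model only when the request matches that model's domain, can be sketched as follows. This is an illustrative toy, not any product discussed in the episode: all model names are invented, and the keyword predicates stand in for a real classifier.

```python
# Hypothetical sketch of an orchestration/routing layer. Model names and
# costs are invented for illustration; a production router would use a
# trained classifier rather than keyword predicates.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Route:
    name: str                       # invented model identifier
    matches: Callable[[str], bool]  # predicate: does this route apply?
    cost_per_call: float            # illustrative relative cost


def route_request(prompt: str, specialized: List[Route], fallback: Route) -> Route:
    """Return the first specialized route whose predicate matches the
    prompt; otherwise fall back to the general-purpose model."""
    for route in specialized:
        if route.matches(prompt):
            return route
    return fallback


# Toy routing table: specialized models are pricier but better in-domain.
specialized_routes = [
    Route("bio-model", lambda p: "protein" in p.lower(), cost_per_call=5.0),
    Route("code-model", lambda p: "refactor" in p.lower(), cost_per_call=2.0),
]
general = Route("general-llm", lambda p: True, cost_per_call=1.0)

chosen = route_request("Refactor this function for clarity", specialized_routes, general)
print(chosen.name)  # -> code-model
```

The efficiency argument from the episode shows up in the cost field: most traffic lands on the cheap general model, and only in-domain requests pay for the specialized one.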
Elad on Market Cycles:
“For day-to-day technology startups, particularly ones that are not doing hardware... it should really be of minimal actual day-to-day impact.” (05:38)
Sarah on User Perceptions:
“A lot of people thought of image generation like end users, not researchers, as a little bit more of a solved problem. And I think just this is another data point of how much more we're going to get…” (03:14)
On the AI Researcher “Visa” for San Francisco:
“There's a special visa for you to move into that region... Yes, we're here to sponsor you, Visa program... we're thinking about naming it. We'll hire somebody to run it.” (21:33)
Elad on AI’s Perpetual Uncertainty:
“The more I learn, the less I know. It's the only industry where I feel like the more I learn about the market, the more confused I am.” (23:26)
Sarah on the Investment Climate:
“This actually feels like a very comfortable time to invest for me because it feels more like inning three instead of inning one where there's a little bit of stability in the ecosystem.” (24:13)
00:09–04:42 — Image Gen progress, nostalgia, controllability
04:43–09:36 — Macro: public markets, venture, tariffs, resilience
09:36–19:52 — Foundation models, vertical/specialized models, benchmarks, data
19:52–22:22 — Stack consolidation, orchestration, technical talent "zones"
22:22–26:55 — Standards (MCP), agentic progress, consumer outlook, calm before the storm

The episode is marked by good-humored banter, deep technical knowledge, and a mix of optimism and hard-won experience. Both hosts balance clear-eyed realism about market cycles and startup resilience with enthusiasm for genuine progress and playful speculation about the AI community's future.
This summary provides an in-depth but accessible look at the episode’s major topics, offering newcomers a roadmap to both the current state and coming waves in the AI landscape, as understood by two of the field’s sharpest observers.