A (8:54)
Hey, another essay on AI. This time it's from Nicholas Bustamante, who says that LLMs are dismantling the moats that made vertical SaaS defensible, and that the market sell-off in SaaS names is structurally justified but temporarily exaggerated. Summarizing from his quite lengthy tweet, his argument begins with a simple observation. Vertical software, software built specifically for one industry, has historically been one of the best businesses in technology: Bloomberg in finance, LexisNexis in legal, Epic in healthcare, Procore in construction, Veeva in life sciences. These companies charge astonishing amounts of money and enjoy retention rates hovering around 95%. Bloomberg terminals cost roughly $25,000 per seat per year. FactSet often exceeds $15,000 per user. Law firms pay thousands per month for research tools, and customers rarely leave for decades. This model worked because these companies built deep, defensible moats. What large language models are now doing, the author argues, is selectively detonating some of these moats while leaving others intact, and understanding which are which is the whole game here. One of the most underestimated sources of defensibility in vertical software, according to Bustamante, was the learned interface: Bloomberg's cryptic keyboard commands, legal research filters, proprietary navigation systems. These weren't intuitive tools. They were languages. Professionals invested years mastering them. That fluency became a switching cost. Saying "we're a Bloomberg shop" wasn't just about data quality. It meant the entire firm had internalized a workflow. Replacing the software meant retraining muscle memory developed over a decade. The interface wasn't cosmetic. It was actually the product. LLMs dissolve that advantage by collapsing every interface into natural language chat. Instead of navigating specialized menus, users simply ask for what they want. The model executes the workflow.
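That "chat replaces the interface" pattern can be sketched in a few lines. This is a minimal stand-in, assuming a tool-calling setup where a model maps a plain-language request onto a declared function; here a trivial keyword match fakes the model, and both tool names are invented for illustration:

```python
# Toy illustration of "the interface collapses into chat": a plain-language
# request is routed to a declared tool instead of the user memorizing
# proprietary commands. A real system would let an LLM choose the tool;
# a keyword match stands in for the model, and all tool names are hypothetical.

TOOLS = {
    "get_quote":   {"keywords": {"price", "quote"},
                    "fn": lambda q: f"quote for: {q}"},
    "find_filing": {"keywords": {"filing", "filings"},
                    "fn": lambda q: f"filings for: {q}"},
}

def dispatch(request: str) -> str:
    """Route a natural-language request to the first tool whose keywords match."""
    words = set(request.lower().split())
    for name, tool in TOOLS.items():
        if words & tool["keywords"]:
            return tool["fn"](request)
    return "no tool matched"

print(dispatch("show me the price of ACME"))
```

The point is structural: the user never learns `get_quote` or any command syntax at all; the routing layer, not the human, carries the interface literacy.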
The accumulated literacy in a proprietary interface becomes worthless. The cost centers that supported those interfaces, design teams, onboarding staff, customer success managers, disappear. If much of the premium pricing rested on interface mastery layered on top of licensed or semi-commoditized data, that pricing logic erodes quickly. The same pattern applies to custom workflows and business logic. Vertical software encoded how industries actually functioned: legal citation networks, financial modeling assumptions, compliance checks, approval chains. Historically, this logic was embedded in code written by engineers who also understood the domain, a rare and expensive combination. Building that infrastructure took years. LLMs fundamentally change that equation, because business logic no longer needs to be hard-coded; it can be written as plain-language instructions that models execute. A seasoned portfolio manager can encode a discounted cash flow methodology in a markdown document without touching Python. What once required multi-year engineering efforts can now be implemented in days. The logic becomes readable, auditable and customizable, and it improves automatically as the underlying model improves. The moat of accumulated workflow complexity shrinks dramatically. Another major pillar of vertical SaaS was making messy public data accessible. SEC filings, case law, patent databases: technically public but practically unusable without specialized parsing and search infrastructure. Companies built enormous scaffolding to structure and query this information. But LLMs now arrive pre-trained on these formats. They understand the structure of a 10-K, the difference between GAAP and non-GAAP metrics, how precedent works in legal reasoning. The model itself becomes the parser. The "we made it searchable" layer, which justified premium pricing, becomes a commodity capability embedded in the foundation model.
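The portfolio manager example is worth making concrete: the "methodology" a markdown document would describe in prose ("project five years of cash flows, discount at 10%, apply a 2% terminal growth rate") reduces to a few lines of arithmetic once translated. A minimal sketch, with all inputs hypothetical:

```python
# Minimal discounted cash flow (DCF) valuation sketch: present value of
# explicit cash flows plus a Gordon-growth terminal value. This is the
# kind of logic a plain-language methodology doc would specify; every
# number below is a hypothetical illustration.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """PV of projected cash flows plus a discounted terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value one year past the projection, Gordon growth model.
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(cash_flows)
    return pv + pv_terminal

# Hypothetical projection: $100M of cash flow growing 5% a year for five years.
flows = [100 * 1.05 ** t for t in range(5)]
print(round(dcf_value(flows, discount_rate=0.10, terminal_growth=0.02), 1))
```

The author's claim is that the model, not an engineering team, does this translation from prose methodology to executed logic, which is why the encoded-workflow moat thins out.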
The data still exists, but the excess premium collapses. Talent scarcity was another traditional barrier. Building vertical software required engineers who could bridge domain expertise and production code, an extremely limited pool of people. LLMs invert that scarcity. Engineering becomes accessible through APIs. Domain experts can translate their knowledge directly into software behavior. The scarce resource shifts from technical implementation to domain expertise, which is far more abundant. The barrier to entry drops sharply. Then there's bundling, the strategy of expanding into adjacent modules to increase switching costs. That, too, is weakening. Incumbents historically locked customers in by building ecosystems of complementary tools. But if an AI agent can dynamically orchestrate across multiple providers, the integration layer shifts from the vendor to the agent. Instead of buying the entire Bloomberg suite, an agent could query the cheapest or the best data source for each task. The economic logic of paying for a bundled ecosystem weakens when orchestration becomes trivial. Yet not all moats are collapsing; some grow stronger. Proprietary data that cannot be replicated or scraped, like real-time trading feeds, exclusive ratings and regulated credit assessments, becomes more valuable in an AI-driven world. If the data is truly scarce, LLMs amplify its importance as a necessary input. The critical distinction is whether the data can be licensed or synthesized elsewhere. If yes, the vendor risks becoming a commodity supplier to AI agents. If no, the moat might hold. Regulatory and compliance lock-in also remains powerful in healthcare, financial reporting and other heavily regulated sectors. Switching systems involves certification hurdles, audit trails and multi-year implementations. HIPAA and FDA requirements do not dissolve because a better model exists; in these environments, LLM adoption may even lag due to compliance risk, reinforcing incumbent positions.
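The unbundling mechanic, an agent querying the cheapest source per task instead of paying for one suite, is simple enough to sketch. All provider names and prices below are hypothetical, and the point is only the routing logic:

```python
# Toy agent-side router: rather than buying one bundled ecosystem, pick the
# cheapest provider that covers each requested capability. Provider names,
# capabilities and per-call costs are all hypothetical.

PROVIDERS = {
    "AlphaFeed": {"capabilities": {"quotes", "filings"}, "cost_per_call": 0.08},
    "BetaData":  {"capabilities": {"quotes"},            "cost_per_call": 0.02},
    "GammaDocs": {"capabilities": {"filings", "news"},   "cost_per_call": 0.05},
}

def route(capability):
    """Return the name of the cheapest provider offering the capability."""
    candidates = [(p["cost_per_call"], name)
                  for name, p in PROVIDERS.items()
                  if capability in p["capabilities"]]
    if not candidates:
        raise ValueError(f"no provider offers {capability!r}")
    return min(candidates)[1]   # tuples compare by cost first

print(route("quotes"))
print(route("filings"))
```

Once selection happens per call like this, the bundle's premium has to be justified task by task, which is exactly the pricing pressure the essay describes.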
Network effects persist as well. Platforms that function as communication layers, like Bloomberg's messaging system, derive value from participation, not interface design. LLMs do not dissolve those network dynamics. Similarly, software embedded directly in financial transactions, payment processors, settlement systems, loan origination infrastructure, remains durable. AI may improve interfaces but does not replace transaction rails. System-of-record status presents a more nuanced case. Being the canonical source of truth for critical business data creates enormous switching costs. LLMs do not immediately threaten this, but agents are quietly building cross-platform memory layers. By seeing email, documents, messaging and CRM data, agents accumulate a broader contextual record than any single system. Over time, this could erode traditional system-of-record advantages, though the shift will be gradual. The cumulative effect is a collapse in barriers to entry in the SaaS segments where the destroyed moats once dominated. Historically, building a competitor to Bloomberg or LexisNexis required hundreds of engineers, massive licensing deals and years of development. Now, small teams leveraging frontier models can replicate much of the functionality in months. Competition does not increase incrementally; it explodes. Instead of three incumbents, there may be hundreds of AI-native entrants offering comparable capability at lower cost. Revenue may not vanish overnight due to long enterprise contracts, but valuation multiples compress as markets anticipate erosion in pricing power. The deeper strategic threat comes from a pincer movement. From below, AI-native startups flood vertical niches. From above, horizontal giants like Microsoft embed AI into their ubiquitous platforms, extending into vertical workflows without traditional engineering investment. The stack required to build vertical depth (agent frameworks, pluggable data access, domain skills written in text) is simple.
Software becomes headless, with the agent owning the user relationship. The aggregator captures the margin; data suppliers compete on price. Ultimately, the reckoning is not about vertical SaaS dying wholesale. It's about distinguishing real moats from illusions. Interfaces, encoded workflows and search layers built atop public data are vulnerable. Proprietary data, regulatory lock-in, transaction embedding and network effects remain durable. LLMs do not destroy all defensibility. They expose which advantages were structural and which were artifacts of an era before intelligent agents.