Transcript
Harry Stebbings (0:01)
You're watching TVPN. Today's Wednesday, April 8, 2026. We are live from the TVPN Ultradome
Noah Hirschfeld (0:07)
Temple of technology, the fortress of finance,
Harry Stebbings (0:09)
the capital of capital. We got white suits on. You know what that means. The stock market is booming. The Dow Jones is up 2.68%, the S&P 500 is up 2.46%, the Nasdaq's up 2.9%. And there's a bunch of other stocks that are moving within that. Of course, this is on the back of the very good news that there has been a ceasefire, that the street might be opened. Of course, it's all back and forth. The front page of the Wall Street Journal is covering all of the geopolitical moves. But we're here to talk about tech and business, of course. And the big news today is that Meta Platforms has launched a new AI model. Alex Wang, the chief AI officer at Meta Platforms, announced a new large language model today, its first major new artificial intelligence model in more than a year. The rollout of the model, called Muse Spark, is a critical moment for Meta, which is up 7.5% already, and which has spent billions of dollars hiring AI talent in a bid to catch up to OpenAI, Anthropic, and Google DeepMind. Leading labs have been putting out models at an accelerating pace. In a departure from its previous models, which were open source, Muse Spark is a closed model that will power Meta's AI chatbot and AI features within it. John Luttig has a very interesting post about open source AI and sort of predicted this. I can pull that up at some point.
Noah Hirschfeld (1:36)
We can find it. He predicted that Meta would eventually bail.
Harry Stebbings (1:38)
Yeah, let me find it. "The future of foundation models is closed source." Let me see if we have this here. He said: given Meta is the primary deep-pocketed large open source model builder, open source AI has become synonymous with Meta AI. He wrote this maybe three or four years ago. So the operative question for open source AI is: what game is Meta playing? In a recent podcast, Zuckerberg explains Meta's open source strategy. One, he was burned by Apple's closedness for the past two decades and doesn't want to suffer the same fate with the next platform shift. It's a safer bet to commoditize your complements. Two, he likes building cool products, and cheap, performant AI enhances Facebook and Instagram. That's 100% true. We've seen this in the ads product and the growth there. Three, there's some call option value if AI assistants become the next platform, and that makes sense, as in Manus and the Meta AI app. And he bought hundreds of thousands of H100s to improve social feed algorithms across products, and this seems like a good way to use the extras. That all makes sense, and Llama has been great developer marketing for Facebook. But Zuck also suggests several times that there's some point at which open source AI no longer makes sense, either from a cost or a safety perspective. When asked whether Meta will open source the future $10 billion cost model, the answer was: as long as it's helping us. At some point, they'll shift their focus towards profit. And that's what John Luttig wrote. When did he write this? 2024? Man, time flies. Barely, just under two years ago. He says: unlike the other model providers, Meta is not in the business of selling model access via API. So while they'll open source as long as it's convenient for them, developers are on their own for model improvements thereafter. That begs the question: if Meta is only pursuing open source insofar as it benefits themselves, what is the tipping point at which Meta stops open sourcing their AI? Sooner than you think, he says.
Exponential data. Frontier models trained on the corpus of the Internet, but that data is a commodity. Model differentiation over the next decade will come from proprietary data, both via model usage and private sources. Exponential capex. He highlighted this two years ago. A lagging-edge model that requires just a few percent of Meta's $40 billion in capex is easy to open source. No one will ask questions. But when you reach $10 billion or more in capex spend for model training, shareholders will want clear ROI on that spend. The metaverse raised some question marks at a certain scale too. Diminishing returns on model quality: there's a large upfront benefit for Meta building an open source AI model, even if it's worse than the frontier closed source counterpart. There are lots of small AI workloads within Meta, think feed algorithms, recommendations, and image generation, where Meta doesn't want to rely on a third-party provider like they had to rely on Apple. And so to the news. Back in December, there was reporting that Alex Wang disclosed in an internal company Q&A that his team was working on two new models. One was this text-based LLM codenamed Avocado, and then a separate model for image and video.
