Podcast Summary
Merryn Talks Money – "Why You Should Wait Out AI’s Super-Spending False Start"
Date: April 13, 2026
Host: Merryn Somerset Webb
Guest: Dr. Janousz Meretsky, AI partner at Aaron Innovation Capital
Episode Overview
This episode delves into the current state and future prospects of artificial intelligence (AI), focusing on whether the wave of massive investment in AI infrastructure is justified or fundamentally misguided. Dr. Janousz Meretsky, with deep experience in both academic and commercial AI research, challenges the established narrative around large language models (LLMs) and the so-called AI revolution, questioning whether the super-spending on data centers and compute power is built on false assumptions. The conversation covers the technical limits of current AI approaches, investment strategies, the nature of AI "hallucinations," and how real breakthroughs might emerge from fundamentally new approaches.
Key Discussion Points & Insights
1. Defining AI and Its Current Capabilities
- What does "AI" mean today?
- Dr. Meretsky explains modern AI as "a system which is approximating certain processes… They just approximate things, they don't solve intelligence." (03:08)
- Large language models (LLMs) like GPT-4 do not possess true intelligence, but provide an illusion through statistical approximation.
2. The Data Limitation and Model Collapse
- The data bottleneck:
- "We have run out of high-quality, diverse data three and a half years ago… The last potent LLM was GPT-4, data for it finished at the end of 2022." (04:48)
- New data generated online is increasingly tainted by other LLMs, leading to "model collapse," where AI models train on low-quality or AI-generated content, degrading their performance.
- "The new models now train on the entire internet, training on the output from other models… that, using a technical term, ends up leading to something called model collapse, where the models themselves are actually getting dumber." (06:36)
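The degradation loop Dr. Meretsky describes can be sketched as a toy simulation: each "generation" of a model is fitted only to samples drawn from the previous generation's model. The Gaussian setup, sample size, and generation count below are illustrative assumptions, not figures from the episode; the typical outcome is that the fitted distribution drifts away from the original data and loses diversity.

```python
import random
import statistics

# Toy sketch of "model collapse": generation 0 is the "real data"
# distribution; every later generation is fitted only to samples
# drawn from the previous generation's model, never to real data.
# All parameters here are illustrative assumptions.

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the real-data distribution
history = [sigma]
for _ in range(100):
    # Train the next model purely on the previous model's output.
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    history.append(sigma)

print(f"sigma after 100 generations: {sigma:.4f}")
```

Because each fit adds sampling noise and tends to underestimate spread on small samples, the spread typically shrinks over generations: a rough analogue of models "getting dumber" as AI-generated text crowds out human-written training data.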
3. Technical Limits of Current Generation AI
- Diminishing returns from scaling:
- Performance plateaued about three years ago; adding more data or compute isn't improving LLM benchmarks (07:26).
- Developments now aim at smaller, more resource-efficient models that can even run on laptops, but the underlying limitations persist.
- Why scaling isn't enough:
- "You really don't [need to pay for compute]. ...What’s going to happen a year, two, three years from now when the majority of laptops out there will be able to run general purpose language models? ...Why do you need all this expansion of the data centers?" (09:00)
4. AI Hallucinations and Lack of Continual Learning
- Why AI can’t be trusted for everything:
- LLMs are "stochastic"; every word produced carries a small probability of error, compounding over long outputs.
- "You have a little bit of error, you cannot eliminate that thing." (14:16)
- The dream of AI that continually improves by interacting with humans is not yet realized; current systems don’t genuinely learn from ongoing interactions.
- Permanent systemic shortcoming:
- "No, it’s impossible [to solve these problems with current neural network approaches]...You have to use a different technique." (14:16)
- Even AI leaders have "jumped ship" from LLMs: "The leading researchers in the field have already jumped ship... working on the next generation things." (16:12)
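The compounding-error point above can be made concrete with a back-of-envelope calculation. Assuming, purely for illustration, that each generated token independently carries a fixed error probability `p` (a simplification; the episode gives no such figure), the chance of a fully error-free output decays geometrically with length:

```python
# Sketch of how small per-token error rates compound over long outputs.
# The error rate p and the independence assumption are illustrative
# simplifications, not figures from the episode.

def prob_error_free(p: float, n_tokens: int) -> float:
    """Probability that an n_tokens-long output contains no errors,
    given an independent per-token error probability p."""
    return (1.0 - p) ** n_tokens

for n in (100, 1_000, 10_000):
    print(f"p=0.1%, {n:>6} tokens -> {prob_error_free(0.001, n):.2%} error-free")
```

Even a 0.1% per-token error rate leaves only about a one-in-three chance that a 1,000-token answer is entirely error-free, which is the intuition behind "you have a little bit of error, you cannot eliminate that thing."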
5. The Economics and Investment Landscape of AI
- Super-spending on data centers: A misallocation?
- Webb: "Is [this] a catastrophic misallocation of capital?"
- Meretsky: "Well, if that's how you look at it, yeah. ...You shouldn't allocate [capital] in companies which are spending on this compute. You should allocate ...in companies that are allowing those data centers to run efficiently." (18:15)
- Winners and losers:
- Avoid firms "borrowing a lot of money to expand those data centers." Consider "arbitrage—buy companies that have not wasted money on LLMs and short companies that have borrowed a lot of money to expand data centers." (19:35)
- Big Tech namedrops:
- Apple is implied as a likely winner for not investing heavily in training LLMs, prioritizing research and publication instead. (19:37)
6. Where Next? Alternative AI Approaches
- Emerging non-LLM models:
- Companies like Innate AI (Switzerland), Pathway AI (Bay Area), and Fractal Brain AI are exploring "frontier models" inspired by neuroscience, focusing on continual learning and dynamic network structures.
- "They create new connections all the time and on top of that they are continually learning and thousands of times more power efficient..." (21:57)
- AI research is "going back from the age of scaling to the age of research." (24:13)
- The virtues of the LLM "hype":
- "I like the current hype... because they allowed us to understand the size of the problem and approximate it. Now that we know that we can in principle approximate human language, let's just solve it, okay?" (24:28)
7. Human Input Remains Critical—For Now
- AI won’t replace most jobs—yet:
- Generative AI is effective for boilerplate work, not for detail or accuracy.
- "The killer use case for generative AI is producing output that you yourself can check for correctness." (27:32)
- Entry-level jobs, especially in software, are threatened, but senior roles are safe for now. The bigger worry is a growing skills gap as AI automation absorbs the entry-level roles where practitioners learn their craft.
- "It's not about replacing software engineers, it's about us not having a pipeline of software architects and senior software engineers." (28:53)
- Law and other professions:
- Legal professionals still necessary, especially for accuracy—AI will create draft documents, but humans must check for correctness. (30:43)
8. Risks of Misuse Over Malice
- Caution against overconfidence:
- "I do worry that people are going to be using existing systems in domains where they should not be used, for example, for identifying targets to bomb in Iran. ...I worry about misuse of existing AI tools." (32:38)
9. Energy Efficiency: Human Brain vs. AI Models
- AI's massive energy consumption:
- LLMs and data centers are energy hogs; next-generation systems are built to be far more energy efficient, only activating the necessary parameters.
- "The next generation systems... are three orders of magnitude better [in energy efficiency]." (34:28)
- The challenge: truly adaptive systems, once released, can become unpredictable.
- "We have systems which are adapting themselves...you cannot outsmart them permanently. They will learn from your mistakes... I personally do not know if it's a good time to release those systems to the general public." (36:04)
Memorable Quotes & Notable Moments
- “At the end of the day, the current generation of AI techniques... are just function approximators. They don't solve intelligence, they approximate. So that's why you may have an illusion that those systems are intelligent.” — Dr. Janousz Meretsky (03:08)
- “We have run out of [diverse] data three and a half years ago. ...We have trained the model that used all the publicly available data on the Internet. There is nothing more out there to use to train the model.” — Dr. Janousz Meretsky (04:48)
- “You really don’t [need to pay for compute]. ...Why do you need all this expansion of the data centers? You really don’t.” — Dr. Janousz Meretsky (09:00)
- “The market still believes we can solve hallucinations, but the leading researchers have jumped ship. ...It's unbelievable to me that we keep pouring money into bigger data centers, knowing we've used all the data already...” — Dr. Janousz Meretsky (16:12)
- “The killer use case for generative AI is producing output that you yourself can check for correctness. Unfortunately, people are using those LLMs to answer a question to which they themselves don’t know the answer to. This is a recipe for an absolute disaster.” — Dr. Janousz Meretsky (27:32)
- “It's not about replacing software engineers, it's about us not having a pipeline of software architects and senior software engineers.” — Dr. Janousz Meretsky (28:53)
- “I worry about inadvertent misuse of those tools without understanding what they are not good for.” — Dr. Janousz Meretsky (32:38)
Timestamps for Key Segments
- [02:06] – Introduction and episode theme
- [03:08] – What do we really mean by "AI"?
- [04:48] – The data bottleneck and supply ceiling
- [06:36] – "Model collapse" and dangers of AI-generated training data
- [09:00] – Questioning the compute/data center expansion
- [13:56] – The inevitability of AI hallucinations and error propagation
- [16:12] – Top researchers leaving LLMs for the next frontier in AI
- [18:15] – The folly of current Capex/investment in mega data centers
- [21:57] – Startups and alternative approaches (Innate AI, Pathway AI, Fractal Brain AI)
- [24:28] – Why the hype is productive for research
- [27:32] – AI’s current best uses and its limitations
- [28:53] – Implications for the job market, especially for new entrants
- [30:43] – Application (and limits) of AI in the legal profession
- [32:38] – Cautions about use and misuse of AI systems
- [34:28] – Energy efficiency: future models vs. today’s LLMs
- [37:16] – Dr. Meretsky's reading habits and learning Spanish
- [38:42] – Thanks and episode close
Conclusion
This episode offers a grounded, skeptical—but ultimately optimistic—perspective on AI’s real progress. Dr. Meretsky urges caution around the ongoing investment boom in AI infrastructure, identifying clear technical and systemic limits in current approaches, and encouraging listeners (and investors) to track the ongoing pivot in AI research toward fundamentally new models.
While generative AI and LLMs offer useful productivity tools, they are not the “intelligence revolution” so often hyped, and reliance on them for critical or highly accurate tasks remains fraught. The real breakthroughs—efficient, adaptive, continually learning AI—are still in the lab, and may bring an entirely different set of challenges and opportunities.
Recommended for: Investors, technologists, and anyone seeking to understand the difference between AI hype and reality—not just for profit, but for preparing for the real next act in artificial intelligence.
