Podcast Summary: Catalyst with Shayle Kann
Episode: "Inside a $300m Bet on AI for Materials Discovery"
Date: November 6, 2025
Host: Shayle Kann (Energy Impact Partners)
Guest: Dogus Cubuk (Co-founder of Periodic Labs, former Google DeepMind)
Overview of the Episode
This episode delves into the intersection of artificial intelligence (AI) and materials discovery, focusing on Periodic Labs, an ambitious startup with $300 million in seed funding. Shayle Kann invites guest Dogus Cubuk to discuss how advances in large language models (LLMs) and hardware automation might enable breakthrough discoveries in materials science, potentially even the holy grail: room-temperature superconductors. The conversation explores what has changed in AI over the past year, the challenges of scientific reasoning, the laboratory-in-the-loop approach, and the prospects and business models for AI-driven R&D.
Key Discussion Points & Insights
1. Catalyst for Starting Periodic Labs
[04:22 - 06:47]
- Advances in LLMs and Reasoning Abilities: Dogus notes that since their last conversation (a year prior), LLMs, especially those with advanced reasoning such as OpenAI's o1 model, have improved dramatically at complex mathematical and scientific reasoning. This was a crucial factor in his decision to launch Periodic Labs.
  "The LLMs have improved even further... The reasoning models have gotten good enough that they are able to sort of get around that [training data limitation] challenge via reasoning."
  — Dogus Cubuk [06:47]
- Out-of-Distribution Generalization: The AI breakthroughs offer hope of moving beyond simply interpolating known data toward reasoning more like scientists do, making educated "jumps" to new discoveries.
2. Hybrid Lab Approach: Digital + Physical
[09:19 - 11:33]
- Combining Physical and Digital Experiments: Periodic Labs aims to merge frontier AI research with an advanced physical laboratory, creating feedback loops in which LLMs propose experiments, run simulations, and digest the results to inform further research (see the sketch after this list).
- Automating the Workflow:
  "The LLM can propose, for example, synthesis recipes, or simulations to run... The LLMs are pretty good at tool use, it can actually do it itself..."
  — Dogus Cubuk [09:56]
- Bottlenecks in Automation: While high-throughput experimentation is becoming commoditized, fully automating characterization (assessing the results of experiments) remains a challenge, though one the team hopes to overcome soon.
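To make the loop concrete, here is a minimal Python sketch of a laboratory-in-the-loop cycle of the kind described above. It is an illustration only: the proposer is a toy stand-in for the LLM, the characterization step is a made-up scoring function, and none of the names reflect Periodic Labs' actual systems.

```python
import random

def propose_recipes(history, n=4):
    """Toy stand-in for the LLM proposer: suggest synthesis recipes,
    biased toward the best anneal temperature seen so far (a crude
    version of 'reasoning over past results')."""
    best = max(history, key=lambda h: h["tc_kelvin"], default=None)
    center = best["anneal_temp"] if best else 900.0
    return [
        {"anneal_temp": center + random.uniform(-100, 100),
         "dopant_frac": random.uniform(0.0, 0.3)}
        for _ in range(n)
    ]

def characterize(recipe):
    """Toy stand-in for synthesis + characterization: returns a
    'measured' critical temperature Tc in kelvin (not real physics)."""
    return (40.0
            - 0.05 * abs(recipe["anneal_temp"] - 1000.0)
            + 30.0 * recipe["dopant_frac"])

def lab_in_the_loop(rounds=5):
    """Propose -> experiment -> feed results back, repeated."""
    history = []
    for _ in range(rounds):
        for recipe in propose_recipes(history):
            tc = characterize(recipe)
            history.append({**recipe, "tc_kelvin": tc})  # results inform the next round
    return max(history, key=lambda h: h["tc_kelvin"])

print(lab_in_the_loop())
```

The key design point the episode emphasizes is the feedback edge: each round's measurements become context for the next round's proposals, rather than screening a fixed candidate list once.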
3. Targeting Superconductors and ‘Breakthroughs’
[12:08 - 16:37]
- Why Superconductors? The team prioritizes superconductors because a genuinely novel discovery would be immediately impactful for science and technology (e.g. quantum computing, fusion), even before real-world deployment.
  "If somebody discovers a room temperature superconductor today, even before it makes it into a product, it has huge impact. First of all, it changes how we think about the universe."
  — Dogus Cubuk [13:47]
- Path to Breakthroughs vs. Incremental Progress: Dogus cautions that LLMs (and humans!) are still limited by their training data, so true breakthroughs remain difficult; but the automation and scale at which Periodic Labs operates increase the odds of stumbling upon big discoveries.
  "I actually would guess that there's a law out there that we haven't discovered yet that says that you can't just look at your training set that's different than what you're trying to discover and just predict it."
  — Dogus Cubuk [12:58]
4. Reward Functions and AI-driven Discovery
[20:45 - 21:34]
- Reward Design for LLMs: Instead of chasing only room-temperature superconductivity, they optimize for a range of valuable traits (e.g. higher critical temperature, ductility, critical magnetic field). Because the reward comes from a physical measurement rather than a learned proxy, it is hard to game; a toy illustration follows below.
  "For real life experimental measurement of Tc, it's much harder to reward hack, which we love... it's a nicer, unhackable reward."
  — Dogus Cubuk [20:51]
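As a rough illustration of that multi-trait reward design (the property names and weights below are assumptions made for the example, not values from the episode), a scalar reward might combine several measured properties:

```python
def material_reward(measured, weights=None):
    """Combine measured material properties into one scalar reward.
    Because each value comes from a physical measurement rather than a
    learned model, the reward is hard to 'reward hack'."""
    # Illustrative weights; real trade-offs would be application-specific.
    weights = weights or {
        "tc_kelvin": 1.0,            # critical temperature (K)
        "critical_field_tesla": 0.5, # critical magnetic field (T)
        "ductility": 10.0,           # dimensionless 0-1 index
    }
    return sum(w * measured.get(name, 0.0) for name, w in weights.items())

# Hypothetical sample measurements:
sample = {"tc_kelvin": 92.0, "critical_field_tesla": 120.0, "ductility": 0.2}
print(material_reward(sample))  # 92*1.0 + 120*0.5 + 0.2*10.0 = 154.0
```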
5. The Human in the Loop & Hypothesis Generation
[22:18 - 23:25]
- Humans vs. AI: While automation is the goal, Dogus is pragmatic: human scientists remain crucial for hypothesis generation, an area where current AI still falls short.
  "There are things that ML/AI is better than humans, but one of those things is not hypothesis generation."
  — Dogus Cubuk [22:18]
6. Cost Structure & the $300M Seed Round
[24:07 - 24:55]
- Huge Compute & Hardware Costs: Training LLMs and running experiments both drive costs. GPUs for LLM training and simulation are, perhaps surprisingly, among the biggest expenses, in some scenarios exceeding the cost of the physical lab.
7. Generalist vs. Specialist Models
[25:20 - 27:51]
- Why Cross-Disciplinary AI Matters: Modern science is too complex for any one human to master all relevant fields, but a well-trained LLM could hold deep, cross-domain knowledge, potentially making the most exciting discoveries "at the fringes between disciplines."
  "A lot of the exciting discoveries happen to lie in between fields... we're excited about an LLM that can basically do that at a scale that humans couldn't yet."
  — Dogus Cubuk [27:51]
8. Synthetic vs. Physical Data
[27:51 - 30:10]
- Emerging Value of Lab-Generated Data: Cubuk reflects on how the unique experimental data generated in their lab could drive foundational scientific insights unavailable anywhere else.
  "There are certain experiments that told us so much about how we understand ... the universe."
  — Dogus Cubuk [28:22]
9. Business Models: Licensing, Productization, and R&D-as-a-Service
[30:10 - 34:02]
- R&D as a Scalable Service: In the short term, providing unique, science-aware LLMs to other R&D organizations could be lucrative. Longer term, if Periodic Labs can repeatedly pioneer new materials, it could redefine the industry (akin to Genentech in drug discovery).
  "We already see this big need and a big potential for impact by providing these LLMs to do physical R&D."
  — Dogus Cubuk [32:56]
- Analogy to Cloud Infrastructure: Shayle likens the offering to AWS for R&D: leveraging scale and unique infrastructure (AI plus lab) to serve many clients.
10. Looking Forward: LLMs and Future Breakthroughs
[34:46 - 36:48]
- Riding the Tide of Platform LLM Advancement: Ongoing improvements from the big LLM developers (such as OpenAI) benefit Periodic Labs, especially as LLMs get better at code, simulation, and scientific reasoning; a rising tide that could carry science to its next great leap.
  "We actually basically rise with the tide, right? Like, as LLMs get better, there's so many advantages of that to other applications."
  — Dogus Cubuk [35:27]
Notable Quotes & Memorable Moments
- On the Limitations of AI Reasoning in Science:
  "There's a difference between winning gold medals in Math Olympiads and scientific discovery... You can't really practice how to discover the next big theory."
  — Dogus Cubuk [00:11 & 07:49]
- On Choosing Superconductors as a Focus:
  "Superconductivity is a bit like that. To discover an exciting superconductor, we probably have to develop so many capabilities on the way there that's by themselves very useful."
  — Dogus Cubuk [15:48]
- On the Value of Interdisciplinary Science:
  "Science is kind of like a fractal in the way it's hierarchically organized and there's so much surface area that humans have exploited, of course, but then there's probably so much left to exploit, and we're excited about an LLM that can basically do that at a scale that humans couldn't yet."
  — Dogus Cubuk [27:51]
- On Company Culture:
  "We are hosting weekly seminars where the physicists will teach the computer scientists about the physics and the computer scientists will teach the physicists about LLMs. And of course there are a lot of people in between. It's actually again, a fractal."
  — Dogus Cubuk [34:14]
- On What Could Change in AI Next:
  "One of them could be things like hypothesis generation or more out-of-domain generalization... because that's what you're kind of trying to get at with your reward."
  — Dogus Cubuk [36:11]
Important Timestamps for Segments
- Start of Episode / Introduction: [00:07–02:29]
- Background: Why Periodic Labs, What's Changed: [04:22–06:47]
- LLMs + Physical Labs, Closing the AI/Experiment Loop: [09:19–11:33]
- Superconductivity as Breakthrough Target: [12:08–16:37]
- Discussion of Scientific Reasoning Limitations: [12:58–13:56]
- Reward Functions & Incremental vs. Breakthrough Progress: [20:45–21:34]
- Role of Humans vs. AI in Research: [22:18–23:25]
- Funding and Resource Allocation: [24:07–24:55]
- Generalist vs. Specialist Scientific AI: [25:20–27:51]
- Synthetic vs. Physical Data Value: [27:51–30:10]
- Business Models for AI-driven Science: [30:10–34:02]
- Looking Ahead for AI in Science: [34:46–36:48]
Summary Tone & Takeaways
The episode is both optimistic and sober, reflecting the cutting-edge yet challenging state of AI-driven scientific discovery. Dogus and Shayle are pragmatic about the barriers ahead, especially the gap between automated incremental progress and true paradigm-shifting breakthroughs, yet excited by the rapidly growing capabilities of modern AI when paired with new experimental approaches. The vision is huge: building the infrastructure and know-how for AI to make real-world, not just digital, scientific discoveries at a pace and scale previously unimaginable.
For anyone interested in the fusion of AI, scientific discovery, and climate technology, this episode offers both a visionary roadmap and a clear-eyed view of the challenges that lie ahead.
