Eye On A.I. – Episode #330: Sebastian Risi – Why AI Should Be Grown, Not Trained
April 6, 2026
Host: Craig S. Smith
Guest: Sebastian Risi
Episode Overview
This episode explores the concept of "growing" artificial intelligence, in contrast to the conventional idea of "training" via gradient descent. Host Craig S. Smith interviews Sebastian Risi, a prominent AI researcher, about the field of neuroevolution—a biologically inspired approach to creating adaptive, robust, and continually learning AI systems. The conversation covers how drawing from nature's methods—evolution, growth, plasticity, and self-organization—can overcome current limitations in AI and potentially lead to more flexible, resilient, and creative artificial agents.
Key Discussion Points & Insights
1. Gradient Descent vs. Neuroevolution
- Explanation of Gradient Descent vs. Evolutionary Approaches
- [01:30] Host uses a vivid analogy:
- Gradient Descent: Like a blindfolded person on a mountain, always stepping downhill; it will settle into a local minimum but may miss the very lowest valley (the global minimum).
- Neuroevolution: Like dropping explorers all over the mountain range with different strategies, keeping the best ones, and evolving variations, making it more likely to find the global minimum.
- “With neuroevolution, you improve by variation and selection… No one has to know which way is downhill to begin.” — Craig S. Smith [02:24]
- Non-differentiable Problem Spaces
- [04:16] Sebastian explains that evolutionary methods don't require smooth, differentiable landscapes like gradient descent does, making them suitable for more complex or discrete optimization problems.
- “Evolution doesn’t really care if anything about it is differentiable or not.” — Sebastian Risi [04:54]
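The contrast can be sketched in a few lines of Python: a bumpy, non-differentiable 1-D "mountain range" searched by a simple evolution strategy. The landscape and all parameters below are invented for illustration, not taken from the episode.

```python
# A toy, non-differentiable 1-D landscape (round() kills the gradient)
# searched by variation, selection, and fresh "explorers".
import math
import random

def landscape(x):
    # Many local valleys; the deepest plateau sits near x ~ 3.7.
    return round(math.sin(3 * x) * 2) + 0.1 * (x - 4) ** 2

def evolve(generations=300, pop_size=30, sigma=0.5, seed=0):
    rng = random.Random(seed)
    # Drop "explorers" all over the range.
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=landscape)
        elites = pop[: pop_size // 3]                        # selection
        mutants = [e + rng.gauss(0, sigma) for e in elites]  # variation
        fresh = [rng.uniform(-10, 10)                        # new explorers
                 for _ in range(pop_size - 2 * len(elites))]
        pop = elites + mutants + fresh
    return min(pop, key=landscape)

best = evolve()
print(best, landscape(best))
```

No one ever computes which way is downhill; the population simply keeps what works and varies it.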
2. What is Neuroevolution?
- Combines evolutionary algorithms with neural network design.
- Can optimize not just weight values but architecture, learning rules, hyperparameters, and more.
- Not restricted by the need for differentiable components.
- [03:28] "The nice thing is it doesn't only have to be the weights... it doesn't have to be differentiable. So it's quite versatile how you can apply it." — Sebastian Risi
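As a toy illustration of that versatility, the sketch below evolves every weight of a tiny fixed 2-2-1 network to fit XOR with no gradients anywhere. The architecture, constants, and seeds are invented; real neuroevolution systems also evolve topologies and learning rules.

```python
# Toy neuroevolution: all nine weights of a 2-2-1 tanh network are
# evolved by random-restart hill climbing on raw fitness alone.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w packs the whole network: two hidden units, one output unit.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def hillclimb(seed, steps=3000, sigma=0.3):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    for _ in range(steps):
        child = [g + rng.gauss(0, sigma) for g in best]
        if error(child) <= error(best):   # selection on fitness alone
            best = child
    return best

best = min((hillclimb(s) for s in range(5)), key=error)
print(error(best))
```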
3. Plasticity and Continual Learning
- [09:13] Hebbian Learning and Adaptivity:
- Networks with evolving (plastic) weights can adapt to changes—even drastic ones never seen during training (like losing a "leg" in a robot).
- “These Hebbian networks... you can cut off a leg and oftentimes it will still be able to function, even though it has never seen this kind of variation during training.” — Sebastian Risi [00:10], [10:39]
- [12:06] Neuromodulation:
- Inspired by biology, neuromodulatory neurons act as on/off switches for local learning, reducing catastrophic forgetting.
- "One functionality is that it tells parts when should they switch learning on and when should they switch it off." — Sebastian Risi [12:17]
4. Growing Networks and Developmental Programs
- [13:40] Neurogenesis & Morphogenesis:
- Networks can “grow” by adding new nodes and connections—reflecting brain development in biology.
- “We call this a neural developmental program... that can then decide when should another node be created.” — Sebastian Risi [13:40]
- [15:44] Adaptive Capacity & Fitness:
- Growth is moderated by selective pressures (comparable to energy constraints in biology), integrated via multi-objective optimization (performance + resource efficiency).
- Scaling up is possible, but balancing growth and plasticity is still a challenge.
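A scalarized stand-in for that multi-objective tradeoff makes the selective pressure concrete: performance minus an invented per-node "energy cost" (all numbers below are illustrative).

```python
# Performance minus a resource penalty: unchecked growth is selected against.
def score(performance, num_nodes, energy_cost=0.01):
    # Bigger networks may perform better, but every node costs energy.
    return performance - energy_cost * num_nodes

small = score(performance=0.90, num_nodes=10)   # 0.90 - 0.10
large = score(performance=0.91, num_nodes=200)  # 0.91 - 2.00
print(small > large)  # prints True: the marginal gain does not pay for the nodes
```

Real systems keep the objectives separate (Pareto fronts) rather than collapsing them into one score, but the pressure is the same.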
5. Continual Learning and Model Merging
- [27:54] Incremental Learning without Overwriting:
- The goal is networks that, instead of overwriting old knowledge, grow new nodes to store new learning acquired in operation.
- “That will be the ultimate goal…” — Sebastian Risi [27:55]
- [28:20] Evolutionary Model Merging (Sakana):
- Merging already-trained models (e.g., one good at Japanese, another at math) to combine capabilities using evolutionary search.
- “You let evolution figure out how to combine them together and have a model that's good at Japanese and math.” — Sebastian Risi [28:50]
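A toy sketch of the idea, with tiny weight vectors standing in for pretrained models and an invented fitness function: evolution searches for per-layer mixing coefficients that keep both skills at once.

```python
# Toy evolutionary model merging via per-layer interpolation coefficients.
import random

model_a = [1.0, 0.0, 1.0, 0.0]  # pretend: carries skill A in layers 0 and 2
model_b = [0.0, 1.0, 0.0, 1.0]  # pretend: carries skill B in layers 1 and 3

def merge(alphas):
    return [a * wa + (1 - a) * wb
            for a, wa, wb in zip(alphas, model_a, model_b)]

def fitness(alphas):
    m = merge(alphas)
    # Toy objective: the merged model must keep BOTH skills at once.
    return min(m[0] + m[2], m[1] + m[3])

rng = random.Random(0)
best = [rng.random() for _ in range(4)]
for _ in range(2000):
    child = [min(1.0, max(0.0, a + rng.gauss(0, 0.1))) for a in best]
    if fitness(child) >= fitness(best):   # keep the fitter mix
        best = child
print([round(a, 2) for a in best], round(fitness(best), 2))
```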
6. Artificial Life and Resilience
- [30:52] Artificial Life Research:
- Exploring “life as it could be,” including growth, self-organization, and self-replication (e.g., neural cellular automata, virtual salamanders).
- “The nice thing is you can train those with supervised learning... If you don't know [the target]... you can use evolution.” — Sebastian Risi [32:04]
- Robustness and Adaptation:
- Damaged agents can recover lost capabilities (e.g., a robot regrowing after being cut).
- Biological inspirations can increase the resilience of AI systems compared to brittle deep learning models.
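In the same spirit, even a hand-written local rule can regrow a damaged pattern; real neural cellular automata learn their update rules instead of hard-coding them.

```python
# Self-repair from purely local updates: no cell sees the whole body.
def step(cells):
    # Local rule: a cell turns on if it or a neighbor is on.
    n = len(cells)
    return [1 if cells[i] or cells[max(i - 1, 0)] or cells[min(i + 1, n - 1)]
            else 0 for i in range(n)]

body = [1] * 8             # a fully "grown" 1-D organism
body[2:6] = [0, 0, 0, 0]   # damage: cut out the middle
for _ in range(4):         # repeated local updates regrow it
    body = step(body)
print(body)  # -> [1, 1, 1, 1, 1, 1, 1, 1]
```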
7. Applications to Science and Creative Search
- [35:07] AI Scientists and Sakana’s “Shinka Evolve”:
- Combining LLMs (as “mutation operators”) with evolution to generate, mutate, and evaluate hypotheses or designs.
- “The only thing you need is to be able to somehow score it based on some fitness function.” — Sebastian Risi [36:30]
- Demonstrated by generating academic papers, deriving code solutions, and even proposing new scientific ideas.
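The loop can be sketched in miniature. `llm_mutate` is a hypothetical stand-in: the real systems ask a language model for a variation of the candidate, while a numeric tweak keeps this sketch runnable; the target and fitness are invented.

```python
# Evolution with a (stubbed) LLM as the mutation operator.
import random

def llm_mutate(candidate, rng):
    # Stand-in for "ask the LLM for a variation of this candidate".
    return candidate + rng.gauss(0, 1.0)

def fitness(candidate):
    # "The only thing you need is to be able to somehow score it."
    return -abs(candidate - 7.0)   # invented scoring function

rng = random.Random(0)
population = [rng.uniform(-10, 10) for _ in range(8)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]                             # selection
    population = survivors + [llm_mutate(s, rng) for s in survivors]
best = max(population, key=fitness)
print(round(best, 2))
```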
8. Limits of LLM Creativity and the Human Knowledge Boundary
- [42:55] Are LLMs Fundamentally Constrained?
- Risi acknowledges that language models are still generally limited by their training data, but can sometimes recombine ideas to reach new territory.
- “For me, it’s less clear how far is that outside of what it has seen.” — Sebastian Risi [44:00]
- Need for Real-World Experimentation:
- True scientific creativity may require AI to not only generate hypotheses but also run (automated) experiments and act in the world.
9. Open-Endedness and Environment Co-Evolution
- [47:58] Evolving Agents and Environments (“POET,” “Omni”):
- Gradually increasing task complexity (via environment evolution) allows agents to develop stepping stones and solve harder challenges.
- “You need to go through these stages for it to discover these kind of stepping stones in the behavior to be able to do the final thing.” — Sebastian Risi [49:36]
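A toy model of that co-evolutionary escalation: the environment hardens only once the current agent masters it, producing a ladder of stepping stones. The success formula and growth factors below are invented.

```python
# Agent and environment take turns evolving; neither runs away alone.
def co_evolve(steps=50, solve_threshold=0.8):
    skill, difficulty = 0.1, 0.1
    for _ in range(steps):
        success = skill / (skill + difficulty)
        if success > solve_threshold:
            difficulty *= 1.5   # environment evolves a harder task
        else:
            skill *= 1.2        # agent evolves toward the current task
    return skill, difficulty

skill, difficulty = co_evolve()
print(round(skill, 1), round(difficulty, 1))  # both escalated together
```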
10. World Models and Continuous Thought Machines
- [51:48] World Modeling:
- Internal world models let agents “dream” or simulate interactions, supporting safe, imaginative, or counterfactual reasoning.
- [54:05] Renaissance for Evolutionary AI:
- LLMs and evolutionary methods pair well: LLMs can propose rich, varied representations; evolution can select and combine at a scale and flexibility not possible before.
- [56:23] Continuous Thought Machine (Sakana):
- An architecture where “neurons” are more complex, and computation time is allocated based on “internal confidence”—blending dynamic, biologically inspired processing.
- “The network itself can decide to think about a problem for longer time periods… not just input-output.” — Sebastian Risi [56:44]
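The confidence-gated compute idea can be sketched as a loop that keeps refining until an internal confidence estimate crosses a threshold. All formulas here are illustrative, not Sakana's actual architecture.

```python
# Variable "thinking time": iterate until internally confident.
def think(x, confidence_threshold=0.95, max_steps=50):
    estimate, confidence, steps = 0.0, 0.0, 0
    while confidence < confidence_threshold and steps < max_steps:
        estimate += 0.5 * (x - estimate)          # refine the estimate
        confidence = 1 / (1 + abs(x - estimate))  # invented confidence signal
        steps += 1
    return estimate, steps

easy_estimate, easy_steps = think(1.0)
hard_estimate, hard_steps = think(100.0)
print(easy_steps, hard_steps)  # the harder input gets more thinking steps
```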
Notable Quotes
- On Resilience: “Biological systems are incredibly resilient… deep learning, often you find these weird examples and it completely fails. So I think there’s a lot of promise in using these systems that can self-organize… they have inbuilt resilience that I think we could exploit…” — Sebastian Risi [33:20]
- On Model Merging: “You can take a model that is good at Japanese, you take a model that’s good at math, and you let evolution figure out how to combine them together.” — Sebastian Risi [28:50]
- On Collaborative Intelligence: “How do we best kind of combine it with what humans are good at and what machines are good at? … How can we combine the best of both worlds?” — Sebastian Risi [41:40]
- On the Limits of LLMs: “Its creativity is constrained by the training data… Is it really going to come up with ideas that aren’t in some way embedded in the training data?” — Craig S. Smith [42:55]
Suggested Timestamps for Deep Dives
- Gradient Descent vs. Neuroevolution Explanation: [01:30]
- What Is Neuroevolution?: [03:28]
- Plasticity, Hebbian Learning, and Lifelong Adaptation: [09:13]
- Neuromodulation and Preventing Catastrophic Forgetting: [12:06]
- Growing Network Architectures (Neurogenesis): [13:40]
- Model Merging and Evolutionary Search: [28:20]
- Artificial Life and Robustness: [30:52]
- AI Scientists, Creativity, and Scientific Discovery: [35:07], [42:55]
- Continuous Thought Machine and Future AI Architectures: [56:23]
Tone and Style
The tone of the episode is intellectually adventurous and technically in-depth but remains accessible and engaging. Both host and guest are enthusiastic about the future of AI that draws more deeply from the lessons of biology—evolution, growth, continual adaptation, resilience, and creativity.
Memorable Moments
- Host’s mountain analogy for optimization methods ([01:30]).
- Detailed discussion of robotic quadrupeds adapting to amputated limbs ([10:10]).
- The concept of artificial salamanders in Minecraft regrowing after being cut ([32:00]).
- Candid reflections on the limitations of current LLM-based creativity ([42:55]).
- The emerging vision that “growing” networks—combined with evolutionary and language model techniques—could define the next major phase in AI.
This episode is a comprehensive dive into why tomorrow’s AI may look less like a rigidly trained machine and more like a living, evolving, and endlessly adapting organism.
