Intelligent Machines (Audio) — Episode 841: "Dust and Deli Meat – Open Source AI Revolution"
Date: October 16, 2025
Host: Leo Laporte
Panelists: Paris Martineau (Consumer Reports), Jeff Jarvis (Craig Newmark Graduate School of Journalism)
Guest: Jeffrey Cannell (CEO, Nous Research)
Overview
This episode of Intelligent Machines explores the philosophy, technology, and social impact of open source AI, featuring an in-depth conversation with Jeffrey Cannell, CEO of Nous Research. The discussion delves into the ethical and technical aspirations of developing truly democratized, unfettered artificial intelligence—contrasting this vision with the prevailing landscape of corporate-controlled AI. Spanning practical breakthroughs, the challenges of scaling outside big tech, foundational questions of guardrails and alignment, and philosophical reflections on the dangers and hopefulness of the AI future, the episode offers an insider’s peek into the grassroots revolution attempting to shape "the most exciting revolution humanity has ever seen."
Key Topics and Insights
1. Introducing Nous Research: An Open Source AI Frontier
- Origins: Began as a grassroots Discord channel with 15,000+ volunteers aiming to replicate (then outpace) capabilities of ChatGPT and big company models.
- Mission: Building AI that is not beholden to corporate or political values, but is “neutrally aligned”—designed to take direction from the end user rather than enforcing a preset agenda.
“Neutrally aligned really just means aligned to you. ... It will take your direction as what you want it to be.” —Jeffrey Cannell [04:16]
2. AI Guardrails, Ethics, and Free Speech (06:08)
- The Printing Press Parallel: Restricting access to information creates power asymmetries and stifles innovation.
- Guardrails Illusion: Questioning whether “guardrails” truly prevent misuse or just create a false sense of security.
“We can't control every use, so we should do this work in public, out in the open, not behind closed doors.” —Jeffrey Cannell [06:42]
- Open Source/Transparency: Full publication of model weights, datasets, training procedures, and academic research. Crucially, not just open weights, but open everything.
“The best way to end up with a secure operating system is by having the source code completely available and iterated in the open.” —Cannell [07:29]
3. Democratizing AI Model Training and Compute Power (08:42)
- Technical Goal: While expertise (and funding) remains necessary, publishing results for universal access is key.
- Bottleneck—Compute: Access to GPUs is the limiting factor, not just money—hyperscalers maintain low utilization.
- Innovation: Building "PSYCHE," an infrastructure that lets unused GPUs around the world collaboratively train AI models, compensating contributors permissionlessly (often via Solana blockchain).
“What we've created is infrastructure where if your GPUs are idle, you can join in collaborative training ... It gives us access to the compute scale we need to play in the big leagues.” —Cannell [10:56]
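The pooling idea behind a PSYCHE-style run can be sketched in a few lines: volunteer workers each compute a gradient on their local data shard, and a coordinator averages those gradients into one shared update (plain data parallelism). This is an illustrative toy, not the real PSYCHE API; the `Worker` and `Coordinator` names, the toy objective, and the learning rate are all invented for the example.

```python
# Toy sketch of idle GPUs "joining" a collaborative training run:
# each worker computes a gradient on its local shard, the coordinator
# averages them (data parallelism). Names are illustrative, not PSYCHE's API.
from dataclasses import dataclass, field

@dataclass
class Worker:
    shard: list[float]  # this volunteer's slice of the training data

    def local_gradient(self, weight: float) -> float:
        # Toy objective: mean squared error against the shard values,
        # so the gradient pulls `weight` toward the shard mean.
        return sum(2 * (weight - x) for x in self.shard) / len(self.shard)

@dataclass
class Coordinator:
    workers: list[Worker] = field(default_factory=list)

    def join(self, worker: Worker) -> None:
        # A contributor with idle compute joins the run.
        self.workers.append(worker)

    def step(self, weight: float, lr: float = 0.1) -> float:
        # Average gradients across all contributors, then apply one update.
        grads = [w.local_gradient(weight) for w in self.workers]
        return weight - lr * sum(grads) / len(grads)

coord = Coordinator()
coord.join(Worker([1.0, 2.0, 3.0]))
coord.join(Worker([3.0, 4.0, 5.0]))

weight = 0.0
for _ in range(100):
    weight = coord.step(weight)
print(round(weight, 3))  # converges toward the global mean, 3.0
```

The real system layers fault tolerance, verification, and blockchain-based payment on top of this core loop; the sketch only shows why pooled idle hardware can stand in for a single large cluster.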
4. Model Alignment and “Chameleon AI” (11:32)
- User-Aligned AI: Hermes models are designed to adapt via system prompts to the user’s context, ideology, or needs.
- Roleplay Training: Extensive training means Hermes can convincingly adopt various perspectives or literary personas.
“It's not about being left or right. It's about being able to act as if you put yourself in those shoes, and now go along with that.” —Cannell [12:16]
- Philosophy: AI should “lift you up”—empowering users—not simply maximize attention (“eyeball extortion”).
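"Aligned to you" is mechanically mundane: the user's system prompt sets the persona the model adopts. Hermes-family models read a ChatML-style transcript in which the `system` turn carries that direction; the helper below just renders the format. (This is the commonly published ChatML layout; exact special tokens may vary by model release.)

```python
# Minimal sketch of how a system prompt steers a user-aligned model:
# the `system` turn sets the persona, and the transcript is left open
# at an assistant turn for the model to complete.
def to_chatml(system: str, user: str) -> str:
    turns = [("system", system), ("user", user)]
    rendered = "".join(
        f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in turns
    )
    # Leave the assistant turn open so the model writes the reply.
    return rendered + "<|im_start|>assistant\n"

prompt = to_chatml(
    system="You are a 19th-century typesetter explaining the Linotype machine.",
    user="Why did the Linotype matter for mass media?",
)
print(prompt.splitlines()[0])  # → <|im_start|>system
```

Swapping only the `system` string is what lets the same weights act as the "chameleon" the episode describes.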
5. Open Source Commons vs. Big Tech Oligarchy (13:31)
- Challenges: Headwinds include financial muscle of big tech, talent competition, and systemic resource imbalance.
- Nirvana Vision: A “free and open” AI ecosystem that is both philosophically sound and technically superior—a landscape where openness is the natural “downstream” for talent and innovation.
“We have to set up the landscape where the natural course of action is for AI to be open and free, not closed.” —Cannell [15:05]
6. Diversity, Discord, and Global Collaboration (16:38)
- Growth Path: From a “bunch of homies on Discord” to a core full-time team of 30, powered by thousands of global volunteers.
- Early Breakthrough: Introduced key innovations (notably long-context reasoning—extending context windows from 4k to 100k tokens) that were quickly adopted by the industry.
7. Data and Training: Synthetic Data, System Prompts, and Avoiding Collapse (21:16)
- Synthetic Data: Early use of AI-generated training data to bootstrap models, with careful human curation.
- “Model collapse” concern (AI trained only on AI output) proved manageable with robust pipelines.
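The generate-then-curate loop described here can be sketched as: a model drafts candidate examples in varied styles, then curation filters (deduplication, length, simple quality heuristics) decide what enters the training set. `draft_with_model` is a stand-in for an LLM call, not a real API; the thresholds are arbitrary.

```python
# Hedged sketch of a synthetic-data pipeline: draft candidates with a model,
# then curate aggressively. `draft_with_model` is a placeholder, not a real API.
def draft_with_model(seed: str, style: str) -> str:
    # Stand-in for an LLM call that rewrites a seed prompt in a new style
    # (e.g., "write it as a rap song") to diversify the data.
    return f"[{style}] {seed}"

def curate(candidates: list[str], min_len: int = 20) -> list[str]:
    seen, kept = set(), []
    for text in candidates:
        normalized = text.strip().lower()
        if len(normalized) < min_len:   # drop degenerate/too-short outputs
            continue
        if normalized in seen:          # exact dedupe to limit feedback loops
            continue
        seen.add(normalized)
        kept.append(text.strip())
    return kept

seeds = ["Explain context windows", "Explain context windows", "Hi"]
styles = ["rap song", "bedtime story"]
candidates = [draft_with_model(s, st) for s in seeds for st in styles]
dataset = curate(candidates)
print(len(candidates), len(dataset))  # → 6 2
```

Heavy human and heuristic curation of this kind is what the panel credits for keeping "model collapse" at bay despite training on AI-generated text.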
8. Resource Constraints and Innovation (24:16)
- Chinese Open Source Success: Resource limitations (e.g., fewer H100s) force creative solutions—mirrored at Nous.
- Reluctance to Depend on Meta: Although LLaMA was foundational to early Hermes versions, Nous’s mid-to-long term vision is self-reliance.
9. Crypto for Governance and Payments (28:11)
- Why Solana/Crypto? Needed decentralized, borderless consensus and payment for compute contributors—crypto fit the bill.
10. AI Research and Funding (31:13)
- Need for New Funding Models: Beyond academia (grant chasing) and corporate labs (closed research), we need alternative, less-encumbered paths for novel, frontier AI work.
- Philanthropy’s Role: Noted new $500M philanthropic AI fund (MacArthur, Mozilla, Ford, et al., see also [141:09]) aiming to support people-centered AI.
11. Long-Horizon / "Agentic" AI (32:54)
- New frontiers: Multi-day planning and task execution by AI agents (“long-horizon planning”) is the next big research challenge.
12. Personal Philosophy, Faith, and Humanistic AI (34:05)
- Faith & AI: Cannell’s Catholic faith frames creation as a way to explore the world’s complexity and beauty—AI as extending mastery for human flourishing, not as an "alien god."
- Not a Doomer: Sees the main risk of AI not as “paperclipping the universe,” but deepening social echo chambers and human isolation.
“The danger is not from the AI, it’s from ourselves ... my doom is just that it is a tool that keeps us from understanding each other.” —Cannell [36:41]
13. How to Support Open AI: Get Involved
- Join the Project: Get on Discord, contribute compute, or engage in technical and philosophical conversation.
“Go to our discord … that’s the beginning and the end. That’s where we do everything.” —Cannell [38:03]
Notable Quotes & Moments
- On open source AI’s radical transparency: “Even the academia side—when you make a breakthrough, you publish the method, not just the result.” —Leo Laporte [08:13]
- On necessity and invention: “Necessity is truly the mother of invention ... If you have unlimited resources, you do things the lazy way.” —Cannell [24:41]
- On the real dangers of AI: “I think the danger is not from without in the AI, it's from within in ourselves ... it's a social risk.” —Cannell [36:41]
- On what open AI means for society: “We want our models to help you become a better version of you ... not take your eyeballs away.” —Cannell [12:23]
- On the competitive, sometimes toxic culture of open source AI: “I see this even in the open source space—‘I'm more open source than you!’ ... It's very competitive, like vegan activism.” —Cannell [19:16]
Timestamps for Key Segments
- [04:07] — What does "neutrally aligned" mean in AI?
- [06:08] — Guardrails and the open vs. closed AI debate
- [08:04] — Extreme openness: datasets, weights, training methods
- [08:42] — Technical vs. social goal of democratized AI
- [10:56] — Compute bottleneck and the PSYCHE distributed training network
- [12:16] — How “chameleon AI” alignment works
- [13:31] — Open source AI’s Achilles heel: Funding and the scalable open future
- [16:38] — Diversity and the Discord-to-startup journey
- [21:16] — Synthetic data generation methods and challenges
- [28:11] — Crypto enables borderless collaboration and payment
- [31:13] — Rethinking AI research funding and alternative models
- [34:05] — Personal philosophy: faith, AI, and free will
- [36:33] — Why AI doom isn’t actually about the robots
- [38:03] — How listeners can support or participate in the open source AI movement
Additional Podcast Highlights
Innovations & Technical Details
- Long Context Window Breakthrough: Extended context windows from 4k (ChatGPT v1) to 100k tokens—a foundational leap adopted across the field. (Discussed [17:17])
- Synthetic Data Pipeline: Using AI to bootstrap, generate, and then heavily curate seed data, including inventive prompt combinations (e.g., “write it as a rap song”).
- Distributed GPU Training: Idle GPUs worldwide can join training runs for shared resource efficiency, with consensus and payment via blockchain.
Social and Philosophical Context
- Open Source “Crab Bucket”: Even among open labs and researchers, competitiveness and cultural frictions threaten collaboration.
- Necessity = Innovation: Scarcity (compute constraints) is seen not as a setback, but as the driver of creative breakthroughs (with China cited as a parallel).
Concerns and Cautions
- Guardrails Myth: No system of filters is proof against determined actors (“jailbreakers” will always find workarounds).
- Echo Chambers & Social Risk: Emphasizes that the deepest risk is AI as a force exacerbating social isolation and misunderstanding.
Community Engagement
- Discord as a Nexus: Project is open for grassroots collaboration—builders, researchers, and the philosophically inclined are all welcome.
Memorable, Fun, and Lighthearted Moments
- History lesson: On the Linotype machine and how it factored into mass media and print culture (see: Jeff Jarvis’s upcoming book).
- Panel banter: “Dust and Deli Meat” emerges from a digression on New York grocery stores’ unique scent [158:09], providing the episode’s eponymous title.
- ASCII rocket vs. snail: Live testing of open source LLMs with fun prompts, revealing both their prowess and (literal) quirks.
- D&D Planning: Paris and Leo riff about prepping for Micah Sargent’s live D&D session for Club Twit members.
Related Listener Resources
- Nous Research: nousresearch.com
- Open Access: All models, datasets, methods, and academic papers linked from the official site and Discord.
- Discord Community: Linked from the Nous website for volunteer contributors (technical and non-technical).
Final Thoughts
This episode of Intelligent Machines offers a sweeping, energizing look at the ethos and mechanics of open source AI: innovation sparked by necessity, a philosophy of radical transparency, and a drive to make AI a tool for personal and social well-being rather than just another profit engine. The panel and guest stress that while there is no guarantee open approaches will win, a community striving for both technical excellence and ethical openness is our best defense against a future in which AI becomes simply another tool for centralized control or user manipulation.
“It’s not just about being philosophically motivated—it’s about being the best. We have to make open AI tech that is technically the best, and morally, too.” —Jeffrey Cannell [32:58]
For anyone seeking to understand the stakes and dreams of the AI revolution—from the grassroots up—this episode is instructive, candid, and deeply engaging.
End of Summary
Notable episode title drop:
“It’s dust and deli meat”—Paris Martineau [158:15]