Digital Social Hour: Michael Weiss – This Is What the Next 10 Years Look Like
Episode #1726 – January 1, 2026
Host: Sean Kelly
Guest: Michael Weiss, Co-Founder of the AI4 Conference
Overview
In this episode of Digital Social Hour, Sean Kelly welcomes Michael Weiss, AI entrepreneur and co-founder of the rapidly expanding AI4 Conference. They dive deep into the evolving landscape of artificial intelligence: its explosive industry growth, the life-changing and sometimes unsettling uses emerging today, and what the future holds over the next decade. The conversation ranges across self-driving cars, personalized education, defense, humanoid robots, AI alignment, open- vs. closed-source models, and the elusive idea of artificial general intelligence (AGI).
Key Discussion Points & Insights
1. Growth of AI4 Conference & AI Industry
Timestamps: 00:43–01:40
- The AI4 Conference has grown from a 300-person event in 2018 to 8,000 attendees from 85 countries in 2026.
- Massive increase in industry interest came after ChatGPT triggered the “foundation model era” (2022 onward).
- Over 600 speakers and 250 exhibitors at the current event.
Quote:
“We started in 2018 as a 300 person event at this little hotel in Williamsburg that doesn’t exist anymore. And now, this year, we’ll have around 8,000 people from 85 countries.” — Michael Weiss [00:54]
2. AI-Powered Innovations & Applications
Timestamps: 01:52–05:44
- AI is making possible things that were once unthinkable: e.g., resurrecting deceased loved ones via digital avatars (Reflector), self-driving cars (Waymo), resurrecting extinct species (woolly mammoth projects).
- AI is now touching all industries, with 55 different thematic tracks at AI4 spanning finance, healthcare, education, defense, and more.
- Personalized education and drug discovery are standout areas, promising tailored learning and revolutionary advances like “digital biology” and quicker, cheaper medication development.
Quote:
“The idea of just completely personalized learning experiences for each person that goes at their exact pace I think is amazing.” — Michael Weiss [04:16]
3. Self-Driving Cars and Future of Human Skills
Timestamps: 02:23–03:42, 13:35–16:33
- Waymo’s self-driving fleets expanding from San Francisco to Austin and Scottsdale.
- Data suggests human-driven cars are already less safe than AI-driven ones; Weiss doubts his newborn daughter will ever need to learn to drive.
- As AI takes over tasks like coding and driving, traditional skills may become obsolete; future generations may not need to learn them and should instead develop new aptitudes.
Quote:
"There’s no way that...the majority of cars are being driven by humans. Yeah, there's just no way." — Michael Weiss [02:53]
4. AI in Defense: Existential Risks & Regulation
Timestamps: 05:44–07:51
- Michael describes “drone swarms” as an existential AI threat: autonomous, weaponized micro-drones guided by computer vision, able to target individuals or profiles.
- Ethical and regulatory challenges: balance between AI’s power and responsible use.
- Positive flipside: drones for rapid delivery of medical goods to remote areas.
Quote:
"Imagine a bunch of small drones...each one is a little tiny explosive... and then imagine saying to your drone swarm, 'We want you to go kill these people of a certain profile...' It is kind of like an existential threat." — Michael Weiss [06:09]
5. Large World Models: The Next Leap
Timestamps: 08:22–11:06
- Next-gen AI will move beyond Large Language Models (LLMs) like ChatGPT, towards “Large World Models” designed for navigating and manipulating the physical world.
- Companies like World Labs (Fei-Fei Li) are leading this; Michael envisions a future where humanoid robots, using such models, will execute tasks ranging from groceries to complex assistance.
Quote:
"But the vision of this large world model concept...is to create...an analogous model to the foundation model of an LLM like ChatGPT for just navigating the physical world." — Michael Weiss [10:40]
6. Acceleration: AI Writes Code and Designs Hardware
Timestamps: 14:15–15:20
- Weiss says AI is already writing much of today's new code and is helping design the next generation of computing chips.
- This creates a feedback loop: better AIs help develop better hardware, which in turn supports more capable AIs, and so on.
- Raises the urgency of tackling AI alignment to ensure beneficial outcomes for humanity.
Quote:
“Not only is AI writing the code that is going to lead to the next generation, it’s also helping design the chips...So the next generation of chips is also becoming dependent on the current generation of AI.” — Michael Weiss [14:42]
7. Rethinking Education & Critical Thinking
Timestamps: 15:20–18:18
- As skills like driving and coding become less relevant, critical thinking rises in importance; but if AI can handle that too, what remains for human education?
- Michael asks whether we should resist AI's role in assignments or embrace new skills as old ones become obsolete (comparing calculators in math class to AI in homework).
- Noteworthy initiative: Major teachers’ union (1.8M members) partnering with Microsoft, OpenAI, Anthropic to build an AI education lab in New York.
Quote:
"If we have a technology that's reliable that can do all the things that we currently think are critical thinking, maybe we just let the technology do all those things and then find the new bucket of things to think about." — Michael Weiss [16:31]
8. The Power of Collaboration & Talent Wars
Timestamps: 18:21–20:43
- Bringing top minds together (e.g., Fei-Fei Li, Geoffrey Hinton) is crucial for navigating AI's impact and risks.
- Competition for AI talent is fierce: Meta reportedly paying $250M for a single AI expert signals the field's value and the stakes involved.
- Debate over open-source (Llama) vs. closed-source models (ChatGPT, Gemini): open models allow more innovation but less control.
Quotes:
“Events like this... bringing the smartest people together.” — Sean Kelly [19:00]
"Meta is going to be really serious about AI moving forward...This was a strong signal." — Michael Weiss [20:09]
9. Open Source vs. Closed Source AI
Timestamps: 20:57–22:21
- Closed-source models dominate daily use but are often restricted in what information they will return.
- Open-source models offer more freedom: users can build more customized, less restricted applications (e.g., requests aren't blocked by content guidelines).
Memorable Moment:
“I literally asked, ‘Can you name me some survivors from Epstein Island?’ It said, ‘I can’t.’” — Sean Kelly [21:53]
"Which it should answer that. There’s nothing wrong with that." — Michael Weiss [21:56]
10. Artificial General Intelligence (AGI): Shifting Goalposts
Timestamps: 22:28–24:47
- The “AGI” definition keeps evolving. Alan Turing’s original bar (the Turing Test) has been surpassed, but the field’s goals keep moving.
- Weiss doubts there will ever be a definitive “arrival moment”—AI will always look and feel different from human intelligence.
- Consciousness remains mysterious—even in humans—so it’s a weak basis for assessing AGI.
Quote:
“I think we’re just going to keep moving the goalpost, to be perfectly honest, and it’s going to keep getting more powerful, more general purpose, more amazing.” — Michael Weiss [24:16]
11. AI and the Limits of Creativity
Timestamps: 24:47–26:19
- One challenge: Can AI eventually originate scientific theories, as humans like Einstein or Newton did?
- Weiss believes we’re not close to fully “automating science,” but this would be a major milestone.
Quote:
"There's just no prompt right now that we can give to really get like some original observation about reality out from AI." — Michael Weiss [25:54]
12. The Future of AI4
Timestamps: 26:19–27:13
- AI4 Conference continues to expand—planning 12,000 attendees and over 1,000 speakers next year, moving to the Venetian in Las Vegas.
- “AI4 Everything” reflects AI’s reach into all corners of society.
Notable Quotes At a Glance
- “We built civilization using human intelligence and now we’re just starting to rely more and more on artificial intelligence to progress.” — Michael Weiss [07:51]
- “It is a Black Mirror episode. I mean, this whole thing is going to kind of be like that, this whole AI transition.” — Michael Weiss [13:50]
- “Whether it goes good or bad, is up to us humans. And literally just talking about it and communicating about how we’re approaching this huge transition for society is critical.” — Michael Weiss [19:03]
- “If we can have a human talk to a machine and the human can’t decipher whether or not it’s human or machine, AI has arrived...But now we’ve passed that Turing Test, obliterated. And people are still, now we’re still pursuing AGI.” — Michael Weiss [23:01]
Episode Flow & Timestamps
| Segment | Timestamp |
|-------------------------------------|--------------------|
| Opening: Concept of drone swarms | 00:00–00:43 |
| AI4 growth & global scale | 00:43–01:40 |
| AI innovation examples | 01:52–05:44 |
| Self-driving & future skills | 02:37–03:42, 13:35–16:33 |
| Defense: Drones, threats, and drones for good | 05:44–07:51 |
| Large world models & robotics | 08:22–13:13 |
| AI coding and chip design cycle | 14:15–15:20 |
| AI’s impact on education | 15:20–18:18 |
| Talent wars & open vs. closed debate | 18:21–22:21 |
| AGI, consciousness, and creativity | 22:28–26:19 |
| AI4’s future and closing remarks | 26:19–27:13 |
Conclusion
This episode offers a sweeping look at both the hopes and anxieties surrounding AI’s rapid progress. Michael Weiss paints a picture of vast opportunity across medicine, education, and daily life, alongside serious risks and unresolved questions. The conversation reflects both awe and urgency: as technological waves crest ever higher, it is human perspectives, ethics, and collaboration that will ultimately shape the future.
