Podcast Summary: "Intentional Tech: Designing AI for Human Flourishing"
AI and I with Dan Shipper | Guest: Alex Komoroske, Cofounder & CEO of Common Tools
Date: July 9, 2025
Episode Overview
This episode explores the future of AI technology with a focus on designing systems that promote human flourishing. Host Dan Shipper interviews Alex Komoroske, cofounder and CEO of Common Tools, delving into how Large Language Models (LLMs) can be used not just for engagement and profit but as intentional tools that amplify agency, privacy, and personal growth. The conversation critically examines the technical and societal architectures underpinning AI, challenges with current paradigms, and offers a systems-thinking perspective on building AI aligned with user intentions rather than corporate incentives.
Key Discussion Points & Insights
1. Reframing Tech’s Trajectory: Human Flourishing vs. Engagement Maximization
- Alex's Core Concern: LLMs are as transformative as the printing press, electricity, or the internet, but their trajectory depends on the intentions behind their design and deployment ([00:00], [06:46], [56:53]).
- Quote (Alex, 00:00 & 06:46):
“We have a choice. We can go down the path we've been going down—which is engagement-maximizing hyper aggregation ... Or we could enable a new era of human flourishing. That could lead to a new era of human flourishing… We want technology that aligns with our intentions—not necessarily what I want in the moment, but what I intend to do.”
- Quote (Alex, 00:00 & 06:46):
2. From Chatbots to Coactive Systems
- Paradigm Problem: Chatbots are a starting point, but their 'append-only' conversational structure lacks the capacity for long-term, structured context ([00:54], [03:03], [04:24]).
- Quote (Alex, 04:24):
“Chatbots to me feel like a feature, not a paradigm... For long-lived tasks, you need structure.”
- Quote (Alex, 04:24):
- Coactive Fabric: Alex describes Common Tools as aiming to build a “coactive fabric for your digital life”—a system where both user and AI agent develop meaning in parallel and context is shared, rather than siloed.
- Quote (Alex, 03:03):
“I think of it like a coactive fabric for your digital life. You are active in the system and so is this emergent intelligent process … your private intelligence, powered by LLMs.”
- Quote (Alex, 03:03):
3. Intentional Tech: Aligning AI with Human Values
- AI & Personal Intentions: Emphasizes the opportunity for AI to learn and amplify stated intentions, not just revealed preferences (e.g., time with family, intellectual challenge), marking a departure from social media’s outrage/maximal engagement design ([06:46], [56:53]).
- Quote (Alex, 06:46):
“I intend to spend quality time with my family. I intend to experience new things ... Technology that aligns with your intentions—not just your dopamine-driven behaviors.”
- Quote (Alex, 06:46):
- Personal vs Corporate Alignment: Warns that ‘personal’ AI from companies with engagement business models is inherently conflicted ([08:51], [16:31]).
- Quote (Alex, 08:51):
“The very first word—personal—doesn’t actually align … If you’re maintaining a dossier on me and that dossier leads to a powerful and proactive thing, that's terrifying.”
- Quote (Alex, 08:51):
4. Architectural Foundations: Context, Privacy, and the Same-Origin Paradigm
- Stateless vs. Contextual AI: Most LLMs are stateless; true value comes from the user context layer ([09:29]).
- Security Models as Drivers of Centralization: Current “same-origin paradigm” (web/app security model) hinders interoperability and integration, covertly leading to feature centralization and monopolistic dynamics ([25:52], [29:19]).
- Quote (Alex, 25:52 & 29:19):
“The current laws of physics, the security model we use for the web and apps, actually limits this possibility ... Data accumulates inside that origin as a little island.”
- Quote (Alex, 25:52 & 29:19):
- Open Attested Runtimes & Confidential Compute: Advocates for cloud-based architectures verified via confidential computing to ensure privacy without local-first inconveniences and to break the current triangle of trade-offs ([32:01]).
- Quote (Alex, 32:01):
“Confidential Compute is secure enclaves in the cloud … even someone with physical access to the machine can’t peek inside.”
- Quote (Alex, 32:01):
5. The Dynamics of Open vs. Closed Systems
- Lessons from Internet History: Alex draws parallels between current LLM ecosystems and early internet/AOL vs. open web, warning against the dangers of repeating closed aggregative structures ([18:23], [19:27], [22:29]).
- Quote (Alex, 19:27 & 20:15):
“AOL was an important company… but eventually the open-endedness of the web took over … Open systems tend to win under certain conditions, especially in the growth era.”
- Quote (Alex, 19:27 & 20:15):
- Feature Innovation Limits: Big companies ignore “50,000 user” features—this stifles innovation ([24:44]).
- Enabling Open Ecosystems: Open APIs and commodity LLM services are positive signs, but security and privacy must be actively architected, not assumed ([25:21]).
6. Systemic Thinking: Coordination, Power, and Evolution in Tech
- Complex Systems and Coordination Costs: The exponential increase in organizational coordination costs hinders innovation in large organizations ([44:44]).
- Quote (Alex, 44:44):
"As organizations get larger, they get much, much, much slower. And that's true even if you assume everybody is actively good at what they do. It arises due to an exponential coordination cost blow up."
- Quote (Alex, 44:44):
- System Intervention Mindset: Lasting change comes from shifting leverage points (technical or otherwise) rather than brute regulatory force ([41:05]).
- Emergent Knowledge Transfer: LLMs unlock the fluid transfer of tacit knowledge, not just explicit explanations, enabling richer interpersonal exchanges (“liquid media”) ([49:04]).
7. AI Agents, Prompt Injection, and Trust
- Prompt Injection Risk: Compares the risks in LLM prompt injection to traditional code injection—except it’s even riskier due to “all text being executable” ([62:21]).
- Quote (Alex, 62:21):
"Prompt injection kind of fundamentally breaks the basic interaction paradigm ... SQL injection is child's play compared to prompt injection."
- Quote (Alex, 62:21):
- Architectural Solutions Needed: Warns that current agent tool integrations are “built on quicksand”—security must go deeper than OAuth ([63:31]).
- Agency and Trust Boundaries: Importance of maintaining contextual integrity—AI should know when not to blend contexts across domains (e.g., “therapist” vs. “boss”) ([59:49], [60:21]).
8. The Human-AI Loop: LLM Literacy and Evolving Human Nature
- User Sophistication Matters: The value derived from LLMs is dependent on user skill—prompting is an acquired 'literacy' ([69:12]).
- Quote (Alex, 69:12 & 70:04):
“When I watch someone who is technically savvy, by the way, tech savviness has nothing to do with your savviness for prompting... It's a new kind of skill.”
- Quote (Alex, 69:12 & 70:04):
- AI Changes What It Means to Be Human: Rather than replacing us, LLMs will shift what we can do, how we coordinate, and even how we see ourselves ([56:53], [57:41]).
- Potential for Human Flourishing—Or Infinite Distraction: LLMs could allow deeper empathy and understanding, or just infinite, addictive content ([56:53], [57:17]).
- Quote (Alex, 56:53 & 57:17):
“We have the potential for a dawn of a new era of human flourishing ... It could also be like infinite TV, amusing ourselves to death.”
- Quote (Alex, 56:53 & 57:17):
Notable Quotes & Memorable Moments
-
On the Path Before Us:
“I would run onto the stage where Steve Jobs shows off the iPhone with a poster that says, ‘this will become the most important computing device on earth. It is insane to allow a single company to decide what things you may run in it.’”
—Alex Komoroske [38:43] -
On Coactive AI:
“It’s not you ask, it answers. You are both actively building meaning together on the same substrate.”
—Alex Komoroske [03:03] -
On Tacit Knowledge:
“Know-how is rich ... We used to require explanations ... now we can just move tacit knowledge between people because you can train a model with a bunch of examples.”
—Dan Shipper [49:04] -
On Prompt Injection and AI Safety:
"LLMs are imminently gullible and make all text effectively executable ... the combination of that and tool use is potentially explosive.”
—Alex Komoroske [62:21] -
On Technology's Trajectory:
“The choice isn’t preordained—you have to work for it. If you aren’t paying for your compute, it’s working for somebody else.”
—Alex Komoroske [14:09] -
On Systemic Limits and Gravity Wells:
“A lot of PMs are under the misunderstanding that they’re in way more control of their users and their usage than they actually are … if you’re the lead PM for the web platform, you are under no illusion that you’re in control.”
—Alex Komoroske [41:53] -
On LLM Era's Early Stage:
“It feels like we are halfway through the LLM era. We're in the very first innings—rubbing sticks together.”
—Alex Komoroske [52:29]
Timestamps for Key Segments
- 00:00–01:16: Alex outlines the stakes of the LLM era; transformative potential vs. dystopian default.
- 03:03–05:27: Introduction to Common Tools and the “coactive fabric” concept; limitations of chatbot UIs.
- 06:46–10:14: Intentional tech, difference between intentions and revealed preferences; aligning technology with human flourishing.
- 13:05–15:35: Debate on business structures—can tech for good be built within for-profit models?
- 18:07–22:29: Open vs. closed ecosystems—AOL analogy, risk of hyper aggregation, and hope for 'web of AI'.
- 25:52–29:31: Security architectures and centralization; the same-origin paradigm and data gravity.
- 32:01–34:39: Confidential Compute and cloud security as alternative privacy solutions.
- 44:44–46:38: Alex's background in systems thinking, the ‘slime mold’ deck, organizational dynamics.
- 49:04–52:29: Tacit knowledge transfer, “liquid media”, and new reading formats with LLM-enabled AI.
- 56:53–58:19: How LLMs may redefine human potential or exacerbate passivity.
- 62:21–65:19: Prompt injection in agentic AI, architectural implications, emerging AI safety risks.
- 69:12–70:28: Importance of LLM literacy and emergent user skills.
Final Thoughts
Throughout the conversation, both Alex Komoroske and Dan Shipper bring a mix of enthusiasm and caution to the AI revolution. From the technical underpinnings of security architectures to the lived experience and agency of everyday users, this episode makes a strong case for building AI that serves, empowers, and truly understands its user. At its best, intentional tech offers a future where AI is not just a tool, but an extension of human intention—enabling not only productivity and convenience, but deeper flourishing.
For references, essays, and further AI experiments: every.to/chain-of-thought.
