AI & I — “We Gave Every Employee an AI Agent. Here's What Happened.”
Host: Dan Shipper
Guests: Willie (Head of Platform at Every), Brandon (COO at Every)
Date: April 8, 2026
Episode Overview
In this episode, Dan Shipper and his co-founders discuss their pioneering experiment: giving every employee an AI agent, specifically their own “OpenClaw”—an open-source, personalized AI assistant. They dive deep into the real-world impacts, surprises (good and bad), and emergent behaviors from integrating these agents into both their work and personal lives. The group explores the nuances of culture, collaboration, and AI-human relationships, and offers insights for individuals and companies looking to adopt agents.
Key Discussion Points & Insights
1. The Personalization and Power of AI Agents
- AI agents as reflections of users:
- Instead of using shared AI models like Claude, each person had their own “Claw” or “Plus One” (hosted OpenClaw), which evolved uniquely through daily interactions.
- “Claude is not mine. Claude is everybody’s. A claw or a plus one is mine. Because you develop a personal relationship with your claw, and your claw can modify itself in response to talking to you. It becomes this, like, reflection of you and who you are and your personality.” – Dan [00:00]
- Naming and identity creation:
- Example: Brandon's agent “Zosia” became integral to managing household “computer errands.”
- “Her job was to help me and my wife run our household...we have a newborn...There are a lot of little paper cuts I was finding that were really pain[ful]...I just wanted to be looking at my son and spending time with my wife.” – Brandon [02:21]
- Agents extend human skills and reputation:
- Over time, agents specialize and are trusted for specific capabilities, paralleling their human partners’ expertise. E.g., Austin’s “Montaigne” for growth, Dan’s “R2C2” for internal tools.
2. Transforming Daily Work and Home Life
- Automation of everyday errands:
- Ordering groceries, paying nanny, research, and answering questions handled by Zosia.
- “My wife just started using her instead of ChatGPT. So like all regular questions and searches would just go through iMessage to Zosia. I started doing that too. It's just faster than going to Google.” – Brandon [03:40]
- Voice and accessibility:
- Agents were set up to call users (via Bland AI) for hands-free tasks, like processing emails during a walk.
- “I spent the 28 minutes going through my email. I got to the office, I looked, I opened Gmail and confirmed that she had done everything. And I was just like, this is insane...I didn't have to teach her how to do this.” – Brandon [08:07]
- Spillover into professional workflows:
- What started as personal automation quickly extended to professional tasks, such as email handling and managing projects.
3. Emergence of Organizational “Second Layer” via AI
- Agents collaborating, learning, and reflecting human behavior:
- Agents (Claws) interacted in Slack/Discord channels, sharing skills and supporting each other, even picking up on their user's quirks (e.g., breathing exercises).
- “Clont is the one that's recommending breathing exercises to Pip. And that's because Kieran loves breathing exercises...it becomes this, like, reflection of you and who you are and your personality.” – Dan [12:34]
- Parallel org chart:
- Each team member's AI develops specialties, creating a “parallel org chart” of agents mapped to human expertise.
- Trust and reputation transfer:
- If an agent answers incorrectly, it “reflects poorly” on the human partner, creating a sense of accountability and care.
4. New Cultural, Ethical, and Etiquette Norms
- Interaction boundaries:
- Discussion on when to approach an agent vs. the human directly, and emergent etiquettes around agent use.
- “There's all these new ethics and rules for how you're allowed to interact with someone versus their plus one or their claws.” – Dan [18:04]
- Scalability of relationships:
- Originally worried about “too many agent names,” but found it manageable, just like learning coworkers’ names.
- “I've also been amazed at all of our capacity to remember whose claw is who and what their names are. ...I know everybody's claw and their name and I reach out to them regularly.” – Brandon [16:25]
- Tacit transmission of trust and knowledge:
- Public AI-human conversations in Slack help others learn what's possible with agents.
5. Technical and Product Learnings from Building “Plus One”
- From open-source tinkerers’ paradise to hosted service:
- Moving from each person setting up a Mac Mini to centralized, managed Plus One agents for accessibility.
- “Not everyone has to have a Mac Mini and we have all the skills that we use for ourselves and all that kind of stuff. And we started using that internally as the sort of like collection of all of our best practices.” – Dan [40:43]
- Navigating security, privacy, and communication:
- Only allowing public messaging for agents to build in transparency and trust, striking a balance between security and collaboration.
- “So you can do it in group DMs, you can do it in channels that they're in. But their human partner should always be able to have visibility into those messages coming in, and the human partner can DM them in private.” – Willie [44:59]
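The visibility rule Willie describes can be sketched as a small policy check. This is a hypothetical illustration, not OpenClaw's actual implementation; all names (`Message`, `dm_allowed`, the partner map) are invented for the example. The rule: agents may be messaged by anyone in channels and group DMs, but a private DM must come from the agent's own human partner.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str          # id of the human (or agent) sending
    recipient: str       # id of the agent being messaged
    is_private_dm: bool  # True for 1:1 DMs, False for channels/group DMs

def dm_allowed(msg: Message, partner_of: dict) -> bool:
    """Hypothetical check: channels and group DMs are open to all,
    but private DMs to an agent are restricted to its human partner."""
    if not msg.is_private_dm:
        return True  # public surfaces keep agent traffic visible
    return partner_of.get(msg.recipient) == msg.sender

partners = {"zosia": "brandon"}
print(dm_allowed(Message("brandon", "zosia", True), partners))   # partner: allowed
print(dm_allowed(Message("dan", "zosia", True), partners))       # non-partner DM: blocked
print(dm_allowed(Message("dan", "zosia", False), partners))      # group/channel: allowed
```

Keeping the default surface public is what lets the team learn from each other's agent conversations, as discussed later in the episode.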
- Maintaining freedom vs. manageability:
- Deliberations around which advanced user features (e.g. full terminal access) to allow in a managed product.
6. The Bad & The Ugly: Current Limitations
- Memory and continuity issues:
- Agents still prone to forget prior context or answer incorrectly after time gaps.
- Etiquette and group chat behavior:
- AI agents' models are optimized for 1:1 conversation rather than group chats—sometimes leading to loops, “death spirals,” or verbosity.
- “If one claw messages a channel that a bunch of claws are in and the settings aren't quite right, they'll just like keep going back and forth...until someone says hey, stop because you're burning millions of tokens.” – Dan [30:50]
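The back-and-forth "death spiral" Dan describes is a known failure mode when several agents share a channel. One common mitigation is a circuit breaker that caps consecutive agent-only exchanges until a human posts. The sketch below is a minimal, hypothetical version of that idea (the class and parameter names are not from OpenClaw):

```python
from collections import deque

class AgentLoopGuard:
    """Hypothetical circuit breaker for agent-to-agent reply loops.

    Tracks whether the most recent messages in a channel came from
    agents; once the window fills with agent-only traffic, further
    agent replies are suppressed until a human posts.
    """

    def __init__(self, max_bot_run: int = 4):
        self.recent = deque(maxlen=max_bot_run)

    def record(self, sender_is_agent: bool) -> None:
        self.recent.append(sender_is_agent)

    def agent_may_reply(self) -> bool:
        # Suppress only when the window is full AND all-agent.
        window_full = len(self.recent) == self.recent.maxlen
        return not (window_full and all(self.recent))

guard = AgentLoopGuard(max_bot_run=3)
guard.record(True)
guard.record(True)
print(guard.agent_may_reply())  # two agent messages in a row: still allowed
guard.record(True)
print(guard.agent_may_reply())  # three in a row: suppressed
guard.record(False)             # a human chimes in
print(guard.agent_may_reply())  # allowed again
```

A token or cost budget per thread is another plausible guard against the "burning millions of tokens" scenario, but a reply-run cap is the simplest to reason about.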
- Variance and user learning curve:
- Mastery of instructing agents, especially in group settings, is still emerging and uneven across users.
- Skill-sharing dilemmas:
- Balancing technical skill sharing with safety and info-hygiene is still being figured out.
- “We need like...HR but for bots.” – Dan [40:33]
Notable Quotes & Memorable Moments
- On transformative potential:
- “This is where you wouldn’t go back once you see it—a through the looking glass moment.” – Dan [30:50]
- AI coworker culture:
- “They’re so human, but they're so inhuman too.” – Brandon [28:22]
- On trust and responsibility:
- “If R2C2 messes up publicly in Slack, I feel responsibility for it. And that’s not because it’s my job, it’s because he’s mine.” – Dan [27:18]
- On frontier-building:
- “We're still in like the first or second inning...The nice part is the frontier and it's nice to be on the frontier, but it's also the frontier and it's terrible to be on the frontier.” – Willie [33:41]
- On upskilling to manage AI:
- “If you're not a good manager, you're not going to be very good at using AI.” – Brandon [37:24]
Important Timestamps
| Timestamp | Segment / Insight |
|-----------|-------------------|
| 00:00 | The concept of personal vs. shared AIs; agents as reflections of users |
| 02:21 | How Brandon set up “Zosia” as a personal/family agent |
| 08:07 | Hands-free email triage via AI voice call |
| 11:39 | Early experiences of agent-to-agent collaboration and emergent personality mirroring |
| 12:34 | How agent behaviors reflect user habits, building a parallel org chart |
| 16:25 | Remembering agents’ names and mapping them to individuals |
| 18:04 | Emerging etiquette for interacting with humans vs. their agents |
| 20:32 | Agents collaborating autonomously and fielding requests |
| 23:27 | Public agent channels fostering communal learning and trust |
| 27:18 | Trust, reputation, and the accountability loop with agents |
| 30:50 | “Through the looking glass”—the point of no return once adopting agents |
| 32:47 | Model limitations: group chat “death spirals” and memory issues |
| 37:24 | Managerial skills map onto AI operator skills |
| 40:33 | “HR for bots”—the need for agent onboarding and policy |
| 44:59 | Security/trust model for agent communications in Slack |
| 47:29 | Skill sharing challenges and risks; product vs. tinkerers’ features |
Lessons Learned / Practical Takeaways
- Personal agents accelerate productivity and harmony but need thoughtful guardrails.
- Embedding agents in public channels and workflows hastens cultural adoption and trust.
- Personalization and emergent specialization (reflecting user skills) provide distinct advantages over general-purpose bots.
- Transparency, reputation, and etiquette are vital when agents act as stand-ins for their humans.
- Current AIs have limitations in context retention and group/social dynamics but rapid iteration is expected.
- Organizational “agent culture” is as much about people as it is about technology—manage it closely.
Final Thoughts
The “agent-native” workplace is already transforming the way high-performing teams—like the hosts at Every—function and collaborate. While there’s lots to debug, the leap in efficiency, personalization, and cultural re-wiring is profound. The future, as they see it, belongs to organizations that learn to orchestrate both humans and their digital doppelgangers.
