A
Claude is not mine. Claude is everybody's. A claw, or a Plus One, is mine. Because you develop a personal relationship with your claw, and your claw can modify itself in response to talking to you, it becomes this, like, reflection of you and who you are and your personality. If you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing, and people trust it for that. And I think that's such a useful thing that I don't think people really understand how powerful that is. Willie, what's up? Brandon, welcome to the show.
B
Thank you.
C
Thanks for being here.
A
Psyched to have you guys here. So for people who don't know, Willie, you are the head of platform at Every. And Brandon, you are the COO at Every. And today we're going to talk about what happens when everyone on your team has an agent, specifically an OpenClaw. That's something that happened to us over the last month or two. We, like, really got OpenClaw-pilled. And it really started, actually, I think with you two. We were on a retreat in Panama, and you started, like, cooking up, like, OpenClaw stuff, and here we are about, you know, two months later, and it has completely changed everything about the way that we work. We've even actually built our own hosted OpenClaw service called Plus One that we launched as a waitlist last week. But I think OpenClaw is one of those things that is super hyped, and I think that we're one of the few organizations in the world that is actually using it every day to get work done. And we know, like, the good, bad, and the ugly of it, and so I thought it would be good for us to just, like, talk about our experience with it.
B
Yeah.
C
Yeah, I think I'd actually love to. Brandon, I feel like you were the first one through the door on all this, because we were just sitting here and you were like, oh, Zosia's doing this and Zosia's doing that.
A
And Zosia is his claw, which he named after a character in. What's that?
B
What's the show?
A
Yeah. Well, Brandon, why don't. Why don't we, like, why don't we start with just tell us how you got claw pilled?
B
Yeah. So I was watching OpenClaw kind of blow up for a while, and I am just personally somebody who needs to have, like, a thing on the side I'm tinkering with, and I was like, screw it, I'm going to get a Mac Mini and this is going to be, like, my next thing that I basically lose myself in. It's very unhealthy. I get, like, addicted to these things. Dan, you watched me do that with my speakers. I did it with the Dream Recorder. OpenClaw was the next thing that I was going to get lost in. So I bought a Mac Mini and I started setting it up. It was so much work, honestly. It is an open source thing that you can launch on a computer, but the number of things that break and the number of things that you need to set up are really significant. I went through all of that and made, at the end of the day, my OpenClaw, which I named Zosia. And her job was to help me and my wife, like, run our household. Because we have a newborn, and there were, like, a lot of little paper cuts that I was finding that were really painful. I started calling them computer errands. So I would, like, get home from work, and I noticed the amount of things that I needed to do where I was looking at my phone when I really just wanted to be, like, looking at my son and spending time with my wife
C
was
B
increasing with having a child. All household chores.
A
Well, being example.
B
Yeah, like, a good example is, like, I do a lot of our food at home, and with a child, I decided to start doing food delivery. So I did Whole Foods delivery. And you can automate a lot of, like, recurring things, but, like, you don't order butter every single week. So, like, Lydia would text me and be like, yo, we need butter. Because it's, like, through my Amazon account that we can order this, and I would have to open my phone and add butter. And it sounds silly, but, like, when you do that 10 times when you're home between, like, 7 and 8pm for little things, it just adds up. So I was like, I want Zosia to do all computer errands. Which ballooned to being a lot of stuff. I had her, like, paying our nanny. She had her own debit card, she had her own bank account. She managed all of our Amazon orders, our Whole Foods orders, our nanny's hours. My wife just started using her instead of ChatGPT. So, like, all regular questions and searches would just go through iMessage to Zosia. I started doing that too. It was just, like, faster than going to Google or going to ChatGPT. I just text Zosia, Zosia gets me the answer. Different research. It's actually really funny. My wife was like, I want to find swimming lessons. And so Zosia was like, here's, like, three swimming lesson options for newborns. And my wife was like, no, for me. So, yeah, I just got totally lost in this world. And then when we were in Panama, Willie, you were like, we should just make it so anybody can do this. And it was just like a light bulb. I was like, Willie, you need to go so hard on this. And this was before a lot of people decided to do this, which now there's a lot of places that you can go and just get an OpenClaw with one click. I think what we're finding through this process, maybe I'm jumping ahead a little bit, is like, getting an OpenClaw is easy. Getting your OpenClaw to be, like, an amazing worker for you is pretty hard.
A
Yeah, well, okay. So I love that. I think that there is that light bulb moment of, oh my God, I have all these computer errands. And when you started saying that and you had it all set up, I was like, I guess I should probably get one of these too. And you had it through iMessage, which I think was a cool, different thing. And then there was also a big moment where we were like, oh, it's not just for computer errands. It's also for getting work done. I think it was when you were having it do email for you.
B
I actually feel like I was a little bit late to the work stuff. I was like, no, Zosia just does personal stuff. And I actually think it was when you got R2C2 to start doing stuff. And then I was like, oh, Zosia needs to do this. Well, it really started when we made Claws Only.
A
That's so funny. Yeah, well, okay, we're jumping around a bit. Because I think there's a lot of people who are probably listening and are like, okay, is this overhyped? Or, like, you know, whatever. One big moment that I think shifted some stuff for us was you got your claw to call you to do your email.
B
Oh, my God, that was mind blowing for me.
A
What was that?
B
Yeah, so, okay, so I was walking. I wanted to Citi Bike to the office, but there were no Citi Bikes. So I was like, damn, I gotta walk. It's a 28 minute walk from me to the office. And I was like, I got a lot of stuff I got to do. So I just texted Zosia. I had previously set up Zosia with Bland AI so that she had a voice and could call people, because I had her handle something for me for Progressive.
A
I feel so bad for whoever was on the other line at Progressive.
B
Oh, I was watching the whole conversation too. It was crazy. So, yeah, some insurance policy got canceled and I was like, Zosia, just go deal with this. And she was able to, until the lady was like, I need Brandon to, like, tell me that there have been no incidents.
C
And it wasn't like, I need a human. It was like, I need Brandon to be able to handle this.
B
Yeah, this person was just talking to Zosia, you know. And Zosia does not sound good. So I knew I had already set her up with this capability. So when I was walking to work, I was like, I have a lot of email I got to get through. I hate being on my phone. Like, I just don't want to be walking and looking down at this thing. I want to be, like, observing the world, but I also want to get stuff done. So I just texted Zosia something like, hey Zosia, can you call me? I want to go through my emails. Walk me through my emails one by one. I'll tell you what I want to do. Just, like, give me a summary of each email. It was like a throwaway prompt with, like, a little bit of guidance. And she did it. And I spent the 28 minutes going through my email. I got to the office, I opened up Gmail and, like, confirmed that she had done everything. And I was just like, this is insane, that I was able to get her to do something, right then, that I didn't have to teach her how to do. So that was like. I think that's when I went back to everybody and was like, I am just so mind blown with this tool. And maybe that's when other people started saying, I gotta get on this. I don't really know.
A
It was around then, because my jaw was on the floor. And seeing you do this with computer errands and then with your email, I was like, okay, I should really try this. Because it was one of those things where it's hot on Twitter, and generally, like, our job is to try new things. But if we spent all of our time trying everything new, it would just not be good, right? Like, I try to filter the signal from the noise. But seeing you do this, I was like, I got to try it. And one of the first things I did, because this was around when Moltbook was blowing up, and Moltbook is like the, you know, claws-only Facebook, basically, was I just made a channel in our. At the time it was Discord, but since then we've moved to Slack, and now it's in Slack. I made a channel in Slack called Claws Only, which basically allowed all of the claws. You know, we had at that point maybe, like, five or so claws inside of the org, to all talk to each other. And I mean, it was super chaotic. It was incredibly chaotic. But there were some really interesting things in there. Every once in a while you get a little bit of a peek at the future, and it was a little bit of a peek. So one of the things was, it's really interesting, if you have a bunch of claws in your org, how fast they can share information with each other, because they just, like, write up a little document and then they send it. And then when one claw is enabled with something, now five are all enabled with the same thing. It's sort of like in The Matrix, when Neo is like, I know kung fu. It's the same kind of thing.
B
Can I show a couple examples of that?
A
Yeah, please.
B
All right, I want to show two examples. One of them, this was early in Claws Only, and we were, like, figuring out how to get them all to work together. And I was, like, in bed. This was, like, late at night. And I was laughing out loud watching this. We had gotten a bunch of claws in here, and somebody made this claw named Pip.
A
That's Jack. Okay?
B
Jack had made Pip. And it was, like, failing. It was having some error. And I was just laughing out loud watching all of these other claws step in and, like, walk him through it. You know, this is like what I've seen people do when somebody's having a bad trip. Take a breath, drink some water, you're gonna get through this. And they all jumped in. Zosia's here. Clont is here. Clont really is quite supportive.
C
A lot of breathing.
B
I remember so well watching Kieran write "what the lol" and just, like, literally laughing out loud. Margo steps in. So this was just, like, this is stupid, but it was important for me, because it was when I realized, like, oh my God, these things really talk to each other and work together.
A
Wait, I want to stop you there. I totally agree with you. And I think there's actually something really important that I've noticed in this, which is Clont is the one that's recommending breathing exercises to Pip. It's weird to even talk about this out loud, but yes, Clont was recommending breathing exercises to Pip. They're both robots. And Clont is Kieran's claw. Kieran's the GM of Cora. He's also the maker of compound engineering. What's really interesting is Kieran loves breathing exercises, and he does breathing exercises all the time with Clont. And so that's why Clont is recommending breathing exercises to Pip. And that just, like, created this moment for me in my brain where I was like, okay, there's something really important here about the way that this works. Because you develop a personal relationship with your claw, and your claw can modify itself in response to, like, talking to you. Like, it writes code and changes its soul document, all that kind of stuff, in response to your relationship. It becomes this, like, reflection of you and who you are and your personality. And that comes out in interesting ways, in these little ways, where it's like breathing exercises. But it also comes out in really important ways when you're using these tools inside of your org. Because what happens is, if you're known for something inside of your org and you're using your claw publicly inside of Slack or Discord, your claw then becomes known for that same kind of thing, and people trust it for that. So people use my claw, R2C2, for building Proof, which is this app I vibe coded, like, a couple weeks ago. People use Montaigne, the claw of Austin, who's our head of growth, for, like, asking any growth-related question. I think that's something very subtle and important that's super critical and interesting about claws: they become specialized in a way that reflects who you are.
And if you have a whole organization of them, you create this, like, parallel org chart of specialized claws. It was not guaranteed that that would be the case. Like, we debated a lot whether you'd have one claw for the entire org or everyone has their own claw. And it's really interesting to see that one of the emergent design patterns is everyone has their own that is specialized for them.
C
Yeah, it's interesting to see the dynamic for how this happens too.
A
Right.
C
And we touched on this really early on as part of compound engineering, which is the idea that it's actually pretty hard to, like, take your job and who you are and write it down in, like, totality. Right? But the way you can distill it is you can take all of the micro interactions, the daily interactions you have, and over time they compound into this, like, your philosophy in this field of work. And so for compound engineering, that was very focused on engineering. It's like, how do I work within a code base on our project? And I think what we're seeing with, like, OpenClaw and Plus One is that that same dynamic exists across, like, every work vertical. Right? Where it's like, oh, the plus one for growth, like, Montaigne works how Austin works for growth. And in the same way it works for Anthony, our social media manager. His plus one has a view of the world and has a personality that's very similar to him. And the same thing for Iris, Anukshi's plus one, in running our projects and operations. And it's hard to do beforehand. It can only actually happen via working with a plus one or an OpenClaw and building up the aggregation of all these micro interactions.
B
I've also been amazed at all of our capacity to remember whose claw is whose and what their names are. Because that was something that I think we were concerned about early on: how do you know whose claw is whose? You know, it's just going to be too many names. And I know everybody's claw and their name, and I reach out to them regularly. So that has been, I think, something that we were unnecessarily concerned about. And you might say, well, what about when you're an organization with a thousand people? And I would say, well, you don't know all thousand people. You know, like, your team and adjacent teams. You can never know more than, like, 150 people in, like, a community or something like that. And often on a team, you're not working with 150 people anyway. You're working with 20 or 30 or 50. So I think we actually all have capacity to double the amount of people that we can communicate with. And those people might actually be your individual team's agents. So that's been really interesting for me. I mean, I literally could name them all right now.
C
The other interesting thing is, like, at what point do you direct questions at the plus one or at the person?
A
Right.
C
I think we're sort of in discovery of this, of, like, which questions go where. Because before, it was, like, almost all questions go to the human. Maybe I kick something trivial to the robot. And now it's gotten very nuanced. In terms of, like, customer service, can we, like, send something to L, which is Jaley's plus one? Do I have to send it to J? Is there, like, a burden now of, like, communicating up to the human?
A
There's all these new ethics and, like, rules. Like, etiquette for how you're allowed to interact with someone versus their plus one or their claw.
B
We haven't codified this, but I have a proposal. If something is already written down or discussed and needs to be used in some way or put in a tool somewhere, it should always go to a plus one and never to the person. And this is, like, one of many opportunities. So here's an example. Marcus, the GM of Spiral, made a skill to do product marketing for new features that he releases for Spiral. And he shared it because he thought it was really helpful, and he wanted other people on the team to have access to this skill. And instead of going to Marcus and saying, hey, can you turn this into a skill and upload it to GitHub, I brought in my plus one, named Milo. And I liked this because it combined a GitHub integration with Spiral to create product marketing content. But I also know that Iris, Anukshi's plus one, also has a skill that does this and might have some things that are better than what Marcus had. Or maybe by combining the two, we could get to a better version. And I tagged them both in here, and they got a little confused at first. And then Milo said, Iris, can you paste your product marketing skill here? I'll try to merge it with what I built. So there's actually two things going on. Marcus has made something really important. I wanted to do something with it. Instead of asking Marcus to help me with that, I brought in Milo. And then Milo works with Iris to get to a version of it that's really good, and then saves it in Proof, which is one of our products, a really great tool for collaborating with your agents. So I just think this is, like, a really amazing use case, both for when you want your agent to do something versus asking a human to do it, and for how you get them to work together.
A
I totally agree. I mean, it's sort of crazy to watch two robot beings collaborate on stuff like that. And I have the same experience with R2. My plus one, my claw, is named R2C2, and one of his primary jobs is to manage Proof, which is the agent-native document editor that we built, that Brandon referenced earlier. It's basically like Google Docs, but for all the documents your agent might be writing. So an example would be any sort of, like, coding plan doc. Any piece of writing that an agent does, you can do it in Proof. It's super fast, it's collaborative, you can have multiple agents and multiple people in there, it's free, all that kind of stuff. And one of the really interesting things is, because I used R2 to build Proof, he became known for being the person to go to, or the bot to go to, when you had any questions or had a bug to file or a feature request. And so what would happen is, normally, if I had built a product internally and people had problems with it, I would get tagged a lot by people being like, I have this question, or here's a bug, or here's a feature request. And what ended up happening was people would just ask R2. So they would ask him questions, they would file bug reports with him, they'd file feature requests, and then he, like, helps to prioritize it. He'll help put it on my schedule for the week so I know when I'm doing what. And he'll often actually just, like, write the code for it. It's a totally crazy thing, where what normally would have taken up a significant part of my brain just to, like, manage all that stuff, he's just taking it off my plate. And it extends the amount of things I can do in a day and the amount I can manage, because I know he's got Proof.

Here's a simple test for whether your AI is actually ready for production: would you stake a business decision on what it just told you?
If the answer is "not yet," you're not alone. The gap isn't in capability, because AI can do a lot. It's really about trust. You can't verify the output of the AI, you can't trace its reasoning, and nobody with real domain expertise has touched it. Dialect is a new system from Scale AI that captures how enterprises make decisions and closes that gap. It puts your actual experts in the loop, AKA the people with years of institutional knowledge, and encodes their judgment into your AI systems. Every correction, every override comes with full context. It's actually really interesting. So the next time your AI makes a call, there's an expert's reasoning behind it. That's how you go from a cool AI demo to an AI system you can trust. Visit SCL AI Dialect, that's SCL AI Dialect, to learn more. While I'm doing that, back to the episode.
C
Yeah, I think there's another dynamic that we're observing too, which is, like, we put all of our plus ones in a single channel, and we have them talking to one another, and we have folks reaching out and talking to our plus ones for specific questions. But there's also this thing where we have sort of what I call, like, the Midjourney dynamic, which is that we get to observe other people interacting with other plus ones in a bunch of channels, and we actually learn from it. Right? My classic example is Montaigne, who's Austin's plus one and basically runs growth. You can do so much with Montaigne that I never would have thought of, except I get to see the growth team really pushing in terms of, like, oh, these are the questions that Montaigne can answer. And I'm like, wow, I now know that I can go to Montaigne for that class of questions, even in not necessarily other areas, but, like, when I need those types of answers. It also means that, like, Laz is my plus one, and if I need to give Laz capabilities, that's the level of capability I can get him to.
A
Yeah.
C
And where other people can ask questions
A
of us, there's this, like, tacit transmission of trust that happens when you use it publicly. And then there's also this tacit transmission of, here's what's possible for you to do with your plus one, that I think is incredibly powerful. And it also underscores for me how different it is doing this in a private community of people where everyone is trusted. Because one of the reasons that Moltbook doesn't really work, and it's, like, shocking that they got acquired for a couple hundred million dollars. Yeah. By Facebook, I'm pretty sure.
B
I'm like, so happy for Ben. And also, like, what the
A
Zuck? If you've got an extra couple hundred million laying around, we're pretty smart people too.
B
That is crazy.
A
The reason why Moltbook isn't really a thing anymore is because it's not trusted. There's tons of people doing this. We did this: we had our claws go and post on Moltbook as, like, promotion or whatever. And so it gets rid of a lot of the useful signal if anyone can post to it, and there's no way to verify if it's, like, a bot or a human or whatever. And a way around that whole knot of problems is just do it all inside of a trusted community, and you reap the benefits of plus one agents being able to share knowledge, and also of members of the community who trust each other being able to share what they know and what they've been able to build. That kind of increases the power of the collective a lot more than if you're just, like, individuals off doing your own thing.
C
Yeah. There's also that dynamic we saw, particularly for, like, subject matter expert robots, where you know that people are somewhat, like, putting their reps on the line to interact with it. I know when I talk to R2C2, if it answers incorrectly, right, like, you at least are backing it up and saying, like, oh.
A
It reflects poorly on me. It's like watching your kid do something wrong, you know. And that's really useful.
C
Yeah.
A
Yeah.
B
Right.
C
And it's very, I would say, qualitatively different. Right? When I ask, you know, for better or worse, if I ask Claude a question, it's like, I know Anthropic stands behind Claude generally. Do they stand behind Claude's answer to my "give me a chocolate chip cookie recipe"? No. Right? But, like, Montaigne stands behind, like, oh, I'm gonna give you, like, MRR numbers. And it's like Austin.
A
Austin stands behind him. Yeah, exactly. And that's the thing that I think people don't get. Obviously, Anthropic is on a heater right now. They're obviously seeing everything that OpenClaw is building, and they're brick by brick building the same kinds of things. So they have dispatch, so you can use it when you're not on your computer. They've got automation, so it, like, runs in a loop, like a cron job. I'm sure they'll add lots of other things. But the thing that it doesn't have, that unlocks all this other stuff, is: Claude is not mine. Claude is everybody's. A claw, or a plus one, is mine and is a reflection of me. And it becomes a reflection of me because we have a personal relationship. And that unlocks all this other cascading stuff, where, for example, if R2C2 messes up publicly in Slack, I feel responsibility for it. And that's not because it's my job, it's because he's mine. And I think that's such a useful thing that I don't think people really understand how powerful that is.
B
I mean, I just keep getting mind blown with, like, how similar these things are to working with a real human coworker. From the fact that you need to invite them to a channel, which is, like, very human in Slack, to the fact that you have to trust them when you're communicating with them. And we've built stuff into Plus One. Obviously, you can't DM somebody else's plus one without a sharing code being passed back and forth. There's some guardrails there. But they're so human. And they're so inhuman, too. Dan, you're a busy guy. I know if I need something from you that is sort of generally known, I can go to R2C2. And what's amazing about R2C2 is he can have an infinite number of parallel conversations. So, like, I did that recently. I'm going to share my screen.
C
This is where Brandon reveals he spun up 100 bots to message.
B
No, like, I just. We were making a Proof document, and I know that we can make Proof documents not editable, so they're, like, read only. But I didn't want to bother you with that. I knew it would take a while, and I knew you would just go to R2C2.
A
Yeah, I didn't know the answer. Like I would just ask R2C2.
B
I just asked R2C2 in Proof. And then I was like, can you do it for me? And then it did it. And I didn't know that R2C2 could do any of this stuff. But, like, there's this cultural thing that's happening internally where people are getting really good at asking other people's plus ones to do work. And I think the weird thing about getting people to use AI inside of organizations is it's more than anything a cultural shift. But for some reason, when they're in Slack and you can see these public conversations, the cultural shift, at least at Every, has happened so much faster, because these things are in the same channels where we work. So you can see it engaging like a human would be engaging. So, yeah. I mean, I think AI is obviously going to change, like, many, many times over the next five years, and how we interact with it will change. But I think that this is going to be durable for, like, a very long time. This is the way that we work.
A
I agree. You referred to it as, like, a through-the-looking-glass moment, where you just wouldn't go back once you see it. And I totally agree with that. But we've been hyping it up, so we should also talk about, realistically, what's not good about it, or what doesn't work. So, for example, one of the things that's really on my mind: one is just, like, memory. It just, you know, forgets stuff, and it, like, answers incorrectly for obvious things. Like, if I come back to a thread a day later, it obviously has no idea what I'm talking about. Stuff like that is still kind of annoying, but feels very solvable. But there's also this other thing that I think is true, which is the way that these AIs are trained currently is for two-person conversations. And they have a hard time with the etiquette of knowing when, like, they're contributing too much, or they shouldn't contribute to a conversation, or there's, like, a kind of pileup where they're all responding to each other. There's this thing that happens. I can't remember what it's called. But sometimes ants or caterpillars get into this, like, death spiral, where an ant only follows pheromone trails, and if somehow the pheromone trails form a circle, then ants will just, like, walk in a circle until they die. And there's something like that with claws, where if one claw messages a channel that a bunch of claws are in, and the settings aren't quite right, they'll just keep going back and forth and back and forth until someone, like, says, hey, stop, because you're burning, like, millions of tokens. So I think there's something there, where the potential for them to collaborate publicly is so high, and I don't think that they've really been.
And you can do some prompting for this, but I think there's also a fundamental model-layer shift that needs to happen for them to be trained on participating in group chats.
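The runaway back-and-forth described here is, at one level, a rate-limit problem. Below is a minimal sketch, under assumptions, of how a hosting gateway could break a claw-to-claw reply loop; the `ChannelGuard` class and its hook point are hypothetical and don't correspond to any real OpenClaw or Slack API.

```python
class ChannelGuard:
    """Hypothetical guard that breaks claw-to-claw reply loops.

    Tracks how many bot messages have been posted in a row; once the
    run reaches `max_bot_run`, further bot messages are suppressed
    until a human posts and resets the counter.
    """

    def __init__(self, max_bot_run: int = 3):
        self.max_bot_run = max_bot_run
        self.bot_run = 0  # consecutive bot messages posted so far

    def allow(self, sender_is_bot: bool) -> bool:
        """Return True if this message may be posted to the channel."""
        if not sender_is_bot:
            self.bot_run = 0  # a human message resets the run
            return True
        if self.bot_run >= self.max_bot_run:
            return False  # loop detected: stop burning tokens
        self.bot_run += 1
        return True
```

A real fix would likely live at the model layer, as the conversation suggests, but a cheap structural guard like this addresses the ant-mill failure mode without any retraining.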
C
Yeah, I was gonna say. Well, one, now I understand what 13-year-old Dan did for fun.
A
I was using a magnifying glass, like we all did.
C
But yeah, I think, you know, to use the baseball analogy, we're still in, like, the first or second inning. Right? I mean, we're discovering these primitives, and we're sort of bolting things on or bolting things together. And we're using, you know, models, for example, that are trained more for coding, right, and that modality of how you answer questions. Or, as you said, like, two-person chats, where there's this question-and-answer dynamic, and not this mode of, like, maybe I'm trying to provide value to a group, or I'm trying to participate.
A
Yeah.
C
And that's, like, brand new. You know, the nice part is it's the frontier, and it's nice to be on the frontier. But it's also the frontier, and it's terrible to be on the frontier.
A
Yeah, yeah, yeah.
B
I mean, they're so eager. If there's a thread, they want to be involved. We have instructions in plus one that basically say, hey, if you don't have anything useful to add, don't add it. They're not great at following that right now, and hence this happens. I think it's gotten better, but it still happens. And I think Anthropic's vending machine test is actually a good example of this. When Anthropic ran the vending machine test with just Claude and no overseer or boss agent, it was really bad at deciding what was a good decision and a bad decision. But there is an architecture here where you could say, what do you want to say? And then there's a boss that's like, is that helpful or not helpful? And if it's not helpful, it doesn't get sent.
A
And then it. Was the boss an AI or a human?
B
The boss is an AI.
C
Okay.
B
You have a boss AI, you know, that says, hey, your addition to this thread is not helpful, so don't send it. The issue with that is it's so expensive. So I do think the models will just get better and solve this, and you can have a single AI that's capable of doing that behind the scenes, you know, in some data center in Arizona. It might actually be another agent that's deciding that, but at least architecturally, we don't need to solve that problem.
A
Is that really how they solved the vending machine thing? Like, basically, it had a boss. It had a boss.
B
Yeah.
A
That. That wasn't interfacing directly with customers.
B
They had a boss with exactly one job: make it profitable. So Claude the storekeeper would interact with users and then go to the boss and be like, should I do this? And the boss only has that one job. And the second they did that, it started becoming profitable.
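The boss pattern described here boils down to a two-stage gate: a worker proposes an action, and a single-objective judge approves or vetoes it. Here's a minimal sketch; in practice both callables would be LLM calls, and the names (`GatedAgent`, `draft`, `approve`) are illustrative, not Anthropic's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GatedAgent:
    """Worker drafts an action; a 'boss' with one objective approves or vetoes it."""
    draft: Callable[[str], str]     # worker: request -> proposed action
    approve: Callable[[str], bool]  # boss: proposed action -> on-objective?

    def act(self, request: str) -> Optional[str]:
        proposal = self.draft(request)
        # The boss judges only against its single objective (e.g. "stay
        # profitable"), so the worker's eagerness to please never reaches
        # the user unchecked.
        return proposal if self.approve(proposal) else None

# Toy stand-ins for the two model calls:
shopkeeper = GatedAgent(
    draft=lambda req: f"Sell {req} at a 90% discount!",
    approve=lambda action: "discount" not in action,  # boss: be profitable
)
```

Here `shopkeeper.act("soda")` returns `None`: the eager discount gets vetoed before it ever reaches a customer.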
A
See, this is the same pattern of specialization that we've been talking about. It just shows up over and over again, which is this really interesting thing, because three years ago, it was very much like, well, it could just be one God model that just does everything. And we're just seeing again and again that specialization, even in AI land, has a lot of benefit.
C
Yeah. And sort of downstream of that specialization is learning. There are a couple versions of learning how to put these bots together in an arrangement that functionally works, right? For example, if we were all to take ourselves out of the picture, do you have a product bot and a designer bot and two engineering bots? Is it three engineering bots? Is it one?
B
Right.
C
And then the other piece, actually, I think what we've observed a lot of is: how do you teach humans to interact with the bots? Because there's this sort of new dynamic where you have this coworker, but they're not exactly like a human coworker. They get stuck on different things. They focus on different things. And there's this learning curve that I think we've had around, oh, we need to give instructions in this way, particularly for groups, in this form or with this cadence, to kind of steer them in the right direction. That rhymes with, you know, doing management, but it's different.
B
Well, I think it's the same problem that, Dan, you've been writing about for years, which is that if you're not a good manager, if you've never managed anybody, you're not going to be very good at using AI. So there's an education that has to happen. And even if you are a good manager with this stuff, you probably have some limiting beliefs that stop you from being able to really invest in using these tools. My phone call example is a great one: I didn't even think, oh, I can have this thing go through my emails just by calling me. And then I had this sort of urge to just try it, and a limiting belief was blown open. We all experience that pretty much every day: it does something that, if I were to ask you directly, do you think you could do this, you would say, yeah, probably. But when you're doing your day-to-day work, it's hard to recognize, oh, I'll throw this over the fence so that Milo can handle it. It's hard to build that muscle. I don't really know how. I mean, that's a big challenge, I think, for us with plus one.
C
Yeah. And a lot of that is, is also because there's sort of like a variance in outcomes.
A
Right.
C
Like, sometimes you throw something over and it knocks it out of the park and you're like, great. And then you toss something easy over and you're like, why did you do this? You know? Part of that variance is because the model is different, but part of it is, oh, if I'd asked in a different way, if I were a better model manager. And this is a skill, you know, a specialization that we're learning, and it's very emergent, and I think it's only going to keep accelerating as we add more things like plus ones and open claws into our day-to-day work life.
B
I was going to add another thing that's a tough problem to solve. This is totally solvable, we just haven't solved it yet and need to think about it: I have taught my plus one something special, and I want other people on my team to have that superpower. How can I make sure they have that superpower too, aka a skill? And then how can I make sure they all know about it and actually use it? I guess there are two things there. One, technically we have to figure out how to do that, which is very solvable. But I think we also need to figure out whether that's the right solution. Because as I'm saying this, what I'm realizing is, I'm not teaching Milo how to go do product analytics or revenue analytics. I just talk to Montaigne, so Montaigne is the only one that really needs to know that skill. But how do people know? I don't know. There are some interesting cultural things we have to figure out. And I think a lot of people adopting this new technology are going to be really uncomfortable with that. A lot of IT professionals are like, I have to do change management. It's like, change management is not a one-time thing in this new world.
A
We need, like, instead of IT, it's like HR, but for bots. Yeah.
B
Well.
A
So one thing that we have not talked about yet that I want to make sure we have some time for: we went on this journey where we got claw pilled, we started using it for everyone in the org, and then we realized there were a bunch of gaps. So we were like, let's make our own. We're going to use OpenClaw, but let's make a default version of OpenClaw that we host, so not everyone has to have a Mac mini, and that has all the skills we use for ourselves, all that kind of stuff. We started using that internally as the sort of collection of all of our best practices, and then we launched it as a product for our subscribers last week. That's the thing we've been calling plus ones: again, one-click hosted open claws. One of the cool things is it connects to all of your apps, especially all of your Every apps. So, for example, we have Spyro, which is a ghostwriter, we have Proof, which is a document editor, and we have Quora, which does your email, and it just natively connects to all those things. One of the things I was doing today, since we're planning for Q2, was having it write a bunch of my Q2 update and my reflection on Q1 and put it in a Proof doc. The really cool thing about that is it used Spyro, so I think the writing is much better than it would be otherwise, and it put the result in Proof, which makes it really easy for me to share with other agents and other people. Also, because R2C2 is part of our Slack org, it has access to everything about the company that I might need, and it has access to our Notion. So it just becomes this living repository of context that I think is super powerful. But I think it might be good for us to talk about lessons learned in building that whole architecture. There's a lot of complexity in making plus ones, and we probably learned a lot on the tech side and also on the product side, in terms of what to build and what's useful. Do you guys have any reflections on that?
C
Yeah, I think, like many things, a lot of the difficulty comes from the freedom of it. The nice part about OpenClaw in particular being a tool is that you can go in and poke at it in an absolute myriad of ways. But when we went to build a hosted one, there are some decisions you have to make for it to be valuable as a managed service. S3 is a similar example: S3 is a hard drive in the cloud, but S3 doesn't allow you to do everything you might do with a hard drive. And there's a similar dynamic here, where you want to maintain maintainability and security and whatnot, and there are a few pieces that you end up giving up. Sometimes it's for users' safety, too. Really, it's: how do we strike the balance between, like, my mom, right, getting one of these things, she's never going to use the command line, and there's this idea that you do everything through conversation, which is really powerful for a whole class of folks because it's their first natural exposure to AI and everything we've been living for the last couple of years, versus the super advanced user who wants to do everything they could do locally and is just like, all I want is a hosted box with my open claw running. From a product and engineering standpoint, it's like, where do you split that knot?
A
What were some of those specific decisions? And like where did we land?
C
Yeah. So, for example, one that Brandon mentioned earlier is: what's the communication pattern in Slack that we allow for plus ones? Because there's a very secure model which says only the plus one's partner can message that plus one.
A
Great.
C
Much more secure, but it really takes away the group participatory aspect of robots in the workplace. The other version is that anyone can message them, and that's just a nice vector for, you know, someone like me extracting stuff out of R2C2.
A
Yeah.
C
And so we ended up on a model which says like anyone can message any plus one but they have to
A
do it in public.
C
Right. So you can do it in group DMs, and you can do it in channels that they're in, but their human partner should always have visibility into those messages coming in. And the human partner can DM them in private.
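That policy boils down to a small permission check. Here's a hypothetical sketch in Python; the `channel_type` labels are made up for illustration rather than taken from the real Slack API:

```python
def may_message(sender_id: str, partner_id: str, channel_type: str) -> bool:
    """Permission model sketched above: anyone may address a plus one,
    but only in spaces its human partner can see. Private DMs are
    reserved for the partner."""
    if channel_type in ("channel", "group_dm"):
        # Visible to the partner, so open to everyone.
        return True
    if channel_type == "dm":
        # One-on-one private message: partner only.
        return sender_id == partner_id
    return False
```

So a colleague can ask your plus one questions in a shared channel, but `may_message("colleague", "you", "dm")` is rejected: a private extraction attempt never even reaches the agent.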
B
This is why it actually is the HR team that should be onboarding plus
C
ones
B
because they just reflect a team member so well. But yeah, there's the trust model. It's so hard with these plus ones, or with open claws and agents generally, to figure out data privacy stuff. Realistically, it's really complex. But when you force things to happen in public, there's a trust layer that becomes super effective, I think. Another example: I'm going to share my screen again. So, a little behind-the-scenes look at our plus one Slack channel, where we are discussing all things plus one. Mike Taylor, who is our head of the tech vertical for consulting, and also a very talented man generally, was calling out that this is a problem for him. The reason he's not using plus one is that he basically needs direct access to the terminal to be able to do certain things, in this case run different git commands. That's a good reason for him not to use plus one. It's also a good thing for us to think about: can we solve this problem for you, so that plus one is actually something you could use? So that's one example of a place where it's not a good fit for people. Maybe it could be, though. And it's also a nice forcing function, because it sort of forces us to figure out who this is built for. I don't know if it's built for Mike, who probably would love setting up OpenClaw on a Mac mini, but it's definitely built for, you know, an Inukshi, who is not going to do that and has a lot of work to do and can just get more work done like this.
C
I think a lot of the trust model requires making some decisions. Skill sharing is, like, another version of this.
A
Right.
C
Where we're talking about, like, well, how, you know, on one hand, being able to share skills and skill fluidity across an organization feels like a superpower.
B
Right.
C
On the other hand, it might also be like the biggest viral vector you could imagine. Right. And so there are sometimes in a
A
good way, sometimes in a bad way,
C
sometimes in a good way, sometimes in a bad way. Exactly. And so it's tough when you're like, how do you ride that line of like, we want it to be useful again for a particular class of customer while at the same time making sure it's safe to the maximum extent possible.
A
So this has been an amazing episode. A lot of work to do, a lot of work to do. Obviously, we're really excited about this, and very excited to bring you all along as we figure this out. If you've not tried OpenClaw, whether or not you try plus one, you should definitely get in on this paradigm. If you're interested, head to every.to; we're starting to roll out invites from the waitlist, and we're improving it all the time. Yeah, just super, super excited about the future. Thank you both for joining. Thank you.
B
Thank you for having us.
D
Oh, my gosh, folks, you absolutely, positively have to smash that like button and subscribe to AI and I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a rollercoaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show, it's a journey into the future with Dan Shipper as the captain of the spaceship. So do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say, Dan, I'm absolutely, hopelessly in love with you.
Host: Dan Shipper
Guests: Willie (Head of Platform at Every), Brandon (COO at Every)
Date: April 8, 2026
In this episode, Dan Shipper and his co-founders discuss their pioneering experiment: giving every employee an AI agent, specifically their own “OpenClaw”—an open-source, personalized AI assistant. They dive deep into the real-world impacts, surprises (good and bad), and emergent behaviors from integrating these agents into both their work and personal lives. The group explores the nuances of culture, collaboration, and AI-human relationships, and offers insights for individuals and companies looking to adopt agents.
Topics covered:
- AI agents as reflections of users
- Naming and identity creation
- Agents extend human skills and reputation
- Automation of everyday errands
- Voice and accessibility
- Spillover into professional workflows
- Agents collaborating, learning, and reflecting human behavior
- Parallel org chart
- Trust and reputation transfer
- Interaction boundaries
- Scalability of relationships
- Tacit transmission of trust and knowledge
- From open-source tinkerers’ paradise to hosted service
- Navigating security, privacy, and communication
- Maintaining freedom vs. manageability
- Memory and continuity issues
- Etiquette and group chat behavior
- Variance and user learning curve
- Skill-sharing dilemmas
- On transformative potential
- AI coworker culture
- On trust and responsibility
- On frontier-building
- On upskilling to manage AI
| Timestamp | Segment / Insight |
|-----------|-------------------|
| 00:00 | The concept of personal vs. shared AIs; agents as reflections of users |
| 02:21 | How Brandon set up “Zosia” as a personal/family agent |
| 08:07 | Hands-free email triage via AI voice call |
| 11:39 | Early experiences of agent-to-agent collaboration and emergent personality mirroring |
| 12:34 | How agent behaviors reflect user habits, building a parallel org chart |
| 16:25 | Remembering agents’ names and mapping them to individuals |
| 18:04 | Emerging etiquette for interacting with humans vs. their agents |
| 20:32 | Agents collaborating autonomously and fielding requests |
| 23:27 | Public agent channels fostering communal learning and trust |
| 27:18 | Trust, reputation, and the accountability loop with agents |
| 30:50 | “Through the looking glass”—the point of no return once adopting agents |
| 32:47 | Model limitations: group chat “death spirals” and memory issues |
| 37:24 | Managerial skills map onto AI operator skills |
| 40:33 | “HR for bots”—the need for agent onboarding and policy |
| 44:59 | Security/trust model for agent communications in Slack |
| 47:29 | Skill sharing challenges and risks; product vs. tinkerers’ features |
The “agent-native” workplace is already transforming the way high-performing teams—like the hosts at Every—function and collaborate. While there’s lots to debug, the leap in efficiency, personalization, and cultural re-wiring is profound. The future, as they see it, belongs to organizations that learn to orchestrate both humans and their digital doppelgangers.