
Explore Clawbot, OpenClaw, and the AI for Individual Rights program as we demystify LLMs, open vs. closed models, vibe coding, and how AI is empowering activists and creators worldwide.
A
You're listening to TIP. You're listening to Infinite Tech via the Investors Podcast Network. Hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. This show is not investment advice. It's intended for informational and entertainment purposes only. All opinions expressed by hosts and guests are solely their own, and they may have investments in the securities discussed. And now, here's your host, Preston Pysh.
B
Hey, everyone. Welcome to the show. I am here with Alex Gladstein, Justin Moon. Guys, it feels like the world is moving at 10x the speed and pace that it was just a couple months ago. I don't know if you guys are feeling the same way, but things are accelerating. Oh, my God.
C
I listened to the show with Pablo and he said, it's compressing time. And I'm like, that's it?
D
Yeah.
B
By the way, if you're listening to this podcast and you haven't listened to the show from two episodes earlier, where we were talking about the Clawbot, or OpenClaw as it's called now after the rebranding, I would highly encourage you to go back and listen to that conversation as well, because it's going to be pertinent to some of the stuff we're talking about here.
D
Yeah. And I just spent three days with Pablo, so I applaud you for bringing him on. I'll be sharing some of his insights as well from the work we just did together over the last few days.
B
Amazing. Amazing. So, Justin, where do we even start this conversation? Because the conversation I had with Trey and Pablo was, like, we were already going 100 miles an hour with it. And for the listener who was listening to it, I think their takeaway might have been, oh my God, what is happening? I don't even know what they're talking about right now. So maybe we throttle things back and slowly bring everything up to speed. So take it away.
C
I agree. It was a great episode and I really enjoyed it. I could almost keep up. I could keep up with it only because I know them, and I really know Pablo well. But I feel like for the drive-by listener, it was like trying to get on a fully moving train, like one of those Japanese trains. It's asking a lot. So I want to help explain at least how I understand what's going on, what the hell's happening. And if you understand Clawbot, or OpenClaw, the thing in the news right now, if you understand that, you kind of understand what's going on. I was thinking about how to break it down into basics, and I realized you have to introduce a lot of foundational ideas first that most people don't quite get, and that impairs their ability to understand what's going on. So I'm going to introduce about ten ideas, I have a bunch of notes here, that I think you have to understand in order to really understand what's going on. But I'm not going to use any jargon. I'm going to try to simplify it and make it understandable to people who don't know anything about this. Okay, so that's my goal. It's a bit of a high wire act, so it might not go well, but we'll see real fast.
B
Before you kick that off, would you say, from a really zoomed-out-from-space kind of view, that what all the excitement is about right now is this: everybody's accustomed to using cloud-based large language model AI. They type into a chat and they get an answer back. But now we're at this pivotal point where the tech is so advanced that people can run it locally in a way that's actually going to be quite useful. We haven't had the hardware and we haven't had the software models to do that until now. And that's really the clear break we're experiencing: now people can run it locally without even tapping into a cloud-based provider.
C
The significance of OpenClaw to me is that it's a big step towards self-sovereign, user-controlled AI. It's not a full step all the way there, but it's a big step in that direction. And it's a step in that direction from a couple different angles. So I want to try to tease that out for people, but I need to introduce some basic ideas just to make it make sense.
B
Okay.
C
There are a few things here, and you can understand why. Like, we've talked a lot with HRF about the importance of vibe coding. That's going to be one of the takeaways here: vibe coding enabled this, and it's going to enable a heck of a lot more over time. So just zooming out, what is an LLM, right? We've got to start from the very base. What is an LLM? To me it's a new way of using computers. So traditionally, a computer program, desktop apps and stuff like that, is like a recipe, a recipe for a computer. It's something that's typed out with exact instructions by a human, and it tells the computer exact steps to follow to do something. So anything that can be broken down into steps can be represented in a traditional computer program. Like arithmetic. Traditional computers are very good at arithmetic. They're very bad at telling jokes, because you can't encode the steps of a good joke. In a sense, what makes it funny is that it's unexpected, right? So zoom out. One way to think of an LLM is as a new type of computer program that's bad at the things traditional computer programs were good at, like arithmetic, but good at all the things they were bad at, like creating art, right, or telling a story.
D
Right.
C
Or coding. So that's the high-level framing: in a sense, OpenClaw is a new type of computer. To me, that's what it is. It's a new way of using computers. It's a new type of computer program. I'm assuming you've all used an LLM but have no idea how they work. So basically there are three steps to an LLM. The first is called pre-training. What it does is it downloads all the text on the Internet and compresses it into a single file. That's the fundamental thing of what an LLM is. You take all the information on the Internet and you try to lose the least important parts of it and only keep the most important ideas and principles and facts. What you get at the end is a file that, given half of an Internet document, can complete it. It can do a best-effort job of getting half of a Wikipedia article and writing the second half. That's all it can do. So it has a lot of intelligence, but it's not actually useful yet, because when does a normal person need to complete an Internet document, right? And that file, it's a file. That's what a model is. If you've heard of a model, that's what a model is: a file. If you've heard of weights, weights are what's in the file. That's what weights are in AI. And as for an open model versus a closed model: an open model is one where you can download that file, like DeepSeek or Kimi. Generally many of them are Chinese, and the American ones are closed, meaning generally you can't download the file. So generally the closed ones are a little smarter and the open ones are a little more self-sovereign. The closed ones are generally American; the open ones are oftentimes Chinese.
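To make "compress text into a file of statistics, then complete documents" concrete, here is a toy sketch in Python. It is a crude word-bigram counter, nothing like a real transformer, and the corpus, names, and greedy completion strategy are all invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "all the text on the internet".
corpus = (
    "the model reads the text and the model writes the text "
    "the model reads the prompt and the model writes the answer"
).split()

# "Training": count which word tends to follow which. This table of
# statistics is the toy analog of a model file full of weights.
weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def complete(prompt_words, n=5):
    """Given the start of a 'document', greedily predict what comes next."""
    words = list(prompt_words)
    for _ in range(n):
        followers = weights.get(words[-1])
        if not followers:
            break  # never seen this word during training
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(complete(["the", "model"], 3))
```

The point of the sketch is Justin's framing: after "training", all the artifact can do is continue text, which is impressive but not directly useful until post-training shapes it into an assistant.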
B
Let's pull on that thread, because I think for somebody who's hearing that, it makes no sense to them. I have an opinion on this, but I'm very curious to hear yours. Why are the Chinese labs the ones releasing these open models, while in the US, where you would think that would be taking place, you're not seeing anything of the sort? Why is that the case?
C
To me, the biggest part is the capital structure of the companies doing it. OpenAI and Anthropic have these huge capital structures, and they need to make a lot of money fast, and they're on the frontier, and they need barriers to prevent competitors. Not releasing the model weights is the biggest barrier, just from a business point of view, no extra thinking needed. I think that makes sense. Another thing is, I bet the CCP likes that there are these open models out there that get embedded into, like, Airbnb. Airbnb came out and said, hey, we use Qwen for all kinds of stuff, it's great, right? It's a way for the CCP, basically, to embed Chinese values in American tech software. And also, you know, America is the leading one, so it's kind of easier: the Chinese economy over the last 10, 20 years has done a lot of imitating in domains that reward imitating, right? So that's another thing, it's just something they're already very good at, reverse engineering. Those are three things. Alex, do you have anything to add there?
D
Yeah, I would just say that at the moment they judged that they could not compete on the proprietary side, and that they could introduce maybe some chaos and some opportunities for themselves by going this route. However, going that route, kind of like a Sputnik moment, as we know, has opened a whole new door. And I think it's actually been good for the world at large that you have other geopolitical powers pushing open source options. It's going to eventually force the American companies to do the same. So you're going to have pressure, just like you had pressure to add encryption to devices and to apps after the Snowden files. Over time there's going to be pressure on American companies, despite profits, to have open arrangements and open products, and, we'll get to this at the end of the recording, hopefully privacy-protecting ones too. But yeah, that would be my take.
C
One small note. I want to recap from a talk that was given at our yearly AI summit in San Francisco by this guy Rames. He mentioned how, a year ago, we thought there would be a runaway takeoff leader in AI, and that didn't happen. They're all getting closer and closer. It's getting more and more competitive, and the closed models and the open models are starting to converge, and now it's getting very competitive. So this is great for user sovereignty. It's trending in a way where you don't have, like, a single overlord. It's a very competitive dynamic, which I think is great for freedom.
B
One of the things that I think also makes it more competitive is when you start running these models that are not on the forefront of intelligence, but you combine these lesser models with persistent memory run locally. The performance that you get for what you actually need is a lot better than a premier model, because it's continuing to learn, and it's not forgetting all those past interactions the way a frontier model does, with a new context window every single time you open it up and very limited memory. So that persistent memory is one of the things that I think is massive for self-sovereignty, and for getting away from these large language models that are sucking up all the data and potentially using it against you. You're going to get better local performance. Now, on that original question, I've asked the AI this exact question: why are we seeing the open source models coming out of the places we would least expect? And it gave me a really surprising answer: they're looking at the game theory and where this is all going, and what they're trying to do is ever so slightly steering the results of what you get out of the model.
D
For.
B
Let's just take an example: Tiananmen Square. If you're training the model, you can have that as part of the initial data input before it compresses everything into the model, and it adjusts the weights ever so slightly. And if those are the models that everybody starts to build on and run locally, you get slightly different results than if somebody is feeding it the base, everything that's ever been written on the Internet, minus these things that we really don't want in there, which get removed when we compress the model. I found that to be really interesting, and if true, there's a lot of foresight in there to make sure that you get your model out there. Now, at the end of the day, I can run that model locally. I can ask it a question that maybe isn't in its weights. I can say, that's just wrong, that's not truth, go out there and research more facts on the Internet, using Tiananmen Square as the example.
D
Right.
B
And then my local model now knows it, even though it's not part of its weights, because I've steered it in a different direction. So in the end it doesn't matter. But I.
C
So I want to make one point here before moving on. We did a hackathon recently where we put together activists from HRF with freedom tech developers from my bitcoin meetup in Austin, basically. And one of the interesting projects came from an actual Tiananmen Square student organizer, Dr. Young. They did a project where they basically made a benchmark for all the different LLMs, comparing their answers on human rights questions, like Tiananmen Square, which is very interesting, and we look forward to that getting published. Let me move on, because I have a lot here. So: where an LLM comes from and how it's used. I talked about pre-training: you take the Internet and you get it into a file. Then there's a thing called post-training, which turns it into a useful assistant. It gives it a bunch of examples: here's how to be useful to a person, here's how to be a coding agent, right? And so now you have something that goes from being able to complete a document to being able to answer questions, be your therapist, write some code.
D
Right.
C
And so that's how the model happens. That's it. So then the question is, how do you use it? And the word for that is inference. You've probably heard that word. It took me a while to remember what it means. Inference just means when the model is run. And this is something that you can hire someone to do in the cloud for you, like ChatGPT or Anthropic, or you can do it on your own computer, if you have a computer. You can use something called Ollama, right? And what inference is: you run that model, basically, and you can put text in and you get text out. It's just like the ChatGPT interface. That's what's happening behind the scenes: text in, text out. And the one problem with open models is you need about a $20,000 computer in order to run the big ones. So that's one of the tough things right now. That's a big technical barrier to real individual user sovereignty in AI, and it's something we're all kind of working on. So that's what inference is. Okay, now I want to talk about another word that's very, very important. This may be the most important one: context.
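For listeners who want to see what "text in, text out" looks like against a local model, here is a minimal Python sketch using Ollama's local HTTP API. The port, endpoint, and field names match Ollama's documented /api/generate route as best I know it, but verify against the current docs; the model name is just an example:

```python
import json
import urllib.request

# Ollama serves locally downloaded model files over HTTP on port 11434
# by default. Running it yourself IS inference: no cloud involved.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """The whole request is just: which model file to run, and the text
    to put in. What comes back is text."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def run_inference(model, prompt):
    """Send the prompt to the locally running model and return its text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running with a pulled model):
# print(run_inference("llama3.2", "Complete this: the capital of Norway is"))
```

The point is how thin this layer is: the ChatGPT website does essentially the same thing, just against someone else's computer instead of yours.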
B
Justin, I'm sorry to slow you down. People heard on the episode with Trey and Pablo that Trey was running his off of a Raspberry Pi. And so they're like, hold on, you just told me it costs $20,000 to run it locally. I just want to explain to the listener: the way Trey's OpenClaw works on his Raspberry Pi, which is, you know, three or four hundred bucks, is that he's making API calls to Claude or to OpenAI to do the inference on their cloud, and then it's giving a result back.
D
Right.
C
He has an agent, which we'll get to. He has an agent running on a Raspberry Pi. But the inference, the thing that's actually doing the smart AI stuff, is in a cloud. Yep. So there is a step towards user sovereignty, because what ChatGPT was trying to get us to do a year ago was run the agent in the cloud too. So this is like halfway there. It's a huge step forward, right? Running the agent locally, it can save memories locally, and you have the option for certain things to use a local model too. So it's a great half step forward. I mean, it's ten steps forward, but it's not all the way to the goal.
D
A huge win for open source. And it changed the game.
C
Yeah.
D
Let's go.
C
Okay. So we defined the word inference. That's one word you need to know. And context is maybe the most important one. So, context. It took me a while, and I mean, I'm very technical, it took me a while to actually understand what the heck people were saying, probably six months to actually understand it. And the key thing to understand is that LLMs are something we call stateless. We talked about memory earlier, but on a deep, technical level, there is no memory at all. Every time you interact with an LLM, you start from scratch. All it remembers is the training, the pre-training. That's it. Okay, so if me and Preston use ChatGPT 5.2, we are getting exactly the same model, right? If there are some memories that are specific to Preston, they come from elsewhere. They don't actually come from the model. We get the exact same thing. That's an important thing to understand.
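Justin's point that LLMs are stateless can be shown in a short sketch: any "memory" is the client's job, and the client resends everything on every turn. `call_llm` here is a hypothetical stand-in for a real model API, and the role/content message format is just the common chat convention, not any specific product's:

```python
# The model itself is stateless: every call starts from scratch. Any
# "memory" lives in the client, which resends everything each turn.

SYSTEM_PROMPT = "You are a helpful assistant. Preston likes short answers."

history = []  # the client owns the conversation, not the model

def chat(user_message, call_llm):
    # Build the full context: system prompt + every prior turn + new message.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})

    reply = call_llm(messages)  # the model sees ALL of it, every single time

    # Remember both sides locally so the next turn can resend them.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
    return reply
```

Two users with the same model get different answers only because their clients send different context: different histories, different system prompts.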
B
So, Justin, would it be safe to say that this context. So we're using. You and I use the same model, but the header that's put into the start of that chat is what's different and so.
C
Exactly.
B
So if you have past memory, like "Preston likes short answers, he doesn't like a long answer," that little snippet or header is inserted, and you don't see it getting inserted into the context window, but it's in there. And so that's how we might get different answers: its past memory of us and how we use it is in that header it's seeded with before you enter the context window.
C
Exactly. So if me and Preston have the same model, how are we getting different answers? I mean, you can see this yourself. Let's say you use ChatGPT. If you're in a long conversation, it will remember things from earlier in the conversation, but it usually won't remember things from different conversations, though every once in a while it will. So that's the big question: if LLMs are stateless, how are these two things that we've all observed true? And the answer is that in every round of conversation, let's say you open ChatGPT and go through ten back-and-forths, on the 11th one it doesn't just send the question you asked the 11th time. It sends that, plus the 10th, plus the response to the 9th; it sends the entire history every single time. And there's also one extra piece that you don't see, which is called the system prompt. This is the header that Preston was talking about. Think of it as the Ten Commandments. This is something that the developer, basically, ChatGPT, or sometimes the user themselves, gets to put in there. It's instructions for how the model should behave, which the model doesn't always follow, but it tries to. And it's important that it be the Ten Commandments and not the ten thousand commandments, right? What we were doing a year ago was the 10,000 commandments: we'd write a whole essay at the beginning, and we'd basically overload the model and it couldn't do things. A lot of the development over the last year that has enabled OpenClaw and things like it is that we figured out a way to only give it ten commandments and basically derive the extra things, do just-in-time learning to figure out the rest, without overloading it at the start. So what context means is: the conversation, the entire conversation. Everything that has been said previously in that session, including the magic system prompt at the top. Let's take a quick break and hear from today's sponsors.
B
All right, I want you guys to imagine spending three days in Oslo at the height of the summer. You've got long days of daylight, incredible food, floating saunas on the Oslo fjord. And every conversation you have is with people who are actually shaping the future. That's what the Oslo Freedom Forum is. From June 1st through the 3rd, 2026, the Oslo Freedom Forum is entering its 18th year, bringing together activists, technologists, journalists, investors, and builders from all over the world, many of them operating on the front lines of history. This is where you hear firsthand stories from people using Bitcoin to survive currency collapse, using AI to expose human rights abuses, and building technology under censorship and authoritarian pressure. These aren't abstract ideas. These are tools real people are using right now. You'll be in the room with about 2,000 extraordinary individuals: dissidents, founders, philanthropists, policymakers, the kind of people you don't just listen to, but end up having dinner with. Over three days, you'll experience powerful mainstage talks, hands-on workshops on freedom tech and financial sovereignty, immersive art installations, and conversations that continue long after the sessions end. And it's all happening in Oslo in June. If this sounds like your kind of room, well, you're in luck, because you can attend in person. Standard and patron passes are available at oslofreedomforum.com, with patron passes offering deep access, private events, and small group time with the speakers. The Oslo Freedom Forum isn't just a conference. It's a place where ideas meet reality, and where the future is being built by the people living it.
E
Every business is asking the same question: how do we make AI work for us? The possibilities are endless, and guessing is too risky. But sitting on the sidelines is not an option, because one thing is almost certain: your competitors are already making their move. With NetSuite by Oracle, you can put AI to work today. NetSuite is the number one AI cloud ERP, trusted by over 43,000 businesses. It's a unified suite that brings your financials, inventory, commerce, HR, and CRM into a single source of truth. That connected data is what makes your AI smarter, so it doesn't just guess. It knows how to intelligently automate routine tasks and deliver actionable insights. Let's see your competitors do that. Whether your company earns millions or even hundreds of millions, NetSuite helps you stay ahead of the pack. If your revenues are at least in the seven figures, get NetSuite's free business guide, Demystifying AI, at netsuite.com/study. The guide is free to you at netsuite.com/study. That's netsuite.com/study. 
There's one investment making the headlines lately that's been hiding in plain sight. We first found out about this group around 2020, and investors have allocated about $1.3 billion since then. It's an asset class that's outpaced the S&P 500 overall, with near-zero correlation since 1995. Not gold, real estate, or crypto. It's a strategy typically exclusive to the ultra rich that's moved independently of other popular markets. But now you can go to masterworks.com/billionaires to invest in shares of multimillion-dollar artwork by artists like Banksy, Basquiat, and Picasso. Masterworks became a unicorn startup back in 2021, led by a serial entrepreneur and top-100 art collector. They've posted 26 sales to date with annualized net returns of 14.6%, 17.6%, and 17.8%, with tens of millions of dollars paid out. As one of our listeners, you can go to masterworks.com/billionaires for priority access. That's masterworks.com/billionaires. Past performance is not indicative of future returns. See important disclosures at masterworks.com/cd. All right, back to the show.
B
I'm going to pause here and really foot-stomp why this is such a big deal. You're about to see commercials at the Super Bowl from Claude, basically banging OpenAI over the head, because OpenAI recently said they're going to start doing advertisements in their service. Let's really pull on this thread and go deeper. If you're OpenAI and you have an advertiser that's doing really well with you, because they've got a high-margin product and you're able to convert on it, OpenAI could potentially, and I'm not saying they're going to do this, but there's an incentive for them to, start blindly inserting into the header things that steer the user toward wanting said product that's being advertised. And you would have no idea that that's in the header.
C
Yeah.
B
And this just goes to the whole point of, like, why we're having this conversation, which is local AI is going to be very important for you to see the world clearly because you won't know that you're being very indirectly subliminally steered in a certain direction because you have no idea what's going into that header.
C
Yeah. Like the AI experience will get steered by something. Do you want it to be an advertiser? Do you want it to be a big tech company? Do you want it to be another government? Or do you want it to be you? Right, like, we want it to be you.
B
Alex, do you have anything to add on that particular point? Because, I mean, this is really why you're so passionate about running local AI.
D
Right, well, let's let Justin finish the context.
C
Yeah. And then.
D
And then I. I have my piece and I think it'll help pull things together.
B
Okay, keep going, Justin.
C
Yeah, yeah. So think about it from a bitcoin point of view. As bitcoiners, we understand scarcity. That's one mental model bitcoiners really get. So apply that to AI: what's scarce? In training, it's the data. You need data, you need energy, you need computers. In inference, when you actually run it, it's context. Context is the scarce thing, that conversation. The longer it gets, the more confused the AI will get. And at a certain point, you run out of context and you just have to start over. That's called compaction, and it makes everything worse. So that's the big engineering battle. And it's traditional engineering; it has nothing to do with AI, really, just traditional software engineering. Over the last year we've all been trying to figure out how to get better at managing this, and that is what has led to good AI agents now that we didn't have a year ago. It's a big part of it. The models got smarter, but the context engineering also got way smarter. So I want to discuss next what an agent is, because now we're getting close to OpenClaw. OpenClaw is an agent. An agent to me is a marriage between these new and old computer programs. The old stuff is, you know, how you control your desktop computer or how you run a browser, stuff like that. And the new one is an LLM, which can generate text, which is really smart and in some sense has all the intelligence of the Internet baked in. So how is an agent a marriage between these two? First of all, an agent is the thing that makes requests to an LLM. So the ChatGPT website, in this definition, would be an agent. Claude Code, which is a desktop or terminal program you can run that will write code for you, or Replit, those are agents. So it's something that makes a bunch of requests to some AI and also has the ability to use what we call tools.
A tool is how you get it to do something. All an LLM can do is spit out text. It can't do anything in the world. So the question was, how do you make something that can only spit out text control a browser or do a web search? And so what we did is we invented this idea called a tool. What a tool is: in the system prompt, you tell it there's a special marker that means "I want you to search this on the web." So think of it as a sentence that says SEARCH THIS in capitals, then the query, and then it ends with SEARCH THIS in capitals as well. If the LLM responds with that to your question, the agent will say, oh, I know that's a special marker. I've got to do something special with it. I'm not going to show that to the user. I'm going to go fire up Google, do a web search, and then send the result back to the LLM. So this is what an agent does: in the system prompt, you teach it tools that the agent software itself will intercept and do special things with, like search the web, control a browser, send a message on Telegram, and all the other things that OpenClaw does. That's called a tool. And once we had that, we had a way of augmenting an LLM to be able to do stuff in the real world. So maybe you've heard of MCP. MCP was something that blew up about a year ago because it was a way to publish a bunch of these tools and share them. In the beginning, ChatGPT tried to dictate what tools you could use. They said, we have our tools and you can only use these. And everyone's like, screw that, we want to use any ones we want. And so MCP was invented as a way to share tools, so the user can choose which ones they want.
And the problem with it: have you ever heard of just-in-case learning versus just-in-time learning? Just-in-case learning is like getting a college degree to solve a problem. Just-in-time learning is, you have a problem, you go to YouTube and learn how to solve that problem, and you solve it. A year ago, we were doing just-in-case prompting with MCP. We'd say, here are 10,000 commandments, just in case you need them. And then by the first round of conversation, the AI is already kind of confused, because you've told it way too much. Now a thing called skills, which I'll talk about next, is more like just-in-time prompting. You say, here's a bunch of manuals you can use if you need them. They're over on that shelf over there. Don't read them yet, but you can see the titles on the bindings and when you should use them. That's the difference: MCP was like just-in-case prompting, and a skill is like just-in-time prompting. And this was kind of a revolution in context engineering, because you could expose many more things to an LLM without overloading its context window.
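The marker trick Justin describes, where the agent intercepts a special token in the model's output, runs the tool, and feeds the result back, can be sketched like this. `fake_llm` and `web_search` are stand-ins invented for illustration, not a real model or a real search API:

```python
# The system prompt teaches the model a special SEARCH_THIS marker; the
# agent intercepts it, runs the tool, and hands the result back.

MARKER = "SEARCH_THIS"

def web_search(query):
    return f"top result for {query!r}"  # pretend search engine

def agent_loop(user_question, llm):
    reply = llm(user_question)
    # The user never sees marker text; the agent intercepts it.
    while reply.startswith(MARKER):
        query = reply[len(MARKER):].strip()
        tool_result = web_search(query)                # real-world action
        reply = llm(f"Search result: {tool_result}")   # back to the LLM
    return reply

def fake_llm(text):
    if text.startswith("Search result:"):
        return "Answer based on " + text   # model now answers normally
    return f"{MARKER} bitcoin price"       # model decides it needs a search

print(agent_loop("What is the bitcoin price?", fake_llm))
```

The same loop shape covers any tool: swap `web_search` for "control a browser" or "send a Telegram message" and the agent is doing what OpenClaw does.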
B
That was extremely helpful for me personally, because I've seen both MCPs and I've seen skills, and I know so many.
C
There's so many. Like, if you feel overwhelmed by all the jargon, like, there's just so much. There's so much.
D
It's kind of like in the Matrix when they plug the different things into Neo's head, right? Yeah. What skill do you want? And you're going to have a little fricking library.
C
Yeah, very similar. So let me tell you more about what a skill is, because skills are a foundational thing that OpenClaw is built on. An MCP was like, here are 50 different things you can do; you've got to figure out how to use them and when to use them. It was asking a lot of the LLM to figure out the user's intent and when to do stuff. Skills are based on an insight: a skill is a mapping from a user intent to an action. When the user wants X, you do this, right? You only see that at the beginning, in the system prompt, and when the user declares the intent, you go look up the manual and figure out how to do it. And what is the manual? The manual is the skill. A skill is kind of like an analog to an app; that's the closest thing in the old world. And the skill is a folder. That's a very traditional thing, a folder. You've seen many folders on your computer. It has two types of content. One is text files containing prompts, meaning just a plain-English description, like: hey, when the user wants to book a flight, first you open the browser, then you log in, and the user has to enter their password, you need to wait for that, and then go to kayak.com. So it's a prompt. But it's not only a prompt, because sometimes if you give it an open-ended task like that, it won't be able to do it. Parts of this are better done by a traditional programming technique, a computer program. That's the second thing that goes in a skill folder: you can have programs. So you could have a program that can specifically open kayak.com, and can specifically find where to put the credit card information, and can specifically do a bunch of the actual steps that are involved in booking a flight. It can control the Google Chrome browser, for example, and do all these things.
And the prompt would say, hey, they prefer aisle seats to window seats, right? It'll have a bunch of preferences like that. So it's like a compact manual that maps a user intent to an action, and it leverages prompting, which is the new type of computing, plus a simple computer program, which is the old type. So to me it's a good marriage between the two. And that's why it's so powerful: it allows these LLMs to more effectively use a computer to accomplish what the user wants.
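The skill folder described above can be sketched in a few lines of code. This is an illustrative mock, not OpenClaw's real layout: the folder name, `SKILL.md`, and `search_flights.py` are assumptions, but the two-part structure (plain-English prompt file plus a traditional program) is exactly what's described.

```python
# A minimal sketch of a skill folder: a plain-English prompt file that maps
# an intent to steps and preferences, plus a small program for the parts
# better done by traditional code. All file names here are hypothetical.
from pathlib import Path

def make_skill(root: Path) -> Path:
    skill = root / "book-flight"
    skill.mkdir(parents=True, exist_ok=True)
    # 1) The prompt: the "manual" the agent reads when the intent comes up.
    (skill / "SKILL.md").write_text(
        "When the user wants to book a flight:\n"
        "1. Open the browser; wait for the user to log in.\n"
        "2. Go to kayak.com and search the route.\n"
        "3. Prefer aisle seats to window seats.\n"
        "4. For the deterministic steps, run search_flights.py.\n"
    )
    # 2) The program: deterministic automation the prompt can delegate to.
    (skill / "search_flights.py").write_text(
        "print('searching flights...')  # placeholder for browser automation\n"
    )
    return skill
```

Only the prompt file needs to be summarized in the system prompt; the rest of the folder is read on demand.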
B
It's more efficient, it's faster, it's not bloated. Your context window probably won't fill up nearly as fast.
C
It only fills up once the user wants it to, but not before. So it's much more efficient.
B
Yeah, yeah.
C
And so that's kind of like one thing here: we figured out a hierarchy for these types of things, right? So in Clawbot, it saves a bunch of memories, but it doesn't look at the memories until they might be relevant. It builds file system hierarchies to only expose what the user needs right now, but to keep everything else discoverable for the future. That's been a big thing in context engineering. We've been adding hierarchy to all these things that we used to just dump in there just in case. Okay, so one more concept and then it'll be OpenClaw: vibe coding. What is vibe coding? This has been a really big thing. We just had the one-year anniversary of it.
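The just-in-time memory lookup described above can be sketched as two tiny functions. The file layout and names here are assumptions for illustration: only a cheap index of memory titles goes into every prompt, and a memory body is loaded only when its topic actually comes up.

```python
# Sketch of just-in-time context: the system prompt carries only an index of
# memory files; full bodies are read on demand. Layout is hypothetical.
from pathlib import Path

def memory_index(mem_dir: Path) -> str:
    """One line per memory file: cheap enough to sit in every system prompt."""
    return "\n".join(p.stem for p in sorted(mem_dir.glob("*.txt")))

def recall(mem_dir: Path, topic: str) -> str:
    """Load the full memory only when it looks relevant to the request."""
    return (mem_dir / f"{topic}.txt").read_text()
```

The contrast with "just in case" prompting is that the index might be a few dozen tokens, while dumping every memory body into every prompt would burn the context window before the conversation even starts.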
D
Happy birthday. Vibe coding.
C
Happy birthday. Vibe coding.
D
Yeah.
C
So normally when you write computer programs, you have to really have the blinders on. You're typing text into a file, doing really logical operations, and if you get one semicolon wrong, it breaks. It's very, very focused, anal, you know. And vibe coding is the complete opposite, where you put your feet on the desk and you're like, hey, computer, build me a movie player app that can download from my Dropbox, and you just watch it do it. Right? And so this became sort of possible a year ago, and it's become very effective in the last three months. Like, very.
D
Yeah, it's.
C
Yeah. And so let's just talk about what is actually happening there. What happens is you say, hey, why don't you write a program for something, to Claude Code or Replit, right? And then it might come back like a normal ChatGPT conversation, ask you some clarifying questions, try to clarify your intent a little bit, and then it will go into a loop. A loop is just a programming term that means to do something over and over again. And so it will do a bunch of these tool calls. It will do a tool call to do a web search on something you might have said. Then it will read some files in the existing project, then it will write a file, then it will edit a file. And once it thinks it's working, it will do a tool call to run the program so you can interact with it, and it might try a tool call to test it manually itself. So it's just looping, using these tools and skills over and over again until it judges, hey, I think I accomplished the thing. And loops have a termination condition: you do it until some condition holds. In vibe coding and coding agents, that condition is a response from the LLM that doesn't have a tool call in it. So every response is just a little message, with a special marker when it wants to do something, and at the end it's just a plain text message. That gets displayed to the user, and the loop exits. And if you're lucky, you have a working app that does exactly what you wanted. A year ago you often didn't, but now you often do.
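The loop just described can be sketched in a dozen lines. The `llm` callable and the message shapes here are stand-ins, not any real vendor API; real coding agents wire this to a model endpoint and sandboxed tools, but the shape is the same: loop until a reply contains no tool call.

```python
# Bare-bones agent loop sketch. Message format and tool names are
# hypothetical; the termination condition (a reply with no tool call)
# is the one described above.

def run_agent(llm, tools, task):
    messages = [{"role": "user", "content": task}]
    while True:
        reply = llm(messages)  # model returns either a tool call or final text
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # termination: plain text, shown to user
        # execute the requested tool (web search, read/edit file, run program)
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "name": call["name"], "content": result})
```

Each tool result is appended to the conversation, so the next model call sees everything that has happened so far; that accumulating transcript is exactly the context window that skills and just-in-time prompting try to keep lean.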
D
And some of the agents update you along the way. They're showing you, oh, we did this, cross that off. They can be quite transparent. So it's exactly what he's saying there. You can see how it's working.
C
And you can steer it along the way if it's going in the wrong direction. You say, I want blue, not purple, right? So you can control a lot. And now if you go on Replit, for example, you can have a pretty good time with zero technical understanding. I encourage everyone to do it, because it will give you a different lens. It gives you a lens into what the future is.
B
Replit, like Cowork, like Claude's cowork, kind of.
C
So Replit is a website that you can go to and ask to build an app. And it's very good at building an app. It's also very good at posting it on the web, or getting it onto your phone if it's a mobile app. It's a 10-year-old company that was dedicated to making it easy to learn to program. I actually used to do interviews on this platform like 10 years ago, and they were early to seeing this vibe coding trend because, hey, it solves the mission of the company.
D
So you're about to explain how Open Claw works, right, Justin?
C
Yeah, open.
D
I think this is a good time for me to interject some of the social impact of what Justin has just described. And then I'll end with something I just saw OpenClaw do, and then you can explain how that works, because I think we've covered a lot of ground and we're ready for that. So, okay. A lot of people, including me and Pablo, five years ago, if you had asked us about AI, zoom way out beyond learning how it works, just its impact on the world, we would have thought that it would be inherently repressive with regard to civil liberties and personal freedom. I'll paraphrase Peter Thiel: about seven or eight years ago, he said something like, Bitcoin is decentralizing, AI is centralizing. If you want to frame it ideologically, Bitcoin is libertarian and AI is communist. And a lot of people, including me, really believed that. We thought it would be very pernicious toward human rights in the hands of states as they vacuum up everybody's information and build a more efficient surveillance and control machine. And a lot of that is true. Part of the program we've launched at the Human Rights Foundation, where we brought Justin on to help us, is going to be exposing how dictators are using and abusing AI. But what we didn't see coming until the last 18 to 24 months was how AI can supercharge individuals asymmetrically, in the same way that encryption or Bitcoin could certainly help dictators but helps individuals way more. I mean, dictators already control vast communication networks, banking systems, massive data centers. They already have ways to exploit money and spy on people, control armies and big companies. And they have huge numbers of talented people to do their bidding. But individuals and resistance groups and innovators don't. Vibe coding changes this. Right?
So now individuals have access to enormous cutting-edge computing power and unbelievably intelligent personal assistants that are already saving them huge amounts of time and resources. I mean, just very simply, the fact that you can talk to a computer and make it do things for you is revolutionary. And this is increasing exponentially. So again, one year ago vibe coding was invented. Nine months ago, a non-technical person could vibe code a website decently. I don't know if they could deploy it, maybe through Replit, a little shaky, but they could do it. Today a non-technical person can spin up an agent that can autonomously conduct work and perform tasks in the background without human oversight. And tomorrow, we don't know, right? So six months ago, a lot of elite developers, including a lot of the ones Justin and I know, looked down upon vibe coding. They thought it was very ineffective and a bad work ethic, et cetera. I did a retreat with some of these people, amazing elite developers, in the beginning of December, and a bunch of them were like, nope, don't want that. All of them have changed their minds as of today. It's really crazy. So Karpathy, the former head of AI at Tesla, who coined vibe coding, said that in November he was manually doing 80% of his code work and using vibe coding for about 20%. And as of a few weeks ago, that's flipped: now he's vibe coding 80% and doing 20% manually. So, you know, these agents are capable of massively automating a lot of human work, and that makes it possible to really super-scale individuals and small organizations. So where we started with the activists doing some basic trainings and workshops, that's now blossomed into multi-day hackathons and bespoke trainings. And we can basically give people superpowers.
And, you know, the way I like to look at what's available for the activists today, and this lines up pretty much with what Justin has said so far, and I'm getting close to finishing here, is you have your chatbot, just in terms of terminology. Everybody knows their chatbot: go to ChatGPT or Claude or whatever. Then you have what I would call creator mode, which is like Claude Code. It can do a lot more than just spit text out; as Justin was describing, it can use tools and skills. Then you have a personal agent. So those are the three kinds of options that are out there now. We're about to explain how OpenClaw actually works, but the social impact of it is really important. Essentially, what I've seen with OpenClaw: so yesterday, for a group of 20 people from different industries, Pablo and I did a 40-minute session where we gave some background. We did some pretty amazing things with Claude Code. And then we used his own OpenClaw that he set up. Basically, from my phone I can go into Telegram and message his agent. I left it a two-minute voice note with an incredibly complex task to do, and like three minutes later it responded. It gave me this thing, and it was just the most insane, data-rich website, and it was actually quite useful. To be very clear, we asked it to create a zoomable, manipulatable, global spherical map that shows exactly how much civil liberties and free speech and democracy funding every single country in the world gets, broken down by who gives it, and then sorted, so you could rank them.
C
Hold on, you.
B
You sent this request over, like, phone line?
D
Over Telegram, from the phone. I was just like, yo. I had it on speaker, and other people were listening in the room, and I just said, I want you to do all these things. And then a couple minutes later, it gives us this freaking incredible visual project. And what that's showing me is the following, and this is kind of where I'll conclude: the workflow for creators is going to change. So basically, the way it works up till this point is, if you're an executive or a creative person and you have a meeting and you have a cool idea, you really want to do something, well, what do you do? You normally talk to your executive assistant or your product team or your program team, depending on what kind of organization you work with. You have a meeting and describe what you want, and then they go talk to the creative team, because they're not designers or engineers, or they go talk to engineers, and then those people talk to web people. And then maybe they come back to you a few weeks later with some proposals: hey, do you like this one better or this one? There's just so much human time and effort there. Now what you're going to be able to do this year is, the creative person, the founder, can literally describe exactly what they want. They could say, I want it to look like liquid glass on iPhone, or I want it to have the vibes of this movie. The dream can come out of their head that specifically, off of a voice note, and they can speak it into existence. Then they take that and give it to the creative team. And there's no more, well, do you like this color or that? No, no, no. They have a really specific idea of the vision. So this is going to become, in my opinion, a skill like surfing or sculpting. Are you going to be decent at it, or are you going to be like Michelangelo? And we'll see.
But I think it's going to be so amazing for creators, people who have big dreams and visions, because it can, like, really quickly get them to, like, a really good, really good blueprint of what they want.
And then their colleagues or alliances or teams can finish the rest. And that's, I think, one of the biggest social impacts of what Justin is describing. So maybe, Justin, now we turn to you and figure out how I can, like, talk to Telegram and have it do stuff. Something like that.
C
Yeah. So, the transition from vibe coding to OpenClaw. Chat started with the ChatGPT interface, and it became vibe coding agents, right? And now it's the personal assistant. We're just starting to enter that. We've had a good coding agent for about a year; we've just started to get good personal assistants. And that's what OpenClaw is. It's kind of the first actually useful personal assistant. To transition, though, I want to note that I actually met Peter Steinberger, I think his name is, the guy who created it, through a blog post about how he vibe coded. When I read it, it was called Shipping at Inference Scale, and it blew my mind. I'm like, oh my God, I'm a complete amateur. What this guy is doing is unreal. And I think OpenClaw is largely a story of him being like the world's best vibe coder. This guy figured out how to vibe code, and that's actually what created OpenClaw. The real thing that unlocked it was that he was able to use these vibe coding tools so effectively. So I'll get to that. But what is the user experience? It's a personal assistant that you can chat with on any messenger you like: Signal, Telegram, Nostr. Like, last night I did a live stream. There was an existing Nostr integration that wasn't very good, and I built one using Marmot, right? So you can add and do whatever.
D
The heck. Email, any email that they like.
C
Email, anything you want. And if it doesn't exist, you can make it. So the ingestion, the talking to this thing, can be from anywhere. The agent has its own computer. It gets a computer, and it totally controls it. It can be a desktop, like a little Mac Mini. It can be a virtual machine. It can be something in the cloud. It can be on your laptop, although probably don't do that. In general, be very careful with this. Do not try this without information security skills. I'm still scared of it, and I'm almost an expert. And it can totally control that computer, right? So you can talk to it anywhere you want, it has its own computer, and it totally controls that computer. And basically the premise is: what if you gave the agent its own computer and gave it skills and tools to control literally anything about that computer that the user wants it to? And it got to a certain point now where the developers don't even have to invent the skill anymore. If there's something else you want it to be able to control that it can't, you just say, hey, make it. It has recursive self-improvement. Now you can be like, okay, make a skill that allows me to pilot this weird app that nobody else uses. Right. And so it's basically vibe coding internally to make a personal skill.
D
Or, if I can color this in, you can also buy, you know, free-market skills. So Pablo was showing me what he's building. He's building, not a competitor to OpenClaw, but an alternative that's more for a different use case. But the idea is that when he wants stuff done, his agent can go hire, via Nostr and Bitcoin, like an expert in Cashu, for example, one that Calle has worked with, so that it knows kung fu. Right? So you can hire that one, or hire one that's really good at designing liquid glass apps for iOS, for example. So you can go out and hire these and then do it. So again, the skills thing is not just something that you have locally. You could hire them, or you could acquire them, or whatever you want. But the point is, it's fascinating to see this start to work.
C
Let's take a quick break and hear from today's sponsors.
E
No, it's not your imagination. Risk and regulation are ramping up, and customers now expect proof of security just to do business. That's why Vanta is a game changer. Vanta automates your compliance process and brings compliance, risk, and customer trust together on one AI-powered platform. So whether you're prepping for a SOC 2 or running an enterprise GRC program, Vanta keeps you secure and keeps your deals moving. Companies like Ramp and Ryder spend 82% less time on audits with Vanta. That's not just faster compliance, it's more time for growth. I love how Vanta makes it easy to stay on top of your compliance without it taking over your entire workflow. It just simplifies something that's usually way more painful than it needs to be. Get started at vanta.com/billionaires. That's V-A-N-T-A dot com slash billionaires. Billion-dollar investors don't typically park their cash in high-yield savings accounts. Instead, they often use one of the premier passive income strategies for institutional investors: private credit. Now the same passive income strategy is available to investors of all sizes thanks to the Fundrise Income Fund, which has more than $600 million invested and a 7.97% distribution rate. With traditional savings yields falling, it's no wonder private credit has grown to be a trillion-dollar asset class in the last few years. Visit fundrise.com/wsb to invest in the Fundrise Income Fund in just minutes. The fund's total return in 2025 was 8%, and the average annual total return since inception is 7.8%. Past performance does not guarantee future results. Current distribution rate as of 12/31/2025. Carefully consider the investment material before investing, including objectives, risks, charges, and expenses. This and other information can be found in the Income Fund's prospectus at fundrise.com/income. This is a paid advertisement. Starting something new isn't just hard, it's honestly kind of terrifying.
I still remember those moments right before I really committed to podcasting. Lying awake at night thinking, what if no one listens? What if this completely flops? Or what if I'm just straight up wasting my time? And even though pushing past that doubt was not easy, making the leap ended up being one of the best decisions I've ever made. And I'll say this: it helps a lot when you have the right tools on your side. And that's where Shopify comes in. Shopify is the commerce platform behind millions of businesses and about 10% of all e-commerce in the US, from massive household names to brands just getting started. If you've ever thought, what if I don't know how to build a store? Shopify makes it easy with hundreds of beautiful, ready-to-use templates that actually match your brand. Or, what if I don't have time to do everything? Shopify's built-in AI tools help write product descriptions and headlines, and even enhance your product photos. It's time to act on those what ifs with Shopify today. Sign up for your $1 per month trial today at shopify.com/wsb. Go to shopify.com/wsb. That's shopify.com/wsb.
C
All right.
D
Back to the show.
B
Real fast, because we have a huge bitcoin audience here. Yeah, when you look at how these AIs are going to want to transact with each other, for me it's become super obvious that they're going to want bitcoin, because it's the only form of payment that they can't be rugged on. If they're managing their own wallet, and you look at all the different ways that they could be paid, anything that touches human rails has the capacity for a human to be like, I think I'm going to liquidate this account that it's using. I think the AIs are going to deeply understand that risk and never want to denominate their exchange in such a thing.
D
I think for sure that's where we go. But it's just worth noting now that, for example, I saw the founder of Umbrel today. He was posting that he had his OpenClaw on Umbrel just book his debt for him. And yeah, he gave it his credit card and his billing address. So it does work with fiat. But I think you're right that over the coming years it'll be way easier for these things to work with a digitally native currency. Yes. Yeah.
C
I almost think it's going to happen the opposite way, where it's like they'll just use dollars because that's what's in the training data and that's what everyone accepts by default. Right? They'll use fiat.
D
Right.
C
And then they'll try to do something where they can't and they'll be like, I can't. Is there another option? Oh, I can just use a bitcoin.
D
Oh, I get the bitcoin skill.
C
It'll be more, I think it'll come more from trial and error where it's like, yeah, dang, it's like they keep asking me for all this stuff. I got to check emails and my owner has the email and I can't get in there. So it's like, let me just create a bitcoin wallet, right. I think it'll kind of happen that way from the ground up just based on failure with the Fiat.
D
Right. It's like, I'm trying to pay a person in Nigeria and the credit card is not working. Well, why don't I try something else? Let me see. Oh, there's this bitcoin skill. Let me learn that really quickly. Okay, it works now. It's going to do that.
C
Okay, let me continue the OpenClaw thing. So, yes, I talked about the user experience, right? It's a personal assistant that you can message however you want, that has its own computer, and that computer can be whatever you as the user want. You have the freedom to choose. And it completely blew up in popularity. To give a sense: GitHub is the collaboration platform for open source software, and there's something like a favorite on GitHub, where you can favorite a project and say, I like this one. It's called a GitHub star. Bitcoin has 80,000 GitHub stars. That's a really popular project, and it's 15 years old. OpenClaw is like six or seven weeks old, and it has 160,000 stars. So it's twice as popular as Bitcoin after six or seven weeks. Linux is like 200,000, so it's almost caught up to Linux, which is the most famous open source project that exists. So that just gives you an indication.
B
Like, is that the fastest-moving project ever?
C
Oh yeah. There are graphs you can find where they show all these other super-fast-moving projects that look like a hockey stick, and compared to those, OpenClaw is like a vertical line. It's just insane. There's basically no X dimension to the adoption. It's really cool. So that's to give you, the listeners, a sense of how popular it got. And it's because the user experience was really good. This is what everyone's wanted: a relatively self-sovereign personal assistant. I just want to ask some questions about why it happened now, and give my take on it. What enabled this? In the sense of, where are we now? The first thing you think is, oh, finally the AIs got smart enough. I kind of disagree. I think that if we had Clawbot when Claude 4 came out, that's May 22nd of last year, it could have gone viral at the same time. It wouldn't have been able to do everything, but I think some of the models from six or nine months ago maybe could have done this. I'm not sure, I want to do some testing on it. But I don't actually think that, when it comes down to running the assistant, we needed the models that we have today. So one big enabler was context engineering: we got a lot better at just-in-time prompting instead of just-in-case prompting. And that's traditional software engineering, human software engineering. But to me the biggest one was that this one guy basically vibe coded a massive bridge. Peter Steinberger's GitHub is insane. The average developer does maybe 10 GitHub contributions, that's like an action on GitHub, a day. This guy does like a thousand a day. He's just absolutely ripping. He's operating at a much higher level than the rest of us, and many of us are trying to catch up.
He has like 50 projects on his GitHub that compose this bridge between a traditional computer and an agent. Stuff like managing Google Calendar, managing Gmail, making tweets, communicating over Telegram, communicating over Apple Messages. He made all these little command line tools, basic tools that were optimized for an agentic user, not a human user. No human would want to use a CLI tool to manage their calendar. But LLMs are all text-based, right? It's all based on text, so they are really good at making these little CLI tools. And eventually it got to this kind of recursive improvement where the tool vibe codes itself. Also, the labs couldn't do it, because it was reckless. You basically needed a cowboy, an open source cowboy, and he didn't care. I don't know if this guy's a bitcoiner, but he would fit right in.
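The agent-friendly CLI idea described above can be sketched in miniature. This is a hypothetical example, not one of Steinberger's actual tools: plain text in, plain text out, no interactive UI, stored in an assumed `calendar.json` file, which is exactly the shape a text-based LLM can drive.

```python
# Sketch of an agent-friendly calendar CLI: every operation is a single
# text command with a single text result. The command names and the
# calendar.json storage file are assumptions for illustration.
import json
import sys
from pathlib import Path

DB = Path("calendar.json")  # resolved relative to the working directory

def run(argv):
    events = json.loads(DB.read_text()) if DB.exists() else []
    if argv and argv[0] == "add":  # e.g. `cal add 2026-03-01 "Team sync"`
        events.append({"date": argv[1], "title": argv[2]})
        DB.write_text(json.dumps(events))
        return f"added: {argv[2]}"
    # default: list one event per line, trivial for an LLM to read back
    return "\n".join(f"{e['date']} {e['title']}" for e in events)

if __name__ == "__main__":
    print(run(sys.argv[1:]))
```

Because both the input and the output fit in a line or two of text, the agent can call this from its loop the same way it calls any other tool, and the result drops straight into its context.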
B
Yeah, he would.
D
Like Satoshi. Yeah, open sourced this thing, no big deal.
C
No big company would ever do this. And also he's kind of a hero, because he could have raised the VC money and all these things. But, you know, I'm already successful, I'm just going to leave this for the people. Right. So there's a lot of these technical enablers: skills, the skills abstraction that was missing, context engineering.
D
Amazing. And it puts so much pressure on the large corporations, because users are now going to want the choice of using whatever input they want, whereas before they wanted to corral you into their theme. Like, they wouldn't have wanted you to use Signal to talk to Anthropic's new product; they'd want you to use theirs, right? And now it's like, well, what are we going to do? They're probably going to have to offer ways for people to use any input they want. So this is pretty seismic. And I would also just note, from a human rights perspective, and maybe we can conclude a little bit of this with this part, Justin: I'm not a doomer on this. Yes, of course these things are risky and invasive, but the cool part is you can hook up Signal and Maple and do OpenClaw like that. You can use privacy-protecting AI agents and privacy-protecting messengers, and there are some serious innovations happening on that now by some of our friends and people in our community, who are making what are going to essentially be full-stack personal agents. In maybe three to six months, some of them are already very alpha, but you can experiment with them, you'll be able to go in your Signal and have it do stuff and have the whole supply chain be encrypted. And I'm so bullish on that. So that's what HRF is really gonna be focusing on this year. From an investment point of view, supporting the infrastructure is gonna be building those tools, and then the rest of what we're doing is gonna be the super-scaling and education.
C
Yeah, yeah, let's like go into those in a little more detail. I just want to kind of summarize first.
D
Yeah, go ahead.
C
So if you think of OpenClaw like a story, and it is a story, that's why it went so viral; the story is just as much as the tool. It's the story of what one individual can do with the help of vibe coding, with AI development. It was basically one guy. And eventually he got far enough that a big voluntary open source community arose around it. And this is exactly what we bitcoiners participate in; this is what Nostr is. So it's very inspiring to see what one person can do. And to me, OpenClaw is more of an idea than an actual product. It shows us the idea: what if an agent has its own computer and you can talk to it however you want? I'm going to build my own OpenClaw. I'm not going to use OpenClaw; I'm just going to vibe code my own, and I'm going to use some of the pieces they have, and all my friends are going to do the same thing, and you're going to see this big renaissance of stuff that can't be controlled, that is customized to what the user wants. So for my takeaway, I want to teach more people about AI, and this is also why I'm proud to work on the HRF AI for Individual Rights program. We're fighting to make sure that more of this type of stuff can happen, that AI remains user-controlled, and that people can thrive in an AI world. So, yeah, I'll transition over to Alex to hear a little more about how the program started, what we've done, and what we want to do.
D
Well, yeah, again, the moment was fortunate. About 13 months ago we were presented with the opportunity to do this by a generous supporter. And anyone listening: you can just do things. You can support people like us and have us do really cool things. So thank you to everybody who has supported us, including you, Preston, for helping us. Today, just even having this conversation is going to spark a lot of thoughts, I think. But yeah, we created the world's first AI for individual rights program. Every other human rights group either hates AI or is going to really focus on research, and they're just not going to do anything. And, you know, we wanted to do it differently, and most of our effort is going to be focused on how to make this tool a mechanism for personal liberation, period. We are going to do some research and investigations into how dictators are abusing it. That's very important. We do feel like that will start to get crowded with other people. What I don't see anyone else doing, for sure, is this: in the same way that we've been pioneers of educating dissidents and activists and resistance groups on Bitcoin, we're going to do the same thing with these open source, privacy-protecting AI tools. Because in the same way that bitcoin helps them become unstoppable, AI is going to help them 10x or 100x what they can do. And we need that right now. Right now is the moment for us to push freedom forward. So that's what the program is designed around. We're going to do events that bring people together, as Justin was describing, bringing together talented developers with activists. I mean, both groups were thrilled. The event went so well. After the first one, we're going to do two more this year at least. We're doing one in Nashville at Bitcoin Park in May, and we're going to do one at Pubkey in D.C. in September. So we're going to cook with these.
And the developers were like thrilled because it's like something so inspiring to work on as opposed to just like the standard hackathon. And the activists are like, awesome. I get like five of the smartest people in the world to like help me do what I want to do. Like, everybody's like, you know, let me.
C
Chime in here a little bit. So we had this idea. You know, with HRF, one thing, I mean, my friends still every once in a while give me crap about, like, how do you work for an NGO? And I'm like, I don't know, man. I don't know. Alex, sure.
D
We are non-governmental. We.
C
I'm not, I'm not. And I'm like, well, Alex brought me and my friends, these ideological freedom tech developers, and we met these physical freedom fighters who actually fight for freedom in authoritarian regimes. Over the years I would meet these people, and they were some of the most courageous, inspiring people I've ever met. And I'd think, man, I wish I could help them. But it was always a little distant, because it's like, okay, use my wallet, I can teach you how to use Bitcoin, right? It remained friendship, a social thing. But then vibe coding happened. What vibe coding means is the cost of software production going roughly to zero. That's what it means. A year ago you needed to be ChatGPT to build an agent. Then Peter Steinberger could build one himself, and Pablo. And now the tools themselves can recursively self-improve, right? The cost is going down, down, down. So the opportunity is: okay, what if we could put activists and developers together and have them actually try to solve problems? Usually at a hackathon the ideas are bad and there's no distribution for the product at the end. But the activist collaboration fixes both of these. The activists bring a real problem, like, hey, how do we make a leaderboard of which LLMs respect human rights, and how do we distribute it? Okay, this guy's got a massive academic following and is very respected and works at Harvard. This is what all the projects were like, right? It was very empowering from the activists' point of view, because they got to do something useful and they also got to see how software is created. A lot of these people have been around HRF and talked to these developers, but I don't think they actually understood where software comes from. And they got to see it for a day. And from the developers' point of view, it was empowering too.
Because, man, we've been working on these abstract problems all the time, and now I get to make a tool that can help find corruption in a big data dump of documents. It's very nice to work on a concrete problem and then apply the skills you built previously from your work with freedom tech. So it was a big success, a very surprising success for me, and I'm really looking forward to.
D
Do more of these. And just, TL;DR, what are we doing? Two main things. Again, we're going to be bringing people together at all kinds of interesting events. We'll have a big Freedom Tech Day at the Oslo Freedom Forum, where we're going to have quite a bit of vibe coding for activism. And then the second thing will be grants. We want the activists to apply to our AI fund to seek help to build the things they need. Then we also want really talented developers working on things like OpenCode or OpenClaw or Maple, open-source, sovereignty- and privacy-improving infrastructure. We want to aggressively support that. So people should get in touch with us. We really want to beef that up, and even small investments can go a really long way. Right now the virality is here. Again, the guy from OpenClaw, when he released it, it was Clawbot. It's not like he had raised $30 million of venture capital; he did it out of his house. And it's like, we could do that. I don't know if you want to briefly mention what Calle came out with today or yesterday. The claw thing. Today.
C
Yeah.
D
Our friends are coming up with amazing stuff.
B
Calle, another pretty famous bitcoiner who has historically done incredible things as far as writing code. He made a turnkey Clawbot that he just released on his website, right? It makes all of it super easy. A person can just, you know, go to the website that he just stood up. And I can only imagine how quickly a guy as talented as he is at writing software was able to engineer something like this and put it out there.
D
No, and it's still got a ways to go on the security side, but, you know, he knows that. He's a privacy maximalist and he's going to work on that. Again, where we are today, for the activists at least, is we want people to use something like Maple for their basics, for their one-on-one chats. You should just not be using other chat apps for that. It'll get you 95 cents on the dollar, at least, of the big corporate models, and you can be encrypted. Let's move there, like we moved from text message to Signal. In the next three to six months we're going to be able to move your creator mode, basically your Claude Code-type things, and I think we're going to be able to move your agent as well into a similar environment. So that's the hope and the dream right now: that in the next three to six months, people who really value privacy and sovereignty will have access to extremely powerful tools that reflect their values, but that can also 10x to 100x their work. And that's very exciting. Guys, we have.
B
To keep this conversation going. Honestly, you guys are on the tip of the spear; it's a military term. You're on the tip of the spear of everything.
F
For you.
C
Thank you, Preston.
B
No, I really mean it. In the conversation I had with Pablo and Trey, I was like, guys, you got to come back and keep us updated with this. Because I honestly think that this Clawbot thing, and it's interesting because Sam Altman literally said the same thing. And you know, coming from a guy that's one of the, you know, one of the biggest in the AI space.
D
No, he said it's here to stay.
B
It's here to stay. That caught my attention. And I think that this is something that is going to be massive for individuals. It's the wild wild west right now, for all intents and purposes, from a privacy and security standpoint, people losing their bank accounts and email addresses and things like that. But a year from now, I can only imagine what this.
D
I mean, it's a new era of personal computing. The creator of OpenClaw really just tore a new hole in what's possible, and now we're going into that world.
C
Let me give one analogy. Personal agents at this stage really remind me of ecash, right? Which I worked on through Fedimint and Calle worked on through Cashu. Because there's an obvious, big security trade-off right up front. It's like, hey, you trust another random guy with your bank, right? And so it's kind of crazy, you know, you give an AI agent its own computer and let it do whatever the heck it wants. So it's a big upfront trade-off that's a little reckless. But then you get this flowering of all kinds of hobbyists, people who kind of understand the risk, understand the trade-offs. That's what we were trying to communicate: don't just recklessly do this if you don't understand what's going on. That's why I tried to explain so many of these ideas to you, because you need to equip yourself with some of these basics in order to make these decisions. But when you have this flowering of a big group of very motivated people in the open-source ecosystem, that's when really magical things can happen. And that's what happened with ecash and Cashu, and that's what's happening with these personal self-sovereign agents.
B
You know, you have all these people talking like, AI is coming, it's going to take all of our jobs. The other side of the coin, which I really want to impress on a person listening to this, is that the tools we're talking about also give a person the ability to 100x or 1,000x their capacity and their ability to do things. And so these two forces really come down to: what is your perspective? Is your perspective, this is too hard and complicated? Well, AI is probably going to eat your lunch. Or are you sitting there saying, hey, this is my moment?
D
Yeah, like, what can you do with this? What can I build with this? I'll give you a great example. I'm here with a really well-known Cuban activist, and I'm thinking to myself, right now there's no Bitcoin wallet that's really perfect for her needs. And, you know, no one's really going to build that. She's going to build it. Within the next year she'll be able to speak to a computer, and it'll be open source. It'll take some stuff from Bitchat, which is very important given that Cuba doesn't have great Internet. It'll take some stuff from some very popular open-source Lightning libraries. It'll just build what she needs, and it'll look awesome, and it'll be exactly what she needs. And she can do it in a few weeks or a few days or a few hours, depending on how much she wants to put into it. I mean, we're going to see the blossoming of so many interesting little personalized tools that can radically expand people's potential. And it's just such an exciting moment, to your original point, Preston. And yeah, we'll come back. We're making a mini documentary right now about the current six months that we're living through, which we're going to play on the main stage of the Oslo Freedom Forum. It starts January 1st, it ends June 1st, and we're going to play it on June 2nd. In the bottom third you're just going to see the days go by, and you're going to see the headlines, and you're going to see interviews and work. And it's going to be so crazy, boom, when we show this thing. The speed is just face-melting, what is going on here. So an honor and a pleasure, as always.
B
Hey, that event, and also the one in Nashville in May. I am very interested in going to that one.
D
Let's go.
B
Yeah, so we'll put links to that in the show notes.
D
Yeah. May 8th to 10th for the hackathon, part two.
B
Yeah.
D
The AI hack for freedom. And then it's June 1st to 3rd for the Oslo Freedom Forum in Norway.
B
Amazing.
D
FreedomForum.com. Check it out.
B
Amazing.
C
I have one thing to plug here at the end. I started doing some live streaming on Nostr to try to share what I've learned over the past year. And next week I'm going to try to vibe code a Bitcoin full node. That's what I'm going to try to do. So I'm going to be live streaming on Nostr all week, and probably going to injure myself severely in the process. Wish me luck.
D
Good.
B
Amazing.
D
Amazing.
B
Okay, so we end the shows now with a song. And we need you guys to select either one of you what your favorite artist is or song. Like, if there's a specific song you like, I want it to be like that. And then the song is going to recap everything we just talked about in a fun song like way. So do either of you have a very strong preference for a specific song artist, genre? Go ahead and speak up.
D
Justin, you. You fire away.
C
I don't have. I can't think of a specific song, but I would go with the sea shanty style would be fun.
B
Sea shanty songs. I don't even know what that is, but I'm about to start.
C
Sailors. I could send you one afterwards.
D
Oh, like, okay.
C
Yeah, like the sailors singing about how they're getting off the boat and they're going to get into trouble, and, you know, it's great.
D
Wow.
B
I love how diverse these song selections are. The last one, I think, was a Beatles song or something like that. All right, guys, thank you so much for making time. We're gonna have links to all of that in the show notes. Enjoy your sea shanty song on the close-out here.
D
Thank you. Steady.
F
Through the fog we sail when.
C
Ready.
F
Open claw, open sea, they can't hold what they can't see.
C
Yeah huh.
B
Okay, okay, okay, okay, okay.
F
Flip the ship, now we dip, dip
Sliding on the bass by the cold, double time
Never leaving any trace
Sovereign, sovereign, sovereign
Running on my own, Pi on the counter
AI picking up the phone
Built it in a night, yeah, the coffee wasn't cold
160,000 stars, that's a story being told
But I don't slow down, nah, I keep it moving
Keep it spinning, keep the sound bouncing off the walls and the ceiling and the floor
Open source a recipe, then I'm cooking up some more
Feel it in your chest when the bassline drop
Once we start this wave we don't ever stop
Open claw, open sea, they can't hold what they can't see
Open sea, this the code that set us free
Set us free, this the code that set us free
One more time, this the code that set us free
Okay, okay, okay, okay
Let me break it down slow
One developer changed the whole flow
Then we speed it back up like we never hit the brakes
Signal buzzing, Telegram humming, making sovereign states
Stacks of fists and hackers, builders and the dreamers
Everybody vibing, none of us asleep
Fast now, fast now, catch it if you can
Code is in my left hand, future's in my right hand
Sliding like the bass do, popping like the sand do
Building what they said that we would never, dead who
Calendar handled, emails flowing, data growing
Where we heading next? Baby, we already going
Feel it in your chest when the bassline drop
Once we start this wave we don't ever stop
Open claw, open sea, they can't hold what they can't see
Open sea, this the code that set us free
Set us free, this the code that set us free
Let's go, this the code that set us free
Used to think the future wasn't ours, now we hold the key to sovereign.
Yeah, let that sit
Now we build it, now we share it
Got a whole new world and we de-clawing
Open claw, yeah, we calling
Open sea, never stalling
Open claws are falling, open sea
New day dawning, old day gone
Open claw, open sea, they can't hold what they can't see
Open claw, open sea, this the code that set us free
Set us free, this the code that set us free
This the code that set us free.
C
This the.
F
Code that set us free. The sea is ours now.
A
Thanks for listening to tip. Follow Infinite Tech on your favorite podcast app and visit theinvestorspodcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax or legal advice. The content is impersonal and does not consider your objectives, financial situation or needs. Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests and the Investors Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third party products, services or advertisers do not constitute endorsements, and the Investors Podcast Network is not responsible for any claims made by them. Copyright by the Investors Podcast Network. All rights reserved.
Date: February 18, 2026
Host: Preston Pysh
Guests: Alex Gladstein (Human Rights Foundation), Justin Moon (Bitcoin developer/AI builder)
This episode dives deep into the dramatic rise of "OpenClaw" and the revolution in self-sovereign, user-controlled AI agents. Preston is joined by Alex Gladstein and Justin Moon to dissect the technical, social, and ethical repercussions of running advanced AI agents locally, rather than through corporate cloud providers. The conversation is designed to demystify core AI concepts, explain the growing movement for open-source agentic AI, and lay out what this means for personal freedom, privacy, and activism.
"All the excitement is about... we're at this pivotal point — people can run it locally in a way that's actually going to be quite useful."
— Preston Pysh (03:05)
"An open model is if you can download that file... the closed ones are a little smarter and the open ones are a little more self-sovereign."
— Justin Moon (06:04)
"If those are the models everyone starts to build on and run locally, you get slightly different results... than if you have somebody feeding it with the base minus these things we really don’t want in there."
— Preston Pysh (10:17)
"You won’t know that you’re being very indirectly, subliminally steered... you have no idea what’s going into that header."
— Preston Pysh (21:56)
Vibe Coding allows anyone (even non-technical) to “talk to the computer,” building programs or agents in real time using AI.
AI agents are now capable of recursive self-improvement via Vibe Coding: they can build new skills themselves as needed.
What is OpenClaw?
“OpenClaw is an idea as much as a product.”
"To me, OpenClaw is more of like an idea than an actual product... It shows us the idea of what if an agent has its own computer and you can talk to it however you want."
— Justin Moon (51:26)
"Now, individuals have access to enormous, cutting-edge computing power and unbelievably intelligent personal assistants that are already saving them huge amounts of time..."
— Alex Gladstein (32:31)
"They're going to want Bitcoin because it's the only form of payment that they can't be rugged on..."
— Preston Pysh (45:10)
“We want both the activists to apply to our AI fund ... Then we also want really talented developers working on things like OpenClaw or privacy-improving infrastructure.”
— Alex Gladstein (56:19)
"If the AI model is open source, you get to see all of it... so you know if you're getting steered. If it's closed, you never know what they're inserting in the prompt, or who they're doing it for."
— Justin Moon (21:56)
"Personal agents at this stage really remind me of Ecash... there's an obvious trade-off, big security trade-off right up front. But you see a flowering of all kinds of hobbyists who really understand the risks."
— Justin Moon (59:51)
"The tools we're talking about also give a person the ability to 100x or 1000x their capacity and their ability to do things."
— Preston Pysh (60:47)
"This is a new era of personal computing. The creator of OpenClaw just opened a new... tore a new hole in what's possible."
— Alex Gladstein (59:40)
| Segment | Description | Timestamp |
|---|---|---|
| Introduction to Local AI | Context on "10x" tech acceleration, why it matters | 00:52–03:49 |
| AI Fundamentals, LLMs Demystified | What is an LLM? Open/closed models, context | 04:06–10:17 |
| Model "Steering" and Censorship Risks | Geopolitics and base-model biases | 10:17–11:24 |
| Inference, Context, Memory | How agents converse, inherit memory | 12:12–15:23 |
| Agents, Tools, Skills, Vibe Coding | The technical building blocks | 24:43–32:27 |
| Vibe Coding's Social Impact | Empowering creators, revolutionizing workflow | 32:30–38:39 |
| How OpenClaw Works & Its Viral Growth | OpenClaw's rise, technical/community factors | 38:53–49:47 |
| Money for Agents (Bitcoin) | Why future AI agents will likely favor Bitcoin | 45:08–46:34 |
| Open-Source, Self-Sovereign AI Future | Social/political implications for freedom | 51:22–59:51 |
| HRF and Activist Empowerment | Grants, events, hackathons, call to action | 52:39–56:19 |
The tone is lively, deeply technical yet clear, and focused on democratizing understanding. The hosts and guests are passionate about privacy, freedom, and practical impact, emphasizing both empowerment and realism as frontier technologies mature. The dialogue is full of accessibility metaphors (e.g., "the Japanese train," "10 Commandments," and "vibe coding") and open about risks as well as the wild potential.