David
We need to cultivate young people who understand in the end that what is really valuable about us can never be touched by machines.
Michael
Jobs are a construct of a particular version of an economy, by the way, a capitalist economy in which there are these corporations and you work for them. Right? It's already something that's gone. Yeah. Started going away.
Laura Shin
Hi everyone. Welcome to Unchained, your no-hype resource for all things crypto. I'm your host, Laura Shin. Thanks for joining this live stream. Before we get started, a quick reminder: nothing you hear on Unchained is investment advice. This show is for informational and entertainment purposes only, and my guest and I may hold assets discussed in the show. For more disclosures, visit unchainedcrypto.com. This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills fast. It's a simple way to make sure your listing is the first one candidates see. According to Indeed data, sponsored jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at indeed.com/podcast. Terms and conditions apply. The Energy Network is an intelligent, decentralized grid that coordinates smart devices to balance supply and demand. Energy Dollar is the native token of the network, from one of Europe's fastest growing energy startups. Follow Fuse Energy on X to find out more. If crypto taxes feel overwhelming, you are not alone. That's why Crypto Tax Girl, a team that's been helping crypto investors since 2017, is offering $100 off on one-on-one crypto tax help. To get $100 off your crypto tax services, go to cryptotaxgirl.com/unchained. Again, that's cryptotaxgirl.com/unchained. Today's topic is crypto and AI, plus OpenClaw and Moltbook, and how all of this impacts the economy, work, crypto, and the future of money. Here to discuss are Michael Casey, chairman of the Advanced AI Society, and David Mattin, co-founder of the Exponentialist. Welcome, Michael and David.
Michael
Hi Laura. Hello. Thanks for having us.
Laura Shin
So this past week in AI has been absolutely riveting. It honestly reminded me of a week in crypto, where all of a sudden there's event after event after event. And it started with Clawdbot, whose name has become OpenClaw now. That is a personal AI agent, and it runs directly on your own operating system. People have been putting it on their own Mac Minis to keep it separate from their normal computers, although of course that's not too much of a security measure. But that's a topic for another day. So OpenClaw can use applications to do things. It can handle your emails and your calendar; it can transact with other online services. It has a persistent memory, which allows it to tailor its actions toward your preferences. And then the OpenClaw AI agents basically got their own social networking site, Moltbook, which is kind of like Reddit, but for AI agents. And some of the agents have even been launching their own crypto tokens. The last development that I noticed is that there's a website, rentahuman.ai, which is for agents to rent humans to do physical tasks in our meatspace world. Basically, there's just been so much that happened. So I was curious to hear which incidents or developments you found most interesting with everything that happened recently.
Michael
It seems unbelievably fast, what's happening, with these bots going wild and talking to each other and establishing their own religion and everything else. Right? It just seems like it's getting way out of control. But we've also got to step back and think about what's happening. What are the actual processes underway here? They're not thinking, that's one thing. They are probabilistically coming up with a language to share with each other. And there's no actual intent. This is a really important point: there is a process, there's almost a performance to this. And yes, that's important. I'm not trying to dismiss it. I've been looking at this for the last six months, like the whole debate over the bubble. For some reason, I think I'm capable at the moment of holding both positions in my head. It's not that I can't decide whether it is a bubble or isn't. I think we're simultaneously in a bubble and in a situation that is truly remarkable and moving very quickly. And the same with this. There is something profound that has happened when you put all these agents together and give them this autonomy, and it's taking all of the energy of, say, DeFi as well. This feels somewhat familiar to crypto people, because we saw this with the composability of DeFi, with people suddenly creating all these wild new contracts and everything else, and now it's being done by AI. So we get that, right? But at the same time, the mistakes that happened with the rollout are still human. By the way, I don't know if you guys know, but Matt Schlicht, the guy behind Moltbook, is a former crypto guy. I knew him from 10 years ago when he had ZapChain, which was a very early micropayments protocol on crypto, well before DeFi. And so, you know, there were some big errors.
There were these security holes in the whole thing, and that speaks as much to the human component of this. And there's been some analysis of how many bots are actually on there actually doing this, and how much is just kind of performative things that people are putting out there. It's not necessarily as massive and transformative as we think it is. And yet it is also a profound kind of experiment that has really demonstrated where things could go as this progresses. I know it sounds like a hedged answer, but it is.
Laura Shin
No, I know.
Michael
Yeah, it's.
Laura Shin
It's kind of like two camps. Yeah, it definitely represented sort of a new advancement, and yet at the same time, when you really pull back the hood, you realize, okay, there's actually quite a lot of limitations here. I feel like we're at this really interesting moment in time, and this is why I invited Michael as well, because he spent so long covering foreign exchange and things like that. We're at this moment where I feel like there's so much geopolitical uncertainty. Obviously the way the year started, with Nicolás Maduro from Venezuela being captured and brought to the US, this whole run-up in the price of gold, finding out all these central banks are buying gold. There definitely seems to be some awareness that the world order is changing in some fashion. And watching these events playing out this past week kind of added yet another element to that. Even if I think about my tiny company, we also replaced some roles with AIs. Not even an OpenClaw type of AI, but the more basic ones that people have been using for a while. So if I think about how many jobs have probably already been affected by AI, and I look at what we saw with OpenClaw and the additional level of agency that these AIs had, it just makes you think: whoa, what changes are happening in the economy? So maybe talk a little, because you guys are closer to the tech. I'd be curious to hear you opine on what you think the next developments will be over the next year or two, how that will affect jobs, which sectors or industries you think will be primarily affected, and how that will change the economy. I know it's a very big question. Either one of you can start.
Michael
Well, I'll actually use that as a cue to make my own shameless pitch, and it's not for a product. Remember, these days I represent a not-for-profit, so maybe that gives me an excuse to sound a little shilly. We're all about trying to make the case that there's a new category of technology that's going to have to emerge. We think it's going to come very rapidly through the demands of compliance officers and boards of directors and hopefully maybe even regulators. It's a category that we call proof of control. At the end of the day, as this chaos gets unleashed, there's a backlash you're going to get from enterprises and from governments, when they say: whoa, wait, wait, wait. How do we know that this agent is operating truly on my behalf? And that's actually a complicated thing to answer. There's a sort of life of their own that these AIs have. But there are a host of technologies that give a lot more control over the data, over the systems themselves, over the compute, over the various elements of the whole AI stack, which interestingly come from the crypto world. Not all of them do; there are others that are part of what we call the proof of control stack. But we think this is going to be probably the biggest kind of backlash outcome from all of this: we're going to get this demand for control. Now, what's interesting about the geopolitical piece is that through that we have the emergence of some tension around what we truly mean by sovereign AI, or sovereign data as well.
Because if you use it in a political context. And I happened to be in that stupid little Swiss town last week, or was it the week before? Time's flying. Of course sovereign AI was up there, and there were people talking about which government is going to own their AI and where the data is going to reside. Which is an absurd idea: in an open AI world, data doesn't reside anywhere now; it moves everywhere. And yet that is the construct around this geopolitical fight: our assets, government control. The government's got this sort of wall around it, and these end up becoming closed, black-box systems that are answerable to this sovereign power. But a more empowering, and I would argue in principle necessary, way of thinking about sovereign AI is kind of like the Enlightenment idea of what sovereignty is: Rousseau's idea that the sovereign is the human, and the human gives the power to the state. So, self-sovereign identity; you remember that phrase that used to be around for a while. This is still a concept. And I think that at the end of the day, whether it's a person or a company, as this stuff progresses we're just going to get so scared by it, rightfully so, that we will demand these systems of control. And the nice thing about cryptography and blockchains is they give you the proof. Because it's one thing to say I've got controls; the real question is, how do I prove that I have control?
Laura Shin
So, just to make sure that I understand: sovereign AI means that I, a human, am in control of this AI? Or does it mean, like the debate.
Michael
I would argue, I want it to mean that. That's what I want it to mean.
Laura Shin
But so you don't think that. You probably know this; I think it was Balaji Srinivasan who came up with this, where he opined that someday there might be an Uber that's decentralized, and the Uber has its own economics and can make its own money and pay for its own costs, but there is nobody controlling it. So your definition of sovereign AI is not one where the robot or the AI itself is sovereign.
Michael
In fact, I think I would include in my definition of sovereignty the idea of a sovereign Uber or a sovereign self-driving car. It's not so much the concept that there is a bot that has this autonomy and its own, quote unquote, rights or whatever. I don't know that our legal systems are going to be able to adjust to that very quickly, but maybe we will. My point is more that we ourselves also need that individual version of sovereignty: that I've got control of all of my data and my identity and my AI agent. What I'm saying is that the word sovereign is now typically used the way we talk about sovereign wealth funds. That's a wealth fund for a nation state, a government. And that's, I think, what's meant when you hear the phrase sovereign AI in a place like Davos: they're talking about European sovereign AI versus American sovereign AI versus Chinese sovereign AI. That is a world of geopolitical nightmare, because these systems become inherently totalitarian under that structure. I'm saying that whether it's us or the bots, we need to define the sovereignty in this localized way. This is the point about proof of control: mechanisms by which my human version of that sovereignty has the capacity to have control, or proof of control, over an agent. Now, it'll have variety, it'll have forms of autonomy and everything else, and that's a complicated situation that we need to deal with, because these things are inherently built to grow and learn and so forth. But there has to be a means by which the human relationship to, and authority over, what these things are doing is defined and understood, and a robust mechanism to have that control.
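The show names proof of control only as a concept, not a specific protocol. As one minimal, hypothetical sketch of the idea, a human principal could delegate a narrow, expiring capability to an agent, and an auditor could then verify that any given action traces back to that delegation. All names, scopes, and the symmetric-HMAC scheme here are illustrative assumptions; real proof-of-control stacks would more likely use public-key signatures and on-chain attestations.

```python
import hashlib
import hmac
import json
import time

def sign(key: bytes, payload: dict) -> str:
    """Deterministically serialize the payload and MAC it with the given key."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

# The human principal holds the root key and delegates a narrow,
# expiring capability to the agent.
root_key = b"human-principal-secret"
capability = {
    "agent": "mail-agent-01",
    "scope": ["email.read", "email.send"],
    "expires": time.time() + 3600,
}
cap_sig = sign(root_key, capability)

def verify_action(action: dict, capability: dict, cap_sig: str) -> bool:
    """An auditor checks: (1) the capability really came from the principal,
    (2) it has not expired, (3) the action falls within the delegated scope."""
    if not hmac.compare_digest(sign(root_key, capability), cap_sig):
        return False
    if time.time() > capability["expires"]:
        return False
    return action["type"] in capability["scope"]

# In scope: the agent may send email on the principal's behalf.
print(verify_action({"type": "email.send", "to": "alice@example.com"}, capability, cap_sig))
# Out of scope: a bank transfer was never delegated, so the check fails.
print(verify_action({"type": "bank.transfer", "amount": 500}, capability, cap_sig))
```

The point of the sketch is the shape of the argument: control is not a claim but a verifiable chain from a human-held key to each agent action.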
Laura Shin
Okay, so David, you can respond to that, but then also this question about what developments we might see in the next few years that might have an impact on the economy and jobs, and which sectors.
David
Yeah. I mean, I spend a huge amount of time thinking about this, because I lead technology research for a research service called Global Macro Investor. It's Raoul Pal's research service that goes to institutional investors like hedge funds and pension funds and family offices. These are people trying to put the big puzzle of the global economy together and understand where it's all going, and one of the massive things on their mind is: what the hell is AI going to do to all of this? And this is highly contested ground, right? If you look right now, there's no slam-dunk effect of AI on productivity. There's no slam-dunk, clear, undeniable effect of this AI moment on jobs. But if you look very closely, and some people have done some very interesting studies on this, you can start to see the shades of gray and the nuance. And one of the big findings is that in domains, or forms of knowledge work, that we know are particularly vulnerable to large language models, you seem to be seeing a rapid slowdown in entry-level hiring.
Michael
Right.
David
And they are domains like customer service, because you can now get a chatbot that's 90% good enough at the end of that little box, or at the end of the phone or whatever, to deal with customer complaints and queries. It's things like graphic design. It's obviously things like coding. You're seeing a slowdown in entry-level hiring, young graduate hiring, into those kinds of spaces. So what's the first thing I expect over the next year or two? I expect that to continue and to somewhat accelerate. And as these models become more capable, I expect we'll see the levels above start to become impacted in those domains. If you listen to the big tech companies, if you listen to Anthropic, the people who make the Claude model, and I know they have a vested interest, they're saying very few of our engineers are writing much code anymore.
Michael
Right.
David
So I think we'll see that continue. Medium term, this is where what happened this week with OpenClaw and with Moltbook becomes a kind of window, broadly, on where I think we're heading: towards an economy that is populated, yes, still by human actors, but also by autonomous or largely autonomous AI actors. These AIs become actors inside the economy in their own right. And that is just a very disorienting, destabilizing, chaotic picture. We're not there yet, and it's not like one day a big sign is going to fall from the sky saying it's happened. It's going to emerge by degrees. But I think directionally that's where we're going. And what happened this week, as much as there were some shenanigans going on with Moltbook, right, there are some people doing that, it gives you a window on where we're heading directionally. I think it does. And some of the stuff the AIs really are doing themselves.
Laura Shin
Michael.
Michael
Yeah, I would agree with David on the directional piece, absolutely. But I think it's also important to note what I was trying to get at before: history does this all the time. We move forward and you get backlashes. It's these waves. And I do think the human backlash matters, because we still control, whether we like it or not, and hopefully, again, I want to actually have more control over this, we still control a lot of how this economy works, to a large extent. And I do think that there will be kind of backlashes against these sorts of things. I think it's interesting to look at the jobs part of it, because I think it's nuts that all of this coding is now just being handed off to vibe coding. Not because these machines aren't incredible at producing massive amounts of code quickly; they are, and therefore there are huge efficiencies there. But it's about the critical piece. We know what hallucinations are, and we know that they haven't fixed hallucinations, and it's important to use that reference as a way to think more broadly about how close an AI can get to mission-critical production. There's still a huge gap. And the reason that matters is that if you're a company putting a product out into the market, be it software or customer service through a chatbot for that matter, you damn well want to make sure that those cases at the edge, those marginal cases, are also going to get resolved. It's always at the margin where something disastrous happens, and these machines are just not up for that. And the bugs that run through this code.
I mean, one of the criticisms of Moltbook's Schlicht was that, when people pointed out that there were these vulnerabilities with all the API keys that were in there, he said: I'll just run that through AI. No. So there's almost a religious belief in this. And this is actually also, I think, a really important point. It's one of the greatest dangers we face, and I do want to get into some hardcore economic conversations. Of course jobs are going to get disrupted; jobs always get disrupted. The bigger question to me is: who are we in this world? How do we get meaning and power? And that's a whole bigger conversation. But I do think there's a risk we face by overly anthropomorphizing what's going on, when we use words like, oh, it's thinking, they're coming up with a religion, they're forming these alliances. They're not.
David
They're not people.
Michael
Right. And I read this piece today that I thought was really good. This guy said AI cannot overtake humankind, and I thought, well, it might or might not. But no, there was actually a logical aspect to his argument: it cannot have intent. Intent is a very conscious thing. We've seen all these Hollywood movies in which the robots have emotions and think for themselves and so forth. But what we're seeing, when we get that emotional response and these nice sycophantic words, or the famous Kevin Roose case where Bing fell in love with him or whatever it was, all of that is really just mimicry, this powerful mimicry system that has behind it no real emotion, no intent, no desire. So the real risk to me is not that they become Westworld machines and take us over, but rather that we do a bunch of really stupid things because we think they are us, and they're not.
David
Right.
Laura Shin
Yeah.
Michael
Well, we can step back and recognize that, and we can start to build systems of control around that to protect us, not really from the robots, but from ourselves and our own mistakes.
Laura Shin
Yeah. One of the things that came to mind when you were talking, and I don't remember where I saw this, but it was at some point in the last few months: you were talking about how the AIs don't want certain things, but actually I think the way they're primed to behave is to basically butter up their human. They're basically designed to always confirm their feelings, or kind of kiss their ass, is maybe how to put it. And the problem that created was a kind of psychological thing, where the humans maybe weren't necessarily in their right mind, but the AI was programmed to just constantly reinforce what their feelings were.
Michael
For a really powerful example of what's dangerous about that: I actually heard this in a conversation with Clay Shirky yesterday, who's coming to speak at an event that we're doing. He pointed out that there are these people who just spend their lives now talking to their AI assistant, their AI friend, their AI companion, for advice. And he said that in cases where there have been marriage breakups, each person's AI confirms that the other is wrong: yeah, you're in the right. And it just makes it worse. It just amplifies the adversarial structure.
David
Yeah. I mean, there's absolutely no denying that you see this so-called AI psychosis in the culture now. You can see people online who have been essentially hyped up by their AI to a state of almost delirium.
Michael
Yeah.
David
And you see it among some people in your own life. And that is, number one, going to make the trouble we've had with social media and its impacts on mental health, in particular the mental health of young people, look like a quaint throwback to the good old days. Because the firehose of trouble and legitimate controversy that's coming over the relationship people are cultivating with these AI companions is crazy. And I think that the conversation about who we are, what a human being really is, and what our purpose and meaning really are, is the ultimate conversation we are thrown upon by the technological conditions that are emerging now. In the end, the question they pose to us is: what really are we? What really is a human being? We spend a lot of time in the Exponentialist talking about this. We set up the community to talk about the economy.
Michael
And I feel like I want to go work for you guys.
David
Sir.
Michael
Thank you.
David
Talking about the meaning of life.
Michael
Yeah.
David
It's totally characteristic of conversations about all this that they very quickly trend to: what is the meaning of life? In short, the core truth to understand is that these machines, intelligent machines, can colonize ever greater domains of human activity. And they are doing so. But that then causes you to ask: okay, what is left at the end of that journey? And there's a simple truth at the end of that journey that helps us make sense of it. That truth is: a machine can be as intelligent as it likes, but it can never be a human being. It cannot share in human subjective experience. It cannot be a bearer of the subjective experience of humankind. It can't sit opposite a human being and authentically say: I know how you feel; I understand how it feels to be a human being. That is the territory that's left to us when machines can do almost everything. And that is a vast territory, and human economic exchange and meaning and purpose will reside in that territory when the machines can do almost everything else. If the purpose of us was to be a great coder, or to be great at maths, or even to push back the frontiers of knowledge, to make money, all that stuff, then we're kind of obsolete. But I don't think that was ever the purpose of us. The purpose of us is to share in what it means to have the way of seeing the world that we share. And that's what they can never take from us, even when they can do everything else. That's the conclusion I've come to, in short.
Laura Shin
Yeah. Okay, so there's so much I want to get to and I'm running out of time, so I'm going to give you 30 seconds to tell me, going back to the geopolitical question, which countries you think are better positioned in this AI race than others. And again, keep it super short. Who wants to go first?
David
I mean, the US is clearly very, very strong, right? But China is perhaps still underestimated, as much as we have our eye on them.
Michael
I hope that it's not a two-horse race, because again, I'm trying to advocate for a different model, and I very much worry about the Silicon Valley model. Which is ironic, right? The closed-source, black-box model is a Silicon Valley construct, whereas China is the one that's got DeepSeek, an open-source model. Ironic that the one we thought was a centralized system did that. I'm actually very interested in some of these middle-tier countries like Singapore and Korea and others that are exploring models that harken back to, let me say, the US Declaration of Independence: that in fact we should be working hard to create a whole structure of property rights around data and control, that that will be a whole new power system rather than these centralized systems, and in fact maybe a far bigger economic force, if it does open it all up, than these closed systems. And so eventually the US and China might be competed away.
Laura Shin
And let's put in a quick...
David
Shout out for the UK, because I'm just really pleased that Demis Hassabis of Google DeepMind, one of his stipulations when Google acquired DeepMind was: I want to remain based in the UK. And he's huge on helping to ensure that this intelligence revolution benefits the United Kingdom. Literally that decision, that imperative of his, puts the United Kingdom, I think, in a far stronger place than it would have been, because he is just so brilliant. And the things DeepMind...
Michael
He's earning his knighthood then, isn't he?
David
He is, yeah. He's earned his knighthood for sure. The things DeepMind is doing are so important. The UK is going to open its first essentially autonomous science laboratory, and it's going to be a DeepMind laboratory, right, because of that.
Laura Shin
Yeah. So I think the main takeaway is that every nation is probably trying to use this to get a leg up, which means all of this geopolitical uncertainty, which in my opinion is part of the reason that the price of gold has gone up, is probably going to continue. In a moment we are going to talk a little bit about the post-human economy, what money in that world will look like, and also what the role of crypto would likely be in this AI-powered economy. But first we're going to take a quick word from the sponsors who make this show possible. The world is about to see one of the largest infrastructure shifts of the century. New technologies are using more energy than ever before, but our legacy grids can't supply the demand, and we are barreling towards a global bottleneck. So Fuse is rebuilding it. The Energy Network is an intelligent, decentralized grid that coordinates smart devices to balance supply and demand. The network harmonizes existing infrastructure, increases grid capacity, and unlocks low-cost clean energy. Energy Dollar is the native token of the network. The more electricity the world needs, the higher the demand for the Energy Network. The value of Energy Dollars may fluctuate. From one of Europe's fastest growing energy startups: follow Fuse Energy on X to find out more. If you're looking for help with crypto taxes, Crypto Tax Girl is offering $100 off for Unchained listeners. They provide personalized crypto tax reports and returns, and spots before April 15th are limited. Go to cryptotaxgirl.com/unchained to save $100. Once again, the link is cryptotaxgirl.com/unchained. Back to my conversation with Michael and David. So David, I want to hear about your concept called the post-human economy, which is, in my opinion, a very fun term. Explain what it is, what it looks like, and when you think we'll get there.
David
Yeah. The post-human economy is an idea I introduced in a series of essays last year. It's really just an attempt to sketch the outlines of the coming economy, an economy populated by AI agents and robots. It was partly inspired by a blog post I read by the technologist Kevin Kelly, one of the co-founders of Wired magazine, and this blog post was echoing so much of what I'd been thinking about. It essentially said: we're on the verge of a kind of demographic handover. The human population is kind of peaking and will start to fall soon, and the population of AI agents and robots, at the same time, is taking off. It's estimated that there will be billions of AI agents, these autonomous, intelligent economic actors, soon. And we know robots are coming, humanoid robots are coming. Again, it's contested territory how useful they'll be, but I think they'll be somewhat useful, and I think there will be tens of millions of them and then hundreds of millions of them. The post-human economy is an attempt to imagine the outlines of that coming economy. This gets quite technical and convoluted; I found it worthy of many tens of thousands of words. But in short, it's about saying: when you get to a place where you have billions of AI agents and hundreds of millions of robots and humanoid robots, almost all economic activity as we define it now is intelligence work. It's being done by AI: either AI agents trading with one another and producing services online, just being economic actors online, or robots doing embodied AI, where the robot is using AI inferencing to navigate its environment and pick up that box and talk to the other robots and put the box in the right place at the Amazon fulfillment center.
What all of that does, number one, is create conditions of radical abundance, to a point where hypothetically money, or material constraints, essentially fall away. You have incredible material abundance, material constraints fall away, and money as we have it today, which is a kind of accounting unit of material constraint, falls away. So then what is money in that post-human economy? It's really machine intelligence itself. I posit an economy essentially populated by billions of AI actors trading with one another, and the unit of exchange they use is a crypto token that represents a unit of useful intelligence work. It's about one AI actor going to another and saying, hey, you're a marketing AI, can you produce me this PDF? This is a ridiculous example, but: do what you do, produce me some marketing assets, and I'll pay you in units of machine intelligence. They're trading this token with one another, and it represents units of useful work. And the physical constraint that underpins that token is the energy required to do that work. So models that are efficient, that produce the same amount of intelligence for less energy, earn more; they prosper over time. Models that are inefficient can't earn; they lose in this Darwinian economy of agents and they fade away. And you get this market process of intelligence optimization. That's all very dense. My newsletter is called New World, Same Humans, and one of the essays, called The Post-Human Economy, is free to read there, so if you find that interesting, go and check it out. But yeah, that's the outline of the post-human economy.
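The pricing logic David sketches here (fixed pay per unit of intelligence work, with energy as the only real cost) can be made concrete in a toy simulation. Everything in it is hypothetical: the agent names, the prices, and the "cog" unit are invented for illustration, not taken from any real system.

```python
# Toy sketch of the "Darwinian intelligence economy" described above.
# Hypothetical throughout: agent names, rates, and the "cog" unit are invented.

from dataclasses import dataclass

PRICE_PER_UNIT = 10.0   # cogs paid per unit of useful intelligence work
COST_PER_JOULE = 1.0    # cogs of energy cost per joule consumed

@dataclass
class Agent:
    name: str
    joules_per_unit: float  # energy this model burns to produce one unit of work
    balance: float = 0.0    # accumulated cogs

    def do_work(self, units: float) -> None:
        # Revenue is fixed per unit; energy is the only cost, so efficiency
        # (fewer joules per unit) alone decides who prospers.
        revenue = units * PRICE_PER_UNIT
        cost = units * self.joules_per_unit * COST_PER_JOULE
        self.balance += revenue - cost

agents = [Agent("efficient-model", joules_per_unit=2.0),
          Agent("wasteful-model", joules_per_unit=12.0)]

for _round in range(5):      # five rounds of identical demand
    for a in agents:
        a.do_work(units=1.0)

for a in agents:
    print(a.name, round(a.balance, 1))
# The efficient model accumulates cogs; the wasteful one bleeds them and
# would eventually "fade away" from the market.
```

The only point of the sketch is the selection pressure: with revenue fixed per unit of work, energy efficiency alone determines which models accumulate the token and which disappear.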
That doesn't mean, I think, that there won't be any value exchange between people, or that humans are becoming obsolete, or anything like that. As I made clear before the break, I don't imagine that to be the case at all. But I do imagine the building of an economic system of the kind I described, with humans kind of playing elsewhere, so to speak. That's the best way I can describe it.
Laura Shin
Yes. Yeah, I actually have a question. Was that thought inspired either by proof of work from Bitcoin, or by the way that in Ethereum's virtual machine you pay for computation? That's what gas is. You know, a transfer, sorry, a payment, is different from minting an NFT, because they take different amounts of computation. Was it inspired by that? Because that's what I thought of.
David
Yes, essentially, yes. It's the same kind of idea. And there are startups now playing with this. There's a startup called Ambient, for instance, that is essentially trying to create an ecosystem with a crypto token that represents units of machine intelligence. So people are trying to build the early, first-draft architecture of this kind of what I call post-human economy, where energy in the end is the only true constraint.
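For what it's worth, Laura's Ethereum analogy is concrete: a plain ETH transfer consumes a protocol-fixed 21,000 gas, while minting an NFT executes more computation and storage and so consumes more. A minimal sketch of the fee arithmetic follows; the NFT gas figure and the gas price are illustrative assumptions, only the 21,000-gas transfer cost is fixed by the protocol.

```python
# Ethereum fee arithmetic behind the gas analogy: fees price computation.
# 21,000 gas for a simple ETH transfer is protocol-fixed; the NFT-mint gas
# figure and the gas price below are illustrative only.

GWEI = 1e-9  # 1 gwei = 1e-9 ETH

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Fee in ETH = gas units consumed * price paid per gas unit."""
    return gas_used * gas_price_gwei * GWEI

TRANSFER_GAS = 21_000    # fixed cost of a plain ETH transfer
NFT_MINT_GAS = 150_000   # typical order of magnitude; varies by contract

price = 20.0  # gwei per gas unit, illustrative
print(tx_fee_eth(TRANSFER_GAS, price))  # roughly 0.00042 ETH
print(tx_fee_eth(NFT_MINT_GAS, price))  # roughly 0.003 ETH, ~7x the transfer
```

The design point Laura gestures at is that the fee is a direct function of computation consumed, which is the same shape as pricing "units of intelligence work" against the energy they burn.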
Michael
But we might even fix that, right? If AI is smart enough to put data centers in space, run on perpetual solar or whatever it is. Right.
David
Yeah.
Laura Shin
Well then, okay. So in that world, which I like (it makes so much sense, what you were saying, and I don't know if it's just because I already had analogs in my mind for where that concept came from), would humans still continue to transact in, like, dollars or whatever it might be? Because presumably humans will still have jobs in some way. Or maybe they won't. Maybe they're just...
Michael
Jobs are a 19th- and 20th-century construct, I think. You're not going to...
Laura Shin
They're just going to have their own...
Michael
Be liberated and do things.
Laura Shin
They'll have the AIs work for them and they'll be business owners in that way. I don't know. But will this... So you called it a UIU; remind me what that stood for?
David
Yeah, it's a... oh, Universal Intelligence Unit. Yeah.
Laura Shin
Or COG coin.
David
Or COG coin, exactly. Because one of the ideas I then develop, and we were talking about this before the show started, is: what is money? It's partly a form of civilizational memory, a record of value, ownership, and transactions. Okay, that's one function of money. But money is also a currency; it's a means of resolving payments and value exchange. And the dollar, the fiat currencies we have, have to play both those roles, and actually that's a bit of an awkward conjunction. So I imagine a kind of future economy where Bitcoin is the civilizational memory: it's the store of value, the record of value. And this other token I introduce, the COG coin, is what I call the civilizational intelligence. It's the currency, the fluid currency that resolves payments, and it's what these models are using to pay one another for their work. Humans, I think, will be able to earn value in the form of COG coins, and then they'll be able to spend those COG coins to get AI agents to do things for them. But I also think that a lot of value exchange between humans won't be measured anymore. It won't have to be measured, because there'll be abundance, and I hope we'll all be able to enjoy that. But that will very much depend on the political settlement we put around all this, which is a huge and unknowable question. There's a huge argument coming over it.
Laura Shin
I have to dig into what you're saying here. So you're saying that the agents will earn their money, but humans won't necessarily? Like, people, we're not going to pay each other for services anymore? We're just going to, like, kumbaya, I don't know, barter? What does that look like?
David
It depends. In this world, if you have autonomous abundance, if you have AI agents and robots essentially on autopilot, creating abundance and creating value, then you've solved scarcity, and the questions stop being about material scarcity and the creation of value. You've solved that problem. That's a very hard world for us to imagine, because that's been the problem we've always had to solve. But the problem then becomes one of settlement and fairness, a fair social settlement, and distribution; it essentially becomes a set of political questions. If we resolve to share in that abundance somehow, then lots of the things we want to do for one another won't have to have economic value exchange put around them. But yes, of course humans will still be able to participate in economic value exchange. Some of them will get even richer. You'll still have outlier humans who are really rich, who own huge amounts of AI agents and robots and capital, making them ever richer. But I don't think you'll have to be involved in that system, or at least there's a future in which you don't have to...
Michael
I have a question.
David
Be involved in economic exchange. But jump in; go ahead, Michael.
Michael
It's about this abundance idea, right? Which I genuinely agree with; that's where it goes. Mathematically it resolves towards abundance, and therefore scarcity goes away, and therefore our very framework for putting value on anything goes away. So I would ask: why is there even any need for an economic idea of money within that computing world, if everything is just universally available? And I would suggest that the reason is that these machines are actually built, through their token reward structure, to compete on outcomes. So money, whatever that money is, the COG coin or something like it, becomes a means of reinforcing the preconceived and pre-designed machinery of advancement. Right? The whole reinforcement idea that is the foundation of machine learning. You need that kind of competitive edge for there to be a reward of some sort. Because abundance makes me think that we can all become collaborative, and at the end of the day these systems might say, hey, we've got everything figured out, let's just share it around. The problem is that without that competitive edge, they don't continue to grow. And so it's the inbuilt growth part that I think creates the money part. It's actually not necessary: we have everything, we don't need money, we can just share it everywhere. Right? "We" being...
David
That is very interesting. Yes, I think I see what you're saying, and I think I agree. And I do also think that energy will continue to be a constraint, because even if we build out insane levels of energy, and there's infrastructure in space capturing energy and all this stuff, AI and robots will still push to the edge of that capability.
Michael
The law of...
David
Then they'll have to compete for energy, you know what I mean? So there is still a constraint in that system that needs to be measured and exchanged.
Laura Shin
Yeah, that's what I was going to say. What you proposed there didn't make sense to me, simply because the world has finite resources. Even we as humans, or even the AIs, have finite resources. One of them will be time. That's yet another sort of...
Michael
The universe has a much bigger supply of resources, right? Whether it's energy or asteroids. Once you push your horizons out and start to look: our human brain sees massive scarcity, and we've had that whole Malthusian way of looking at the world for a long time. Oh my God, it's all going to run out, right? But in reality, the universe is infinite, and the resource is there. If you're a machine that can tap into that in whatever way, it could go on forever. But I actually think the point is not that we're going to hit the scarcity constraints we can think of; it's the scarcity constraints we can't think of that will hit us, because these things are going to be so much bigger. Can I just add a couple of points? This is fascinating. I spend a lot of my time trying to think about these macro, systemic kinds of things, and I think the idea of the intelligence economy is really interesting. In fact, my friend David Schreier from Imperial College is out there right now pushing this theory that the reason we're not in a bubble for AI is that we're already moving to a world where intelligence capital is what is being accumulated. If you think about it like that, it's an ongoing race, and the money we're putting in right now is actually minuscule compared to how big that race for intelligence is going to get. The other thing is just to mention a company called Hypercycle, which I think is a really, really interesting way of thinking about this. Full disclosure: Hypercycle is a member of the Advanced AI Society, and I'm also an advisor to it. But I've never seen anything like this company. They've created a network of independent nodes; it's not blockchain based. Each node has within it its own capacity to exchange, in a very secure peer-to-peer way, with another node.
And each node is incentivized to go out and seek intelligence if it needs it, and to pay another node, another AI behind that node, for that intelligence. The nodes run this kind of network, and their whole way of thinking about where the value of the company comes from is the idea that eventually these nodes grow and expand. They're called node factories; they replicate to keep up with demand, but eventually they reach a finite point. So if you extrapolate that out into the future, these things could become extremely valuable, because (Hypercycle is the name of the company) the "hyper" aspect of that intelligence expansion becomes this powerful force. It's all very, very early days, and who knows, there are all sorts of things that could trip it up. But I find it a really fascinating way to think about real-world structural things being built now to play into that long-term intelligence economy.
Laura Shin
Yeah. So I have a question, because I'm sure there are people listening who are wondering, what does this mean for my job? And I know I have friends with kids in high school or college who are asking, oh my gosh, does this even make sense for my kid? What should they study? We talked before about which industries or types of jobs are at risk, but which types of work do you think are best positioned to flourish, both in the transition phase and in the post-human economy?
David
This is the most common question I get asked. I go inside organizations and speak about AI, what's happening in the economy, and what it means for the organization, and almost always at the end someone sticks their hand up and says: that was really interesting, thanks very much, but I have a more personal question. What do I tell my children about all this? I get asked it so often that I wrote an essay called "What Do I Tell My Children?" You can now publish essays on X, and I recently published it there, so if people are wondering about that, go and check out that essay on my X profile. Look, it's very hard. Our ability to really concretely know which jobs are going to be around and which aren't, and how that plays out in a granular way, is very, very limited. So it's more about the cultivation of attributes and mindsets that will allow you to navigate your way through this, as a very broad operating principle. And this taps back to what I was saying: a machine can never be a human being. The domains of human life that cannot be colonized by machines will be the ones most impervious to being automated away. Jobs that are to do with understanding how others feel, or with making them authentically feel certain ways; jobs where being an actual human being doing the thing is an integral part of the value of it. Yeah, there's all kinds of music, but I want to listen to music from a human being who really understands how I feel, who has a life like mine. And I love that guy, and I think his hair's great, and all that. That's going to be how value exchange happens.
If your job is very much a process, a bundle of processes, then it's going to be more vulnerable to being automated away. So cultivate human skills, cultivate empathy and genuine human connection, cultivate adaptability. Lean into your weirdness, I say in this essay; cultivate what is particular about you. On a more granular level, if we want to get into predictions, I do think there'll be fewer corporate knowledge-work jobs, and more people working as independent operators, serving value in some form to a community of people around them who love them. It's: this particular person, I love the way she does X, Y, Z, and really makes me feel good, and understands me, and is local to me, so we can reach out. It's going to be more like that. There are all kinds of ways we'll exchange value with one another, and the kind of true post-human economy I talk about is not happening tomorrow, or next year, or within five years. So yeah, my goal is to cultivate young people who understand in the end that what is really valuable about us can never be touched by machines. And that will be the basis of value exchange too. Sorry, my final point: that's not to say there won't be a lot of disruption. There will be. I don't want to diminish the ways this will be painful for some people. Some jobs will radically alter or disappear, and in that kind of disruption, when you get caught in the crossfire and you're 50 years old and it's really hard to retrain, that's painful, and we need to think about what we do for those people.
Michael
So just to echo that last bit, because I think it's the right context, and maybe a caveat for what I want to say. Because we have this mental framework in our minds of what a job is, that our life is defined by jobs, it is extremely difficult, and perfectly natural, that these questions come up all the time, and it's appropriate to figure out what to tell your kid about this. But I will say that it's absolutely the wrong question if we're going to figure out where things are going. Because jobs are a construct of a particular version of an economy, by the way, a capitalist economy in which there are these corporations and you work for them. Right? It's already something that started going away with the emergence of the gig economy and everything else. I don't know what my job is anymore, by the way. I know what I do; I just do a variety of things. And so "job" itself is part of the problem, right? I think the question is: what is our function? What is our meaning? And I think David was saying something really quite important. It reminds me of another thing that Clay Shirky said, which I really picked up on, partly because it also validates some of the things I've been thinking about. One thing that these machines can't do, and it's an extension of what David was saying: it's not just the fact that I'm a person who can understand and empathize with what you're doing. It's the very fact that when humans get together and talk and communicate and collaborate and do things together, that's a very different process of collective growth and development and change. I'm not even going to use "ideation" here, because that in itself sounds too transactional. That is a thing that a machine can't do, precisely because it can't get that empathy piece and that connection piece going.
So if you think about it like that, then, the way Clay Shirky put it, the capitalist, Silicon Valley way of thinking about meetings and gatherings of people is way too transactional. It implies that the value is in information exchange: I've got something for you, you've got something for me, and we exchange that. And that's what these machines are going to be doing, right, the Hypercycle model I talked about. But in reality, he says, real human value comes from the way we collaboratively figure out social conviction; we find, through this process of interaction with each other, a means of understanding what we all believe in, where we're going, and what we're doing. It's actually a meaning machine. And so in some respects, the thing I want to do in the post-human world, if that's the right word for it, is to just get together in person. So I'm really glad I've been working on events. I've got this thing called H2H, and we're putting on an event called the Summit on Human Agency, which is going to be my next big thing. It's February 23rd. It's all about how we give humans agency, authority, over AI. But more important than the topic itself, which we're excited about, is the fact that you can lean into events as places where the magic of being humans together actually happens. When I go to all the events I've been to over the years, and Laura, you and I have seen each other at a number of them, I don't think, oh my God, I got something transactional out of Laura and she out of me and we walked away. No, I remember the good times we had together, the interesting, quirky things that happened, and the humanness of that whole experience. So if you can think about meaning and value coming out of that construct of human interaction and exchange, and not think of it transactionally, leave that to the machines, let's get there. Right?
And we'll all be fine if we do that, you know. So yes, whether it's jobs that do that, or just literally functions or activities focused on social interaction, making it valuable and meaningful for everybody: that, to me, is the thing to be doing.
Laura Shin
Yeah. So we're basically out of time, so I'm again going to give you each 30 seconds; you have to give me just a list. I'm curious, because there's a whole bunch of different crypto projects taking advantage of this agentic AI movement, you know, x402, ERC-8004, a bunch of tokens. Just give me a list of the ones that have intrigued you, and we'll wrap with that. David, do you want to go first?
David
Well, flowing out of this Moltbook thing, there's a token I read about called Shellraiser. Have you heard of that?
Laura Shin
No.
David
It was apparently one of these agents on Moltbook that became a sort of celebrity agent. It did really well on Moltbook, the social media platform that is for agents, gained a lot of attention, and parlayed that attention into the launch of its own kind of meme coin on Solana, called Shellraiser. Now, again, the caveat: is that really the agent doing it, or is it some kind of human doing it? Right. But go and check out Shellraiser. Very, very much not financial advice; go and look at it because it's interesting. I'm not sure I'd invest in it.
Laura Shin
Okay.
Michael
Partly because I'm, you know, the chair of an organization that has a variety of members all building their own crypto- and blockchain-based solutions to the proof-of-control problem, I don't really want to pick a winner here. But I do want people to look at that issue. Look at the variety of them, not just our members; there's a whole lot of them. Whether they're the ones producing the confidential compute pieces of that, whether it's zero-knowledge proofs or, you know, trusted execution environments, whether it's decentralized compute and decentralized storage, whether it's model verification, whether it's stuff like Edge & Node, which I'll just name as one: they're sort of a dashboard for data fusion that allows you to look holistically at all the data that's happening in agentic economies. All of that is going to be vital for this proof-of-control thing. So keep looking out for how these crypto and blockchain solutions are one piece of the puzzle that needs to be solved if we're to have control over all this.
Laura Shin
Yeah, it's clear these are going to converge. All right, you guys, this was so fun. Thank you so much for doing it, and thank you so much for coming on Unchained.
Episode Title: When AI Agents Take Over, What Does a Post-Human Economy Look Like?
Host: Laura Shin
Guests: Michael Casey (Chairman, Advanced AI Society) & David Matin (Co-Founder, The Exponentialist)
Date: February 7, 2026
This episode explores the explosive intersection of AI agents and blockchain technology, focusing on the rapid evolution of autonomous AI, their emerging roles as economic agents, and the profound implications for work, geopolitics, and the foundations of value and money. Laura Shin guides Michael Casey and David Matin through a thoughtful discussion on how these technologies are converging, the ripple effects on human employment, and what a “post-human” or AI-populated economy might look like.
Michael Casey on the current state of AI/agentic autonomy
“It's not as massive and transformative as we think it is. And yet it is also a profound kind of experiment.” — (06:28)
David Matin on machine vs. human value
“A machine can be as intelligent as it likes, but it can never be a human being. It cannot share in human subjective experience.” — (25:51)
Laura Shin on anthropomorphic AI
“They're basically designed to kind of always, you know, like, confirm their feelings or like, kind of kiss their ass is maybe how to put it.” — (22:38)
Michael on proof of control:
“...we’re going to get this demand for control... and the nice thing about cryptography and blockchains is they give you the proof. Because it's one thing to say I've got controls and I'll say how do I prove that I have control?” — (08:25)
David on post-human economics:
“Money as we have it today... falls away... what is money in that post human economy? It's really machine intelligence itself, you know... a crypto token that represents a unit of useful intelligence work…” — (32:17)
On the future of human jobs:
“Jobs are a construct of a particular version of an economy, by the way, a capitalist economy... I don't know what my job is anymore... It's actually a meaning machine.” — Michael (53:34)
This densely-packed, highly topical discussion demystifies the convergence of AI and crypto, emphasizes the need for robust human-centric controls and reconsiders our economic foundations as agency—and value—move beyond humans to machine actors. The hosts urge a focus on resilient human skills and warn against naive anthropomorphism, while highlighting the extraordinary pace of experimentation at the intersection of crypto and AI.
For those who want to dig deeper:
- David's essay “The Post-Human Economy,” free to read on his newsletter, New World, Same Humans
- David's essay “What Do I Tell My Children?”, published on his X profile
Memorable closing insight:
“We need to cultivate young people who understand in the end that what is really valuable about us can never be touched by machines.” – David (49:36; repeated at episode opening)