Ryan Sean Adams
Bankless Nation. We are here with Jasmine Sun. She writes about AI, technology, and politics. She is a contributing writer at The Atlantic and recently wrote a New York Times opinion piece on AI and the permanent underclass, a phrase we are all too familiar with here in the world of crypto. She's also the author of the AI Populism series on her substack, jasmi.news. Jasmine, welcome to Bankless.
Jasmine Sun
Thanks so much for having me.
Ryan Sean Adams
Jasmine, you put together a definition for AI populism. You wrote it. A worldview in which AI is viewed not only as a normal technology, but as an elite political project to be resisted. This is really what we want to explore here with you today on the show. Kind of want to ask the question, and maybe we can start with this. How big is AI populism as a political issue domestically here in the United States? And we kind of want to get to do we think AI populism will be a relevant issue in the 2028 election? So maybe we can start with that first question. Just how big do you think AI populism is in the world of politics?
Jasmine Sun
Yeah, thanks for asking. I've been thinking about AI populism a lot over the last few months, noticing this mass movement that is growing around the AI backlash, and in particular noticing how very different interest groups, very different factions, different sides of the aisle, are coming together to protest AI. When I'm in Washington, D.C., I'll notice that there are family-first conservatives sitting with antitrust people sitting with environmentalists, people who would never be working side by side but who have united to push for AI regulation. And that was what really got me thinking about AI populism in terms of how big of a force it is in the U.S. right now. I would say that it's not a primary force in American politics yet, but it is rising extremely quickly. Some of the best polling that's been done on this topic is from David Shor's Blue Rose Research. And what he's shown in his polling is that among a list of roughly 40 different issues that American voters might care about, AI ranks 29th. So it's not super high, but it has risen in salience faster than any other issue over the last year. So in terms of how quickly it is entering the broader political conversation, I think AI is rising really fast. And the other thing that I'm starting to notice is that AI is not just a separate issue. Most people who are thinking about AI may not have particular opinions on which model is the best or whether we should do chip export controls. They're really seeing AI as part of these broader conversations around affordability, economic mobility, geopolitics. And those are issues that do rank very high on Americans' list of concerns.
And so if AI is seen as a bogeyman, or is very tied to conversations around land use in their neighborhoods, around economic mobility and whether you're going to have a job, then AI will be a much bigger part of the political conversation than we would otherwise expect.
David Hoffman
So it's rising fast, but it's still 29th in the list of issues. The top five issues have got to be things like the economy, jobs, inflation. And yet we see some of the most savvy politicians that we have in the US doubling down on AI populist messaging, maybe tripling down. Even Bernie Sanders, it seems, has made it a cornerstone piece for him. In other words, he's betting heavily on the topic of AI populism and putting a lot of his chips in. Why? If it's only 29th, wouldn't people rather hear about inflation and jobs and other things that are core to the Sanders message? Why is he going so hard on it?
Jasmine Sun
Right. I think it's because they're tied together. I have the Blue Rose research pulled up next to me. The top five issues, like you say, are cost of living, the economy, corruption, inflation, and healthcare, right? And that's roughly the set of issues we'd expect. My guess is that those five issues probably haven't changed all that much over the past 20 or 30 years. My guess would be two things. One is that if AI is a thing that you are going to blame for the economy, for the cost of living, for corruption, inflation, healthcare, then you're able to tie it into the issues that Americans do really care about, right? When you have these AI CEOs saying AI is going to take all the jobs, when you have these questions about whether we're in a bubble, or the fact that a huge fraction, I think it was something like 30%, of US GDP growth in 2025 was from data center and AI-related investments, then your questions about cost of living and the economy are very tied to AI. And the other thing that I think is going on with Bernie is that there is an element of opportunism, right? You don't just see that from Bernie, you also see it from other politicians. You may have been saying the same message on cost of living and the economy and the billionaires year after year after year. But now you have this new force showing up, and its leaders are also promising it's going to change everything, it's going to take all the jobs, it's the only thing that matters in the economy now. So if you feel like your messaging wasn't resonating before in terms of getting people to support universal healthcare or a higher minimum wage or whatever, AI is a brand new, shiny reason to build support for the policies that you might have already wanted to pass.
You see that from people like Bernie. You also see it from folks who want to, say, increase speech regulation or content moderation for tech platforms. There were folks who were already very interested in applying stronger kids' safety laws or stronger speech regulations to tech platforms, and now that AI has shown up, it's become an extra reason to push for the thing they were already pushing for. So I do think that AI matters, but I also think that a lot of politicians are being pretty opportunistic about pointing to the shiny new thing and saying maybe this is a reason to do what I've been saying all along.
David Hoffman
I kind of wonder if this opportunism is actually going to stick in the hearts and minds of the American people, though. Bankless listeners will recall that some politicians tried something similar in crypto with an anti-crypto policy. This was in 2023, 2024. Listeners will recall a campaign slogan that Elizabeth Warren promoted: Elizabeth Warren is building the anti-crypto
Ryan Sean Adams
army. Right after the fall of FTX, which was a highly opportune time to broadcast that message.
David Hoffman
Yeah, you had Sam Bankman-Fried, and you have the corrupt crypto bros and this weird technology that no one really understands. And there was a bubble, and there's NFTs, and everyone hates it. So there seemed to be a somewhat contrived, opportunistic effort to lump all of these things together and build a kind of theory-of-everything message around populism in some of Elizabeth Warren's campaign messaging. But it didn't seem to really stick or hold. Obviously the crypto people didn't enjoy that she was building an anti-crypto army. But I think the normal people just looked at it and were like, huh, an anti-crypto army? I care about jobs and the economy and inflation. What are you talking about? And that messaging didn't really stick. And I'm wondering if that will be a repeat story with this AI populist opportunism that we're seeing, where politicians are trying to group things that just don't exactly belong together in a voter's mind.
Jasmine Sun
Yeah, I mean, I can see why you would think that, and I do think parallels to crypto are definitely interesting. But I think AI has some pretty distinct differences. One is that it's just a far bigger part of the economy than crypto ever was. Crypto was not driving 30 or 40% of GDP growth over the course of a year. Yes, there are bitcoin mining operations, but these are not showing up in neighborhoods the way data centers are. Most people are not being forced or encouraged to use crypto as part of their jobs. Nor was there as high a level of consumer adoption, even from people's own volitional use of crypto. That was always a niche thing. Crypto is very hard and confusing to use, and my guess is that most Americans never really got in the habit of using it on a regular basis. Whereas ChatGPT is the fastest-growing app in human history, right? So in terms of the salience of AI to a lot of normal people, it does feel like a more relevant thing. There are also other differences. I think the AI leaders have been very different from crypto leaders in their messaging. The Warren dynamic you describe is not something I'm personally familiar with, since I didn't follow crypto quite as closely, but it sounds like Elizabeth Warren was forging one narrative, and people in the crypto industry, and maybe many crypto advocates, had another. In AI, one thing that has always been really interesting is that the risks the populists are talking about are many of the same risks that people in the industry are talking about. Like, Dario Amodei is out there saying that 50% of entry-level white-collar jobs are going to go away by 2030.
And so that adds a lot of credibility to the message when the people building the technology are saying, actually that's true, like this stuff is going to hurt you, it is going to take your job.
Ryan Sean Adams
When the market pulls back, most people just wait. They hold cash, hoping things stabilize. But there's another move, and that's where Nexo comes in. Nexo is a platform built to help keep your digital assets productive. You can earn daily interest on supported crypto assets through their yield product, or get funds through a crypto-backed credit line without having to sell any of your assets. So if you want optionality, Nexo gives you both sides of the equation: you can put your assets to work or borrow against them when you need flexibility. Nexo has been around since 2018, has over $8 billion in assets on the platform, and has paid out more than $1.3 billion in interest to clients globally. If you're a new US user, there's a welcome incentive waiting for you when you sign up. Check it out at the link in the show notes, and as always, this is not investment advice. In 2024, emerging markets generated over $115 billion in annual yield for investors, with yields ranging between 10 to 40%. These are some of the highest-demand, most persistent yields on earth. The problem? DeFi can't access them. BRICS changes this. Built on MegaETH, BRICS takes emerging market money markets and sovereign carry and turns them into composable primitives you can access straight from your wallet. While DeFi investors earn 3 to 6% on stablecoins and T-bills, institutions have been harvesting 10 to 50% yields backed by sovereign monetary policy. BRICS connects these worlds with institutional-grade tokenization, local banking rails, compliance across jurisdictions, and real-time stablecoin settlement. BRICS does the heavy lifting so DeFi can finally access real collateral and structured products on top of real-world yield. Even the best carry trades can be within reach. BRICS brings DeFi's promise to the emerging world and brings emerging market yield to your wallet. Let the yield flow with BRICS.
You would have never thought two years ago that you could soon be trading tokenized oil on MetaMask, but here we are. I've been using MetaMask since 2017, and we all remember buying NFTs with it in 2021. And now, in 2026, if you haven't checked in on MetaMask recently, let me tell you: you can trade tokenized stocks, funds, and commodities, along with leveraged perpetuals, prediction markets, and, yes, you can even gaslessly swap between crypto tokens across networks too. There are advanced security features like MEV and front-run protection, and even a debit card, so you can actually spend your crypto directly at merchants all around the world. And it's all self-custodial, everything you want to trade in one place. This is the open money future we've all been waiting for. Check out the new MetaMask. It's already on your phone or in the link below. There's something dislocated to me when you tell me that AI is 29th in terms of importance in politics. Yet we have, and maybe this is just cherry-picking a few bits of data, entire communities showing up at town halls to say they don't want data centers in their communities, in their backyards. That doesn't ring like the 29th most important political issue. And maybe it's because, as you just said, the AI tech leaders, Dario, Sam, are saying, oh yeah, we're going to completely rewrite the social fabric. And what does the social fabric do in response to those statements? It gets kind of scared, gets kind of offended, and decides to show up where it knows how to show up, which is in its communities. So there's something uniquely galvanizing about AI. When I hear that it's the 29th most important political subject, I feel like that's a lagging indicator.
Jasmine Sun
And I'm watching the trend lines. I'm looking at the fact that it is number one for fastest-rising issue, with number two being war in the Middle East. This is as of February, to be clear.
Ryan Sean Adams
Yeah. And then there's one more thing I'd like to introduce: there's actually been violence on the table. Sam Altman's home has been the target of two attacks, one with a Molotov cocktail, another with some bullets, and I think there are others. And then, not related to AI, but there's Luigi, I don't know how to pronounce his last name, the individual who killed
David Hoffman
Mangione. Luigi Mangione.
Ryan Sean Adams
Yeah, the healthcare CEO, the political assassination. And then we have people showing up with Molotov cocktails at Sam Altman's home. As a political topic, it's just far more galvanizing and motivating than any other. What do you make of how some people feel motivated to do big, drastic things when it comes to AI, and what that means for the 2028 election and domestic politics?
Jasmine Sun
Yeah, I mean, I think AI has sort of become this political bogeyman. In some ways it reminds me of the way that China showed up in the discourse over the last decade, where everything was because of China. We need to do AI because China, we need to reinvest in manufacturing because China, we need to educate our kids better because China. The specter of China competition, of China eating America's lunch, on the economy, on geopolitics, on whatever, was used as an all-purpose justification in Washington, D.C. And sometimes this is fair. Again, I think some of the AI risks are really real. I think that China competition is a real thing. But I think it also comes from the sense that when there's a big other force in the world, this big alien force, whether that's another country like China that's very foreign to people, or this specter of superintelligence that people don't really understand, but which promises to change everything and seems very powerful and has a lot of money behind it, it becomes very easy to blame it and tie it into a really wide range of issues. But yeah, I think this opportunism is probably going to accelerate going into the 2028 primary season. It's going to be a crowded primary, most likely on both the Republican and the Democratic sides, and we're already starting to see some of the likely candidates picking this up as part of their campaign messaging. It's very notable to me that Ro Khanna and Mark Kelly, both of whom are expected to put themselves in the running, have been putting out these big AI action plans. Josh Hawley on the right, for example, has also been especially active in AI legislation on kids' safety and on jobs. And I've heard from other folks who haven't necessarily introduced plans yet but who are expected to do so.
And I think, again, it's because you always need a galvanizing new thing in order for these politicians to justify why they are the unique ones to meet the moment with their plans, and AI can, interestingly, be distorted to fit any of these plans. Another thing is that it collides with pre-existing populist sentiments in America, right? We were already seeing rising distrust of institutions, rising distrust of elites, distrust of billionaires and corporations. That's been a growing sentiment in the U.S., a growing resentment, long before AI. And with how wealthy these AI billionaires are, with how much revenue the companies are making, Anthropic hitting a $30 billion run rate recently, with the scale of these data center investments, I think AI is a very good target for a lot of this anti-billionaire, anti-corporate sentiment. Even when I talk to accelerationists, even when I talk to people who are very pro-AI, when I talk to AI executives, they understand that they are very unsympathetic, right? Most Americans do not relate to Sam Altman. They do not find him relatable. They know that they personally are getting no piece of this pie. Remember, these are private companies, and so most people have no way of sharing in the wealth of this thing. So it's very easy to blame the AI billionaires, because they're kind of culturally weird, they're really far away from you, they're not sharing their wealth in any way, and they are transforming the whole economy and society. I think they're a politically convenient target, and one that I expect is going to draw more ire and more hatred over the next couple of years as the presidential primary really kicks into gear. One crypto contrast I think is really interesting, for example, is the super PACs that the industry has created, right?
During the crypto era, Chris Lehane, who now works for OpenAI, was one of the critical people shaping the Fairshake PAC, which lobbied for pro-crypto legislation, and he was really effective with a lot of that. They went after candidates who really wanted to crack down on crypto. This scared off more candidates from doing the same, and for the most part, a lot of potentially onerous crypto legislation was avoided, and Fairshake mostly flew under the radar for normal people. On the other hand, the same playbook is being tried for AI with the Leading the Future PAC, also shaped by Chris Lehane, as well as some other AI venture capitalists and executives. They went after Alex Bores in New York for pushing New York state AI regulation, and actually the opposite thing happened. Alex Bores was ranking around number three in the polls, kind of an irrelevant guy who was going to lose. The AI billionaires go after him and start running attack ads, he starts running his own ads saying AI billionaires hate me, and he shoots up to number one in the polls, or neck and neck at number one, number two. He now has a much better chance of winning because Leading the Future, this AI super PAC, went after him. I've seen the same thing happen in other districts: when Leading the Future endorses a candidate, the other person in the race will say, thank God I haven't been endorsed by the AI billionaires. So there's enough of this populist sentiment that it's actually a bit of a political liability to be partnered too closely with the AI industry.
David Hoffman
I think some things that happened with crypto were really a dress rehearsal for AI. I do want to pull on this thread of the violent attacks, though, because that is somewhat new in American life, and I'm curious about it. I think you called some of these things, like the attack on Sam Altman, warning shots. The Molotov cocktail thrower was a 20-year-old who was part of a Pause AI Discord group. In some of his writings, he said sociopaths and psychopaths are gambling with your future and with the lives of your children. I'm wondering, between this and Luigi Mangione's murder of UnitedHealthcare CEO Brian Thompson, are the attacks on Sam Altman and the murder of a healthcare CEO all part of the same movement? Or is there a particular thread targeting the tech leaders and the AI leaders that's separate from the attacks on healthcare executives?
Jasmine Sun
Yeah, super interesting. My argument would be that they are not part of the same movement; they have different motivations for their attacks. For example, the attacker of Sam Altman, the guy who threw the Molotov, had written some blog posts about existential risk in particular and his Eliezer Yudkowsky-style fears about how AI was going to kill us all. So he definitely had some AI-specific fears. The thing that feels really similar to me, when you look at a lot of the recent assassination attempts or successful assassinations over the past few years, is that many of them are committed by very online young people who spend a lot of their time in Discords and in these very niche online communities that often tend to develop more extreme beliefs. Charlie Kirk's murderer did the same thing. He was also a Discord lurker, and very young as well. And I think it also reflects the fact that political violence in the US has become more prominent. That's something political science researchers have found too, both when they look at the incidence of political violence and when they poll the public on questions like, do you ever think assassination is justified? Do you think violence is justified? Whether you poll on right-wing figures or left-wing figures, you get numbers like 10 to 20% of Americans saying assassination attempts are justified when they're directed at people who you think are bad people, whether that is Nancy Pelosi's husband, or Donald Trump, or the UnitedHealthcare CEO.
And so the thing I notice with Sam Altman's attacker, as with the other attackers, is that these are young people who have developed a pretty nihilistic politics, whose views may be increasingly extreme as a result of participating in online communities where people quickly reinforce each other's beliefs, and who also believe that they have no outlet but political violence. When I think about the resentment that people feel, or why crazy things like this happen, and to be clear, I am no fan of political violence, what I really see is that these people no longer believe the democratic system works. They do not believe they have any other channel to, quote unquote, have a voice or to shape the direction of what happens in politics or the economy. And they see direct action, in this case direct violent action, as the only way of making their voice heard in order to stop some of the changes they think are coming. I see this at a lesser scale with things like data center protests. A lot of my friends in the AI industry, for example, think the data center protests are really stupid. They say data centers are the wrong target; if you are worried about AI safety, you should pursue regulation or something. But I'm like, do normal people have any channel to pursue regulation, or to shape how these models are trained or what the products look like? They don't. They don't know anybody who works in AI policy. They don't know anyone who works at an AI lab. And if they feel like they are being forced to use AI in ways they don't like, or that it's threatening their job or their kids' safety, they do not actually have a lot of channels to express that discontent. It's not something you can vote on. It's not democratically governed.
And when people are really nihilistic and very distrustful of these companies, which is how a lot of folks feel, they are going to go for things like grassroots protests or, in the extreme cases, political violence. So that's one of the things I notice when I see more incidents of violence, whether against healthcare executives or AI executives. It's people saying, I don't like the way our healthcare system works, I don't like the way AI is affecting my life, and I have no idea what to do about it. And I feel like I have nothing to lose anyway, because I feel very bleak about the future, so I might as well shoot someone. I think that's really scary.
Ryan Sean Adams
Yeah, yeah. AI is definitely arriving in a moment in which there is a convergence of a number of different things. Wealth inequality is at all-time highs. The last tech boom, social media, promised global connectivity to all of our friends, and we all understand that we're being fed something completely different, and everyone is disgruntled about that. So like you said, distrust is at all-time highs. Just being chronically online is probably at all-time highs.
Jasmine Sun
Oh yeah, absolutely.
Ryan Sean Adams
And then all of a sudden we have this AI industry, which, as you're alluding to, is a pretty good bogeyman to express a lot of our frustrations with society upon. It's kind of a blank slate: whatever you're upset about, you can point it at AI in some particular way. And to your point, a basic psychological principle is that if you thwart any individual human's goals, they're going to lash out. You back someone into a corner like a dog, and they have no choice but to bite. And with wealth inequality, you have a growing number of people who probably feel something like that: I just don't know how to improve my circumstances. And then here we have a new wave of technology, and the CEOs, the leaders of that technology, are really not doing themselves any favors. Sam Altman and Dario are both like, yeah, we're going to do a mass job wipeout and it's going to be sick.
David Hoffman
Sam has changed his messaging, though.
Ryan Sean Adams
Sam has changed his messaging.
Jasmine Sun
The thing is, A, he changed his messaging after the New York Times underclass piece. I think it was pretty clearly responsive to that. B, if you read his tweets closely, he says in the second tweet something like: I think people will be more fulfilled than ever, but we're going to have some painful transitions along the way. And that's the thing that really bothers me. It's what they all say. They say that in the future, 20 years, 50 years, whatever, down the line, we're going to have this amazing utopia where AI does all the work, all the diseases are cured, consumer goods are really cheap, housing's cheap, whatever's cheap, and life is going to be perfect. But they all talk about this transition period with a kind of euphemism. They'll say it really quickly: yeah, there'll be a bit of transitional friction, but it's going to be okay. And what do people hear? What do they mean by transitional friction? What they mean is that if you are a current worker, not somebody 50 years in the future, if you work as an illustrator or a copywriter or a young software engineer, you are kind of screwed. So even in Sam's new approach, he's still admitting that a lot of people working right now are going to be screwed over on the way to the utopia. And when people hear that, they're like, man, I don't want to be screwed over.
Ryan Sean Adams
Yeah, there's that stat that something like 80% of the American labor force is one unexpected medical bill away from poverty. And when you hear Sam Altman say there's going to be a painful transition, well, a painful transition counts as an unexpected medical bill. So this is probably making things feel a little too real, too threatening, to the average worker. I want to know what you think people like Sam Altman or Dario, the leaders, and also the people at these companies, think publicly versus privately. There's maybe a gap between off mic and on mic. This is a quote from your article in the New York Times: tech industry sources expressed more extreme concern about the labor market impacts of AI in private conversation, but suddenly became optimists once I turned on the microphone. So I want to understand: give us a take on what you think people believe behind closed doors, all the people inside the AI elite, Silicon Valley circles.
Jasmine Sun
Yeah, so the reason I wrote this New York Times opinion piece was in large part that I felt like people were saying things behind closed doors that they were not willing to say on the record. And because I had at least heard some of these conversations and was aware of the sentiment, I could piece it together and lay out a case, with publicly available information and a couple of anonymous quotes, as to what people really expected. Even while I was reporting the article, I noticed this happen. There might be a person I talked to just as part of my normal life living in San Francisco. We'd chat about AI and they'd say something like, yeah, I think the median person is screwed. I don't know what I would do if I were 17 and didn't have a lot of money. I don't think I could go to college. I have no idea. And then I'd ask that person, hey, would you mind doing an interview for the piece? I'm trying to make a case for managing this disruption better. That same person would say, sure, I'll do the interview. But then in the interview, they'd focus on stuff like, well, I think AI can help people start a lot of small businesses. And they would be super reluctant to say any of the things they had said maybe an hour or a day before to me, the same person. And this actually freaked me out more, because it wasn't just that people had these bleak predictions about what was going to happen to the economy and to workers, but that they were flipping their tune as soon as I turned on the mic and asked them to go on the record. And I noticed this happen with multiple people. Some people wouldn't go on the record at all.
And one person, a high-powered venture capitalist, told me: a lot of my executives are telling me that they want to lay off their workers with AI, but to be honest, Jasmine, I don't think they're going to talk to you for your piece, because they don't want to be the bad guys. They know that they're going to get backlash for saying that. And so I feel really frustrated when people say things like, you know, Dario is just trying to do marketing and hype for his company, and the reason that he's predicting these crazy things is that he doesn't actually believe it, it's just marketing. I'm like, A, he does actually believe it. I feel pretty certain he actually believes it. That doesn't mean he's correct about the way it's going to play out, but he at least believes he is correct. And second of all, it makes him look worse. It makes people more anti-AI when he says that, so it doesn't make sense as a marketing strategy. And then third of all, the vast majority of AI leaders, researchers and executives who hold the exact same belief as Dario are not willing to say it out loud, because they don't want to be the one targeted for laying off their workers with AI or for building the worker-replacing technology. And so I actually do think that the belief that there will be, at minimum, mass job displacement or a near term disruption is super, super common. I think people differ on, like, will there be jobs in the far-off future? The permanent underclass belief, the idea that everyone is permanently screwed, is more niche. But I think that the belief that AI will exceed the abilities of basically every human, and that this will cause mass job disruption in the near to medium term, is pretty common among folks who I talk to in the AI industry.
David Hoffman
So you think when Dario says 20% unemployment, you think he really means it? You think he actually thinks that's what's going to happen, and so this is a warning for the world to get ready for that?
Jasmine Sun
Yeah, I do think so.
David Hoffman
Let's talk about whether he's right or not, because there is significant pushback on those unemployment numbers. People say, people like Dario and Sam, they're not economists. One of the sources of that pushback is Marc Andreessen, who I think enjoys pushing back on a lot of your work, Jasmine. So, I mean, he'll point to the lump of labor fallacy, right? So he'll call this classic zero sum economics: the idea that there's only a fixed amount of work in the economy and then you have to sort of split it up. Well, that's not really true. That's the lump of labor fallacy. Of course, we can have grow-the-pie types of gains, productivity gains, new industry, new demand. The classic case of the lump of labor fallacy that everyone cites is ATMs. There was a time, in the 70s and 80s, when people thought ATMs were going to kill the jobs of bank tellers. What actually ended up happening in the decades that followed was we got more bank tellers. They actually grew because demand increased. And we've seen the same thing with radiologists. You know, AI was supposed to wipe out radiology jobs, and radiologist jobs are growing. Even programmers right now. Maybe not entry level, but the demand for programmers, at least by some measures, is increasing as a result of AI. They'll also point to deflation benefits to labor. So they'll say AI is a deflationary force. It's making everything cheaper, in particular services. So we want better healthcare services, time with a doctor. Well, you have a doctor on an app in your phone with doctor-level intelligence, or a therapist or a psychologist or a lawyer, or name your thing that you want to make more affordable. This is all a deflationary effect, and that will benefit labor as well. And then lastly, someone like Marc will dismiss all of the things that even people like Dario are saying as kind of a particular lens on the world, maybe like a doomer socialist type of take.
You're taking your worldview and you're applying that to AI, and you're being politically opportunistic about things. I'm not saying you in particular, of course, Jasmine, I know you are reporting about these things, but this is the pushback on the unemployment numbers: that's just not actually how it's going to play out. And even if Dario believes that's how it's going to play out, we've had technical revolutions throughout history, and they've led to more productivity. They've led to more positive sum games for more people. So why wouldn't it play out this way? What do you think is actually going to happen here?
Jasmine Sun
Yeah, I mean, that was a lot. Do you want me to just say what I believe, or do you want me to make the steel man for Dario's case? Because those are not the same, because I don't agree with Dario either.
David Hoffman
First, why don't you give the steel man for Dario's case and then I would be interested in your own opinion because I know you've spent a lot of time here and given it considerable thought.
Jasmine Sun
So yeah, like you mentioned, I think the most common critique of jobs doomers, which Marc Andreessen and other folks have made, is the lump of labor fallacy and Jevons paradox, or Javon's paradox, I don't know how to pronounce it. They basically say that if something is cheaper, then actually demand can go up. And so if software is really cheap, more people will want software. If therapy is really cheap, even more people can access therapy and demand for therapy will go up, and there will always be new forms of work to do. People's desires are infinite, they're not limited. It's not like once you satisfy one desire, they won't want a new thing. And we see this where there are now yoga studios, and maybe a hundred years ago we weren't spending our money on yoga studios or something like this. And I think in general, historically, this has been a really good argument, and it has held true through history. The thing that I think Dario would say as to why AI would be different is that both of those arguments, Jevons paradox and lump of labor, assume that more labor equals more humans. So what they're saying is that demand is unlimited and that the amount of labor to do in the economy will always go up. But they also assume that there is an inherent link between productivity and labor and humans, right? Whereas the thing that AI promises to do, particularly AGI, fully human-replacing AI, is that you can have labor without having humans. So you can produce software without having humans. So yes, maybe demand for software goes up, but AIs are making all the software. Or yeah, maybe demand for therapists goes up, but AIs can do the therapy. Lots of people are already using AI for therapy. And we're not yet in that world, because AI is very jagged, it can't do everything yet, and so humans remain complements to AI. Right now, humans are augmented by AI for a lot of things like radiology.
You need both a human and an AI together, and so if demand goes up, you still need human labor. But AI is generalizing really fast. It's improving really fast. And Dario believes that in the next two, three years, we're going to get AI that can produce infinite amounts of software, therapy, or whatever it is, without the requirement of having any humans. And so let's take your software engineer example. Right now we see that overall demand for software engineers is going up, but the junior engineers are affected, right? So if you're a new grad engineer, you are actually struggling to get work, because you're not really that much better than Claude Code. But if you're a senior engineer, you're totally fine. Lots of demand for senior engineers. The thing is, if you look at the way that AI models have progressed on software benchmarks, year after year after year, they're improving really, really fast. And so right now, maybe AI can only replace a junior engineer, but it seems totally feasible to me that next year AI will be able to replace a mid-level engineer, and maybe the year after that it will be able to replace a senior engineer. And if that continues, then we will no longer need human engineers to make more software, right? And so the argument that people like Dario would make here is that AI breaks the necessary tie between humans and labor, and that's the thing that people like Marc Andreessen are failing to consider. That would be me making the steel man.
David Hoffman
Even there, on Dario's case, wouldn't it be the case... Let's say AI automates all of the types of tasks in the economy. Isn't it the case that humans still have this insatiable demand for kind of status types of games? So you think about something like yoga or, you know, a personal trainer or something like this. This is just about fulfillment, I suppose, in life. Or maybe there's some idea of a status game that's being played. You know, it's like, I can get stronger, I can get more fit, something like this. And so maybe all the software developers become personal trainers and, you know, they spend their time on more fulfilling tasks. And isn't it fantastic that all of these more labor intensive, boring types of jobs get filled, and as long as humans are around, we'll just replace all of that with other games that we play, like status types of games.
Jasmine Sun
Yeah. So this is the argument that, like, Alex Dmas, the economist, has made, right? What will become scarce? Relational goods, like you said: therapists, personal trainers. Party hosts will become scarce; I think there'll be a lot of party hosts after AGI. Event planners, whatever it is. And to be clear, I personally am quite sympathetic to this argument. But the argument that Dario would make here, or, when I'm feeling pessimistic, the argument that I would make here, is that actually AIs are really good at emotional and relational labor too. And even a lot of wealthy people choose that stuff. So before, maybe if you wanted to be entertained, you might have to go see a live play, you'd have to go see live theater, and you need like 50 people to make this production of live theater. Now you have Netflix and TikTok, and increasingly in the future we're going to have Netflix and TikTok with AI avatars and AI storylines. You just need way fewer humans to produce the same entertainment. And even people who are really rich sometimes prefer to watch Netflix and TikTok versus go to the theater, even though they can also afford to go to the theater. A lot of people do prefer asking ChatGPT for medical advice, or what they should do about their relationship problems, over asking a human therapist, even if they can afford the human therapist. So we see people make choices that prioritize the convenience and quality of technology over the status good of talking to a human, over and over and over. We see that happen all the time. And I think it is true, in my opinion, that there might be some niche areas where people really want another human there. But that pool might actually be a lot smaller than people think. People pay more for Waymos than they do for Ubers, for example. Even people who could afford a black car taxi driver will often prefer the Waymo instead.
And so, you know, I think that actually AIs are quite good at doing a lot of these relational tasks and will continue to get better at them. I also think that one of the things you want to look at in terms of demand is how many people can afford to produce demand. So I spent some time in China recently. One of the problems that China has is that it's had white collar unemployment for quite a long time, for non-AI-related reasons. And one of the reasons for that is household spending is very low. And so you don't have as big of a services economy, because there's not as big of a middle class. You have some very rich people, you have a lot of very poor people. And middle class spending is really necessary to drive consumer demand, because rich people only have so many hours in a day, they only have so many wants, right? And so if you have a world that's very unequal, which is something that we expect with AI, because there's going to be more returns to capital, those rich people may be able to hire a few party planners and a few personal trainers, but they've got 24 hours like the rest of us. And so you're just not going to have as much demand in a very unequal economy, compared to one where there's a really strong middle class and everybody is, you know, buying a lot of services and goods all the time. So those are some arguments that I would consider making if I were trying to make the more extreme case. But once again, I just want to say that my own beliefs are a little bit more moderate.
David Hoffman
And so let's zone in. What are your beliefs on unemployment?
Jasmine Sun
Yeah. So what do I think is going to happen? I lay some of this out in the New York Times piece. I do expect near term labor disruption. I think that there are certain categories of jobs that are way easier to automate than others, and this is where a lot of my disagreement with people like Dario comes from. Software engineers are super easy to automate, because code is verifiable, all of the context is in a code base, and you have this open source data on the Internet that you can go train on. Most jobs are not like that. Software engineering is a really weird type of job. Maybe accountants are also like that. There are a few jobs like software engineering, maybe digital marketing, copywriting and freelance digital illustration, maybe accountants or management consultants. Let's call it like 10% or 5% of the US economy is jobs that are very, very easy to automate for some slate of reasons like this. Those I do think are going to get disrupted pretty quickly, because financial incentives are just going to make bosses choose to use AI over hiring humans. Especially when a human gets laid off or they quit their job, you're just not going to replace them if an AI can do a good job. So I do think we will see some labor impacts, even though I don't think it's going to be all of the jobs. Because physical world jobs, relational jobs, jobs that are protected by regulation, like doctors, that stuff I think is going to take a long, long time to automate. So I see these near term disruptions. I also think that retraining is usually overestimated by economists. So folks who believe in stuff like lump of labor, these economists tend to say that people are just going to go move to other jobs.
So before, during deindustrialization in the US, when a lot of factory jobs were automated, these economists predicted that the laid-off factory workers would just move to different geographies to work in different factories, or that they would learn digital skills, like learn to code. And I think we all kind of laugh at that now, because we see over the past 10, 20 years that these steel workers did not learn to code. They also did not move. They often got addicted to opioids and had a really, really bad time. And we are still living out the political and the social consequences of deindustrialization, even though it wasn't that many workers, and it actually created more jobs total. But the new jobs that were created by factory automation were all like software jobs in San Francisco, and not jobs in Buffalo, New York, right? And so just because you have new jobs elsewhere in the economy does not mean that the people who are laid off are going to be able to retrain into those new jobs, even with income support, even with access to school. Because these people might be like 50 years old. They just don't have the brain elasticity anymore, they don't have the motivation anymore to go and learn something brand new. And so I think that even if it's, let's say, 5% of jobs that are going to be automated by AI, and it's not all of the jobs immediately, I think a lot of these folks are going to really struggle to retrain. I don't think that they're all going to easily switch into a new job. I think they're going to build a lot of political resentment. And so this is where it sort of connects to my interest in AI populism. This time, maybe instead of right wing resentment, the kind that drove Trump, it might be more like left wing resentment that blames the AI billionaires. I think we already see that. I think some of the biggest critics and skeptics of AI are people like creatives whose jobs have already been impacted by AI.
And so I think we're going to get a lot of populist backlash that results from people's jobs being threatened, even in small numbers. And I also think that on the macro scale, even in a world with full employment, you still might get a declining labor share of the economy, which is something to worry about. This is the idea that, yes, maybe everyone still has a job, but overall wealth is accruing to capital owners who have the ability to rent infinite robot labor, and wealth inequality can cause its own kind of problems, like these political imbalances, resentment at elites, things like that. And so that's something I worry about even in a world where people mostly retain their work. So my view on job displacement tends to be: it's not going to happen all at once. It's not going to be the sort of apocalypse. It's going to affect some narrow categories of people, but those people are going to be really, really mad about it, and it's going to really, really suck for them. And I kind of want our policymakers to be more proactive, to tell people: if your job is automated by AI through no fault of your own, if you spent decades learning some skill and now it goes poof because of AI, I do think we should support those people. I don't think it's their fault.
Ryan Sean Adams
What do you think about the whole concept of just, like, the capitalism end game? Which is, it's just game over for labor. You take superintelligence, and then not too long afterwards you get super robots. You smash those things together, and the whole concept of being a human is just obsolete and redundant. And then this invokes the idea of the permanent underclass, where there are just people who are just stuck down there. And then you zoom forward a few decades and you get movies like Elysium, where all the elites escape to their super fortress in space and all the permanent underclass are stuck on Earth, and it's just entrenched that way. What do you think about this?
David Hoffman
Yeah, and to flesh that out a bit more and add to that, David: it's the idea that capital no longer needs labor to function. For all of its history, capital has had to hire labor in order to get jobs done and get work done. And now it has AI tokens to substitute for human labor, so it doesn't need labor any longer.
Ryan Sean Adams
And this is like the extreme version of what might happen. Yeah, right, right.
David Hoffman
There are books on this. There was an essay called The Intelligence Curse, I don't know if you read that; we had the authors on. Also, Garrison Lovely is coming out with a book called Obsolete, I think, which delves into this thesis. Basically, labor becomes obsolete in this world.
Jasmine Sun
Yeah, I mean, I think that is one of the versions of the things that people like Dario, or even more extreme than Dario, do believe. That's what they're worried about, right? That AI will be a one-to-one substitute for labor, it will be able to do literally everything, and capital will discard people. And that's where I would start to make arguments like you've been making, Ryan, where I'm like, well, actually, if human labor is scarce, some people will want their human party planners. And so I do think there will be some jobs available in the relational economy. I think that it also requires believing in full automation. Again, technology has to advance so much that it's not just replacing cognitive jobs, but it's also replacing jobs in the physical world. And do I think that could happen someday? Maybe, probably. Robotics is improving, but we're pretty, pretty far away from that. I think that we are going to have a lot more problems to deal with in the next decade before we get to the point where full automation is even worth considering. Even folks who do map out and care about these full automation scenarios, like the economist Phil Trammell, who wrote his Capital in the 22nd Century essay making a version of this argument, he called it the 22nd century because his very rough, low confidence estimate was 100 years in the future. And again, we may never arrive there, if the relational sector of the economy is big enough.
David Hoffman
Wait, 100 years in the future, what happens?
Jasmine Sun
Labor will go to zero. So with full automation, labor goes to zero. And if it's plausible, it's going to be like a hundred years in the future or something like that.
Ryan Sean Adams
Even if it's this drastic, we have time.
Jasmine Sun
Yeah, I think we have a lot of time, and I think there are a lot of things that could happen between now and 100 years from now. So maybe personally I'm more focused on these near term scenarios. But I do think it is worth considering that capital relies on labor right now, and if it doesn't require humans as much, I don't expect governments or corporations to be as generous in terms of things like welfare, or to care as much about what people think about how things should go, because they have robot alternatives. And so those political dynamics might start showing up.
David Hoffman
Yeah, I mean, that's the argument of The Intelligence Curse, basically: that it breaks the social contract between labor and capital, and between governments and their citizens. And so a new social contract has to be, you know, created.
Jasmine Sun
Yeah, and I think that's why people are turning to things like violence, frankly. If you as a worker, or as a normal person, have no leverage as a result of doing work, because that's one of the traditional ways you have leverage, where do you have leverage? You can do violent acts, do terrorism, and riot in the streets. And so I think people are recognizing that one of their few channels of leverage, when you lose everything else, is to do violence. And so I think that even if I were a totally self-interested capitalist who doesn't care about people at all, I would be pretty concerned about making sure that not too many people end up unemployed and disempowered, because I do not want to face these violent threats from people who have been deprived of every other channel for leverage except violence.
David Hoffman
But, okay, doesn't it seem a little early for that? Like, we don't know how this is going to end up yet. What is unemployment in the US right now? Is it something like 5%?
Jasmine Sun
Yeah, I mean, I don't think it's too early to plan for scenarios. Right? That doesn't mean we have to do UBI right now; I would not support that.
David Hoffman
No, to be clear, I'm not saying don't plan for scenarios. But why are people getting violent and angry already? It hasn't happened yet, at some level. That's what I find somewhat curious.
Jasmine Sun
Because they feel like they can stop it, right? You know.
Ryan Sean Adams
It's a little like Ted Kaczynski.
Jasmine Sun
That's what they're trying to do.
David Hoffman
Okay, but is it a little Ted? That's pretty strong data.
Jasmine Sun
These are extreme people, right? These are not normal people. Most people are not engaging in violent attacks. But the thing is, if you genuinely believe that this thing is going to come for you and your family and your community, then these people believe that if they do enough violence, they can stop it. Again, I do not endorse violence. I think this is super bad, and it's not that many people. But you can see how they arrive at that.
David Hoffman
What I'm trying to find out, the violent actions aside: all of this vitriol against big tech, how much of it is vibes versus reality? Like, we don't actually know what is going to happen yet. It hasn't hit us yet. So much of this is narrative and vibe, and it might turn out...
Ryan Sean Adams
Which is a narrative which the AI leaders are fostering.
David Hoffman
Some of them, yes.
Ryan Sean Adams
Which gives the vibe a lot of credibility. Like, no one is saying the other side of the vibe other than, like, Marc Andreessen.
Jasmine Sun
And Marc Andreessen is investing in lots of companies whose value proposition is to replace workers. So I know that Marc Andreessen is tweeting different things, but if you look at his portfolio of companies, many of them have a core value proposition of replacing workers, right? And so I see why people would be skeptical of Marc Andreessen's public statements.
Ryan Sean Adams
Right, right. And plus, Marc Andreessen just kind of politically aligned himself, and so now he's kind of shoehorned into that sort of political camp. The other thing, Ryan, that I think is kind of worth highlighting: did you read the statement that the recent attempted Trump assassin left behind? In his manifesto there was a question-answer, question-answer format, where he was answering his own questions, like, why are you the one to do this? And then he would answer it. He basically rationalized himself, for anyone who is curious in reading his manifesto, as to why he thought he was valid in attempting an assassination on Donald Trump. And this is clearly a guy who's chronically online. He was in Reddit communities, and it looks like kind of hyper-rationalism. And I think these are the same people who are doing political violence against Sam Altman. This is why I kind of said it's a little Ted Kaczynski-esque: they think that they are stopping this future Terminator, Skynet-type thing that is going to happen, and they just have to do the right thing in the now to solve that future problem. So while, again, no one on this podcast is supporting political violence, I can kind of see the logic. Well, you only need a very...
David Hoffman
The stakes are this high, right?
Ryan Sean Adams
If you see it that way. But the tech leaders are saying the stakes are this high.
Jasmine Sun
Like, I don't know what Dario's p(doom) is, right? But I think he probably has a p(doom) that's like 30% or something, is my guess. It's probably quite high relative to most people. He is clearly very worried about the prospect that AI could kill everybody or leave the world a very bad place. And I can see that if you believe that, which again, I personally do not happen to believe, but if you thought that these tech leaders were actually gambling with your future, that they were going to do two coin flips and there's a 25% chance that you end up, if not dead, in the permanent underclass, you might think, just like, you've got to kill baby Hitler, you've got to do this kind of violence, you know. And they've done too many thought experiments. This is the whole thing about the hyper-rationalism, being online too much. I'm like, you have done too many thought experiments. Read some virtue ethics. We're not going to do this.
David Hoffman
Yes, please. Virtue ethics. Let's re-inject that, please.
Ryan Sean Adams
I want to talk about the political map that comes out of this. There are a few ways to divide up the future of politics when it comes to AI. There's left versus right, there's labor versus capital, there's Silicon Valley versus Washington, D.C. How do you think the lines are going to get drawn here? Clearly Bernie Sanders is on one side, and I think AOC would join him. I don't know necessarily who the pro-AI politicians are, but when we see factions joining together and political lines being drawn, how do you map this out?
Jasmine Sun
Yeah, I think this is super interesting. I mean, like you said, there's a million ways it could break. One that I worry about when I'm freaked out by all this is that it's going to be these techno-capitalist elites from both sides of the aisle, sort of centrist, pro-neoliberalism, pro-technology folks, against everybody else, whether they're right, left, whatever, people who don't like technology. Some people have articulated it as: friends of the future will be one camp, and then everyone who's trying to stop technology and stop change will be another camp. I don't know that it will be that, but it sometimes feels really plausible, especially when I notice that a lot of these very anti-AI factions are very bipartisan. They have people from a lot of different political camps: creatives, labor unions, environmentalists, states' rights people, family-first people, religious people. So many different interest groups are coming together, all because they think that AI is going to alter the existing environment, existing jobs, people's existing social circles and their way of life. And then there are people who are more interested in economic growth or the long-run future, or are just a little bit more pro-technology in general. And this freaks me out, because I feel like personally I am someone who really likes technology. I like using AI, I love the Internet. I feel like it's added so much to my life. I believe in economic growth. I just want to distribute the benefits of growth equally. I just think that we should care about the distribution. But I generally am pretty pro-technology, and it really freaks me out to think about this kind of thing. Like, I wonder how Ezra Klein and Derek Thompson, the abundance folks, feel, because they are people who try to make a case to the Democrats that they should embrace technology more, right?
That actually, if we think about the way that things like AI might bring down the cost of pharmaceuticals or unlock scientific discoveries or make work less onerous, that could be an amazing thing for Democrats and for anyone who cares about broad public well-being. And I was a pretty big fan of Abundance. I'm pretty sympathetic to that argument, but I don't think that's the way the current Democratic party is going to go, because they're the ones whose voter base of youngish, college educated, white collar people are the most impacted by AI. They are very scared. And we have a lot of distrust of the technology companies right now. The way people feel is, yeah, maybe there's going to be a cure for cancer, but I'm not going to get it. Maybe there's going to be, you know, a therapist, a teacher, whatever, in your pocket, but I can't pay the 200 bucks a month to get the best models, and I'm going to be left on the other side of that divide. And so I think with such low levels of social trust right now, trust in companies, trust in the government, trust in each other, I would not be surprised to see an increasing split around these lines of: are you part of this broad populist group, or are you on the side of the techno-capitalist elites or whatever?
David Hoffman
Wait, wait. So the way you broke it out, right, your fear, the thing you hope doesn't happen but the thing that you are seeing take shape, is some sort of binary between the futurists and the Luddites, or the technophiles and the anti-tech people, the e/accs, the accelerationists, and the decels. I'm very much seeing that too. And I think that is the worst possible outcome, because there are a lot of people who are kind of more in the middle, who are like, hey, technology, if it's good and if it helps people and there are ways to marshal it towards that, we can't just be anti-tech, and also we can't just be pro-tech no matter what the technology is. It's kind of a guided-tech type of theory. And those people who are caught in the middle will have to pick a side. I think probably Derek Thompson and Ezra Klein are among those who would have to pick a side. And I'm wondering, if those are the two sides, at least for this election cycle, where do you think that splits among party lines? It seems like the left is going more in kind of a decel direction than the right, though there are anti-AI factions on the populist right.
Ryan Sean Adams
The right is not inherently pro-AI either.
David Hoffman
Well, but they seem to be, at least more so than the left, certainly under the current regime.
David Hoffman
And so if that's the break, are we going to get Democrats who have to be decels and then Republicans who have to be accelerationists?
Jasmine Sun
Yeah. I mean, my sense is that in the 2028 election, unless things get really, really crazy with AI, it'll probably still be Republicans versus Democrats. But between those party lines, I think it is more likely the Democrats will be the decels, which, as someone who is personally closer to a Democrat than to a Republican, and also somewhat closer to a pro-technology person than an anti-technology person, I'm like, oh, I really don't like this. But yeah, I think that, again, AI impacts the voter base of the Democrats more than it does the Republican voter base, for the most part. The job threat is one of those concerns. Democrats these days tend to be more concerned with things like protecting labor, protecting the environment, protecting creatives; a lot of the particular concerns that AI introduces are more aligned with the Democratic voter base. And I think even in this current political environment, the fact that Trump was mostly an accelerationist and mostly a pro-AI person really prevented a lot of Republican Congresspeople who wanted to pursue AI regulation from doing so, because they knew that Trump, or one of the aligned PACs or something, was going to go after them if they tried to introduce too onerous AI regulation. And so my guess is that Democrats would be the more decelerationist party. But then again, you do have folks like Gavin Newsom, who is the current Democratic front-runner, who is pretty pro-tech and has aligned himself with Silicon Valley a lot. So I'm not sure about that either, just because you do have people like Gavin Newsom or Jon Ossoff, who recently did a fundraiser in San Francisco with Chris Lehane, the OpenAI lobbyist, right? And so you do see a few Democrats going for the pro-AI lane.
I wonder if that's going to work, like in a money-versus-the-people battle. In a world of increasing populism and resentment against tech, does having this super PAC behind you, does having Silicon Valley money behind you, win you the primary against other people who are like, screw the AI billionaires? I have no idea. But that'll be interesting to watch.
David Hoffman
If the left or the Democrats do go in that direction, which is kind of the anti-tech, moratorium-on-data-centers, Bernie Sanders type of approach, doesn't this kill the Ezra Klein, Derek Thompson abundance agenda entirely? Because maybe you have abundance with things like housing, if you could even get there, but that means you don't have abundance in intelligence. And intelligence, as we were just discussing, can mean cheaper healthcare, cheaper therapy. It can be a deflationary force for, in theory, everything. I mean, if Dario is even a small percentage correct, then that can be a massive supply shock, in a good way, to our entire economy. And it's essentially a progressive policy to give healthcare intelligence to every citizen of the United States. We could do that if we have an abundance agenda for intelligence. But it seems if you go full decel and you just do moratoriums on things, then you don't get that.
Jasmine Sun
I mean, I think if the Bernie moratorium camp takes over the Democratic Party, if right now most Democrats are not backing the moratorium but they all decided to go that way, I do think the Dems would be the party of the decels. I think that would signal a big shift for the Democratic Party, if a moratorium got majority support among Democrats. I will say, if I were to steelman Ezra Klein and Derek Thompson... I think they actually talked about AI populism in their one-year retro on Abundance.
Ryan Sean Adams
I heard that.
Jasmine Sun
Yeah, yeah. And I think one argument that you could make, if you were them, is that the thing that's blocking healthcare provision and housing and all that is not really more intelligence; it's either a political issue or something to do with manufacturing, or stuff in the physical world. I mean, we've seen Baumol's cost disease, where the cost of digital services goes down, but a lot of healthcare is still surgery, or housing requires building in the real world, or the US has lost a lot of manufacturing capacity compared to places like China. And so one could make an argument that one is pro-technology in the sense of physical things like drug development and manufacturing, things that deliver these broad-based benefits, even if we don't max out intelligence or something. So I could imagine an argument something like that. But I do broadly think that if the Democrats become a firmly anti-tech party, that would be a blow to the abundance-style progressive movement.
David Hoffman
Yeah. Like in the New York State Senate, there was a bill being considered, Senate Bill S7263. And this would basically prohibit AI chatbots from impersonating licensed professionals for therapy or healthcare advice or that sort of thing, which of course drives up the cost. It doesn't decrease the cost of providing those services if someone wants to get them inside an AI or a chatbot, right? So that does seem to be part of the decelerationist agenda seeping into politics. I don't know if that'll pass or not.
Jasmine Sun
Yeah, I don't think it will. I mean, if it does, I think it'd be really stupid. Like, I think it's a stupid bill. Most people like using chatbots for medical advice. That's one area where I would say people do not have populist sentiments: most people find their chatbots quite useful for doing these kinds of little tasks, giving them advice. And I think taking that consumer surplus away from people would be a bad thing. Similarly, look at the Waymo battles, right? Waymos are safer than human drivers; I think the research is pretty clear on that fact. They feel safe when you're in them. I love taking Waymos. I do think that I would like to see either Google or governments think about how to transition cab drivers into other roles if Waymo does expand in a city, because, again, it's not those cab drivers' fault that they invested decades in a career that may go away. But I do want to see Waymos rolled out. Eventually, I think the world will be better if we have technologies that make us safer. And so to me the question is just how we navigate that transition in a way that is empathetic to the people who lost out because the technology devalued their skills. But I definitely would like to see a vision where we are still spreading technologies that make us safer, make us healthier, whatever.
Ryan Sean Adams
Quick shout-out to OKX. They are live in the States building the new money app, and Wall Street is taking notice. The parent company of the NYSE just invested at a $25 billion valuation and took a board seat. That's the New York Stock Exchange coming to crypto, not the other way around. And why OKX? It's the only app combining a full centralized exchange and self-custody wallet in one place. CEX trading, DEX access, on-chain activity, all in a single interface. No bouncing between five apps, copy-pasting addresses, or bridging tokens in separate tabs. They support Bitcoin, Ethereum, Solana, Base, and more: millions of tokens, just a few clicks, and infrastructure that processes trillions in transactions and keeps assets fully backed. OKX users are set to get tokenized New York Stock Exchange stocks and derivatives later this year. TradFi and DeFi, finally in the same app. Head to the link in the show notes, download OKX, and see why it's the go-to for going bankless in the United States. Not investment advice. Services not available in New York, Kentucky, and Texas. What's something you're actually looking forward to next month? Because Coinbase is doing something interesting. Coinbase One Member Month starts with 20% off your first year of Coinbase One, plus a $50 Bitcoin bonus when you spend $100 with a new Coinbase One Card in your first 30 days. They're also layering in extra rewards and perks throughout the month. And if you're active in crypto, Coinbase One is basically designed for you: zero trading fees on thousands of crypto assets, 3.5% APY on USDC and boosted staking and lending rewards, and up to 4% Bitcoin back with the Coinbase One Card. So if you're going to try it, now is the time to lock in that 20% discount before the weekly rewards kick off.
Start your month of more with 20% off the first year of your annual plan at coinbase.com/bankless. That's coinbase.com/bankless. Visit coinbase.com/bankless to get 20% off the first year of your annual plan today. Offers are valid until May 31. Terms apply. Coinbase One Card is offered through Coinbase Inc. and Cardless Inc. Card issued by First Electronic Bank. Bitcoin back rates are based on cardholder assets on Coinbase.
David Hoffman
I was kind of wondering: an undercurrent of this whole AI populism and our discussion today has been growing wealth inequality. And I sort of wonder if AI populism is just a proxy battle in some way, a bundling of the greater problem of wealth inequality. And as I look at something like wealth inequality, I wonder what the problems inside of it actually are. So if everyone is getting wealthier, but the top are getting wealthier at a faster pace, at some level you look at that system and you'd say, okay, what's the problem, as long as we're all getting wealthier? But then sometimes I wonder if we call it wealth inequality, but it's really more about power inequality, and it's more about a concern that a certain group of elites are able to translate that wealth into coercive power, and they begin to become kind of the rulers. I don't know if you've given any thought to that, but what is the driver behind this backlash to wealth inequality? Is this really all just kind of a proxy battle here for power? Is that what's really in contention in the American political system?
Jasmine Sun
Yeah, I think that's a good diagnosis. I think that a lot of inequality is a proxy battle for power, right? I think that's why people are not that excited about certain ideas, like a UBI, because it feels like being on permanent welfare and relying on handouts from the people who actually do have all the money and power. And even if they're keeping you around so that you can pay your rent and pay for food, you don't really have a say, because you're still reliant on them. The dependence is that you are dependent on, say, with UBI, the state for doling out those welfare benefits. Or, you know, you look at corruption as a top-five issue for what voters care about, and you look at a lot of the corruption that's going on with the current administration. You look at the way that Elon Musk got into politics basically by spending a ton of money. And not only did he spend all that money, but a lot of things that the Trump admin did basically went his way. He was allowed to do DOGE; they cared about the issues that Elon wanted to care about, and he basically spent his way into political power. And people see that. People see that when you have money, you can influence policy, you can influence the physical world, you can buy yourself a lot of freedoms that other people don't have. And I think that's where the real frustration comes from. Because, like you said, if people can pay their bills and pay for healthcare and pay for food, which, again, not everyone can, but that's a different question, that's not the same as, what's the point of my vote when Elon Musk can just buy his way into power? Right.
Ryan Sean Adams
Jasmine, you're clearly very sharp and informed about all these subjects, so I've definitely appreciated getting your wisdom and your takes on the podcast today. When it comes to actual policy positions, what are your recommendations? What do you think people should do? If you were the lady behind the policy machine, do you have any ideas or concepts that you think would actually be effective interventions here, that would kind of smooth out the hard edges on both sides?
Jasmine Sun
Yeah, I mean, oh man, this is the hard question, right? I should say I'm not a policy wonk, and I didn't focus most of my research on policy solutions. I've talked about them with a lot of people, but it's not something that I feel really confident in my prescriptions for. It's also, I will say, something that I don't think anybody feels very confident about knowing what to do, because, like Ryan said, a lot of the impacts haven't played out yet. We are going to need a different policy situation if we see slow and gradual job displacement versus if we actually do get this big apocalypse or job shock, or maybe we get no job shock at all, everything's fine, and then we shouldn't do anything crazy. But I do think we should be planning for those different scenarios. Pretty likely, it seems to me, we're going to need some tax-and-redistribute, like corporate and capital gains taxes. If it is true that a ton of money basically flows to these AI infrastructure companies, for example, and they get way, way, way bigger than everything else in the economy, finding the right way to do taxation and redistribution is pretty important. What do you spend on if you're going to redistribute? I think some of it is longer unemployment insurance. Right now, in California where I live, you get six months of unemployment insurance. If we start to see a lot of AI displacement of these long-time jobs, people generally need more than six months to learn a new skill; maybe you need 12 months or two years of unemployment insurance. Things like universal healthcare start to become relevant, because one thing I expect in an AI world is that you're going to have more entrepreneurship and small businesses and more freelancers. It's less like you have a giant firm that employs tens of thousands of people or thousands of people.
You're going to have more one-person companies, people doing startups, people doing small businesses, that person with a yoga studio or their event-planning thing or whatever. Those folks are going to need healthcare. And right now, I think the economy and the benefits system are wired for a place where most people are in these normal W-2 jobs. But actually, what does it look like if you have a lot more small business owners and freelancers? We are going to need to think differently about benefits and healthcare and things like that. I also think education is going to look really different, right? Right now we have this four-year college system, and a lot of government effort has been spent pushing people through the four-year liberal arts college system. I am a little bit pessimistic about how long that's going to last. There have been a lot of cracks in this four-year college system for a long time, a lot of problems with it. This idea that you study history for four years and get handed an accounting job at the end, or whatever it is, has always been a broken promise. Your skills are not tied to any of the classes that you went to. People are going into tons of debt, and now they're not even getting a job at the other end of it. And so maybe we need apprenticeships. Maybe we need national service programs: some countries have national military service; maybe we do national public service, and you work some kind of job, whether it's cleaning up parks or working in administration, and learn some actual on-the-job skill that we need, and you actually convert that into job skills instead of taking philosophy courses that you ChatGPT your way through, which is basically what's going on right now. And so the way that I'm thinking about it is: what are the ways we expect the economy to change?
I expect less white-collar IC work. I expect more small businesses. I expect more relational-sector work. I expect more people who go through these periods of losing their job and needing to find a new thing to do. And so, how do we plan policies that are going to train people, that are going to give people a bit of a cushion, so that it doesn't ruin your life if you're in this period of vast technological change?
David Hoffman
So say we give them a cushion, right? But say, on the other side of that, Dario is more right than everybody else and there's actually no real job on the other side. Then do we get to a UBI? What do you think about that? And there are some other interesting ideas, like the idea of a tax per AI token generated, where you're just taxing AI at the source of consumption. Or there's the idea of creating kind of a sovereign wealth fund, almost the way resource-rich countries with oil and natural gas do. So we take a percent of AI and we create a sovereign wealth fund that all citizens own. Are any of these ideas appealing, or are they too radical to think about right now?
Jasmine Sun
I think we should think about them. I think that researchers should start to plan out what that would look like if it seems like we're moving more onto a Dario track, which, again, I would say right now we are not on that path. But if it seems like we're ticking towards it, I would prefer if some research had already been done. I mean, you know, Sam Altman did that UBI pilot a while ago, right? He tried just giving a bunch of people money and running a randomized controlled trial to see what people did with that money. One thing I often wonder is, what is the next version of that? Do we need to do a pilot of a jobs guarantee? Do we need to do a pilot of some of these other programs, so that if we hit a world with truly mass unemployment, we know what the better options are? I think a public wealth fund is interesting; I think the Norway model is pretty interesting. Shorter work weeks are one that I think about a lot, because, again, there's the lump-of-labor fallacy: in a world where humans are always necessary, you don't want to do that. But if humans are able to do fewer and fewer tasks because machines can literally just do the vast majority of tasks you could ever imagine, because they're just smarter and more capable in all dimensions, I think it would be better to shorten the work week so that people still have jobs. It's not like 10% of the people have jobs and 90% are unemployed. I would personally rather have a world where 90% are employed, but they have maybe a two-days-a-week work week, a 15-hour work week. Because, again, that still gives you a little bit of leverage when you care about these political issues. Do you actually have leverage? Is there some reason that capital or the government has to care about you? You have some role in the economy; you also have some purpose. I think it's better for people to feel like they have a purpose in life.
I think about shortening the work week, and maybe we go from a 40-hour work week to a 30-hour work week to a 20-hour work week as the number of capabilities that AI can handle expands and human capabilities, in a comparative sense, decrease. So shortening the work week is one that I think most people would be in support of, because, again, I think people want purpose; they just want a relatively easy and chill job to do.
David Hoffman
Jasmine, I think one of my biggest fears is something you said earlier, which is that AI populism wins out to such an extent that decelerationists kind of win the day and we just kill this technology. We say not in our town, not in our county, not in our state, not in our country. And then it moves somewhere else, maybe offshore, and we lose the benefits of it. We lose the productivity gains, we lose the labor enhancement; maybe another country gets these instead. And I hope we don't go too far in that direction, maybe the direction critics would say Europe has gone in some areas, in some ways. You know, Germany with nuclear, for example, with its moratorium on nuclear power generation, so there are no more nuclear power plants. But at the same time, we can't just have the tech-optimist vision without any regard to how wealth gets distributed to the rest of the population. So if you were to think through some sort of grand bargain, where you're mediating these two parties, and there's Bernie Sanders on one side and maybe Marc Andreessen on the other side, what kind of grand bargain would you propose to have a meeting of the minds? I think about the way the US government worked in the 1990s, where the right and the left were all like, okay, maximize the pie; the left just wanted to tax it higher in order to pay for our social programs, for instance. Now we're of the mindset of either full accelerationist or full hit-the-brakes. But is there some kind of grand bargain we can strike? What would that look like?
Jasmine Sun
It's a hard and a big question, but I'm asking the same one. It seems to me that there has to be some kind of grand bargain. I mean, I think that the original New Deal and rewriting of the social contract, with the introduction of work-week regulations, minimum wage, and union bargaining power, was one. I was having a conversation with a friend earlier about how, during the 20th century, the United States experienced a ton of mechanization and automation, and some people's jobs were displaced in that process, but you didn't see mass political violence, you didn't see a Luddite-style backlash. And there are different theories for why this is true, but one of the strongest is that in the 20th century, automation mostly affected factories that had strong unions, which basically sat down at the bargaining table and worked with the automators to figure out: okay, we're going to have wage guarantees for the people who keep their jobs; we want workers' wages to go up if productivity goes up, so let's tie workers' wages to productivity gains. And also you had the expansion of federal-level welfare in order to, again, reassure people that the jobs would be better jobs, that they would be taken care of, that they would share in the gains. I think most people want to live in a growing economy. Most people want to be more productive. They just want to know that they are going to get a piece of that. And if their company ends up making more money because technology increases productivity, they want some part of that as well. And I think that's the part that's broken. I don't know that today unions are the right people to be doing that bargaining. For one, we're not affecting unionized industries anymore; we're talking about software engineers and marketers and whatever, and most of these people are not in unions. But what the bargaining table is, that's the thing I now think about.
I think when there's not a legitimate channel to have those kinds of conversations, that's when you see things like political violence, or you see these data center moratoriums, because you don't have a place where you can actually negotiate. So with things like Waymo, I'm like, is there a way for the cab drivers and Waymo to come to the table and figure out some kind of arrangement where Google, which is a very profitable company and is going to make even more money in a world where Waymos are everywhere, can somehow share some of that with the cab drivers who are affected? Fund training programs? I don't know what it is, but these are the conversations that I'm really interested in. And I hope that policymakers and political candidates start to think about what their role in this looks like. Maybe they're sitting down with the AI executives and saying, where are you seeing impacts on jobs? What do you think we should do there? Because my belief, maybe naively, is that if you can come to a deal, if you can get to a bargain, we're going to be able to preserve the gains of technology, the growth that you get from technology, without this kind of mass populist backlash.
David Hoffman
One way to ensure that people get their share of the pie, I think, is to also stay ahead of the curve and use AI to the best of their ability. The way me and Ryan talk about this, when we're optimizing our Claudes and our Claude coworkers, is: how do we get our Claudes to produce more valuable tokens? What do we need to do? What prompts do we need to write? What data do we need to give it to make the tokens that come out of our Claude more valuable? And Jasmine, you are also a content producer. We're all content producers here. You do a lot of writing on one of the fastest-growing Substacks, which we will link in the show notes if listeners want to subscribe. But maybe this is just a personal question: how do you use AI to do your work better? And what do you have to teach both myself, Ryan, and also the listeners?
Ryan Sean Adams
Yeah, we don't want to become NPCs. High agency only.
Jasmine Sun
Yes. Oh my gosh. I mean, I feel like you guys are probably masters at this, so I don't know that I have any crazy tips. You know, I pay for the best models, and every few months I run my own personal eval. Mine is something like: if I feed an AI 10 interview transcripts and one paragraph about the kind of article I want to write, can it just spit out a reported article? I never copy-paste these, to be clear; I do not actually use them. But that's the eval that I measure them on, because I want to know at what point they will be able to do that kind of work. And if they do start getting pretty good, I also want to know where my comparative advantage is going to be. The way that I think about this, and I think this is what most economists would advise as well, is: technology is going to get better, but so long as humans have a comparative advantage, you're going to be okay, right? As long as you're a complement to the technology. And so I am actually almost more interested, oftentimes, in what it is the tech can't do yet. The only way to find out what the tech can't do yet is to constantly be playing with AI so that you know, right? If AI is way better than you at something, you should use it for that thing. I use ChatGPT for research, for transcript generation; sometimes I'll ask for feedback, all the time. If AI is better than you at something, I think that oftentimes you should use it and take advantage of that. But the other thing is, when I experiment a lot with AI, I really see the jagged edges. I see the things it's not good enough at yet. It cannot do a podcast like this. It cannot have a conversation. It cannot build trust with an interview source and get them to share their feelings about stuff. It can't go places in the physical world and describe what it is like to be in a place.
So a lot of my writing is kind of scene-based and, quote-unquote, anthropological. And I think that is more interesting to people in a world where AI can just get facts off the Internet. Anyone can read the facts off the Internet. But what I can do is actually stand next to a data center and hear what it sounds like and interview the people around it and ask, what do you think of this thing? And so I would probably spend a lot of time not just experimenting, but also asking: what is my personal comparative advantage as a human against the AI, and how can I really invest in that? Because that's what is going to be robust as AI gets better and better.
Ryan Sean Adams
Jasmine, thank you so much for coming on the show. This was a fantastic episode.
Jasmine Sun
Thank you both. Yeah, love the conversation.
Ryan Sean Adams
You write at jasmine.substack.com. Jasmine, you're also on Twitter. Where else do you want readers or listeners to go to find you?
Jasmine Sun
That's great. Yeah, Twitter and jasmine.substack.com are the best places to find me. Thanks so much.
Ryan Sean Adams
We'll get all those in the show notes. Bankless Nation, you guys know the deal. We didn't really talk about crypto; we talked about AI. But nonetheless, it's risky either way. You can lose what you put in. But we are headed west. This is the frontier. It's not for everyone, but we are glad you're with us on the bankless journey. Thanks a lot.
Guest: Jasmine Sun – Writer on AI, Politics, and Author of the AI Populism Series
Date: May 13, 2026
This episode delves into the rise of "AI populism"—the growing public and political backlash against artificial intelligence as both an economic disruptor and a perceived elite-driven project. Hosts Ryan Sean Adams and David Hoffman engage guest Jasmine Sun (The Atlantic, NYT, Jasmy News) to explore AI's intersection with politics, labor, wealth inequality, political violence, and implications for the 2028 US elections. The discussion explores the risks, narratives, and possible policy responses around AI's rapid impact on society, drawing historical comparisons to previous tech backlashes (notably crypto), and considers grand bargains to responsibly integrate AI into society.
Find Jasmine Sun: Jasmine.substack.com | Twitter (@Jasmine)
Highly recommended for anyone seeking to understand emerging socio-political dynamics around AI—and the policy challenges that lie ahead.