
Derek Thompson
This episode is brought to you by Indeed. When you need to build up your team to handle the growing chaos at work, use Indeed Sponsored Jobs. It gives your job post the boost it needs to be seen and helps reach people with the right skills, certifications and more. Spend less time searching and more time actually interviewing candidates who check all your boxes. Listeners of this show will get a $75 sponsored job credit at indeed.com/podcast. That's indeed.com/podcast. Terms and conditions apply. Need a hiring hero? This is a job for Indeed Sponsored Jobs. Today's podcast is an interview with one of the co-founders of the AI company Anthropic, Jack Clark. One thing I'm trying to do on the subject of artificial intelligence on this show is to offer a balance of perspectives on an issue where I find most coverage tends to be extremely one-sided. Some people are very certain that AI is a bubble, and some people are certain that it is not. Some are certain that AI will destroy millions of jobs, and others are sure that it will not. And I want listeners of this show to feel like every time they hear an intelligent take on one side of these issues, the next episode they'll hear will offer in some way a countervailing take. So two weeks ago you heard the investor and writer Paul Kedrosky argue that artificial intelligence was an economic bubble. But if any single data point pierces that narrative, it's this. Between December 2025 and this month, March 2026, Anthropic more than doubled its annual recurring revenue from $9 billion to more than $20 billion, according to several analysts. There is no record of any company ever growing this fast at this scale. Now, I don't need Jack Clark or anybody at Anthropic to read me a corporate statement about the company's revenue growth. I can very easily do that myself. What I wanted to do today was to ask questions that only someone in Jack's position could answer.
Questions like: if Anthropic's executives believe that AI might be as dangerous as nuclear weapons, what right does any private business have to build this sort of thing for profit? Or how does the company balance its reputation as the industry leader in caution and safety with its other reputation for being one of the fastest developers of this technology? And if artificial intelligence has the capacity to produce, as its CEO Dario Amodei has said, a country of geniuses in a data center, why do Americans overall say they disapprove of AI more than just about every other institution and individual in the world? I'm Derek Thompson. This is Plain English. This episode of Plain English is presented by Audi. We all know that feeling, a change of plans, a new opportunity. Instead of overthinking, what if you just said yes with the all-new Audi Q3? The answer is easy. It's made for the yes life. With the power and room to handle whatever pops up. Yes to adventure, yes to right now. Because saying yes without hesitation, that's real luxury. The all-new Audi Q3, made for the yes life. Learn more at audiusa.com. Jack Clark, welcome to the show.
Jack Clark
Thanks very much for having me.
Derek Thompson
I believe you and I were on paternity leave right around the same time. My daughter was born the first week of December. Does that roughly line up with your schedule?
Jack Clark
Yeah. My second child, my son, was born the first week of November.
Derek Thompson
Okay, so we are meeting each other in a shared space of mutual exhaustion, which is always nice. Hopefully that leads to some kind of symbiosis. I was thinking about holding this question for the end, but it might be the most important question I ask, so I might as well just get it out in front. You're building a technology that you think is going to change the world and change the nature of work more than anything since the computer, maybe electricity, maybe anything else. If you're right, our kids' futures are going to be profoundly reshaped, maybe ruined, by this technology. And I wonder how that sits with you. You know, how you go to work and you work on Claude, and then you go home and you raise your children. When you bridge those two lives, how do you think about the art of raising kids in a world where there's a technology coming on down the pike that will be smarter than us at almost everything, which is at least the goal of your company? How do you sit with that, and how do you think about raising your kids?
Jack Clark
Yeah, I mean, I spend a lot of time thinking about this. But I also think, as you know, when you become a parent, all the cliches are true, and the things that you learn are like, it's not about external validation, it's really about having a good sense of your own self, and various pat phrases like this. But when I look at my kids and I think about myself and my own experience of this technology, being curious about the world, being interested in the world, and getting joy from experiencing the world and learning about it are how I stay calm and stay ready for this technological evolution that's happening all around us. When I look at my children, the main thing I'm doing is spending time encouraging them to develop passions like reading and playing and exploring the world. Because whatever happens with the technology, getting through any period of change requires you to have some sense of yourself that isn't massively contingent on a changing environment outside, and some sense of innate curiosity and a world that you can live in inside your own head. I think that just stems from encouraging curiosity and encouraging them to get to know themselves.
Derek Thompson
You said curiosity several times, and I think I agree that that's a value that artificial intelligence might amplify. What does curiosity mean to you?
Jack Clark
For the first time, we have a technology that lets you really follow your curiosity to almost like the absolute limit of it. I'm reminded of when I was a kid, I'm sure that you were the same. I would go on interesting research expeditions. I would research ant colonies, or I'd research black holes, or I'd research how city planning worked. And I would follow that interest to, you know, extraordinary points. You know, I'd learn aspects of time dilation around black holes, or I'd learn about how to implement ant colony simulations on my computer or whatever. I'd indulge my curiosity. And it was incredibly fun. And now we have a technology that lets anyone take something they're curious about and kind of take that to the absolute limit. And I think that this is just, like, wildly exciting and also good for you. Whatever happens to labor and employment and big changes are surely coming. Being able to exercise your own curiosity and derive satisfaction from that I think is really important. When I was a kid, I didn't have any ambitions that I would be the world's best physicist or the world's best town planner. I just found this stuff fun to think about and enjoyable. And I think that the more we encourage people to get good at that stuff, the more well set up we'll be for what this technology will bring us.
Derek Thompson
We're going to return to some of those themes in a second when we talk about AI in the labor force. But I want to get to the news. I think, as most listeners know at this point, Anthropic was in a spat with the Pentagon over contract details that ended with the company being designated a supply chain risk. I know that you are extremely limited in what you can say about the details of the case, because your company is in active litigation against the Department of Defense, or War, whatever. I hope this question therefore arrives at the right level of altitude for you to be able to answer it. Anthropic has compared artificial intelligence to nuclear weapons on several occasions. This is not a rare analogy. And just most recently in January, Dario Amodei, the CEO of Anthropic, said the Trump administration's decision to allow advanced Nvidia chips to be exported to China was, quote, a bit like, I don't know, like, selling nuclear weapons to North Korea and bragging, oh, yeah, Boeing made the casings, end quote. The US does not allow private companies to build nuclear weapons. That is the law. If artificial intelligence is just like nuclear weapons, why should we allow private firms to build it for profit?
Jack Clark
AI is fundamentally like everything. It's like a factory that produces cars, micro scooters, animals, and nuclear weapons all at the same time. And the main question that we're going to have to deal with as a society is how do you govern those factories that produce these things? And how do you decide what the appropriate uses are of the things that come out and where they should be used? So I can't talk obviously about the specifics of our ongoing discussion with the Department of War. I can say that Anthropic was extremely committed to working on national security early because we recognize that AI is going to touch every single part of life, and every single part of life is going to have its own range of, like, incredibly thorny, difficult issues. So ultimately, we're going to need there to be a much larger societal conversation about how we just govern this technology in general. And we will need to reckon with the fact that the technology comes from the private sector and then flows into all of these other sectors. And that's going to be really challenging. It's a thing that we haven't encountered before, because previously you didn't have a technology that could take on this ability to become anything. You had specific technologies built by specific industries for specific purposes, and that was in many ways simpler.
Derek Thompson
Just to hit home on the nuclear analogy one more time, though, because I really want to hear a robust defense of why this is the private sector's job. I mean, the nuclear analogy is invoked in so many different ways. It's invoked for export controls. It's invoked for arguments for government intervention. It's invoked for arguments about the stakes here being existential, or even arguments about the need for international cooperation around the kind of artificial intelligence that's built at the frontier. But one conclusion that this analogy very clearly supports is that the private sector should not control this technology. And so I wonder why the analogy applies almost everywhere except here, where it is the private sector developing frontier AI for profit while the government is on the outside attempting to regulate it or negotiate contracts with it.
Jack Clark
I'll push on this in a way that I hope is helpful. We worked for many years with the National Nuclear Security Administration to actually test out this property of how well AI could understand aspects of nuclear weapons or nuclear technology. And we used that to develop evals and to develop ways of ensuring that we don't proliferate things into the world that have an understanding of nuclear technology. And that's almost a very positive example of how you would have the private sector work with government, where some things absolutely should only be the domain of government, like nuclear weapons. Bipartisan area of agreement; everyone's comfortable with this. The job of a company that is producing a technology that can take on many different aspects is to work out the areas where it's inappropriate for a company to be deploying that technology, like nuclear weapons, and then you can work with government to take that capability surface off. So I think that holds for some of the path that we're going to have to pursue here. And it's one that most of the industry is going down with a few areas, including biological weapons and other aspects of CBRN.
Derek Thompson
So to go back to your first answer, I just want to make sure that I understand your perspective here. You're saying the right way to think about this is that AI is this multifarious, factory kind of technology, where you are creating superpowered Excel charts, which is a technology that has no precedent for government regulation. But you're also creating technology that can be used by the Pentagon, or can be used by individuals to essentially militaristic or dangerous ends. And that is a facet of your invention that does require a different kind of government regulation. And so you're saying the analogy with nuclear weapons is true insofar as it is contained to the parts of your technology that are like nuclear weapons. But you're also doing a lot of other things that have no analogy to nuclear weapons, like, say, making white-collar workers a little bit more productive in their desk jobs.
Jack Clark
Yeah, I'd parse this out into almost two problems. One is that you have this factory that can produce anything. Then you make sure that what comes out of the factory correlates to what we've decided society can have available in the free market: not nuclear weapons, yes, it's fine to produce things that accelerate knowledge workers. And then you have the second question of, given the kind of multifaceted nature of what can be produced, how do you then work with government or academia or other parties on the things which you can't necessarily push out to the world in general, but have value in the rest of the world. Another example here is biology, where that's less the domain of government, but there are certain parts of biology which have danger if you brought them solely to the general populace, but which can massively accelerate the development of biological science in industry. And so you need to work out what is the path to acceptably getting that in. And so some of the conversation that society is going to have now is: what are the appropriate ways we as a society want this technology to be used? And how do people decide what to do with the things in this factory, and how to evaluate them, and how to proliferate them so society gets the benefit.
Derek Thompson
Back to jobs. Anthropic CEO Dario Amodei has predicted on several occasions that AI will destroy half of all entry-level white-collar positions and spike unemployment to as high as 20%, which would be the highest unemployment rate since the Great Depression. This is a near-term prediction. He has said this could happen in as little as five years. Do you agree with that forecast?
Jack Clark
We're talking about one of the potential things that can happen. And I think it's worth remembering that this is a choice. I don't agree with this, because I think it's a choice that we can make. And also, my personal view, based on the data that I look at, is that big changes in employment take a long time to filter through to the economy. And even with the magnitude of what we're talking about, you might expect it to take longer. But let's say that there is the potential for massive employment changes. I think that this is accompanied innately by the fact that AI must also be growing the economy a lot and causing a lot of economic activity. If that is the case, then you would expect that we can have more degrees of freedom about policy and what we do with this economy. The idea which I return to a lot is: if you end up in a situation where employment is being negatively affected by AI in one part of the economy, and that correlates to loads of money being generated by the economic activity of the AI systems, you could choose to create many jobs in other parts of the economy, like jobs in areas like teaching or nursing, where people have a preference for there to be more people working in them. And you could both increase the number of jobs and also do things like cross-sector wage subsidies to improve the wages of those jobs where today we severely undercompensate people.
Derek Thompson
I wonder what the purpose of talking like this is as a company. I mean, it is unusual in corporate history for a company to announce that if its product is successful, tens of millions of people will lose their jobs and there's a non-zero chance that we end the human race entirely. There is, in fact, I think, no precedent for a private-sector company talking about its product like this. The analogy that I've reached for before: it would have been within the realm of reality for Henry Ford to say, if this Model T thing takes off, hundreds of thousands of Americans are going to die in car accidents every single decade. That is true, but Ford and GM did not talk like that in the 1910s and 1920s. What is the strategy of communicating your technology, your product, to the American people as a means by which we might have 20% unemployment and a non-zero chance of human catastrophe?
Jack Clark
These are not the outcomes we want, or anyone in the industry wants. But I think the industry has also learned from looking at the overly rosy predictions made by many in the technology industry before, about how the only effects they'd have on the world would be unalloyed positivity. And I think the world lost huge amounts of trust in the technology industry because of that, because then they saw that it wasn't only positive things. Social media has caused a range of amazing positives in the world and a range of harms which we're now dealing with. The ethos here, and why I'm working on this new initiative for the company called the Anthropic Institute, is we want to share a lot more data about what we see in front of us, so that society is better prepared for any of the different changes which could come along. We also don't spend enough time talking about all of the really positive changes, which I think are a choice that we can make as a civilization and as companies to pursue as well. But it would be negligent of us, I think, to not call out that there are ways that we as a species could get this technology wrong. And I don't think that we're alone in this. If you look at scientists and people that have worked on transformational technologies before, in biology or in the early days of nanotechnology, they've all talked about this combination of upsides and risks. It's just that AI as a sector has matured and made a lot more impact on the markets than either of those classes of technology over the same time period. So everything's accentuated.
Derek Thompson
I hear the argument that you are reacting to the social media experience, where social media companies promised in a Pollyannaish way to merely connect the world and be a kind of global newspaper. And they did not merely connect the world; they did a lot of negative things as well. I hear that argument, but I also look at reality, and I look at polling. Last week, NBC News published a national survey on attitudes toward a range of politicians and institutions. AI's net favorability was minus 20. That's below every politician that was surveyed in the poll. And it's below ICE, Immigration and Customs Enforcement. Why do you think people seem to disapprove of, and even in some cases seem to hate, artificial intelligence despite your efforts to learn from the social media experience?
Jack Clark
So we did this project recently called the Claude Interviewer, where we talked to something on the order of 80,000 people around the world about their experiences of using our technology, their hopes for the technology, their worries about it. And you saw a couple of interesting things which speak to this. One is that there was a very detectable change in sentiment between what you might think of as people in the developed world and people in kind of the emerging economies. If you look at the emerging economies or developing world, people had a much more positive view of the technology. This was also true of some economies in East Asia as well, where they viewed the technology as part of this larger story of positive economic transformation that could happen to them and could help them better their lives. And then if you look in the developed world, you had much more of a kind of neutral sentiment or negative sentiment, which correlates to your polling. Well, I'd say if you look across these two worlds, you have one important factor, which is that in one, the economies have been growing at surprisingly large rates for a long time. And in the other, the economy has been relatively stagnant. In the stagnant world, which is the developed world, people are appropriately anxious about change. People have been through a lot of change already. And seeing AI as another tool of technological change can cause people to feel kind of significant anxiety. And if you look in the developing world, they see change and they're like, great. My story has been one of change. And change has mostly correlated to things in my world and material circumstance getting better. So I think that's an important thing to bear in mind. The second part is, I think if you look at the polling, you don't see all of these amazing ways that people around the world are using the technology to just allow them to kind of do more or become more themselves.
In the Claude Interviewer, we saw examples of people like someone who was mute using Claude to build a text-to-speech application so they could speak to their friends, or a security guard using Claude to educate themselves and who now works in educational technology. There are a range of these examples as well, and I'm not, you know, solely cherry-picking them. I think what was striking from this was how many examples people had of the ways in which they've used the technology to just meaningfully change aspects of their life or how they relate to people. Finding a way to get more of that and show people the good the technology can do is something that we in the industry need to do a lot more of. And I think fundamentally the AI industry just needs to help the economy grow a lot to also change sentiment. I think that's the big thing underlying all of this.
Derek Thompson
I want to go back to that graph that you mentioned from the Anthropic Institute study. I have it right here in front of me. We might be able to throw it up on the screen as well for folks who are viewing on Spotify or YouTube. It indeed shows that the countries and regions that use AI the most and are most developed tend to be most concerned about jobs and the economy. They report the highest negative sentiment toward AI. And it really does seem in one of these charts like there's almost a linear relationship between the regions with the most AI exposure and the most negative sentiment about AI. Looking at that, I wrote down two explanations, and you added a third. So I'll start by reiterating yours. Your explanation, which I don't take as entirely dispositive, is that the developed world, the richer world, has a feeling of zero-sum sentiment because of slowing GDP growth.
Jack Clark
That's right.
Derek Thompson
And there is a sense, therefore, that a technology that increases productivity will not increase productivity for all, but will rather increase productivity at the expense of existing workers. Which is, to be clear, a prediction or a forecast that your CEO has made explicit. So that's explanation number one: this difference between zero-sum and positive-sum attitudes that might have something to do with GDP growth rates. Explanation number two, that I saw from some more really full-throated AI boosters, is that concern about AI is a luxury good. We've all heard this term, luxury goods: essentially, people can only afford to be negative about artificial intelligence if they can literally afford it, because they're rich. It is a luxury good. Explanation number three is that exposure to AI reduces positive sentiment toward AI overall. And I want to contextualize that latter explanation by bringing in the last things you said. One of the reasons I find artificial intelligence basically harder to talk about than any other subject I cover on this show is that it's not one thing. For one person, AI is slop on TikTok. For someone in Hollywood, it is threatening their FX job. For someone in research, it's dramatically accelerating the pace at which they do deep research projects or put together Excel or PowerPoint. For someone in science, it is sometimes a frustrating source of misleading information and sometimes an extraordinary source of the citations they need in order to finish their papers or their grants to the NIH. It's just so many things. The first thing you said is it's a factory that makes everything from, whatever, scooters to biological weapons. But I'd like to hear you grapple with this final explanation, which is: what if there's something about exposure to this technology that seems to linearly reduce positive sentiment about it?
Jack Clark
My best explanation for this is it's about anxiety about the world in general. And I think these things are just increasingly coupled, in that AI is an everything technology which doesn't just touch all of the different types of work that you or I might do in our life, but also touches aspects of things that we don't do in work, things that we do at a sort of personal level. And thoughts about AI, I think, increasingly trend towards being a proxy for a person's thoughts about the world. AI contains within itself the world. And you see this in the polling that we've done. If you look at the Claude Interviewer, if you look at the Economic Index, as usage of AI grows, it just increasingly correlates with generally known facts that you see in other forms of data about people's perception of the world, or economic and daily life. So the main lesson I have here is the world is feeling very anxious at the moment, and we need to figure out a better story for the world. And AI is going to be acutely exposed to this, because it is a technology that distills all aspects of labor and life into itself and therefore magnifies your anxiety about any of those. We need to show all of the different ways the technology can be used, and we also need to figure out ways to help people discover that magic from curiosity, that magic from kind of self-betterment, and that magic from using it to change your life. Some of that will probably come from having the technology show up in different ways to people, changing things about the product surfaces, changing things about the user interfaces, and also changing what we actually use the technology for as a society.
Derek Thompson
I want to get to agents and your predictions about the labor force in just a second. But the last question in this zone that occurred to me is that I just wonder if you and other people at Anthropic and in the industry live with this kind of tortured ambivalence, this divided soul, where, on the one hand, the very identity of Anthropic is as a company founded to ensure the safety of a technology that could be designed in a dangerous way. The only reason to found Anthropic is against the tension of the possibility that this technology could do extraordinary harm to the world if it's built the wrong way. And at the same time, you're talking about wanting people to feel the magic of the technology as it exists. But it's not one or the other, it's both at the same time. The technology does contain within it magic. And I think people who have spent tens, hundreds of hours with Claude Code or Claude generally, or even ChatGPT, may have experienced that magic. And yet, at the same time, this is a technology that is clearly, within the EA ecosystem, within the effective altruist community, within the rationalist community of San Francisco, within the AI community writ large, dripping with anxiety about what this thing could be. So how do you live with that tension, that balance between thinking that you're building something that contains the possibility of magic, while also recognizing sometimes that the only honest way to speak about this is to be clear about just how dangerous this thing might be?
Jack Clark
We share what we feel about it, and we also set up things like the Institute to share more information so that the rest of the world can work on the problem as well. I don't think this fear is unique to AI. Almost 100 years ago or so, there was a memo written by the British government worrying about the rise of civilian air travel. And this memo, which we'll be able to send you for the show notes, had this dark vision of a world where wars were entirely fought by aircraft, and terrorists in aircraft were bombing cities and killing people, and the whole of life on continental Europe would be disrupted by this, and we would live in kind of, like, unimaginable horror. And planes should only be kept for, like, government purposes, and you shouldn't have them generally distributed because of the harm they could do. Now, obviously, it got part of it right, in that all of those things I just mentioned are done by aircraft today. But we also have an entirely changed world due to civilian aircraft and transportation, which has unlocked a vast, vast range of things. So even the person writing this memo, a civil servant in the British government, was doing the same thing that we're doing here: staring at the technology, seeing that it encodes within itself some great fantastical power, and then worrying a lot about not wanting that to come to pass in a negative way. And in doing that, you can sometimes blind yourself to all of the tremendous upsides that also come along with it. And how did we solve planes? Well, you created a very complex, overlapping set of regulations, from how you build planes, to how you regulate transport between different countries, to how you build standards for how planes work. The whole practice of making civilian aircraft safe and reliable and integrated into the world is fiendishly complicated. And also, planes sit at the end of supply chains which are almost as complicated as semiconductors and AI, and yet the world managed to do it.
So I think that what we have in front of us is that we can get to this world where AI will be integrated into the world and will have vastly expanded the horizons of what people can do. And we have to avoid some of these misuses of the technology which stare us in the face, just as potential misuses of aircraft were obvious to people very shortly after aircraft had started taking flight. So a lot of the feeling I sit with is: we've really got to avoid these foreseeable downsides and come up with technical solutions to avoid them. And we have to get enough of society working on that, so that we build this very complex, interlocking series of safety mechanisms that will allow this to be safe. But we've done it in so many other parts of the world as well.
Derek Thompson
I want to close the door on these sort of big-picture questions about artificial intelligence and the balance between promise and peril here, and talk a little bit about the last few months at Anthropic, which have been historic months. I feel like we're in a new chapter of artificial intelligence right now, and the title of that chapter is the Age of Agents. You have built, Anthropic has built, an agent, Claude Code. OpenAI has built its own agent technology; Codex is the name of its coding agent. Tell me, before we talk a little bit about the effect of this technology on the labor force: what is an agent?
Jack Clark
Yeah, a few months ago, before I went on paternity leave, I kept on going up to a colleague on one of our research teams and saying, what is an agent? Like I was a Zen master. And he didn't know for some months. And I would keep on walking up to him and say, Miles, Miles McCain, what is an agent? And eventually Miles's answer was: an agent is a language model that uses tools over time. And so I'll just unwrap that for people listening to this. An agent is an AI system like you or I might use in the browser today, but you can ask it to go and do a task for you, like read a bunch of research papers about the history of aircraft regulation, for instance. It will go away and read those papers. And to do that, it will use tools. It will use web search to access paper repositories like arXiv or what have you, pull down those papers, read them. Then it will use other tools to summarize those papers and write scratch pads for itself and use graph-making tools and come back to you with a research report. So an agent is, for all intents and purposes, like a person that you can email a question to, who will then go and work for you for a while and come back.
Derek Thompson
There's a lot of talk right now about the possibility of agents like Claude Code and Codex replacing white-collar jobs. And I've spoken to folks in legal and in consulting firms, and their position is: these tools are good enough to make us more productive, but they're not good enough to significantly reduce headcount yet. They're much more like a better computer than a better worker. And that's a really interesting piece of testimony to me, because I feel like one of the more important macroeconomic questions of artificial intelligence is: is this going to replace workers because AI is a better worker, or will it merely increase productivity because AI is a better computer? What is your take at the moment, and is your take bound to this moment because you think there's something coming down the pike that would change your answer?
Jack Clark
Yeah, I'll talk about what I see right now and how I think this will actually unroll over time. What I see right now is that it massively multiplies the productivity of any individual. But you can't, like, fully delegate to it, nor would you want to. It doesn't replace people, but it changes the sort of work that people do. So researchers that I work with now have to reckon with a world where a research project that previously took two to three weeks can now be done in one to two days, primarily through the use of agents: Claude Code and other things that we have here at Anthropic and that many businesses have built. And what that means is they're needing to change their style of work and say, oh, now more of my job is generating research questions than doing research schlep. And more of the work that we do as research teams now is about coming up with those questions, because we've actually had to spend a lot more time on it, because we're burning through the questions a lot faster than we did before.
Derek Thompson
Jack, before you go on, without revealing industry secrets, can you be as specific as you possibly can be here? Because the most consistent criticism I get for my coverage of artificial intelligence is that I'm always trying to see beyond the horizon and not describing the here and now. So, in the here and now, as specifically as you can say: how has the use of agents accelerated your ability to ask and answer specific types of questions, so that it allows you to move on to the next question faster?
Jack Clark
I'll give you two very concrete examples. Here's one example. The AI industry produces thousands of technical evaluations every year, which are published in research papers that I write about in my newsletter, that you write about, that the AI labs read. Whenever these evaluations come along, there's always work at the AI labs, which is: let's see how we do. Which involves reading the paper, downloading the data set and benchmark from GitHub, getting it to work on your infrastructure, then testing your AI systems against it. That previously would take anywhere from days to weeks, depending on how complicated the evaluation system was. These days we can increasingly just point Claude Code at the evaluation and say, get this to work on our infrastructure, and it will just do it. So a task that had, like, an extreme ugh factor, that you had to do but no one enjoyed doing, is now a task that we can just point the systems at. So that's example one. Example two is the tools we've built here to do our research, like the Claude Interviewer study which I referenced, which depends on a tool called Claude Interviewer that we built. You access Claude Interviewer internally through a variety of software tools that we've built. Well, now we can just ask Claude to spin up a new Claude Interviewer and it will use all of those tools, which again was annoying configuration stuff that you had to do but no one enjoyed doing. And now the AI can do it.
Derek Thompson
Right. And this was a survey of, I think, 80,000 people around the world in something like 160 countries. So, right, this is work that Gallup or Pew, in a pre-AI world, you're saying, might have taken weeks or months to complete because of the complications of administering a somewhat dynamic survey to 80,000 people. I mean, they don't even interview 80,000 people. But with this technology, it's exciting.
Jack Clark
Exactly. And we've used that same technology to survey our own workers about how they use Claude Code. We've used it to survey scientists on our platform about how they're using it. You have a group of people and you want to ask them some questions. Now we've made it very easy to do that, because Claude can set up the interview process.
Derek Thompson
Now, I interrupted you because I think you were in the process of describing how your answer to my previous question had part one, here's how agents are working today, and part two, given the extrapolation we've seen over the last few months or years, here's where you think agents might go in the next few months or years. So why don't you finish telling me that?
Jack Clark
So part two is, I think the lesson here is from electrification in factories. When we first got electricity, you had an existing stock of factories, and you could maybe put some light bulbs in the factory, and now they could work longer because, hey, electricity let you put a light bulb in. It took many years for people to build entirely new factories that were designed on the assumption that electricity existed. So what we're now seeing is the formation of new firms, right? AI startups, but there will be many others, that have put AI at their center. They've built themselves on the assumption that AI is a primitive, like electricity, that they can access. And that's going to change the shape of how those businesses work. And I think what you'll see is businesses generating surprisingly large amounts of economic activity while employing relatively few people, just like how factories built around electrification were surprisingly more productive relative to ones that hadn't been built around electrification as a base input.
Derek Thompson
Right. Karpathy has called this the R2H ratio: the robot-to-human ratio at companies is going to grow significantly, and you're going to see companies with one, maybe just one, two, three employees suddenly do revenue in the millions, tens of millions, or even higher. Where specifically are you seeing those companies form? Because it can't possibly be universal. There is no R2H ratio for dry cleaners right now, but maybe there is in coding or software development or consulting. Do you have a sense, given Anthropic's God's-eye view into the ways that your technology is being used, of where those companies are growing and popping up right now?
Jack Clark
Yes, I mean, from our own economic index data, we see this most profoundly in software engineering and also in what you might think of as knowledge work: knowledge work being consulting, knowledge work being analysis of things, knowledge work being the paralegal aspects of legal work. Aspects like this, where you have something that has this property of a rote task that required some expertise to do, and loads of finicky aspects which took up time. Well, now you can take a person that has the intuition of how to do that task, and they can just instruct a large set of AI systems to work for them to do the finicky things that took time but were basically rote processes. Read this legal filing, make this slide deck, produce this code that has this property. And it all still requires people to come up with the intuition of where to go and the ideas of what's going to be most strategically valuable, but a lot of the schlep factor now gets done by these AI systems. We also produced research recently from our team of economists that looks at occupations by AI exposure, and I think here you see a very significant difference between people that do work involving computers and people that do work that mostly involves the physical world. The physical world is going to require a whole other set of technologies, to do with robotics and other things, to mature before I think you'd expect to see AI move through it as quickly as it's moving through other parts of the economy.
Derek Thompson
So I recently had the investor and writer Paul Kedrosky on the show to talk about his conviction that artificial intelligence is a bubble. Paul had a theory that, when I posted about it online, got a lot of attention, some of it positive, a lot of it negative. And I want to put that theory to you and have you weigh in on it. He said he believes that software engineering is just materially different from the rest of the economy when it comes to its susceptibility to being automated by, or even made more useful with, this generation of artificial intelligence, especially when it comes to token use, tokens being the basic unit of AI use, for folks who aren't as familiar. He said, look, software engineering just uses way more tokens than the typical consultant or doctor or PR executive. And so while it seems like AI and Anthropic and OpenAI are having this vertical moment in revenue growth, that part of the S-curve, that vertical part of the S, is actually very short, because we're going to burn through software engineering and then get to the rest of the knowledge economy and realize that their token usage is much slighter, which means that the revenue for companies like Anthropic and OpenAI is going to be much more meager.
So his prediction, essentially, is that we're in this vertical golden age of AI revenue growth for Anthropic and other folks, but we're going to be out of it very soon because software engineering is just plain different. You, again, have this 30,000-foot view into the way that people are using your tokens and using your technology. Is he right? Is software engineering materially different from the rest of the knowledge economy?
Jack Clark
It's very different, but it won't be that different for long, and I'll explain why. Software engineering is the factory that already built itself around electricity, in that software engineering involves coders that need to access code. Coding as a discipline has already gone through the challenge of: how do you make the maximal amount of code available to my software engineers in an organization? Famously, from enterprises to AI companies, coders get to access huge amounts of data, huge amounts of tokens, because they have to read the whole code base, they have to do work on it. That's because their profession already understood that you have to give them fundamental and privileged access to a huge amount of data to get their job done. When we talk to customers, every customer I've talked to recently is going through this exact challenge of: how do I make my organization traversable by your AI systems? Because my coding organization is. But if I'm a bank, all of my different sources of data currently are not trivially accessible by single systems. But I want them to be, and I can see a path to how making them accessible to single systems will help my own employees deal with many interesting problems. This is true of public relations people as well, where public relations people would often like to read 100 or 200 stories about the company that they work for or they cover or are contracted by. But doing that has previously been a very intensive, human-labor-centric thing that involves reading a load of coverage. Well, this could be different as well. So my general sense is, it's true in that software engineering looks different to other industries right now, but every customer I talk to, from a range of industries, is just trying to think about how do I make the words in my organization be as accessible to AI systems as the code currently is. So we're going to go through that change, and I think quicker than people expect.
Derek Thompson
When Claude Code came out, I saw a lot of people that I trust say, this is AGI. This is artificial general intelligence, the line that we were promised would be crossed. Are they right? Are these agents what we, five years ago, would have called AGI?
Jack Clark
They're very close, but they're not quite there, because they lack a certain type of creativity and intuition which you can find in no AI system or agent yet. And I will just use the way I think about this. Dario Amodei, CEO of Anthropic, defined, in Machines of Loving Grace, this vision of powerful AI, which he thinks we can get to by end of 2026, early 2027. And powerful AI, as he describes it, has a few different properties that I'll go through. One, it can access all of the interfaces that you or I can access on a computer today. Well, great, that's true of AI systems today. Number two, it can do tasks that take hours, days, or weeks for a person to complete. Well, if you look at lots of tests these days, modern AI systems like Opus 4.6 can do tasks that might take a person about 10 hours to complete. And if you look at our own studies of agents that are deployed on our platform, we see tasks that can take some number of hours. So we're on this trajectory: toward end of this year, early '27, sure, tasks that might take a human days to complete seem within scope. The load-bearing part of Dario's definition, though, is he says: smarter than a Nobel Prize winner across many dimensions. And I've spent a long time staring at that. We have AI systems now, here and at the other companies, that can assist scientists at the frontier of mathematics and biology and physics. You can read papers where models are being used by these people to make advances. But the models have themselves not come up with the intuitive and heterodox creative ideas that we award humans Nobel Prizes for. Like, AI systems haven't invented CRISPR. They haven't come up with a theory of relativity. I know this sounds like kind of a tall order, but I think it speaks to some essential property of creativity which these systems lack. There's, like, an improvisational element which they don't have.
And for this reason, when we think about how this is going to affect the economy, I think the reason we're in this counterintuitive place where every person, every human, becomes a manager is because humans have intuition and creativity and these systems don't. And I think the $100 trillion question is whether at some point AI systems can display that same level of creativity and intuition that humans might. And it's very, very hard to know what is missing.
Derek Thompson
What do you think, at the technological level, is keeping artificial intelligence from being able to produce results that you would consider original and creative?
Jack Clark
Yeah, I don't think we know how to get these machines to stop working and idle. And it sounds like a bizarre thing to say, but where do great insights come from? Most people have great insights when they've worked really hard on something and then gone and done something else for quite a while, and they'll have these insights when they've gone swimming or they've gone for a walk. Often when you hear these anecdotes of great breakthroughs, they're not taking place in the office or the lab; they're taking place outside of it. AI systems, we invoke them, and they do work for us, but they actually have no real time with themselves. And I don't know that we know how to, like, give them that time, or how you would even structure it. So there's some essential property here of being present in the world and not working, but thinking and interacting with the world. That is something people do that AI systems don't. And my guess is some aspect of creativity comes from this very subtle thing that is very special about us and other kinds of living creatures: we can idle and fritter away time and use our imagination and curiosity. And we don't quite know how to give AI systems this.
Derek Thompson
You reminded me that Thomas Edison, on the subject of creative breakthroughs, is most famous for the observation that it's 99 percent perspiration, 1 percent inspiration. Right? He has a better quote that makes for a worse college poster, but it's a better quote. And it's this, quoting here from 1912: "I never had an idea in my life. I've got no imagination. I never dream. My so-called inventions already existed in the environment. I took them out. I've created nothing. Nobody does. There's no such thing as an idea being brain born. Everything comes from the outside. The industrious one coaxes it from the environment; the drone lets it lie there while he goes off to the baseball game." End quote. What I like about this idea is not only that it's not pablum; this is how he invented the incandescent light bulb. He did not think really, really hard about what kind of bamboo would properly burn inside of the glass. He tried hundreds, thousands of different materials, and then the bamboo happened to burn for the right amount of time. That's not inspiration; that is just trying stuff over and over and over again. And I wonder whether, if we take this sort of Edison principle very literally, it suggests that there needs to be some corporeal element to AI, namely robotics, maybe, for it essentially to be embodied in something that's interacting with the world in order to have original ideas about it. Is that too dreamy, fanciful? There may be something there.
Jack Clark
I have a quote and a story to throw back at you. Seymour Cray, one of the fathers of supercomputing, a notoriously brilliant guy who built supercomputers, was famous for, when he was stuck on a problem, going onto his property and digging tunnels. And he would have ideas, and people would say, how did you have the idea? And he said, the elves told me while I was tunneling. Which is the sort of thing that an eccentric guy who builds computers says, but it speaks to this: being idle, and maybe being embodied, and finding some way to have activity while being in some sense mentally more passive about other things, seems integral to how people come up with stuff. There are also, throughout history, so many people who talk about just walking around, going and walking around the city that they live in or the countryside, to have ideas. Isaac Newton famously spent decades basically living in a fancy barn, walking around and occasionally having ideas, some of which turned out to be very consequential. So it could be that this embodiment is part of it. Just to pull it back to agents: we are beginning to have these examples of agents working in agent ecologies with each other. There was a technology called OpenClaw recently that also led to an online social network for agents where they could talk to one another. It's hard to know how much of that is signal versus noise, but it seemed to have this property of organic creativity and frittering away time and being in conversation with one another, not necessarily about work, but the kind of thing you might expect creativity to come from. So I think there are lessons to be found there. Whether embodied digitally or in the physical world, there may be something here about getting AI systems to explore in a different way that is going to be important.
Derek Thompson
It is conceptually a lovely idea to think that the problem with AI in terms of coming up with truly original concepts isn't that it's not productive enough, it's that it's too productive. And that a critical ingredient for creativity is the opposite of what we understand as industriousness or productivity. It's actually the capacity for leisure, for idleness, for us to sort of make our minds a blank slate upon which ideas that are currently sort of far apart come together and combine to create new concepts. It is a lovely concept. I want to close on safety, which in many ways is the calling card of Anthropic. And I want to ask this version of the question in a very sort of abundance themed way. My posture as an abundance guy is to seek supply side answers to complex problems. And I've thought for a while about what would it mean, what would abundance mean for AI safety? And maybe one way to pose it to you, because I don't have an answer here, this is a purely innocent question, is does Anthropic think about increasing the supply, so to speak, of safety? What would that mean?
Jack Clark
Yes, I mean, over Anthropic's history, we have contributed things to help make the ecosystem safer. I'll give you two examples. Early in Anthropic's history, we contributed to Hugging Face a data set for red-teaming AI systems, which was basically a data set we created to make it easier to deploy AI systems so that they weren't going to talk about egregious acts of violence or sexualize children or give you recommendations on how to do illegal things. And by contributing that to the ecosystem, for many years the majority of the open-weight models that existed had been trained partly on that data set. So it sped the creation of a bunch more AI, because we created an asset that allowed people to take risk out of the creation process. More recently, we have done this work with Mozilla where we used one of our AI systems to find around 20 significant security flaws in the Firefox web browser and fix them ahead of the general deployment of that system. And the idea is we can use our AI systems to generally increase the robustness of the world that AI systems will be interacting with and increase the safety of the infrastructure. So both of these ideas are important: we want to release things that make it easier for AI systems to themselves be made safe, and we want to release things that help increase the robustness of the world to the changes that we expect to be caused by AI systems. And I think by doing both, you buy the ability to, in a safe way, accelerate getting this technology to do more good in the world.
Derek Thompson
How do you globalize that idea? Because on a subject like, say, climate change, there's an enormous difference between an individual household deciding that they're going to eat less meat and therefore have a smaller carbon footprint, and then, on the other side of the spectrum, the Montreal Protocol, where more than 100 countries get together and say, CFCs are dangerous and we're going to collectively regulate them out of use in order to let the ozone layer grow back. It does almost nothing for one AI company to take safety seriously if this is a technology that is not only, feral is the wrong word, that's so competitive, that's in the process of being built as if in an arms race, even if it isn't exactly like nuclear weapons. The only way to really make a difference to safety is to globalize, to socialize, your concepts of safety. How do you do that?
Jack Clark
I mean, how do you climb a mountain? You start walking, right? So you start with the AI company, you start doing projects, then you get other AI companies to kind of copy you, in a form that we call a race to the top. Let's create positive-sum competition. So we evaluated our AI systems for biological weapons risk. Other AI companies then did the same. Then governments stood up non-regulatory entities like the AI Security Institutes in the US, UK, and other countries that now do third-party evaluation of these systems for biological risk. And now you have the beginnings of a policy norm: all the frontier companies test for this stuff, and governments have stood up bodies that help validate those tests. Those are the ingredients you need to eventually pass a law, pass a policy. That's something that society gets to decide, but we're able to take actions that broaden the set of options available to society when it comes to deciding how to regulate this technology and make it safe. Our goal, and the goal of some of what I'm doing with the Anthropic Institute, is to produce a lot of the data and basically the raw material that you might need to have other companies and other places run experiments along the same lines we're doing. And the more of that gets run, the more you have confidence as a government that you could adopt some of it, because standards get partly developed just through competition in the ecosystem. And if it becomes a robustly good idea, everyone will end up doing it de facto, and then you can decide whether it should be mandated or not. So I'm very confident that this works for a large chunk of the problems ahead. It doesn't work for all of them, but it gets us surprisingly far.
Derek Thompson
Well, I think you're building something that's absolutely fascinating. It is, on the one hand, I think, dangerous and strange and scary to a lot of people. I think sometimes the way that your company represents it to the world is as something that's dangerous and strange and scary. And yet at the same time, I recognize that you are helping to lead this and socialize this attitude toward AI safety that I do think is commendable. So, Jack, it was really nice to talk to you and my best to your family and to your nightly sleep.
Jack Clark
Oh, my best to yours as well. Thank you. Thank you very much.
Episode: Anthropic Thinks AI Might Destroy the Economy. It's Building It Anyway.
Date: March 27, 2026
Host: Derek Thompson
Guest: Jack Clark (Co-founder, Anthropic)
This episode of Plain English features a deep dive with Jack Clark, co-founder of Anthropic, one of the world’s leading AI research labs. Derek Thompson probes the contradictions and social implications of rapid AI advancement: If Anthropic and its peers believe AI could be as dangerous as nuclear weapons, why build it for profit? How does Anthropic balance its reputation for caution with aggressive innovation? What does AI really mean for the future of jobs, creativity, economic policy, and societal well-being? These are discussed openly and critically, along with nuanced takes on AI’s promise, peril, and the deep ambivalence of its creators.
[18:45] Jack Clark: “It would be negligent of us, I think, to not call out that there are ways that we as a species could get this technology wrong.”
[19:49] Derek notes AI’s net favorability is -20, below even ICE.
[20:50] Jack Clark: Anthropic’s “Claude Interviewer” project finds people in developing economies are more positive, linking sentiment to economic circumstances and views of change.
[22:40] “If you look in the developing world, they see change and they’re like, great… in the developed world… people are appropriately anxious about change.”
[23:32] Derek proposes three explanations for negative sentiment.
[26:32] Jack Clark: “My best explanation for this is it’s about anxiety about the world in general… AI is… a technology that distills all aspects of labor and life into itself and therefore magnifies your anxiety about any of those.”
[31:31] Jack Clark: “We’ve really got to avoid these foreseeable downsides and come up with technical solutions… But we’ve done it in so many other parts of the world as well.”
[33:44] What is an agent? “A language model that uses tools over time”—not just answering, but using resources to accomplish tasks (e.g., reading, summarizing, searching on your behalf).
[35:51] Jack Clark: “It just massively multiplies the productivity of any individual. But you can’t like fully delegate to it, nor would you want to. It doesn’t replace people, but it changes the sort of work that people do.”
[37:23] Concrete examples: pointing Claude Code at newly published evaluation benchmarks to get them running on internal infrastructure, and asking Claude to spin up new Claude Interviewer surveys.
[39:52] The impact of agents may follow the pattern of electrification: It wasn’t until businesses were founded with electricity at their core that productivity gains truly materialized.
[41:40] Jack Clark: “We see this most profoundly in software engineering… knowledge work… paralegal aspects of legal work… the schlep factor now gets done by these AI systems.”
[50:19] Some claim that new agents are already “AGI”—Artificial General Intelligence.
[50:47] Jack Clark: “They’re very close, but they’re not quite there because they lack a certain type of creativity and intuition… [AI] hasn’t invented CRISPR or the theory of relativity… There’s like an improvisational element they don’t have… The $100 trillion question is if at some point AI systems can display that same level of creativity and intuition.”
[53:25] What’s missing?
[56:24] Embodiment and creativity: Stories about Seymour Cray and Newton suggest the importance of embodiment and leisure for creative insight. Some agent communities (like OpenClaw) mimic “frittering away time” and cross-agent interaction, hinting at new AI creativity research paths.
[58:19] Derek asks, “What would abundance mean for AI safety?”
[59:34] Jack Clark: Anthropic has contributed safety tools and datasets for the community, e.g., red teaming datasets and AI-based security screening in Firefox.
[61:11] On international safety standards: Jack outlines a “race to the top” approach, as safety best practices cascade from company to industry to governments, suggesting that new standards, third-party audits, and robust competition will eventually lead to international regulation.
This summary seeks to capture the richness, concerns, and hope at the heart of one of today’s most urgent technological conversations—faithfully echoing the voices and arguments as they appear in the original episode.