
Evan Osnos speaks with Wired’s Katie Drummond about the hype around artificial intelligence, and what tech moguls learned from Elon Musk’s tenure in the White House.
A
This is the New Yorker Radio Hour, a co-production of WNYC Studios and the New Yorker.
B
Welcome to the New Yorker Radio Hour. I'm Evan Osnos, sitting in today for David Remnick. I'm based in Washington for the New Yorker, and I can tell you that some of the best political reporting on Trump's second term has actually come from a tech magazine: Wired. First, Wired published a series of scoops about Elon Musk's reign in the White House. And even though Musk has moved on, DOGE is still having a massive and disruptive influence on our government and on many people's lives and jobs. It can be startling to realize how much the tech industry is setting the agenda for American politics right now. Donald Trump, who, after all, used social media to triumph in politics, has also become a major promoter and beneficiary of cryptocurrency. And meanwhile, AI is at the top of every serious conversation about the world and how it's changing. Wired has been covering all of this really well, so I recently sat down with its global editorial director, Katie Drummond. Drummond is also the co-host of Wired's podcast Uncanny Valley. Katie, we are talking about six months into this administration, and we'll get to what Wired has described recently as DOGE 2.0 in just a second. But I want to start by going back for a moment to when this effort began, because DOGE has been a hard target for journalists to pin down. It's operating very much in the shadows, by design, and it's not subject to a lot of the usual disclosure rules at other agencies. But Wired has dug into this very successfully over the course of the administration. I mean, you've found patterns, you've found individuals. It seems like the experience of covering tech turned out to be unusually helpful for covering politics right now, and I want to know how that became clear to you. How did you and your colleagues confront this puzzle of covering DOGE?
C
I would say, without giving myself too much credit, because, as we've talked about, I am Canadian, so I don't like to do that. But I think it's been very clear to all of us, and certainly very clear to me as a journalist for the last several years, that the technology industry is now the locus of power in the United States, and arguably around the world. The CEOs of major tech companies hold so many of the keys that determine the way we all live our lives, and certainly the way our country is governed. And what became clear to me in the run-up to the 2024 election, even before Elon Musk really jumped in in July of 2024, was that the overlap between the technology industry and federal politics would be a major storyline. We were prepared because we hired a politics team. We just said, we have to start covering this in a more deliberate way. But we had the benefit of already having really strong coverage of the tech industry. We have a business team helmed by Zoe Schiffer, who wrote a fantastic book about Elon Musk and his takeover of Twitter. When you read a lot of our DOGE reporting, you'll see several bylines on those stories, and that's because we had business reporters working with politics reporters to cover any given news event.
B
Was there an actual confluence of information coming from the west coast and information coming from Washington? Did you ever find that these two were leading you to common destinations in a way?
C
Absolutely. And I think we had first-mover advantage on what turned into a very, very big story: the story of DOGE. We got a flood of tips and information from people inside these federal agencies who were watching this happen, who were interacting with DOGE operatives and DOGE workers inside their agencies and felt like they had to say something. And then sources close to Musk, and adjacent to this almost right-wing faction of Silicon Valley, were talking to us and helping us understand where Silicon Valley and the tech industry stood vis-a-vis the administration and DOGE.
B
You've actually hit on what is perhaps the essential point of encounter in this current era of politics, because there are the civil servants who represent the traditional backbone of American governance, and then you have this new group of disruptors, as they would describe themselves, coming in. And each one regards the other as a fundamental obstacle to human flourishing. You were already alert to the fact that something was about to happen in Washington that its local inhabitants (and I'm speaking to you, of course, as somebody who lives here) really didn't fully understand.
C
I think that's right. It was Meta, and Mark Zuckerberg many, many years ago, who used the very cliched slogan "move fast and break things." And that is very much something that many of these people embody. And so you start moving fast and breaking things and sending emails asking for, you know, bulleted lists of productivity from people who have been civil servants for two decades. I mean, that is an extreme culture clash.
B
This has also shown us something important about the ideological underpinnings of Silicon Valley in this period. You know, you've covered the industry for years, and you've seen the ebb and flow of ideas. This goes all the way back to when you started at Wired, back in about 2009, I think, right? And back then, of course, the cliche about tech was that it was fairly left-leaning, Obama-aligned, even if it was always quite hyper-market-oriented. And now, of course, we see a much more complicated picture, a much more overt conservative streak, to put it mildly. And the question is: was it always there and underappreciated? Was it quiet, or has it grown in this period? And if so, why?
C
I mean, obviously the tech industry and Silicon Valley are not a monolith. There are certainly many, many people living and working in San Francisco, in Silicon Valley, in tech, who are very left-leaning, who are looking at what's happening with great horror. And then there is the more conservative streak, the Marc Andreessens of the world, who certainly have taken that set of philosophical ideas and that set of politics and really taken it to the extreme in recent years. What I'm seeing from leaders in the tech industry is less about an overt set of political beliefs, right? Like, we are liberal, or we are leftist, or we are right-wing, we're all in on the GOP. It's much more craven and cynical and opportunistic than that. It is: we are gonna move in the direction that is in the best interests of our business, the best interests of our shareholders, you know, of our board, of our own pocketbooks, of our own desire to be dominant. We are gonna go wherever we need to go to realize our ends. And so if that means we will be sitting front-row center at Donald Trump's inauguration in the year 2025, then that is what that means. And so I think the change that I've seen from maybe 2016 to now is less about, oh, all of a sudden a bunch of them have just shifted to the right. It's much more, oh, they feel very much empowered to follow their own interests and the interests of their companies and their industry wherever that may lead them, even if it is unsavory, or perhaps unethical, or at least ethically dubious. At the very least, they are opportunistic to the nth degree. And that is what we're seeing.
B
That's immensely important. If that's the case, and I think the evidence out there certainly supports it, I want to understand how that happened. Not to glamorize the past; let's remind ourselves that Bill Gates, after all, was brought before Congress, and there was an effort to break up Microsoft back in its day for monopolistic tendencies. But the way you describe this period is very persuasive, and I'm curious what you think was driving it. How did this theory of what they would describe as shareholder responsibility, where they're taking care of their investors, take on this fundamentalist form?
C
I remember working at Wired the first time around, so this was 2008, 2009. Wired in that moment was great. It was a great Wired. It was so optimistic in its view of what technology was doing to the world. And this is not exclusive to Wired. I mean, this was tech journalism overall in that moment, and I think it spoke to the view of the vast majority of average American citizens: wow, social media is here. Look at what it has the potential to do. Look at how the Internet can connect all of us. What an incredible global utopia we may one day soon live in, where we can all spend time on Facebook or Twitter sharing ideas, building bridges, resolving conflict, et cetera. Silicon Valley was telling a great story. Everyone was buying it. These companies grew to be tremendously powerful. And the consequences of their actions, and the consequences of what they built, became very clear to all of us as the years went on. They felt that backlash, right? They felt the media, quote unquote, turning on them, the public turning on them. They felt this shift in sentiment. A lot of those tech leaders took that very personally. I mean, they felt that backlash, and it led us to where we are today, which is: well, if I don't have the hearts and minds of the press and of the general public, fuck 'em. Yeah, that turn happened around 2016, as the promise of social media started to crumble and we all saw what was really there and what the real consequences were. I think that backlash led Silicon Valley leaders to this almost feudal sense of: well, I hold all the keys, I have all the money, I already have all of your eyeballs, I have all your data, so I will do with it what I wish, consequences come what may. And the reality is, other than public perception of a company like Meta, or of Mark Zuckerberg, not being particularly favorable, what are the consequences for these companies?
They know that they can operate with relative impunity, and they are now lining themselves up next to a president who will allow that to continue to happen. Right? Who will not make any great effort to regulate AI, to regulate, I mean, God forbid we think about regulating social media twenty years too late. But they are lining themselves up with an administration that can create the most hospitable environment for them, because they are, frankly, you know, tired of hearing from the press, tired of hearing from the public. They learned their lesson from those backlashes that they worked through, and I think they just feel that they are of a size and a scale and a level of power that they can operate in this very cynical way, in this very opportunistic way.
B
Now, we've just had this amazing moment when Elon Musk was essentially defenestrated from Washington, first by his collapsing relationship with the president, but even more importantly, almost by the public, in the sense that Tesla sales collapsed and his favorability went through the floor. Do you get the sense that the tech community takes any lesson from that, some sense of, oh, we seem to have crossed some threshold the public couldn't accept? Or do you think they just say, disruption has always generated backlash, and we must continue forth?
C
No. I mean, look, I think there were a lot of lessons that they could have learned from what went wrong with DOGE, with Musk. But from everyone that I've talked to, and we have some fantastic politics reporters who are talking to, you know, sources inside the GOP as well as sources in Silicon Valley, if they learned anything, ultimately, it was: be quieter. Like, Elon Musk is so loud. I mean, he was so out in public. He was so out in front. You don't need to, like, be in President Trump's meetings with his Cabinet with your son. Like, don't be weird. Elon Musk flamed out because he was wildly overexposed, and he caused all kinds of headaches for himself and for the administration. I mean, he said crazy stuff. He said we're gonna cut two trillion dollars out of this budget, without understanding that there weren't trillions of dollars to feasibly cut in the first place. But ultimately, if I'm being totally honest, I think that if the tech industry and Silicon Valley have learned anything from what Elon Musk was able to accomplish, it's that this is open season. This is an invitation from the president and his administration for these tech elites to ascend to wherever they want in this country, provided they play by the GOP's and Trump's rules.
B
I'm speaking with Katie Drummond of Wired. This is the New Yorker Radio Hour. I'm Evan Osnos. More in a moment.
D
The New Yorker Radio Hour is supported by Hims and Hers. If you're someone who values choice in your money, your goals, and your future, then you know how frustrating traditional healthcare can be. One-size-fits-all treatments, preset dosages, zero flexibility. It's like trying to budget with a fixed expense you didn't even choose. That's where Hims and Hers comes in. They offer access to personalized care for weight loss, hair loss, sexual health, and mental health, because your goals, your biology, and your lifestyle are anything but average. There are no membership fees and no surprise fees, just transparent pricing and real care that you can access from anywhere. Feel like your best self with quality, convenient care through Hims and Hers. Start your free online visit today at hims.com/NYRH. That's H-I-M-S dot com slash N-Y-R-H, to find your personalized treatment options. Not available everywhere. Prescription products require provider consultation. See website for full details, important safety information, and restrictions.
C
On the Broadside, we take you into the heart of the South with stories that'll surprise you. Bigfoot apparently loves glow sticks. He likes to party, I guess. Exactly. He's a raver. And topics that dig into the muddy margins of history, right? The good, the bad, the ugly.
B
It's not clean at all.
C
It's so messy. Wait a second.
B
This is actually real.
C
Listen to the Broadside: one story every week exploring the rich traditions of the South.
B
This is the New Yorker Radio Hour. I'm Evan Osnos, a staff writer at the New Yorker, and I'm in for David Remnick. This week I've been speaking with Katie Drummond, the global editorial director of Wired. Wired has always been a major voice in tech journalism, but these days, and I think especially since Trump took office again, there's hardly any daylight between technology, politics, and economics, and Wired has really risen to the challenge of covering those intersections. Katie Drummond started at Wired as an intern, and she worked at Vice and Bloomberg and other publications before taking the helm at Wired in 2023. So we'll get back to our conversation, which was recorded for the New Yorker's Political Scene podcast. Katie, I want to talk about AI. We can't have any conversation, apparently, in 2025 without bringing that up. But in all seriousness, there's its connection to some pretty big societal questions that are just over the horizon, or, I would argue, actually already upon us. The big difference between this generation of technology and what's come before, whether it was the mechanization of agriculture or electricity, seems to me to be the speed of diffusion, how fast this is sweeping through our lives. Just the sums of money involved are eye-watering already, the amount of money that AI is hoovering up, and we're about to see big changes. How are you sensing that these companies are talking about their role in the societal implications of this?
C
Ultimately, they see their role and they see what's happening with AI as an inevitability. And I think that that's interesting, because they are not saying or thinking, well, we're creating this incredible technology and we look forward to seeing whether or how or to what end it's adopted. They are seeing this, talking about this, positioning themselves and positioning their companies in a way that very overtly says: this is here, and more is coming, and there's nothing that you can do about it.
B
And are they right about that? Is there an inevitability? They would argue, of course: well, look, if we don't do it, China's gonna do it, so we have to get there first and claim the moral high ground, and so on. I have a lot of reasons to be skeptical of that argument. But this idea of inevitability seems directly at odds, and I've heard them voice it, with the other thing that they said for years, which was: we must make sure that the worst consequences and powers of AI are contained and prevented. How did they suddenly go from saying this thing is a dangerous entity to saying it's coming for us, so let's just try to manage the downside?
C
Yeah, I mean, I think that a lot of that catastrophizing, not all of it, but a lot of it, was marketing. They want these models and this technology to sound as big and daunting and powerful and impressive and scary as they possibly can. I think that that is by design. Like, I think it's important, if you are spending billions of dollars and raising billions of dollars, to make what you do sound not only inevitable but really, really, really powerful. Their PR tactics change as the market changes. And I think the doomsday-scenario stuff was a very effective way to establish, to Americans and to the rest of the world and to the CEO of every company who is, like, sipping AI Kool-Aid somewhere in their office right now, that this is serious stuff. You know, a lot of these AI leaders talk using that sort of dystopian doomsday language, or they talk about the potential crises that AI will unleash, whether it's, like, fifty-percent unemployment, or, you know, Sam Altman saying AI is going to usher in this era of widespread, rampant fraud because of the ability to imitate a person. A lot of that language and that hyperbole masks the fact that these individuals have a stake in exactly the scenarios that they are outlining. So Sam Altman oversees something called World ID as part of his, you know, enormous empire, which is designed to use biometrics to literally scan your iris and give you a number. I mean, I can't believe, like, I should start writing novels. So he's talking about this dystopian future. How convenient that he already runs a company that has a solution. When you listen to people talk about artificial intelligence, you always have to ask yourself: what is their motive?
B
Yeah.
C
What is their incentive? Do they run an AI company, or are they an independent researcher with an institution that is not funded by Microsoft or Google? There is so much hype around this technology, and it can be very hard to discern, even for me. And, like, I run Wired. I'm just going to be real with you. Even for me, it can be very hard to distinguish what's actually happening here.
B
Yeah.
C
And then how much of this is just BS or how much of this is just marketing?
B
One of the projections that you hear from Dario Amodei at Anthropic is that he expects half of all entry-level white-collar jobs to disappear in five years. It's been a controversial subject, but, if you listen, you know, the CEO of Ford Motor Company came out and said he expects that half of the white-collar workforce is going to disappear eventually. And Amazon says, we've probably reached peak employment. I mean, there are these indicators that something is changing in the labor force, even if it's not the sort of sci-fi dystopia that they've been marketing over the years. How do you think about what is actually going to happen to white-collar work over the course of the next ten or twenty years?
C
If I had a perfect answer to that, I would be a consultant. I would have quit this job. I would be so rich right now. You know, I don't think that anybody really knows. It is true that the nature of employment will change. It is true that there are some jobs that I could imagine right now that, in a year, do I think they'll exist? Probably not. Are there new jobs that will be created, as all of these guys are promising? Yes, of course, there will be new jobs in this new, more automated white-collar world. But I think it's two things. If we're thinking about the here and now: how do I think about it? How do I talk to my staff about it? You can't ignore that these tools and products exist and that they are being treated as an inevitability. So it is in your best interests, if you are just graduating from college, or if you are at the midway point of your career, or wherever you are, to spend some time with them. It would behoove you to understand how these things work, and how they work in the context of what you specifically do for a living. The other piece of it, which I'm seeing a lot, and which is one of the reasons I don't feel confident making long-term predictions about this, is that I also think we are seeing the premature elimination of some roles, or the premature integration of AI into some companies and some workforces. That's happening because all of these executives go to the same conferences, and then they all talk, and then they go back to their offices and panic. You know, I talked to someone a few weeks ago who works adjacent to big tech; a lot of these companies are clients of his. And he was telling me about how a lot of companies that he works with have replaced software engineers with AI and are now churning out, like, the buggiest, shittiest code, terrible code, to the point that they now need to pay another company to debug their software for them, because they overshot.
Right? Like, it's not good enough yet to replace their software engineers, but they did the layoffs anyway, because they needed to cut costs. So I am waiting for some of that dust to settle. And in the next twelve to eighteen months, do I think that half of all entry-level jobs, or whatever that estimate was, do I think that that's gonna happen in the next year, year and a half? No, I don't. I think what we'll see in the next year to year and a half is the reality meeting the hype: are companies and corporate leaders actually seeing productivity gains? Is this actually improving the bottom line? Are these big corporate use cases of AI as real as they have been promised to be, or are we not quite there yet? And I want to give that a minute to settle before we start talking about the fact that, like, nobody's gonna have a job anymore. But, you know, there's so much interesting conversation about AI and education, and how that's changing the way students are learning, or not learning. I think that it is in all of our best interests, especially if you are new in your career, to become very conversant in these tools.
B
Yeah, you mentioned education, and I talked to a former university president recently, and I said, okay, you were running a school; how would you deal with the fact that students are having AI write their papers? And he said, well, it's not hard, from my perspective. I think what I would do is just tell them that one out of every ten will be randomly assigned an oral exam, like a PhD student, and if you flunk, then you really flunk. Sounded a little Hunger Games to me, but also perhaps quite effective. And I was curious: in your coverage of this problem of cheating in school, are you seeing us move toward a more sustainable arrangement, something better than, you know, AI writing the papers and AI grading the papers?
C
I think that educators, by and large, are starting to move away from the panic and the sort of hysteria that I saw characterize the first few years of this, which was: oh no, our students are using AI, they're all cheating, this is terrible. And it becomes: well, what are you going to do about it? Because you will have a class next year, and the year after, and the year after that, and, again, this isn't going anywhere. So, okay, they can all write the essay using ChatGPT. Now you need to change the assignment. I do think we are seeing educators adopt a more, shall I say, solutions-oriented approach. Because the reality is that you need to change the way you conduct your curriculum. The way you educate students has to change, because a lot of the assignments and a lot of the methodologies that you used ten years ago are no longer going to be effective if any take-home assignment can be written using an LLM, and the detection software, to be quite candid, is not very good, and there are plenty of ways around it. How are you going to make sure that these students are learning everything they are supposed to learn, instead of just learning how to use ChatGPT to write an essay? That's obviously not the end goal. They should learn how to use ChatGPT, but not just to complete the assignments that their teachers have given them.
B
So we're at this fascinating moment right now, where it seems like there's a Game of Thrones going on among the big AI companies that reminds us a bit of how Microsoft and Google and Meta came to be these giant leviathans. And we're also at a time when the Federal Trade Commission has, over the last few years, been trying to break up what it certainly sees as unfair market behavior and monopolistic enterprises. Do you think that we're heading in a direction where that will replicate itself? Are we just going to end up with a few giant AI companies, or has anything been learned that's going to keep things a bit more distributed?
C
You know, Evan, I'm really trying to bring some optimism to this conversation, but you're not setting me up very effectively.
B
Can my phone take a photo? That's my question, Katie. You know, let's find something that we can take some solace from.
C
Yeah, look, I know. I mean, the future that I see in the short term right now, given who's in office, given where all the money sits, where all the power sits, where all the lobbying heft sits in this country, I think that we are moving toward an ongoing monopoly of big tech. If anything, I think what we'll see in the next year to two years is many of these smaller AI companies and startups just being hoovered up by the bigger players. That's inevitably where this is going. I mean, these companies are fundraising at outrageous valuations. There is so much money being pumped into a lot of these startups; it's not sustainable. Some will shut down, some will be acquired, there will be acqui-hires. We're already seeing that happen. So I think ultimately we will end up with a portfolio of big tech companies, whether they're the same big tech companies that we're working with now, or whether we start to see that shift a little bit as some of these AI companies become bigger and bigger and bigger. I'm talking about OpenAI in particular, and what their ultimate destiny looks like. But I think this era of monolithic big tech is by no means over. And I wish that I were giving you a different answer, because there's nothing I would love to see more than a more dynamic environment, a more competitive environment, an environment where startups and novel thinking and new ideas really have space to flourish amid the Metas and Googles and Microsofts of the world.
B
And I think you've actually taken us to a really important point here, which is that, if you look at the history of technology and the way that it exists in broader society, you know, there are moments when it is this fundamentally optimistic realm. You go back to when we were first putting people on the moon, and it was reflected in the science fiction of the time; it was all sort of utopian: what will it be like out in the new realms? And then we've watched as it's all gotten so much darker over the last generation. And I'm just curious: what gives you reasons for optimism, if there are any? Do you see people out there who are at the moment marginal figures, who are thinking differently or swimming against that tide, but who give you a sense that this is not an inevitability, and that perhaps there is a voice within this movement that can turn the direction a bit?
C
I do, and I would point to two specific examples: two people and two organizations who give me a lot of hope right now, and who I think are doing really interesting work. One is Meredith Whittaker at Signal. They are incredibly principled and resolute about the premise of Signal, which is really robust, end-to-end encrypted communication, period, the end. That is what they do, and they do it tremendously well. And she is an incredibly articulate voice, in every room that she's in, about the dangers of exactly what we just spent this time talking about. Wired is very grateful to Signal, because that's where we do most of our chatting right now. It's also where I talk to my entire family. I mean, if you aren't on Signal and you live in Trump's America, I would get there. And the other is Bluesky and Jay Graber. I think that's a really good example of social media and social networking being done differently, and being done in a way that actually empowers the user much more than it empowers the company. I think, honestly, so many of us were at a point where the very nature of social media, how it worked, who benefited and who didn't, felt set in stone. It felt like: this is the way it has always been, this is how Mark himself designed it, and this is just the world that we live in. I give all of my content to this platform, they sell all of my data, they shove a bunch of ads into my feed, and everybody is mean to me on here anyway. Like, that was basically what social media was.
B
Congratulations. That's what we...
C
Congratulations, all of us. This is the utopia that we were promised when they landed a man on the moon. And I think that Jay Graber and Bluesky have come along and done something really different, which is a decentralized social network, one that puts the power in the hands of the users to design their experience exactly the way they want it to be designed, and to take their followers with them when they move to another service or another platform. It's a fantastic idea. It's actually surprising to me that it didn't exist ten years ago, because I can't imagine now my social-media reality and my world without Bluesky. You can talk all day about, oh, everybody fled Twitter, all the lefties moved to Bluesky. Well, Twitter is an echo chamber for far-right provocateurs and Nazis, so I don't think that we need to think too hard about the decision not to spend time on that platform. And I think that what Bluesky is doing, just from a technological point of view and a human-betterment point of view, that is a better way to run a social media company. It just is. So, yes, there are good things happening on the Internet and on your phone, few and far between.
B
Yeah, we'll take them where we can. But, I mean, the connections to politics are really profound, and a lot of this is really important and, frankly, new material for people whose noses are usually buried in politics. And it's wonderful to have you here. Katie Drummond, global editorial director of Wired, thank you so much.
C
Thank you for having me. This was great.
B
Katie Drummond of Wired. We spoke in July, and our conversation appeared on the New Yorker's podcast The Political Scene, which I co-host in Washington along with Susan Glasser and Jane Mayer. I'm Evan Osnos. David Remnick will be back next week. Thanks for joining me, and have a good week.
A
The New Yorker Radio Hour is a co-production of WNYC Studios and the New Yorker. Our theme music was composed and performed by Merrill Garbus of Tune-Yards, with additional music by Louis Mitchell and Jared Paul. This episode was produced by Max Balton, Adam Howard, David Krasnow, Jeffrey Masters, Louis Mitchell, Jared Paul, and Ursula Sommer, with guidance from Emily Botein and assistance from Michael May, David Gable, Alex Barasch, Victor Guan, and Alejandra Deckett.
C
And we had help this week from Amber Bruce.
A
The New Yorker Radio Hour is supported in part by the Charina Endowment Fund. Since WNYC's first broadcast in 1924, we've been dedicated to creating the kind of content we know the world needs. Since then, New York Public Radio's rigorous journalism has gone on to win a Peabody Award and a duPont-Columbia Award, among others. In addition to this award-winning reporting, your sponsorship also supports inspiring storytelling and extraordinary music that is free and accessible to all. To get in touch and find out more, visit sponsorship.wnyc.org.
Podcast: The New Yorker Radio Hour
Host: Evan Osnos (in for David Remnick)
Guest: Katie Drummond (Global Editorial Director, Wired)
Date: August 22, 2025
Topic: The influence of Big Tech on the current U.S. political landscape, especially during Trump’s second term, with a focus on the convergence of technology, politics, and the ongoing impacts of AI and cryptocurrency.
This episode explores how Silicon Valley and the broader tech industry have become power brokers in Trump’s America. Evan Osnos interviews Katie Drummond, Wired’s global editorial director, to discuss the magazine’s in-depth reporting on topics like Elon Musk’s political role, the enigmatic “Doge” group, and the broader societal and political ramifications of AI, cryptocurrency, and tech culture’s shifting ideology.
[00:12 - 04:29]
“The technology industry is now the locus of power in the United States... The CEOs of major tech companies hold so many of the keys that determine the way we all live our lives, certainly the way our country is governed.”
— Katie Drummond [02:20]
[04:29 - 05:37]
“You start moving fast and breaking things and sending emails asking for, you know, bulleted lists of productivity from people who have been civil servants for two decades. I mean, that is an extreme culture clash.”
— Katie Drummond [05:09]
[05:37 - 08:16]
“They are opportunistic to the nth degree. And that is what we're seeing.”
— Katie Drummond [07:48]
[08:16 - 11:56]
"If I don't sort of have the hearts and minds of the press and of the general public, fuck em… I hold all the keys, I have all the money... So I will do with it what I wish. Consequences come what may."
— Katie Drummond [09:39]
[11:56 - 13:49]
“If they've learned anything from what Elon Musk was able to accomplish, it's that this is open season. Like, this is an invitation from the president... for these tech elites to ascend to wherever they want to in this country, provided they play by the GOP and Trump's rules.”
— Katie Drummond [13:32]
[15:34 - 20:48]
“A lot of that catastrophizing... was marketing. They want these models and this technology to sound as big and daunting and powerful and impressive and scary as they possibly can.”
— Katie Drummond [18:22]
[20:48 - 24:32]
“There are some jobs... that in a year, do I think they'll exist? Like, probably not... But I think... we are seeing the premature elimination of some roles... I am waiting for some of that dust to settle.”
— Katie Drummond [22:25]
[24:32 - 26:37]
“Educators by and large are starting to move away from the panic... It becomes, well, what are you going to do about it? ...You need to change the way you conduct your curriculum.”
— Katie Drummond [25:15]
[26:37 - 29:05]
"I think that we are moving towards an ongoing monopoly of big tech... many of these smaller AI companies or startups just being hoovered up by the bigger players."
— Katie Drummond [27:28]
[30:01 - 32:42]
“If you aren't on Signal and you live in Trump’s America, I would get there.”
— Katie Drummond [30:32]
“What Bluesky is doing... from a technological point of view and a human betterment point of view, that is a better way to run a social media company. It just is.”
— Katie Drummond [32:14]
This episode underscores how the U.S. now finds tech and politics almost indivisible, with Silicon Valley’s influence reshaping government, work, ethics, and society at large. While concerns about misaligned incentives, monopoly, and the social consequences of tech loom large, there are glimmers of hope in independent, user-centered projects and new social web paradigms. For anyone seeking to understand the real power dynamics of 2025, Wired’s rigorous reporting—and this frank, insightful conversation—are essential listening.