A
I'm Kiana, and I leveled up my business with Shopify. Once I figured out that Shopify was a thing, I never turned back. I can create a site with my eyes closed. Shopify thinks ahead of us, you know, and it thinks about the customer more than anything. Every day I'm thinking about some other new business, but Shopify is doing it to me because it's so easy to use. It's like I can't stop. I'm addicted. Start your free trial at shopify.com.
B
Today on IHIP News, our guest is Ronan Farrow, a contributing writer at The New Yorker, and he is out with a juicy new investigative piece about Sam Altman. The title is "Sam Altman May Control Our Future. Can He Be Trusted?" Welcome, Ronan. How are you today?
C
It's great to be here. I'm exhausted from getting this piece out, but it's been really wonderful to see the reaction. I think people across America are starting to really clock how the AI industry is in need of a conversation about accountability.
B
And let's just say, listener, if you don't know: Sam Altman runs OpenAI, the company behind ChatGPT, and I know that you probably use that at some point. I've used it at some point. But, Ronan, how dangerous, potentially, is AI?
C
Well, part of what's built into this story, Jennifer, is that Sam Altman himself founded OpenAI on a very specific promise. He said this is the most powerful and maybe most dangerous technology in human history. And, you know, not everybody agrees with this, but the founders of this company and the technical people inside building this artificial intelligence were sounding these alarms. They were saying, look, there could be the science fiction scenario, a Terminator, Skynet situation, where an AI falls out of alignment with human values and becomes advanced enough and integrated enough into our systems that it could launch nukes. But you don't even have to go that far to be alarmed. Already right now, we are seeing an environment where AI is powering weapons in war zones. There's been at least one case where it seems like a drone went rogue without a human operator. Chemical weapons are being identified much, much more rapidly through this technology. The whole economy has come to depend on a very few AI companies that are heavily leveraged, borrowing and doing deals with each other. Even the sunniest projections from economists hold that in the coming years, millions and millions of jobs are going to be exposed to disruption and maybe elimination from this. So the stakes are real. They have not gone away. And the story this investigation tells is of how Sam Altman, while he was fundraising on this premise of, we've got to be scared, and therefore give the money to us, because we're the safety guys, and we're going to go slow, and we're not going to be about growth, growth, growth, we're going to stay a 501(c)(3), very rapidly replaced that with a pattern, critics allege and documents in this piece show, of saying one thing and then doing another. And that starts right at the top, with the high-level direction, where this company has become one of the biggest for-profit companies on earth now.
B
And does Sam Altman know how dangerous this is? Does he know? You said earlier that he had people around him who were aware, and it's my understanding from your reporting that you found people inside the company who had, like, a safety net to try to put up guardrails, and those have been completely removed. Which sounds familiar, kind of like what's happening with the federal government, a through line there: guardrails are disappearing.
C
That's exactly the point. You know, when I spend a year and a half of my life, and my co-author on this did the same, investigating a company like this and an individual like this, it's because it matters, this one company and this one individual. Sam Altman's out there saying this technology also has all this upside. Some applications are real, you know, in medical diagnosis, in weather warnings. There is lifesaving stuff that is coming out of this tech. But that's not where all the valuation is coming from, right? And they're doing big defense deals to get money now, and the risks are not going away. And Sam Altman, critics allege in this piece, triggered a wider race to the bottom. I think some of the guardrails you're referring to are safety people in the company who were sounding alarms. We document a lot of the internal whistleblowing and people complaining to the board, saying, hey, this company has started to race towards profit, and it's actually in some cases concealing what's happening safety-wise. There are allegations that Sam Altman was telling his board, you know, that a complex AI model had been tested in various respects, safety-wise, and then it turns out it hadn't. There's lots of stuff like that. But also, in terms of the corporate structure, there were guardrails that fell away, because this was a nonprofit. It was designed to have a board that could fire the head of the company if he was untrustworthy, and particularly, in this context, if he couldn't be trusted with this incredibly important mission of, this may kill us all. Sam Altman himself has said this could be lights out for all of us. That's his words, if it goes wrong. And what happened was, a couple of years ago, his board and executives at this company got together and they fired him, because they felt that he was lying too much, too much for any executive.
And there are people in this piece who say he was lying too much especially in this context, where that can become dangerous. And the story of what happened after is the story of that wider race to the bottom. This is really a situation where capitalism won out. Sam Altman went into a war room, and we document how he fanned out to all these powerful allies in the investment community, who in turn were connected to heavy hitters in politics. One of his main defenders is this Silicon Valley Democratic investor, Ron Conway. He was at lunch with Nancy Pelosi when he got the text from Sam saying, like, I've been fired, we've all got to go to war. And a bunch of those people did go to war. And, you know, in fairness to all of those people, the board that fired Sam Altman did not communicate what was happening. They got bad legal advice. They stayed quiet. Right now is kind of the first time we're seeing all of the details of what they were really alleging and why he was fired. But in that void, Sam Altman was able to make the case: look, the numbers don't lie, I'm the key to the growth of this company, it's going to fall apart without me. And he came back, and I document parts of that comeback that are still raising real questions. There was an outside law firm investigation that was kept entirely out of writing. So this goes to your big point, right? There's a lack of guardrails, a lack of accountability. And since then, what we've seen in AI is a lot of these researchers who are closest to the safety concerns saying, over and over again, this is becoming a situation where all of the labs, even ones like Anthropic (if anyone listening has used Claude), which was founded by some of these people who left OpenAI because they didn't trust Sam Altman and wanted to be the safety guys, are watering down some of their safety commitments. You know, they're also in an environment where there's less and less space for AI labs to focus on safety.
I think this is a case, Jennifer, where we just need outside eyes on this tech, and we just don't have a political environment where that's happening meaningfully.
B
So, in reading your piece, it reminded me of what I've been observing since Trump was reelected. And you note that at one point Sam Altman was a Hillary supporter, and he seemed to be, you know, for equal rights. He is a gay man, and he seemed to be a more left-leaning person. And it's not just Sam Altman, it's Marco Rubio and all of the oligarchs. And it seems like voting for Trump the third time, in my observation, broke something in people. And in your reporting, some people you interviewed referred to Sam Altman as a sociopath. I had Kara Swisher on the podcast a couple of months ago, and she said, oh, these tech guys, Jennifer, they don't believe in anything. If Kamala had won, they would have had their pronouns in their bios. What can you say about Sam Altman? I see him as a representative of so many people who have just broken and bent the knee to this overwhelming moral collapse that we are viewing. And then there's this savior narrative in the middle of it: Trump, I alone can fix this; Sam Altman, I alone can save AI; and Elon Musk, I alone can solve the national debt. What's your take on all of that, Ronan?
C
Part of what we write about in the piece is that Sam Altman, during the Biden administration, was saying, regulate us, this is so dangerous, you might even take control of this. And he was in there with Biden administration officials, you know, really hammering out, like, we've got to get more guardrails on this tech, and presenting himself that way. And then the moment that Donald Trump was in, he was right there in the first days of the administration announcing these massive infrastructure deals that are going to put AI capacity, potentially dangerous AI capacity, into the Middle East. We're seeing right now, during the Iran conflict, threats from Iran to strike a planned data center in Abu Dhabi that is a product of all of this lobbying from Sam Altman. The backdrop of this is that developing advanced AI just takes an incredible amount of money, and a lot of the business dealings we describe in this company were about Sam knocking on different doors, particularly in the Middle East, trying to get that money. And the Biden administration was somewhat alarmed by this. He actually was in a security clearance vetting process that we write about, and we obtained internal emails in which experts on security clearances said, we don't think this guy can get through, because he has all of these foreign entanglements and he's raising all this money. The moment Trump came in, all of the regulators went away, and all of the money from the Middle East could flow freely. And, you know, there are safety people in this piece who say that is incredibly dangerous, that it reshapes the balance of power in the world, that it hands over, like, the equivalent of nuclear weapons in whole new contexts to autocrats. You make the right point: that is the wider story of so many American industries right now. I'm a lawyer by training. The law firms were in a situation where they could actually be a really important bulwark, right, through impact litigation against the erosion of democratic values.
They bent the knee so quickly, starting with Paul Weiss, if anyone followed that story. You know, Hollywood, media, the consolidation of platforms where there can be this kind of accountability reporting is real. There are not a lot of places that will give the resources for a reporter like me to work on something like this for a year and a half and get all these documents. And we don't, meanwhile, have that kind of independent oversight happening through regulation and legislation. There is a huge proliferation of money from AI in politics, as I mentioned before. You know, this is a moment where, if you're running for office in America right now, you are really having to contend with a ton of the money coming from AI, because a ton of the economy is propped up on AI, and there's just very little way to push back on that. I do, though, believe that if readers see this, they look at this piece, they care. You can encourage independent oversight and journalism by subscribing to places that do it. Maybe that's The New Yorker; maybe you believe in what we've done with this piece and you think we need more of that. Please subscribe. Maybe it's ProPublica. You know, there are places doing meaningful work in a shrinking space. A funny example of this, actually: as we were closing this piece, and we're in deep conversations that are, as you can imagine, very combative with Sam Altman and others at OpenAI, they announced they were acquiring TBPN, this tech podcast. So it's just a little microcosm for how the whole media landscape is now being gobbled up by these tycoons that we're talking about. Separately, if you read this piece and you care about these safety stakes and you care about what those researchers are warning about, I do still believe, even in this environment, in the power of democracy to work.
And I do think that politicians are looking at polling numbers that are increasingly coming out saying a majority of Americans see AI as having more risk and downside than upside currently. And I think if we all kind of join arms and say this matters, and accountability for Silicon Valley matters, and oversight matters, specifically for these AI safety issues, and if politicians get a sense that people will vote on that basis, there is a chance that the legislative branch can still do its job.
B
You know what I think? The messaging in a lot of this will hit with a lot of the electorate that think tanks and focus groups try to dig into. And it's a really simple message, and you mentioned it earlier in this interview: the scapegoating that was used by the Trump campaign, that immigrants are coming to take your jobs. It was never the immigrants that were going to take the jobs; it was always the oligarchs. And with this AI, we're already seeing plans from Bezos, from all of these tech oligarchs, for mass firings. And they want to replace working-class Americans' jobs with robots. And sometimes simple messaging through all of this is a way to break through, because it's like, oh, wait a minute, Elon Musk, immigrant. I mean, now he's an American, of course, and I don't have an issue with somebody coming over here and making a business. My issues with Elon Musk are about his character. But I think that's some of the messaging that journalism can bring about, and that's why I think it's so important. I always tell people, you know, I'm not a journalist, I am a commentator. But what you do is so important, because it shines a light and rights wrongs and brings awareness to the public. And so what is your take on their plan, especially when you dive into the personalities of, like, Peter Thiel and Musk and Altman, their plans to dismantle democracy and have this kind of weird CEO world that's running the world? What is your take on all of that, and what did you discover in your, you know, year-and-a-half-long research on all of this?
C
Well, you're exactly right to highlight this backdrop of openly anti-democratic ideology in Silicon Valley. I mean, Peter Thiel is openly espousing a set of ideals, you know, that trickle down from Curtis Yarvin, if anyone knows that name. It is really about the idea that democracy has failed, and what should replace it, in the case of what Thiel is expressing specifically, is a, like, racially stratified situation where a monarch, basically a CEO-like character, has much more absolute power. So, you know, I probably don't even need to comment on that further. And that thinking has gone into the water in Silicon Valley, beyond that extreme example of Thiel. You know, Sam Altman is not, I think, an extremist politically. I think the thing we talked about earlier, the dynamic in which he is opportunistic in this respect, and he sees, as government officials told us in this story, an opening where the Trump administration can do his bidding, you know, where the Biden administration could not. And by the way, this is not to exonerate the Biden administration, which created a number of these problems and gaps in oversight, or the Democratic Party, which, as I think you alluded to earlier, is, like, at the forefront of big Silicon Valley money taking over and a failure to espouse any kind of protections for working-class Americans. But with respect to these oligarchs, when I reported on Elon Musk, what I found was he really has gotten so much money, it's become so disproportionate, that he doesn't view himself as someone who needs to participate in the social contract anymore. You look at the Rockefellers, you look at the Carnegies; there was a previous era of bad guys, you know, in the Gilded Age. And I don't exonerate them either. But they thought they had to, like, build stuff for people.
And I think we're really entering an era where the gulf, the inequality, has become so extreme, where these institutions and individuals that I have been reporting on just don't see the value of engaging or giving back anymore. You look at Mark Zuckerberg's charitable contributions: you know, when it was convenient and in vogue, he was contributing to charity. It's all getting pulled back now. You look at the Giving Pledge, where Bill Gates and others, you know, were getting together and saying, we're going to give above a certain threshold of our net worth. That is now ridiculed in many circles in Silicon Valley.
B
What strikes me a lot about these billionaires is, if you get on Elon Musk's Twitter feed, or if you listen to Mark Zuckerberg, who said, you know, when Biden was president, I felt like I was neutered. He said that on Joe Rogan. Like, an 82-year-old man neutered you? And you just said that on video. That's weird. They seem like they're just such victims. And my thing is, and pardon my language here, I think everybody knows I cuss by now, they have all this money, and then they're just in this constant state of victimhood. They feel like they're so oppressed. And I don't think there's ever been a clearer advertisement against the grotesque accumulation of wealth than this slate of billionaires. And I wanted to ask you, because I've noticed that Elon Musk and Sam Altman, like, hate each other. Do you have any insight into their billionaire beef, and into the larger psychology of why these people feel so oppressed when they have everything and a president that will do anything on the planet for them?
C
This goes back to our conversation about the story mattering in more ways than just with respect to OpenAI: that whoever has their finger on the button, as one person puts it in this piece, in this industry, they are fallible, and some are more fallible than others. But even, you know, I mentioned Anthropic, this competing firm that's supposedly still the number one safety guys, or the only ones with some vestige of that priority left in AI, and they've had, you know, crazy leaks in recent weeks. So the question is, can we trust any of these guys without an outside framework of oversight? And one of the things that reinforces the importance of that question for me is that when I descended into this reporting, I got such a full-face blast of the mud fight between these individuals, and the ways in which, as their fingers hover over the button that really may reshape all of our employment, all of our safety, our economy, everything, potentially, by their own pitch, right, that is what they are conveying, they are at each other's throats like children. Yes. And I document in the piece that, you know, Elon Musk, who was a co-founder of OpenAI and is now, in a gigantic lawsuit, alleging that he was scammed because of this transformation of OpenAI from a nonprofit to a for-profit. You know, I won't get into the merits of the case, except to say that there is this blood feud that you allude to between a lot of these guys, by the way, and this is a great example of it, where, as we're entrusting them with such a serious thing, they're spending all of this time and all of these resources trying to murder each other reputationally. In Silicon Valley there are, you know, allegations about Sam Altman and his personal life that are circulated so widely, they're discussed like it's just common knowledge. Specifically, like, claims that he pursues underage boys, which simply appear to be untrue.
I spent months calling around to everyone. I looked at all of the opposition research dossiers, and I couldn't find that. And the unfortunate thing is that that stuff obscures the very real and very evidence-based stuff. And it is extreme. I mean, people are spending money on, like, chasing each other around with private investigators while our future is in their hands.
B
That is crazy. Okay, one little detail that I loved, because I kind of love salacious details, is that you wrote that Altman met his husband in Peter Thiel's hot tub. And I thought that was a rather juicy tidbit. Because I come from Oklahoma City, I lived there for 51 years, I often am puzzled how women carry water for the patriarchy, or how gay men can carry water for the patriarchy, Latinos for Trump, etc., because I know that these people are going to be the first to drown. Can you speak to the hot tub meeting? Were all of these guys asshole buddies at one point, and now they're all frenemies or enemies? And then, what do you make of Peter Thiel and his connection with Jeffrey Epstein, which I found interesting in the drops, celebrating, like, the end of Brexit?
C
Well, on the Thiel-Epstein connections: I have worked in the past on reporting about Epstein and institutions taking money from Epstein and, in some cases, hiding that. I am still looking at some Epstein stuff. Without getting into details, because I can never talk about reporting before it's fully baked and getting out there, I'm not going to get into specifics on the Epstein stuff. I think the Peter Thiel anecdote just really underscores how powerful these guys are, and the fact that someone like Thiel really has tendrils into everything. And you wind up in a situation where a lot of the moguls in Silicon Valley have a shared playbook, a shared ideology, even if many of them would say, well, I don't buy into all of this crazy stuff Peter Thiel is saying. You know, Peter Thiel is saying Greta Thunberg is the Antichrist and the Antichrist is coming. And so a lot of these guys who are... I don't want to describe the community monolithically, because Sam is a different level of reasonableness than Elon, and it's all very different than Peter Thiel. But the influence of each of them is so complete, and the financial incentives and the broader economic structures and the lack of oversight so readily facilitate the kind of power mania that Thiel exhibits, and naturally lead to an endpoint that is in some ways anti-democratic. That you do see across the whole community: there is just less need for these guys to be accountable, and these companies to be accountable. And I think that's why this moment, of AI becoming so important to all of our future, makes it a moment where we need a reckoning with that. That is a big structural problem, and with AI, it's a problem that might well kill all of us, you know, if you trust these guys who founded OpenAI.
B
Right. It feels to me like, you know, we've all read about big tobacco. We've read about, you know, big oil, the Sackler family. I mean, this seems like it has all of those components, and we're on this side of it, where we
C
can be preventative. There are suits against OpenAI right now, you know, all of these lawsuits alleging that ChatGPT has facilitated people's psychosis, people with mental illness, and been a catalyst for suicides, and even a murder in one case. So what you say is exactly right. It does feel like a big tobacco moment.
B
Yeah. Ronan Farrow, this has been so great. Thank you so much for all of your journalism. Thank you for everything you do, shining a light and helping to right wrongs, and in this case, helping to get in front of a lot of wrongs that could potentially happen. Thank you so much.
C
Thank you to you, and thank you to everyone listening. And look, if you care about accountability journalism, do try to subscribe and support journalists doing this sort of thing.
B
Yes. And go follow Ronan and subscribe. And this is in The New Yorker, correct?
C
In The New Yorker. You can read it at newyorker.com.
B
I think the most democratic, patriotic thing we can do right now is to support journalists and journalism. Thank you so much.
C
Thank you for that.
Hosts: Jennifer Welch & Angie “Pumps” Sullivan
Guest: Ronan Farrow (Contributing writer for The New Yorker)
Date: April 9, 2026
In this hard-hitting, witty episode, Jennifer and Angie interview investigative journalist Ronan Farrow about his explosive new New Yorker piece, "Sam Altman May Control Our Future. Can He Be Trusted?" The conversation dissects the immense risks and unchecked power of the AI industry and its leading figure, Sam Altman, delving into whistleblower claims, billionaire feuds, and the collapse of regulatory guardrails. The episode delivers both urgent warnings and sharp laughs while probing the deeper sociopolitical consequences of Silicon Valley's tech oligarchs on democracy and society.
[01:23-03:24]
"Already right now, we are seeing an environment where AI is powering weapons in war zones... Even the sunniest projections from economists hold that ... millions of jobs are going to be exposed to disruption and maybe elimination from this." (Ronan Farrow, 02:16)
[03:24-07:54]
"Sam Altman himself has said this could be lights out for all of us. That's his words, if it goes wrong." (Ronan Farrow, 05:09)
[07:54-13:38]
"The moment Trump came in, all of the regulators went away, and all of the money from the Middle east could flow freely… That hands over ... the equivalent of nuclear weapons ... to autocrats." (Ronan Farrow, 10:40)
[13:38-15:15]
"Peter Thiel is openly espousing ... that democracy has failed and ... a monarch, basically a CEO like character, has much more absolute power." (Ronan Farrow, 15:18)
[18:06-21:38]
"They are at each other's throats like children... they're spending all of this time and all of these resources trying to murder each other. Reputationally." (Ronan Farrow, 20:06)
[21:38-24:35]
[24:35-25:14]
"It does feel like a big tobacco moment." (Ronan Farrow, 25:14)
On the power shift:
"This is really a situation where capitalism won out...He [Altman] came back, and I document parts of that comeback that are still raising real questions." (Ronan Farrow, 06:52)
On media consolidation:
"As we were closing this piece...they announced they were acquiring TBPN, this tech podcast. So it's just a little microcosm for how the whole media landscape is now being gobbled up by these tycoons." (Ronan Farrow, 12:13)
On elite Silicon Valley dynamics:
"You know, Sam Altman is not, I think, an extremist politically. I think the thing we talked about earlier, the dynamic in which he is opportunistic in this respect and he sees...an opening where the Trump administration can...do his bidding." (Ronan Farrow, 16:18)
On billionaire victimhood:
"They feel like they're so oppressed. And I don't think there's ever been a clearer advertisement against the grotesque accumulation of wealth than this slate of billionaires." (Jennifer Welch, 18:25)
On structural risks:
"The influence of each of them is so complete and the financial incentives and the broader economic structures and the lack of oversight so readily facilitate the kind of power mania that Thiel exhibits and naturally leads to an endpoint that is in some ways anti democratic." (Ronan Farrow, 23:17)
The episode delivers a spirited, pointed critique of tech oligarch power, punctuated by Farrow’s factual rigor and the hosts’ grassroots energy. If you care about technology, democracy, or the future of work, this is an urgent and entertaining listen.
Support accountability journalism. Read Ronan Farrow’s piece at NewYorker.com.