
The head of OpenAI has a reputation for deception. The New Yorker’s Ronan Farrow explains why that matters.
Host (Advertisement Narrator)
Support for Decoder comes from Adobe. Life is unpredictable, and that means you need your projects to adapt to whatever gets thrown at you. That means mastering the ability to pivot and collaborate with others to reach your goals. Adobe gets that, which is why they made a tool that's just as flexible as you are: PDF Spaces in Acrobat. Your PDF files are no longer static. Instead, they're living documents that flex with you and your project's needs. Learn more at adobe.com. Do that with Acrobat.
Ronan Farrow
Recommendations can be great. Maybe someone recommended this podcast, and here you are. But home projects are a little different. If the podcast isn't your thing, you might lose a few minutes from your day. But if you hire your cousin's neighbor to mount your TV, you might end up with a lopsided screen and wall damage. "I know a guy" isn't a good strategy for your home. That's why Thumbtack works so well. It matches you with top-rated local pros, with photos, reviews, and credentials all in one convenient place. For your next home project, try Thumbtack. Hire the right pro today.
Host (Advertisement Narrator)
Support for the show comes from MongoDB. If you're a developer stuck fixing bottlenecks instead of building the next big thing, then you need MongoDB. MongoDB is the flexible, unified platform that gets out of your way. It's ACID compliant, enterprise ready, and built to ship AI apps fast. It's trusted by so many of the Fortune 500 for a reason. Ask any developer. It's a great freaking database. Start building at mongodb.com/build.
Nilay Patel
Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of the Verge, and Decoder is my show about big ideas and other problems. Today I'm talking with Ronan Farrow, one of the biggest stars of investigative reporting working today. Ronan broke the Harvey Weinstein story, among many, many others. And just last week, he and co-author Andrew Marantz published an incredible deep dive feature in the New Yorker about OpenAI CEO Sam Altman, his trustworthiness, and the rise of OpenAI itself. One note before we go any further here. The New Yorker published that story, and Ronan and I had this conversation, before the attacks on Sam Altman's home. So you won't hear us talk about them directly. But just to say it, I think violence of any kind is unacceptable. These attacks on Sam were unacceptable, and the kind of helplessness that people feel which leads to this kind of violence is also unacceptable. And it's worth more scrutiny from both the industry and our political leaders. I hope that's clear. All that said, there is a lot of swirl around Sam Altman that's fair game for rigorous reporting, the kind of reporting that Ronan set out to do. Thanks to the popularity of ChatGPT, Altman has emerged as the most visible figurehead of the AI industry, having turned a once nonprofit research lab into an almost trillion-dollar private company in just a few years. But the myth of Sam Altman is deeply conflicted, defined equally by both his obvious dealmaking ability and the tendency, which Ronan reported, to, well, lie to everyone around him. Ronan and Andrew's story is over 17,000 words long, and it contains arguably the definitive account of what happened in 2023, when the OpenAI board of directors very suddenly fired Altman over his alleged lying, only for him to be almost immediately rehired. 
The story is also a deep dive into Altman's personal life, his investments, his courting of Middle Eastern money, and his own reflections on his past behavior and character traits that led one source to say that he was unconstrained by truth. I really suggest you read the entire story. I suspect it will be referenced for many years to come. Ronan talked to Altman many times over the 18 months he spent reporting the story, and so one of the main things I was curious about was whether Ronan sensed any change in Altman over that time. After all, a lot has happened in AI, in tech, and in the world over the past year and a half. You'll hear Ronan talk about that very directly, as well as his sense that people have become much more willing to talk about Altman's ability to stretch the truth. People are starting to wonder out loud and on the record whether the behavior of people like Sam Altman is concerning not just for AI or tech, but for society's collective future. Before we start, a quick reminder that you can listen to this episode or any episode of Decoder completely ad-free by subscribing to the Verge. Just go to theverge.com/subscribe. Okay: Ronan Farrow on Sam Altman, AI, and the truth. Here we go. Ronan Farrow, you're an investigative reporter and contributor to the New Yorker. Welcome to Decoder.
Ronan Farrow
Glad to be here. Thanks for having me.
Nilay Patel
I am very excited to talk to you. You just wrote a big piece for the New Yorker. It's a profile of Sam Altman, and sort of, with it, OpenAI. My read of it is that, as all great features do, it validates with rigorous reporting a lot of feelings people have had about Sam Altman for a very long time. You've obviously published it, you've gotten reactions to it. How are you feeling about it right now?
Ronan Farrow
I've been heartened, actually, by the extent to which it's broken through, in a time where the attention economy is so kind of schizophrenic and shallow. This is a story that, in my view, affects all of us. And when I spend a year and a half of my life, and my co-author, Andrew Marantz, also spent that time of his, really trying to do something forensic and meticulous, it's always because I feel like there are bigger structural issues that affect people beyond the individual at the heart of the story, beyond the company at the heart of the story. Sam Altman, against the backdrop of Silicon Valley hype culture, and startups that balloon to massive valuations based on promises that may or may not come to pass in the future, and an increasing embrace of a founder culture that thinks of telling different groups different, conflicting things as a feature, not a bug. Even against that backdrop, Sam Altman is an extraordinary case, where everyone in Silicon Valley who expects those things can't stop talking about this question of his trustworthiness and his honesty. And, you know, we knew already that he had been fired over some version of allegations of dishonesty, or serial alleged lying. But extraordinarily, despite the fact that there's been wonderful reporting, you know, Keach Hagey has done great work on this, Karen Hao has done great work on this, there really wasn't definitive understanding of the actual alleged proof points, and the reasons why those have stayed out of the public eye. Point number one is I feel heartened by the fact that some of those gaps in our public knowledge, and even in the knowledge of Silicon Valley insiders, have now been filled a little bit more, and some of the reasons that they were gaps have been filled in a little bit more, too. You know, we report on cases where people inside this company really felt like things were covered up or deliberately not documented. 
One of the new things in this story is that a pivotal law firm investigation by WilmerHale, which is obviously a fancy, credible big law firm that did investigations of Enron and WorldCom, which, by the way, were all voluminous, like hundreds of pages published, they did this investigation that was demanded by the board members who had fired Altman as a condition of their departure, when he got rid of them and came back. And extraordinarily, in the eyes of many legal experts I spoke to, and shockingly, in the eyes of many people in this company, they kept it out of writing. All that ever emerged from that was an 800-word press release from OpenAI that described what happened as a breakdown in trust. And we confirm that this was kept to oral briefings. There's cases we talk about where, for instance, a board member seemingly wants to vote against the conversion from OpenAI's original nonprofit form into a for-profit entity, and there's, like, a lawyer in the meeting saying, well, that could trigger too much scrutiny, and the person who wants to vote against it gets recorded as an abstention. To all appearances, you know, there's factual dispute; OpenAI claims otherwise, as you might imagine. But these are all cases, essentially, where you have a company that by their own account holds our future in their hands, right? The safety stakes are so acute, they have not gone away, this is the reason this company was founded as a nonprofit focused on safety, and things were being obscured in a way that, like, credible people around this found less than professional. And you couple that with a backdrop where there's so little political appetite for meaningful regulation, and I think it's a very combustible situation. 
The point for me is not just that Sam Altman, you know, deserves these questions so acutely, it's that any of these guys in this field, and many of the key figures, exhibit, if not this particular idiosyncratic alleged lying-all-the-time trait, certainly, like, some degree of a race-to-the-bottom mentality, right, where the people who were safetyists have watered down those commitments and everyone is in a race posture. I think the point, as we look at, like, even recent leaks out of Anthropic, is, there's a person in this piece who poses the question of who should have their finger on the button. The answer is, if we don't have meaningful oversight, I think we've got to be asking serious questions and trying to surface as much information as we can about all of these guys. So I've been heartened by what feels like a meaningful conversation about that, or the beginnings of one.
Nilay Patel
The reason I asked it that way is you worked on this for a year and a half. You talked to, I believe, 100 people with your co-author, Andrew. That's a long time for a story to cook, and I think about the last year and a half in AI in particular. And boy, have the attitudes and values of all these characters shifted very quickly, maybe none more so than Sam Altman, who started off as the default winner because they had released ChatGPT and everyone thought that would just take over for Google. And then Google responded, which seemed to surprise them, that Google would try to protect its business, maybe one of the best businesses in tech history, if not business history. Anthropic decided that it would focus on the enterprise, and it seems to be taking a commanding lead there because the enterprise uses of AI are so high. And OpenAI's product, they're now refocusing away from "we're going to take on Google" to Codex, and they're going to take on the enterprise. And I just can't quite tell, over the course of reporting over the last year and a half, did it feel like the characters you were talking to changed? Like their attitudes and their values, did those change?
Ronan Farrow
Yes. I think, first of all, that the critique that is explored in this piece, coming from many people inside these companies at this point, that this is an industry that, despite the existential stakes, is descending into something of a race to the bottom on safety, and where speed is trumping everything else, that concern has grown more acute. And, like, I think those concerns have been more validated as the past year and a half has transpired. Simultaneously, attitudes about Sam Altman specifically have changed. You know, when we started talking to sources for this, people really, really were leery of being quoted about this, going on the record about this. And then by the end of the reporting, you know, you have a body of reporting where people are talking about this very openly and explicitly, and you have, like, you know, board members saying, like, he's a pathological liar, he's a sociopath. A range of perspectives, from, this is dangerous given the safety stakes, that we need leaders of this tech that have elevated integrity, all the way up to, like, forget the safety stakes, this is behavior that is untenable for any executive of any major company, that it just creates too much dysfunction. So the conversation has become much more explicit, in a way that feels maybe belated, but is heartening in one sense. And Sam Altman, to his credit, the piece is very fair and even generous to Sam, I would say. You know, this is not the kind of piece where there was a lot of gotcha stuff. Like, I spent many, many hours on the phone with him as we were finishing this up and really heard him out. And as you can imagine, in a piece like this, not everything makes it in. And some of those cases in this one were because, like, I was listening sincerely, and if Sam was actually making an argument that I felt held water, that something, even if it was true, could be sensationalist, you know, I really erred on the side of keeping this, like, forensic and measured. 
I think that is being received rightly. And I just hope this factual record now that's accumulated over this period of time can trigger a more bracing conversation about the need for oversight.
Nilay Patel
That's actually my next question. I think you talked to Sam a dozen times over the course of reporting the story. Again, that's a lot of conversations over a long period of time. Did you think Sam changed over the course of the reporting over the past year and a half?
Ronan Farrow
Yeah, I think one of the most interesting subplots in this is that Sam Altman is also talking about this trait more explicitly than he has in the past. The posture of Sam in this piece is not, like, there's nothing there, you know, this is not true, I don't know what you're talking about. The posture he has is, you know, he says that this is attributable to a people-pleasing tendency and a kind of conflict aversion. He's acknowledging that it caused problems for him, particularly earlier in his career. He is saying, well, I've kind of, I am moving past that, or have to some extent moved past that. I think what's really interesting to me is the contingent of people we talked to who were not just sort of safety advocates, not just the underlying technical researchers, who very often tend to have these acute safety concerns, but, like, pragmatic big-time investors who are backers of Sam's, who in some cases look at this question, and talk about even having played a key role in him coming back after his firing, and now say, on this question of, like, has he reformed, to what extent is that change meaningful: well, we gave him the benefit of the doubt at the time. And I'm thinking of, you know, one prominent investor in particular who said, but since then, like, it seems clear he wasn't taken out behind the woodshed, that was the phrase that this one used, to the extent that was necessary. And as a result, it seems like this is now, like, a stable trait, like, we're seeing this in an ongoing way. And you can look at some of OpenAI's biggest business relationships and the way they kind of carry the weight of that mistrust in an ongoing way. Like Microsoft, you talk to executives over there, there's really acute and, like, recently catalyzed concerns. 
There's this instance where in the same day OpenAI is reaffirming their exclusivity with Microsoft with respect to underlying stateless AI models and then announcing a new deal with Amazon that's to do with selling enterprise solutions for building AI agents that are stateful, meaning they have memory. And you talk to Microsoft people and they're like, that's not possible to do without interacting with the underlying stuff that we have an exclusivity deal on. So that's just like one of many small examples where this trait has tendrils into ongoing business activity all the time and is a subject of active concern within OpenAI's board, within OpenAI's executive suite, and in the wider tech community.
Nilay Patel
You keep saying that trait. There's a line in the story that, to me, feels like the thesis of the story, and it's a description of the trait you're describing. It's that Sam Altman is unconstrained by the truth, and that he has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. And the second is an almost sociopathic lack of concern for the consequences that may come from deceiving someone. I have to tell you, I read that sentence 500 times, and I tried to imagine always saying what people wanted to hear in order to be liked, and then not being upset when they felt lied to. And I couldn't. I could not make, like, my emotional state understand how those things can exist in the same person. You've talked to Sam a lot, you've talked to people that have experienced these traits. How does he do it?
Ronan Farrow
Yeah, you know, it's interesting on a human level, because I do approach bodies of reporting like this with a real focus on humanizing whoever's at the heart of it and, like, seeking deep understanding, right, and empathy. When I kind of tried to approach this from a more human standpoint and say, hey, like, this would be devastating for me if so many people that I've worked with said I'm a pathological liar. How do you carry that weight? Like, do you talk about that in therapy? What is the story you tell yourself about that? You know, I got some sort of, in my view, maybe, like, West Coast platitudes about, like, yeah, I like breathwork, but not a lot of the kind of bracing sense of deep self-confrontation that I think a lot of us would probably have if we were seeing this kind of feedback about our behavior and our treatment of people. And I think that that actually goes to the broader answer to the question, too. Sam asserts basically that this trait has caused problems, but also that it's part of what's empowered him to accelerate OpenAI's growth so much, that he is able to unite and please, essentially, different groups of people. He's constantly convincing all of these conflicting constituencies that what they care about is what he cares about. And that can be a really useful skill for a founder. You know, I've talked to investors who then say, well, maybe it's a less useful skill for actually running a company, because it sows so much discord. But on the same personal side, you know, I think the thing that I pick up on when I try to connect on a human level on this, the apparent lack of deeper confrontation and reflection and self-accountability, also informs that, whether you want to call it a superpower or a liability for a company preparing for an IPO. He is someone who, in the words of one former board member, there's this former board member named Su Yoon who's on the record in the piece, "to the point of fecklessness" is the phrase she uses, really believes the shifting reality of his sales pitches, or is able to convince himself of them, or at least, if he doesn't believe them, is able to, you know, bluster through them without, like, meaningful self-doubt. I think the thing that you're talking about, where you or I might, as we're saying the thing and realizing that it conflicts with the other assurance we've made, kind of have a moment of freezing up or checking ourselves, I think that that doesn't happen with him. And, you know, there's a wider Silicon Valley hype culture and, like, founder culture that kind of embraces that.
Nilay Patel
We need to take a quick break. We'll be right back.
Host (Advertisement Narrator)
Support for this show comes from Doppel. Maybe that ping you just got is an urgent message from your CEO. Or maybe it's a deepfake trying to target your business. Doppel is the AI-native social engineering defense platform fighting back against impersonation and manipulation. As attackers turn to AI to power increasingly sophisticated strikes, Doppel uses it to fight back. Their digital risk management dismantles attacker infrastructure, while human risk management builds team resilience through simulation and training. With automated takedowns, multichannel coverage, and AI defenses that build intelligence with every fight, Doppel works relentlessly to protect people, brands, and trust. Doppel: outpacing what's next in social engineering. Learn more at doppel.com. That's D-O-P-P-E-L.com.
Support for Decoder comes from Adobe. For every big idea, your documents folder tells a story. Let's say you've just finished pulling together a brief, so you hit export on "final version.pdf." But then you open the file and you immediately notice a typo. Several versions later, you're exporting "final v4 actual final draft." Adobe Acrobat can save you the digital clutter with PDF Spaces. It takes your documents and turns them into a living project that you can engage with, get insights from, and collaborate with others on. You can gather all your files into one workspace, have a whole conversation with your AI assistant about it, and ask questions to get deep insights about your project. You can even invite people to your PDF space and let them add files, comments, notes, and more. You could doodle in the margins, or even turn your project into your own personal podcast episode. Acrobat lets you generate an audio overview of your project in just one click. Learn more at adobe.com. Do that with Acrobat.
Support for today's show comes from CNN. Do you want to live forever? Influential journalist Kara Swisher is taking a hard look at the longevity industry to separate the influencer hype from evidence-backed science. In her new CNN Original series, Kara's talking to Silicon Valley power players and trying out the latest in anti-aging technology to see what works and what's a waste. Kara Swisher Wants to Live Forever, new series now streaming with a CNN subscription. Go to CNN.com/subscribe to get started and save 40% for a limited time. Terms apply.
Host (Advertisement Narrator)
Study and play come together on a Windows 11 PC, and for a limited time, college students get the best of both worlds. Get the Unreal College Deal: everything you need to study and play with select Windows 11 PCs. Eligible students get a year of Microsoft 365 Premium and a year of Xbox Game Pass Ultimate with a custom color Xbox wireless controller. Learn more at windows.com/student. Offer while supplies last, ends June 30th. Terms at aka.ms/CollegePC.
Nilay Patel
We're back with investigative reporter Ronan Farrow discussing his New Yorker feature on Sam Altman. You know, it's funny, the Verge is built on what amounts to a product reviews program. Like, it's the heart of what we do here: I hold a trillion dollars of Apple R&D once a year and say this phone is a 7, and it sort of legitimizes all of our reporting and our opinions elsewhere, right? We have an evaluative function, and we spend so much time just looking at the AI products and saying, do they work? And that feels missing from a lot of the conversation about AI as it is today. There's endless conversation about what it might be able to do, how dangerous it might be. And then you drill down, you say, does it actually do the thing it's supposed to do today? And in some cases the answer is yes, but in many, many cases the answer is no. And that feels like it connects to the hype culture you're describing, and also to the sense that, well, if you say it's going to do something and it doesn't and someone feels bad, that's fine, because we're on to the next thing. Like, that's in the past. And in AI in particular, Sam is so good at making the grand promises, right? Just this week, I think the same day as your story published, OpenAI released a policy document that said we have to rethink the social contract and have AI efficiency stipends from the government. And this is a grand promise about how some technology might shape the future of the world and how we live. And all of that relies on the technology working in exactly the way that maybe it's promised to work or it should work. Did you ever find Sam doubting AI turning into AGI or superintelligence or getting to the finish line? Because that's the thing that I wonder about the most. Is there any reflection about whether this core technology can do all of the things that they say it can do?
Ronan Farrow
It's exactly the right set of questions. There are credible technologists that we spoke to in this body of reporting (and obviously Sam Altman is not one, right, he's a business person) who say the way that Sam talks about the timeline for this tech is just way off. You know, there's blog posts going back a few years where Sam is saying we've already reached the event horizon, AGI is, like, basically here, superintelligence is around the corner, we're going to be on other planets, we're going to be curing all forms of cancer. Like, truly, I'm not, you know, embellishing.
Nilay Patel
The cancer one is actually interesting, that Sam is hyping up the person who theoretically cured their dog's cancer with ChatGPT. And that simply did not happen. They talked to ChatGPT, and that helped them guide some researchers that actually did the work. But the one-to-one, this tool cured this dog, is not actually the story.
Ronan Farrow
I'm glad you raised that, because I want to go on to this bigger point about when both the potential and the risk of the technology are really going to vest. But it's worth mentioning these little asides that constantly happen from Sam Altman, where he seems to embody this trait all over again. To use the example of the WilmerHale report, where we had this information that it had been kept out of writing, and wanted to know, you know, was the brief, the oral brief, along the way given to anyone other than the two board members Sam helped install to oversee it? And he said, like, yeah, yeah, no, I believe it was given to everyone who joined the board after. And we have, you know, a person with direct knowledge of that saying that is simply a lie. And, like, that really does appear to be the case, that that is untrue, you know, and if we want to be generous, perhaps he was misinformed. So there's a lot of these casual assurances. And I use that example in part because that's a great example of dissembling, let's call it, that can have real consequences legally. You know, I don't need to tell you, like, under Delaware corporate law, if this company IPOs, shareholders could, under Section 220, like, complain about this and demand underlying documentation. There's already board members saying, like, well, wait a minute, that briefing should have happened. So these things that seem to jump out of his mouth all the time, they can have, like, real market-moving effects, real effects for this company. And, bringing it back to the kind of utopian hype language that's resurfaced, I think, not coincidentally, on the day this piece came out. 
Also effects for all of us, because the dangers are so acute, you know, with respect to the way it's being deployed in weaponry, with respect to the way it's being used to identify chemical warfare agents, the disinformation potential. You know, the utopian hype does seem to be prompting a lot of credible economists to say this has all the signs of a bubble. Even Sam Altman has said, you know, someone's going to lose a lot of money here. That could really, like, crater a lot of American and global economic growth, if there's, like, a true puncturing of a bubble involving all of these companies doing deals with each other, going all in on AI while borrowing so heavily. So what Sam Altman says matters. And I think the preponderance of people around him, you know, you mentioned we talked to more than 100, it was actually well over 100. We had a conversation at the finish line where it's like, would it be too petty to say it's, like, this much higher number? And we were like, yeah, let's downplay it. We'll play it cool. But it was so many people, and such a significant majority of them saying this is a concern, and I think that's why it all matters.
Nilay Patel
Let me ask you about that number. And as you mentioned, people got more and more open with the concerns as time went on. It feels like the pressure around the bubble, the race to win, to pay off all this investment, to emerge as the winner, to IPO, that has changed a lot of attitudes. It certainly created more pressure on Sam and OpenAI. We published a story this week just about the vibes at OpenAI. Your story is part of it. But massive staffing changes in the executive ranks at OpenAI. People are coming and going. The researchers are all headed away, largely to Anthropic, which I think is really interesting. You can just see this company is feeling the pressure, and it is responding to that pressure in some way. But then I think back to Sam getting fired, and, very memorable, this is just memorable for me, it's memorable for no one else, but I took a source call at the Bronx Zoo at 7PM on a Friday, and it was someone saying they're going to try to get Sam back. And then we spent the weekend chasing that story down. And I was just like, I'm at the zoo, like, what do you want me to do here? And the answer was, stay on the phone. Well, my daughter was like, get off the phone. And, like, that's what I did. It was ride or die to get Sam back. That company was like, no, we're not letting the board fire Sam Altman. The investors, they're quoted in your piece, "we went to war," I think, is the Thrive Capital position, to get Sam back. Microsoft went to war to get Sam back. Now it's later, and everyone's like, we're going to IPO, we got to the finish line, we got our guy back and he's going to get us to the finish line, we're concerned he's a liar. Why was it war to get him back then? Because it doesn't seem like anything has actually changed, right? You talk about the memos that Ilya Sutskever and Dario Amodei kept while they were contemporaries of Sam Altman. Ilya's number one concern was Sam is a liar. None of that has changed. 
So why was it war to bring him back then? And now that we're at the finish line, it seems like all the concerns are out in the open.
Ronan Farrow
Well, first of all, sorry to your daughter, and my partner, and all the other people around journalists.
Nilay Patel
It's quite a weekend for everyone.
Ronan Farrow
Yeah. No, it does take over one's life, and this story definitely has mine over the last period of time. It actually relates to this theme of journalism and access to information, I think. You know, the investors that went to war for Sam and all played roles in making sure he came back, and the board that had been specifically designed to protect a nonprofit's mission, to put safety over growth, and to fire an executive if they couldn't be trusted with that, them going away, that was all because, yes, the market incentives were there, right? You know, Sam was able to convince people, well, the company's just going to fall apart. But the reason he had support was lack of information. Those investors, in many cases, now say, you know, I look back and I think I should have had more concerns if I had known fully what the claims were and what the concerns were. Not all of them, opinions vary, and we quote a range of opinions, but there are significant ones who were acting on very partial info. The board that fired Sam was, in the words of one person who used to be on the board, you know, very JV, and they fumbled the ball hard. And we document the underlying complaints, and people can decide for themselves whether it accumulates into the kind of urgent concern they felt it was. But that argument and that information was not being presented. They received what some of them now acknowledge as bad legal advice about how to describe it. You'll remember the quote, probably a lot of your listeners and viewers will remember the quote, you know, "a lack of candor" was what it was reduced to. And then they, like, essentially wouldn't take calls.
Nilay Patel
They would not take calls. I'm sure you tried. Everyone I know tried. And it got to the point where, you know, as a journalist, you're not supposed to give your sources advice, but I was like, this won't go away if you don't start explaining yourself.
Ronan Farrow
And that's what happened. And so you had, forget journalists, Satya Nadella saying, what the hell happened? I can't get anyone to explain it to me. And that's the company's major financial backer. And then you have Satya calling Reid Hoffman, and Reid calling around and saying, I don't know what the fuck happened. And they're, understandably, in that void of information, looking for the traditional non-AI indicators that would justify such an urgent, sudden firing. Like, okay, was it sex crimes? Was it embezzlement? And the entire subtle but, I think, meaningful argument that this tech is different, and that this kind of steady accumulation of smaller betrayals actually could have meaningful stakes, both for this business and maybe for the world, that was really largely lost. And so capitalist incentives won out. But also, the people who made it win out were not always operating with complete information.
Nilay Patel
I want to just ask about the what-everyone-thought-it-was aspect for one moment, because I certainly saw the news and I said, oh, something bad must have happened. You've done a lot of MeToo reporting. Famously, you broke the Harvey Weinstein story. You spent a lot of time reporting on these claims, which I think you decided were ultimately unfounded, that Altman had sexually assaulted minors or hired sex workers or even murdered an OpenAI whistleblower. You are the person who can report this stuff the most rigorously. Did you decide that it all came to nothing?
Ronan Farrow
Well, look, I'm not in the business of saying something has come to nothing. What I can say is I spent months looking at these claims and did not find corroboration for them. And it was striking to me that these guys, these companies who have so much power over our futures, truly are spending a disproportionate amount of their time and resources on a childish mud fight. One executive describes it as Shakespearean. The amount of private-investigator money, the opposition dossiers being compiled, it's relentless. And the unfortunate thing is that the kind of salacious stuff gets parroted by Sam's competitors as just assumed fact, right? There's this allegation that he pursues underage boys. At many cocktail parties in Silicon Valley you hear this, and on the conference circuit I've heard it repeated by credible, prominent executives. Everybody knows this is a fact. I talk about where this comes from, the various vectors by which it's transmitted: Elon Musk and his associates seemingly pushing really hardcore dossiers that amount to nothing. They're vaporous when you actually start to look at the underlying claims. The sad thing is that it really obscures the more evidence-based critiques here that I think deserve urgent oversight and consideration.
Nilay Patel
The other theme that really comes through in the story is almost a sense of fear, that Sam has so many friends, he's invested in so many companies, from his previous role as CEO of Y Combinator to his personal investing, some of which is in direct conflict with his role as CEO of OpenAI, that there's silence around him. One thing really struck me: you describe Ilya Sutskever's memos, and they're just out there in Silicon Valley, and everyone calls them the Ilya memos, but there's even silence around that, right? They're passed around, but they're not discussed. Where do you think that comes from? Is it fear? Is it a desire to get angel investment? Where does that come from?
Ronan Farrow
I think it's a lot of cowardice, I'll be honest. I've reported on national security stories where the sources are whistleblowers who stand to lose everything and face prosecution, and they still do the right thing and talk about things to create accountability. I've worked on the sex-crimes-related stories that you mentioned, where sources are deeply traumatized and fear a very personal kind of retribution. In many cases around this beat, you're dealing with people with their own profile and power. Right? They're either famous people themselves or they're surrounded by famous people. They have robust business lives, and in my view it is actually very low exposure for them to talk about this stuff. And thankfully, the needle is moving, as we talked about earlier, and people are now talking more. But for such a long time, people really just shut up about it, because I think the Silicon Valley culture is just so ruthlessly self-interested and ruthlessly business- and growth-oriented. So I think this afflicts even some of the people who were involved in firing Sam. You saw in the days after, yes, one factor that led to him coming back and the firing of old board members was that he rallied investors who were confused to his cause. But another is that so many other people around it who had the concerns and voiced them urgently just folded like napkins and changed their tune the moment they saw the wind was blowing the other way, and they wanted in on the profit train. It's pretty dark, honestly, from my standpoint as a reporter.
Nilay Patel
Some of those people are Mira Murati, who, for I believe about 20 minutes, was the new CEO of OpenAI. She was then replaced; it was a very complicated dynamic, and obviously Sam came back. The other person is Ilya Sutskever, who was one of the votes to remove Sam. And then he changed his mind, or at least said he changed his mind, and then he left to start his own company. Do you know what made him change his mind? Was it just money?
Ronan Farrow
Well, to be clear, I'm not singling those two out. There are also other board members who were involved in the firing who fell very silent after. I think it's a wider collective problem. These are, in some cases, people who had the moral fiber to sound alarms and take radical action, and that is to be commended. That's how you ensure accountability. And that could have helped a lot of people who are affected by this technology. It could have helped an industry remain meaningfully more safety-focused. But dealing with whistleblowers a lot, and with people who try to prompt that accountability, you see that it also takes the fiber of sticking it out and standing by your convictions. And this industry is truly full of people who just do not stand by
Nilay Patel
their convictions, even though they think they're building a digital god that will somehow either eliminate all labor or create more labor, or something. Something will happen.
Ronan Farrow
Well, that's the thing. It's the culture of not standing by your convictions, and of all ethical concerns falling by the wayside the moment there's any heat or anything that could threaten your own standing in the business. That's maybe all well and good, to some extent, for business-as-usual companies that are making whatever kind of widget. But these are also the same people who are saying this could literally kill us all. And again, you don't have to go to the Terminator, Skynet extreme; there is a set of risks that are already materializing. It is real. They are right to warn about that. But you'd have to have someone else armchair-psychologize how those two things can live in the same people: they're sounding the urgent warnings, they're maybe putting a toe in and trying to do something, and then they're just folding and falling silent. And that is precisely why you can have these kinds of instances of things being kept out of writing and things being swept under the rug, and no one talking about it this openly for years after the fact.
Nilay Patel
We have to take another quick break. We'll be back in just a minute.
Nilay Patel
We're back with the New Yorker's Ronan Farrow, discussing how the AI industry is now colliding with the world of politics in unprecedented ways. The natural responsible party here would not be the CEOs of these companies; it would be governments. In the United States, maybe it's state governments, maybe it's the federal government. Certainly these companies all want to be global, and there are lots of global implications here. I watched OpenAI and Google and Anthropic all sort of goad the Biden administration into releasing an AI executive order. It was pretty toothless. In the end, it just said they had to talk about what their models were capable of and release some safety testing. And then they all kind of backed Trump, and Trump came in and wiped all that out and said, we have to be competitive, it's a free-for-all, go for it. At the same time, they're all trying to raise money from Middle Eastern countries that have lots of oil money and want to change their economies. Those are politicians. I feel like politicians should definitely understand when someone is talking out of both sides of their mouth, and that such a person is not going to be too upset if someone's disappointed in the end. But the politicians are getting taken for a ride too. Why do you think that is?
Ronan Farrow
This is really, I think, why the piece matters, in my view, and why it was worth spending all this time and detail on it. We are in an environment where the systems that, as you say, should be providing oversight are just hollowed out. And that's a post-Citizens United America, where the flow of money is so unfettered. And there's a particular concentration of that problem around AI, right, where there are these PACs that are proliferating and flooding money into quashing meaningful regulation at both a state and a federal level. You have Greg Brockman, Sam's second in command, directly contributing in a major way to a couple of those. And it leads to a situation where there really is capture of legislators and potential regulators. And that is a hard spiral to get out of. The sad thing is, I think that there are simple policy moves, some of which are being trialled elsewhere in the world, that would help with some of these accountability problems. You could have more mandatory pre-deployment safety testing, which is something that is already happening in Europe for frontier models. You could have more stringent written public-record requirements for the kinds of internal investigations where, in this case, we saw things being kept out of writing. You could have a more robust set of national security review mechanisms for the kinds of Middle Eastern infrastructure ambitions that Sam Altman was pushing. And as you say, he was kind of doing this bait and switch with the Biden administration, saying regulate us, regulate us, and helping them craft an executive order, and then, the moment Trump gets in, truly in the very first days, going no-holds-barred: let's accelerate, let's build a massive data center campus in Abu Dhabi. You could have, and this is a really simple one, whistleblower protections.
There is no federal statute protecting AI company employees who disclose the kinds of safety concerns being aired in this piece. We have cases where, for example, Jan Leike, who was a senior safety guy at the time, leading superalignment at the company, writes to the board, essentially whistleblower material, saying the company is going off the rails on its safety mission. Those are the kinds of people who should actually have an oversight body they can go to, and they should have explicit statutory protections of the kinds we see in other sectors. This is simple to replicate, a kind of Sarbanes-Oxley-style regime. I think that despite how acute the problem is of Silicon Valley assuming control of all the levers of power, and despite how hollowed out some of these institutions that might provide oversight and guardrails are, I still do believe in the basic math of democracy and of self-interested politicians. And there is more and more polling data emerging that a majority of Americans think the concerns or questions or risks of AI currently outweigh the benefits. And so I think the flood of money into AI, I'm sorry, into politics from AI, it's within all of our power to make that a question mark for politicians. When Americans go to vote, they should be scrutinizing whether the people they vote for, especially if they are uncritical and anti-regulation given all these concerns, are bankrolled by Big Tech special interests. So I think if people can read pieces like this and listen to podcasts like this and care enough to think critically about their decisions as voters, there is a real opportunity to generate a constituency in Washington of representatives who do keep an eye out and do force oversight.
Nilay Patel
That might be one of the most optimistic things I've ever heard anyone say about the current AI industry, and I appreciate it. I'm obsessed with the polling that you're talking about. There's a lot of it now, and it's all pretty consistent. It kind of looks like the more young people in particular are exposed to AI, the more distrustful and angry they are about it. That's the valence of all the polling. And I look at that and I think, well, yeah, smart politicians would just run against that. They would just say, we're going to hold Big Tech accountable. And then I think about the past 20 years of politicians saying they're going to hold Big Tech accountable, and I'm struggling to find even one moment of Big Tech actually being held accountable. And the only thing that makes me think this might be different is, well, you actually have to build the data centers, and you can vote against that, and you can petition against that, and you can protest against that. I think there's a politician who just had their house shot at because they voted for a data center. The tension is reaching what I would call a fever pitch. You've described the insularity of Silicon Valley, right? This is a closed ecosystem. It feels like they think they can run the world. They're putting a ton of money into politics, and they are running up against the reality that people don't love the products, which doesn't give them a lot of cover, right? The more people use the products, the more upset they are. And the politicians are beginning to see there are real consequences to supporting the tech industry over the people they represent. You talked to so many people. Do you think it is possible for the tech industry to learn the lesson that is right in front of them?
Ronan Farrow
You know, you say it feels like they think they can run the world without accountability. I don't even think that needs the "feels like" qualifier. I mean, you look at the language Peter Thiel is using. It's explicit, right? And of course that's an extreme example. And Sam Altman, though he is close with and informed by Thiel's ideology to some extent, is a very different kind of person, who might sound different and more measured, up to a point. But I do think the wider ideology that you get from Thiel is basically: we're done with democracy, we don't need it anymore, we have so much that we kind of just want to build our own little bunkers. We're not dealing with the Carnegies anymore, or the Rockefellers, who were bad guys, but who felt they needed to participate in a social contract and build things for people. There's a real nihilism that's set in, and I do think it's just been a mutually reinforcing spiral in recent American history of moguls and private companies acquiring supra-governmental power while democratic institutions that might hold them accountable are hollowed out. And I do not feel optimistic about the idea that those guys might just wake up one day and think, huh, actually, maybe we do need to participate in society and help build things for people. I mean, you look at the little microcosmic example of the Giving Pledge, where there was a moment when it was seemly to be charitable, and that moment has now passed and is even kind of ridiculed. That broader problem of lack of accountability, I think, can only be solved extrinsically. That has to be voters mobilizing and resurrecting the power of government oversight.
And you're exactly right to say that the main vector through which people could maybe achieve that is local. It's to do with where infrastructure is being built. And you mentioned some of the white-hot tension around this that's leading to violence and threats, and obviously nobody should be violent or threatening. But I'm also not here to make specific policy recommendations, other than to present that these are some of the policy steps that seem basic and are working elsewhere in the world, or that have worked in other sectors. I'm not here to say which of those should be executed and how. I do think something needs to happen, and it needs to be external, not just trusting these companies. Because right now we have a situation where the companies that are developing the tech, and are best equipped to understand the risks, and in fact are the ones warning us of the risks, are also the ones with nothing but incentive to go fast and ignore those risks. And you just don't have anything to counterbalance that. So whatever form reforms might take in terms of specifics, something has to run up against that. And I do still return to that optimism that the people still matter, by and large.
Nilay Patel
Let me just make the one tiny counterargument that I think I can articulate. The other thing that could happen outside of the ballot box is that the bubble pops, right? That not all these companies get to the finish line, that there isn't product-market fit for consumer AI applications. And again, I don't quite see it yet, but I'm a consumer tech reviewer, and maybe I just have higher standards than everybody else. There is product-market fit in the business world, right? Having a bunch of AI agents write a bunch of software seems to be a real market for these tools. You can read the arguments from these companies saying, we've solved coding, and that means we can solve anything: if we can make software, we can solve any problem. I think there are real limits to the things software can do. That's great in the business world, but software can't solve every problem in reality. And they've got to get there; they've got to finish the job. Maybe not everybody makes it to the finish line, and there is a crash, and this bubble pops, and maybe OpenAI or Anthropic or xAI, one of these companies, fails, and all this investment goes away. OpenAI is right on the cusp of an IPO. There are a lot of doubts about Sam as a leader. Do you think they're going to make it to the finish line?
Ronan Farrow
I'm not going to prognosticate, but I think you raise an important point, which is that market incentives do matter internally to Silicon Valley, and the precarity of the current, maybe, potentially, allegedly, bubble dynamics does stand to interrupt the, again, potentially, according to critics, race to the bottom on safety. I would also add to that: if you look at historical precedents where there's a similarly seemingly impenetrable set of market incentives and potentially deleterious effects for the public, there is impact litigation. And you see that as an area of concern lately. Sam Altman is out there this week endorsing legislation that would shield AI companies from some of the types of liability that OpenAI has been exposed to, right? Wrongful-death suits, for instance. Of course there's a desire to have that shield from liability. I think that the courts can still be a meaningful mechanism, and it'll be really interesting to see how these suits shape up. You already saw, for instance, the class-action suit, of which I and many, many other authors I know are members, against Anthropic for their use of books that were under copyright. If there are smart legal minds and plaintiffs who care, we have seen historically, in cases from Big Tobacco to Big Energy, that you can also get some guardrails and some incentives to slow down or be careful or protect people that way.
Nilay Patel
It does feel like the entire cost structure of the AI industry hangs on a very, very charitable interpretation of fair use. It doesn't come up enough. The cost structure of these companies could spiral out of control if they have to pay you and everyone else whose work they've taken, but it's inconvenient to think about, so we just don't think about it. Right next to that: all these products are now running at a loss. Currently, today, they're all running at a loss. They're burning more money than they can make. At some point, they have to flip the switch. Sam is a businessman, right? As you've mentioned several times, he's not a technologist, he's a business person. Do you think he's ready to flip the switch and say, we're going to make a dollar? Because when I ask whether OpenAI is going to make it, I mean they've got to make a dollar. And so far, Sam has made all of his dollars by asking other people for their money instead of having his companies make money.
Ronan Farrow
Well, that's a big lingering question, you know, for Silicon Valley, for investors, for the public. You see some statements and moves out of OpenAI that seem to evince a kind of panic about that: shutting down Sora, shutting down some ancillary projects, trying to zero in on the core product. But then, on the other hand, you still see at the same time tons of mission creep, right? Even, as a small example, and it's obviously not core to their business, the TBPN acquisition. Right as we were reaching the finish line and fact-checking, the company facing this kind of journalistic scrutiny acquires a platform where they can have more direct control over the conversation. I think there are a lot of investors who are concerned, based on the conversations I've had, that this problem of promising all things to all people also extends to this lack of focus in the core business model. And you're closer to the kind of prognosticating and watching the market than I am, probably. I'll leave you and the listeners to be the judge of whether they think OpenAI can flip the switch.
Nilay Patel
Well, I asked the question because you've got a quote in the piece from a senior Microsoft executive, and it says that Sam's legacy might end up more similar to Bernie Madoff or Sam Bankman-Fried than Steve Jobs. That is quite a comparison. What did you make of it?
Ronan Farrow
I think that's a paraphrase; the Steve Jobs part isn't in that quote. But yes, there is this extraordinary comparison. I think there's actually an interesting sobriety to it, because it's phrased as: I think there's a small but real chance that he winds up being an SBF- or Madoff-level scammer. Meaning, to my mind, not that Sam is being accused of those specific types of fraud or crimes, but that the degree of dissembling and deception from Sam may have a chance of ultimately being remembered at that scale. I think what's most striking about that quote, honestly, is that you call around at Microsoft and you don't get a, "that's crazy, we've never heard that." You get a lot of, "yep, a lot of people here think that." Which is remarkable. And I think it does go to these nuts-and-bolts business questions. There are investors who say, one told me, for instance: look, in light of the way in which this trait has persisted in the years after the firing, and this was also, I thought, an interesting sober thought, it's not necessarily that Sam should be at the absolute bottom of the list, the lowest of the low, in terms of the people that absolutely must not build this technology. For what it's worth, there are several people who say Elon Musk is that person. But this trait puts him maybe toward the bottom of the list of people that should build AGI, and beneath several other leading figures in this field. So I thought that was an interesting appraisal. And that's the kind of thinking, I think, that you get from the real pragmatists, who maybe aren't buying into the safety concerns as much. They're just growth-oriented, and they think that OpenAI now has a problem with Sam Altman.
Nilay Patel
The Microsoft piece of it is really interesting. That company thought they were on top of the world, that they had made this investment and were going to leapfrog everyone, especially and most importantly Google, back into consumers' good graces. And the level to which they feel burned by this adventure, and this is a very soberly run company, I don't think can be overstated. You mentioned the characters and the personality traits. I want to end here with a question from our listeners. I said on our other show, the Vergecast, that I was going to be talking to you, and I said, if you have questions for Ronan about this story, let me know. So we have one here that I think ties in neatly with what you're describing. I'm just going to read it to you: How do the justifications for the bad behavior and cutthroat actions of Altman and other AI leaders differ from the justifications Ronan has heard from other high-profile leaders in politics and media? Don't they all justify their actions by saying, this is how the world gets changed, and if I don't do this, someone else will?
Ronan Farrow
Yeah, there's a lot of that going around. I would say what is distinctive to AI is that the existential stakes being so uniquely high means the statements of risk are extreme. Right? You have Sam Altman saying this could be lights out for all of us. And also the kind of, critics might say, mania that the questioner is referring to is extreme. The thing that Sam accused Elon of on the record: that maybe he wants to save humanity, but only if it's him. The kind of ego component of wanting to win, which is a framing Sam uses all the time, that this is one for the history books, this could change everything. And therefore, even above and beyond the you-gotta-break-a-few-eggs mindset of most Silicon Valley enterprises, there is, in the minds of some figures leading AI, I think, a complete rationalization for any and all fallout. And forget breaking eggs. I think a lot of the underlying safety researchers would say they're potentially risking breaking the country, breaking the world, breaking millions of people whose jobs and safety hang in the balance. That's what's unique about it. And that's where I close, reflecting on this body of reporting, really believing this is about more than Sam Altman. This is about an industry that is unconstrained, and a spiraling problem of America being unable to constrain it.
Nilay Patel
Well, we had some optimism in there, but I think that's a good place to leave it.
Ronan Farrow
There's a lot of downbeat, of course.
Nilay Patel
That's every great story, really. The Musk-Altman trial is upcoming, and I think we're going to learn a lot more there. I suspect I will want to talk to you again. Ronan Farrow, thank you so much for being on Decoder.
Ronan Farrow
Thank you.
Nilay Patel
I'd like to thank Ronan Farrow for taking the time to join Decoder, and thank you for listening. I hope you enjoyed it. Let us know what you thought about this episode, or really anything else at all. Drop us a line; you can email us at decoder@theverge.com. We really do read all the emails. Or you can hit me up directly on Threads and Bluesky. We're also on YouTube; you can watch full episodes at @decoderpod. We also have a TikTok and an Instagram; they're @decoderpod as well. They're a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts. Decoder is a production of the Verge and part of the Vox Media Podcast Network. The show is produced by Kate Cox and Nick Statt, and it's edited by Ursa Wright. Our editorial director is Kevin McShane. The Decoder music is by Breakmaster Cylinder. We'll see you next time.
Episode Date: April 16, 2026
Guest: Ronan Farrow, investigative journalist, The New Yorker
Host: Nilay Patel, Editor-in-Chief, The Verge
This episode features an in-depth conversation between Nilay Patel and Ronan Farrow on Farrow's recent New Yorker exposé about Sam Altman, CEO of OpenAI. The discussion centers on Altman's pattern of being "unconstrained by the truth"—a tendency toward dishonesty and people-pleasing that’s become emblematic of wider issues in both tech leadership and the unchecked growth of the AI industry. The pair delve into the findings of Farrow's 17,000-word piece, transparency failures at OpenAI, the overall culture of Silicon Valley, the risks of rapid AI development, and the lack of effective oversight from both industry and government.
On Altman's trait:
“The point is not just that Sam Altman deserves these questions; it’s that any of these guys in this field exhibit—if not this particular lying trait—certainly some degree of a race to the bottom mentality.” – Ronan Farrow (09:20)
On investor support for Altman:
“The reason he had support was lack of information.” – Ronan Farrow (31:27)
On OpenAI board’s dysfunction:
“The board that fired Sam was... very ‘JV’ and they fumbled the ball hard.” – Ronan Farrow (31:50)
On the Silicon Valley ecosystem:
“It’s low exposure for them to talk about this stuff... The culture is just so ruthlessly self-interested... so many other people... just folded like napkins and changed their tune... to get in on the profit train.” – Ronan Farrow (37:31–38:14)
On government oversight:
“We are in an environment where the systems that... should be providing oversight are just hollowed out... It’s a hard spiral to get out of.” – Ronan Farrow (44:53)
On what's unique about AI leadership culture:
“...potentially risking breaking the country, breaking the world... That’s what’s unique about it” – Ronan Farrow (62:32)
Farrow and Patel close on a sober but urgent note: the issues around Sam Altman reflect greater, systemic problems in tech, namely a culture of unchecked power, self-interest, and opacity in an industry that claims to be both saving and possibly endangering the world. Despite momentary optimism about democracy and litigation acting as checks, the need for transparency, accountability, and robust oversight remains pronounced.
“This is about more than Sam Altman. This is about an industry that is unconstrained and a spiraling problem of America being unable to constrain it.” – Ronan Farrow (63:40)
This episode serves as a bracing examination of how character, hype, and lack of oversight can intersect to shape the development and deployment of one of the century’s most consequential technologies.
For more: