
Is he cautiously shepherding in the A.I. age, or is that just another part of the pitch?
Sponsor/Ad Voice
Think Verizon is expensive? Think again. Anyone can bring their AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. So bring us your bill: walk in, run, pogo-stick, teleport if you can, ride in on the back of a rollerblading yak, or fly in on the wings of a majestic falcon. Any way you can, bring your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal on the best network, based on RootMetrics' best overall mobile network performance, US, second half 2025. All rights reserved. Must provide a very recent postpaid consumer mobile bill in the name of the person redeeming the deal. Additional terms, conditions and restrictions apply.
Lizzie O'Leary
This episode is sponsored by Smart Travel, a new podcast from NerdWallet. You know that one friend who always finds the best travel deals, picks the right cards, and somehow ends up in first class for the price of coach. Smart Travel is like that friend, but in podcast form. They cover things like how to book tickets for spring break, even when every other family has the same week off, or what exactly you should spend those 90,000 points on, or which airport lounges are actually worth it and which are just free chairs. NerdWallet's trusted travel experts are here to
help you put your dollars to work
with practical tools and smart strategies you'll find yourself using every time you need to book a seat. Smart Travel knows that planning is half the battle. They make it easier for you to button up a schedule, put away the laptop, and finally go exploring. Travel smarter and spend less, with help from NerdWallet. Follow Smart Travel wherever you get your podcasts. Once upon a time, Sam Altman's employees tried to get him fired. He had founded a company, and some of the people there were concerned about his truthfulness, or lack thereof. They went to the company's board with their complaints. This story might sound familiar, but it's not about OpenAI, where the board famously did fire Altman in 2023, only to rehire him. This happened more than a decade earlier at Altman's company, Loopt, and what was
Andrew Marantz
told to us is that employees at this company said, look, Sam is personally likable, but we just are not sure how trustworthy he is. Sounds like he may be saying different things to different people.
Lizzie O'Leary
That's New Yorker writer Andrew Marantz. He and his co-author Ronan Farrow spent more than a year reporting on
Andrew Marantz
Altman, and based on those concerns, we were told a bunch of the employees asked the board to replace Altman with another CEO. And according to this story, what they were told is, this is Sam's company. Get back to fucking work.
Lizzie O'Leary
What did learning about Loopt tell you when you were reporting on the Sam Altman of today?
Andrew Marantz
I mean, one of the most perplexing things about Sam Altman's reputation and persona is that he can be a bit of a cipher, right? Different segments of people can kind of project different things onto him. So to the kind of classic business community, the classic Silicon Valley startup community, he seemed like a classic startup guy, you know, up and to the right, rocket-ship growth. You know, he was president of Y Combinator, which is kind of the center of a lot of this hypergrowth in the industry. And yet in other contexts, for example, with AI engineers who are terrified of bringing advanced AI into existence, he could strike a totally different persona. According to dozens of people we spoke to, he could seem much more conscientious, much more actually burdened and terrified by the thing he was bringing into existence. And so this seems to be, according to some people, a pattern that has recurred throughout his career.
Lizzie O'Leary
Now, as OpenAI prepares to go public and Americans grapple with the kind of future that AI may bring, Altman has become an avatar of the industry. Andrew's story asks, can he be trusted?
Andrew Marantz
I do think that this idea that he could sort of be the good guy to all these different, mutually exclusive sectors, that he could kind of tell the public what it largely wanted to hear about being slow and circumspect and asking for more regulation while reportedly behind the scenes doing the opposite. That he could tell these really scared, skittish engineers that we're going to be really cautious and we're only going to build AI once it's definitely safely aligned, and then that he could sort of go to investors and say, okay, full speed ahead. That does seem to have caught up with him. And you don't get this sense that you did just a year or two ago that he can kind of paper over all of it with a press release or with a podcast interview. Like, I don't think that people are willing to extend the benefit of the doubt the way some of them once were.
Lizzie O'Leary
Today on the show, the Many Faces of Sam Altman. I'm Lizzie O'Leary, and you're listening to What Next: TBD, a show about technology, power, and how the future will be determined. Stick around. Starting your own business is never easy. Starting your own podcast, that seems easy, but actually there are a ton of landmines to step on along the way: finding producers, selling ads, and connecting to wi-fi. Oh, does that sound straightforward? It's not. I'm talking about sitting in coffee houses for hours after buying one scone. I'm talking about sitting in hotel lobbies and pretending your backpack is luggage. It's torture. I spent so much time making my home office look professional, but my connection didn't get the memo. The last thing you want during a major interview is for your guest's voice to turn into a stutter. When your bandwidth can't keep up with your ambition, your home office starts feeling like an amateur operation pretty fast. And for a podcast, the Internet is key, because the Internet is how we talk to almost everyone. And no matter the guest, a laggy connection can ruin an exclusive interview. Great connectivity isn't a bonus, it's the whole game. And AT&T Business is here to help. They've got the tools, team and expertise you need for a stable network you can rely on. And when you can rely on the network, you can get back to thinking about the more important stuff, like nabbing that great guest and getting back to work. AT&T Business. Built to work. Get AT&T Business at business.att.com. Today's episode is brought to you by Quo, spelled Q-U-O. Spring is here. That means it's time for some spring cleaning, including streamlining some of the messier parts of your business. Quo is the number one rated business phone system on G2, with over 3,000 reviews, built for how modern teams work. That's why more than 90,000 businesses, from solo operators to growing teams, rely on Quo to stay connected, professional and consistently reachable.
Your entire team can handle calls and texts from one shared number. No more missed messages or disconnected conversations. Everyone sees the full thread, making replies faster and customers feel genuinely cared for. It's easy. Calls, texts, voicemails, transcripts and contact details all live in one clean view, with full context at your fingertips. Your team communicates faster, stays aligned, and delivers a more personal experience. Make this the season where no opportunity and no customer slips away. Try Quo for free, plus get 20% off your first six months, when you go to quo.com/TBD. That's quo.com/TBD. Quo: no missed calls, no missed customers.
Sponsor/Ad Voice
This episode is brought to you by Capital One. Capital One's tech team isn't just talking about multi-agentic AI, they've already deployed one. It's called Chat Concierge, and it's simplifying car shopping using self-reflection and layered reasoning with live API checks. It doesn't just help buyers find a car they love. It helps schedule a test drive, get pre-approved for financing and estimate trade-in value. Advanced, intuitive and deployed. That's how they stack. That's technology at Capital One.
Lizzie O'Leary
I asked Andrew to take me back to the founding of OpenAI. When Sam Altman pitched the idea by email to Elon Musk.
Andrew Marantz
He writes to Elon Musk in May of 2015. Back then, Elon Musk was merely the hundredth richest person in the world, not the single richest person in the world, and Sam Altman was not a household name. And his pitch for why we, you and I, Elon, should start this thing is not that we're going to make a bunch of money. It's not, you know, there's a market out there for this product. It's actually precisely the opposite. His pitch to Elon Musk is: AI is going to be so powerful and so scary that it could literally destroy all of humanity, like literally everyone on Earth. And given the sort of dual-use stakes of this, we need to start something that is an anti-company, basically. This cannot be controlled by Google, because Google is a profit-seeking megacorporation. It can't fall into the hands of rogue actors or, you know, China or some other government that is antithetical to American interests. The only solution, Altman says, is for us to start a lab, a nonprofit, safety-focused research lab modeled on the Manhattan Project. Explicitly, he calls it a Manhattan Project for AI, only unlike the Manhattan Project, which was a top-secret government program, this will be privately funded, in this case by Elon's money.
Lizzie O'Leary
We have done a lot of coverage of OpenAI on this show, and of this intended mission, the kind of mission statement, constitution, all of those things as they slowly start to change. I wonder, in reporting out this story, how did you find that initial mission begin to conflict or mesh with Altman's leadership over time?
Andrew Marantz
So there are some parts of the mission that you could argue were maybe a kind of adaptation to changing circumstances. Right. So at first they thought they were going to be open source. That's where the "open" part comes from. And then they have a series of conversations and say, actually, it could be dangerous to be open source, because what if we put a model out there that is then used by some rogue actor? Right. So some of this is correcting in real time. But for a lot of the promises that get rolled back, there's really, even now, basically no good explanation other than: they were saying one thing then because they thought it would be effective, and now they're saying something different because it's more effective now. So safety and regulation and nonprofit are all good examples of this. They said they were going to remain a nonprofit. But we have access to internal emails and documents and material that was shown to us indicating that they were thinking about pursuing a for-profit even as early as 2017. On the safety stuff, we have reams and reams of material that we've obtained, not just from their competitors in the industry, but from people who've left the industry, from all kinds of concerned actors, who say that they were basically told by Altman and other OpenAI executives: you should come work here, you should take a pay cut to work here, because we are the safety-focused research lab. And then, by their telling, a lot of the safety stuff just fell away over time. And the way they tell it, it was often done gradually, with some plausible deniability, maybe saying, okay, this was a research direction that didn't work, or whatever. But the bottom line is, the public rhetoric from Sam and OpenAI just a couple of years ago was: if we don't solve alignment, this problem that the machines might become superintelligent and might not be aligned with human interests, that could literally kill us all.
That was the rhetoric coming from OpenAI.
Lizzie O'Leary
It's very powerful rhetoric.
Andrew Marantz
It's very powerful. It got a lot of people's attention. It got a lot of investment. There are some people who think that they never really bought it. There are some people who think they really did believe it. But either way, it is not what they're saying now.
Lizzie O'Leary
There's a lot of really great reporting about OpenAI. Keach Hagey of the Wall Street Journal wrote a book. Karen Hao also wrote a book about it. What is really new and striking to me in your reporting, yours and Ronan Farrow's, is that there are a lot of these candid, contemporaneous thoughts, documents, emails from these periods of tumult, both when the mission is changing and then again as we move toward the kind of OpenAI board coup of 2023. Can you give me a flavor of who had concerns about Altman and what those concerns were?
Andrew Marantz
Yeah, for sure. So, yeah, we're obviously building on a lot of stuff, as you say, that Keach Hagey's done, that Karen Hao has done, that other people on this beat have done at the Wall Street Journal, the New York Times and elsewhere. But we were seeing in full a set of documents that have kind of become fabled in these worlds. So when Altman was first fired from OpenAI in late 2023, there was kind of this meme of "what did Ilya see?" And that was referring to Ilya Sutskever, who was one of the co-founders of OpenAI. He left Google, and turned down a $6 million salary from Google, to go work at OpenAI because he thought, these are the good guys, they're the David against the Goliath. Eventually, over time, he starts to really worry, and he starts to say, according to the accounts we've been given, Sam should not be the guy with his finger on the button. And the button in that case is the AGI button. So Ilya turns against Sam, becomes the key swing vote to fire him. And what we've reviewed in full is kind of all the documentation that he had laying out why he thought Altman should be fired.
Lizzie O'Leary
Because it was so mysterious at the time.
Andrew Marantz
Right, right. And I think, you know, this is a little glib and self-serving, but I think basically that's because the ideal format for explaining why he was fired was not a 70-page bullet-point memo, it was a 16,000-word New Yorker piece. And I mean, I'm sort of kidding when I say that, but there are these subtle accumulations of facts and details that you can't really get across in a bullet-pointed list. I mean, it's not that Altman was fired because somebody walked in on him clubbing a baby seal or something. It's not that kind of smoking-gun thing. And so at the time, and even now, a lot of people remain confused. And by people, I don't just mean the general public, I mean investors, board members at Microsoft. But what emerges through looking at the documentation that was compiled by Sutskever, by competitors like Dario Amodei, and also by people who, as I say, are not competitors, is this kind of pileup of details, any one of which in itself might seem kind of innocuous. And to some people it still does seem innocuous. But taken in totality, the impression that we got from a lot of people we talked to was: this is just a level of power-seeking and duplicity that really is unusual, even in the cutthroat world of business CEOs. And to be clear, people really should read with a lot of skepticism the claims that come from Altman's competitors. I mean, this is an extremely competitive field, and we worked very hard in the reporting process to not just launder competitors' claims. And I just think this is something that emerges from all corners of people you speak to, whether they are in the industry or not, whether they are scared of the technology or bullish about it. This is just something that comes up again and again.
Lizzie O'Leary
Can you give me an example, if there's one that really stood out to you, of, like, a particular moment where he's saying one thing to one audience and another thing to another? Because you're right, in the piece it's this, like, building of a wave. And yet in the moment, you could sort of say, eh, to any one of these.
Andrew Marantz
Right. And a lot of people did. I mean, a lot of it would be sort of workplace disputes, you know, telling two different people that they have the same job. Right. This is the kind of thing that managers do all the time. And we actually talked to Altman several times for the piece, and he admitted that this was, by his telling, a flaw of his, that he was too much of a people pleaser, that he could be too eager to, you know, tell people what they want to hear. So, right, that's an example of: okay, you're going to fire the CEO of this hugely successful company because he told two people that they had the same job description? But according to these people who actually did fire him, it was that in combination with a constellation of 50 other things that made it seem to them that this was not the environment in which a safe AGI could be built. And, you know, I think it's also important to recognize, again, you can believe or disbelieve this, but what the people who were building this stuff, and who are building this stuff, believed is that the duty of care and integrity was so much higher in this particular arena
Lizzie O'Leary
than anywhere else because of the power of the technology.
Andrew Marantz
Because of the power of the technology. The reason, again, going back to the original pitch for why OpenAI should exist, the whole point of this was: AI will be the most important thing since electricity or since fire. It will be the most dangerous thing since nuclear weapons. I'm quoting directly from the way they talk about it. And given the stakes, it needs to be controlled by somebody who is of the utmost integrity, who is not power-seeking, who is purely altruistic. These were legally binding fiduciary duties of the nonprofit, back when it was a nonprofit. And so the implication, I mean, they literally used to, in private conversations, talk about the scenario of an AGI dictatorship. The scenario being that whoever had control of the most advanced model would essentially be more powerful than any person on Earth.
Lizzie O'Leary
And this is something Sutskever wrote about.
Andrew Marantz
Yes. And he was worried that both Elon and Sam Altman were trying to pursue an AGI dictatorship. And he just said to them directly, I don't understand why it's so important to you to have control of this company if you are as altruistic as you say. Like, your behavior seems more consistent with someone who is seeking power.
Lizzie O'Leary
Right. Someone who was altruistic wouldn't be so insistent that they become the CEO.
Andrew Marantz
Yeah. And there's a kind of selection-bias problem here. Right. Because someone who was really afraid of this technology probably wouldn't be so involved in building it. So this is kind of a paradox at the center of the industry that, again, remains. I mean, we go back to the origins here, but these things remain in play today as the power of these things, and how wedded they are to the infrastructure of our society, only accelerates.
Lizzie O'Leary
When we come back: how do you tell a true story from one that's being shopped around by a CEO's enemies? Every idea starts with a problem. Warby Parker's was: glasses are too expensive. So they set out to change that. By designing glasses in-house and selling directly to customers, they're able to offer prescription eyewear that's expertly crafted and unexpectedly affordable. Warby Parker glasses are made from premium materials like impact-resistant polycarbonate and custom acetate. And they start at just $95, including prescription lenses. Get glasses made from the good stuff. Stop by a Warby Parker store near you.
Sponsor/Ad Voice
Think Verizon is expensive? Think again. Anyone can bring their AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal. So bring us your bill: walk in, run, pogo-stick, teleport if you can, ride in on the back of a rollerblading yak, or fly in on the wings of a majestic falcon. Any way you can, bring your AT&T or T-Mobile bill to a Verizon store today and we'll give you a better deal on the best network, based on RootMetrics' best overall mobile network performance, US, second half 2025. All rights reserved. Must provide a very recent postpaid consumer mobile bill in the name of the person redeeming the deal. Additional terms, conditions and restrictions apply.
Lizzie O'Leary
So there's this moment in your piece, after the board has fired him, when he says, this is just so fucked up. I can't change my personality. And what I don't know is how to read that. Like, do I read that as: I can't change my personality, I'm just someone who tries really hard and wants to get it done? Or: I can't change my personality, like, yeah, maybe I do lie a little bit to get where I need to be?
Andrew Marantz
Yeah, this is one where, you know, Altman told us he basically remembers it the former way. Like, I'm doing my best here, and you guys just seem to not like my personality, or I forget how he put it. And there were other people who were familiar with this who interpret it the other way, which is that he isn't going to change his kind of pitchman, sort-of-slippery personality. Now, when you're reporting on stuff where there were just a few people on a phone call or in a room, you really can't be omniscient. And so we really do try to bend over backwards in the piece. And because of our famously thorough fact-checking process, we really do try to get the perspective of Altman and all the other OpenAI executives into the piece as much as possible. And we were glad that he engaged in interviews and engaged in fact-checking. And we really, I think, were pretty meticulous in representing where people's memories differed. But again, this is where, if it were just one thing, one piece of evidence, one day in the life of a CEO, it might be impossible to come to any view of what's going on. But there's a kind of larger pattern here. And again, obviously this is not an editorial op-ed piece, so it's not like at the end we say, these people were right and these people were wrong. But when you go into this reporting, there are certain consistent themes that do emerge again and again.
Lizzie O'Leary
I want to talk a little bit about relationships, Altman's and OpenAI's. And I want to start with Dario Amodei, who left, started Anthropic, and recently had a big, publicized fight with the DOD. What do you think was the moment when Dario Amodei said, I can't work with this guy anymore, I don't trust him?
Andrew Marantz
Well, the breakdown of the relationship between Sam Altman and Dario Amodei is one of these other much-fabled Silicon Valley disputes. And actually, Keach Hagey wrote a really good piece about this for the Wall Street Journal. And many, many people have probed at this. This is another area where I think people are right to be skeptical. Right. Because obviously Anthropic and OpenAI are major competitors and rivals, and famously Sam and Dario wouldn't even touch hands when they were on stage next to each other in India, right? So there's clearly some bad blood there. But it goes way back, apparently, to the origins of before OpenAI even existed. I mean, what we know from the public record and from private records is Dario was one of the initial recruitment targets, along with Ilya Sutskever and a handful of other people. He didn't join OpenAI initially. He did join shortly thereafter, a few months later. He went from Baidu to Google to OpenAI. And then apparently he and a few other safety researchers started having these concerns, by their telling, very early on. We report, I think for the first time in this piece, a series of secret meetings that a bunch of OpenAI researchers held in 2017, 2018. I don't know if Dario was personally involved with this, but it was that kind of safety-researcher mindset, where even back then they were saying, are we the bad guys?
Lizzie O'Leary
Yeah.
Andrew Marantz
Can we, OpenAI, be trusted? And that was before Sam had even really taken full control of the company.
Lizzie O'Leary
So then there's this other part of this, and you guys say this explicitly in your piece: intermediaries connected to, and in at least one case compensated by, Elon Musk circulated dozens of pages of detailed opposition research about Altman. They are suing each other. The documents reflect extensive surveillance, documenting shell companies, personal contact information of close associates, even interviews about a purported sex worker conducted at gay bars. Like, how do you disentangle what is some degree of, like, Michael Clayton corporate, you know, espionage, titans fighting each other, from what might also be duplicitous conduct?
Andrew Marantz
Yeah, very, very carefully. No, really. I mean, this stuff. I think the word that one of these executives used is Shakespearean, the level of cutthroat rivalry in this space. And, you know, we don't want to be naive about, like, this is the first time that business titans have had personal animus. I mean, obviously, this is as old as capitalism. But the level of mudslinging and oppo research really does seem kind of unprecedented, at least in what I've seen. And so we had a lot of conversations about how to represent that in the piece, and how to do so fairly and responsibly, and say what we could about what we know and believe to be true. And, I mean, people should read the piece and judge for themselves, but basically, it seems like a lot of these wealthy tech-tycoon, jetsetter types seem to do a lot of rich-guy stuff, and then a lot of times that gets weaponized and spun up and exaggerated by their rivals. And I mean, the allegations that we heard were, you know, that Altman was being trailed by private investigators, that people were going to gay bars to try to dig up dirt about his sex life. I mean, really crazy stuff. And just to state up front, the ugliest allegations were not ones that we found any evidence for. The ugliest allegation being that he pursues minors, which is about as ugly an allegation as you can make. Our reporting did not bear that out, and we say that in the piece. And look, it's just very, very tricky to try to be fair to all concerned, and to try to distinguish what is rich and powerful people who deserve scrutiny from oppo dumps and scurrilous, salacious rumors. And it took months for us to do that. And I think people can judge how fairly we did it.
Lizzie O'Leary
We've got to talk about Annie Altman, Sam Altman's younger sister, who accused him in a lawsuit of childhood sexual abuse. Liz Weil at New York magazine is, I believe, one of the, you know, few journalists who visited her in Hawaii to talk about this. How do you reckon with Annie's accusation? And we should say that Sam and his family have denied this repeatedly.
Andrew Marantz
Yeah. And this is, again, Karen Hao also spoke with her extensively. We also spoke to her, and she reiterated these disturbing allegations to us. And again, I think we just treat it the way we treat all these really delicate things, where, you know, it's out there, it's part of what has been introduced into the public discourse, so it felt necessary to deal with in some form. And we just couldn't substantiate it. And so we spoke to her about it. We put, you know, the fact-checking questions. We spoke to Sam about it. Yeah, it's one of those things where you have to bring it up, as painful as it is, and then you just have to say as plainly as you can what you think the facts do or don't demonstrate.
Lizzie O'Leary
Your piece is coming out in what I would say is probably a very different environment than when you started reporting it. OpenAI kind of seems to be struggling a little bit, if you read between the lines. They just shut down Sora. Their deal with Disney blew up. Where do you see the company focusing at this moment, with Sam Altman at the helm?
Andrew Marantz
Yeah, they've gone through a number of what are sometimes called code red moments where they kind of say, okay, let's stop all the extraneous stuff and just focus on our core product, which in their case is ChatGPT. And they seem to be in another one of those. Look, I mean, one of the core promises at the founding of OpenAI was we'll be a safety lab, we will support all regulation. Among those promises was we will not exacerbate or participate in race dynamics within the industry because we think that racing to AGI could incentivize recklessness and cutting corners. We want to use our position of advantage, if we can get one, to slow things down.
Lizzie O'Leary
But they're not talking about AGI as much anymore.
Andrew Marantz
Yeah, well, they're certainly not talking about this, you know, slowing down the race to the bottom. I think it's pretty clear now that everybody is participating headlong in a race to the bottom. And again, you know, there's a kind of jaded view of this that says, yeah, how did you ever expect this to work out, given how we've structured our society economically? And I think that's valid, but it is very clearly not what they said they set out to do. But yeah, you're right. I mean, these companies are all desperately clinging to some attempt at a market advantage. They're trying to outpace each other in terms of data center development, which is enormously expensive, which means they have to raise more money. I mean, during the rounds of revision on this piece, we had a sentence that said OpenAI's most recent round of fundraising alone is bigger than X and Y examples of fundraising in the past. And we kept having to change the sentence, because they kept raising bigger and bigger and bigger rounds of funding. I think by the time we closed the piece, it was $122 billion just in the most recent funding round. That is insane. It is, like, literally hard to conceive of. And so that is not avoiding race dynamics.
Lizzie O'Leary
But also, if you look at the narrative for those fundraising rounds, it changes. First it was: we need this money because we need to reach AGI, and we're going to do it more safely than the other people. Then: we need to be better at this than China and Russia, because they're going to do it and they're not responsible. He's raising more money now. What is that money for?
Andrew Marantz
Well, I mean, I think it's for winning the race. I think that they, and not just OpenAI, but most or all of the major players at this point, kind of see this as an all-out race. And you're right, they're not talking about safety in the same way. They're not talking about regulation in the same way. But one part of the rhetoric that does seem consistent with the early days is they used to talk about the Ring of Sauron, and, you know, they would compare themselves to characters in Lord of the Rings, or compare themselves to characters in, you know, the making of the atomic bomb. These kind of extremely grandiose parables about power. They're cautionary tales, which is something I'm not sure these guys always perceive.
Lizzie O'Leary
These are like, "I have invented the Torment Nexus" from "Don't Invent the Torment Nexus."
Andrew Marantz
Exactly, exactly. But the rhetoric is still consistent with that. Like, we need to go full speed ahead so that we can be the ones to grab the ring. That seems pretty consistent with what a lot of these companies are doing now. That might be editorializing on my part, and they may say differently. And clearly there are CEOs who say, we feel the weight of this responsibility keenly and we want to move forward carefully, but economically, that's not really what.
Lizzie O'Leary
Right. That hasn't stopped them from issuing debt.
Andrew Marantz
Oh, no. These are some of the most highly leveraged companies of all time. And many people we spoke to are worried that this is an economic bubble. Including Sam Altman, by the way.
Lizzie O'Leary
Well, he says he's worried that this is an economic bubble.
Andrew Marantz
Yeah.
Lizzie O'Leary
You talked to Sam many times. I keep thinking about an anecdote in Karen Hao's book where they're kind of racing to get ChatGPT onto the market because they want to beat Google and they want to beat Anthropic. Where do you put Sam Altman on a scale from optimist, who says, we're working on it, we're going to get there, to Theranos?
Andrew Marantz
I mean, the optimist part, I think like a lot of things, it just depends on the audience.
Lizzie O'Leary
Right.
Andrew Marantz
The Keach Hagey book is called The Optimist, but often he sounds way more like a doomer, or at least he did until that became unfashionable. So it's just really hard to know. I don't purport to know what's in his brain or what's in his heart. One thing that people consistently allege about him, and I'm not in a position to judge, is that he tells people what they want to hear. This is something that comes up again and again. And one of the notes that we land on in the piece is that's also what LLMs do. A lot of these chatbots have this persistent problem of sycophancy, where they tell you what you want to hear. And a pattern that you hear interviewing people about Altman is that this is also something that he does. Whether that's a feature or a bug is a separate question.
Lizzie O'Leary
Andrew Marantz, thank you for your piece and for talking with me.
Andrew Marantz
Thank you so much. This was delightful.
Lizzie O'Leary
Andrew Marantz is a staff writer at the New Yorker. All right, that is it for our show today. What Next: TBD is produced by Evan Campbell and Patrick Ford. Our show is edited by Paige Osborne, who is the senior supervising producer for What Next and What Next: TBD. Mia Lobel is the executive producer here at Slate, and TBD is part of the larger What Next family. We'll be back next week with more episodes. I'm Lizzie O'Leary. Thanks for listening.
Sponsor/Ad Voice
This episode is brought to you by Capital One. Capital One's tech team isn't just talking about multi-agentic AI. They've already deployed one. It's called Chat Concierge, and it's simplifying car shopping using self-reflection and layered reasoning with live API checks. It doesn't just help buyers find a car they love, it helps schedule a test drive, get pre-approved for financing, and estimate trade-in value. Advanced, intuitive, and deployed. That's how they stack. That's technology at Capital One.
Podcast: What Next: TBD | Tech, Power, and the Future
Host: Lizzie O’Leary (Slate Podcasts)
Guest: Andrew Marantz (The New Yorker)
Date: April 12, 2026
This episode investigates the complexities of Sam Altman, CEO of OpenAI, exploring his reputation, leadership style, and the shifting narratives around his stewardship of one of the world’s most powerful AI companies. Journalist Andrew Marantz joins Lizzie O'Leary to discuss more than a year of deep reporting (with co-author Ronan Farrow) on Altman’s persona, OpenAI’s evolution, and the controversies—including ethical disputes, allegations, and dramatic boardroom coups—that have surrounded Altman throughout his career.
| Timestamp | Speaker | Quote |
|-----------|---------|-------|
| 02:38 | Andrew Marantz | "This is Sam's company. Get back to fucking work." (On Loopt board's response) |
| 09:13 | Andrew Marantz | "It's not, 'We're going to make a bunch of money.' It's actually precisely the opposite. His pitch for why... is, AI is going to be so powerful and so scary and it could literally destroy all of humanity." |
| 13:06 | Andrew Marantz | "The public rhetoric from Sam and OpenAI just a couple of years ago was if we don't solve alignment...that could literally kill us all." |
| 16:13 | Andrew Marantz | "...this is just a level of power seeking and duplicity that really is unusual even among the cutthroat world of business CEOs." |
| 18:56 | Andrew Marantz | "AI will be the most important thing since electricity or since fire. It will be the most dangerous thing since nuclear weapons." |
| 27:38 | Andrew Marantz | "The level of mudslinging and oppo research, it really does seem kind of unprecedented." |
| 36:36 | Andrew Marantz | "One of the notes that we land on in the piece is that's also what LLMs do...a pattern that you hear interviewing people about Altman is that this is also something that he does." |
The episode maintains a clear-eyed, investigative tone—balancing skepticism, empathy for whistleblowers, scrutiny of power, and caution toward rumors or unverified claims. Marantz’s careful, nuanced reporting lends weight both to critics of Altman’s leadership and to the complexity of the tech world’s internal politics.
This episode offers a deep, nuanced examination of Sam Altman and the turbulent culture at OpenAI, exploring why his leadership has been controversial, how the company’s original safety- and transparency-driven vision has collided with commercial reality, and how personal and professional rivalries have shaped the direction of AI’s most prominent company. The discussion is grounded in new reporting, revealing not just industry gossip, but accumulated patterns that raise significant questions about tech power, personality, and accountability in a high-stakes field.