A
Offline is brought to you by Indacloud. April's funny: half the Internet is talking about spring cleaning, the other half is already planning their 4/20. Wow. That's where Indacloud fits in. Indacloud is your fully legal online cannabis dispensary for gummies, exotic flower, premium pre-rolls, and zero-sugar THC sodas. A clean, alcohol-free way to relax without throwing off tomorrow. Everything available is federally legal hemp THC, lab tested and shipped discreetly to your door. And this month, new customers get 40% off all month long with their biggest sale of the year. Sleep gummies for nights that actually restore you, zero-sugar THC sodas for social plans without alcohol, premium pre-rolls for intentional wind-downs, and $70 ounces for consistency that feels sustainable. Boy, we love Indacloud. Yeah, it's great. It's great to have a wind-down. I like an intentional wind-down. I love it. You know it's intentional because I take the gummy.
B
Yeah, listen, honestly, in a pinch, I'll take an unintentional wind down.
A
I just want to wind down. I want to get down. I want to get wound down. I'm up; I want to be wound down. And that's what Indacloud can do for you. That's what it can do for you. If you're 21 or older and a new customer, go to indacloud.co, that's .co, not .com, and use code OFFLINE for 40% off your first order. That's indacloud.co, code OFFLINE. That's indacloud.co, code OFFLINE, for 40% off all month long, shipped discreetly to your door. Plus free shipping on orders over $50 and $30 in free gifts on qualifying orders. Don't forget to fill out the quick survey when you order to support this show. As always, please enjoy responsibly, and mega thanks to Indacloud for supporting your 4/20 plans this year.
B
Ryan Reynolds here from Mint Mobile with a message for everyone paying big wireless way too much. Please, for the love of everything good in this world, stop. With Mint, you can get premium wireless for just $15 a month. Of course, if you enjoy overpaying, no judgments. But that's weird. Okay, one judgment. Anyway, give it a try at mintmobile.com/switch. Upfront payment of $45 for 3-month plan (equivalent to $15 per month) required. Intro rate first 3 months only, then full-price plan options available. Taxes and fees extra. See full terms at mintmobile.com. This is the story
C
of the one. As a maintenance tech at a university, he knows ordering from multiple suppliers takes time away from keeping their arena up and running. That's why he counts on Grainger to get everything he needs, from lighting and HVAC parts to plumbing supplies, all in one place. And with fast, dependable delivery, he's stocked and ready for the next tip-off. Call 1-800-GRAINGER, click grainger.com, or just stop by. Grainger, for the ones who get it done.
B
Another person who told us that this is probably a bubble is Sam Altman, who has said multiple times that he thinks it's a bubble and that someone is going to lose a phenomenal amount of money. I believe that's a direct quote. So yeah, I worry about the potential for a bubble here.
A
I'm Jon Favreau, and you just heard from today's guest, the New Yorker's Andrew Marantz. Andrew, along with a fellow New Yorker journalist you may have heard of, Ronan Farrow, just published an incredible, expansive investigation about one of the most important figures in tech: Sam Altman, the CEO of OpenAI. Over the course of hundreds of interviews, including over a dozen with Altman himself, Andrew and Ronan unveiled a picture of a leader who is widely distrusted by the people who've worked with him closely, and who tells people exactly what they want to hear, whether or not it's true, just like the AI model he created. Andrew and Ronan raise the question: can the man behind the most influential artificial intelligence company in the world, who's going full steam ahead on a potentially civilization-destroying technology, actually be trusted? I'm sorry to say the answer will not make you feel better. I talked with Andrew about the contradictory narratives coming out of OpenAI, why this is so much more complicated than good guys versus bad guys, and how Altman's resolve to go founder mode means he may be headed down the same well-traveled path as many tech oligarchs before him. We'll get into that conversation in a moment, but before we do, please consider becoming a Crooked Media subscriber if you haven't already, so that you don't miss out on any of the great content we're putting out. Friends of the Pod subscribers get our new extra episode of Pod Save America, called Pod Save America: Only Friends, other subscriber-only shows like Polar Coaster with Dan Pfeiffer, access to all of our excellent Substack newsletters like Pod Save America: Open Tabs, ad-free episodes of all your favorite Crooked pods, and you get to feel good about supporting one of the few independent, proudly pro-democracy media outlets left in Trump's America. So head to crooked.com/friends and subscribe. Here's Andrew Marantz. Andrew, welcome back to Offline.
B
Thank you. Always a pleasure.
A
I want to talk to you about your big Sam Altman piece in the New Yorker that you wrote with Ronan Farrow. You and Ronan spent 18 months reporting this piece. You sat down with Sam Altman, I think, more than a dozen times. You got access to hundreds of pages of internal memos and documents. And, you know, on one level, it's a story about the internal drama of a company where people no longer trust the guy who runs it, to the point where multiple people described Altman to you, unprompted, as a, quote, sociopath. But this also happens to be one of a tiny number of companies building a civilization-changing and possibly civilization-destroying technology. So I guess my first question is: spending 18 months on this, what, if anything, changed for you personally in terms of your perspective on AI and the people building AI?
B
Yeah, I mean, this is a really critical backdrop for this, right? Because, you know, all people who are at a certain echelon of power and wealth deserve serious scrutiny. But I don't think I would have been that interested in this level of individual scrutiny for someone who, you know, was the CEO of a really big, you know, name-the-widget company. Yeah, exactly. Like, or a shoe company. I mean, this matters because of the structural impacts of AI specifically. And so there's a lot we can get into about Sam Altman: the person, the personality, the persona. But the reason this matters at all is because I think AI really matters. And I see a lot of people who are worried and scared and therefore want to put their heads in the sand and say it's a parlor trick, it's a trick of the light, it's not real, it's hitting a wall, it's stochastic parrots, it's whatever. I don't think that is tenable anymore. I just don't think we can sit this one out as a society. And so I think we need to bring serious scrutiny to bear on the people who are building it, and on just, like, knowing what the thing is, to the extent that anyone knows, including the people who are building it. Because this is not, like, a news cycle that you can just sit out. Like, AI is part of, you know, weaponry at the highest levels of the military. It's part of surveillance. It's part of basic transportation infrastructure and weather prediction. It's, you know, liquefying our brains with slop. It's contributing to what experts call human enfeeblement, which is basically, like, the more you outsource to LLMs, the less you're able to think and write and perceive the world. So, like, these things are happening, whether or not you think that you should spend time worrying about the more sci-fi scenarios where it kills us all. And by the way, we can get to this, but I think the sci-fi scenarios where it kills us all are also worth worrying about.
A
Yeah. Did you leave the reporting more alarmed about where we're headed?
B
I did, I did. And this is not just... again, this is not just an OpenAI thing or a Sam Altman thing. I think before I really started reporting on AI in earnest, I kind of thought, you know, of course, like, nerds are gonna nerd and sci-fi people are gonna sci-fi. I'm like, yeah, everyone has some apocalyptic fantasy about how their generation will be the last one ever on Earth.
A
Yes.
B
And there's definitely truth to that. I mean, there are these narrative things: in the nuclear age, we get Dr. Strangelove, and now in the age of AI, we get AI dystopian fantasies. And it's even weirder than that, because the AIs are trained on data that includes dystopian sci-fi, so they themselves start spitting it out sometimes. So I'm not sitting here and saying the Skynet scenarios are likely to happen. But the more I looked at this stuff, the more I kind of understood what the arguments are from the people who are really worried. And they were not all arguments that I could immediately refute. And so I think the fact that you now have members of Congress on the left and the right, you know, saying let's take these nerds kind of more seriously than we did, it's not incidental. I think it's because they're actually listening to the substance of the arguments for the first time. And even though the arguments might be hypothetical, and even though they might be technical, they're not ones that you can just immediately bat down without giving them serious thought, and without actually trying to regulate or control our way out of it.
A
Yeah. And the other thing is, we talk a lot about the technology itself, but you can't divorce the technology itself from the people who are building it and then the people who are in charge of it and the people who may or may not regulate it in the future.
B
Right. I would place my money on may not, but we'll see.
A
Right. It seems like the entire governance structure of AI, in the broadest sense, not just from actual governments and politics but from what's happening at these companies, seems critical here, which is what your piece gets into with regard to Sam. So let's just... I just want to get into a few of the bigger revelations in the piece. I thought one of the more damning revelations is what happened with the allegedly independent investigation of Sam Altman after the board fired him in 2023 for essentially lying to them. Altman sort of engineers his own return a few days later, and one of the conditions of his return is this outside investigation led by WilmerHale, which is the same law firm that investigated Enron. A few months later, OpenAI announces that the investigation has cleared Altman. But there's no written report; nothing's made public. That's it. And a board member told you this could prompt a need for another investigation. Has anyone reached out to you guys since the publication? Anyone in the Delaware or California AG's offices? Do you think there's an appetite for a real investigation now, or do you think that chapter is closed?
B
Yeah, I mean, we, I think, really nailed down and report for the first time that there was never a written report, because it appears that a report was never written. And it seems from all of our reporting that that was intentional. You know, the goal seemed to be to clear Altman, or at least, if that was where it was heading, a lot of sources told us, well, then why should we create a paper trail that could create complications for us if where we're heading is to exonerate him? And this gets to sort of one of the persistent patterns that comes up in the reporting of this piece, which is, you know, everyone knows that Sam Altman was fired in late 2023, and everyone knows that he came back. What people didn't know before we got our hands on all these documents, and by people I mean not just the general public, but, like, Microsoft executives, investors, OpenAI employees. There was a ton of confusion at the time of, like, why is this person being fired? Like, "What did Ilya see?" became the meme around Silicon Valley, because Ilya Sutskever was the co-founder and member of the OpenAI board who kind of became the swing vote in the firing. And we have now reviewed a lot of documentation, including the full memos that Ilya Sutskever sent to the board backing up why he thought Altman should be fired, lots of other notes that were kept by Dario Amodei and other employees, and also some employees who have left and have gotten out of the game, who are not part of rival companies but who are just sort of concerned citizens or whistleblowers. And what it all redounds to is basically, like, if it had been one really simple smoking gun that you could have put in a tweet, we would know about it by now. Right? The reason that this remains mysterious on some level is that it wasn't one thing. It wasn't like Ilya walked in on Sam strangling a bunch of baby kittens and was like, you know, this guy needs to go. Right.
Normally when you fire a CEO, it's because of a pretty clear, bright line pattern of behavior. And in this case, what we document, and the reason it took such a long and meticulous process and piece is it's kind of this accumulation of small details where people feel that he's telling mutually contradictory stories to different sets of people, both inside and outside the company. He's telling people what they want to hear. These are the allegations that one hears. And honestly, any one of them in isolation, you might kind of think like, okay, a CEO who tells people what they want to hear, like, is that a fireable offense? And it's only over kind of the accumulation of these details that it starts to add up to something.
A
Well, and also, alarmingly, it seems from your piece and from everything we've seen that since he has returned, none of that has really changed. None of the complaints or concerns about him have really gone away. He hasn't changed. He's still sort of doing the same thing.
B
Yeah. I mean, if anything, one thing we document in the piece is that he's sort of gone more into what's called founder mode in Silicon Valley, which is like, yeah, it's my company and I'm not going to be as much of a people pleaser anymore. When we talked to him, and we actually did talk to him extensively, he did kind of cop to this and say, yeah, at certain times in the past, I've been sort of too much of a people pleaser and I've been too conflict averse. And he said, I'm going to work on being less conflict averse in the future. So if anything, it's sort of more control at the top, which I think it's important to point out. Like, this is directly flying in the face of the way that OpenAI specifically was pitched from the beginning.
A
Right.
B
You know, there's a way of looking at this that's like, again, wow, so crazy that a CEO has control of his own company. Like, how naive could you guys be? But I think for people who are not inundated with this stuff, it's important to start from the beginning and to remember, or recognize, the ostensible purpose of OpenAI. The reason that Sam Altman said it needed to exist was as a counterweight to the big evil megacorporation Google. Because AI was such a powerful technology that it couldn't be left to the profit motive to develop and deploy. It had to be in the hands of a small, safety-focused, nonprofit research lab, which was what OpenAI was supposed to be at the beginning, because it could only be built slowly, cautiously, with aggressive support for maximum regulation. And to do it quickly, to do it in a race dynamic, would be potentially devastating, or could potentially destroy or kill everyone on Earth. That was the pitch.
A
And then they just decided, well, we're going to actually have a for-profit
B
Company. Which it did become, actually; while we were working on the story, they made the final conversion. And speaking of Delaware and California, this was challenged in both of those states, because their original articles of incorporation, their original binding fiduciary duty, was as a nonprofit to benefit all of humanity. And, you know, you can say those are sort of airy words, and, you know, all tech companies sort of say some version of "don't be evil," right? But they really said, and their employees to a large extent really believed, that the whole purpose was to be different. They had all these different byzantine corporate structures, where they were at first totally a nonprofit, and then they were a capped-profit company owned by a nonprofit, and the board of the nonprofit had exclusive control. And they also had this charter where they said, if someone else is developing a safe version of AI before we do, we should merge and assist with that project. Like, we should merge our resources into the safe AI project, even if that happens to be at Google or at the US government. So they were saying these things that no normal company in the history of capitalism would ever rationally say, but that's because they weren't supposed to be a normal company.
A
What did Sam Altman say to you guys about that shift?
B
So we had several conversations about this, and one of the things that comes up is: we didn't realize how much money we would need to get this off the ground. Like, we knew we would need money, basically. I mean, Sam didn't say it to us in these words, but what's clear from talking to him and from reviewing the documentation is that his initial pitch, in May of 2015, is to Elon Musk, who was then merely the hundredth-richest person in the world and not the single richest person. And he says: because AI is so dangerous, and because Google is doing it and Google is the bad guy, we need to start a Manhattan Project for AI, and we might need up to a billion dollars to do it. Fast forward to now: their most recent round of funding alone was $122 billion. And we kept having to update that in the piece, because we would write in the piece that their most recent round of funding alone was $40 billion, and then by the time the piece went to revision, they had done another head-spinning round. Like, the numbers here are literally impossible for a human to conceive of. And so, to answer your question, the story that Sam tells is that, yes, we thought we could be this little David-versus-Goliath safety lab, but we just didn't realize how compute-intensive and how cost-intensive the project would be. And there's truth to that. This stuff gets smarter, apparently, the more data and training you feed it. And that's really expensive. And you need to build these massive data centers. They suck up a lot of power. You need to site them somewhere. So these are all, like, infrastructure challenges that were not foreseen at the beginning of this. But it doesn't fully explain how aggressive and how longstanding, according to a lot of private records, the intent to ditch the nonprofit structure actually was.
A
Offline is brought to you by DeleteMe. DeleteMe makes it easy, quick, and safe to remove your personal data online, at a time when surveillance and data breaches are common enough to make everyone vulnerable. It's easier than ever to find personal information about people online. Having your address, phone number, and family members' names hanging out on the Internet can have actual consequences in the real world. More and more, online partisans and nefarious actors will find this data and use it to target political rivals, civil servants, and even outspoken citizens posting their opinions online. With DeleteMe, you can protect your personal privacy, or the privacy of your business, from doxxing attacks before sensitive information can be exploited. The New York Times Wirecutter has named DeleteMe their top pick for data removal services. For someone with an active online presence, privacy is important; there's way too much on there about yourself. You know, if you're online a lot, there's probably more info about yourself and people you know than you even imagine. Have you ever been a victim of identity theft, harassment, doxxing? If you haven't, you probably know someone who has. DeleteMe can help take control of your data and keep your private life private. Sign up for DeleteMe now at a special discount for our listeners: get 20% off your DeleteMe plan when you go to joindeleteme.com/offline and use promo code OFFLINE at checkout. The only way to get 20% off is to go to joindeleteme.com/offline and enter code OFFLINE at checkout. That's joindeleteme.com/offline, code OFFLINE. Offline is brought to you by OneSkin. You've probably heard us talk about OneSkin for their best-selling skincare, but now they're bringing that same longevity science to address hair loss with their scalp serum, OS-01 HAIR. Spring can bring an increase in seasonal hair shedding.
It happens all the time, and changes in routine can trigger stress-related hair loss at any time of year. That's right. Yikes. OneSkin's OS-01 HAIR serum is formulated to address those concerns at the source. Powered by their proprietary OS-01 peptide, this scalp treatment targets the hair follicles to support an environment where hair can feel thicker, fuller, and more resilient. Best of all, OS-01 HAIR is drug free, delivering effective results without any harsh side effects. Experience the difference of a peptide-driven approach to scalp health, and see why users are prioritizing OS-01 HAIR in their daily routines. Born from over 10 years of longevity research, OneSkin's OS-01 peptide is proven to target the cells that cause the visible signs of aging, so your scalp and your hair stay healthy now and as you age. For a limited time, try OneSkin with 15% off using code OFFLINE at oneskin.co/offline. That's 15% off at oneskin.co with code OFFLINE. After you purchase, they'll ask you where you heard about them. Please support our show and tell them we sent you. I saw that the countries plan you report on is pretty incredible. Greg Brockman, the president of OpenAI, allegedly proposed that they play Russia and China and the US against each other, basically starting a bidding war for advanced AI. Brockman half denies this. Yeah, so I was going to say, A, how confident are you in the reporting? And B, what does it tell you about how the founders actually thought about humanity benefiting from this technology?
B
So, actually, we feel really confident in the reporting. You know, it's funny, like, I think people really are right to be skeptical about any of these industry stories, and especially to be on the lookout for competitors trying to sling dirt at each other and sort of launder it through the press. Fair. There are several parts of this story where we really, really try to put pressure on things that seem like they are flinging mud at OpenAI so that a competitor like Google or Anthropic or xAI can benefit from that. And we go to great lengths in the story to kind of tease those apart and try to be fair. Something like the countries plan is not that; you know, everyone in the room basically agrees that some version of this happened, and they kind of just recall it differently. Now, to be clear, we are talking about hypotheticals, right? We're not talking about a scenario where they did sell AI to Putin or Xi. But basically everyone in the piece agrees that some version of a countries plan happened. And basically, I mean, people should go read the piece, but basically, in the early days of OpenAI, they are all talking about this mission of how, when they achieve the most powerful advanced AI ever, and it's kind of the most powerful invention since electricity, they need it to benefit humanity rather than destroying humanity. How will they do it? What does that mean in practice? And they're kind of bouncing around ideas, like, in a conference room with a whiteboard. And they actually hired someone whose entire job was to make a game plan for, like, okay, how did they do it with nukes? Well, they had this whole thing called the Baruch Plan, and, you know, let's write up a whole proposal about what a Baruch Plan for AI would look like, right? And the allegation is that over time, this kind of non-zero-sum, non-competitive vision kind of morphs into a fundraising pitch, basically. And that then it morphs into, well, what if we sold it to world governments?
Now, Greg Brockman denies that that was the idea. He says it was actually something less scary than that. But nobody denies outright that this took place at all. These are the kinds of things that were being batted around, and that apparently they were also pitching to outside investors, at least one investor. So these things sound crazy on their face, because they kind of are. But it's also like, this is how they were talking about it at the time. This wasn't just a public rhetorical display. This wasn't just what they put in commercials. This is how they talked about it among themselves: there will be an AGI dictatorship, and whoever gets there first will control the Ring of Sauron. I mean, these were routine metaphors that they used in their private correspondence. On the countries plan thing, Greg Brockman does say, we were never going to auction this off to evil world powers. So his story is that there was a more collaborative effort that he was envisioning. But these are all different versions of the way people remember the same set of discussions.
A
What was the argument for the countries plan that is not diabolical and just about playing these countries off each other to make money?
B
There were several iterations of it. What we were told is that there could have been a version where it was, like, trying to make it like mutually assured destruction, so that everyone had an equivalent arsenal and nobody blew each other up. Now, again, I think people who deeply study nuclear deterrence would find some flaws in that analogy. But this is how it was talked about, right?
A
You want to give everyone the nukes.
B
Exactly, exactly. I mean, we're pro nuclear proliferation. Exactly.
A
We like the proliferation. Yeah. Okay.
B
I mean, you know, they wouldn't be the first people in history. I mean, the thing is, like, in this story, as with all these stories, you don't find people who are sitting there twirling their mustache and saying, how can I be evil today? What they saw themselves as trying to do, and this is Sam Altman, Greg Brockman, like, I do believe, based on the body of evidence, they were trying to find a way to be the good guy. And I think that's the story that you tell yourself if you think that you are in this world-historical position. I mean, remember, these are people who routinely compare themselves to Robert Oppenheimer and all the characters in The Making of the Atomic Bomb. And they sort of say, like, okay, who are you? Like, he's Edward Teller, I'm Oppenheimer, who are you going to be? Right. So if you think, and not for no reason, that that's your role in future history books, then you have to come up with a way to be not villainous in a way that's also realistic, and that also wins the race before the bad guys win the race. And so then it does become a kind of Manhattan Project thing. Right? Why would you build an atom bomb? Well, you would do it if the bad guys are going to do it first.
A
Yeah. And I mean, I think Sam acknowledges this to people in your piece. I think from the outside, people are like, oh, these rich people just want more money, right? Well, they're all rich. And yeah, of course, money is a driving motivation for a lot of people, for all people in business. But I think what people sometimes miss is how much power matters, and not even power in the sense of, again, twirling your mustache, but influence. And this notion, this great man theory, in which they think, yes, this is gonna be legacy-defining, and I'm making history, and so I must control this, because other people are bad, and if I control this, it's good. And maybe they don't think to themselves that they're going down the bad path. But when you believe that you are the only person that can do something, and then you just keep getting more and more control, it's going to lead to bad outcomes, historically.
B
Right. And it's gonna lead to race dynamics, which was another thing that OpenAI set out to avoid, ostensibly, from the beginning.
A
On the sort of foreign entanglements: one line in your piece I keep coming back to is the former OpenAI executive saying, quote, we're building portals from which we're genuinely summoning aliens, and that Altman has now placed one of those portals in the Middle East. National security officials in your reporting are clearly alarmed about this, as I think they should be. Altman's foreign financial entanglements are compared to Jared Kushner's. Can you talk about why this alarmed so many people? And my reaction was, like, how is this not a bigger story in Washington?
B
Oh, I mean, Altman's foreign entanglements were compared to Jared Kushner's in the process of him trying to get a security clearance, or at least considering getting a security clearance, when it emerged that members of royal families from, I guess it was the U.A.E. in that case, were giving him very expensive cars as personal gifts. So, yeah, there is a level of foreign entanglement here that is, at the very least, eyebrow raising. Look, the whole story of these companies and their involvement with the government and with intel agencies and national security agencies could totally have been its own piece. I mean, there's a lot of really, really rich suggestive reporting there. So OpenAI was started under the Obama administration, goes through Trump one, goes through Biden, goes through Trump two. What you see and what you hear from talking to officials from these administrations is because the allegation about Sam Altman is that he mirrors back what people want to hear. What you often hear from government officials is when the prevailing winds are toward regulation and toward export controls on sensitive chips and things like that, there would be some push and pull and there would be some tension, the way there often is with industry. But broadly, a lot of the people we spoke to felt, at least under the Biden administration. Yeah, I mean, OpenAI is pro regulation. And then we have a quote from someone basically saying, as soon as Trump got reelected, he said, okay, well, now the shackles are off and I don't have to play that game anymore. That was the perception of these government officials. And then what you see is on the first full day of the second Trump administration, this big announcement that OpenAI will do the biggest build of data centers in history with the support of the Trump administration. And then you see Sam Altman, who had been a stalwart donor to Democrats and Democratic PACs, suddenly saying, Trump is such a refreshing change. 
It's so great to have a pro business president.
A
Do you think his thinking, his political views, actually evolved? Or does it seem more like opportunism?
B
It seems, and we have people in the piece saying this, like, what he wants to do is win the AI race. And so his actions and rhetoric seem consistent with what he thinks will best achieve that. And this is something that you see in closed-door meetings with government officials. This is something you see in public testimony before Congress. This is something you see in his interviews. I mean, one ability that people point to, and this is coming from many, many interviews: it seems like he was particularly well suited to sort of meet a particular historical juncture, where it's 2015, we've just gone through the techlash, and social media executives have had this really blustery approach of, if you regulate us, you're a Luddite and you're ceding the future to China. And so Altman comes to the public with a very different pitch and says, actually, please regulate us. What we're doing is so dangerous that if you don't regulate us, you and everyone you love will die. He goes before Congress and says, I urge you to do more. And we have in the piece Senator John Kennedy, not usually charmed by tech CEOs, saying, oh, could you please write the regulation for us, basically. At the same time, he's making a pitch to his own employees and recruits, the engineers who are so terrified of the power of this technology that they themselves don't want to build it, at least not until it's proven to be safe. And he's saying to them, I'm really one of you. I really am so concerned about these safety things that I need you involved, because you alone can build it safely. And then, according to the reporting we have from investors, he goes and does a pitch deck and says, let's accelerate this and it'll be really profitable for industries. So again, it's like, I don't want to be overly shocked by the fact that, you know, a CEO makes different pitches to different people. But the level of difference, and the level of existential stakes that are being invoked here, is really unusual.
And that's also something that happens from one presidential administration to the next.
A
Offline is brought to you by 3 Day Blinds. At this point we can shop for groceries, furniture and even cars from home. So why is blind shopping still stuck in the Stone Age? That's why you need to check out 3 Day Blinds. There's a better way to buy blinds, shades, shutters and drapery, and it's called 3 Day Blinds. They are the leading manufacturer of high quality custom window treatments in the US, and right now, if you use my URL, 3DayBlinds.com/offline, they're running a buy one, get one 50% off deal. 3 Day Blinds has local, professionally trained design consultants with an average of 10-plus years of experience who provide expert guidance on the right blinds for you in the comfort of your home. Just set up an appointment and you'll get a free, no-obligation quote the same day. Not very handy? The expert team at 3 Day Blinds handles all the heavy lifting. They design, measure and install, so you can sit back, relax and leave it to the pros. Love 3 Day Blinds. I've been using them for years and years, even before they were advertisers. They're great. They come to your house, you tell them what you want for blinds, they give you a whole bunch of options, then they help you pick them out and they help you install them. It's all very easy. And the blinds themselves are just very high quality. 3 Day Blinds has been in business for over 45 years, and they have helped over 2 million people get the window treatments of their dreams. So they're a brand you can trust. Right now, get quality window treatments that fit your budget with 3 Day Blinds. Head to 3DayBlinds.com/offline for their buy one, get one 50% off deal on custom blinds, shades, shutters and drapery, and for a free, no-charge, no-obligation consultation. One last time: that's buy one, get one 50% off when you head to the number 3, D-A-Y, blinds.com/offline.
C
If you work in university maintenance, Grainger considers you an MVP, because your playbook ensures your arena is always ready for tipoff. And Grainger is your trusted partner, offering the products you need all in one place, from HVAC and plumbing supplies to lighting and more, all delivered with plenty of time left on the clock, so your team always gets the win. Call 1-800-GRAINGER, visit grainger.com, or just stop by. Grainger, for the ones who get it done.
A
I want to ask about Altman's involvement in the battle between Anthropic and the Defense Department. So Hegseth blacklists Anthropic as a supply chain risk because the company wouldn't drop its prohibitions on autonomous weapons and domestic surveillance. Hundreds of OpenAI and Google employees sign a letter defending them. Meanwhile, as you guys report, Altman has been negotiating with the Pentagon for at least two days while signing an internal memo claiming OpenAI shared Anthropic's ethical boundaries. Emil Michael, a Defense Department official who had previously been Travis Kalanick's right-hand man at Uber, says on the record, "I called Sam and he was willing to jump." Is there a less cynical reading of that, or is that just the reading?
B
I would say the less cynical reading of it is something we talked about before, which is that people don't think of themselves as being the bad guys. People think of themselves as doing the best job they can to be the good guys in a tough set of circumstances. So I think what Sam's defenders would say, and we talked to multiple Altman defenders, Altman loyalists, people who've stayed at the company for a long time, people outside the company, what a defender would say about this Pentagon interlude is: okay, he saw that the relationship between the Pentagon and Anthropic was fraying, and he wanted to come in and get those contracts so that someone worse couldn't get them. And probably someone worse would be Elon in that scenario. So that's the most defensible reading. And Sam Altman has said publicly, look, this $200 million contract that we got from the Pentagon, that's peanuts to us. It wasn't really worth the PR hit for me to do that. I only did it because I was trying to help. Now, people can believe that or disbelieve it. Maybe it's just that he's such an instinctive dealmaker that he couldn't leave a deal unmade when he saw an opportunity. Maybe he believes in Anthropic's red lines, and maybe he believes that he has gotten a better deal. We don't know, because they haven't made the contract public. They've just sort of said, the government says they won't do mass surveillance, and we believe them, but we'll see. Again, one of the benefits of putting all this together in a big, long New Yorker piece is you can really see the evolution from the start of the OpenAI dream until now. And I think if you could put one of the co-founders or one of the early employees from 2015 into a time machine and say, we're swooping in to get the autonomous drone contract with the Department of War, they would find that a little surprising based on the original pitch.
A
Yeah, I mean, reading the piece is just like watching the train come down the track and nothing stopping it. So, speaking of Anthropic, the day after your piece dropped, the company announced it's withholding its newest model, Mythos, from public release because they believe its cyberattack capabilities are too dangerous. Meanwhile, Sam Altman just told Axios this week that AI-enabled cyberattacks are, quote, "totally possible" within the next year. Your piece reports on an OpenAI representative who literally asked you, "What do you mean by existential safety? That's not a thing." What do you make of Anthropic's decision, and how do you compare it to what is currently happening at OpenAI?
B
Yeah, just to clarify: I've seen some people saying the existential safety thing was, like, a gotcha journalist question, where the question was worded in a confusing way and they just didn't understand it. And I should say, we put it multiple ways, multiple times, because there's a difference. When you say safety, sometimes that means user safety, user privacy, making sure people don't get doxxed, or, you know, making sure that the chatbots don't say naughty words or whatever. And then there's existential safety, which is making sure that the thing doesn't literally kill all of us. Which, again, I didn't invent that as a fear. Like, OpenAI told me to be afraid of that. And that was just not something that this representative had ever heard of, apparently. Look, the thing with Anthropic is tricky, because on the one hand, this is apparently the first instance we've seen of a company being asked to do something and saying, no, we won't do it because that violates our ethical principles, and therefore putting itself into a really perilous position as a business. On the other hand, it's not like Anthropic is really acting like an AI safety lab nonprofit, either. I mean, they were only in that position because they were the classified system of choice at the Pentagon to begin with. And they've made many, many other compromises; they're also raising money in the Middle East. So I think it's this very complicated game-theory dynamic where everybody thinks, or wants to think, we're doing the best we can, we're between a rock and a hard place. But it's not like Anthropic is acting super unblemished by their own lights, either. The whole idea behind OpenAI, and then Anthropic subsequent to that, the sort of pristine rhetorical idea, is: we're going to incentivize a race to the top so we don't have a race to the bottom. And I don't see anyone racing to the top.
I see a lot of racing to the bottom, or at best somewhat slowing down the race to the bottom.
A
Yeah. And this is something that I've come to think is key to understanding this whole thing, as I've interviewed some people at these companies and done a lot of shows on this. We still think in terms of characters and villains, good guys and bad guys, but there is a larger structural issue here. Yes, Anthropic can seem right now like they're doing their best, and maybe they're the best of the bunch. Obviously, I don't think Elon's running a tight ship over there at xAI. And reading your piece about Sam Altman and OpenAI, that doesn't seem so great either. But it's not that these are just individuals who have personal moral failings or put the profit motive above all else. There is a larger system here: if you have a competitive environment, both within this country and globally, where all of these different companies and all of these different individuals are racing to build this technology within a capitalist system, this is what's going to happen.
B
Absolutely. And, I mean, to be fair to all the crazy hypothetical scenarios we were talking about with the countries plan, this is something they foresaw and, at least to some extent, theoretically tried to avoid. The question is, A, was it ever avoidable, and B, how hard did they try to avoid it? But, you know, it is definitely true that there are structural things at play here that are more important than any of the individual personalities. And I would not want people to come away from this piece thinking, okay, Sam Altman should not be AGI dictator, so clearly someone else should.
A
Right?
B
Like, that's not the point here. The point is, it is crazy that we're having a conversation about AGI dictators at all. And it's crazy that that's not a super crazy thing to worry about.
A
Well, so that brings us to regulation, because one way to deal with the systemic incentives is to actually pass legislation, rules, regulations. A few hours after your piece was published, OpenAI just happened to release a 13-page policy blueprint calling for a "new deal for the AI era": taxing capital, a public wealth fund, a four-day workweek. One AI expert, Anton Leicht, called it, quote, "comms work to provide cover for regulatory nihilism." How are you reading the timing? Do you think your story had anything to do with it?
B
Yeah. And they also hired a ghost hologram of FDR to roll it out. No, look, again, I don't know what's in anybody's heart or mind, but it definitely came out the day our story came out. And they also acquired this tech talk show, TBPN. While we were closing the piece, they had a few interviews lined up that seemed related to the themes of our piece. Look, it is the absence of a coherent regulatory regime that makes the PR battle so intense, to some extent. Because if there were clear rules of the road, you could talk about who's playing by the rules. If everyone agreed on what to do technically to keep these systems safe, you could have a purely technical conversation. But in the absence of those things, it becomes a PR battle, and so you see these companies engaging more and more in a PR battle. And one thing that people consistently say about Sam Altman is that he's an incredibly gifted pitchman. So the fact that he's given different pitches to different groups over time, you could say that's a feature, not a bug, depending on your perspective. Anyone who's played around with this stuff knows that the models have certain built-in tendencies and tics and traits. And one that we talk about in the piece is sycophancy, which is this problem that the models can't stop telling you what you want to hear. And that could be a feature or a bug, depending on what your goal is. If you can't stop telling people what they want to hear, you might not always arrive at the most blunt, true answer, but it could be a compelling or appealing answer.
A
Keeps you on the platform.
B
It sure does. It sure does. And I'm not here to say that I know what the regulation can or should be. I mean, to the extent that we are summoning aliens out of portals, that's a very hard thing to regulate. But I do know that the regulations OpenAI claimed to support, they no longer seem to support. And in fact, we have reporting showing that they were kind of going behind the scenes to try to scuttle that very kind of regulation, asking people to call Nancy Pelosi and Gavin Newsom to get it scuttled. So we now live in a landscape where, you know, these things are being built, and if you are a state politician who wants to introduce a state bill to control them in New York or California, you might run for Congress and have a massive super PAC dropping money against you because you support AI regulation. So that's another way that the ideal scenario, as it would play out in an Isaac Asimov novel, interfaces very uncomfortably with the realities of politics under capitalism.
A
Well, I noticed that even with the "new deal for the AI era" that OpenAI and Altman released, it is heavy on economic regulations and policy proposals, all of which would require the government to deal with taxes. And it basically wouldn't really hurt the company that much or stop the company from doing what it wants to do.
B
Well, and we're only, again, this is like last summer now, so it seems like old news, but we came very, very close to living in a world where not only was there not robust AI regulation, but where there was almost a federal provision mandating a moratorium on state regulation. Right? I mean, you remember this. So, according to the reporting from that time, it was Steve Bannon and Mike Davis and other people on the right who were lobbying against that. So there's some strange-bedfellows stuff going on here. But we almost had a situation where not only do we not know how to regulate this new alien technology, and not only do we not have federal regulation to do it, all we're doing is federally banning any regulation in the states. So that's kind of where we almost were. And where we are is, okay, now we just don't have regulation. There's a couple of bills in California and other places, but it's very rudimentary.
A
Well, and in the OpenAI policy blueprint, the safety section is almost entirely voluntary. What they're proposing, there are some regulations on economic dislocation, but not really anything that they seem to be willing to accept.
B
On the safety side, look, again, there really is a good-faith argument for and against a lot of these regulatory proposals. I mean, a lot of people watched this Pentagon thing go down and used it to say, okay, is this the government that you want regulating this technology? Really? So there really are good-faith arguments on all sides. It's just that when so much of the argument is being driven self-interestedly, it's hard to know where the good-faith arguments begin and end.
A
You report that the company is preparing for an IPO at a potential trillion-dollar valuation. Eric Ries told you that in other eras, some of the company's accounting practices would have been borderline fraudulent. A board member told you the company is, quote, "levered up financially in a way that's risky and scary right now." Do you get the sense that this is a bubble that will pop? And if so, how do you think that changes the story you guys told in this piece?
B
Another person who told us that this is probably a bubble is Sam Altman, who has said multiple times that he thinks it's a bubble and that someone is going to lose a phenomenal amount of money. I believe that's a direct quote. So, yeah, I worry about the potential for a bubble here. And look, for people who are not super read in on the technical details and are kind of sitting a lot of this out, one simple binary that often gets tossed around is: is this a bubble, or is this a really useful, transformative technology? And I think it's key to remember that it can be both. Right? A lot of the biggest bubbles that we've seen were around the building of the transcontinental railroad or the laying of fiber-optic cable during the telecom boom. These were massive infrastructure projects that ended up being really useful and economically transformative, and they also created massive bubbles followed by recessions. So, something to look forward to. You can end up using all that infrastructure eventually. Now, a lot of people say it's even worse in the case of the data centers, because unlike train tracks or fiber-optic cable, these chips depreciate so quickly that basically you're paying for them, and then three years later they're not usable and you have to do the investment raise all over again. So it's definitely an overheated moment economically. And basically the only way, based on what the experts told us, that we come out of it without a bubble is if these models just keep leaping and bounding and growing in their capabilities year over year and month over month and week over week. And nobody knows. That's impossible to predict. So you can raise investment based on promises, but the technological breakthroughs either happen or they don't.
A
Yeah, so it's either a massive economic bubble that bursts, or a technology that quickly becomes the killer robots that we're all afraid of, or perhaps both.
B
It could always be both.
A
It always is. So you spoke to a lot of people who left OpenAI over the concerns that we've talked about: Sutskever, the emojis, the whole superalignment team. These are people who took huge pay cuts to work on what they thought was the most important problem in the world, and most of them end up leaving in disillusionment. Like you said, some are competitors, but some have just left. What did you take away from talking to them about that loss, that disillusionment?
B
Yeah, and this is another area where we were trying to filter really hard for competitor gossip and competitor gripes. You know, one of the strange things about this industry is that everyone, as soon as they leave one company, goes off and raises a billion dollars and starts another company. So they're all kind of rivals at this point. Ilya Sutskever has his own company now called Safe Superintelligence. Dario Amodei obviously has his own company called Anthropic. So we were trying to really filter and not just launder people's grievances and complaints. But one thing that does become pretty clear is there were some people who were really close to this technology who really, really believed that it could be massively dangerous. And again, this is something that often gets discounted as, oh, this is just an attempt at regulatory capture, this is just people trying to hype up their product. I am here to tell you there were and are people close to this technology who really, really think it's dangerous. Now, why are they still building it? Good question. There's kind of a selection-bias problem here, where the people who are so scared of it that they don't build it, they're not in the piece, because they stopped building it. Right? So you do have this weird game-theory problem where you only end up dealing with the people who are scared of it and yet continue to be in the race. But the scenarios where this thing goes off the rails, there are more of them than I realized, and they are less far-fetched in some ways than I realized. Still far-fetched, but they don't necessarily require, you know, the thing to wake up and become Skynet and decide that it hates humanity and destroy us. There are many, many other ways that this thing can go wrong. And I'm actually just going to read you one thing, because I think it's relevant. This is a quote from a blog post.
Superhuman machine intelligence, quote, "does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal, wipes us out." That's a quote from a blog that Sam Altman wrote in 2015.
A
And so it's an oopsie that destroys civilization.
B
An oopsie. And you know, some of the best sci-fi stories involve oopsies. But, you know, we made it through the nuclear age so far. Maybe this week that'll change, and we may make it through this too. But it's not to be taken lightly, and I think a lot of people take it lightly or ignore it. Look, I don't know what's going to happen. The people who are building this stuff don't know what's going to happen. To the extent that AGI is a meaningful concept, I don't know if it will arrive in six weeks or six years or 60 years or never. But I know enough to be concerned about the power of this stuff. And being concerned about the power of it doesn't mean you think it's good or bad or that this or that person should be in control of it. I think it just means taking it as seriously as the people who are building it.
A
Yeah, I was going to say, just a final question, because you worked on this for so long: what response to this piece would tell you it moved the needle? And have you seen any version of it yet?
B
Yeah, I mean, you know, I don't go into these things with a, like, oh, I hope it does this, or a kind of activist agenda. Obviously, even if I wanted to, journalism is not really that powerful. But I would like for people to reckon with how serious this could be. And again, I'm not here to say everyone should be a doomer. All I mean is, it would be nice if people lived in the timeline that they actually happen to live in. And you guys do this with politics all the time, right? Dealing with people who don't want to live in a world where we have a president who's saber-rattling about taking out all of Iran's bridges and power plants for a war that he started for no apparent reason. But that's the timeline we do live in. And I think an equivalent of that applies with the AI stuff. You can think that people are spinning out and getting wrapped up in hype cycles, and you can think all that stuff, but none of that is mutually exclusive with taking the underlying thing seriously and taking some of the concerns seriously. Because, like it or not, it's here, and it's only going to get, as far as I can tell, more powerful.
A
Well, glad that you and Ronan took it seriously and wrote this piece. Everyone should check it out. Andrew Marantz, thanks as always for joining Offline.
B
Thank you. Really appreciate it.
A
Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau. It's produced by Emma Illick-Frank. Austin Fisher is our senior producer and Anisha Banerjee is our associate producer. Audio support from Charlotte Landis. Adrian Hill is our head of news and politics. Matt De Groat is our VP of production. Jordan Katz and Kenny Siegel take care of our music. Thanks to Delon Villanueva, Eric Schutt and our digital team, who film and share our episodes as videos every week. Our production staff is proudly unionized with the Writers Guild of America.
Offline with Jon Favreau | Airdate: April 11, 2026
Guest: Andrew Marantz, staff writer for The New Yorker
Main Theme:
A deep investigation into Sam Altman, CEO of OpenAI, and the internal culture, power struggles, and existential risks surrounding one of the most influential companies in artificial intelligence. The episode examines whether Altman, a man whose leadership mirrors the ambiguity and potential dangers of the technology he develops, should be trusted with the future of AI—and, by extension, humanity.
Jon Favreau interviews Andrew Marantz about his blockbuster New Yorker exposé (co-authored with Ronan Farrow) revealing the contradictions, governance failures, and personal dynamics driving OpenAI. The discussion pulls back the curtain on Altman's leadership style, the company’s shifting mission from nonprofit do-gooder to for-profit juggernaut, and why structural incentives—not just individual flaws—make the current “AI race” so dangerous. The episode ties OpenAI’s fate to larger questions about technology, power, money, and whether society can adequately regulate a technology that its own creators describe as potentially civilization-ending.
Founder Mode & Control (13:51):
Mission Drift & Profit Motive (15:35–16:54):
Playing Powers Off Each Other (21:58–27:05):
Power Over Money (27:05–27:58):
On AI’s Societal Impact:
“AI is part of, you know, weaponry at the highest levels of the military. It's part of surveillance. It's part of basic transportation infrastructure and weather prediction. It's, you know, liquefying our brains with slop. It's contributing to what experts call human enfeeblement ...” — Andrew Marantz (06:41)
On Altman’s Leadership Style:
“He's telling mutually contradictory stories to different sets of people.” — Andrew Marantz (12:10)
On Power and Motivations:
“When you believe that you are the only person that can do something, and then you just keep getting more and more control, it's going to lead to bad outcomes, historically.” — Jon Favreau (27:41)
On the ‘Countries Plan’:
“There will be an AGI dictatorship, and whoever gets there first will control the Ring of Sauron.” — Andrew Marantz (24:40)
On Regulatory Failure:
“I do know that the regulations that OpenAI claimed to support, they no longer seem to support. And in fact, we have reporting showing that they were kind of going behind the scenes to try to scuttle that very kind of regulation.” — Andrew Marantz (45:44)
On Existential Risk:
“Superhuman machine intelligence ... does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us ... but in an effort to accomplish some other goal, wipes us out.” — Sam Altman (read by Marantz) (53:59)
| Timestamp | Topic/Quote |
|---------------|------------------------------------------------------------------------------------------------------------|
| 05:50 | How reporting changed Andrew Marantz's view of AI's danger and importance |
| 09:44–13:36 | Altman’s firing, engineered return, and the myth of the external investigation |
| 13:51 | Founder mode—Altman's increased control, flying in the face of OpenAI's founding ethos |
| 16:42 | OpenAI’s original nonprofit structure and the betrayal of its mission |
| 21:58–27:05 | The "countries plan"—selling AI to world powers; self-concept as would-be Oppenheimers |
| 27:41 | The seductive feedback loop of power and control for founders |
| 28:42–33:27 | Altman’s foreign gifts, switch to pro-Trump rhetoric, and playing to shifting regulatory winds |
| 35:34–38:23 | OpenAI, Anthropic, and the Pentagon—where idealism meets military pragmatism |
| 39:07–42:10 | Anthropic’s withholding of Mythos; existential safety questions; systemic race to the bottom |
| 42:46 | "It is crazy that we’re having a conversation about AGI dictators at all." |
| 43:31 | OpenAI’s “new deal for the AI era”—PR as regulatory cover |
| 48:51–50:58 | The AI bubble: bubble vs. transformative tech; Altman admits it is a bubble |
| 51:55 | Real existential fears among those closest to AI |
| 53:59 | Sam Altman’s 2015 warning on accidental apocalypse via AI |
Throughout, the tone is serious but laced with moments of dark humor and cultural awareness, fitting for a topic described as both sci-fi and all-too-real. Favreau and Marantz lay bare that no savior—in the form of Altman, his competitors, or regulatory rhetoric—is coming. The system is incentivizing a "race to the bottom," and the existential threats, deception, and lack of regulation should be a wake-up call.
"Being concerned about the power of [AI] doesn’t mean you think it’s good or bad or this or that person should be in control of it. I think it just means taking it as seriously as the people who are building it." — Andrew Marantz (54:52)
Recommended Reading:
For listeners who haven’t tuned in:
This summary covers the core revelations about Sam Altman, OpenAI, and the structures shaping advanced AI—with an emphasis on why the stakes are so high, why individuals matter less than systemic incentives, and why everyone should be paying much closer attention.