Transcript
Robert Wiblin (0:00)
Hey, everyone. Just wanted to quickly let you know that the 80,000 Hours podcast is currently hiring for three new roles. There's a podcast growth specialist, someone focused on packaging, promoting, and distributing the show. There's a research specialist, someone to help make the content sharper and more insightful. And finally a producer or content strategist, someone to plan episodes and figure out how to fix them in post. The roles might be done in London, San Francisco, or remotely. You can find a lot more details in the job listings on our job board at jobs.80000hours.org. If you like the show and would like to make it better, or make there be more of it, or help it find a bigger and more valuable audience, then please do consider applying by the deadline of 30th November 2025. Let's get on with the show. So there's a sense in which this group of four people are incredibly powerful, if you're saying that they have full discretion to define what safety and security is. There's almost no limit to the constraints they could put on OpenAI's releases. But given the forces that would be brought to bear to discourage them from preventing OpenAI from training or deploying very powerful AI models, they might have to have a lot of intestinal fortitude to actually exercise that authority and really have the courage of their convictions.
Tyler Whitmer (1:06)
I think it'll depend a lot on the people and how well they perform in these roles. I'm sure you could dream up a better team, but it's certainly not crazy to think this is quite a good one. It's going to be a really, really difficult job, and one of the things we'll be looking out for is how they're supported: is there staff supporting the SSC, or is this really going to be the obligation of four volunteer corporate directors? Which I think would be a big failure mode. It is certainly a better world if you take the baseline as what they were planning to do with the December announcement. It is harder to say it's a great deal compared to the status quo. And I think it's easy to say it's just a bad deal compared to what OpenAI should have been.
Robert Wiblin (1:47)
Okay, so we are back with an OpenAI emergency podcast, because we have something of a resolution on their attempted for-profit restructure. The Attorneys General of California and Delaware have forced through quite a lot of juicy changes to OpenAI's plans before they would allow the restructure to go through. But OpenAI, for some reason or other, didn't mention them very conspicuously in the press release about the for-profit restructure, so they've gone, I think, reasonably underreported in the press. To walk us through all of that, we've got Tyler Whitmer, for many years a commercial litigator and then a partner at a major trial law firm. In 2024, he founded Legal Advocates for Safe Science and Technology, which for the last year has been closely tracking OpenAI's restructure proposal and making public interest submissions to the California and Delaware attorneys general. He's a co-author of an article all about the announcement, the changes, and what he thinks people should be looking for going forward, which you can read at notforprivategain.org. So, Tyler, last December OpenAI proposed to completely sideline the nonprofit entity and basically become a for-profit with roughly no effective constraints. Can you quickly refresh our memories on what they said they were going to do?
