
A
If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point, the money stops working for you. Within the next 10 years, we'll have AI systems that could outperform all humans, and along with that, they'll outperform all humans in conducting cyber attacks or developing new weapons. The transparency letter wasn't making any claim about how the restructuring should go; it was just asking for kind of more clarity from OpenAI. OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so. They wanted to know every single person who'd ever donated to us and the date and amount of that donation. It is a matter of fact that they didn't succeed in really slowing us down. In fact, I think that they kind of made a mistake here, where this is a bad comms moment for them. In the absence of, like, really strong technical solutions to the problems AI faces, or governance solutions where it's like, here are the exact controls and committees and evaluations you need in place, we should at least know what's happening. We should at least not be walking to the cliff blindfolded.
B
Welcome to the Future of Life Institute Podcast. My name is Gus Docker, and I'm here with Tyler Johnston, who is the executive director of the Midas Project. Tyler, welcome to the podcast.
A
Thank you so much, Gus. It's great to be here.
B
Great. All right, why don't you start by telling us about the Midas Project, its mission, its history, and so on.
A
Yeah. So the Midas Project is a watchdog nonprofit focused on frontier AI developers. I started the Midas Project about a year and a half ago now. I was transitioning from working in animal rights, where I did corporate accountability work, and I had been following AI since, you know, 2019, when I was playing with, like, Jukebox and the GPT-3 API when that came out and thought it was so exciting that AI models could generate coherent paragraphs of text. And then, yeah, starting in 2023, when GPT-4 came out, I started to feel the acceleration a bit, and I started to get a bit worried that our society wasn't prepared for what was coming, and the companies themselves weren't prepared for what was coming, which I think many of them would admit. I don't know if they're so keen to admit it now. And I thought that some of the same corporate accountability playbook that I was using in the animal rights space, to try to encourage stronger self-governance among, like, food companies in terms of how they treat their animals, I thought similar things could be effective when it came to using public communications and advocacy to ask AI companies to adopt stronger voluntary safeguards. So that was the thinking.
B
And what is that playbook? Could you walk us through your thinking here? Why did you expect that to be successful?
A
Yeah, at the core it's an incentives question. In an industry where companies respond to the incentives created by public opinion, you can have a lot of leverage just by taking a flashlight and shining it around to show customers what's happening behind the scenes, whether it's the supply chain the company is sourcing its materials and inputs from, or, in the case of AI, the sort of externalities that are being generated, even the ones that haven't materialized quite yet, and what the companies themselves believe about what the technology could do to our society. So, I mentioned animal rights; that's the case I know best. There, there's this very neat thing you can do where you can go to a company, and if they're selling eggs on a store shelf, it's a very pristine, clean environment, and there's this metaphorical curtain that is hiding the pretty grisly experience that animals are facing in their supply chain. And through public communications you can essentially pull that curtain open and show customers the whole picture. And customers, I think, rightly find it outrageous, and the company will respond to that incentive. In the case of animals, we were asking companies to adopt commitments to source cage-free eggs or improve the treatment of broiler chickens that are raised for meat in their supply chain. But, you know, similar tactics have been used in the environmental movement and in other movements, and I think in basically any industry where there are these kinds of negative externalities, these ugly things happening behind the scenes, or a bit of causal distance between the product and the bad thing. If it's a competitive market where companies care about what customers think of them, or if it's an area where regulators are active and want to do something and care about what the public thinks, you have this powerful leverage as an actor who is just doing communications and public advocacy, to move companies that are thousands of times larger than you simply by shining the flashlight around.
B
And so in the case of animal welfare or animal agriculture, animal factory farming, you have kind of concrete externalities or concrete cases of harm. And perhaps you're beginning to see that in the AI space. But what are you pointing at? What are you shining the flashlight on?
A
Yeah, it is a good question. And I think you can shine the flashlight even on, say, speculative or currently immaterial harms. And one way to do that is to just point out that experts in the field, and many of the people at these very companies, will freely admit, to their credit, I should add, to their immense credit, that these harms are quite real. Like, it's quite likely that they'll materialize, and on a time frame that would surprise many people, that many people wouldn't immediately believe. You know, things like: within the next 10 years, we'll have AI systems that could outperform all humans, and along with that, they'll outperform all humans in conducting cyber attacks or developing new weapons. And so, you know, that's one of the things you can shine a light on.
B
Yeah. Do you worry there about creating the wrong incentives? So if you have a company that's disclosing their worries about this emerging tech to the public, they are acting responsibly, but they are also giving ammo to an organization like the Midas Project, which could come back to haunt them. So do you worry about people, and specifically leaders at AI companies, keeping their worries to themselves so as not to give ammo to a project like this?
A
Yeah, I do worry about it. And I should also add, I brought that up as a hypothetical, but I think that there are other, frequently more useful, things to shine a light on, and those can include the more concrete failures that are indicative of problems we'll face in the future. So for instance, the alignment failures that recent models have had, you know, like the visible ones at xAI around, like, Mecha Hitler when that was going around the Internet. With cases like that, you are less susceptible to this problem, but you're still susceptible to it, because you only find some of those cases when you go looking for them. And so you also don't want to punish the companies that are doing all the looking for the misalignment cases, as opposed to accidentally experiencing them, as in the case of xAI. So, yeah, I do think it's a problem. I think the best way to address it is just to try to be intentional about target selection, where, when we're choosing what company to focus on, we're thinking: are we choosing them because they are the most honest about the risks here, or because they're maybe the most responsible for the risks, or, even worse, trying to deflect the responsibility and mislead people about the risks?
B
Yeah, it seems like a good approach to take here. So in some sense the rational response from the companies here would be to close ranks, to perhaps have strong legal mechanisms to keep information from getting out, and, you know, to prevent their employees and their CEOs from doing interviews and so on. Do you think that's happening? Do you think this incentive is actually materializing in the world?
A
I think that might be happening for many reasons. Yeah. It does seem to me that the industry is getting more and more locked down and wants to be more and more careful about even internal paper trails. I know there was recently a case of an intellectual property lawsuit against OpenAI where, I believe, the plaintiffs have gotten access to all the internal Slack messages at OpenAI and are going to try to use that as evidence to support their case. And so I think they're going to be more locked down externally. I think they're going to be more locked down internally, where they're thinking: what conversations should we be having on Slack? Maybe for valid security reasons, what do employees need to know and what do they not need to know, and how can we make sure they only hear the former category? In some instances, for security reasons, I think that might be advisable. But in many instances I think it poses a big risk in terms of what the public knows and what they don't know. And I think there's some very low hanging fruit around transparency, where it would be really good if the companies would kind of draw a line: where they're going to disclose whether they're training on chain of thought, where they're going to disclose the sort of evaluations that they're using to test their models and the results of those evaluations. And something we'll talk more about, I'm sure, is also the governance structures they have in place: how can a company make a credible case that there are internal structures in place that will catch something dangerous before it's too late? And so I think for questions like that, more transparency is extremely important and there's a very strong case for it.
B
Yeah, one interesting aspect of the approach you're taking here is the fact that you might be able to effect change with a thousandth or even less of the resources of the companies. And so how do you think of that? Because it seems like if you were to go against the companies in a kind of head-to-head battle on resources, say trying to lobby or something, that might be a tactic where you're bound to lose just because these companies have basically infinite resources. So yeah, what can you tell us about power differentials and using the leverage you have even if you don't have as many resources?
A
Yeah, I think that groups like the Midas Project are frequently in an insanely leveraged position where you can get a lot more traction than you would expect for the size. I mean, one example that I think of in the animal rights space, to remind myself that this is not an even playing field: the organization I used to work for, the Humane League, has been pretty successful, along with some partner organizations, in kind of shifting the entire corporate supply chain for eggs. I think it's gone from like 3% cage-free in 2015 to like 50% in 2025. Their annual budget, I think, was like 10 to 20 million dollars. The annual marketing budget alone for Walmart is 9 billion dollars or something like that. So even to change one company, it looks like the odds are totally stacked against you, not to mention to change an entire industry. I think the reason that it works is because you have this immense intangible asset in the fact that, on many of these issues, you're fundamentally right. The evidence for you being right is there, and the public is kind of already on your side. And so, you even mentioned lobbying offhandedly as an example of a case where you would just lose by being outspent. Even in the case of lobbying, if you're going up against an insanely well funded industry that is lobbying against regulations for a technology where there's a common sense case for regulation and where most of the public is on your side, it's not obvious to me that the industry wins. I think that the intangible asset of strong public buy-in and a strong common sense case to be made for what you're asking for is in some ways a more important asset than infinite money. You know, you could have all the material resources in the world, but if the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point the money stops working for you. So I think that's why, you know, I don't think I could go up against a huge company like OpenAI and say, hey, you should change your logo to be yellow, I don't like that it's black and white, I think it should be yellow. Because the public doesn't care, there's no buy-in, and they could very easily quash whatever niche interest I have. But if I'm making the case for, hey, you should do this common sense thing that the majority of the public, including the majority of your own customers, think you should do, that most of them just aren't thinking about right now because they haven't been tracking the issue, and once they are tracking the issue, they'll be pretty upset about the fact that this was even a problem in the first place, that this is something you haven't done already, then you're in a pretty leveraged position to make a case for what you want.
B
Yeah, yeah. Let's talk about some of the projects that you've been engaged in at the Midas Project. So you have the OpenAI Files and you have an open letter to OpenAI. Could you talk about both of those?
A
Yeah. So this has been our main focus for the better part of this year, because OpenAI throughout the year has been undergoing this restructuring. There's already a case for them being, you know, the most important AI company for advocates to frame their messaging around, because they're synonymous with AI for many people thanks to ChatGPT; I think if you look at Google Trends, ChatGPT just dominates all of the other AI products. And so when you're communicating to the public, OpenAI is kind of who they're thinking of. And this restructuring was taking place that I thought, and still think, is kind of the biggest story in AI right now. And we asked ourselves: what information is not out there right now, or, to the extent that it's out there, is under-discussed and under-indexed, that is relevant to this restructuring, to contextualizing it, and to potentially generating a better outcome, either because it lets the public know what they should be worried about and what they should be fighting for, or because it lets regulators know the full context of this organization. And that was the motivation for the OpenAI Files, which was a kind of web-native report that was maybe 14,000 words; it was very long, and it was for the most part an archival project. There are relatively few new bits of information in there, although I think there are a few. For the most part, it's summarizing stories about OpenAI, quotes from OpenAI themselves and from employees who worked at OpenAI, and various concerns we had about their governance and their safety decisions and the integrity of their leadership that had surfaced over the past decade. And the reason it seemed important is because there are a lot of these examples of concerning individual events that happened, or concerning things that one person said, and taken in isolation, which is how you would normally encounter those stories, it's easy to kind of forget about it and think, oh, well, it's a relatively small thing, they messed up there, but they fixed it and it's better now. Or, okay, well, this one person has a grudge against them; who's to say whether this actually demonstrates a pattern of governance failures? And so I thought there was a really strong case to collect all of it in one place and kind of create a narrative and say: take it in its totality. What does this mean in terms of whether we should by default trust this organization to govern itself well, or whether we have to be a little bit more critical of the choices they're making? So that was the motivation for the OpenAI Files. And then the transparency letter, which, as you mentioned, came, I think, two months after the OpenAI Files came out, was really honing in specifically on the restructuring. It's a letter that has now had 10,000-plus signatures, including a number of former OpenAI affiliates, leaders in the field of AI, dozens of civil society organizations, and just thousands of members of the public. And it was a letter that I think was pretty simple and common sense. It wasn't making any claim about how the restructuring should go; it was just asking for kind of more clarity from OpenAI. There were seven questions we asked, and I think these were questions where the answer was very high stakes. It was a very important thing for the public to be clear on.
And I had a sense, from the messaging they were using, that, whether intentionally or not, they were sometimes obfuscating the truth about that question, or I had some reasons to believe that the answer wasn't as good as they were portraying in public. And so this was a request for them to just go on the record and say plainly, for each of our seven questions, what the outcome would be if they got their way in the restructuring.
B
Do you have a sense of whether OpenAI is more chaotic internally than your average startup? This is a defense I've heard some people give: that if you look at a startup, the marketing is sleek, but it's complete chaos internally, and OpenAI is just no different from that. What's in the OpenAI Files that's perhaps more damning than just what's happening at a regular startup?
A
Yeah, it's an interesting question. I have never researched a regular startup as thoroughly as I've researched OpenAI, so I could be subject to some bias here. But I do think that OpenAI is unique; I don't think it's the case that if you subjected any organization to this level of scrutiny, they'd all come out looking this way. And I think Sam Altman himself has said things to the effect of: we're doing something extraordinary here, and as we get closer to this moment where we develop AGI, say, the stakes are just going to get higher and higher, and the conflicts are going to increase, and on all sides, everyone who has a view on how this should go is going to become more and more aggressive about pursuing their vision. And so I think that just from the outside, before you even consider the specific mistakes that I believe OpenAI has made, if you were to ask yourself: do you expect more conflict and risk taking and prioritizing moving quickly over moving safely from a B2B SaaS startup that's just making some kind of trivial piece of enterprise software, or do you expect it from the company that genuinely believes that they're going to automate all human labor? I think you should expect more of that from the kind of hyperscaler startups like OpenAI and Anthropic and xAI and others. In terms of what we actually found, I mentioned we wanted to put all these examples in conversation with each other, to review the totality of them, and I think you see patterns emerging that indicate, for example, that OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so. And, you know, treating these people as real people, I don't think that they're trying to silence critics for some Machiavellian reason, nor because they have a power fantasy or anything. I think they may very well be motivated by mission-related reasons, where they really want AI to go well, but they may also think, well, it's much more important that we do it, to make sure it goes well, than that our competitors do it; it's much more important that we do it versus our national adversaries doing it. And for that reason, things that could slow us down instrumentally, like criticism from former employees or from external critics, are something that we should really try to clamp down on.
B
Yeah, let's dig into the subpoenas. So tell me about what you've received from OpenAI.
A
Yeah, so I received a subpoena from OpenAI. I received two subpoenas, actually. This was in August. I wasn't home at the time, but I got a text from my roommate saying that someone was at the door with papers. And there was a bit of back and forth to eventually get hold of them. But when I did, it was two subpoenas, one directed to the Midas Project and one directed to me personally.
B
And.
A
Yeah. We can get into everything they asked for; there were 11 requests for production. But the context for this is plainly about the restructuring. As mentioned, our organization has been speaking out about the restructuring, dozens of other organizations have been speaking out about the restructuring, and similarly, Elon Musk is speaking out about it. And going further than that, he's, you know, taken them to court about it. And so this subpoena was in Elon Musk's case against OpenAI, or maybe more precisely, it was related to OpenAI's counterclaim against Elon Musk. So after he took them to court, suing them for undergoing this restructuring, which I think you could have different views on: if you wanted to be charitable, you could say he co-founded the nonprofit, donated a bunch, and clearly has a stake. If you wanted to be uncharitable, you could say that he's running a competitor that's not a nonprofit and that would benefit from OpenAI being slowed down. Whatever view you want to have of him, he's suing them over it, and they're countersuing, saying that he's waging a harassment campaign about this. So the subpoenas were ostensibly related to this case. And one of the main things they ask for, which I think is basically fine and acceptable, is whether he has supported us, whether he was involved in the formation of the organization or has donated to the organization. You know, I would have rather they asked me in a friendly way, but I guess I could understand why, if I were a bad actor, maybe I wouldn't answer that honestly or something. And so they want me to swear to the court, and I'm happy to swear to the court, by the way, that Elon Musk has no connection to the organization, has never donated, and we would not accept a donation if he tried. So that's ostensibly the context for the subpoena, and that was the reasonable thing they asked for. The things that I think went beyond what was reasonable, which we can get into, are the scope of what they asked for, where they went beyond just asking if Elon Musk is involved, and the context and the breadth of the recipients of the subpoena. So addressing those, well, actually, I'll address them in reverse order, because this is relevant. First, the context of who got the subpoena and when. As far as the groups that have gone public, I think all of the groups that I know of that have been subpoenaed were signatories on that transparency letter that I mentioned. And at least our subpoena came in a few weeks after we published that; I know others came in at different times. But generally these were all organizations that were speaking out about the restructuring. Some of those organizations, plausibly, could have looked supported by Musk, and it could be relevant. Some of them really didn't. One of the subpoenas went to Ekō, which is this massive grassroots organizing group that's been around for 20 years and has been criticizing Elon Musk very insistently for, you know, I don't know how long, but especially in the past few years due to his role in government. And they even, you know, take a shot at him in one of their petitions about OpenAI. And so it's surprising to me that they would have a genuine reason to suspect that an organization like Ekō or the San Francisco Foundation is supported by Musk. Nonetheless, they subpoenaed all these groups.
And then, going beyond just asking if they were supported by Musk, they asked for, as I understand it, all documents and communications about OpenAI's governance and restructuring. This is very broad. They define what they mean by documents and communications and relating to in the subpoena itself, but it's really anything tangentially connected to it. If I, you know, text a journalist talking about some component like the profit caps or something, suddenly that whole conversation can get drawn into the subpoena. Given that our organization focused on this for the better part of the year, I think it really would be thousands or tens of thousands of pages if I were to actually scour through and find everything that touched on OpenAI's restructuring in any of our emails and text messages and documents. Then, even beyond that, it wanted to know every single person who'd ever donated to us and the date and amount of that donation. And it wanted any documents we had as the Midas Project on OpenAI's investors or any for-profit entity that had considered an investment in OpenAI. And maybe most egregiously, this didn't apply to the Midas Project but applied to other groups, including Encode, who went public about this: it was also asking for documents about totally unrelated legislative battles that these organizations were in, where OpenAI was, to varying degrees, on the other side. So SB 1047 and SB 53 are examples of California bills where OpenAI fought one of them; the other one they say they didn't fight, but I think there's a case that they did, and they've said some misleading things about the bill. And they asked Encode, for example, for all of their documents and communications about that legislative fight. So these subpoenas really went beyond just asking about Musk and funding, which is the only thing relevant to their counterclaims.
B
Yeah. Could there be valid reasons for these subpoenas being as broad as they are? Is there some legal complexity that perhaps I don't understand here, or why do you think they are as broad as they are?
A
Yeah. I also am not the best person to give an opinion on this, because I'm not a lawyer, but I have read basically every public statement I could find from lawyers who have spoken about this on social media or in the news, and I've also talked to a few privately. And I think I've only heard one or two try to make the case defending this. And the case defending this is something like: if you're a bulldog litigator and you really want to win this case, you have to collect all sorts of evidence. And if it turned out that one of these groups was supported by Musk, and you could also strengthen the harassment claim through some random email about OpenAI's restructuring that happened four months ago, then you have to go get all that information. And to the extent that it's burdensome or unreasonable, it's incumbent upon a group like the Midas Project to fight back on that and negotiate some narrower scope, or move to quash it in front of the judge, or something like that. I've only heard one or two litigators try to make that case. Most of what I've heard has been that this is a pretty unreasonable scope, and that this looks like perhaps an intimidation tactic: a way of, you know, sending people to your door. In my case it was a private investigator that came to our door; in other cases, a sheriff's deputy. It's somewhat standard in the field to find people like this who serve as process servers to deliver the documents, but of course it's a little bit scary and intimidating to get these documents saying you're commanded to produce all of this in two weeks in perfect form by order of the court or whatever. And so some people think it looks like intimidation. I sometimes think it looks like kind of intelligence gathering, right? If you want to know what's being said to people in these legislative battles, what's being said to people about the restructuring, for instance what congressional offices the Midas Project has talked to, what journalists we've talked to, whether we ever slipped up and said something incorrect that OpenAI could go after us for, this would be a great opportunity to do that. And maybe one other hypothesis is that it was an opportunity to slow us down, because it throws sand in the gears if you were to actually comply with it. I was, at the time we received it, the only full-time employee at the Midas Project, so if I was doing all of it, it would have taken a month or two of work. So maybe it was just trying to throw sand in the gears during these critical weeks before the restructuring was approved, which in fact turned out to be, you know, the six weeks before the final approval came through.
B
So, yeah, say more about that. How burdensome is it for the Midas Project, or an organization like the Midas Project, to receive subpoenas like this? Because you mentioned you need to produce all kinds of information in perfect form and so on. Is that possible, do you think? How burdensome is this?
A
It would have been really challenging. And, you know, I'm speaking about legal questions and I'm not a lawyer, so I should caveat that you should take this with a grain of salt, but I think the legal requirement is that you have to give a good faith effort or something. And I do think that my good faith effort at producing all documents about OpenAI's governance and restructuring, after spending months producing that 14,000-word report and gathering thousands of signatures for our letter and doing stuff like that, I think it could have been a huge task to produce it all. Of course, if you're wondering how burdensome it is in the real world: we didn't end up having to do any of this. We didn't end up having to produce a single document in response to the subpoena. And I don't know the extent to which, you know, smaller nonprofits in our class actually do get scared into doing all of this because they read the document saying they're commanded to, or whether they get good legal counsel that advises them that you can fight back and avoid this. So in practice, maybe it's not as burdensome as it looks on paper. But given the breadth of everything they asked for, it looked very much like a fishing expedition to try to gather all the intel they could.
B
And so how did you respond to these subpoenas, and are there some lessons from this? My intuition here is that you should just immediately get the best lawyer you can afford, and that's the first step. But yeah, say something about this.
A
I don't know if my case is particularly representative. I didn't get the best lawyer we could afford; I have been working with the same lawyer we have had since we started the organization, who I knew from the animal rights movement. And our case was a little bit unique. She basically spotted what she thought was an error in OpenAI's subpoena. I think she talked to OpenAI and said something like: you should have gotten this issued by an Oklahoma court, which is where I live, and therefore we don't think this is enforceable. Obviously, if you wanted to, you could go and get an enforceable subpoena tomorrow and serve us the exact same subpoena. And I think she said something like: if you do this, we will tell you no to all the questions about Elon Musk, because those are the reasonable questions and there's just nothing there; we'll be happy to tell you that. And then for everything else, we will move to quash, which is, you know, taking it before the judge. And the judge would see it and have to decide whether to compel us to produce it or, far more likely, tell off OpenAI for the kind of immense breadth of the subpoena. And in fact, the judge in this case had already told off OpenAI once before for their abuse of the discovery process; this was not related to the nonprofits they subpoenaed, but rather, I believe, to their subpoenas about Meta. But, you know, it wouldn't have looked good for them to go before the judge again and have this happen again. So, you know, I think we told the OpenAI lawyer that, and then I don't think we ever heard from them again. Yeah, the way it resolved was a couple of follow-up emails, no response, a note that we understand ourselves to be free of the obligation now if you don't respond, and then no response.
B
So, yeah, what lessons do you take from this? We talked about how a project like the Midas Project is in a leveraged position and can challenge much larger organizations. Is this then where that reverses? You know, if you have a team of hundreds of lawyers, can you basically stall small organizations that are working to make you more transparent?
A
I don't think so. I think that it is a matter of fact that they didn't succeed in really slowing us down. In fact, I think that they kind of made a mistake here, where this was a bad comms moment for them. It's a bit of a mask-off moment. When it was made public, there was a lot of interest: the San Francisco Standard wrote about it, NBC News wrote about it, the Verge wrote about it, and it looked pretty bad, and they didn't have a good answer to it. You know, there was this funny moment on Twitter where I think Nathan Calvin at Encode first had a very viral tweet detailing his experiences with the subpoena. And Jason Kwon, OpenAI's chief strategy officer, had a long thread responding to it, saying basically something like: the situation is not as simple as Nathan's making it out to be, because even though we sent this subpoena, what Nathan didn't mention is that Encode, his organization, actually submitted an amicus brief in support of Elon Musk's case, and for that reason subpoenas are to be expected; once you've inserted yourself into a case like this, you could expect to receive these documents. One reason that this doesn't work is because it doesn't explain why the subpoena mentioned things like SB 53, which are just totally unrelated to the case. Another reason it doesn't work is because the subpoena went to other groups, including my own, that never touched the case with Elon Musk. And a third reason I should mention about why this argument doesn't work is that, from the experts I heard from, it's actually not normal to start subpoenaing nonprofit organizations that submit an amicus brief in support of a narrow claim like, is the restructuring bad for the public or something like that. This is a pretty aggressive thing to do. And in fact, I think there is already a financial declaration in the amicus itself saying that they received no financial support for the production of the document. So, you know, I don't think it worked for OpenAI. I don't think that they got what they wanted out of it. And I think it backfired for them in real ways, you know, among their employees, who I don't think fell for what I think were Jason Kwon's poor arguments about this. You know, there's a Twitter thread from Joshua Achiam, the head of mission alignment at OpenAI, who, to his immense credit, was honest about it, where he said, I think at great risk to his career, that this doesn't look good and that OpenAI should be striving to avoid even the appearance of the misuse of power, not to mention the actual misuse of power. And, you know, whether or not you think that this was a misuse of power, it certainly looks like one. We can tell that. And so, yeah, I don't think it worked for them. I think it remains the case that it would be hard for them to use 100 lawyers to slow down scrappy organizations working in the public interest. One thing I'll mention is that people could also ask, well, what if they just filed a baseless lawsuit to drown you in legal fees or something like that? Which is something that I think about. In fact, after this whole subpoena incident, we were trying to get an insurance policy for media liability, to make sure that if I'm going on a podcast like this and I slip up and say something about OpenAI that I got wrong, we are covered in case they, you know, take us to court for defamation or whatever.
And every insurer that our broker reached out to said no to a policy for us, you know, I think implicitly at any price, with multiple of them citing the article in the San Francisco Standard about our subpoena. So, you know, one way that the subpoena could have chilled speech is that it maybe made us uninsurable. And most nonprofit managers, or managers at journalistic entities, in the absence of a policy like that, would tell you to be way more conservative with what you say in public. So someone could ask: okay, well, you don't have an insurance policy, OpenAI could just take you to court over even baseless things, and even if you're totally in the right, could that drown you in legal fees, and thus could they win? I don't think that's true. And part of that is just unique to the United States, as far as I know. Actually, I don't know to what extent it's unique to the United States; I only know the US context. But I think there's, you know, pretty strong First Amendment protection in the United States, particularly when the speech is in the public interest. And there are statutes, I believe, called anti-SLAPP laws, where SLAPP stands for strategic lawsuits against public participation. So, in some ways, to the extent that OpenAI wanted to be even more litigious against organizations speaking out against it in the future, I think it would backfire for them. Again, to the extent that those organizations are making reasonable, careful claims, OpenAI could find themselves in another situation where their attempt to be legally aggressive ends up backfiring in a mask-off moment for them, and they have to do damage control on the comms side and stuff.
B
Yeah. What's the positive case for more transparency? How could this benefit OpenAI and benefit the world in kind of a positive upward spiral? How do you see the future of transparency in the AI industry?
A
Yeah. The big question that we as a society have is how we actually create the rules and the structures and the controls that prevent bad outcomes here and enable us to get to the great futures that AI could bring us, without all that potential being curtailed by the various things that can go wrong with the technology, things that the people building it themselves admit are a huge risk. And, you know, they'll sometimes give probabilities that I think we wouldn't accept with any other technology. You wouldn't get in an airplane with a 10% chance of crashing. You wouldn't cross a bridge with that chance. And so when people building the technology say, yeah, there's a 10% chance this goes really wrong in a catastrophic way, like no other technology has gone wrong, it's a huge challenge for us as a society to think about how we implement the rules needed to address this. And unfortunately, I wish I had better news here, but I wonder if you share this perception: I've not heard a single answer to that that is satisfying to me. No one has the prescriptive solution where they're like, here are the exact rules and controls we'll put in place to make sure that we can avert these risks. There are arguments I hear, and find compelling, that we are just so far away from understanding the technology that it's as if we're alchemists trying to think about how to convert lead into gold or something. And so I think transparency is especially important in a world like the one we find ourselves in, where, when you don't actually know what the solution is, at the very least you want to have the monitoring capability and the visibility to know when things are getting serious, when the models are getting really powerful, and when there are warning shots that maybe make us want to put our foot on the brakes or sit down in some sort of international body and talk through some sort of treaty to determine a safe future for this. And so, yeah, in the absence of really strong technical solutions to the problems AI faces, or governance solutions where it's like, here are the exact controls and committees and evaluations you need in place, and the mitigations and stuff, transparency is just: we should at least know what's happening. We should at least not be walking to the cliff blindfolded. There should be an obligation for companies to be discussing how capable their internal models are and to have some red lines in place, so that when those models hit critical thresholds, we as a society know. And I think transparency could generate more political will to invest more money and more time and more good faith effort in finding solutions that will work.
B
Yeah, I mean, we have the technical side, technical AI safety, where, at a minimum, it's difficult to solve the alignment problem, and it's difficult to set up the various technical half-solutions that in combination could constrain AI in the ways we want. I've also talked to many guests about interesting governance schemes, and it seems that these are quite advanced and would take a long time to actually implement in the world. And so if we're talking about what we're working with today, we are working mainly with the US legal system and the corporation as a structure, whether that's a public benefit corporation or a normal corporation. We're kind of working within the constraints of a system that takes a long time to change. And so in that environment, I think the argument for transparency is very strong: this is what we need at a minimum. We need to understand what's going on in order to respond to it. Are you hopeful that we will implement new governance techniques? That we will innovate in the governance space as we have innovated in the technical space? And here I'm thinking that, on the capabilities front, we seem to be way ahead of where we are on governance.
A
The honest answer is I am hopeful against my best judgment. I think that it really needs to happen. But, you know, I've spent some time thinking about analogous past technologies that have been new and have required kind of new forms of governance or rulemaking or social response, and I think we primarily respond to these things reactively rather than proactively. And so I expect that we are just going to continue treating this in a business-as-usual way until some moment where something major goes wrong, or there is a very compelling demonstration of the capability of these models, and that will lead to some sort of regulatory response that will either be kind of judicious and responsible and well measured, or won't be. And to the extent that it won't be, I can see it going in multiple directions. It might be the same mistake we made, in my opinion, with nuclear power, where we just, you know, banned it too early. Or, on the flip side, it might be that we don't do enough even in a reactive posture because we're worried about an international race, for example, and then it's the same mistake we made with nuclear weapons, where we exposed ourselves to an immense and unacceptable amount of risk throughout the 20th century because we couldn't get everything screwed on straight in terms of orienting with the international community and finding some sort of solution to make deproliferation, or kind of global cooperation, the forefront of the nuclear weapons regime. So, yeah, I expect that the same will be true for AI, and I think that there probably is a good solution out there in terms of what the governance strategies are, whether it's government rulemaking, or, you know, the corporate structures, or whether this is happening in corporations at all or in some sort of CERN for AI or something. But I don't think that the odds are good. I think that we're going to have to really fight to make that a reality.
B
Yeah. One pretty advanced system we have for dealing with risk is the insurance industry. So we have professionals that are quite capable of pricing risk, and we know how to do this in many industries. Is there hope that we might apply what we know about how insurance works to the AI industry?
A
Genuine question I have, and I don't know if you have an answer to this, but do we have evidence to think that the insurance industry is good at pricing risk when there isn't a history of the frequency or severity of that risk? Of course, the insurance industry is probably good at pricing hurricane coverage or something, because there have just been enough hurricanes that they know. It's not obvious to me that they have the information they need to price risks from AI, especially if they are convinced, as I think many reasonable people are, that there's immense downside and, like, a greater than 1% chance of that immense downside being realized. Maybe you end up in a situation where the proper pricing just looks ridiculous, because there's this outlier downside that is kind of dominating the speculative equation. So I don't know, it's not clear to me that the success of insurance in pricing other risks actually applies to something as, like, one-of-one as AI.
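To make the arithmetic behind that point concrete, here is a minimal sketch of the expected-value calculation an insurer would start from. The probabilities and loss figures are purely illustrative assumptions, not numbers from the conversation.

```python
# Illustrative sketch only: all numbers below are made-up assumptions.
# It shows how a small probability of an enormous loss dominates the
# actuarially fair premium, which is the sense in which the "proper
# pricing" for AI risk can stop looking workable.

def fair_premium(prob_of_event: float, loss_if_event: float) -> float:
    """Expected annual payout, i.e. the floor for any premium before loading."""
    return prob_of_event * loss_if_event

# A familiar, well-characterized risk: known frequency, capped severity.
print(fair_premium(0.02, 5_000_000))        # 100000.0 per year

# A speculative catastrophic risk: ~1% chance of a loss that is hard to cap.
print(fair_premium(0.01, 10_000_000_000))   # 100000000.0 per year
# As the plausible downside grows without bound, so does the premium.
```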
B
Yeah, I mean, they must know about, you know, black swan events and tail risks; of course, this is a standard part of the math behind insurance. But it's true that if you were to price the risk of AI in quantitative terms, it might just look like something that can't work in the real world, because then the industry wouldn't work. And if that's the case, that tells you something about the AI industry as it's currently functioning. I guess the question is whether we have a better alternative to the insurance industry for rigorously thinking about the risk of AI.
A
Yeah, I'm not sure.
B
Yeah, I'm not sure either. All right, so we have different options for trying to deal with this problem. We can try to deal with it technically, we can try to deal with it in terms of governance, and we can try to think about transparency and kind of pushing the companies in the right direction from the outside. What's the government's role in all of this? Say we find best practices on transparency by pushing the companies from the outside. Is there a point at which transparency should be incorporated into the law, so you would have kind of legal processes or legal frameworks for how transparent AI companies should be?
A
Yeah, I think this is absolutely necessary. The Midas Project has operated with more of a focus on just trying to get companies to do more under a kind of self-regulation regime. And that's for a few reasons, but maybe the most important is that that's just mostly where we live today, although, you know, SB 53 and the EU AI Act have been, I think, the first really significant steps toward codified transparency obligations. Another reason we focused on this kind of self-regulation is that I think there are just fewer people who are really focused on it, and in particular on trying to make effective self-regulation happen, than there are people trying to make government regulation happen. But it doesn't surprise me that there's that imbalance, because government regulation is just stronger insofar as it covers everybody, immediately, into perpetuity. I am, you know, kept awake at night by all the flaws of the self-regulation regime, not the least of which is that it can just be thrown away at any point. A company making a transparency commitment can decide, when they see the results of their most recent model, or, you know, say they have a new model that they want to deploy internally as an automated AI researcher, they can weigh the costs and benefits and say: actually, even though we promised to be transparent about the existence of this model and its capabilities, that was just a promise, and now it's way too costly for us to fulfill it.
B
So yeah, I do think we have to worry about a sort of safety tax, where if you're in an environment where there's a lot of funding and things are going well and perhaps your models aren't that dangerous yet, well then you can spend capital, social and monetary capital, on projects that are not directly related to making your models better. If the race tightens and you are behind, perhaps you throw all that to the wayside and you just focus on racing ahead. And so it's true that, yeah, the worry is that the self-regulation regime just gets pushed out as more important things, so to speak, arise.
A
Yeah, yeah, I think that's exactly right. And it makes a strong case for government regulation, but it also demonstrates that the government regulation needs to have real implementation and enforcement strategies attached to it. It can't just be a nice in-theory thing, like, oh, you'll let us know at your discretion about these things through your contact at the agency or something, because then the exact same problem applies. And so, yeah, I think it also teaches us a lesson that the regulatory solution, if and when that comes, needs to actually have some teeth to it and create ways to monitor the behavior and activities of these companies in non-falsifiable ways, or ways that the companies can't just get around by skirting the regulation, which I think many of them are willing to do when the benefits outweigh the costs in their assessment.
B
How would you rate the transparency of the current, say, US AI industry compared to other industries? Where are we on a scale from 1 to 100, would you say?
A
Maybe it depends on whether 100 is where other industries are at their best, or whether 100 is where the AI industry needs to be for things to go well. Because, yeah, I don't think there is a single industry that is really at the place the AI industry needs to be eventually.
B
So you're saying the standards for the AI industry should be much higher because the stakes are much higher than in perhaps any other industry.
A
Exactly, yeah. So, you know, if you're comparing to other industries, I think you could maybe say that the AI industry does pretty well, because there is this strong culture of discussing this stuff openly, of publishing model cards, even with the competitive advantage stuff: it's always surprising to me that the transformer architecture was just freely given away by Google. And a big part of this, maybe it's the industry's academic origins or something, is that there's this huge culture of publicly, you know, putting your papers up on arXiv and of releasing pretty detailed reports about your systems. So, yeah, compared to other industries, I think it's pretty good. I think it's getting locked down, as we've mentioned, and I think it will continue to get more locked down over time, and I think it's still pretty far from where we would want it to be.
B
Yeah, yeah. How would we measure this move? So one way to do this is just talking to you, who knows a lot about transparency in the industry, and then perhaps interviewing you again in a couple of years and seeing where we are. Is there a way to make this quantitative? Is there a way for us to kind of measure the transparency, or perhaps produce some kind of report on how transparency is moving up and down in the industry? I'm not necessarily thinking that this would have to be something super naive like, you know, the industry is now at 23 transparency and last year it was at 20, or whatever. I'm just thinking, is there a way to be more rigorous than just kind of informed impressions?
A
Yeah, to the extent that the goal is to be more rigorous about ensuring transparency, I think that solutions like auditing are kind of the way forward. I don't know if it would result in sort of quantitative measures of transparency or anything like that, but I do think that by reading the reports of auditors over time, if they're, you know, truly third party and independent, and their own assessments of their level of access to the companies, we would get a pretty good sense of how meaningful the transparency being offered is. And I know this is a priority; I think this is what it would look like for regulation to have teeth in terms of making sure companies can't avoid it. And so I know it's a priority for people who are thinking about regulation and also for people who are trying to help the industry come up with better self-governance tools. Practically, though, I think that means waiting until there is a good ecosystem of auditors, which is kind of growing right now but I think is underdeveloped, and looking to them, and, you know, hopefully they can speak freely about to what extent they think they have the access they need, or whether they're struggling to get access. There are some early concerns I have. I know they're not exactly an auditor, but the group METR has done evaluations for companies in the past, and I think I've seen it noted in some system cards that for their evaluations they get, like, seven days to do it. And I think sometimes they even say, yeah, we feel pretty uncertain because we didn't get much time to do it.
B
Yeah.
A
And that's the sort of thing I'd be looking for from an auditor to come to the belief that there is not enough transparency at the moment.
B
How helpful do you think AI is here? We're talking about processing a lot of information, perhaps doing it very quickly. Of course, people at organizations like yours could just use AI as a helpful tool here. Is there a way for us to use AI to create more transparency? Perhaps have some automated processes to look into, say, documents that AI companies have to produce publicly, to kind of search through and see if there's anything you can surface that the public is perhaps not aware of just because it's buried in some document somewhere.
A
That's pretty interesting. Yeah, I hadn't thought about it much. When I imagine the benefits and the costs of that right now, it seems to me like maybe one reason that AI companies would be excited about this being used is that, you know, the concern with transparency is that you give away competitive secrets, or that you reduce national security, through increased transparency about your models and how you're training them and their current capabilities and whatnot. And if you could have AI transparency auditors, I don't think the companies would want them to find, like, random hidden documents that are big red flags they wouldn't want found. I think what they would want is for that to be identified and disclosed by a party that they can ultimately trust. And, you know, in the same way, in model evaluations the judges are frequently AIs, and the reason to have the AI be the judge is because it has some assumption of neutrality. So if you don't necessarily trust the incentives or goals of your auditor, and you nonetheless have to give an incredible level of access to some party, maybe an AI intermediary would excite AI companies. My gut reaction, though, is that I have some concerns related to loss-of-control issues, like, do you really want to hand over the transparency processes to the AI itself? So, yeah.
B
And of course it also seems almost comical: you can imagine an instance of GPT5 investigating OpenAI or something. So you would perhaps need an open-source model, or something that's more credibly neutral, to engage in this process. It just seems that as these companies get larger and larger, they'll produce more and more paperwork, and much of that will be public because it has to be for legal reasons. And so perhaps there's something in there that is already public but needs more attention. Perhaps this is where AI could be useful, but I'm unsure about that.
A
Yeah, it's an interesting idea.
B
Yeah. I think we should end by talking about your hope for where we end up if the Midas Project is successful. Where are we in five or ten years? What's the situation with regard to transparency in the industry? And how did the Midas Project help?
A
So maybe it's hard for me to answer on a five-to-ten-year time horizon, because of the weird beliefs I have about AI progress relative to the public, but normal beliefs relative to the industry. It's easier to answer on a two-year horizon. On that horizon, I have a sense right now that a great deal of the discourse around AI isn't fundamentally bought in to the power of the technology, or to the inadequacy of our current institutions to control and monitor that technology. And this is discourse happening among people who are worried about it, who write it off as a stochastic parrot or something, as well as people who are excited about it, like effective accelerationists on Twitter, who are kind of role-playing this pro-technology attitude without taking seriously how important a dual-use technology like this really would be. And then even among the people who fundamentally buy that it's going to be powerful, I sense some undeserved trust in the institutions developing it. It's very easy to hear members of the U.S. Congress saying that regulation would be a terrible idea right now because we just can't slow down industry relative to our national adversaries. Implicit in this is the assumption that the free-market model, where the companies set their own standards for the technology, operate in their own dark corners of the industry, and put up walled gardens where they keep all of their activities to themselves, is going to work for us, that it will actually lead us to developing this economically valuable, prosperous technology before other countries do, or something. And so the fundamental goal I have for the Midas Project is that we can contribute to the public discourse in a way that convinces the people who don't believe this technology is a big deal that it really is a big deal, that there's mounting evidence for this in the research being done on these models, and that when the companies themselves tell you it's a big deal, they're not doing it to increase their market share; to the extent that's a benefit they get out of it, it's an unfortunate coincidence, because they're just right, their reasoning is solid, they'll walk you through it, and you can walk through it yourself. And then, even among the people who believe it's going to be a big deal, to convince them that the institutions are not prepared for this. Some of them will admit that to you, some of them will not, but look at their track record and you'll be able to see that they're not prepared. So hopefully, in 2027, those two beliefs are table stakes for anyone having a serious conversation about what we do about AI: you have to know it's a big deal, and you have to know that we're not prepared. And hopefully the Midas Project's investigative research and our public communications will help tell this story, to the extent it's true. And to the extent that we're wrong about it, we'll update.
B
Great. That's a fantastic answer, Tyler. Thanks for chatting with me.
A
Yeah, thank you, Gus.
Podcast: Future of Life Institute Podcast
Episode: Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Date: November 27, 2025
Host: Gus Ducker
Guest: Tyler Johnston, Executive Director, the Midas Project
This episode explores the recent attempts by OpenAI to legally challenge and, as some allege, intimidate nonprofit watchdog organizations speaking out about its governance and restructuring. Tyler Johnston of the Midas Project shares firsthand experiences, discusses the broader stakes of transparency in AI, outlines pitfalls of both self- and government regulation, and reflects on the future of public advocacy against powerful tech incumbents. The discussion centers on the role and necessity of transparency, the ethics and effects of legal tactics used by AI companies, and how civil society can leverage limited resources to influence industry giants.
Trigger: The Midas Project and other organizations publicly questioned OpenAI's transparency and restructuring process, joining critical open letters.
Subpoenas: Tyler and the Midas Project received broad subpoenas from OpenAI’s lawyers, officially as part of their dispute with Elon Musk, but with scope far beyond Musk’s involvement.
Community Response: The scale and breadth of the subpoenas led to public backlash, with coverage in major outlets and critique from legal experts.
On corporate leverage:
"If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point, the money stops working for you." (A, [00:00])
On the need for transparency:
"We should at least not be walking to the cliff blindfolded." (A, [39:32])
On OpenAI’s legal tactics:
"OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so." (A, [18:01])
On the chilling effect of legal threats:
"In every insurer that our broker reached out to said no about a policy for us, you know, I think implicitly, at any price, with multiple of them citing the article in the San Francisco Standard about our subpoena." (A, [36:13])
On the future of regulation:
"I expect that we are just going to be continuing to treat this in a business as usual way until, until some moment where something major goes wrong..." (A, [42:42])
On the Midas Project’s underlying goal:
"You have to know it's a big deal. You have to know that we're not prepared. And hopefully, yeah, the Midas Project's investigative research and our public communications will kind of help tell this story to the extent it's true or to the extent that we're wrong about it. We'll update." (A, [60:04])
This episode provides a sweeping look at the battle between public interest advocates and powerful AI companies, revealing the strengths and vulnerabilities of each. Tyler Johnston illustrates both the possibilities and hazards of public advocacy in AI, describes the legal tactics used by OpenAI as counterproductive, and stresses that radical transparency is not just an ideal but a bare minimum for effective, safe AI governance. Listeners come away with a nuanced understanding of the stakes in transparency, the reasons watchdogs persist despite legal and financial risks, and the path forward for more robust and enforceable disclosure in advanced technology industries.