Podcast Summary
Episode Overview
Podcast: Future of Life Institute Podcast
Episode: Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Date: November 27, 2025
Host: Gus Docker
Guest: Tyler Johnston, Executive Director, the Midas Project
This episode explores OpenAI's recent attempts to legally challenge and, as some allege, intimidate nonprofit watchdog organizations speaking out about its governance and restructuring. Tyler Johnston of the Midas Project shares firsthand experiences, discusses the broader stakes of transparency in AI, outlines the pitfalls of both self-regulation and government regulation, and reflects on the future of public advocacy against powerful tech incumbents. The discussion centers on the role and necessity of transparency, the ethics and effects of the legal tactics used by AI companies, and how civil society can leverage limited resources to influence industry giants.
Major Themes & Purpose
- The need for transparency in the development and deployment of AI, particularly at the frontier level.
- The rising tension between watchdog organizations and powerful AI companies like OpenAI.
- Legal intimidation tactics (subpoenas) employed by OpenAI against critics and the resulting backlash.
- The mechanics and philosophy behind public advocacy and corporate accountability in high-stakes industries.
- The limits of current regulatory and governance structures for AI risk, and prospects for future improvement.
Key Discussion Points
1. The Midas Project: Mission and Approach
- Mission: A nonprofit watchdog focused on AI accountability, especially targeting frontier developers (e.g., OpenAI, Anthropic).
- Origin: Founded by Tyler after observing the explosive growth and downstream risks of AI, and borrowing corporate accountability tactics from animal welfare advocacy.
- Quote:
"I thought that some of the same corporate accountability playbook... could be effective when it came to using public communications and advocacy to ask AI companies to adopt stronger voluntary safeguards." (A, [02:00])
The "Flashlight" Tactic
- Drawing public attention to hidden or underappreciated risks, both speculative and concrete.
- Impact based on aligning public sentiment with underlying facts—especially where the public is plausibly on the advocate’s side.
- Quote:
"You have this powerful leverage as an actor who is just doing communications and public advocacy to move companies... simply by shining the flashlight around." (A, [04:47])
2. Leverage and Power Differential
- Even small, resource-constrained organizations can drive significant industry change if their advocacy is aligned with public sentiment or common sense.
- Example from animal welfare: small organizations changed Walmart's practices and the entire US egg supply chain with a fraction of the industry's resources.
- Quote:
"I think the reason that it works is because you have this immense intangible asset in the fact that about many of these issues you're fundamentally right. The evidence for you being right is there and the public is kind of already on your side." (A, [11:35])
3. Projects Targeting OpenAI
OpenAI Files
- A 14,000-word, web-native report chronicling OpenAI's governance, safety, and leadership failures, aggregating public records and incidents.
- Purpose: Collect scattered incidents to build a comprehensive narrative of risk and accountability.
The Transparency Letter
- An open letter, now with over 10,000 signatures (including former OpenAI employees and AI leaders), asking OpenAI for specifics and clarity about its ongoing restructuring and governance.
- Quote:
"It wasn't making any claim about how the restructuring should go, but it was just asking for kind of more clarity from OpenAI." (A, [15:22])
4. OpenAI's Legal Tactics: Subpoenas and Intimidation
- Trigger: The Midas Project and other orgs publicly questioned OpenAI's transparency and restructuring processes, joining critical open letters.
- Subpoenas: Tyler and the Midas Project received broad subpoenas from OpenAI's lawyers, officially as part of OpenAI's legal dispute with Elon Musk, but with a scope far beyond any Musk involvement.
- Requests included:
- All documents and communications related to OpenAI’s governance/restructuring.
- Complete donor records.
- Any connections with OpenAI investors or any for-profit involvement.
- Materials on unrelated legislative action.
- Quote:
"They wanted to know every single person who'd ever donated to us and the date and amount of that donation." (A, [24:43])
- Community Response: The scale and breadth of the subpoenas led to public backlash, with coverage in major outlets and criticism from legal experts.
Tactics & Outcome
- Tyler’s lawyer identified technical flaws in the subpoena, explained to OpenAI how the Midas Project would respond (denying any Musk connection, moving to quash the rest), and OpenAI did not follow up.
- The process highlighted how large companies can attempt to intimidate critics but may risk a public relations backlash.
- Quote:
"I think that they kind of made a mistake here where this is a bad comms moment for them. It's a bit of like a mask off moment..." (A, [32:56])
5. The Case and Limits for Transparency in AI
Why Transparency Matters
- As technical and regulatory solutions remain immature or slow, basic public transparency is the minimum safeguard.
- Transparency enables public monitoring, encourages better governance, and might create positive feedback loops—pressuring others to follow best practices.
- Quote:
"In absence of, like, really strong technical solutions to the problems AI faces... we should at least know what's happening. We should at least not be walking to the cliff blindfolded." (A, [39:27])
Shortcomings & Barriers
- Self-regulation is inherently weak: promises can be broken without consequence as profit or competition pressures increase.
- Government regulation (e.g., SB53, EU AI Act) is stronger but needs robust enforcement capabilities and guardrails against being undermined or ignored.
- Quote:
"It can just be thrown away at any point that a company making a transparency commitment can decide... now it's way too costly for us to fulfill that." (A, [48:19])
Auditing & Measurement
- The ideal path would involve a robust third-party auditing system with real access and the ability to report candidly about transparency.
- Current efforts are underdeveloped, but the ecosystem is growing.
- Quote:
"To the extent that the goal is to be more rigorous about ensuring transparency, I think that solutions like auditing are kind of the way forward..." (A, [53:02])
6. Structural & Cultural Considerations
- Compared to other industries, AI (so far) has benefited from an academic culture of openness but is rapidly becoming more secretive as commercial competition and risk-awareness increase.
- The standards for AI transparency must be higher due to the stakes.
- Quote:
"If you're comparing to other industries, I think you could maybe say that the AI industry does pretty well... I think it's getting locked down, as we've mentioned, and I think it will continue to get more locked down over time and I think it's still pretty far from where we would want it to be." (A, [51:25])
Notable Quotes & Memorable Moments
- On corporate leverage:
"If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at a certain point, the money stops working for you." (A, [00:00])
- On the need for transparency:
"We should at least not be walking to the cliff blindfolded." (A, [39:32])
- On OpenAI's legal tactics:
"OpenAI is willing to go to great lengths to silence critics when they think that it's important to do so." (A, [18:01])
- On the chilling effect of legal threats:
"Every insurer that our broker reached out to said no about a policy for us, you know, I think implicitly, at any price, with multiple of them citing the article in the San Francisco Standard about our subpoena." (A, [36:13])
- On the future of regulation:
"I expect that we are just going to be continuing to treat this in a business as usual way until, until some moment where something major goes wrong..." (A, [42:42])
- On the Midas Project's underlying goal:
"You have to know it's a big deal. You have to know that we're not prepared. And hopefully, yeah, the Midas Project's investigative research and our public communications will kind of help tell this story to the extent it's true or to the extent that we're wrong about it. We'll update." (A, [60:04])
Important Timestamps
- [02:00] – The Midas Project’s founding and animal welfare analogy
- [10:59] – Leveraging limited resources and the power of public sentiment
- [14:01] – Motivation behind OpenAI Files and the Transparency Letter
- [21:02] – OpenAI subpoenas served to Midas Project and others
- [24:43] – Scope of what OpenAI demanded in subpoenas
- [29:29] – The burdensome nature of complying with subpoenas
- [32:56] – Johnston reflects on why the legal intimidation backfired for OpenAI
- [39:27] – Minimum need for transparency as a safeguard
- [48:19] – Inherent weakness of self-regulation for transparency
- [53:02] – Auditing as a future measure of industry transparency
- [60:04] – Midas Project’s vision: elevating AI risk and institutional inadequacy into mainstream discourse
Conclusion
This episode provides a sweeping look at the battle between public interest advocates and powerful AI companies, revealing the strengths and vulnerabilities of each. Tyler Johnston illustrates both the possibilities and hazards of public advocacy in AI, describes the legal tactics used by OpenAI as counterproductive, and stresses that radical transparency is not just an ideal but a bare minimum for effective, safe AI governance. Listeners come away with a nuanced understanding of the stakes in transparency, the reasons watchdogs persist despite legal and financial risks, and the path forward for more robust and enforceable disclosure in advanced technology industries.
