Transcript
Ed Zitron (0:38)
Cool Zone Media. Hi, I'm Ed Zitron, and this is your weekly Better Offline monologue. Precarious. That's the nature of the generative AI industry, and especially OpenAI, a company that has never made a profit, has no pathway to profitability, and is contingent upon other companies spending tens of billions of dollars on new infrastructure to power its models in the future. Throughout countless podcasts and newsletters, I have argued that all of these factors mean that OpenAI, and by extension the greater generative AI industry, will eventually collapse. OpenAI is just one of a long line of dominoes, and it really only takes one to fall before the entire thing collapses. But by comparison, I haven't paid nearly as much attention as I should have to the unusual structure of OpenAI, which I believe will also contribute to its downfall. OpenAI was initially started as a nonprofit, intended to further the safe development of artificial intelligence, if you believe them. Over time it morphed into an entirely different beast, becoming the most valuable startup in history and the startup that has now raised the most capital in history. Sort of. But for legal reasons, it couldn't quite walk away from its nonprofit origins. And so we're now left with this strange hybrid: a nonprofit that owns much of OpenAI's intellectual property and assets, and a for-profit business tacked on awkwardly at the side. In order to satiate its infinite thirst for capital, OpenAI must radically restructure the entire organization, moving valuable assets and intellectual property from the nonprofit to the for-profit entity. Its ability to raise money is entirely contingent upon this, as, generally, investors don't plow tens of billions of dollars into philanthropic ventures where they will never see a return. 
Indeed, many of OpenAI's previous funding rounds have had caveats that would radically alter the terms of the deal if OpenAI fails to convert into a for-profit business. Last October, they raised $6.6 billion from a bevy of investors, but the deal included a covenant of sorts: should they fail to convert into a for-profit entity within two years, so by October 2026, the investment would convert into a loan. OpenAI would, in effect, have to return the capital to investors and potentially pay a punishing interest rate. Similarly, OpenAI's latest $40 billion round with SoftBank is structured in a way where $10 billion of it is contingent on OpenAI becoming a for-profit business. The point I'm trying to make is that for OpenAI, this current structure represents an existential threat. In many ways it is more dire than any shortage of compute capacity or the fact that they spend billions of dollars more than they'll ever make. So they just change structure, right? What's the big deal? Well, this is a complex and bureaucratic procedure that typically only happens in sectors like healthcare, where hospitals are bought out by larger for-profit companies. I struggle to think of any similar examples in tech, and even if those examples exist, they don't involve entities of the size and value of OpenAI, or indeed the prominence. The transformation isn't something that Sam Altman can do unilaterally, either. In essence, he needs the consent of regulators and lawmakers in the state of California. We've already seen OpenAI's moves be challenged by the likes of Elon Musk and Mark Zuckerberg, although these efforts have not yet amounted to much, though there is an upcoming trial with Elon Musk over this. Part of the complexity comes from the very nature of what it means to be a nonprofit. Nonprofits enjoy certain tax benefits, from their exemption from taxation to the benefits that come when a person makes a charitable donation to one. 
If you give money to, like, the Red Cross or a church or whatever, you can write that off against your taxes. As a result, OpenAI is constrained in what it can do with the assets held within the nonprofit. Those assets are supposed to be used for the benefit of society or to serve some kind of charitable purpose. OpenAI can't just transfer them to a for-profit entity. It doesn't work like that, nor should it, even though they're very much trying to make it work like that. Now, I'll link to this in the episode notes, but a recent petition against OpenAI's restructuring alleges that OpenAI has already effectively broken the rules about how nonprofits should operate and manage their assets, with the first violation coming in 2019, right at the start of OpenAI's metamorphosis and around the time it obtained its first billion dollars' worth of investment from Microsoft. Addressed to the Attorney General of California and signed by innumerable figures in California and national philanthropy, it articulates a compelling case that OpenAI has already broken the law in several meaningful ways and urges the state to take action to prevent a further dilution of OpenAI's charitable mission. It describes these violations as, and I quote, "factually complex but legally simple." It claims that the 2019 restructuring, which created the for-profit element, involved the wrongful transfer of assets from the nonprofit. This is bad in and of itself, but as time dragged on, the influence and relevance of the nonprofit wing over the for-profit entity effectively ended. It gives the example of the November 2023 coup, when Sam Altman was fired from the company over alleged dishonesty about the safety processes surrounding model development, and also a bunch of other stuff, like not telling the board that ChatGPT was coming out. Anyway, this firing, the letter states, came at the behest of the nonprofit directors. 
As we all know, it didn't take Sam Altman long to return to the helm of OpenAI in the same role that he'd left. Many of the board members that had approved his termination left, and though they signed a letter, it was very much a "they have my family" kind of thing, or I should say, "they have my stock units," and they were replaced by those who were more aligned with the for-profit goals of OpenAI. The petition also makes the case that OpenAI's renunciation of its commitment to open source research within its purpose clause, the thing that defines what a nonprofit is for, also represented an illegal diversion of assets. Open source software benefits whoever uses it, whereas proprietary software, even if it's provided for free, has comparatively fewer benefits to society. You can't change or modify or improve the program, and you're at the mercy of whatever the vendor wants to do as far as accessing or using the platform or program or whatever it might be. In practice, there's little open about OpenAI. They share little source code and don't even provide specifics about the training data they use. They're less concerned with public research and now shroud their development in the same cloak of secrecy that you would expect from basically any other tech company. Now they're claiming they're going to release an open-weight model, but it's bullshit. I'm sorry, that's not enough. Now, the letter does make some specific demands. It wants the AG to investigate what assets were siphoned off from the nonprofit, block any conversion until the investigation is completed, ensure that all charitable assets are returned to the nonprofit, and, indeed, create a truly independent entity, separate from Sam Altman and the business interests of OpenAI, to act as a steward for these assets. If this petition succeeds in delaying the conversion, OpenAI's future becomes much more uncertain, and the conversion perhaps may not even be possible. 
It'll be harder to raise new funds, which will only increase their cash burn, as OpenAI will be forced to start making repayments on investments that automatically converted into loans. Will it succeed? Maybe. First, this petition doesn't come from a rival with an axe to grind against Sam Altman, like Elon Musk. The Black Freedom Fund, the Asian Law Caucus, and the California Teamsters Public Affairs Council aren't exactly the natural adversaries of Altman. Unless, of course, they secretly hate people who wear Patagonia vests, in which case I retract the entire last sentence and also add that I heartily congratulate them and agree with them. Moreover, this petition isn't framed around tech or AI or AI safety, but rather something far more simple: what a nonprofit is and how it should be run. And if, as the letter's authors argue, OpenAI is allowed to restructure without being challenged or scrutinized, it will set a dangerous precedent that would make it harder to protect the charitable missions of nonprofits and easier for them to be plundered, like I'd argue OpenAI's has been. It's a strong case, and while it's hard to tell if the Attorney General will take it on, it could lay the groundwork for OpenAI's demise. And I'm sure some of you would love that. I don't know. I might too.
