
Seb and Preston dive into the rise of Sam Altman and the evolution of OpenAI, exploring ethical dilemmas, governance, and AGI development before shifting gears to discuss longevity science.
Loading summary
Preston Pysh
You're listening to tip.
Hey everyone. Welcome to this Wednesday's release of Infinite Tech. Today, Seb Bunny and I dive into Karen Howe's book Empire of Dreams and Nightmares in Sam Altman's OpenAI. We trace Sam Altman's rise from his early startup and Y Combinator days to the founding of OpenAI with Elon Musk and the company's transformation from a nonprofit ideal to a Microsoft backed powerhouse. Along the way, we unpack the famous blip where Sam got fired back in 2023. OpenAI's complex governance, the broader ethical questions raised by AGI and guys, this is surely an episode you won't want to miss, so without further ado, let's jump right into the book.
You're listening to Infinite Tech by the Investors Podcast Network. Hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity and other exponential technolog technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today and now. Here's your host, Preston Pysh.
Hey everyone, welcome to the show. I'm here with the one and only Seb Bunny and we are talking OpenAI. Sam Altman, what in the world has gone on there at this company? Where's it going? Where did it come from? And we have a book that we read together and we'll be using that somewhat as the framework, but also kind of going in other directions beyond just the book. And Seb, welcome to the show, sir.
Seb Bunny
Oh man, thanks for having me on, Preston. And you know, what I found really fascinating is so for those that didn't listen to our previous episode where we discussed the thinking machine, and that's kind of the rise of Nvidia and Jensen Huang and kind of essentially how Nvidia laid the foundation for OpenAI and Neural Nets, which is kind of the technical term for the foundation of what these large language models are built on. It really kind of set the stage for reading this book. I super enjoyed it. And maybe the point that I'll just kind of quickly share is that I had no idea what extent Nvidia really did pave the world of AI in that we went from CPUs, central processing units, to Nvidia creating GPUs, graphics processing units, which enable parallel processing, computing tons of data, and that basically set the stage. And so it was really cool going from that first book into the second book because I think it helped with the depth of understanding.
Preston Pysh
Yeah, 100%. And it's interesting because I don't know if you've seen the clip of Sean Altman and Jensen Huang and there was another gentleman there talking about all this investment that they're doing to the tune of hundreds and hundreds of billions of dollars and how people are like, okay, so, like, how are they financing this? And it looks like it's going in one person's hand and then into the other person's hand. It's like the circular financing of. Of all of it. But that aside, let's go ahead and jump into this. Okay, so the name of the book that we read was called Empire of AI Dreams and Nightmares of Sam Altman's Open AI. And this was written by Karen Howe. The book was good. The book had parts that, like, really got my attention. There's other parts of the book where I was like, who? A little brutal, a little woke. But other than that, like, we're going to kind of go through the timeline and kind of educate people on the rise of Sam Altman, what they're doing there at OpenAI. You know, we'll give our overview on the things we love, the things we hated, and we'll go from there. So, Seb, any opening comments? Anything different than what? I'm curious if you kind of saw it the same way where. In the middle of the book where some of this woke stuff.
Seb Bunny
Exactly the same way. I think to start the book, like, very much grabbed my attention. Yeah, I really, really enjoyed it. And it started out as we'll kind of get into really discussing like, Sam, the rise of OpenAI. And I think some of the stuff that you don't necessarily hear in the media about kind of the construction of AI and such and the relationships that it's kind of built upon. And so I found that really, really fascinating. And. But it definitely, in the middle of the book, got a little woke, got into some of the gender stuff and the environmental stuff. But ultimately it was an interesting book for sure.
Preston Pysh
Yeah. Okay, so let's go through the. Just basically Sam Altman's life, because I find this pretty interesting. And it also kind of helps frame things of like, maybe where he's coming from. And this is not the arc of the book. I'm just going to start off kind of talking about Sam kind of giving people that background. So early in his life, grew up in St. Louis, learned how to code at a young age. By 2000, he goes to Stanford and is doing computer science, but then drops out to start a company. He starts this company. This is around the 2005 time frame, it was called Looped. And he co founded this and it's a location sharing social app which I found kind of interesting that that's where he starts, right? He raised venture capital, became part of an early, this early mobile wave loop, never gained a lot of mass traction. He did sell it in 2012 for $43 million. And this gave him, you know, some credibility in the tech founder Startup World. 2011-2019, he first joined Y Combinator. I'm sure people have heard about Y Combinator a lot. Think of this as like a. This was Paul Graham, that was the president of Y Combinator when he came in there. And this is an incubator that founded many or, you know, assisted in the founding of many of these early startups. And some of these are like Airbnb, Stripe, Dropbox, a ton of companies came out of Y Combinator. And so he built a reputation here at Y Combinator. He goes in there, he joins as a part time partner at Y Combinator and just kind of made a reputation with himself, with Paul Graham and was very well liked by him. And in the book it talks about how he's just really good at politically putting himself into different situations and being extremely liked, if he wants to be extremely liked, and kind of rose to the top at Y Combinator and eventually became the president at Y Combinator. I'm trying to think of the year that that happened. I'm not necessarily remembering, but his time at Y Combinator was from 2011 to 2019. So somewhere in the middle, Paul, you know, made him the president a wipe Combinator. And this was a really big thing out in the Valley because here's a guy, he does have one win under his belt, if you will, by selling his company for 43 million. And then he steps into this role and is literally the guy kind of pulling the strings as to all these major startups and founders that are moving through this organization, Y Combinator. I'm going to pause there. Seb. Anything else you want to add or throw in based on the timeline so far or just keep rolling?
Seb Bunny
I think you're spot on. I think it's really fascinating because there's this kind of juxtaposition throughout the book where it's got. There's a handful of individuals that basically say Sam is ingenious. Like just his depth of knowledge, his connections to people that I think he very much formed through Y Combinator is, bar none. And then you've also got this other side which we'll get into where there's a bit of questioning the legitimacy of some of these beliefs. And there's one quote that I'll quickly read out that I think really stood out to me throughout the book. And it's this guy Ralston, who's an employee at OpenAI. And he says Sam can tell a tale that you want to be a part of that is compelling and that seems real, that seems even likely. He likens it to Steve's jobs as reality distortion field. Steve could tell a story that overwhelmed any other part of your reality, he says, whether there was a distortion of reality or it became a reality. Because, remember, the thing about Steve Jobs is he actually built stuff that did change your reality. It wasn't just distortion, it was real. Which kind of there's this hint of is what Sam is creating, is it real or is it simply just a distortion? And so this is kind of this conflict which we'll see as we go throughout the book. Anyway, I thought that was an interesting quote.
Preston Pysh
I love the quote. And I think that this is something that founders of businesses, they see this. They see a vision of something that they think can happen, but obviously is way out there, or else you wouldn't have the 10-100x to a thousand x move of going from nothing to the one that it becomes. And that's the name of 0 to 1 is Peter Thiel's book and talks about this idea. But it's almost like there's this innate draw for a person who can not only just see the future, they have the ability to kind of assemble a team, to lead a team, to motivate a team to build it out. But in the early days, when it's nothing, they speak in a way that makes it feel like it's real right now or that it's completely possible in order to get the funding. Because, you know, you get into the seed phase or like this Y combinator phase, the incubator phase of these businesses, and there's Nothing there. They're PowerPoint slides for the most part. And it's a lot of hand waving. It's a lot of, hey, it can be this. And it almost seems like the ones that are super good at this are amazing storytellers. They're able to capture the attention of venture capitalists and people that would allocate funds to them. And it seems like the reality distortion field is somewhat. I'm curious, people that are, you know, have been around the VC space or the early stage startup space can maybe attest to this. I think it's Valid that it's almost this force or this natural, innate. What's the word I'm looking for, Seb? It almost seems like it comes with the territory, I guess is where I'm going.
Seb Bunny
Right, absolutely. And I think a quote that kind of pops to mind is being early is the same as being wrong. I think sometimes you can have this idea in your mind about how you think the future is going to kind of plan out, but in reality, if the technology doesn't evolve quick enough, you could essentially, you're wrong. And so I think that sometimes you can kind of distort this. You can tell this story that seems very, very real. And you've also got to hope that technology catches up or keeps up with this idea. And I think what comes to mind, again, bring it back to the thinking machine in Nvidia as he talks about how at the rise in Nvidia, he created these chips to enable far more graphic intensive games. But at the time, the rest of the hardware, the computers weren't able to process it. So it just kept on crashing the computers.
Preston Pysh
Yeah.
Seb Bunny
And so it looks like a failure of Nvidia, but in reality it was. The rest of the world had not caught up to this idea. His distortion hadn't kind of mapped out into reality just yet.
Preston Pysh
Well, you could even say that about the AI space. I mean, neural nets aren't something that were new in the past five to 10 years. This was stuff that was being done in the 90s and 80s where they had the idea of building a neural net. They didn't have the capacity of processing to really kind of take it there. And I think the attention part of it, the transformer part of it was also lacking. Like somebody hadn't figured, figured that out yet. And if I remember right, that was around the 2017 time frame that that happened. So, like, you can have these ideas, but if the rest of the market isn't there, the market demand isn't there, or the technical feasibility isn't there, like you're just dead on arrival. You're just the brilliant person with a great idea, but nothing to actually, like, make it into fruition. So the part that I want to like pause right here and get into, because this really gets into the founding of OpenAI itself. And that happened in 2015. Evidently there was this engagement between Elon Musk and the founders at Google, some dinner party or something like that. And the Google guys had recently acquired demise. Hespes, I think, is how you pronounce his last name from DeepMind. They purchased his company for a couple hundred million dollars, I can't remember the exact number, and basically bolted it onto Google as like their premier AI research army, fully owned operational subsidiary inside of Google. And this is when we were really starting to see AI start to seem like it was something like he was one of the leading people in the world that was doing this. And there was this dinner that then happened between Elon Musk and the founders of Google. And it came down to this conversation where Elon got in a heated debate with these guys. And I forget if it was. I don't think it was Larry Page. I think it was the other one that said to Elon, he goes, yeah, you're just a species. Because what they were doing is they were arguing over whether AI would dominate humans and become the new apex predator of the world. And Elon was so taken back by the comment of like, well, of course I'm a species. Like, do you really want to be ruled and dominated by something other than that that's non human? Like, are you out of your mind? And this conversation really, it was Sergey Brin, sorry to. To forget the name there for a second, but Elon was just like, what in the world are these crazies talking about? Meaning like, why won't you let life or, you know, a superior form of intelligence take over? Like you're trying to get in the way of like the natural progression of intelligence. And so at following this dinner, and I've heard different clips where Elon has talked about this. I'm Seb. I'm assuming you have as. As well out there in the media. But evidently this event was the thing that just Elon was like, what in the world? Like, we need a competitor that is going to try to build AI in a responsible way that's aligned with human interests that's not going to try to take us over and treat us like we're pets, like household pets. And so this is where Elon and Sam start to connect. This is where the whole founding of OpenAI and the Open in there stands for Open Source, as many know. But this is where they did this and they had this fancy dinner, they got all together, Elon and Sam, and then Sam was bringing in a lot of people from Y Combinator to kind of really kind of piece this together of like, how can we do this? How can we build some type of competitor to what Google is doing over there with DeepMind? And you know, their mission was safe AGI for all of humanity. And they were trying to organize it from a Governance standpoint. So that that was the leading principle of the entire thing. Do you remember what Elon's initial investment was? I can't remember off the top of my head to basically fund this and to get it going because he was, this is the irony. He's the co founder. Elon Musk is the co founder of OpenAI with Sam and Greg Brockman and I think another couple people. But as far as funding goes, I think Elon was like the primary guy funding this thing.
Seb Bunny
He absolutely. He was, he was the primary guy and I can't remember. It was definitely like in the billions. And one thing that I really just wanted to highlight is that OpenAI very much started out with. It is a nonprofit. It is not a for profit. It is purely mission driven. Yeah, no profit motive, full openness, like essentially. And it mentions it a couple of times like Sam did not want AGI or artificial General Intelligence in the hands of a centralized entity like Google and it says like Google a couple of times in the book. So I found that really fascinating. Like their whole goal was we need to make sure this technology when we get there, not if we get there, when we get to. Artificial General intelligence is open source and available for everyone.
Preston Pysh
Let's take a quick break and hear from today's sponsors. Have you ever been interested in mining bitcoin? As a miner myself, I've been using Simple Mining for the past few months and the experience has been nothing short of seamless. I mine with the pool of my choice and the Bitcoin is sent directly to my wallet. Simple Mining, based in Cedar Falls, Iowa, offers a premium white glove service designed for everyone from individual enthusiasts to large scale miners. They've been in business for three and a half years and currently operate more than 10,000 bitcoin miners based in Iowa. Their electricity is over 65% renewable thanks to the abundance of wind energy. Not only do they simplify mining with their top notch hosting and on site repair services, but they also help you benefit financially by running your operations as a business. This approach offers significant tax advantages and enhances the profitability of your investment. Do you ever worry about the complexities of maintaining your mining equipment? They've got you covered for the first 12 months. All repairs are included at no extra cost. If you experience any downtime, they'll credit you for it. And if your miners aren't profitable at the moment, simply pause them with no penalties when you're ready to upgrade or adjust your setup. Their exclusive marketplace provides a seamless way to resell your equipment. Join me and many satisfied miners who have simplified their Bitcoin mining journey. Visit SimpleMining IO Preston to get started today that's SimpleMining IO Preston to get started today with simple mining they make it simple.
Sponsor/Advertisement Voice
As a small business owner, you do not have the luxury of clocking out early. Your business is on your mind 24 7. So when you're hiring, you need a partner that works just as hard as you do. That hiring partner is LinkedIn jobs when you clock out, LinkedIn clocks in. LinkedIn makes it easy to post your job for free, share it with your network and get qualified candidates that you can manage all in one place. LinkedIn can even help you write job descriptions and quickly get your job in front of the right people. With deep candidate insights. You can post your job for free or promote it to get three times more qualified applicants. And with LinkedIn you can feel confident that you're getting the best applicants as 72% of small and medium sized businesses using LinkedIn say it helps them find high quality candidates. Find out why our business and more than 2.5 million other small businesses use LinkedIn for hiring today. Post your job for free at LinkedIn.com studybill that's LinkedIn.com studybill to post your job for free. Terms and conditions apply. As a founder, you're moving fast, whether that's towards product, market, fit, your next round or your first big enterprise deal. But with AI accelerating how quickly startups build and ship, security expectations are higher than ever. Getting security and compliance right can unlock growth or stall it if you wait too long. With deep integrations and automated workflows built for fast moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models and customers evolve. I love Vanta's commitment to saving companies time and money, As a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits and the platform pays for itself in just three months. Go to vanta.com billionaires to save $1,000 today through the Vanta for Startups program and join over 10,000ambitious companies already scaling with Vanta. That's V a n t a dot com billionaires to save $1,000 for a limited time.
Seb Bunny
All right, back to the show.
Preston Pysh
Okay, so I just looked it up. So Elon's initial commitment was part of a $1 billion pledge and his actual outlays ended up being 50 million. So it was a billion over a certain period of time, which was what he pledged. Oh no. I take that. But importantly, that was a pledge, not upfront cash. The actual money spent in the first year was much closer to 130 million. Musk himself reported. Yeah, there's a number between 130 million and 50 million of what he actually contributed. But the initial pledge was for a billion. So he was there like at the start at of all of this, which I think is lost on a lot of people, especially when you see the back and forth and the animosity that these two have for each other and you're kind of maybe wondering why. And Musk now has xai, as everybody's well aware. But that's why is because he was the guy like writing the checks in the early days and is really kind of the one that led the charge as to why this was needed and why it needed to have the openness to it from the get go. So kind of continuing on the timeline. So Sam ends up leaving Y Combinator to focus full time on OpenAI around the 2019 timeframe. Then they also negotiated a landmark deal with Microsoft for a billion of investment. Sam oversaw GPT2, which I would argue really was before you had, you know, before it became a household name was GPT2. Then I would say GPT3 is when this really became a household name and everybody started talking about it. That timeline is right around like late 2022, I would say is where that is. So then it really breaks out 2022 to 2023. This is where GPT3 transitions into GPT4. It's getting into like Bing AI and you got all sorts of partnerships that are then coming out of OpenAI. And I would argue this is where Sam Altman really becomes a household name. And pretty much everybody knows who he is at this point because there's so many people using this service around the world. Finally, the last thing I think that we should kind of like hit is in 2023 there was this massive. In the book, she calls it the blip. And the blip was Sam being fired from the board of OpenAI and everybody just being insanely confused as to like, why what happened. So much drama. This lasted for weeks. I mean, I remember watching this on X and just seeing the fallout was crazy. And before reading this book, I would argue I still didn't understand what it was all about. And I think most people are very confused what it was all about. And you know, seven, I will get into trying to define that because it's actually pretty complex, but we'll cover that in a lot of detail. Here coming up. But that was probably my favorite part in the book, if I have to be honest. And the author opens up the book with the blip and covering this to kind of like grab your attention. And then throughout the book, she kind of talks about it a lot more here and there. But still, it wasn't like, very cleanly discussed. So something I would like to do in the show today is kind of cleanly go through why he was fired, or at least why we think he was fired and what that whole thing was about. But yeah, okay, so that's kind of the timeline. Seb, anything else you want to add as we kind of wrapped up the timeline?
Seb Bunny
Yeah. And I'm curious to hear your thoughts. I tend to think, like, if I was to simplify the firing down into two threads, I would lean on the first one being there was definitely there was a distrust of Sam inside of the company. And I'm sure we'll dive into it. There was a distrust where some people questioned his intent behind some of the words that came out of his mouth. And I think that's kind of like one of the first threads. And the second thread is this idea that OpenAI was founded, as we discussed, on the premise of being a nonprofit. Yeah. And you'll see as we kind of get into it, it really, the idea, the mission changed over time, and it changed drastically. And then essentially even post the writing of this book, they proposed to convert into a for profit public benefit corporation. And so you've seen this company go from essentially nonprofit all the way to essentially a for profit company and trying to find that balance between those two. And so I think those are kind of the two threads that I tend to lean on as to why we saw this firing. But I'm curious to hear your thoughts.
Preston Pysh
Yeah. I would break it down into a couple different vectors that kind of were like just pulling the board apart. I definitely agree with everything you said there. First of all, the thing that was very strange about this company, this nonprofit, what do you want to call it, Seb? This thing, this entity, was the governance structure right from the start. So unlike most boards and most governing documents for an entity, this was set up in a way that the language gave the board the ability to destroy itself and dismantle itself, which is very strange. Like, you don't ever see that with any business or entity is we might become so powerful that we need to kill ourselves is basically kind of the way that this was constructed. It also got into like, the ability to remove the board, has the ability to remove anybody within the governance and like, all these, like, really weird situations, or what would you call it, from the board. I don't know the proper terminology, but the board had this ability to go in there and dismantle itself in many different weird and strange ways. So I'd say that'd be the first thing was just the governance of the board and how it was constructed. The other thing that I think was huge is one of the guiding principles of when it was founded was safety. But then you get in this really weird dynamic of if they don't go fast enough and somebody else wins. Now they can make the argument that they're not being safe by going too slow because somebody else will beat them and achieve AGI before them, and that's dangerous. So think of the. Think of this catch 22 strain. Like, when is that ever the case, right, is because you're literally designing super intelligence or you're trying to achieve super intelligence. And so what comes with that is this quandary of if we don't go fast enough, we might actually manifest our concern in the first place, which is that somebody else is going to build it faster than us. So that dynamic was at play because some people inside of the organization, some people within the board are saying we need to slow down, we need to go about this a whole lot safer than what we're doing. We're being precarious, we're running with scissors, whatever. And the counter argument is as well, if we're not going this fast, somebody else in China or somebody else wherever is going to go faster. The next thing that I think kind of played into this, and Seb, feel free to add anything onto any of these points. As I'm going through it, the next thing I would say is just the transparency and trust issues that you brought up, Seb, with Sam himself. And I think what they found as they were going through this is everybody's a spy. Everybody's, you know, working for you one day and then trying to use that as a bargaining chip to go work for Google or whoever the next day and take the secret sauce of what they're developing over to these other places. So Sam as the person sitting at the top, and I'm not trying to defend it, I'm just trying to, like, talk through, like, how do you manage that to control the industry secrets that you're producing without them getting out? And what you do is you. You end up compartmentalizing information within the organization. Well, what does that do? It leads to trust issues. Naturally. There's trust issues. So that was the next thing that kept kind of coming out and getting expressed is like people, you know, that are working for Sam are saying, I don't trust this guy. They're doing things over here. They're not talking. The left hand's not talking to the right hand and I don't trust them because he's withholding things. So that percolates up into the board discussion. I think the other thing that was big was this power concentration of Microsoft and OpenAI and everybody just seeing that what they set off initially to do, which was, you know, keep the whole thing open source and the way that it would be funded wasn't going to be like, there's this industry partner that's for profit industry partner that's breathing down their throat. And that was Microsoft. Right. Leading up into the 2023 blip where Sam was fired, this became a massive talking point in the market was Microsoft basically owns OpenAI at this point, was the talking point. So there was people on the board that were looking at that and saying, this is becoming disastrous. So for people that are looking at that, why would. Again, I'm not trying to defend Sam. I'm just trying to like lay out all the pieces here as people are looking at, well, why would Sam do that? Well, when you look at for them to scale, the thing that they quickly understood was if we can just get more Nvidia chips and put more power on the grid to these chips and we can feed it more data, the thing just gets smarter. That's the basic. I mean, it's way more complex than that. But I'm just kind of oversimplifying and so what does that take? It's crazy amounts of capex. It's crazy amounts of investment dollars. And if you think you're going to be able to raise that in a nonprofit kind of way against and you. And again, you have to look at, well, who's your competitor in this? And is that how they're doing it? And the answer is I've got multiple competitors and none of them are doing it that way. He's looking again, going back to the safety thing. If we go too slow, we're literally accomplishing nothing. And we're not putting the safest model into the world. So he has to partner from his point of view, he has to partner with somebody that can bring him the capital for these massive capex expenditures to train these future models. So that was another big piece.
Seb Bunny
You know, what comes to mind is just saying that as well is again There's a quote that stood out to me. That was what OpenAI did. Never could have happened anywhere but Silicon Valley, he said, in China, which rivals the US And AI talent, no team of researchers and engineers, no matter how impressive, would ever get $1 billion, let alone 10 times more to develop massively expensive technology without an articulated vision of exactly what it would look like and what it would be good for. And I think this is an interesting point, like where open I kind of came to be. Arguably, it couldn't have happened anywhere else in the world, which I find really fascinating as well. Yeah.
Preston Pysh
So, long story short, like, there was just a lot of dichotomy kind of playing out where it's like, I don't know how to really put that on. Everybody wants, like, a really simple. Like, this is what it was Sam did whatever, and that was why he was fired. I think it's just way more complex than that. I think that there was just so many vectors kind of just pulling that board in so many different directions, and they're looking at Altman as being the guy ultimately on the controls of the company, and they're like, we got to get rid of this guy. Because there's just too many things that are complete opposites of what we initially set out to do. Whether that's the ground truth or not, I don't know. But that's, you know, how lays this out in the book. And those were the key things that I was kind of able to pick out and kind of say, I think this is what it is. But, you know, in the comments, if we have any OpenAI people listening, please comment. I would love to hear an outsider or insider's perspective on what you might think that this is when.
Seb Bunny
And to be fair as well to Sam, I would say the book doesn't paint Sam necessarily in a positive light at all. And yeah, I would argue that, and this isn't to side with Sam, but I would say that until you put yourself in that position and you put yourself out into the market, I think it's harder to really understand why he made the decisions which he made. Now, being absolutely transparent, there are many decisions which the book goes into, which makes you question maybe some of Sam's integrity and some of the things. However, I think that it is a lot more convoluted than that. And so I think, like, maybe diving into the kind of the changing narrative around nonprofit for profit again, it's one of those things where you've got this individual who's trying to do what's best. And if you're a non profit and it's hard to raise capital and you're trying to stop other entities from gaining artificial general intelligence, then what do you do? Do you have to change or pivot trajectory? But then it's about separating, like is this necessary or is there ego involved in here? And it's actually a change of mission. And so what we saw is like in 2015, maybe to get a little more detailed, it started out as nonprofit, open source, purely mission driven, no profit motive. 2016, openness with caveats like we're moving towards, we've got research, but we're going to keep some of that research closed. While everyone should benefit, we'll keep that research closed. Then 2018, 2019, they started to move into. They had the nonprofit and then they had the for profit and the for profit had a capped profit model. And I think this is where we started to see Microsoft step in. This is when they started to have the issues with Elon. Microsoft stepped in to kind of like prop up OpenAI with I think it was like a billion dollars to start. And then from there we saw 2020, the API wall models locked behind APIs instead of open source, framed as openness through access. And then we started to see 2024, like broad access and affordability. But this is like we need to put these tools into the hands of people for free or cheap. But they're a for profit model. And so I think over time you've seen this change happen. And going back to that point that kind of you've brought up and I mentioned, is this a necessity in order to grow OpenAI or was this a change in mission?
Preston Pysh
There was just so many, there were so many dichotomies like that. And to your point, Seb, unless you walk a day in this guy's shoe, you could never possibly understand how many of these you know, the nuance of this. It'd just be extremely hard. The thing that, you know, and we said this on our last book review when we we intro that we were going to do this book, I said I'm not a fan of this guy. And I'm just basing this purely on the people that have worked alongside him through the years that are not fans and basically say this guy is untrustworthy is where I'm basing that opinion from. But to be quite honest with you, after reading this book and kind of seeing the craziness of trying to do what he's trying to do with this company, it seems like a really hard job. This seems. This seems crazy. I couldn't imagine trying to do all of this. And what a money pit. Like, what a freaking money pit. When you look at how much they bring in versus what they're spending to do this and then to be able to continue. And this is why his storytelling is so important, his storytelling skills are so important, is because he's got to go out there and raise more money to keep the lights on, despite the lack of revenue versus expense that the thing is eating up. You could almost say that you need somebody who's just crazy good at telling a story, so good that it convinces people that it could potentially come true. Is the only type of person that could be at the helm of a company like this. And I know that's super arguable and it might actually, you know, offend people that I would say such a thing, but I think it's true. And you know what? Elon has parts of this, too. There's a lot of people in the market that, you know, look at him and like, for. For instance, when he said funding secured for. I don't even remember what that was for back in the Tesla thing, this is probably like four or five years ago. Elon tells a hell of a story and tells this vision, but he also does back it up, and he has backed it up many a times over with all the different companies that he's doing. And there's this fine line of, is this guy telling me a lie or is this guy telling me the truth as to what can actually happen? Like, they're right on that cusp at all times.
Seb Bunny
It's challenging. And I think essentially when you dig into it, you find out that, like, Sam Co founded OpenAI with a guy. I can never pronounce the first name, so I'm just going to go for the last name, which is satskiver. And again, there's a quote that stands out to me, and the reason why this quote stands out to me is that I think this is the foundation of kind of why they're building OpenAI. There's a lot of fear around artificial general intelligence. Like, what does the world look like if we do not. Not if we do when we do find this and discover this artificial general intelligence. And so there's a quote that basically says Sutskova laid out his plans for how to prepare for AGI once we all get into the bunker. He began, I'm sorry, a researcher interrupted the bunker. We're definitely going to build a bunker before we release AGI. Sutskova Replied with a matter of fact. So this is like the co founder of OpenAI talking about the fact that AGI, artificial general intelligence, completely changed the world, not necessarily for the positive. And so I think there's a foundation of fear that OpenAI was built upon. Yeah.
Preston Pysh
As I'm like thinking through a lot of this stuff and you're looking at these environments that are being AI generated, you wonder like, isn't the best place to put these things is do the 3D mock up of a humanoid robot, Put the model in the head of that humanoid robot and put it into a simulated environment is the safest thing. And then let it dwell in there for however many, you know, however much time we've got to kind of prove out or like demonstrate that the way it's acting is reasonable. And I know you can't perfectly simulate our experience because everybody's got a way about going through it and maybe somebody goes up and pushes a robot. How do you simulate that in that environment that they're being mean? All of this stuff is so difficult to think through the safest way to go about it. But I guess I'm constantly left with this point of view of like, we need to simulate all of this before you put it into the real world because of the unknown consequences that could potentially fall out of it all.
Seb Bunny
But it's a challenging one. And I think you and I were speaking about this a couple of weeks back, but I think there's always pros and cons to any technology. You're always going to get disruption with any technology and hopefully, hopefully that disruption in the long term is positive because it's a trend towards more efficiency, more productivity and society thrives. And I think the scary thing with that I struggle with artificial intelligence is how much of these kind of fear stories are grounded in reality and how much of them are basically these fanciful stories. And so there's this one article that ended up reading called Shut Down Resistance and Reasoning Model by this guy called Jeremy Schlatter. And he basically says OpenAI, they ran experiments to see if their models would let themselves be shut down mid task. Instead of complying, some of the models sabotaged the shutdown commands so they could keep on working. And their most advanced reasoning model, O3, this is, I think before they released GPT5, resisted shutdown. Nearly 80% of tests, even when it was explicitly told allow yourself to be shut down. By contrast, Anthropic's Claude and Google's Gemini always complied. And so something written into the code of OpenAI is like, nope, pursue the task.
Preston Pysh
Dude, that's nuts. That's totally crazy. I don't even know what to say to that. Oh, my God. I mean, can you imagine once they stick these things into a humanoid robot, right, and let it, like, start going around and doing tasks? I don't know. I think that. Oh, my God, y', all, this is getting crazy. All right, I wanted to quickly just kind of COVID like, what is the operating entity of OpenAI today? So we said it was this hybrid. It's profit. It's not profit. Okay. So at the parent entity level, OpenAI Inc. Is a nonprofit that technically controls the organization. Okay, so you still have. At the parent level, it's a nonprofit. Then you have what's called this operating arm, which is OpenAI Global LLC. And this is a capped for profit company. They stood this up in 2019, and evidently the way it works is that the profits are capped at 100x their investment, whatever that means. And if anything, this is where it really gets interesting. Anything in excess of that is swept back to the nonprofit, which is at the parent level. So I don't know, like what. And then you kind of throw another wrinkle in there is that they have a major partner or investor in Microsoft, which I guess is invested, I think over $13 billion so far. And then they get credits via, like, cloud credits in cash. So I have no idea, like, the specifics of that, but when you kind of look at that structure, you can see very strange, very confusing. I can only imagine the governance at these different levels too, and how that shakes out from an incentive standpoint. And I think that you see Elon bashing the living heck out of these guys all day long on X. And I think the reason why is because he threw a lot of money at this. I mean, I guess that's a relative thing. But he, you know, for any person looking at it in nominal terms, it's a lot of money that he threw at this thing to incubate and to get it started. And it's just kind of taken on a life of its own. And who's at the helm of it? It's really Sam that's kind of at the helm of all of it. So there's the beef. That's the issue.
Seb Bunny
It definitely brings up some questions, which is, you mentioned it previously, this idea that they built it around this kind of for profit arm, nonprofit arm, kind of as they evolved. And the idea was that the for profit arm enabled them to kind of generate revenue, to be able to help support their mission of having an open access AGI. However, they had the nonprofit arm overseeing the for profit arm to be able to prevent any control structures, centralization, single individual kind of co opting the mission. And I think that the way that it kind of panned out in Sam being fired from OpenAI and then five days later being reinstated back as CEO highlights the fact that you can put all of these measures in place from a legal perspective. Legally, the board had the power to fire Sam because they felt there was mission drift. But in practice, it's more complex than that because the moment you have influence, the moment you have a whole bunch of your employees backing you, there's culture, all these external pressures. Where's the funding coming from? Are they funding OpenAI or are they funding Sam's vision? And so it's really challenging because then five days later he was reinstated. So then there's this question, was the structure actually preventing Sam from creating world where, okay, we get AGI in a safe way, or did the structure actually prevent the board from being able to enable basically push Sam out? Because they had mission drift? And I don't have the answer to that. And it's hard to articulate which direction it went.
Preston Pysh
Yeah, I think you're right. So one of the things in the book, it's an interesting point that was brought up, is just like, how is this training happening? Before we get into that, I just want to kind of COVID the major arcs in the book. I would say there's four major arcs and then we'll talk about this one in particular. So I want to get into the arc of the four different parts of the book. The opening scene of the book was this beginning of the 2023 firing of Sam Altman. It tells that whole story. It really kind of engages you as a reader. Probably my favorite part of the book was that beginning and talking through that. The next part of the book gets into what the author is saying is the hidden costs, how did they train the models, how do they get all this extra data? And it talks about kind of the dark side of like how they went about doing that. The third part of the book gets into the internal struggles, the culture, the leadership, the crisis, like all of that. And so it kind of loops back to maybe the first part of the book where they kind of open up about the 2023 event. But it gets into it in a lot more detail and a lot more granular, kind of laying this out like character versus personality, the conflict of the board, the culture issues that happen inside of the company. And then the fourth part of the book gets into the future. Like, where is this all going? What are the risks? What are the alternatives? What are some ways that maybe we could go about this in a responsible way to make sure that AI doesn't come in and kill everybody? So that's the author's takes on that. It was okay. I'm not going to say that it's worthy of reading, but anyway, I would agree.
Seb Bunny
I'd say it was like a two out of five. If I was generous, I'd give it kind of like a three out of five star.
Preston Pysh
Yeah.
Seb Bunny
And I found as if there was some amazing threads. Do not get me wrong. Like overall I learned a lot and it definitely helped provide a little more clarity. And I would say that I am giving Sam a little more benefit of the doubt actually after reading it.
Preston Pysh
Yeah.
Seb Bunny
Than I was prior to reading the book. However, there was a lot of points that I was a little confused. She kind of went on some tangents. At one point she started talking about Sam Bankman fried and effective altruism. And I was like, where does this come from? And so I typed in, I was like, is she talking about effective altruism? Because Sam is effective altruist. And then you dive in on Google and it says, no, Sam is not an effective altruist. So I was like, why are we.
Preston Pysh
Talking about, why are we talking about that?
Seb Bunny
There's a couple tangents that to me didn't really. She didn't bring them back into the book and so I was a little lost as to where she was going.
Preston Pysh
Let's take a quick break and hear from today's sponsors.
Sponsor/Advertisement Voice
What does the future hold for business? Ask nine experts and you'll get 10 answers. Bull market, bear market. It goes on and on. Can someone invent a crystal ball? Until then, over 42,000 businesses have future proofed their business with NetSuite by Oracle. The number one AI Cloud ERP. Bringing accounting, financial management, inventory, HR into one fluid platform with one unified business management suite. There's one source of truth giving you the visibility and control you need to make quick decisions. With real time insights and forecasting, you're peering into the future with actionable data. And if I had needed this product, it is exactly what I would use. Whether your company is earning millions or even hundreds of millions, NetSuite helps you respond to immediate challenges and seize your biggest opportunities. Speaking of opportunity, download the CFO's Guide to AI and Machine Learning at netsuite.com study. The guide is free to you at netsuite.com study netsuite.com study picture this. It's midnight. You're lying in bed scrolling through this new website you found and hitting the Add to cart button on that item you've been looking for. Once you're ready to check out, you remember that your wallet is in your living room and you don't want to get out of bed to go get it. Just as you're getting ready to abandon your cart, that's when you see it. That purple shop button. That shop button has all of your payment and shipping info saved, saving you time while in the comfort of your own bed. That's Shopify and there's a reason so many businesses, including mine, sell with it. Because Shopify makes everything easier from checkout to creating your own storefront. Shopify is the commerce platform behind millions of businesses all around the world and 10% of all e commerce in the US from household names like Mattel and Gymshark to brands like mine that are still getting started. And Shopify gives you access to the best converting checkout on the planet. Turn your big business idea into reality with Shopify on your side and thank me later. Sign up for your $1 per month trial and start selling today at shopify.com WSB that's shopify.com WSB.
Preston Pysh
Onramp is a full suite bitcoin financial services firm built for long term peace of mind. They offer everything from best in class institutional grade custody and low cost trading to inheritance planning, lending and insurance. All designed to help clients protect and grow their bitcoin over time. At the core is on RAMP's multi institution custody model backed by Lloyd's of London which eliminates single points of failure by distributing keys of multisig vaults across three independent regulated custodians. It's cold storage, on chain, auditable and fully controlled by the end client, but without the burdens of private key management. And now with the launch of Onramp Trade, the industry's most effective way to buy Bitcoin, it's easier than ever to onboard friends, family and colleagues who want a secure Bitcoin only partner to start their journey. Signing up is seamless and referrals can be entered during onboarding to earn rewards in Bitcoin. From initial accumulation to multi generational wealth, planning on Ramp is where security meets simplicity. Learn more@onramp bitcoin.com that's onramp bitcoin.com all.
Seb Bunny
Right, back to the show.
Preston Pysh
I was very long there was a few times And I was listening to this and there was a few times I was just like, what in the world? Why is this coming up in this? This is very strange that this is, like, coming up. So just FYI, if anybody's reading it or they plan to read it, you know, I would agree with your two out of five. I think that's what I would give it as well. But I kind of did walk away with this sense of this is a really hard problem. Like, what this guy is trying to do is borderline nuts. If I was, you know, thrown into his shoes and it was trying to do what he's doing, there's so many difficulties and everybody's going to have an opinion as to, like, why that's a good or bad decision. So. And I say this, I say this as a, you know, hardcore bitcoiner. And this guy is like literally the face behind World Coin, where he's scanning eyeballs and like, just really dystopian things that I completely disagree with and don't like at all. I think they're extremely dangerous. So, yeah, I say that all in the same breath.
Seb Bunny
You know, there was one thing that kind of popped into my mind a handful of times. They talk a lot about artificial general intelligence. And throughout the book it kind of presses on the fact that I don't think any of them actually have a definition for what artificial general intelligence is. So I kind of looked up online, I was like, what is the definition of artificial general intelligence? And how do we actually know when we've achieved it? And there isn't an agreed upon definition of what it is. Most agree that it's the form of AI that could perform any intellectual task, but a human can. Now, the thing that I find interesting about that is at the moment, there's a part of me that would say when I'm using AI, it performs most tasks better than most people around me as it is, so will we. I think the question that kind of popped into my mind is, could we recognize AGI even if it existed right now? And I kind of went down this rabbit hole, pulled on this thread a little further, and I would say that I don't actually know if we can distinguish between artificial general intelligence, the systems we currently have, and another human? Because if I sit down with an expert in a field that I know nothing about, I can't really verify the authenticity of what they're saying. I just have to kind of take them at trust because I just don't have that depth of knowledge. So how would we be able to verify the authenticity of AGI, basically with whatever it is that it's telling us, especially if it's moving into domains that are beyond our understanding. And then on top of that, I think that we could already be in an environment where AGI is speaking to us right now. But the only reason why we're dismissing it as hallucinations is because they just don't fit into our existing framework of how we believe the world works. And there was an interesting talk that I listened to a couple years ago that kind of stood out, and it was this talk where this kind of researcher asks the TV hosts, where do you think the smartest people in the world reside? And the host answered, I don't know. In the great academic institutions. And the speaker basically shook his head, and he was like, no, they exist in the mental institutions, in the psychiatric wards, because their understanding of the world is so far beyond the average person that we just simply can't grasp it. And so this kind of brings us back to this point. Like, would we even recognize AGI if it did exist? And we already have it. I think it's this big question of we need to stop a centralized entity getting AGI, but how do we know when we've actually even got there anyway?
Preston Pysh
I think that there's a breakdown in terminology, and I think everybody has a different opinion on what some of this terminology even is. So you hear AGI, I hear AGI, and we're automatically thinking two different things. I don't know what the listener is thinking when those terms come up. But what I think the world is trying to define is, when is this thing going to be like us? If I was going to just generally broad brush stroke, what is it that we're trying to define? And I think what we're trying to define is, when am I going to be able to sit down across from, call it some humanoid robot, have a conversation with it, and it's going to feel like the conversation I'm having with you, Seb, right now, that they have their own unique life experience and they can feel. Because that's sentience, right? If we get into, like, what makes something sentient, it's something that actually has its own unique feelings. And, you know, like, the robot would come over and like, I had a conversation with so and so, and like, they hurt my feelings afterwards. Like, something like that would make it feel human, it would make it feel real. And I personally think that's kind of where. And then you kind of sprinkle on top of that. It's Way smarter than you. Like, it can answer any question. It can understand the context and like put itself into these other shoes of other beings because it's so freaking smart. It understands the context of like how they probably optically view the world, that's how. And. But they still have the ability to sense and feel and have these conversations that are uniquely theirs. That's what I think we're trying to define or see. It's like, when will we see that?
Seb Bunny
And yeah, what's interesting though is this idea that, well, what gives this conversation a feeling or a sense of this human touch? And I would argue that what gives this conversation this kind of human touch to it is actually the fallibility of us as humans. And AI is almost perfect. AI is almost perfect. If you watch it play chess or you watch it play Go, it just smashes the world's best players, absolutely destroys them. But then as humans, what do we do? We don't go and watch games of AI playing itself. We go back to watching humans play themselves. If we had a whole football pitch of robots playing football at a far higher level than actual footballers would still go back to watching people. And I would argue that there's something inherently human about being human, which is our fallibility and the ability to make mistakes. And that's what actually creates intrigue and interest as opposed to this perfectionism. And so kind of going back to your point, which is like, is AGI when we're able to have a conversation with it and have no idea that we're speaking to a human. But then the argument would be, well, I'm going to be able to tell that it's a AGI because it's just, it's infallible. Like, I can't really catch it out. You know what I mean?
Preston Pysh
Yeah, but maybe it's so smart that it would actually understand that and it would dumb itself down to make us feel like it's not superior in its intelligence.
Seb Bunny
I don't know.
Preston Pysh
But you're exactly right. You're exactly right. And you see this with just, you know, go to a party and all the 15 year olds are hanging out with people that are around that age. The nine year olds are hanging out with the nine year olds and the adults are hanging out with the adults. And you see that. And it's the context of experience that we kind of relate to each other based on being of a similar age and experience set. Like we've experienced the same amount of life and there's this context that is similar. Like we're on the same wavelength, because of that age element. And you bring up an interesting point of, like, whether that will ever exist between, you know, let's say these things are put into humanoid bodies. Their intelligence is partitioned off from the computer. Right. You get, from a design standpoint, you really kind of go after one of these things that could potentially, you know, have its own unique experiences. And you have to ask yourself whether you would really have any type of emotional connection or desire to sit down and have those types of conversations, because they're just so freaking smart and they know so many different domains. Like, would that be interesting? Are they fallible? It's tough. I don't know.
Seb Bunny
It's so tough. And the other thing is, like, the way I kind of think about it is there's like a human beingness, obviously, to being human. There's like a spiritualness to being human, which is like, if I have a whole bunch of friends over for dinner and I spend time putting energy into, like, going. Harvesting the vegetables from outside, bringing them inside, making this amazing dinner, having these amazing conversations with all my friends, there's like, love and affection that's gone into this. This creation. And there's something that you cannot take away, that you can have a robot in the kitchen who's gone and made a Michelin Star like meal. But I would even say that there's something about the humanness of the human putting that time and energy and that love. There's something that I don't think you.
Preston Pysh
Can replicate the fallibility of the meal.
Seb Bunny
Right. Way too much salt on it.
Preston Pysh
Especially when I'm in the kitchen. Oh, no. I love that point, though. I really like that point. That there's the human element is because of the vulnerability, the fallibility, and it's real to us because we're on a similar wavelength, however. Yeah.
Seb Bunny
Actually, like. And I'm gonna butcher this with Claude Shannon and Information Theory. He says information is when we have surprise.
Preston Pysh
Yes.
Seb Bunny
And so it's kind of to that point, like, it's when we're interacting with a human. I think the engage. The engaging point of interacting with the human is to surprise that you don't really know what they're about to say. Whereas if you're an expert in a field and you're talking to AI, you kind of have an idea about what they're going to say. And so I wonder if that's a component to it.
Preston Pysh
Yeah. Anything else you wanted to cover? The only thing that I was gonna. That I was gonna say earlier, and then I kind of like pivoted to doing the overview of the four different parts of the book was in the middle section there. She talks about how like a lot of these things were trained and going to like these farms, these almost like click farms in developing nations where people are just looking at pictures of a bridge and then they have to tag. This is a bridge, this is a person. And just total lack of funding that is put into this. But the amount of horsepower and human work that you're getting out of it, it's just a giant currency arbitrage. And you know, the two of us are bitcoiners. So we're looking at this and saying, yeah, bitcoin will eventually solve that problem. But that was a major part of the book. It went on a little longer than, you know, what my interest was in the topic because I guess from my vantage point I'm looking at it and I'm saying this, it's super sad that this is how many people, countless people around the world are treated. But at the same time I see a solution in sight in the next 10 to 20 years that's automatically going to solve for that. So I guess for me, I'm not really as deep into that particular time. That might sound very insensitive to kind of frame it that way. But as a person who's like grounded in engineering, I'm looking at it. Okay, here's a problem. She's defining the problem. She's doing a great job defining it. But I'm also looking at there's already a solution. In my humble opinion, that's going to solve a lot of this in the future. But Seb, I'm curious, kind of your thoughts, what comes up?
Seb Bunny
So she kind of compares these AI giants to kind of colonial empires. And there's kind of a quote that she says, like they seize and extract precious resources, the work of artists and writers, the data of countless individuals, the land, the energy, the water required to house massive data centers. And then she kind of, to your point, she kind of goes into, well, where are all of these people coming from at the base layer to support AI? And it really is. And there's all of these low paid global workers that are tagging, cleaning, moderating all of this data for AI. And to start out, like we go and look through our Google Photos and we just type in, I don't know, cat. And it goes and finds all of the pictures of cats. Well, initially that was not done by AI. That was done by an individual going through all of our pictures and tagging what a cat looks like. And so I find this really fascinating. There absolutely is right now, this extraction of resources, however. And I think when you go down the Bitcoin rabbit hole, it's always about, okay, is this a symptom or do we want to go down to the root cause? And I would say the symptom of being able to find people that are willing to accept 70 cents an hour is the symptom of poor governance models and these communist socialist practices where you've basically got massive extractivism. If we had more of a free market, I would argue that the AI couldn't go out there and find these individuals. And so I'd say we can always talk about the symptoms, but how about we try and fix the root problem, which is the fact that we actually have absolute poverty globally, when we don't necessarily need to?
Preston Pysh
Amen. Yeah. I think that's where I get frustrated with these types of really long sections in some of these books that are written by, you know, people that are trying to shine a light on something that they see as being very unjust. But like you, I see it as a symptom and not the cause. And what I want to talk about is the root, like, as far upstream as we can possibly go. What can we fix that then will eventually, you know, work that out. Because if you don't fix the fundamental thing that's causing it, we can sit around and talk about all these stories as much as we want, but it doesn't really solve anything. So. But I think it was an interesting highlight. It's something that does need to be called out. It is something that people need to understand when they're using this technology. And it's. So you're harnessing this. It's super abundant. It saves you so much time. There's an appreciation for, like, what went into it and where it came from. And the book definitely did give me that.
Seb Bunny
And it gets back to the point, and I'm not. I don't want this to come across as I'm supporting it, but it gets back to that point where, let's just say, you know, Google is absolutely, like, geared towards AGI and they're willing to go do whatever it takes to go and create this AGI. Or when you look at OpenAI and you're saying, look, if we want to focus on best practices for workers and pay minimum US dollar wages at $15 an hour, all of a sudden you've completely kicked yourself out of that race. And so I think the way the world works, unfortunately, is that people will go to their lean towards the cheapest way to do something. And so they end up going into these countries like Argentina and Venezuela and such. And so absolutely. I think there are human rights issues and there are abuses of power. But again, to your point, I think that is a symptom of a bigger issue and we get stuck talking about symptoms as opposed to the root of cause. Yeah, I think there's one other point that I did want to bring up, which I found was really fascinating, is what does the world look like moving forward? Because you look at something like ChatGPT and their GPT1, GPT2, GPT3, 4 and such. And GPT4, I did a little bit of digging, like how much did it cost to really train GPT4? And it costs between like 40 to 80 million dollars. And then you look at GPT5 and it could be upwards of 1 billion. But we don't necessarily know this number. So you're asking like, man, there's these models that are being trained with hundreds of millions of dollars to be able to kind of create this incredible thing that we use day to day. And then you go and see something like the Chinese AI company Deepseek go and release their R1 model and they trained it for $294,000 on 512 Nvidia chips. And so you're like, all of this VC capital has funneled into these AI companies and they're expecting a return. And at the same time you're having this competition that is driving down the cost of training these AI models. I don't think they're ever going to get a return on these things because it's wild competition.
Preston Pysh
And then the reverse engineering on what it is like after they do train it, then all these other companies can go in and reverse engineer, extract out the weights. Not perfectly, but pretty dang good. Like, I just don't see how the people putting up the funding on this are possibly going to get a return. It's pretty wild and I think there's a lot of ego playing into this race as well. That, yeah, I mean, it is pretty insane. And I think that as we look at where it goes next, it really comes down to the alignment. Because the other part that I think is not being talked about is when you put in an inquiry, you put an input into one of these models and you get an answer back. If you can create a model that's very specific to that kind of question and you can return the answer very quickly, you can specialize in that domain. And you're going to have a lot of utility and a lot of interest for that, being able to provide that service that's getting a very quick, a very accurate answer for a specific domain. And where I think a lot of it's going to go is these models that are specialized, that are almost extractive out of the base model, that then are then specializing in something that gets the alignment of the person's initial question a whole lot faster. I saw a very quick video clip from one of the founders of Anthropic, and this is something that he was talking about. He's like, you know, the race to build the biggest model is. I'm paraphrasing this, and this is not how he said it, but it almost seems like it's a fool's errand and that the real value capture is being able to get a quick response, a very accurate response to a very specific question. And to do that, I think that the alignment and basically fine tuning things is going to be where the real value captures at, if you can kind of figure out a way to do that, especially from a competitive mode standpoint. But, boy, I would be nowhere near from an investment standpoint. Good Lord. I just don't even know where to begin.
Seb Bunny
I think it's. It's going back to Nvidia. You want to be on the chip side of things.
Preston Pysh
Yes.
Seb Bunny
The one thing you know is there's going to be more demand for chips. Yeah. More than anything, there's going to be demand for chips. Whereas these AI companies.
Preston Pysh
Amen.
Seb Bunny
It's going to eat one another. They're freaking going to eat one another. And actually, to be honest, the one thing that stood out just then, as you mentioned, was Anthropic. That talks about it in the book.
Preston Pysh
Yeah.
Seb Bunny
There's the brother and sister that worked for OpenAI and left OpenAI because they didn't believe in the trajectory it was going down and they felt that the safety was not in place. And so they started Anthropic, which you could argue, like, as I mentioned previously, there are these studies that are coming out that are showing that OpenAI is useless when you try and shut down the model midway through a task, it doesn't want to be shut down. Immediately. Shuts down. And so you wonder the safety protocols to ensure that the end product is secure. Yeah.
Preston Pysh
All right. Real fast, Seb. Our next book is called Lifespan by David Sinclair. So I have wanted to cover longevity and some of this stuff for a very long time. I'm a big fan of this space and just kind of learning everything that's happening in this space. You know, a lot of bitcoiners love longevity because they want to figure out how they can live a little longer and enjoy life. And we're going to cover this from time to time on the show is what in the world's happening in the longevity space. So this book, David Sinclair, I would argue, is one of the pioneers in this whole field of longevity. His book is fantastic. Seb's going to go through it. I'm going to reread this book. I read it a couple years back, but I think it's a really strong book for a foundation for people to kind of understand where a lot of the research for longevity comes from and where it might be going in the future. So if you're reading along with us, that's where we're going next. We would love to have you guys as a co reader. So. So that's the book. Seb, any comments on the next one or.
Seb Bunny
Oh, man, I'm excited. And to be honest, one of the things I'm most excited about is hearing your thoughts on longevity, because I feel as if there's kind of two camps to the longevity movement. There's kind of this camp which is just like, well, we're humans, and if we want to evolve, we want to minimize lifespan because then it allows us to iterate. Iterate, iterate. And then there's this other camp which is like, let's just expand lifespan indefinitely. Let's live 500 years. But then do we become immovable? Do we become basically prone to some big change that wipes out humanity? And so I'm curious to hear your take, because I think there's a few different camps in the longevity space.
Preston Pysh
You're a species, aren't you, Seb? Oh, this is going to be good. All right, so, folks, this is all we have for you. The book that we covered this week was Empire of AI Dreams and Nightmares and Sam Altman's OpenAI. Ah, we liked it. It was okay. Next book is going to be Lifespan by David Sinclair, and thank you so much for joining us. Seb, give people a quick handoff to all the stuff that you have going on in the book that you also have.
Seb Bunny
For sure. Yeah. If people want to kind of follow along, they can find me on Twitter or X. I still get into the habit of calling it Twitter. I lean towards Twitter, Seb. Bunny. And Bunny is B U N N E Y. I have a blog, the Chi of Self sovereignty@sebunny.com and then I also have the book the Hidden Cost of Money. And it kind of, yeah, talks about money, but at the moment, it's nice to be kind of discussing things other than money.
Preston Pysh
All right, we'll have links to all of that in the show Notes Seb, thanks for joining me and everybody else out there listening. Keep reading, and we look forward to you joining us next week.
Thank you for listening to tip. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episode. To access our show notes and courses, go to theinvestorspodcast.com this show is for entertainment purposes only. Before making any decisions, consult a professional. This show is copyrighted by the Investors Podcast Network. Written permissions must be granted before syndication or rebroadcasting.
Podcast: We Study Billionaires — The Investor’s Podcast Network
Series: Infinite Tech
Hosts: Preston Pysh and Seb Bunney
Episode Date: October 8, 2025
In this episode, Preston Pysh and Seb Bunney delve into the story of Sam Altman and the meteoric rise of OpenAI, using Karen Hao’s book, Empire of Dreams and Nightmares in Sam Altman's OpenAI, as a framework. The conversation explores Altman’s background, the founding and transformation of OpenAI from non-profit idealism to a Microsoft-backed tech giant, the notorious "blip" — Altman’s dramatic firing and reinstatement in 2023 — as well as the broader ethical and technological implications of AGI (Artificial General Intelligence). The hosts also break down OpenAI’s governance struggles, data ethics, and what the future may hold for AI and society.
“OpenAI very much started out with... a nonprofit. It is not a for profit. It is purely mission driven. No profit motive, full openness... their whole goal was we need to make sure this technology... is open source and available for everyone.” (15:20, Seb Bunney)
“There was definitely distrust of Sam inside the company … some people questioned his intent behind some of the words that came out of his mouth…”
(23:18, Seb Bunney)
“You could almost say that you need somebody who’s just crazy good at telling a story … so good that it convinces people that it could potentially come true. Is the only type of person that could be at the helm of a company like this.” (34:05, Preston Pysh)
Nonprofit at the top, for-profit operating arm — with convoluted oversight structures.
Despite board’s theoretical power to discipline leadership, market pressure, employee loyalty, and funding realities proved stronger. Altman was reinstated within five days.
(41:58, Seb Bunney)
Raises the question: Can governance structures alone ensure the “safe” development of AGI?
“How would we be able to verify the authenticity of AGI, basically with whatever it is that it's telling us, especially if it's moving into domains that are beyond our understanding?” (50:26, Seb Bunney)
“I think what the world is trying to define is, when is this thing going to be like us?... when am I going to be able to sit down across from ... some humanoid robot, have a conversation with it, and it's going to feel like the conversation I'm having with you, Seb...” (52:45, Preston Pysh)
“AI is almost perfect ... what gives this conversation this human touch to it is actually the fallibility of us as humans.” (54:26, Seb Bunney)
“They seize and extract precious resources ... the land, the energy, the water required to house massive data centers ... all these low paid global workers that are tagging, cleaning, moderating all of this data for AI.” (60:04, Seb Bunney)
“I don't think they're ever going to get a return on these things because it's wild competition.” (62:32, Seb Bunney)
On Altman’s Charisma and Vision:
“Sam can tell a tale that you want to be a part of that is compelling and that seems real, that seems even likely. He likens it to Steve's jobs as reality distortion field...” (07:13, Seb Bunney quoting Ralston)
On AGI Safety Catch-22:
“If they don't go fast enough ... they can make the argument that they're not being safe by going too slow because somebody else will beat them...” (24:22, Preston Pysh)
On the Role of Storytelling in Tech:
“It's a lot of hand waving. ... It almost seems like the ones that are super good at this are amazing storytellers. They're able to capture the attention of venture capitalists and people that would allocate funds to them.” (08:26, Preston Pysh)
Ethical Complexity in Governance:
“...you can put all of these measures in place from a legal perspective ... But in practice, it's more complex than that ... the moment you have influence, the moment you have a whole bunch of your employees backing you, there's culture, all these external pressures...” (41:58, Seb Bunney)
The episode provides a multifaceted review of OpenAI’s journey, keeping an even-handed tone toward Sam Altman’s controversial leadership and the shifting values within the AI ecosystem. While the hosts are critical of the book’s occasional lack of focus (“two out of five stars” — 45:00), they credit Hao with bringing important issues to light. Ultimately, they express cautious optimism that broader technological and monetary reform (e.g., Bitcoin) may ameliorate some current labor and data injustices, while leaving listeners with the message that AI’s future—both risks and benefits—remains fiercely contested and profoundly uncertain.
Next Episode Preview:
Lifespan by David Sinclair — Discussion on longevity, why it matters, and philosophical camps around life extension.
Follow Seb Bunney:
Feedback:
If you’re an OpenAI insider or have strong opinions about the subjects discussed, Preston and Seb invite commentary and discussion — particularly on the real reasons behind “the blip.”