
A
Hello, everybody. This is Marshall Poe. I'm the founder and editor of the New Books Network. And if you're listening to this, you know that the NBN is the largest academic podcast network in the world. We reach a worldwide audience of 2 million people. You may have a podcast, or you may be thinking about starting a podcast. As you probably know, there are challenges of basically two kinds. One is technical: there are things you have to know in order to get your podcast produced and distributed. And the second is, and this is the biggest problem, you need to get an audience. Building an audience in podcasting is the hardest thing to do today. With this in mind, we at the NBN have started a service called NBN Productions. What we do is help you create a podcast, produce your podcast, distribute your podcast, and we host your podcast. Most importantly, we distribute your podcast to the NBN audience. We've done this many times with many academic podcasts, and we would like to help you. If you would be interested in talking to us about how we can help you with your podcast, please contact us. Just go to the front page of the New Books Network and you will see a link to NBN Productions. Click that, fill out the form, and we can talk. Welcome to the New Books Network.
B
Hello, I'm Deidre Woolard with the New Books Network Economics Channel. I'm here today with David Z. Morris, who's the author of the book Stealing the Future. It's about the ideas that drove crypto criminal Sam Bankman-Fried. I've read a couple of books about Sam Bankman-Fried, but this one was different. It dives into the thought processes driving not just the behavior of Sam Bankman-Fried, but also a lot of other tech elites. David Z. Morris is a former academic turned journalist who's written for Slate, Fortune, and many others.
C
Welcome. Hi. Thank you for having me.
B
So, Stealing the Future. You've got this background in studying crypto, but also in understanding social systems, so it seems like those two things came together a bit. Like I said, there are a few books on this topic. What were you hoping to achieve specifically with this one?
C
Yeah, I mean, I think you really hit it right on the head with your description. You know, I got very interested in fraud and scams in general. And I would say that even before I was focused on crypto, I was writing for Fortune and spotting and writing about and investigating other frauds, including Nikola and Trevor Milton, if that name rings a bell for some people. And so crypto became my focus, but perhaps even more broadly, I was looking at investment frauds and scams. And a lot of it really hinged on very similar dynamics, even separate from what happened with FTX and Sam Bankman-Fried, which is this specific thing called effective altruism. Even zooming out further from that, going back to my work in 2018, 2019, it was becoming very obvious that technology fraud and technology investing fraud was accelerating. And then obviously in 2020 we had these things called SPACs. A guy named Chamath Palihapitiya, who for better or worse is now extremely influential, made his money mostly off these SPACs, which largely crashed to zero after he had collected a lot of his fees. And so that title, Stealing the Future, speaks to this much broader phenomenon of deceptive investment pitches. Some of them are illegal frauds; some of them, in Chamath's case, were perfectly legal. And it all hinges on this idea of painting, in various ways, a very hypnotic picture of something you're going to build in the future. I think part of the problem, or part of the situation, I guess, is that we have these outlandish technologies that do exist and that people don't entirely understand. Cryptocurrency is one of them.
And I think that this is perhaps one of the other big distinctions between my book and others about Sam Bankman-Fried: one of the reasons I have written so much about crypto, and I think understand it fairly well, is that I don't believe that cryptocurrency itself is inherently a fraud. It's a new kind of technology, and there are interesting applications. But because it's so complex, in a way very similar to artificial intelligence, self-driving cars, and other technologies that exist at this frontier where people don't understand them that well, that opens up a huge space for frauds and scams. And with the AI we now have, arguably in much the same way that Tesla is sort of a long-term fraud, I think that OpenAI can be understood in many ways as a securities fraud. The things that they're promising and talking about just will not happen. And, you know, this is not something I get into in the book, but the regulatory environment is such that securities fraud is essentially unprosecuted these days. And so we have this complex situation where we're still in the hangover of, yes, there were some huge technology returns in the early 2010s. We can talk about why that was, but we've sort of entered the diminishing returns on technology investment. And so a lot of people who maybe in different timeframes would have been honest dealers are effectively transitioning to fraud as a business model. And there is this kind of justification behind it that gets to the ethics of FTX and Sam Bankman-Fried specifically. So he's kind of this one case of a much larger situation, I think.
B
Well, that's what I found so interesting about the book: you talk about effective altruism, which we'll get into, but also rationalism. So there's this thought process that kind of justifies, whether it's fraud or not fraud, just sort of winning at all costs. So talk a little bit about these philosophies, and also the role of predictive analytics in all of this. It's fascinating.
C
Yeah. So I'll say one thing about rationalism, capital-R Rationalism. I'm beginning to talk to my publisher about the next book I'm going to write for them, and it's probably going to be some sort of taxonomy of tactics and methods of fraud. One of the ones that I was actually thinking about this morning, just sitting down working on my little outline for the next book, is that imitation and similarity are very powerful and also, once you spot them, really big red flags for fraud or misdirection. So I mentioned Nikola earlier. Nikola, even back in like 2018 when they first appeared, really set off my radar, because I'm like, oh, Nikola, Nikola Tesla. They're evoking parallels to Tesla intentionally and not trying to stand on their own feet. Rationalism, which people might be familiar with because Eliezer Yudkowsky, kind of the figurehead, recently published a book called If Anyone Builds It, Everyone Dies, which maybe we can talk about a little bit, but we can put some context on it here. So rationalism, honestly, is a fraud based on a very similar dynamic to naming your company Nikola instead of Tesla. Rationalism wants you to think that it is synonymous with rationality, which is to say the way that we all think on an everyday basis when we focus on facts and try to deduce from the facts on the ground instead of engaging in mythology, whatever we think of as thinking rationally on a day-to-day basis. Rationalism is in fact the entire inversion of that. Now, we had the Yudkowsky book come out. I think it made a little bit of a manufactured splash, because they are very well funded, which is one of the things that I talk about in the book. But it didn't have much impact, because I think the AI bubble is popping more generally and people are less concerned.
But the origins of rationalism and Yudkowsky's thought about AI are that since about 2000, so for 25 years, since he was 19 years old, Yudkowsky has been entirely convinced, not of any particular way of thinking or of any method. He sort of became most famous for a fan fiction book called Harry Potter and the Methods of Rationality. But rationalism is not actually about the methods of rationality. We can talk about the Harry Potter thing. It is weird, and it's part of the cultish nature of this that they targeted, frankly, vulnerable people by using fan fiction. But while the subtitle is the Methods of Rationality, the reality is that rationalism is not based on any method. It is based on a conclusion, and anybody who knows what rationality means knows it should not be based on a preconceived conclusion. Yudkowsky, since he was about 19, has been entirely convinced that artificial intelligence will gain sentience, invent nanobots, and destroy the human race. This is the basis for rationalism. The reason that Yudkowsky decided to promote the methods of rationality, or try to develop them, is not because he had some abstract belief in the power of rationality. It's because people disagreed with his conclusions about AI, and so he decided that they needed to learn how to think better. Anybody who's trained in actual critical thinking knows that this is entirely backwards. And much as I talk about in the book, FTX has this original sin, which is that they immediately started embezzling customer funds in 2019, almost immediately after they were founded. In exactly the parallel sense, rationalism has an original sin, which is that it is based on a conclusion and not a method. And that lets us dig a little bit deeper into the commonalities between capital-R Rationalism and effective altruism, and this is where I push my argument a little bit.
And obviously it's quite open to disagreement, but what I see with rationalism is a deep belief in the predictability of the physical universe. They'll hedge around it in all kinds of ways, but operationally, Yudkowsky has said that we operate on the assumption that the physical universe is predictable. You can make sort of ontological arguments about whether that's a fact or not. But both effective altruism and rationalism hinge on the belief that, regardless of whether the universe itself is predictable, human beings can actually predict outcomes. And this is provably untrue based on a lot of mathematical and just sort of existential knowledge. But rationalism, as part of its program to make AI safer, wanted to introduce this thing called, oh, I'm going to get the exact terminology wrong, but something like programmable intelligence. They want everything to be step by step, A plus B equals C, everything accounted for. It's purely about what we would call analytical philosophy, and believing that analytical philosophy can actually bring you to clear, actionable results about how you act towards the future specifically. And so this is a real problem for them. Effective altruism is ultimately a subset of utilitarianism, and this is a problem common to all utilitarian philosophies: utilitarianism claims that we should base our morals on what our impacts will be rather than on moral rules. And this is how you get things like Sam Bankman-Fried telling his friends that he didn't believe that rules like don't lie and don't steal were defensible. Under utilitarianism, there are no universal moral rules, there are only outcomes.
The reason this entire ethical framework breaks down, and the reason you end up with somebody like Sam Bankman-Fried who's willing to commit crimes nominally because he thinks there will be better outcomes in the future, is because there is this basic error when you tell people that they can calculate the impacts of their actions. Arguably, in a sort of post-religious era, our moral rules, things like don't lie or don't steal, are actually there specifically because we cannot predict the impacts of our own actions, and we're trying to limit harm, I think would be the way to put it. Without those rules in place, Sam Bankman-Fried was able to convince himself, essentially, to take all of his customers' money that they were trying to invest through him. This is the sort of basis of the scam, right? He got all this cash, US dollars that people were taking to FTX with the intention of investing in cryptocurrency tokens. That money was never actually used to buy cryptocurrency. Those US dollars stayed with Alameda Research and were used to invest in startups, to make political donations, and to buy real estate for his parents, among many other things. And what this boiled down to was his belief, effectively, that he individually could make better investing choices than the people who were trusting him to custody their assets while they made investing choices. He essentially preempted his own customers' desired investments and sent them to, among other things, Anthropic, his friends at an AI startup. And so this is a sort of winding way to explain the ethics of effective altruism and utilitarianism. The hidden, implicit authoritarianism of all of this is also a really big subject for my book, which is that if you're a utilitarian at a certain level, or an effective altruist at a certain level, you have to believe the thing that Sam Bankman-Fried believed, which is: I'm smarter than all of these customers.
I know better than them because I'm thinking, again, rationally; we go back to rationalism. And even further down from that, one of the big points I make in the book is that this is about eugenics. This is about believing that you have genetically higher intelligence than somebody else, so that you can justify disagreeing with people while being firmly correct about what's going to happen in the future, even though there is this very clear reason to believe that we can't predict the results of our actions. So this is the sort of nested doll of justifications for these people. And this extends, I think, out to Elon Musk, Chamath, Sam Altman. This logic underpins all of these people's desire to agglomerate huge amounts of investment capital towards these far-future projects that they believe they can fulfill, even against any kind of skepticism, no matter how grounded. No matter how many times you tell people that it is literally impossible for humans to colonize Mars, you still have Elon Musk out there saying, no, that's not true. I can't offer any specific reasons why we're going to do this Mars thing, except that I'm a genius. And this all feeds into a, again, quite authoritarian structure to our economy on top of our politics at this point.
B
Well, that's what I find fascinating too: the myth of the boy genius, which you talk about a little bit in the book. And there's this connection that I found really interesting. You talk about science fiction, so you talk about Frank Herbert, who wrote Dune, and Isaac Asimov, and there's this thread of dystopian and utopian fiction that influences not just Sam Bankman-Fried but a lot of these techno-billionaires and how they consider the future.
C
Yeah. And that future is a very specific one. And the more you dig into it, the more fantastical and frankly delusional it reveals itself to be. I should shout out the work of Émile Torres and Timnit Gebru. Timnit was the AI safety slash ethics researcher at Google who was fired from her role around 2020, after publishing a paper that coined the term stochastic parrot to describe LLMs. Effectively, she was fired for revealing that the entire current generation of AIs is essentially a mechanical Turk, a kind of con job that imitates intelligence but doesn't actually perform intelligence. But they coined this term TESCREAL, which I don't use in the book because it's a little bit unwieldy, but it's essentially a synonym for my term, techno-utopianism, which is based on this long legacy of thought. TESCREAL stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. Cosmism is a fascinating one that goes back to Russia in the 1920s. Longtermism is arguably, I would say, the most toxic of all of these, because it's this idea that 10,000 years in the future there will be 20 billion human beings living in outer space, and all of their lives are ethically equivalent to those of any presently living human. This is, I think, the really ethically nefarious shared root of all of this stuff. And you look at Elon Musk and you can see it in action, this idea that we have to ultimately make sacrifices in terms of the lives of currently living humans in service of this future vision of humanity living in outer space. And it's very, very worrisome, because the sacrifices are real. Obviously, pretty much everybody in the United States right now is paying more in electricity bills because of the cost of running AI data centers. And that's just one very small example.
And they're able to rationalize that. Again, rationalism becomes rationalizing, which is a third thing. It's this totally made-up idea that the current generation of data-crunching AIs is actually going to solve problems like climate change, like the colonization of Mars, even more basic problems like cancer. These are things you hear Sam Altman promise all the time: we're going to solve cancer, and therefore we need to spend, you know, $3 trillion on these data centers that don't really, at this point, provide any noticeable improvement to human life. And so it all just gets pushed off into the far future, and it's all kind of maybe and what-if. You know, these LLMs that are fueling things like ChatGPT have almost nothing to do with any sort of data-mining technology that would, for example, improve cancer diagnostics or work on protein folding in order to advance medicine. These are almost completely different things. And here's where you get from optimism to fraud: they're talking about these things as if they're the same, and they're absolutely not. And this is where I think another really important part of these visions of the future comes in, because they're so far off. This is something that a writer, and I'm actually going to forget exactly who said this, it might have been Ulrich Beck, a German sociologist who thought about risk in the 1980s and 1990s, talked about when he wrote about calculating risk and thinking about the future. I might be getting it wrong, but there's this term, a hidden external determinant of all projections of the future, which is that when we start thinking about the future, and we're coming from a perspective where we have a stake in the future, where the way we think about the future, for example, impacts our current finances, we start to be motivated by those present-day determinants.
The thing that is not rationalism, but is whether or not I'm going to get a check next week. And with rationalism you can see it in action, because there's a phenomenon of very short AGI timelines. So all of these AI investors are talking about how we're going to invent AGI, we're going to invent computer brains, they're going to solve cancer, they're going to come up with super-genius-level stuff, right? But if you're saying you're going to do that 40 years from now, it's not a very interesting investment. When you start saying you're going to do that five years from now, it becomes a much more interesting investment, and you actually can collect those investment dollars, which, you know, are actually the real source of wealth for a lot of these people, rather than actual productive revenues. And so when you're thinking about AI or Mars colonization or anything like that, one of the really big mistakes that people get caught up in with frauds is this: are you considering that these people making these predictions are benefiting directly from you believing them?
A
Right.
C
Like Elon Musk's vision of Mars, again, is a very typical one. He would not have the level of investment in SpaceX if he didn't have this pitch for something even bigger down the road, right? And so I think this is a very important way to think about all of these predictions or projections of the future: they're very directly tied to attracting investment capital. And frankly, I think you'll rarely hear these people even talk about any version of the future that isn't directly tied to investment capital. Very little discussion about new political structures or new ways that average people are going to engage with the economy. These are not their concerns. They want to paint a vision of the future that is investable, and that's what really matters to them. And obviously Sam Bankman-Fried embodied that as well, because the future horizon for cryptocurrency at that particular time, 2020 to 2022, was just astronomically inflated. And I think that's another thing to keep in mind: in this bubble-driven economy, once one narrative dies out, we have to cook up another one so that we can collect more investment funds. And I think that structures their entire way of thinking.
B
Yeah, there's this odd connection between very short-term thinking, like Sam Bankman-Fried saying, you know, I have to make money, there's only a short period of time. And then, on the other hand, there's this way-far-out thing like what you just talked about: we're seeing into the future, we're seeing a thousand years from now. And to dive into that a little bit deeper, there's this weird connection I want to talk about with the focus on longevity. You have a lot of these people who are very interested in doing all sorts of crazy things to extend their physical bodies, and at the same time, and you talk a lot about this too, they want to sort of elude death by, you know, uploading their consciousness, the digital self. So how do some of those ideas fuel some of these techno-billionaires?
C
Yeah, yeah. I mean, I think that there has always been some version of this. In a weird way, this longevity stuff, I feel like, is connected to what political theorists call exit. And obviously we're seeing a lot of pushes towards exit in the right wing of tech politics right now. I mean, we have this really emergent, I think we're now more willing than ever to call it techno-fascism, because we have people like Balaji Srinivasan and others who are calling for building these independent cities with no laws, and all of this crazy libertarian stuff. And I do feel like both the longevity research and the sort of computerized-immortality research are versions of this desire for exit. That is to say, a desire to no longer have to deal with the politics of living with other people. And a lot of this stuff, I think, boils down to very basic impulses that have been filtered through this techno vision. The very basic understanding that we live with now is that our healthcare system is collapsing. Just as I think a lot of billionaires' main motivation is the fear that they might have to eat at a McDonald's someday, I think another one of their major fears is that they might have to actually go to a hospital with normal people someday. And so there's this idea that we're going to have boutique medicine, stuff that is at the very cutting edge, privately funded. Oh, and if I actually get something that, well, this goes back, by the way, to the 1960s, to cryogenics. Cryogenics was just an early version of this life-extension stuff: if I get a fatal disease, I'm just going to freeze myself, and then in the far future they're going to come up with a cure for that disease.
They're also going to know how to thaw us out and fix us. And one of the anecdotes in the book that I think is very interesting is that cryogenics has basically turned into a pyramid scheme at this point. The early cryogenics firms in the 1960s, not only did they not last long enough for medical science to advance far enough to reconstitute and save all of their patients, they didn't have a business model that allowed them to keep everybody frozen. So some of the very earliest cryogenics operations actually eventually had to thaw out their clients and just put them in graves like normal people. And so now cryogenics companies actually require a subscription while you're still alive, in order to pay for not just your own freezing but also the maintenance of the people who are currently frozen. So you're paying it forward to the people ahead of you in line. And that, I think, is quite emblematic: we're not going to rely on any kind of social system. We're not even really going to rely on funding medical research, which is interesting here too. Cryogenics is not funding medical research to actually fix any of the things that they promise. It's just, someday it'll be fixed. Which I think is actually quite the common logic with a lot of these techno guys, weirdly enough. But to speak to the short term versus the long term, right, the key concept here is emergency. The emergency of the present moment becomes a justification for handing power to whoever has the most convincing pitch for the moment. Basically, we stop thinking in concrete and grounded terms about the near future, like five or ten years from now. In American politics, we don't have any serious proposals for how we're going to fix anything really right now.
And so this vision of the far future becomes much more attractive to the general populace, because we're not being offered any alternatives by conventional politics. And at the same time, a few other things happen. One is that because the far future is so speculative, and we now have financial assets tied to the far future, speculative assets are incredibly volatile, right? Look at Tesla stock, for example. I was a staff writer for Fortune for a long time, and I've just been writing about finance for a long time, so I try to bring real finance to this analysis. If you look at a stock on the public stock market, you have this thing called a price-earnings ratio, or P/E, which is the multiple of each year's profit that it would take to pay back the price you paid for the stock. So if you look at an established company like Apple, and these numbers have changed over time, basically based on the number of people who are invested in the stock market, Apple's P/E ratio is something like 26 to 30. So theoretically you would have to own Apple stock for 30 years for its profits to earn back the price that you paid for the stock itself. That's already really high, and reflects some inflation over time of those expected future earnings. Tesla's P/E ratio, in fact, let me just look it up real quick. It has been in the past as high as 100, and, oh my good Lord, it is apparently 310 right now, it looks like. So that means that, looking at the actual profits of Tesla Inc., you would have to own the stock for 300 years, having bought it at today's price, in order to actually pay back the price that you paid for it. Now, the reason for that is that stocks that are considered speculative have a much higher earnings multiple versus established companies like Apple.
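The payback arithmetic described above is simple enough to sketch directly. A minimal illustration in Python, using the rough figures quoted in the conversation (an Apple-like P/E around 28, a Tesla-like P/E around 310, with earnings per share normalized to $1) rather than live market data; at flat earnings the P/E ratio itself is the number of years of profit needed to recoup the share price, and the optional growth parameter shows why a fast-growing company can justify a higher multiple:

```python
def years_to_recoup(price: float, eps: float, growth: float = 0.0) -> int:
    """Years of per-share earnings needed to pay back the share price.

    With growth=0 this is just the P/E ratio (rounded up to a whole year);
    a positive growth rate shows why investors tolerate higher multiples
    for companies whose earnings are still compounding.
    """
    recovered, years = 0.0, 0
    while recovered < price:
        recovered += eps * (1 + growth) ** years  # add this year's earnings
        years += 1
    return years

# Rough figures from the conversation, not live data:
print(years_to_recoup(28, 1.0))          # Apple-like multiple: 28 years
print(years_to_recoup(310, 1.0))         # Tesla-like multiple: 310 years at flat earnings
print(years_to_recoup(310, 1.0, 0.25))   # same price, earnings growing 25%/yr: 20 years
```

The third line is the point of the conversation: a 310 multiple is only defensible if earnings keep compounding fast, which is why a stalled growth story forces ever-bigger promises about the future.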
The trick that you have now with Tesla is that it's no longer a growth company. You get a growth premium to your price-earnings ratio when you have a small new company, but Tesla's earnings aren't even growing anymore. And so this is why you have Elon Musk out there singing and dancing about things like humanoid robots and robotaxis: he has to keep promising more and more things about the future. At the same time, of course, he's selling a lot of his own stock, nominally so he can go out and fund his purchase of Twitter, which he can then run into the ground, but that's a whole other topic. But the short-term, long-term dynamic is that you have long-term promises being made about highly speculative assets by founders or investors who then cash out into present-day value. So when you look at somebody like Chamath, right, he promised these big things about startups like Clover Health, and I think Virgin Galactic, a space startup, was another one of his SPACs. So he was saying, oh, we're going to have space tourism in five years or ten years or whatever. But between his company and him personally, he made $1.5 billion. He didn't keep the stock; he got cash, he got US dollars. And then all of those stocks went to zero. Just the way that, I think, Tesla, they might not go to zero, but they're going to correct pretty hard in the next year or two. And that, I think, is a big part of the game here: you make these future promises. I mean, this is why the title of the book is Stealing the Future. You make big promises about the future in a way that can be captured in a saleable asset. That can even be the risk of some future thing: the rise of platforms like Polymarket or Kalshi, they're about pricing risk. And so that risk of the future becomes a saleable asset in the present, where you can either offload it to somebody else who you have convinced this is a good bet, or you take fees.
I mean, with Polymarket and Kalshi, you take fees off of people trying to gamble about the future, and so those become present gains, right? Like, that's money that I have now. You can go off and bet on the future all you want; I'll take my US dollars today, thank you very much. That's the actual attitude of a lot of these tech futurists. While they're talking about how big and great the future is going to be, while they're talking about these huge investment opportunities that they're offering you, through the back door they're taking dollars, and that's what they're holding on to.
B
Well, the thing about Kalshi and Polymarket is fascinating to me, because you have a line in the book about how rolling the dice can seem more reasonable than trying to build anything meaningful. And I think that's a lot of what is happening, especially for young men, as sports gambling became more prevalent and more legal in more states. And crypto sort of runs a similar track; AI is similar too. And it connects back to Sam Bankman-Fried. You talk about how he has this inability to feel things. And we know right now young men, really around the world, are struggling with high rates of depression. They're spending more time online. They're sort of susceptible to the kind of thinking that you're talking about. So explain a little bit about how crypto, trading culture, and AI kind of feed each other.
C
Yeah, yeah. I would say I separate AI a little bit in this. A lot of the same dynamics are in place, but the fact is, AI is not in public markets as much, so there's a little bit less of that happening directly. But there are connections, for sure. And to sort of rewind to something I said earlier: I am in the rare position of believing in the sort of basic premises of crypto, but also believing that it has become, frankly, a huge vector for these gambling dynamics. This stuff was happening in crypto as early as 2017, 2018. It really hit Wall Street and kind of the mainstream during COVID in 2020 with the meme stock craze. So people heard about GameStop, and there are others that are evading me right now. But people were stuck at home trading. And just to put it right out there: day trading as an individual is just like gambling, a way to guarantee that you are losing money. Day trading is actually probably more extractive than gambling. We can go into that, but that's just something to really signpost for anybody who's considering this: you should not day trade, period. If you're not a professional, if you're not really trained in it, if you don't literally have experience with a Wall Street-style shop to teach you how to do this, you shouldn't do it. And I think this is where we really get into it. I come from a Marxist sort of theoretical background of trying to think about things in material terms. And obviously the economy is in a rough place right now. People are having trouble finding jobs, but it's also not a disaster zone. We have relatively reasonable unemployment rates and things like that at this point, miraculously, given how hard the right wing has tried to erode and destroy the economy for its own ends.
But we do have this baseline of prosperity in America still that is accessible. And so it really gets you thinking about it from a cultural perspective, which is, in crypto, even going back to 2017, 2018, there's this kind of pride in being what we jokingly called a degen, a degenerate gambler. There was actually a big Wall Street Journal piece about this a couple of days ago, and that speaks to what I think a lot of people have begun to call financial nihilism, right? This idea that if you're not reasonably in sight of saving up enough money to buy a house, then why bother saving at all? Why bother with the slow, boring work of those of us who are lucky enough to put 500 bucks a month into a 401(k) and watch it grow by 10% a year? Which, to do the opposite plug: I'm 46, and I'm lucky enough to have had a pair of financially very conservative parents who taught me to put away a little bit. And that does add up. So on a personal level, I'll put out a plug there. Do not day trade. Do the boring thing. Put your money into a tax-deferred IRA, use an index fund. You're not going to beat the market, just ride the index. It starts looking pretty cool after a while. I get a notification once every month or so that I got a $2,000 distribution back into my retirement account. It adds up. So again, as a personal note: do it the boring way. It's what works. But there has been this cultural valorization of gambling in a way that I think is somewhat disconnected from the material reality underlying it. I think there is still a path to just being a reasonable, conservative person. Save. I think your parents and grandparents scrimped and saved a lot more than you might think.
When you look at these boomer memes about how easy everything was, people still struggled in the past. So there is this sense of the opportunity in the economy declining, but there's also a very big cultural push toward normalizing this stuff. And again, you have to think about who it benefits. Who it benefits are people like Robinhood, the trading platforms who are attracting all of these new traders, and also the asset issuers. If you're a crypto company that doesn't have any real product but you have a token, you want to encourage people to gamble on the token, maybe even knowing that there's no actual utility there, maybe even knowing that it's a little bit of a scam. And so you just normalize this stuff, because that's really where your income is coming from: either the fees from trading or the appreciation of an asset that you control. This was another major part of the FTX fraud, a token called FTT that they essentially printed from nothing and then used financial shenanigans to deploy as collateral for loans. So again, another example: by encouraging people to be speculative gamblers, you actually give your nothing token or your shitty SPAC a little bit of value just by encouraging people to bet on it. If the bet becomes the value, then you increasingly disconnect from the idea that a business is actually something that should provide value. It's obviously all a crazy nest. I mean, we live in a society, right? There's a lot going on, but it's all kind of converging toward training people to normalize gambling, essentially.
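Morris's "boring" compounding claim above (500 dollars a month growing at roughly 10% a year) can be sketched in a few lines of Python. The rate, the twenty-year horizon, and monthly compounding are illustrative assumptions for this sketch, not figures from the book:

```python
# Future value of a steady monthly contribution stream, compounded monthly.
# Illustrative assumptions: $500/month, 10% nominal annual return, 20 years.
def future_value(monthly: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # monthly growth rate
    n = years * 12                # number of monthly deposits
    balance = 0.0
    for _ in range(n):
        # grow the existing balance for one month, then add the deposit
        balance = balance * (1 + r) + monthly
    return balance

total_contributed = 500 * 12 * 20             # $120,000 paid in
fv = future_value(500, 0.10, 20)              # ends near $380,000 under these assumptions
print(f"contributed ${total_contributed:,.0f}, ended with ${fv:,.0f}")
```

Under these assumed numbers, roughly two-thirds of the ending balance is growth rather than contributions, which is the "it adds up" point; at a lower assumed return the split shifts, but the mechanism is the same.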
B
And it feels very male to me. And I wanted to get your impressions a little bit about Caroline Ellison. She becomes the fall guy for Sam Bankman-Fried and for Alameda Research. She may be getting out of prison a little earlier than expected. Everyone focuses on SBF; he's the guy on the cover of the magazine. They don't really discuss her motivations beyond the idea that she loved him. But she was also very interested in effective altruism as well. What are we missing about her?
C
Yeah, it is very tempting, and I do think that she was essentially emotionally manipulated by him, but she's not a good person. That's definitely very clear. Not only was she interested in effective altruism abstractly, not only did she name her personal blog Fake Charity Nerd Girl, which speaks to this cynicism running through all of this, but she was also, and this is another common thread running through a lot of this, a big aficionado of scientific racism, IQ theory, eugenics, and all of this other stuff. I alluded at the beginning to there being a sort of eugenic underpinning to all of this, because the authority depends on you having some inherent native intelligence. But it's really specific. I mean, FTX money went to fund this group called Lightcone Research, which hosted white supremacists and eugenicists at an event in San Francisco, in I think both 2022 and 2023, called Lighthaven. So to circle back around to Caroline: she's no angel. And I think the maleness of it, and the sort of female counterpart, is really important. Despite the fact that she's a problematic figure, I think it's important to acknowledge that effective altruism and rationalism as cultures are extremely, I don't know, anti-woman, you would say. There is extensive reporting from inside these movements of powerful male leaders leveraging their control over funding to sexually harass and exploit women. There is a lot of reporting that people within these movements have used the one-on-one meetings that are supposed to introduce people to the movement to harass women. I'll resist the urge to be a lookist about it, but there are various reasons why you can imagine a lot of the men in these movements need some help, or think they need some help, to manipulate women.
They're off-putting in various ways, in addition to being racists. And the numbers don't lie. The proportion of participation in effective altruism and rationalism is something like 70% male to 30% female, if not more imbalanced than that. And I think this is where it's difficult not to turn into Larry Summers at some point, because there is this sense that, again, not because of any inherent advantage or disadvantage, but the culture around trading and probability calculation, a lot of what is very deeply ingrained in effective altruism and rationalism, those things have just been made male. A lot of people know this history, but computer science was a heavily female field in the 1950s, 1960s, 1970s. The word computer originally referred to a human performing calculations on paper, and that profession was largely filled by women. And so there's this shift in computing culture, and crypto is an extension of computing culture to a significant degree; it grows out of the cypherpunk movement. And again, everything goes back to the material, right? The maleness of computing accelerated as computing became more profitable and those jobs became more highly paid. And so there's this exclusion of women from the spaces of computer science, and rationalism is sort of an idea that is intended to be a philosophy for the computer industry. So it's dominated by males and male thinking. But this is where I would separate the sort of female cultural identity from what you might call the Jungian divine feminine, as a way of separating just kinds of ways of thinking about the world.
And there is something, in a negative connotation, very male about the desire to calculate every outcome before it happens, and the belief that you can gather all the facts and put them on paper and then derive a conclusion from that. Whereas, at the risk of being reductive, and maybe a biological reductionist here, the sort of feminine way of thinking, which I think exists in all of us in some proportion, is this more critical, intuitive, holistic way of thinking. It's what happens when people look at something like FTX, or effective altruism, or rationalism, and say: this just inherently doesn't make sense to me intuitively, because we can't predict the future. Right? That's the listening to yourself that, again, not to make it about actual men and women but about a way of thinking, stands against the hyper-masculine rationalist view that we can know and control everything and that those are the conclusions that matter, versus this more open, intuitive view of the world. And this is the funniest thing about artificial intelligence and rationalism and all this stuff: these people have no idea how actual human brains work, right? And we can separate this from the maleness and femaleness of it entirely, because human emotion is actually a very significant portion of how we make decisions. Our emotion centers are actually cognitive centers of the brain that collate information, and our emotions are indicators of conclusions that we have reached subconsciously, you might say. But really, those conclusions integrate more information than we can rationally put down on the page. And so when you look at epic screwups like FTX, one way to think about it is that this rationalist, calculating view of the world always fails, because there is always an exclusion.
When you're trying to put everything down on the page, you're going to miss something. And this is not to say that you should operate on an entirely intuitive framework, because intuitive and fact-based decision making are counterparts, right? A lot of great scientific discoveries have come from what you hear called a flash of inspiration, and that leads to serious mathematical conclusions: for people who are the most objective reasoners in the world, stuff comes to you in a moment, and then you put it down, right? If you exclude your intuition, if you exclude that part of your brain that is always working, and you're only paying attention to this forebrain here that can put things into words and numbers, you're actually sacrificing a huge amount of your own cognitive power. And so I think maybe that's the big part of thinking about the male and female symbolically, but also culturally, really. Again, I hate to be a reductionist, but there is the sort of female cultural identity that actually does, maybe for problematic reasons, leave a lot of women more attuned to that intuition, that instinct that might have actually pulled people back from the edge of something like FTX. Where you're saying, I could put it all down on paper and I can do the math, well, that's the male brain completely ignoring the screaming red alarm that this fundamentally does not make any damn sense. What are you doing? And so I think maybe that's the big picture. Maybe I'm excluding the actual real women here, but it is about philosophy, it is about the way that people think. And so I think that an important way to think about intuitive versus purely rational thought is that you have to have both. And maybe the imbalance of real-world gender in these movements is a reflection of the a priori rejection of this more complex, subtle, nuanced way of thinking that lies as a supplement, or external, to the purely rational, the purely mathematical. Although, something else we can talk about: all this math is fake. It's this performative mathematics that people think is pure rationality, that we're always going to come to the right conclusion. And that is just an omission of both half of the human population and, in some not entirely one-to-one corresponding way, an exclusion of at least half of how every human brain works.
B
Yes, well, "all this math is fake" is a very important point, because there's a lot of data out there, but people don't necessarily know that the data isn't actually data. So, last question for you: we know this is going to happen again. We already see these types of frauds happening, in government and everywhere else. So what should we be watching out for, aside from the myth of the boy genius? We know the next SBF is out there. What do they look like?
C
Yeah, the myth of the boy genius is quite interesting, because these archetypes change historically, right? Sam Bankman-Fried arguably is sort of the endpoint or playing out of what Mark Zuckerberg started in 2006, 2007, I can't remember the exact timeframe. Mark Zuckerberg was kind of infamously the first CEO to wear a hoodie in the boardroom, and now that's de rigueur, it's standard. And in some ways, one of the arguments that I make is that Sam Bankman-Fried saw people like Mark Zuckerberg, saw these other tech founders of what we now know was basically the social media age and nothing else, and they were all kind of schlubby, I guess, in the minds of investors. And this is maybe my first general tip: archetypes get played out until they die, and you don't want to be the last person playing out the archetype. I do think Sam Bankman-Fried might have killed the boy genius archetype. We'll see. I think that it's going to be...
B
I don't know. Sam Altman's a boy genius too.
C
Well, but you'll notice that Sam Altman actually cuts his hair, he shaves, he's slim. He takes care of himself. Sam Bankman-Fried ran it all to the very end by just being completely disheveled. He went to Congress and didn't tie his shoes. He pushed it all to the absolute limit. So we'll see. If you look back at the history of scams and frauds, going back to the 1920s or so, this emphasis on youth is actually somewhat novel in our own time, and so it really could be a blip or a passing thing. But more generally, I would say there are two points to watch out for. One is heroism as a general category. If somebody is selling themselves to you as a hero, that's a big warning sign, and probably the two biggest self-styled heroes of our moment are Elon Musk and Donald Trump, and make of that what you will. The other big red flag, and this applies to investing fraud but also fraud in general: heroism is one thing, emergency is the other really big one. We sort of alluded to this, and I could go into it in a lot more depth, but it's this idea that we are in a dire emergency. Let's say if it's anything other than the climate emergency that somebody is talking to you about, they're probably trying to sell you something, and it's not a solution. So those, I think, are the two general categories, and there's a lot more to be said about both of them. But I do think heroism in particular is one to watch out for. And it's hard not to feel like I'm preaching to the choir, because anybody listening to something like this is probably already engaged in a desire to learn, to self-reflect, to better themselves, frankly.
But I think a lot of the big frauds of the past five to ten years have been based on the idea that somebody is going to sell you a solution to a thing that you know is a problem but that you don't have any ability to fix. Or they're going to emotionally satisfy you by saying that you're part of the solution: I'm going to do it, but I need some help from your money, right? You're going to join my quest to do something great because I'm the hero, I'm the genius. Genius is just one type of hero; there are various kinds of hero. But: I'm going to do a great thing, just give me a little bit of your money, and then you're along for the ride. We're going to colonize Mars. We're going to invent superintelligence. And you're part of the team, right? And I think that, especially right now, with such a shaky regulatory environment around investing in particular, joining somebody else's team is probably a generally bad idea. This doesn't necessarily apply if you are an actual expert, but I think it's important to understand that there are real experts, whether in investing in general or in specific fields. If you want to make yourself an expert in a field, there are paths to that, but it's a process of years. People don't even really understand what it means to become an expert anymore, which I think is a dangerous thing in itself. Asking ChatGPT a couple of questions doesn't make you an expert. So having somebody offer to be an expert or an achiever on your behalf is, I think, a big red flag. I could probably stretch this out longer, but heroism and emergency: watch out for those two, I think.
B
Great. Yeah. Good things to keep in mind. The book is Stealing the Future. David Z. Morris, thank you so much for your time today.
C
Thank you so much for having me on.
Episode Title: David Morris, "Stealing The Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia"
Host: Deidre Woolard
Guest: David Z. Morris
Air Date: January 5, 2026
This episode explores the deeper psychological, philosophical, and cultural drivers behind the frauds of Sam Bankman-Fried (SBF) and other tech elites, as discussed in David Z. Morris' book, Stealing the Future. Morris, an academic-turned-journalist, expands beyond the specifics of SBF and FTX to examine the intellectual currents—such as effective altruism, rationalism, techno-utopianism, and the myth of the boy genius—that both enable and excuse high-level fraud in the tech sector. The conversation critiques the dangerous intersections of speculative finance, visionary technology narratives, and the increasing conflation of investment with gambling.
For further insight, read:
Stealing the Future by David Z. Morris (Watkins Media, 2025)