A (9:01)
Boo-urns. Are you saying "boo" or "boo-urns"? Boo. I was saying "boo-urns." All right. But it's not just a collision with the world of finance that I think helped make this point. I think the AI CEOs are also picking up on something that's being measured in recent public opinion surveys, which is that the public is turning on these companies. A Quinnipiac survey from March revealed that a healthy majority of Americans now think that AI will do more harm than good, a sharp increase from a year earlier, when those numbers were reversed. Now, of course this was going to happen. It shouldn't be surprising. How long can you tell people, "We are going to destroy your lives and everything you love," before they finally say, "I don't think I like you"? And that's exactly what I think we're starting to see. So I think the AI CEOs are reacting to that as well.

And finally, I think the third factor that's leading to a change in rhetoric is that more and more reporters are beginning to develop some skepticism about the more breathless claims being made by the AI CEOs. Just last week, Ezra Klein, writing in the New York Times, published a column with a title that warmed my heart: "Why the AI Job Apocalypse Probably Won't Happen." In the article, Klein goes on to say, "Economists, I found, are quite skeptical that mass joblessness is on the horizon." So we no longer have this phenomenon in the reporter space where they say, well, these CEOs know more about this technology than anyone else, so we have to believe what they say. That grace period has ended. They've been too bombastic. Not enough of their claims have come true. They've changed their minds too much. So now the AI CEOs face skepticism from that direction as well.

So I think these factors are all coming together: the impending IPO forcing them to behave like normal, responsible citizens of the world; public opinion turning against them, because you can't just scare the public constantly and expect that people are still going to like you and your products; and finally, increasing journalistic skepticism. That pressure has led the AI CEOs to back off their more apocalyptic discussions of what's going to happen to the job market because of AI.

Well, this brings us to the most complicated question, our third and final question: why did they ever think it was a good idea? Why were they actually talking that way, trying to scare people about their own products? Well, there's a common explanation for this that I've mentioned myself on this show before: it helps attract investment. Yeah, it might be scary that your company's going to automate all jobs, but that does make your company very valuable if you're an investor. If there's only going to be one company left in the world that does everything, that's where I want to put my money. So that's the common explanation for why the AI CEOs have been so apocalyptic in the way they've been talking about AI impacts. And I think that's partly true. And I think this certainly happened: I saw a bunch of good coverage in the last week or so about the massive valuation bumps that Anthropic got, for example, by presenting Mythos as if it had made a major leap and was going to destabilize all of cybersecurity. That was very scary, and they hit a trillion-dollar valuation for the first time, so it made a big difference. All right, so I think that's partially what's going on. But there's a deeper reason that I want to explore here, and it came out of something recent.
I just finished teaching a doctoral seminar on superintelligence at Georgetown. We read a lot of papers from a lot of different fields, and it's really given me a deeper appreciation of the cultural context from which these AI CEOs emerged. So I want to tell you a story here. This is my alternative explanation for why these AI CEOs were trying to terrify their customers.

All right, here's the story. You've got to go all the way back to the first decade of the 21st century. This was the point at which a loose movement emerged, especially among engineers, especially on the West Coast, that became known in part as rationalism. It came out of online discussion boards such as LessWrong and Slate Star Codex (which now goes by a different name), and it became quite popular in particular among engineers in the San Francisco area. At the core of the rationalist movement was this idea that humans have cognitive biases in the way they think, and if you can be super rational, you can overcome your cognitive biases and in doing so actually be more effective in the world. It's a very engineering way of thinking. I'm very used to this: I'm an MIT-trained computer scientist, I'm around engineers, I am an engineer. I know this way of thinking. It's foreign to other people, but in engineering circles it makes sense. You're like, I'm going to be super logical, I'm like Data from Star Trek: The Next Generation, and by doing so I'll get over all these weaknesses we have in our minds, and then I can be more effective at my job, or in helping the world, or politically, or whatever it is. So that's rationalism, and it became a well-defined movement in the early 21st century.

Okay, so how do we connect this to AI today? Well, rationalism had many subgroups within it. One of the best-known subgroups coming out of rationalism was the so-called effective altruists, who try to be hyper-rational about where to invest money, time, or effort charitably to get the biggest return. The idea: if we're super rational, we can be better at charity, not emotionally manipulated or biased in what we're doing. That became a really big movement. Famously, Sam Bankman-Fried was very interested in effective altruism. So that's a well-known sub-community within rationalism.

Well, there was another well-known sub-community that rose out of rationalism, called the existential risk, or x-risk, community. And here was their idea: we need to be super rational about studying existential risks to humanity. The core mathematical tool they were applying was expected value, and their core idea, which is a completely sound idea (mathematically this makes sense), went like this. Here's the cognitive bias we're worried about: if a negative event is really rare, humans discount it. I don't have to worry about that because it's very rare. But they said, no, no, no, you've got to do an expected value calculation, where you weigh costs and benefits against their probabilities. Something that's very rare, but that has a super negative cost if it does happen, can be just as relevant as something that's not so rare and has a much lesser cost. Let me be more concrete about it. They would say an asteroid hitting the Earth is very rare. It's very unlikely to happen, but the cost of it happening would be incredibly high, because it would kill all of humanity, and so the expected cost there is something we should care about. Compare that to a hurricane: a hurricane hitting me is not nearly as rare as an asteroid hitting me, but the cost would also be not nearly as bad. And if we multiply those together, it might actually come out to a similar expected cost as the asteroid. So we shouldn't let rareness by itself determine what we care about. It needs to be rareness multiplied by the potential cost. That's what the x-risk community was focusing on. It's a rationalist way of thinking about things.
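To make that expected value logic concrete, here's a minimal sketch in Python. Every probability and cost below is a made-up illustrative number, not an estimate from the x-risk literature; the only point is the shape of the calculation, expected cost equals probability times cost.

```python
# A minimal sketch of the expected-value reasoning described above.
# All probabilities and costs are invented illustrative numbers,
# not real estimates from the x-risk literature.

def expected_cost(probability: float, cost: float) -> float:
    """Expected cost = probability the event happens * cost if it does."""
    return probability * cost

# Hypothetical numbers: a civilization-ending asteroid strike is
# astronomically rare but maximally costly; a severe hurricane is
# common but far less costly.
asteroid = expected_cost(probability=1e-8, cost=1e16)   # rare, catastrophic
hurricane = expected_cost(probability=1e-1, cost=1e9)   # common, bounded

print(f"asteroid expected cost:  {asteroid:,.0f}")   # 100,000,000
print(f"hurricane expected cost: {hurricane:,.0f}")  # 100,000,000

# Despite wildly different probabilities, the two expected costs come out
# equal here: rareness alone shouldn't decide what we worry about; it's
# rareness multiplied by potential cost.
```

With these (again, invented) numbers, the two expected costs come out identical, which is exactly the x-riskers' argument for putting very rare catastrophes on the same worry list as common disasters.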
They ended up with three major categories of existential risks that they began to argue we should care about, even though they're super rare: asteroid strikes, deadly pandemics, and (here comes the connection) superintelligent AI. So by the 2010s we have the x-risk sub-community of the rationalists. These are people like Nick Bostrom out at Oxford, or Eliezer Yudkowsky, who's kind of doing his own thing. They were writing these papers (we read a bunch of them in my seminar) where they would build ontologies of risks, asteroids and pandemics and superintelligent AI, and talk about how these could unfold and why we should care about them now, even though none of these things is about to happen, and we have no particular reason to fear they're about to happen. And it was the x-risk community, for example, who organized that kind of infamous conference in Puerto Rico in 2015 to talk about existential risks. Coming out of that, you got Elon Musk, Stephen Hawking, Bill Gates, all these quotes from famous figures saying, oh, we should worry about AI. Those came out of that Puerto Rico conference, and it was an x-risk conference: this is one of those far-future abstract concerns that should be on our minds, because who knows, it could happen one day, so we have to worry about it. So the original existential risk AI safety concerns came out of this subculture of the rationalists, based on these online forums and largely in San Francisco.

All right, then what happened in this story is ChatGPT. Now, this is where I'm throwing in my own original take, just trying to understand this world. You get ChatGPT, which is super impressive, and it's very anthropomorphizable, because you're dealing with language. We project minds onto the other side of a conversation when we're getting fluent language, because our mind connects fluent generated language with another mind. It was impossible to encounter these early large language model demos without being like, wow, AI is now advancing faster than we thought it was. Something is accelerating. There are changes afoot. And for the x-risk community within the rationalists, this presented a completely life-altering, terrifying, exhilarating possibility: what if we were right about this risk? And not only were we right, but it's happening. It would be like if you had been warning about aliens and abductions for years and years and years, and then the Independence Day mothership comes to Earth. You'd be like, this is terrifying. But you would also be like the people dancing on top of the building in New York in that scene, before they got, you know, destroyed by the lasers.
They were excited that they were there. I'm stretching this a little bit, but I think this was completely mind-melting, life-changing for the x-risk rationalists, because they had spent years making list upon list and sub-list upon sub-list. I mean, just go read a Yudkowsky paper, like a MIRI paper from 10 years ago. It's nineteen levels of lists with sub-lists of sub-lists of sub-lists about, you know, the AGI does this or this or that. They obsess. They'd been obsessing over superintelligence and all the ways it might unfold. And I assume that at home, when no one's looking, they're wearing Matrix trench coats and pretending they're Neo. I'm just guessing, but all this type of stuff is going on, right? They'd been obsessing about this, and suddenly there's this thought: what if it's real? Think about it. This would make them the heroes. It would make them John Connor, it would make them Neo. We are the ones who pointed this out and are going to help save you from worldwide destruction. I think that thought was so intoxicating that it overcame the usual rationalist guardrails, the yeah-but-is-this-technology-actually-going-to-do-that-or-not questions, and it just became all-consuming. It gave meaning and structure to their lives. Superintelligence is coming. We warned you. We're the heroes. We're going to be the ones to lead you through it. Where's my Matrix trench coat? I think that's what happened.

And in Silicon Valley, and San Francisco more broadly, post-ChatGPT, this huge x-risk culture just became, boom, ubiquitous and accepted. Go to 2023, and you're walking around San Francisco talking to people, and everyone is just straight-up apocalyptic: massive disruption, everything is going to change. It became the central engine of meaning, of understanding the world, of making life interesting. It was a structure for life in that part of the country, because the rationalist community had laid the foundation, and then this technology came along, and it was just too intoxicating a possibility that maybe they were right. It really took over that city.

All right, now here's what you've got to understand: a lot of these big tech companies, these AI companies, came out of that. OpenAI started as an x-risk nonprofit. Elon Musk largely funded OpenAI to be an AI safety outfit, because these were people sitting around doing abstract thought experiments about superintelligence, and they had money to burn: let's put together an organization that just studies AI so we can figure out how to do it safely. That was an x-risk hobby project. That's why you have Sam Altman who, as we learned from the New Yorker reporting of my colleague Ronan Farrow, and from Andrew Marantz's article a couple weeks ago, is not really that great of an executive. That's why the board tried to fire him. Because this wasn't meant to be a trillion-dollar company. It was meant to be a nonprofit that was like a hobby for x-riskers. What about Anthropic? It came out of OpenAI. Anthropic is OpenAI employees who felt OpenAI was insufficiently rational, that it wasn't being x-risk-y enough, so they left to start their own company. What about Grok? Elon Musk was deeply in this world, right? So these companies came out of that world.
This eccentric, strange, almost cultish x-risk superintelligence monoculture really ruled out there in Silicon Valley, and these companies all came out of it. So here's what I think was happening with Altman and Amodei, et cetera. I don't think they were playing 4D chess. I don't think they were thinking about how to move the markets or attract investment. I think they just kept talking the way that every single person they knew was talking. And finally, as their companies got big enough, and their platforms got big enough, and the number of people involved got big enough, someone had to say, hey guys, we're not in the Mission District anymore. I don't think you can talk this way when you're a company that's taken on $60 billion in investment and is trying to do a $500 billion IPO. I just don't think they realized that most people didn't think and talk that way. For a while, they were just x-riskers who became the kings of the x-riskers, and it was exciting: yeah, look, these are all the people I hung out with, and now I'm at the leading edge of this. They were talking to their own people, and then they looked around and realized that over here was the rest of the world, terrified out of their skin by what they were saying. It's like when you go to a new high school and you realize, the group of friends I hung out with in middle school, I used to think they were awesome, but they're a little strange, and I kind of like sports and girls, and maybe I'm going to have to chill out a little bit about whatever it is I'm doing.

So I don't know if that's completely true, but I'm increasingly convinced this is a cultural thing. It's just the way that community talked. And I'm so used to it; this is what engineers are like, and it's off-putting to other people. This rationalism stuff is off-putting. I mean, my wife, after a while, was like, don't take me to the MIT Christmas parties, because you guys are all so weird. That's just the way we are, but I don't think it plays well with the rest of the country. So this is my theory. I'm putting it out here for you to take or leave. But there's a lot going on with this baffling strategy of trying to terrify your own customers, and I just think part of it was cultural. That's the way people in San Francisco who came out of these rationalist communities talked; it's just the way everyone they knew talked, and they didn't know any better. Now they're learning, and I think we're all the better for it.

So we'll see, who knows? But I'll just say, with all of these things, I am welcoming the end of the, like, faux terror. I'm welcoming the new wave of skepticism among journalists. I'm welcoming the East Coast people who are coming over and saying, will you guys stop talking like you're Sarah Connor from Terminator 2? All of this is good for all of our mental health. Maybe it's bad news for the AI Reality Check, because I'll have less to reality-check. I don't think that's really gonna be a problem, but who knows? But there we go. That's what's going on. I think it's good news. My explanation might be right, it might not be, but at the very least, it's entertaining to follow.

All right, that's all the time we have for this week's AI Reality Check. I'll be back on Monday with an advice episode of the show, so definitely check that out. And until then, remember: take AI seriously, but not everything that you hear about it.