Transcript
Host (0:00)
We have Ben Thompson from Stratechery in the Restream waiting room. Welcome to the show, Ben.
Ben Thompson (0:05)
How are you doing? I'm good. Hopefully I have the right microphone turned on this time.
Host (0:09)
You do, and it sounds fantastic. Thank you so much for joining on short notice, and thank you for writing Anthropic and Alignment. It is a fantastic piece that I think covers all of my questions. But I want to start with just: how did you process the weekend? How did you get to this particular place? And then, what is your key thesis with Anthropic and Alignment?
Ben Thompson (0:32)
I mean, this is one of those ones. I don't know if it's good or bad that it came out toward the end of the week, so I had a lot of time to think about it. Ultimately, I think it was good, because I'm not sure anyone made the point as explicitly as I did. And maybe it was bad, because I feel there are a lot of caveats that, in retrospect, I should have put in the article, which would have addressed a lot of the points people are upset about. Yeah, basically, zooming out: this was not a normative article where I'm saying what's happening is good or bad, and that's really the one caveat I wish I would have put in there. I mean, I'm out there being accused, a la Nilay Patel, of a full-throated endorsement of fascism or something like that. And it's like, relax, okay? Can I get some credit for the last X number of years?

Basically, there is a deep-rooted concern that I've had for a long time, and I'm now hesitant to even use EA as a term because it's been politicized thanks to the events of the last week, but a failure to grapple with a world of guns is the long and short of it. I actually think Eliezer has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday. And that's actually a point worth bringing up: all this stuff is in the digital realm right now, but with robotics and other potential applications, and with it obviously being used for military operations, it's crossing over into the physical realm. If AI is as powerful as people say it's going to be, then there are going to be real-world reactions to that. And if we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. What would the government's response be? That's not to say the government response in that case would be good or bad, or whether it would follow constitutional principles or whatever it might be. Obviously I want it to.

On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years. So many things in our society assumed a certain level of friction that computers have already obviated, and AI is going to do that on steroids. I do think we need new laws. I think all this stuff is correct, and I think AI being applied to these commercially purchased data sets, for example, is a huge problem that I don't want to happen. The concern I have is that if this technology is as powerful as it is on pace to be, then unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue of who rules us, the democracy issue that Palmer Luckey, I think, very eloquently raised; it's inviting very bad outcomes for those asserting that in general. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan-China thing. This has been a frustration I've had with Anthropic generally. Amodei has been very outspoken in opposing selling chips to China for, in a narrow sense, very, very good reasons. My pushback has always been: what happens if we get super powerful AI and China doesn't? What are they going to do? Sure, the optimal thing for them would be to just bomb TSMC out of existence, because suddenly that becomes optimal even with all the costs it entails.
And then what are we going to do? Like, we're entering into this. I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable, I can promise you that. And some people are like, well, you should have just made the post private. I'm like, no, I really want Anthropic and the people associated with this to read it, because people have theorized for a while about what's going to happen as AI becomes more powerful, and now it's starting to happen for real. And I guess over the weekend part of it was just that I felt compelled to say this and was girding myself to do so. And even then I still wasn't sure. I haven't waded into this in a while and it's no fun, but it is what it is.
