AF - AI doom from an LLM-plateau-ist perspective by Steve Byrnes
Apr 27, 2023
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI doom from an LLM-plateau-ist perspective, published by Steve Byrnes on April 27, 2023 on The AI Alignment Forum.

(in the form of an FAQ)

Q: What do you mean, “LLM plateau-ist”?

A: As background, I think it’s obvious that there will eventually be “transformative AI” (TAI) that would radically change the world. I’m interested in what this TAI will eventually look like algorithmically. Let’s list some possibilities:

A “Large Language Model (LLM) plateau-ist” would be defined as someone who thinks that categories (A-B), and usually also (C), will plateau in capabilities before reaching TAI levels. I am an LLM plateau-ist myself. I’m not going to argue about whether LLM-plateau-ism is right or wrong—that’s outside the scope of this post, and also difficult for me to discuss publicly thanks to infohazard issues. Oh well, we’ll find out one way or the other soon enough. In the broader AI community, both LLM-plateau-ism and its opposite seem plenty mainstream.

Different LLM-plateau-ists have different reasons for holding this belief. I think the two main categories are:

- Theoretical—maybe they have theoretical beliefs about what is required for TAI, and they think that LLMs just aren’t built right to do the things that TAI would need to do.
- Empirical—maybe they’re not very impressed by the capabilities of current LLMs. Granted, future LLMs will be better than current ones. But maybe they have extrapolated that our planet will run out of data and/or compute before LLMs get all the way up to TAI levels.

Q: If LLMs will plateau, then does that prove that all the worry about AI x-risk is wrong and stupid?

A: No no no, a million times no, and I’m annoyed that this misconception is so rampant in public discourse right now.
(Side note to AI x-risk people: If you have high credence that AI will kill everyone but only medium credence that this AI will involve LLMs, then maybe consider trying harder to get that nuance across in your communications. E.g. Eliezer Yudkowsky is in this category, I think.)

A couple random examples I’ve seen of people failing to distinguish “AI may kill everyone” from “…and that AI will definitely be an LLM”:

Venkatesh Rao’s blog post “Beyond Hyperanthropomorphism” goes through an elaborate 7000-word argument that eventually culminates, in the final section, in his assertion that a language model trained on internet data won’t be a powerful agent that gets things done in the world, but that if we train an AI with a robot body, then it could be a powerful agent that gets things done in the world. OK fine, let’s suppose for the sake of argument he’s right that robot bodies will be necessary for TAI. Then people are obviously going to bu...