Transcript
A (0:00)
In the medium to long term, automation will just win out eventually in most cases, because it's always going to be more efficient to have tireless AI systems that don't get sick and can just run 24/7. But that, of course, comes with all kinds of costs. There are some really important decisions to be made by individuals, by groups, by societies, about where we actually go next and how it is that we're directing that. If we can in the meantime enable better decisions to be made by individuals and by societies, then hopefully, even if everything eventually gets handed off to AI, we'll be in a better position to trust that, to know that it's going to be trustworthy, and, as a society, to endorse that and move in that direction in a way that we think is wise. If your system has systematic blind spots, then you might expect that it could, perhaps surreptitiously or even inadvertently, surface a biased summary of the situation, and that could lead you to systematically biased decisions. Far too often things are just sort of chaotic and confusing. People fail to coordinate, people fail to understand the consequences of their actions, and they even fail to understand what their options are in the first place. Each of these problems, we think, is remediable to some extent with support from tools, many of which would incorporate AI.
B (1:03)
Oli, welcome to the Future of Life Institute podcast.
A (1:06)
Hi, Gus. Nice to be here.
B (1:08)
Great. All right. Do you want to introduce yourself?
A (1:11)
Sure, yeah. I'm Oli. I work at the Future of Life Foundation. One way to think about FLF is that it's kind of like a little spin-out from FLI, but we take a slightly different strategy. We're looking at being a kind of accelerator (I think an accelerator is the right term) for projects that might be neglected in making the future go well. And we've especially got a big focus on AI right now, as everyone has. It's, you know, a hot topic right now.
B (1:36)
So we can categorize these tools into, say, three categories: epistemics, coordination, and risk-targeted applications.
A (1:43)
Yeah, so one area that we're really interested in is what we've called AI for human reasoning. This is one large focus for FLF right now. It's not the only focus, but it's an important one; we have some other back-burner priorities that we're trying to work on as well. But when we say human reasoning, what do we mean? By human, we mean both individuals but also groups, all the way up to large societies and even humanity as a whole. And by reasoning, we're referring to the whole decision-making cycle: from making observations, coming to understandings, and modeling the world, and, with groups, communicating, through to making decisions and even acting together and coordinating. Reasoning is supposed to encompass this whole thing.

Part of why we think this is an important package of things to consider together is that there are really important synergies. When you understand things better, you can come to better decisions. When you can come to better decisions and understand each other better, you can coordinate better, and so on. The other reason we think it's important right now is, of course, that the world is only getting more complicated, and enabling individuals and groups and societies to reason better about the options we have in front of us for our near- and long-term future is going to be really important to making sure that that goes well. Because very often things just kind of meander around. It feels like things are happening by accident, or that things aren't really being chosen in what we might think of as a wise way. And often things go in directions that really no one wants. On its face, that's paradoxical. We think it's because, as a society, we need to elevate this ability to reason.
