Jose Merchal (22:35)
Yeah, yeah. And I mean, I talk about that in the book, right? Because one way of thinking about it is: if we're all outliers, how do we ever engage with each other if we all think we're sort of an island unto ourselves? Right? There's a great political theorist, and his name is escaping me, but he talks about liberalism as kind of an archipelago, right? But if we're all individual islands, can we ever really engage with each other? So I lay that out and I'm like, yes, true, if we are all striving to be idiosyncratic and always looking for novelty, that's not good either. But it's a balance, right? And I think the balance is tipped too far away from novelty seeking and authentic self-creation. And so in the book I have three ways in which I think we can steer the algorithmic contract towards more beneficial outcomes. One of them is more engagement with serendipity. There's a Dutch writer, Sebastian Olma, who wrote this great book about serendipity about a decade ago, and I quote him a lot in that last chapter. A lot of it is about putting yourself in a position to encounter surprise. Political theorists like Diana Mutz and Lilliana Mason have written great work about cross-cutting cleavages: about how, two or three decades ago, we used to have multiple organizational affiliations where we encountered difference. So maybe 20 years ago I might have been in a softball league, and I might have liked classic cars, so I went to classic car shows on a weekend. And maybe if I have children, I coach their Little League or I'm a member of their parent-teacher association, or I was in the Rotary Club or the Kiwanis Club. This is the old classic Robert Putnam argument. But it makes sense that in every one of those different organizations I'm going to meet people I disagree with, and I'm going to be having encounters with difference.
I'm going to be having encounters with surprise that I might not otherwise have if I'm just engaged in my algorithmically curated life. Right? And I completely understand that if you're part of a marginalized community, that might be scary, that might be dangerous. I certainly wrestle with it, because I don't want to put anybody in a position of danger. I don't want to put anybody in a position where they have to defend their humanity or defend their existence, because I think it's BS to have to do that. But at the same time, there's a cost to not putting yourself in situations where you may learn new things about yourself. Because we're all multitudes, marginalized or not; we are all blended, diverse people who benefit from engaging with others on safe terms. And so one of the questions about the algorithmic contract is: how do you create moments of serendipity so that we can all learn and grow individually from each other? The other one is promoting fuzziness. I really like this idea of fuzzy sets. Charles Ragin, a political scientist who was a professor at the University of Arizona, advocated for this idea of fuzziness in methodology. Fuzziness is basically, instead of seeing the world as a binary, as 1 or 0, you see the world as probabilistic. It's very natural for modelers and computer scientists and machine learning people to use probabilities all the time. But instead of seeing ourselves as either members of group A or group B, we see ourselves as, well, maybe I'm 75% group A, 25% group B. Right? Now, this is all very theoretical, and how you actually implement it is a whole different question.
But algorithms have to begin to think about giving us things that are more blended. Instead of saying, well, you like this, so we're going to keep giving you more of that, saying, well, what if you're 60% of this group and 40% of that group, so let's give you something that's in between, right? And fuzziness is really another word for nuance. It's another word for seeing the world in its blended, pluralistic reality. And so how we move people out of very binary boxes, out of "I'm a member of this group," into something that's a little more fuzzy and blended, is another thing I care about a lot. And then the third one is, I really like Henri Lefebvre, who's a critical theorist. He wrote a very influential essay in 1968 called "The Right to the City," critiquing urban design and geography. He was a neo-Marxist, and so he's writing and saying, well, if we're never going to have the Marxist utopia in terms of economic relations, can we at least have it in terms of social relations, in terms of space? Where people have full autonomy in their use of space, so that space is more flexible, more things are public, you have more access to the city. You see examples of this in Los Angeles and in Colombia: they have this thing called Ciclovía, where on the weekends they open up the streets to bicycles, so the people have a right to the city. The way this looks in real life is like in Bogotá, where you had a mayor in the 90s and 2000s who was a big proponent of this. He made the sidewalks much bigger and the lanes for cars much smaller, because poor people don't have access to cars, and so they could have access to the city by having more opportunities to walk. Right? So the idea of Lefebvre is: the city should be a work of art. It should give individuals an opportunity to become an oeuvre. Right? It's French, right?
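[Editor's note: a minimal sketch of what the fuzzy, probabilistic group membership described above could look like in a recommender's scoring step. All names and numbers here are illustrative, not from the book.]

```python
# Hypothetical sketch of "fuzzy" recommendation scoring: represent a user's
# group membership as probabilities and weight content by those degrees,
# rather than hard-assigning the user to a single group.

def fuzzy_score(memberships, group_scores):
    """Weight each group's affinity for an item by the user's degree of
    membership in that group, instead of picking one group outright."""
    return sum(weight * group_scores.get(group, 0.0)
               for group, weight in memberships.items())

# The interview's example: a user who is 75% group A, 25% group B.
user = {"A": 0.75, "B": 0.25}

item_pure_a = {"A": 1.0, "B": 0.0}     # content aimed squarely at group A
item_blended = {"A": 0.7, "B": 0.95}   # content that sits "in between"

# A hard-assignment recommender would label this user "group A" and always
# prefer item_pure_a (1.0 > 0.7). The fuzzy score can prefer the blend:
print(fuzzy_score(user, item_pure_a))   # 0.75
print(fuzzy_score(user, item_blended))  # roughly 0.76, so the blend wins
```

The point of the sketch is only the contrast: binarizing the user always reinforces the dominant group, while keeping the membership probabilities lets in-between content surface.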
He's going to use French words. An oeuvre, a work of art. Nietzsche did this, right? Nietzsche talked about aesthetics: the Übermensch should try to live an aesthetically beautiful life. Certainly debatable, certainly some issues with that. But the idea being: well, can AI help us see possibilities, help us see how we can live differently? One of the examples I use in the book is that Google got into a lot of trouble a year ago when they released one of their early image-generation AI models, because it hallucinated too much. It hallucinated in a way that we might, quote unquote, call woke. If you asked it to give you pictures of a pope, it would give you a Black pope and an Indian pope. Well, those have never existed, right? If you asked it for World War II figures, for people in Nazi uniforms, it would give you a rainbow of people in Nazi uniforms. Or figures from the Old West, right? And people said, well, this is broken because it's hallucinating; this never happened. And yeah, on one level, if you're looking for the AI to be factual, then of course this is bad. But what if you're looking for the AI to uncover possibilities for you, digital possibilities, potentialities? And those could be left or right. Maybe potentialities that bring you closer to God, if you're more on the religious side, or potentialities that encourage you to live in a more utopian, elevated, idealistic way. Maybe we should demand from these tools that they help us spark our imagination and our creativity in ways that unlock potentiality. And so those are the ways I think about it: if we renegotiated the social contract, it could look something like that.