Ted Striphas (32:29)
Yeah, thank you for that lead-in, and for the really interesting set of examples and experiences you're narrating there, Jeffrey. I want to go back to the image of the turnstile, which I think is a really interesting one, and the argument that a turnstile is more persuasive than most people at getting someone to behave in a certain way. I don't disagree, but in many ways the reason I use the word orient rather than manipulate has everything to do with the fact that some people look at the turnstile and comply, but a lot of people look at the turnstile and just jump it. And I think that is, in some ways, really the crux of what's at stake here: we have these objects and these technologies that are doing their best, essentially, to provide a path of least resistance, right? To say, this is the direction in which I need you to go, and you're supposed to go, and you will be rewarded if you go. Part of the allure of that is it allows us to make this kind of unthinking progression through our everyday lives. But the most interesting part is that there is always that room for maneuver, and it's room for maneuver that we often don't recognize until other people partake of it, and, more to the point, until more people partake of it en masse. The more people who jump the turnstile, the more people who then feel at liberty to jump the turnstile, right? It becomes this kind of cumulative process. And I think that is, in some respects, what we're looking at here with algorithmic culture.

So there are a couple of parts to the question that I'll try to address in turn. The first is about definitional agency, which admittedly is a somewhat strange part of the argument, because here is a book that is ostensibly all about technology, and then it ends on this note about defining terms. What is that about? How do you get from the one to the other? Well, so much of the narrative of the book is driven by language, right? To return to what I said earlier in the conversation, the argument of the book is that the rudiments of algorithmic culture emerge in language long before they are manifested in technology. What that means, then, is that words essentially give us a certain type of imagination, an ability to go on to build material worlds based on the kinds of understandings and accesses to reality that language opens up for us. That's why I spend a lot of time thinking about the definitions of words and also the politics of defining words. Because part of what's at stake, if you don't like the way in which this world of ours is going, technologically or otherwise, is that there are different modes of struggle. We have to struggle with respect to political economy. We need to think about law, policy, regulation, all of those kinds of things. But maybe part of what we also have to do is dream up new definitions of words, so that we can then go on to build different worlds. That's the argument about definitional agency. The other piece of that, though, of course, is that most people don't feel at liberty to define terms, right?
So if you want to define a word, what do you do? You go to the dictionary, right, which is the authoritative source, and it does the work for you. The sub-argument here, then, is not only about creating a different vocabulary by means of which to talk about algorithms and culture and their relationship, but also about reclaiming the idea that you are at liberty, as a human being, to define the terms and conditions of your life, literally and figuratively, and not only to rely on those authoritative sources, which have a certain type of investment in defining words in particular ways. So there's that piece. And again, I don't mean to say that defining words is going to get us out of all of our problems, but I do think it needs to be a piece of a larger political struggle.

And then, to go back to the larger point: how do you resist algorithmic culture? It really is just so incredibly difficult to escape, as you were just describing through the parable of not having a smartphone. To your credit, you kind of got out, right? Although one is never completely out, because of course Alex is sitting right next to you. Alex may have a smartphone in his pocket, and that may be eavesdropping on you and recording things and making judgments and things like that. So it is quite the enclosure. And I think in many respects we do need different forms of regulation. That really is a fundamental baseline that is just deeply and profoundly lacking right now, and that's not going to change anytime soon, given the relationship between the tech industry and government, at least in the United States. But I think that has to be part of what Raymond Williams long ago called a long revolution, where you're playing essentially a long political game, rather than one that is simply about six months from now, or a year from now, or even five years from now. What does the world look like 50 years from now, and what are the things that we need to do to create a regulatory environment that's going to get us there? I think that's really the interesting and difficult question we need to be thinking about, because so much of the world of technology is about short-term thinking, and part of what we have to do is exercise that muscle, which is long-term thinking.