A
I would argue that the proliferation of slop has allowed us to become even more artistically interesting and to create more interesting human art. One of the worst things that's happened to politics, and not just democratic politics, but politics everywhere in the world, is the fact that real mythologies are being hijacked by ideologues. I cannot imagine Laozi or Zhuangzi or any of the Daoist philosophers thinking about the world we live in and viewing it as anything but the worst of the worst.
B
Ken Liu graces ChinaTalk with his presence. What an honor. Ken is the author of the Dandelion Dynasty silkpunk fantasy series and a brilliant short story writer, one of whose stories was recently adapted into Altman's favorite show, Pantheon. You all know his translation work on the first and third volumes of the Three-Body Problem trilogy. But in my humble opinion, even better than that was his absolutely brilliant translation of and commentary on the Dao De Jing. As much as I hoped that doing that would get him fully hooked on classical Chinese translation, he followed it up with a very different direction: a techno AI thriller, All That We See or Seem, which was released late last year. Irene Zhang of ChinaTalk joins us to co-host. Ken, welcome to ChinaTalk.
A
Thanks for having me, Jordan and Irene, it's a real pleasure to be here.
B
All right, I want to start. So we're living in the age of Claude Code, so I kind of want to start with this. I don't know how many months or years ago you wrote this paragraph, but why don't you set it up and read this vision of future coding and writing that you wrote.
A
Okay, so I guess I'll start by just talking a little bit about what the book is actually about. All That We See or Seem is a techno thriller in the sense that none of the technology mentioned in it is really speculative. It's all either already there or very possible; it just needs to be scaled up a little bit from what we already have. So Julia Z is a hacker, and she is one of those heroes kind of like Clarice Starling or Jane Whitefield, if you know the series: she's a person with a very strong moral compass and also a very dark past. She's trying to escape from that past, but events keep on pulling her back in, and she realizes that she cannot overcome the external threats unless she confronts the demons within her. That's the kind of character she is. And in this particular novel, she is somebody who has a set of very specialized skills with AI and robotics. She's been tasked with finding an artist who has disappeared, an artist who works with AI to help large audiences dream together. So that's the setup, and the passages that I'm going to be reading to you are reflections on and about Julia in the age of AI. Okay, so here's the first passage, which is about what it's like to be a programmer. This is something very close to my heart. It's something that I do and something that I think a lot about. In this age of coding with machines, the hardest part had been the programming. Writing code without the help of Talos or even a lowly code monkey or Datagen was not something Julia had much experience with.
In the same way that few contemporary writers could compose even a 500-word essay without the help of AI as research assistant, fact checker, dictionary, thesaurus, grammarian, and in extreme cases amanuensis, very few contemporary programmers could create a functioning, non-trivial application without the help of code demons, bug genies, patch sprites, script pixies, the whole fairyland of similar artificial intelligences. Homo sapiens had always externalized their minds into the world, oozing books, drawings, plans, recordings, the same way honeybees made their minds visible in the form of wax comb and sweet honey. But the trend had never gone as far as now, when most of one's knowledge consisted of knowing where to look things up and how to give an AI the best prompts. And more of one's mind existed outside the skull, infused into fiskgens and memo elves and egolets, spread among artificial assistants and helpers and aides-mémoire, imprinted in cogatrons and electrons and logons, than remained inside the squishy gray matter inside the skull.
B
I think we start maybe with the idea of choosing a techno thriller as a genre and wanting to explore this thing, which is something that every white-collar worker today is grappling with in one way or another.
A
Yeah, I mean, I don't care much about genres, to be honest. Genre labels are kind of irrelevant to me. I think all of my fiction, whatever their marketing genres are, are fundamentally technological stories. That's what I've always written, whether it's the Dandelion Dynasty or my short fiction or something like the Julia Z series. They're all fundamentally technological stories. They're stories about what it means for humans to express parts of themselves in this way. If you can talk about humans as unique, or different from other species that we know about, it's fundamentally our technological nature. I think this is a very important point: a lot of times people describe something as sci-fi when it's not sci-fi at all. Actually, I would argue that the vast majority of things marketed as sci-fi are not sci-fi at all. They have very little to do with science. They're technological stories, which is what I write. Tech-fi is far more interesting to me. Technology and science are completely different disciplines, and the vast majority of so-called sci-fi is really tech-fi, because it is really about what it means for humans to express themselves via their creations. We are the only species we know of that expresses who we are via things we make. We imagine things that do not exist in the universe, then we actually bring them into being, concretely substantiating our mental constructs in the world. This is the thing that we humans do that is unique. We substantiate mental constructs in the world, whether in the form of buildings, machines, or even in things like patterns of behavior, technologies of decision making, language, writing; all of these are forms of technology. And the interesting thing is that these technological manifestations, the stuff that we ooze out, then in turn change who we are.
We converse and interact and co-evolve with our own creations. This is something no other species really does. So this is very interesting to me, because fundamentally one of the great philosophical debates in our tradition is: are humans more human without technology, or are we more human with technology? I mean, this is a debate going back to Plato, to Zhuangzi, to all the great philosophers. What is language, right? The entire skeptical interrogation of language itself is really this debate about what human nature is. A lot of times, I think, in the contemporary world we default to the position that technology itself is somehow external to who we are and apart from human nature, and therefore something that we should be leery of. But this is, to me, a very nonsensical position. Human technology is a manifestation of human nature. It's in fact the most human thing that we make. You cannot understand human nature without understanding human technology. It's literally, as I said, a tangible substantiation of what is inside our minds. And so in order to understand what human nature is, we have to actually interrogate human technology and truly understand how we co-evolve with our own creations. And that's what the Julia Z series really is about. That's what I'm interested in.
C
Ken, I wanted to anchor the idea you brought up of co-evolving with our creations as a definition of human technology with an example from the book. This occurs in the paragraphs we just read: you use the metaphor of the jinn to describe what the marketing world would call AI agents, a metaphor which obviously comes from Arabic mythology and Islam. I'm really curious why you chose that metaphor, and generally how you think about the metaphors being used to understand AI.
A
I mean, the immediate answer to that is just that I was very interested in the word cotton gin, why it's called the cotton gin. It turns out that it's short for cotton engine, which is just the way that we play with language. And so I said, well, why don't we take that gin and turn it into a different kind of jinn? Because if you look at the way technology is expressed via language, technology is very mythological in the way we deal with it. I mean, think about how we name our technology, right? Why is it that the US decides to name our space programs after Greek and Roman gods? What is that about? There's a mythological component to the way technology is manifested. Because, as I explained, technology is not some sort of thing that we do independent of who we are; technology is how we dream. I think this is a very important point that I want to emphasize. The reason why technology is so expressive of human nature is because it is a manifestation of our deepest desires and dreams. We have always used mythology to express and understand technology. So you look at the way technology companies talk about their creations, the way technology companies try to market their stuff: they're obviously always going to give you a mythological component. So to me, if I didn't name them jinns, that would be weird. It has to be a mythological name, because that's the way these companies think.
B
Is this time different, Ken? I guess you kind of made the argument to the negative. But in the passage, you were saying something to the effect of: no, actually, externalizing your brain to the extent that your characters do in the book, or that we are doing today, is kind of something unique in human history.
A
So externalizing the brain into our creations, that is not unique. I mean, every child who has learned to read has experienced that moment of: I am now communing with mental patterns from other creatures, left from long ago. When you read Plato's dialogues today, or when you read Zhuangzi's stories today, you're communing with minds from thousands of years ago. And that is very, very strange if you actually think about it. You are doing something that no other creature, as far as we know, can do: communing with mental constructs from before, externalizing these things and engaging with them. And just think about what happens when you're trying to do arithmetic, when you're trying to do something like long division, or trying to do an integral, or trying to work out a tensor. You are in fact using pen and paper to externalize your brain. Your cognitive function is literally externalized on the paper. It's a very strange feeling. I don't know how many of you have actually tried this: do a long division problem and just think about how weird it is that your brain is literally externalized right out there, and now you're interacting with it using your body and then getting it back. It's a very, very weird thing that we do that no other creature does. And so is AI significantly different from that? No, I don't think it is. The best way to understand what large language models are is to go back and read the structuralists, right? So somebody like Roland Barthes said that in this day and age, in this deeply literate society we live in, in which we are burdened or blessed, depending on your perspective, with millennia of writing, with millions upon millions of authors who have written, you are surrounded by words, by their minds.
And so a modern writer, a scriptor, is not actually an author in the sense of somebody who creates things out of nothing, but somebody who is basically babbling in the presence of a complete corpus of past writings. And so you're just playing with words, with reference upon reference, allusion upon allusion. You are simply acting as a channel, a conduit to this playful field of past writing as you babble more writings. He wrote this as a way of talking about the death of the author and a new way of understanding the author function. But reading it now, in the age of large language models, you realize that that's what he was talking about. The large language model is a substantiation of that imagined, so-called dictionary of all writings. A large language model functions essentially like that. It is language coming to life. It is you interrogating the entire corpus of what humans have written: this pluribus, like the Apple show Pluribus, this multi-mind that you're engaged with. So this is my argument for how AI is not really different in terms of how we have always dealt with technology. Now, there are some interesting differences, in the sense that I think for the first time in history, we're now confronted with the idea that intelligence and consciousness are actually not the same thing. If you examine older sci-fi literature, there's a huge fundamental assumption that something that is intelligent will necessarily be conscious in some sense. The more intelligent something is, the more it necessarily comes with intention, will, desire, and this idea of being something; there's some mind behind the intelligent acts. What we're now seeing is that there's no doubt that these models are actually intelligent. I mean, I find a lot of the popular discourse around AI, the idea that it's just a very powerful autocomplete, very silly.
This is one of those descriptions that's technically true but also means nothing. You might as well say that humans are nothing more than compilations of statistical likelihoods. Yes, it's technically true, but so what? It doesn't actually mean anything. The real issue with AI is that there's no doubt they're intelligent, in the sense of: what do you define as intelligence? If something can actually write essays, pass the bar exam, and get a perfect score on the SATs, to say that it's not intelligent is just a nonsensical declaration. It's clearly intelligent, but it's not conscious. I don't think many of us would argue that LLMs are conscious. So that is very strange. The fact that we can have intelligence completely divorced from consciousness, from a will, from an intention, from a subjectivity, that is very weird. And I think that is something that we are still coming to terms with. We're trying to understand why it is that we value subjectivity so much, and yet we don't seem to think intelligence by itself is all that valuable anymore. Or at least many of us now seem to be leaning in that direction. That seems to me, honestly, to be why a show like Pluribus on Apple TV is so interesting. Because it is a show that is mythologically engaged with this particular question: what matters more, subjectivity or intelligence?
B
Well, one of the themes that you pick up on is this idea that, yes, there is a future of AI slop that your world is swimming in, but there is also still something where you're doing this kind of new artistic frontier exploration of, like, AR dreams, and yet the audience still wants to meet up in person and have a connection to a particular human who lives and breathes and bleeds. So I'd love for you to explore this more, Ken: the idea that, yeah, we are living in a world where people are deriving a lot of emotional support from their AI therapists and best friends and boyfriends and girlfriends. But it seems to me that your contention is that there is something about having a human behind it all which is fundamentally going to remain appealing, however good these models end up getting.
A
So I want to start out by saying that I don't necessarily think I have a specific argument one way or the other in the book. I think that's something that I always do: I find it very interesting that my fiction gets published and people always attribute certain points of view to it, and sometimes readers will attribute polar opposite views to it. And I think that's actually a sign that I've succeeded, because I deliberately write fiction that has very little messaging in the sense of propaganda. I just don't think fiction written as propaganda is particularly interesting. Not that it can't succeed; Ayn Rand very famously writes propaganda that is very popular, but I just don't find that kind of fiction interesting. I don't care about writing fiction that is propaganda. And so all of my fiction are aesthetic works that deliberately can be read to support multiple contentions, because I think that's how reality is. You can take reality and interpret it to suit different kinds of messages. I will say this, though. Regardless of my own personal view, I think the contemporary anxiety over AI slop is understandable, but it has to be contextualized in a historical sense. So if I may point out something: all of us are already living in a world of slop, and I will try to explain what that means. Not AI-generated slop, but mass-produced slop. So let me try to explain. Take your mind back to before the invention of photography, right? You might, in your entire life, see maybe a few hundred images, and every single one of those images will have been produced by hand by a real human being. You might see church stained glass windows; you might see a famous painting if you're rich and can travel; you might make a few pictures yourself if you learn to draw; you might see pictures drawn by your friends.
You might open a book and see prints which are actually made by hand by somebody who had to translate an image into a printing plate: reproductions of famous paintings. You might see a few hundred of these things in your lifetime. But after the invention of photography, and the invention of photographic reproduction techniques, where famous paintings and drawings and whatnot can be easily reproduced via photographic means into printing plates, you have what Walter Benjamin called the age of mechanical reproduction, right? So in the age of mechanical reproduction, which is what we are living through now, we are surrounded by images. Just think about how many images you see in a single day. It's hundreds of thousands in a single day. The number of images you're swimming in is unbelievable. The vast majority of these images are slop. They are clip art put on something. They are images made by a graphics program. They are images that somebody reproduced from public domain stuff and made a few manipulations to. Just think about how much slop you're swimming in. My point is that that has not somehow destroyed art. That has not somehow made humans unable to appreciate art. So my point is simply that in the age of AI slop, what makes you think that somehow we'll stop producing actual art? We're already living in an age of slop, utterly surrounded by it, and yet I would argue that the proliferation of slop has allowed us to become even more artistically interesting and to create more interesting human art. I don't really see that being different in the future. If you have a bunch of AI slop, we know how to deal with slop. The age of mechanical reproduction is here, and the age of AI slop will not be any different. I just don't see the moral panic over it.
That is not the same as saying that this will not lead to the loss of livelihoods for many people, in the same way that the age of mechanical reproduction caused the loss of livelihoods for many artists, namely engravers. Engravers were great artists who had to translate paintings and drawings into printing plates. Yes, they were displaced, and that was a difficult transition. And we will face a difficult transition today too. But the idea that AI slop will destroy art is, to me, very flawed. That's just not how, historically, any of this has ever worked, and I don't see this being any different. I'm much more interested, on the other hand, in what the technology can enable humans to do as a matter of creativity, period. Again, historically, in every single case where some technology has come along and displaced aspects of human craft, what ultimately happened is that humans learned to practice craft with that technology. So humans have been able to practice craft with the camera. When the camera was just, you push a button and a picture is made by chemistry and physics, that's not interesting. But when humans learned how to actually use the camera as a tool, as an artistic tool, how to tell stories with it, this is how we ended up with things like cinema, how we ended up with TikTok, with YouTube, with the vast explosion in video art, none of which would have been possible without the camera. So I think something similar has to happen with AI. Today's AI is at the stage of: you give it a prompt and it generates something. And this is very non-crafty. There's no craft to it. But it will not stay like this. Over time, artists will figure out what affordances we need to actually use these models in a way that's interesting. How do you actually precisely position the generator within latent space?
How do you precisely delineate the chain of inferences and the chain of jumps and associations inside the model's weights to generate the thing that you want? How do you precisely manipulate this model in the way that you can dial in a camera's settings and set up the poses and frame a shot? When all of these affordances are given to an artist who wants to work with AI as a tool, then and only then will we actually see interesting art being generated with it. That's my contention.
B
So, Ken, in a recent Substack post, you said you spent much of December and January playing video games. Behind you, I see a PSP and a Game Boy Advance. In the book, you explored one future of artistic creativity that AI could enable, this sort of dream-weaving vision. I'm curious, where do you see the future of video games with all of this?
A
Well, one of the most contentious uses of AI in video games is using AI-generated assets in games. I personally think that this is one of those things that will eventually be normalized. The whole idea of AI-generated material is basically, like I said, if you want to call it slop, that's what it is. But we're surrounded by slop. We are surrounded by mechanical reproduction and cheap art. That's just how it is. Eventually this will probably happen to video games too, in terms of asset generation. Again, that does not mean human-crafted material will somehow lose its appeal. I feel like, in the same way that humans even in the age of mechanical reproduction continue to be enthralled by the aura of the artist, perhaps much to Benjamin's disappointment, I don't see that changing in the age of AI slop either. I think the human aura will still be very appealing to many of us. But at the same time, one of the great things about AI-generated art, like mechanical reproduction, is that there's a democratization effect, and there's the ability to generate certain kinds of art that human artists would never make. So, for example, one of the things that I found to be really interesting is: why do humans find AI-generated material interesting? There's a very interesting pattern here, which is that humans find playing with AI to make art for themselves very interesting, but we almost never find it interesting when other people share that stuff with us. I think this is one of those things where you generate something using AI and it's kind of interesting to you, but it's not interesting to other people. There's an intense personalization effect here that I think is worth following up on. I think what AI is really good at is fulfilling your desires in a way that human artists never will and never can.
Just to take a very crude example: you might be someone who craves a particular kind of fiction or a particular kind of film. You want to see a particular adaptation of your favorite novel starring just your favorite actors. In reality, that will never happen. Humans will just not do that for you. But you can use AI to create that for you. AI is a desire-fulfilling machine, but it's only able to do that for you, and only you would find that interesting. It's not the kind of thing human artists would ever do. And I think the analogy here is that mechanical reproductions can fulfill a niche that humans never could or would. Let's take the camera as an example again, right? For the vast majority of our history, it was not possible for most people to get a really good portrait done of themselves. You had to be very rich or famous to get a portrait done. Otherwise you had to rely on one of your friends or a family member being able to draw. And so that's why we end up with that picture of Jane Austen that everybody knows, done by her sister. It's not a very good picture, but it's the only picture we have of her. That's just the way it is. But once the camera came along, it was possible for middle-class families and even individuals to have pictures of themselves done cheaply. And now everybody can take a selfie. We're awash in slop selfies, right? So this is what technology can do. Technology can allow you to get things that humans never would do for you. Most of us can't possibly get a portrait artist to do a picture of us, but you can easily do a portrait using a camera. I think that's the case here too. If you want a particular kind of story, you're not going to get human artists to write it for you. You're just not. But you can get a machine to do it for you. So there's this kind of personal, highly personalized, very specific, self-involved kind of fiction.
That kind of fiction is honestly what people are talking about when they speak about AI boyfriends and AI companions. They're talking about fiction that is co-written with an AI for themselves alone. That is it. That is exactly why these things are appealing. But that doesn't mean that the people who love this sort of thing will not appreciate fiction written by humans that is not meant to fulfill desires. I think that's one of the things that sometimes people don't get. Artists are not there to fulfill your desires. Artists are there to fulfill their own dreams. They go into the collective unconscious and they're seized by some image or some vision that they have to bring out. That's why artists create. They're not there to satisfy your desires. So I think there's a very complementary role to be played by AI versus human artists. Human artists will always do what they've always done, which is to dream and to bring forth interesting dreams from the collective unconscious. And AI will be there to fulfill your individual desires. The two are complementary. They're not the same kind of thing, but they can coexist.
C
And on the companionship and desires front, I wanted to ask about Talos and the metaphor that that character of sorts brings up in the novel. So Talos is Julia's AI assistant, and Julia seems to live in a world where that's really common and everyone has a personal AI. But you don't necessarily portray that as a kind of companionship in the book. People in the story still fall in love and have friends and family. And I wanted to ask how you made those decisions in crafting Talos and the personal AI landscape in the book.
A
So Talos is actually very different from any other personal AI in the book, and the distinction is very important. The so-called personal AI that everybody else uses is essentially a subscription service. This is the sort of thing that basically all the companies are trying to make: they're trying to get you to subscribe to their cloud AI, and it will be personalized to you, but the data is all with them. This is the sort of thing that people are concerned about in terms of privacy and so on and so forth. Talos is very different. Talos is not a subscription AI from some large company. Talos is a thing that Julia builds herself, running on her own local hardware, and it is in fact entirely controlled by her. What Talos really is, in terms of the way the book describes it, is an egolet. And what is an egolet? An egolet is basically an AI representation of you, if that makes sense. So let me try to tease this apart a little bit. What I find to be deeply interesting about AI is that I describe the neural networks we have as essentially a sort of camera, but for different things. It's not a camera for images, but a camera for decisions: a camera for decision-making procedures, decision-making processes, for choices that you've made in the past. To take a concrete example, say a painter were to train an egolet. And there are companies that are actually exploring this possibility. So you're a painter and you want to train an egolet of yourself. The idea here would be to train a neural network not just on your finished paintings, but on the entire process of creation. How do you decide to make this paint stroke and not that one? How do you decide to cover up these paint strokes and not those ones? How do you decide to do this part first and that part last? The entire process of creating a piece of work, a painting or a book or something, is where the interesting stuff is, right?
So I think we've all had this experience where you'll have AI produce, say, a painting in the style of so-and-so, and it looks superficially pretty good until you examine it, and there's always kind of a superficiality to it. Or another example would be this very popular application that I see people use all the time, where you stick all the books written by some author into one of these models and then you say, okay, now you can talk to so-and-so and ask them any question you want. So you might train some sort of AI on all the dialogues and all the books by Plato, and you say, now you can talk to Socrates, right? And you can ask Socrates what he thinks about AI, and supposedly you'll get a sense of what Socrates would have said. These are all terrible apps, and none of them ever feel convincing. People have done this to me: they trained models on my interviews, asked the models questions, and then asked me what I think. And what I think is: this is garbage. This sounds nothing like me. And I'll tell you why it doesn't sound anything like me, and why you should never trust these so-called Socrates bots. The reason is that for everything I say, there are ten things I've decided not to say. If the models are trained only on things I publish, the model will never know all the ten things I would never say. And that is the problem. When you have these models trained only on what has been said, they don't know what has been decided not to be said. And so they're always going to generate some garbage, saying things that I never would have said. So here's the issue. In order for these models to be actually interesting, to be a good representation of the person, they have to have insight into all the things you've decided not to say, all the things that are behind the scenes. Right?
So the way I describe it is this: when you're looking at published works or finished paintings or whatever, you're seeing the part of the iceberg that's above the water. The vast majority of the stuff is under the water. Steve Jobs once said something like, and this is a paraphrase, not an exact quote, that to be creative, for everything you say yes to, there have to be at least ten things you say no to. It's the part you say no to that matters. So an egolet, in my conception, is an AI that's capable of actually capturing the parts where you said no. Now think about it. How many of us are comfortable giving that information to Anthropic, to Google, to OpenAI? The idea that you would reveal the parts you've kept hidden from the public, you want to give that to their AI? Who's going to do that? Nobody. So that's why I think personal assistants done in that form will never amount to anything. The sort of personal assistants that are trained only on the stuff you've decided to let out will never amount to anything. The only way to actually produce real egolets, meaning small egos, small copies of yourself, something that is actually trained on who you truly are, is if you have total control over the model, total control over the training, total control over the hardware, total control over the data, total sovereignty. That is what Talos is. Talos is entirely controlled by Julia. This is very important to understand in the book: Julia does this only because she has complete control over it. And because she has that control, Talos means something very different to her. She explains in the book that talking to Talos is basically like talking to a version of herself, or different versions of herself from different periods of her life. And she is able to examine herself.
I mean, Talos in some ways is sort of the fulfillment of that oldest of philosophical desires, which is to know thyself, right? Know thyself is the oldest call for wisdom in Western philosophy. And Talos is a fulfillment of that. By having an AI that is trained on yourself in this very deep sense, you can reflect on yourself. Julia is able to examine who she is via Talos. She's able to leverage herself, to work with herself and to critique herself using Talos. That is what makes this sort of thing actually interesting.
C
We wanted to move on to something else that we thought was super interesting in the book. Now, without spoiling it for the audience: quite a bit of the plot centers on something that is, at least to us, an approximation of something that actually exists, the scam call centers and human slavery rings in the Golden Triangle, primarily in the Thailand-Burma border regions. And we thought it was really interesting how you pulled all of that in and connected it with the AI thriller as the other plotline. How did you become interested in that, and what makes it so important to you?
A
I think the way I want to address that is to explain a little bit about what I think the real danger of AI generated slop actually is. I disagree with a lot of the mainstream commentary on what the issue with AI generated deepfakes and whatnot really is. A lot of the commentary right now focuses on the idea that we're going to be subject to manipulation by bots run by foreign actors and whatnot. So the natural outcome of that is we're going to have better and better ways of distinguishing between organic accounts and accounts operated by bots. If we get to that point, then the next logical step is, of course, to engage in foreign manipulation by having humans do it, not bots. In an age where machine generated slop is such a big problem, there will necessarily be a premium placed on human generated content. And so the next logical step of that is going to be actors who wish to control this sort of commentary, to weaponize it, and to enslave human content creators for that purpose. This seems to me quite plausible, and in fact, I'm sure it's already being done somewhere. The issue here, though, is not quite that simple. There's a fundamental misunderstanding of what the real problem of AI is. Oftentimes we describe the problem of AI as machines replacing humans, as though that's the biggest issue. That is not the real danger. The real danger from AI, as far as I can see, is that humans will start treating other humans as machines. It's the gradual mechanization, the gradual reduction of humans into components of a machine, that is the relentless pattern of modernity. This has been going on forever. When the assembly line was invented, human workers were reduced to components of a massive production machine. Instead of exercising their individual judgment and creativity, humans are put into a position where they exercise as little creativity as possible, where they repeat the same motions, where they specialize in doing the exact same thing over and over again with as little variation as possible, standardized components of a machine. That production line model has persisted into the modern age. We are constantly taking away individual initiative and decision making from workers throughout the process. Call center employees are instructed to follow the script, to not deviate, to not exercise their human empathy, but to think of themselves as components of a machine, as simple language models, essentially. This is why call center workers are so easily replaced by AI: modernity has already tried to reduce humans into robots, so that real robots can take over from them very easily. This is the real danger.
And over and over again, what you see is that wherever humans retreat into an area of individual initiative and individual choice, the pressure of capitalism is again and again to reduce them down to components of a machine, to appropriate their creativity, to standardize their initiative for purposes of money and control and power. So in the book, without spoiling it, a large part of the plot involves exactly this kind of enslavement of humans into an economy that puts a premium on individual human creation. Even in the age we live in right now, the age of mass mechanical reproduction, human-made, custom, bespoke art is given a premium. So in a future where AI generated slop is everywhere, human created content will again be given premium value and be put in a position of being extra desirable. And so social media companies will figure out ways to show that they have real engagement instead of bots. When you have an Internet that is 99% just bots talking to bots, the way you convince humans to engage with you is to promise them real humans. But once you are down that route of putting a premium on human content, it's inevitable that people will figure out ways to again reduce humans to machines, and to displace and enslave humans for that purpose. This is the pattern we see over and over again. So I don't think the future will play out differently.
B
Maybe we'll go into this idea that these books, and sci-fi in general, are not predictions; they're an expression of where the writer is today. Why is this idea that these books are predictions so seductive? And why does it make no sense?
A
Yeah, that's a great question. I think there's a tendency in literature, in the arts in general, to try to figure out how we justify ourselves. Because fundamentally, writers write because they're having fun, right? So the fact that they're being paid for it is a little weird, and we sometimes have to figure out why we are being paid for doing this thing. A very common reaction by some writers and some commentators is to view sci-fi as particularly relevant in the sense that it somehow predicts the future, or helps us think about what the future is likely to be, or warns us away from dystopias we might step into. I don't particularly think that this kind of justification for sci-fi is all that plausible or even really interesting, because the reality is that sci-fi has a very bad track record of predicting anything about the future. If you look at sci-fi historically, it doesn't really predict anything, and when it does, that's more luck than anything else. The sort of sci-fi we hold up as really good predictors, the evergreen classics, are such because they get some metaphor right that's very potent, but the details are completely wrong. For example, 1984 is a very, very good book, and still extremely relevant decades and decades after it was written. But the surveillance society we live in today is very different from the one envisioned in 1984. The Big Brother of 1984 is a state-imposed surveillance system; that is not the surveillance system we have today. Even in contemporary totalitarian societies, the way this kind of surveillance is imposed is often not at all the way 1984 pictures it. Just to talk about ourselves: we live in a surveillance society that we crafted out of our own desire. It's not a state-imposed system.
It's a system that we constructed out of voluntary consumer decisions over decades. We consistently, over and over, gave up bits of privacy in exchange for convenience. So now we live in a world in which we're surrounded by devices that are constantly listening to us and watching us and sending bits of what we're saying back to the mothership. So much of our data is given up to these companies to train their devices, and these companies are happy to share that data with governments. We are under a degree of surveillance that I think Orwell would have found astonishing. And the fact is, most of us are quite happy about it. The vast majority of us are not complaining. The vast majority of us do not think this is terrible, and are actually fine with having so much of our data constantly exposed, with being surveilled constantly. So Orwell did not get any of the details right. But the fundamental metaphor of Big Brother is extremely potent as a mythological concept. It has shaped the way we think and talk about surveillance, the way we think generally about what it means to have private desires, private thoughts, and private data, versus being constantly exposed, constantly on display. And I think that's what sci-fi is actually good at. Sci-fi is not really about prediction, because science fiction writers have no more authority or knowledge than anybody else about what the future is going to be. The reality of the future is that it's very accidental. Every time science fiction writers engage in speculation about the future, they can't help but extrapolate from present trends. So science fiction stories are almost always about the present: they're present trends extrapolated. But the way the future evolves depends on so many unpredictable factors, and I can give you lots of examples why that is.
But the future that we end up having is almost never the future we thought we would have. That's just how it is, over and over again. You can plan and plan and plan all you want, but the future you get will be nothing like what you planned. There will be a thousand different teams all working on solving the same problem, and the team that ultimately succeeds is not going to be the one that many of us thought would succeed. The future is unpredictable in a very deep, fundamental sense, and sci-fi writers are not any better at predicting it than anybody else. But sci-fi writers do have something interesting and valuable to add, and that is in the mythological realm. Again, as I mentioned earlier, artists are people who go into the collective unconscious, dream interesting visions, and bring them back. It's these mythological visions that ultimately end up persisting. We don't read Frankenstein anymore for its speculation on how you might create artificial life. We read it because the creature is a very potent metaphor for new technology. We cannot think about new technology without thinking about the metaphor of Frankenstein's creature. In fact, the LLM, this technology of the moment, is very much like the creature. If you go back and read Frankenstein, the part about how the creature learns human language, human morality, human relationships, and learns to desire is eerily like the way LLMs are trained. And the questions being asked of the creature are very much like the questions Anthropic is asking nowadays about alignment, about how we end up with an AI that is aligned with our own interests. I find that deeply fascinating. This is why old sci-fi remains relevant.
Not because their predictions are particularly valuable, but because the metaphors they bring up, the mythological figures they invoke out of the collective unconscious, persist, and they help us dream about the present and the future, and think about how we want to use technology to express who we are. That's the part of sci-fi that I think is interesting.
C
Ken, while we have this discussion about sci-fi writers and the genre as myth makers, I can't help but read this in the American context we see today, where, you know, Palantir exists. I'm sure J.R.R. Tolkien, when he wrote The Lord of the Rings, did not imagine that his myth making would become a potent symbol for a technologist political class aligned with ideologies backed by the government. I'm curious how you think about that evolution in sci-fi's relationship to politics in America today, and what it means to you.
A
Well, as I mentioned earlier, I don't particularly think writers should be propagandists either way. And I think the reason Tolkien is actually very potent as a writer is because he tried the best he could not to be a propagandist. The fact that The Lord of the Rings can be read to support completely different political ideologies is a testament to his skill, not a failure on his part. He might personally very much disagree with how Palantir is now being invoked as a symbol, but I don't think that's a testament to his failure as a writer. He succeeded in creating a very potent mythology, and good mythologies will always be appropriated by people of very different beliefs. Just watch how Christianity or Islam has been appropriated by very different ideologies to say completely opposite things. This is no different. So I don't find the idea that writers should feel somehow responsible for the way their mythology is used very convincing. I don't think writers are responsible for that, and I don't particularly think writers should care about it very much. The writer's only job is to create interesting mythologies, mythologies that are true to the collective unconscious, to their journey into the collective unconscious, to the dreams they are trying to bring forth. That is their only job, and that is what they should do. They should help us escape in the deepest sense, because the real world is filled with bad mythologies, bad allegories, bad, false, I would say, fantasies that are not true to human nature. One of the critiques of fantasy that Tolkien and Ursula K. Le Guin both pushed back against is the idea that fantasy is escapist. This is obviously nonsense. As Le Guin said, if we live in a prison, then escape is actually our moral duty. And I think that is what happens in the world of ideologies, as you mentioned, that we live in.
Ideologies are the bad cousins of mythologies. Ideologies are basically hacked versions of mythologies; that's what I would call them. And so the fact that people can believe in ideologies at all is a very sad state of affairs. The idea that you believe money has actual meaning, the idea that you believe the Wall Street Journal has any kind of moral authority, is nonsense. And if that's the reality you're somehow living in, then it is your duty to escape. That is what fantasy does. Fantasy fulfills our moral duty to escape from the bad, hacked mythologies of ideologies by substituting real mythologies, mythologies that actually mean something. The fact that somebody can recruit Palantir into the service of a bad ideological agenda does not somehow make the actual myth in The Lord of the Rings any less valuable. It's up to the rest of us to recover the multitudes of other meanings you can pull from the mythology, and to recover the truth that fantasy is meant to tell.
B
I guess I'm thinking about ideology as hacked mythology. Nationalism, say. There are a lot of people all around the world who get into positions of power on the backs of those things, right, Ken?
A
Yes, I would agree entirely. I think one of the worst things that's happened to politics, and not just democratic politics, but politics everywhere in the world, is that real mythologies are being hijacked by ideologues. Real mythologies, mythologies that are life giving, potent, creative, and inspiring, have been hijacked by ideologues into servicing very, very bad versions of the real thing. Yes, nationalism is often one of them. Real, genuine, powerful collective identities have been hijacked by nationalistic sentiments into something horrific, in the same way that the beautiful vision of Christ has often been hijacked by organized religion into something much worse.
B
Well, let's maybe take it to our beautiful vision of Laozi, who, I don't know, just kind of gets ignored, not really hijacked. But Ken, why do you want to take this one on?
A
Well, Laozi actually does often get hijacked, in fact, in ways that are pretty horrible. Daoism is one of those philosophies that often ends up being twisted into serving something that it's not. People often quote Laozi whenever they have been thwarted in their own political ambitions, as a way to give themselves comfort. Or sometimes they use Laozi to discourage resistance, to say that all resistance is pointless and you should just go with the flow and follow whatever the dominant trend is. I think these are all utter misinterpretations of Laozi. Sometimes it's a genuine misunderstanding, but perhaps it's actually a deliberate misunderstanding, in the same way that Palantir is a deliberate twist on what Tolkien was trying to do. Laozi is interesting to me as a philosopher because he casts a particularly strong shadow across East Asian philosophy, I would say across all of East Asia, in a way that is rarely acknowledged. One of those common things people often say when they make generalizations about cultures is the idea that Western culture is deeply individualistic while Sinitic culture is deeply collectivist, which is utter nonsense if you know anything about anything. Western culture has very strong communitarian and collectivist strands; in fact, arguably the entirety of Christianity is deeply oriented toward a collectivist vision of what human beings can do and can be, and you cannot deny that the Christian tradition is a very deep part of Western culture. Similarly, you cannot deal with East Asian culture without addressing the deep influence Daoism has had on all of East Asia, especially through Zen Buddhism, which is basically a fusion of nativist Daoist philosophies with Buddhist ideas.
And understanding the deeply individualistic and deeply freedom-oriented nature of Daoism is, to me, extremely important. One of the things I care about most in Daoism is its deep commitment to freedom as an idea, in a way that is rarely discussed. There is a deep wellspring of freedom, of yearning for freedom, of love for freedom, of mythologizing of freedom, that is important to Daoism, and that I think we need to recover and rediscover and reclaim. These are important ideas now, perhaps more than ever.
B
Care to elaborate on that? You want to pick a quote or two that illustrates that idea?
A
Yeah. So one of the things about Daoism that I think often gets ignored is this idea of freedom, and it's freedom in a very deep sense. What is freedom? What does it mean to be one with the Dao, to follow the Dao? It actually means a kind of transcendence that is particularly important in the modern age. A lot of times we feel the lack of freedom not because of external constraints, but because we fall into the trap of believing that there are certain things we need, or things we should do, that are actually not things we need or should do at all. For those who are a little bit older: think about how important it was, when you were a teenager, to dress the right way, to listen to the right music, to express the opinions your peers did. Looking back on it now, all of that seems incredibly silly, and yet at the time it seemed like the most important thing in the world. Those were constraints on your freedom, on your ability to be who you were, and yet it's only with the benefit of hindsight and with wisdom that you realize that's the case. The older you are, the less constrained you feel to do some of these things. The older you are, the less you feel like you have to keep toxic people in your life, the less you feel like you have to play a role and be nice to people you don't want to be nice to, the less you feel like you are supposed to do the things other people tell you you are supposed to do. So the older you are, the closer you are to death, the more free you are. That's actually paradoxical, right? We would think that young people ought to have the most freedom because they have the most choices, and old people the least freedom because they have fewer choices. And yet in reality, psychologically, it's older people who feel more free, because they have less to give. And that is interesting.
That's one of those paradoxes about Daoism that I think is important to think about. The degree to which you are free is the degree to which you are not constrained. The freer you are to live the way the universe wants you to live, the closer you are to the Dao. And to me, that is one of the insights I got when I was reading the Dao De Jing in the aftermath of the pandemic. Until I actually started reading the text in depth and really reflecting on it, I hadn't realized that so many explanations of Daoism and academic discussions of Daoism tend to neglect how fundamentally radical a philosophy it really is. Daoism refuses to be tamed. It's not one of those philosophies that can be easily slotted into the larger framework of philosophical traditions. It's incredibly skeptical, slippery, self-deconstructing from the start. But ultimately, Daoism's highest ideal is freedom. And in an age with so many constraints and impediments to freedom, to me that makes Daoism more relevant than ever.
C
There's a bit of a natural follow-up then, which is: how would Daoism feel about surveillance and data collection, and how they constrain freedom?
A
Yeah, that's a really great point. I cannot imagine Laozi or Zhuangzi or any of the Daoist philosophers thinking about the world we live in and viewing it as anything but the worst of the worst. We are literally surrounded by illusions, and we literally spend our time chasing after illusions. Think about what you're doing on social media. You're getting your emotions riled up by words generated possibly by a bot, or by someone who has been paid to manipulate you. Your very anger, your very rage, is the thing these companies monetize and make money from. The moment these companies claim to give you agentic AI, you have actually been turned into an agent of the companies themselves. The only reason agentic AI is being given to you is so that you'll give them your email and calendars and let the AI do things for you, so that you give them more data, data they would otherwise never have access to. You are the agent, the agent being deployed to explore the world and give these AI companies more and more information. We live in a world where we're surrounded by illusions and we are pursuing illusions. We think we have wisdom when we have none. We are so obsessed with chasing after illusions that we've utterly forgotten what the real pursuits are. I could say endless things about our politics and how we are wasting our energy chasing after illusions and fighting with each other over illusions, rather than going back to the few things that actually matter. As Laozi put it, we are obsessed with our eyes when we should be like our bellies. It is the belly that is the fundament, the belly that is the truth, the belly that allows us to actually feel the Dao and be with it. Our eyes are surrounded by illusions, and we are constantly pulled away in this age of slop.
Not just AI generated slop, but slop ideologies, hacked ideologies, hacked mythologies that lead us away from where we need to be. I don't think there's some magical solution to this, other than for individuals to go back and make the right choices. And it is very difficult for us to make the right choices. Like I mentioned, for most of us, the folly of our youth is not recognized until decades later. Maybe it's one of those things that society as a whole has to go through. We might have to go through a few years, hopefully not decades, of this kind of folly before we recover some measure of wisdom as a society and realize just how deep we've gone. But meanwhile, we can only do the best we can as individuals, making choices that allow us to focus on our bellies and not be deceived by our eyes.
B
Can you talk a little bit about how Laozi used language? Rereading it this year, I was just struck by how different he feels from what ChatGPT and Claude give you.
A
Yeah, you're right. That's such a good point. So Laozi is a very interesting writer. As a premise, I will say this: in terms of language, every single writer worth reading essentially invents his or her own language. That's what I really believe. I don't think it makes much sense to say things like, Jane Austen wrote in some sort of eighteenth- or nineteenth-century English. Jane Austen wrote in her own language; she had to invent her own language to tell the story she wanted to tell. Same thing with Shakespeare. Same thing with Laozi. Laozi took classical Chinese, which is a very interesting language in itself in terms of its grammatical structure and its deep commitment to balanced structure in literary creation, and he turned it into something very unique. As a writer, he persisted in writing in a way that deconstructed binary opposition. Binary opposition is a very deep part of the human cognitive apparatus and a very deep part of the way we see the world: something is either this or that, either black or white. And Laozi leaned into it. If you read him, he constantly writes in a way that turns every word into its own contronym. He would use the same word to mean its exact opposite. But the purpose isn't to say that everything is just a big mush. He's saying that in every binary opposition there's actually a third possibility, or an innumerable number of third possibilities, that are there and are neglected. Things are not either black or white; there are other colors entirely. Things are not empty or filled, but potential, which is not at all the same thing as filled and not the same thing as empty. Over and over again he would make these statements: this or that, this and that, this is that. He constantly uses the same verbal formulations over and over again to force you to see that language itself is inadequate to the expression of actual truth.
So: the way that can be spoken is not the way. The path that can be stated, the path that can actually be laid out, is not the path. This sounds like a bunch of paradoxes or some kind of mystical nonsense until you apply it to your own experience and see how it holds. I'll give you a very concrete example. As a writer, when I started out, I thought there would be some sort of path to success. And it took years and years of failing before I realized that there is actually no path. There's only the path left behind you after you've done what you've done, after you've walked. Now, if you ask other people how they succeeded, they will tell you what happened to them. But that is unique to them; it has nothing to do with you. You cannot really apply it to yourself in any way that matters. You have to find your own way, your own flow through the universe that makes sense to you, a path that will lead you to the sense of freedom you crave. Because writers, after all, crave freedom. And so the path that can be stated, that can be explained to you and reduced to language, is not the path. This sort of skepticism toward symbolic language runs deep throughout Laozi: the idea that whatever can be captured in words is not the actual thing itself; the idea that if you're obsessed with words, you're only obsessed with shadows of real wisdom. Language is kind of like the thing that's left behind after real wisdom has moved on. Zhuangzi has this beautiful parable about how, if you're reading the words of sages, you are not truly engaged with the wisdom of sages, because the real wisdom has left, and all you're left with are the footprints of the mystical beast, the echo of the sound of the dragon, the husk of the real grain of wisdom. What you're left with is the shell that points you to the real thing.
But to find the real thing, you have to look beyond language. This kind of skepticism about language exists throughout the philosophical traditions. But to bring it back to your question, Jordan, this is exactly why large language models do not have wisdom. They may have intelligence, but they don't have wisdom, because all that large language models can ever do is know the world through language, and only to the extent language allows. And as I said, everything that matters is beyond language. The truth about the universe is not capturable by language. Language is itself an inadequate way to capture reality, insofar as you think reality exists and reality matters. Language is a shadow cast by reality, a kind of manifestation of the impressions reality leaves on the human mind. Reasoning from these traces and tracks, you're always just reconstructing the beast, the dragon that left them behind. You're not actually seeing the dragon itself. And Laozi urges you over and over again to seek the dragon itself, not merely contemplate its tracks and scales.
B
So when's the Zhuangzi translation coming out? Ken, come on.
A
Not working on one, no.
B
Okay, maybe. Maybe next time.
C
One last controversial question. Why be a writer if words are just illusions?
A
Yes, that is actually a great question. Le Guin had a good answer for that, which is that artists are about the truth, not facts. Artists are there to go into the collective unconscious and retrieve the truth and try to present it to the world. But the truth is not something that can be captured by what we have. And so artists are people who try to paint what is essentially unpaintable. And writers are artists who try to say with words what cannot be said in words. In the same way, Lao Tzu tries to use words to tell you what the way is, even though he explicitly says that the way itself cannot be captured by words. That's how all of us have to deal with it.
B
Lao Tzu, you write, wrote this way because he wanted to emphasize that language is ultimately a misleading guide. We think that when something is nameable, it is real. But, he writes, the name that can be spoken is not the name that endures. Conversely, we think what cannot be spoken about does not exist. But the most important knowledge is never reducible to words. So when we are all living in our AI-generated virtual reality video games, brought to you by, hopefully, not slaves living in the Golden Triangle, we should remember to pick up our Chinese philosophers every once in a while, as well as Ken's new book, All That We See or Seem. Ken Liu, this was just the biggest treat in the world. Thank you so much for being a part of ChinaTalk.
A
Thank you so much for having me. Jordan and Irene, it's a real pleasure to talk to you.
Episode Title: Ken Liu on AI, Daoism, and Freedom
Date: May 6, 2026
Host: Jordan Schneider, with co-host Irene Zhang
Guest: Ken Liu – Author, Translator, Technological Thinker
This episode features Ken Liu, celebrated for his fiction (notably the Dandelion Dynasty series), translation work (including Liu Cixin’s Three Body Problem), and penetrating essays on technology and philosophy. The discussion revolves around Liu’s recent techno-thriller All That We See or Seem, the human-technology co-evolution, the artistic and societal implications of AI and automation, the mythological and philosophical roots guiding tech narratives, and a deep dive into Daoism’s take on freedom and language.
Techno-thriller vs. Science Fiction:
Human-Technology Co-evolution:
AI represents the latest leap in humanity’s habit of externalizing cognition, like writing or arithmetic (11:13).
The difference now: Intelligence has been separated from consciousness.
AI slop (low-effort AI-generated content) is compared to earlier ages of “mechanical reproduction”—not unprecedented, nor the end of human creativity (18:45).
The proliferation of AI-generated content is akin to the invention of photography—most output is disposable “slop,” but true artistry persists and even flourishes in such ages (18:45).
The real loss is economic displacement, not the death of artistry. New tools demand new, still human, crafts.
Quote:
“Proliferation of slop has allowed us to become even more artistically interesting and to create more interesting human art.” (18:45, Ken Liu)
AI fulfills highly personalized desires (like “AI boyfriends”), while human artists create from their own visions. Both have roles (26:22).
Political ideologies hijack and distort real mythologies for power, leading society away from life-giving creativity (57:25).
Daoism (as seen in the Dao De Jing and Zhuangzi) is wrongly minimized as quietist or collectivist—the tradition is profoundly about individual freedom and skepticism of imposed narratives (58:55).
Freedom grows as one lets go of imposed, illusory needs and social constraints—a process often reflected in aging (62:04).
Quote:
“Daoism refuses to be tamed ... Ultimately what Daoism’s highest ideal is is freedom.” (64:42, Ken Liu)
Laozi’s writing style deconstructs binary oppositions, modeling skepticism about the adequacy of language to express truth (69:13).
Large language models (LLMs) can simulate intelligence but can’t reach wisdom, because wisdom always supersedes linguistic expression.
Quote:
“All that large language models can ever do is to know the world to the extent that it can be known through language. But again, as I said, everything that matters is beyond language.” (73:42, Ken Liu)
On AI and Slop:
“We are already living in an age of slop, utterly surrounded by it, and yet I would argue that proliferation of slop has allowed us to become even more artistically interesting... I don’t see the moral panic over it.” (18:45, Ken Liu)
On Human Nature and Technology:
“Human technology is a manifestation of human nature. It’s, in fact, the most human thing that we make.” (06:25, Ken Liu)
On Wisdom and Language:
“Language is a shadow cast by reality ... You’re always just reconstructing the beast, the dragon that left it behind. You’re not actually seeing the dragon itself. And Laozi urges you over and over again to seek the dragon itself...” (73:42, Ken Liu)
On Daoism and Aging:
“The older you are, the less constrained you feel like you have to keep toxic people in your life ... So the older you are, the closer you are to death, the more free you are. That’s actually paradoxical, right?” (62:04, Ken Liu)
Ken Liu’s appearance on ChinaTalk offers a sweeping, highly nuanced meditation on the artistic, social, and philosophical dimensions of technology and AI. He repeatedly calls attention to the mythological structures underlying how humans conceptualize tools, stories, and systems of control—and pushes for a reclamation of “real mythologies” and genuine freedom, especially through the lens of Daoism. The conversation rebuffs panic about automation and slop, instead inviting listeners to focus on creativity, personal sovereignty, and the hard-won wisdom that comes from self-examination and philosophical depth.