Transcript
Podcast Host (0:02)
Tetragrammaton.
Dan Brown (0:23)
I love to write about people who do the wrong thing for the right reason. So I like to find a big topic, ideally one that I want to learn about, that has that moral gray area. It might be civilian privacy versus national security. It might be whether AI is going to save us or kill us. It might be the future of human consciousness, whether learning about how our minds work is a good or a bad thing. And so for me, really, it's choosing a topic. After that comes location, and last of all come the characters.
Interviewer (0:55)
Are there other topics floating around for potential future books?
Dan Brown (1:00)
That's funny. Yes, there are. But the bar has gotten so high. Having just written about human consciousness and before that, AI, there are plenty of things that interest me, but it's very difficult to find something that I think resonates on the level of the influence that AI is going to have or the influence that understanding human consciousness is going to have. I do have some new ideas that I'm sketching out, but at the moment, I think I'm pretty tired. This book took eight years.
Interviewer (1:24)
Wow.
Dan Brown (1:25)
It was by far the most ambitious thing I've ever tried to write. And I also happen to think it's the most fun. But maybe that's just what every author says when they finally finish a book.
Interviewer (1:33)
I'm about midway through and it's a roller coaster ride. I can't remember a roller coaster ride this fun in a long time. So it's working.
Dan Brown (1:42)
Thank you. From you, that's very, very kind.
Interviewer (1:45)
You said someone who does the wrong thing for the right reason. Now, when someone does the wrong thing for the right reason, would that person be a hero or a villain?
Dan Brown (1:55)
Well, that's the question. And you get it. Those are the interesting villains. I wrote a book called Inferno, where somebody believed that overpopulation was going to kill us as a species and decided really the only thing to do was to stop overpopulation by cutting the population in half in a very creative way. And it just asked this question: if you could save humanity by exterminating half of the population, could you do it? Could you pull that lever? And that's just a fascinating question. It's funny, when I was researching AI, this was sort of along the same lines of the wrong thing for the right reason. I went to the Barcelona Supercomputing Center and talked to a specialist in AI, and I said, I don't understand why everybody's so afraid of AI. We're writing these programs. Can't we just write one line of code at the bottom of everything that says, okay, AI, do whatever you want, but it has to be in the service of humankind? And this man sort of laughed. He almost patted me on the head like a child and said, well, it's a little bit more nuanced than that. Let me tell you what's going to happen if we add that line of code. An AI is going to take a look at all of the resources on planet Earth and say, well, I see you have resources for about 4 billion people. You currently have about 8.5 billion people. Let me take care of that for you. And that's when I kind of realized that I was approaching it from a very naive point of view, and that the problem really is nuanced and subtle and difficult.
