Transcript
A (0:00)
AI is not telling me what to think about. Mostly it's telling me what I don't need to think about. Cognitive surrender is an uncritical abdication of reasoning itself. It reflects not merely the strategic delegation of deliberation, but a relinquishing of cognitive control. There is something about AI, about its allure, about its potency, that could make cognitive surrender more and more widespread. The way in which I think has changed much more than I expected, and I'm still figuring out what it means. The Argument Engine is built from about 100,000 words of my writing. House views run an argument against our established positions. Stylometer. And of course, we still have the golden thread check. Great thinking traditionally happens with care and self-reflection. And in a world where there are all these tools around me, am I still doing that quality of thinking? Am I doing the quality of thinking that I could do with 10 uninterrupted days? What I want to do today is talk about something that I've been circling around for a few weeks, perhaps even months, which is the way in which our thinking processes change during this period where we're using AI. Is our thinking getting better? Is it getting worse? This matters more than it ever has before. I mean, AI is inside all of the processes that I use. It's not a tool that I pick up and put down. It's a tool that's there the whole time. It's become completely ambient. You've seen us and heard us talk about this. We've struggled with this issue. We've tried to expose it to our readers and our listeners. What is AI doing to that process of thinking? How can you use it well? Where does it harm rather than help? Just a couple of weeks ago, we had the first of our AI Vistas conversations, where we sat down and discussed this question: do we use our tools, or do they use us? And I think that really gets to the heart of this question. 
And having had that discussion with Vistas, I wanted to come back to all of you and show, to some extent, how my thinking process has evolved and what it looks like now in a world where I'm using a lot of AI, the hundred million tokens a day that I currently use to support my activity. Of course, we're not the only ones to be asking this question. The New York Times writer and podcaster Ezra Klein was on a podcast with David Perell last year, and he was talking about this very issue. And I'll quote Ezra here, so I'm just going to read from my notes. Having AI summarize a book or a paper for me is a disaster. It has no idea what I really wanted to know. It would not have made the connections I would have made. I'm not interested in the thing I will see that other people would. Pardon me, I've messed that up. I'll try that again. I'm not interested in the thing. I'm interested in the thing I will see that other people would not have seen. And I think AI typically sees what everybody else would see. So that's what Ezra said. And actually, what just happened there, with my fluffing my lines twice, is a perfect example of thinking. Ezra has his own interiority, his own world model, his own way of speaking. And so me reading out his words was difficult. I'd practiced them, of course, and there you saw me fluff them up twice. That is the heart of the question of thinking. And of course, we're, you know, we're concerned about becoming intellectually lazy as well. There was a paper earlier this year, available on SSRN, by two academics, Shaw and Nave, and they draw a distinction between cognitive offloading and cognitive surrender. So cognitive offloading is a strategic delegation of deliberation. For example, I used to remember all my friends' phone numbers. They were all actually their parents' numbers, because we didn't have phones, because that was in the days before the mobile phone. And now we've offloaded that. 
Nobody really remembers any numbers, and it's not really a problem. Cognitive offloading is a tool to aid one's reasoning in one way or another. But what Shaw and Nave also describe is cognitive surrender, which is an uncritical abdication of reasoning itself. It reflects not merely the use of external assistance, but a relinquishing of cognitive control. So cognitive surrender is rather more pernicious as a practice, because we uncritically abdicate our reasoning. And I think a lot of people are worried that AI will lead to widespread cognitive surrender. And for good reason, I suppose. One thing I would add is that we probably do all exhibit some cognitive surrender just in our day-to-day. We can't be critically reasoning about everything that we deal with in life. And there are systems and structures that we simply have to accept, because it just makes life easier. But there is something about AI, about its allure, about its potency, that could make cognitive surrender more and more widespread. So let's come to how I think about thinking itself. My work, my job, is anchored around research, around analysis, around communicating those ideas. Ultimately, it's really anchored around thinking. And I think that's true, of course, for many jobs. But you can say that a job like mine, which deals with ideas and data and connecting them together, is really all about that. And if I don't think new things, things that are challenging to you, things that perhaps you agree with, things that perhaps you don't agree with, well, I'm not doing my job at all. Thinking is the essence of it. Now, I've been doing this since, well, well before ChatGPT. I mean, my last book came out in 2021, and I talked about GPT-3 in it. There was no AI of any meaningful capacity or capability to help me with it. When ChatGPT launched, I realized immediately that this was going to change the way I worked, and I started to play around pretty quickly. 
Of course, over the last three years the tools have become, well, I mean, amazing. There have been at least three significant paradigm shifts. The first one was ChatGPT itself. The second was the arrival of reasoning models. And I think the third has been the arrival of these long-context, tool-using systems like Claude 4.5 and Claude Cowork. They are so much more than ChatGPT running on GPT-3.5 was three years ago. And I suppose the question is, how has that changed my process of thinking? In what ways is it better? And in what ways might it be worse? In preparing for this and thinking about this over the last couple of days, I really came to the conclusion that the way in which I think has changed much more than I expected, and I'm still figuring out what it means. And one question that I ask myself, and I'll return to this during the course of this conversation, is: am I doing the quality of thinking I could do with 10 uninterrupted days? I mean, it's the dream of everyone who's got an idea, their fantasy, that you could just have 10 days without the noise of the world around you, to really allow your mind to get deep into an idea and play around with it. When I was writing my first book, I might sometimes get two or three days at a time. And by the time you get into the second day, it's like an intermittent fast, right? You get all of the benefits in day two. Well, the same thing was happening with deep thinking. So I'm going to go through this question in what I hope will be five brief acts: how I find signals, how I reflect slowly, how I write, how I critique, and ultimately how the words come out. And at each stage I'll try to illustrate how I'm using some of the AI tools, largely ones that we have built ourselves, to do this. And when we put the essay of this podcast out, we will share some screenshots with more details. So the first stage is, you know, in a way: where do those ideas come from? 
What are the signals that trigger my nervous system to send a signal to my brain to say, listen, you should write about this? Of course, we use a lot of AI in this area, right? I have a signal detection layer that sweeps a very wide range of inputs automatically. Just think about all the email that I receive, or that you might receive. All of these things show up as signals in various inboxes, and I'll run detection across that volume of insight. I say insight; I mean, a lot of it isn't insight, of course. A lot of it is just a news story, a funding announcement, a new chip that's been launched. Some of it is pabulum; some of it is incredibly powerful and relevant. And so the question is, how do you get that AI system to do the detection? How do you get it to detect signals? I mean, typically what you would do in a signal detection system like this is something statistical. You'd go off and say, well, listen, normality looks like this. And if we have a signal that's two standard deviations away from what normality looks like, that's an anomaly; draw attention to it and bring it to someone's desk so they can deal with it. And, I mean, that's a perfectly reasonable thing to do. But it does depend on your being able to characterize those signals in some kind of statistical way. But what it also has you doing is seeing things, to come back to what Ezra said, that everybody else sees, because everyone else is going to run that same two-standard-deviation approach. So, of course, I will do things like this. But I also make really heavy use of archetypes. Archetypes are synthetic personas of people that I think are interesting and distinctive. And I have these archetypes scanning my inboxes to find things that they, as archetypes, might find interesting, together with their interpretation. I'll give you some examples of those archetypes. One is based on me, and I'll speak a bit later about how we build that. 
I have another based on the Silicon Valley investor, my friend Vinod Khosla. I have another built on the hedge fund investor who spotted and shorted subprime back in 2007 to 2009, a guy called John Paulson. Another synthetic is based on Clayton Christensen, a Harvard professor who came up with the disruption theory of innovation. So each of these archetypes, and there are several others, will scan signals, dozens of items a day and hundreds a week, come back, and provide a view. What I noticed is that you do want them to look at things daily, but you also want them to look at outputs over the course of a week, or perhaps even longer. So that's one way of helping me understand and build a picture of the world that is out there. I also look for cross-cutting relationships between themes. Is there a pattern of centralization or decentralization emerging across different industries? And if so, why? And if not, why not? Is there a trend towards open source? Are there differences between age cohorts in behavior or attitude that are emerging across all of these different signals and pieces that are coming into my inbox? And the purpose of that, again, is to try to find common threads that could be interesting, that could be evidence for underlying processes that are going on. And of course there are the prosaic summaries of industry trends and academic papers, momentum here and breakthrough there. That's all the hygiene stuff. That is probably least interesting but most commonly read. When we launched Exponential View Daily last summer, we hinted at what was under the hood. What we were attempting to do was show our ability to make sense of this and how that feeds into the kinds of decisions that we make. The end result of all of this signal processing is something that shows up in my email or in my WhatsApp. I read maybe half of it. What it does is give me situational awareness. That's all it is. 
It is situational awareness. I'd say it's a haystack, and occasionally there's a needle in it. Very rarely do these signals actually trigger me to write something. What they might do is reinforce a belief, or just provide me with a bit more evidence or a critical view. So what's going on here is that the AI is not telling me what to think about. Mostly it's telling me what I don't need to think about. So I'll give you two concrete examples. We've just seen these incredible results from Anthropic, where their revenue is approaching $20 billion per annum on an ARR basis. They added about 5 billion in February alone. It's kind of staggering. Now, that could be a hot take, right? I mean, if I had time, maybe I would have produced something that people would have read on that subject. But in a way, if you've been reading Exponential View, you know that we've already predicted that, suggested that that could be an outcome relative to OpenAI, because we've spoken extensively about how we felt that OpenAI was spreading itself thin. It didn't have a coherent way of tackling a market. It was choosing a ubiquity strategy rather than a focused beachhead strategy. And on the other hand, we had been talking about how Dario's strategy at Anthropic was so specialized that it likely lent itself to the kind of classic land-and-expand that has made so many startup companies successful. So in a sense, I feel like I don't really have all that much new to say right now on that particular angle. So I'll pass. Another one is prediction markets. There's a lot being written about prediction markets. Should insider trading on prediction markets be allowed or not allowed? In a sense, they work better if it is allowed. Should prediction markets make their way across the economy more widely? There are some really interesting debates there, but actually I don't have a framework to exploit, so there's no essay. 
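As an aside for readers of the essay version: the statistical baseline described earlier, flagging anything more than two standard deviations from normality, can be sketched in a few lines. This is a minimal illustration of the general technique, not the actual detection layer; the metric (daily mentions of a topic) and the threshold are assumptions for the example.

```python
from statistics import mean, stdev

def detect_anomalies(history, new_signals, threshold=2.0):
    """Flag signals whose metric sits more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for name, value in new_signals:
        z = (value - mu) / sigma
        if abs(z) > threshold:
            flagged.append((name, round(z, 2)))
    return flagged

# Baseline: daily mentions of a topic over the past two weeks (illustrative).
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14, 13, 15]
today = [("new-chip-launch", 14), ("funding-round", 41)]
print(detect_anomalies(history, today))  # only the unusual item is flagged
```

The limitation he points out follows directly: any competitor running the same z-score over the same feeds surfaces the same items, which is what the archetype approach is meant to escape.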
The way to understand this is: I get the situational awareness from the AI, but the writing triggers are still very, very experiential. They emerge from conversations I have, they emerge from listening to people, they emerge from a lecture, or just some kind of moment of friction with an existing idea. Look, this isn't quite Kekulé having his lucid dream, imagining the snake eating its tail, and voilà, there is the benzene ring. But it isn't a process that I think gets systematized. It gets supported by these tools more than anything else. And signals are noisy. So let's get to the second act: quietness. Like, what's the value of quiet when you're thinking? I wrote my last book without AI. There was Google, there was Wikipedia, there were hours of writing and reading. There would be stints of 10, 15 hours where I wouldn't leave my desk, leave my chair, and I'd be reading and taking notes, note after note after note, quite often. I mean, it's very hard to do that, right, because you're not living any other part of life. And I did wonder whether AI would create more interruptions, the way the smartphone and Twitter and so on have. But as many of you know, I have felt that AI has given me more time back, more time to think, and time to do it without interruption. That isn't to say that I'm not using AI tools to summarize long documents. I do that all the time. And let's be honest, much of what you read, much of what I read, deserves to be summarized and should only be summarized. But the best stuff doesn't; that gets ordered from Amazon, or downloaded, and read. But where the summaries do help is that they actually de-risk. They help me understand whether I should dedicate four hours or eight hours or ten hours to a section of a book, or a book in its entirety, whether there is a likelihood that it will challenge me sufficiently to develop my thinking. 
And so there's lots of reading going on as a consequence of this, really, of the space created by AI. Quietness is not a practice. I wish I was that disciplined. It's a capacity. The discipline isn't in scheduling that quietness. My team knows I've got these blocks in my diary that say quiet writing time, which no one pays any attention to, least of all me. What it is, it's about resisting the urge to colonize those gaps at every opportunity. You know, some weeks I don't have any time. Some weeks I will have literally entire mornings and afternoons where I can be with a pen and paper. And you all know about the fountain pen. Well, here's the other fountain pen I use. And actually it leaked a little bit, so I've got inky fingers today. When I'm thinking about writing and doing that quiet work, I will use a fountain pen and I will use A4 paper in landscape mode, by and large, not small notebooks. Small notebooks mean small ideas. They're for to-do lists. They're not for your thinking. And what happens with the pen? Writing with a pen is about flushing the internal cache in my head. It gets things out of my brain differently than typing. There's connection-making that somehow reaches further back in time. It's more associative, it's more surprising. And when something does need to trigger me to check a fact? Honestly, at that stage of your thinking, you're not thinking about facts. They're irrelevant. I just put a little asterisk with the thing that I need to check and put it on the side of the piece of paper. And in theory, I'll go back and check that fact. In practice, I normally forget. But the pen sets a much higher standard for what counts as worthy of an interruption. A quick note: if you want to support us in bringing more of these conversations to the world, please consider subscribing to the show. So we're getting into this idea of writing now. It's been all over X over the last few months, this notion that writing is thinking. 
And I'm going to quote Ezra again. I'll probably fluff the lines again, because I'm reading someone else's words, not mine. Here's Ezra Klein: you can have an epiphany through writing, but weirdly, I think you have to be careful with that, because sometimes writing is a process of persuading you of what the piece needs you to think. You've got to be careful not to become accidentally persuaded by the formalism of whatever your own assignment is. Which is really a very, very nuanced reflection by a great writer on that process of writing. And on the AI Vistas discussion that we had a few weeks ago, Nita Farahany dove straight into the writing-as-thinking meme. She said: I've heard writing as thinking too many times now, and I think it's crap. When I write, I actually give a talk first. I think in public speaking more than in written form. And I agree with both of them in that sense, that it's not as simple as saying writing is thinking. Not all thinking is writing. Mathematical thinking isn't. Certain types of pattern recognition aren't. If you are wandering through an art gallery and you're looking at paintings over a period of time, or from a particular artist, you're thinking, you're pattern-making, you're connection-making. But that's not writing. The writing process itself is a fractal. It operates at lots of different levels. It's not just the words on the page. There are a lot of people who are writers who say, that book is entirely my own work, an LLM didn't write a single word. But the words themselves are just the final surface layer of the writing. There are many other layers, and they all have different degrees of importance at different parts of the process. There's, you know, why am I writing this in the first place, right? What's its purpose? What's its provocation? How will I know if it does whatever I want it to do? That is not about the words that get put down. 
Because the next decision I can make is: how do I actually want to argue this? Do I want to make it in a very directed way? Do I want to make it quite Socratic? Do I want to embrace ambiguity? Do I want to write it in an essayistic style that shows rather than tells, that gradually reveals? Do I want there to be narrative tension? Where do I want that tension to be? This is all the art of writing, and it's nothing really to do with the words. And then there's the nature of the structure that you'll put down with your writing. If you've been trained at one of the strategy consultancies, you're probably familiar with Barbara Minto's Pyramid Principle: situation, complication, resolution. It's useful for certain classes of business writing. It's a structure that we'll often use when we're doing a very analytical piece within Exponential View. But that is also a part of the process of writing. And the words themselves really are often the final surface layer. And sometimes there are words that are great that don't make it into an essay. I do find myself pulling words out from the offcuts of a previous essay and wanting to drop them into a new one. And I think it's important to understand that the help an AI can give operates at those many levels too. So let's go back to the purpose. What are we writing? Why are we trying to make a claim? Well, one of the tools that we've built is a golden thread analysis: the idea that there should be a single golden thread that exists in an essay, and that all of the sections in the essay are in service to that golden thread. All of the paragraphs in the essay are in service of that golden thread. All of the sentences in the essay are in service of that golden thread, except when they're not in service of that golden thread, because writing and thinking is all about the exceptions rather than the rules. 
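For the essay version, a note on what a golden thread check does mechanically. The actual tool is far more sophisticated (and presumably LLM-driven); the sketch below is a deliberately crude lexical-overlap heuristic, with made-up section text and an arbitrary threshold, just to show the shape of the check: state the thread once, then score every section against it and flag the ones that drift.

```python
def thread_alignment(golden_thread, sections, min_overlap=0.2):
    """Crude check of whether each section serves the golden thread:
    score the overlap between the thread's vocabulary and each
    section's, and flag sections that barely touch it."""
    thread_words = set(golden_thread.lower().split())
    report = []
    for title, text in sections:
        words = set(text.lower().split())
        overlap = len(thread_words & words) / len(thread_words)
        report.append((title, round(overlap, 2), overlap >= min_overlap))
    return report

thread = "ai changes the process of thinking"
sections = [
    ("Signals", "how ai surfaces signals that shape the process of thinking"),
    ("Aside", "a fond digression about fountain pens and paper sizes"),
]
for title, score, ok in thread_alignment(thread, sections):
    print(title, score, "serves thread" if ok else "flag for review")
```

Note the output is a report for a human to weigh, not a verdict: the flagged section may be one of the deliberate exceptions he describes.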
And so the golden thread analysis that we will do, which is pretty sophisticated, I think it's now on iteration 50 or whatever, is really designed to help me, or whoever's writing, to think about the deliberate nature, the intent they had when they set out to write that piece, through the golden thread. And of course, we can do similar types of structural analysis on the pacing of an essay as well. Again, not to be told what to do, but to be given things that we can critically reflect on. And when you finally get to the words themselves: you know, when I was writing my first book, I had a subscription to the Oxford English Dictionary, which is, you know, the best dictionary that's out there on the web. If you enjoy words, I would go and get that subscription. It was only about 25 bucks. And, you know, it's got the best thesaurus, so much better than thesaurus.com. But of course, the LLMs themselves are also quite useful as a thesaurus. So we're getting through this process now. We've had the signal detection. We've had the idea of where you reflect to start to develop the idea. We're getting into the writing. But there is this middle stage, which is a circular loop, because my writing process is explicitly not linear. And that means that I go round and round this iterative circle. But the starting point is often that I just have this idea. Maybe it's come from the signals detection. No, it never has come from there. But maybe it's just come from something that I have been thinking about. And it will happen on a walk, or in a long shower, or walking to the tube station. And I will start to build that argument up in my head. And then almost all of the time I will outline it on paper, handwritten like this. In fact, entire sections of my book are written by hand. And I'll just show you: this is my notebook for when I travel. I have this, these pages, you're going to find them hard to read. This will show up in my new book at some point. 
It was the start of a chapter I'm working on. So, effectively, I will generally always start anything that I'm writing, whether it is for Exponential View, whether it's for my book, whether it is a proposal or a talk I have to give, by sketching something out by hand. And if you remember the Magnitudes of Intelligence essay that we published a couple of weeks ago, that again started like that. I had this idea of, like, powers of 10. I thought about the film I had seen, and I started to write it out by hand. And then I get to the second stage, which is that I speak aloud, because I do think when I speak. I like to extemporize. I hear myself, I hear the ideas coming out serialized, and I'll often change and adapt in mid-flow. So having got a sketch, which could be an outline or could be something a lot longer, I will then speak it out and get that transcribed. The transcription is normally done with Otter. I've just been using Otter for a long time, I think nearly a decade now, so it's what I would generally use for this. Sometimes I will use Granola. If I'm on a plane and there's nobody around me, I might use the local MacWhisper, which is not brilliant, but it's better than not being able to write at that time. And then I'll analyze that transcription and I will edit from the transcript. And I might go through that process several times. So: handwriting, read it out loud, transcribe it, edit from the transcription. And that speaking step is really important, because when I speak, it reveals something about the argument that I don't see when I write it down. Now, we do have some AI tools that I use at this point to check and improve the argument. What I would say as well is that the purpose of these tools is not to tell me what to do; it's just to provide some critical reflection on the quality of the argument. How strong and robust is it? So I've built the Argument Engine with Armini Arnold. I didn't name it. Armini Arnold gave it that name. 
The Argument Engine is built from about a hundred thousand words of my writing. And what we did was use an approach called the Toulmin typology. Stephen Toulmin was a philosopher who came up with a way of categorizing arguments. It is an old, reasonably robust, but honestly a bit rigid, way of formally analyzing arguments. But when we did that process, what we discovered were the archetypes of arguments that I tend to use when making a case. And we compared them to, you know, a big catalogue of other writing which also made arguments. And so what I'll do with my essays at this point is pass them through the Argument Engine, really to get that critical reflection from those different argumentative structures. And I can look at them and I can say, well, I agree with this, and I don't agree with that. Oh, that's a good challenge; maybe I should bring that into the essay. And what it will also do, because it's a bit simplistic, is tackle that moment where, as all writers know, we sort of get excited and we meander off, and it feels good to us at the time. It just doesn't feel good for the reader. The other tool that I've started to use, and this is only a few weeks old, is house views. So again, against a corpus of tens of thousands of words of writing, and against work that we have done internally, that my team has done, we've looked at the arguments that we use, arguments like learning curves, arguments like modularity, and established some house views. Some of these are structural house views. They're sort of embedded in the idea of Exponential View: learning curves, those things getting cheaper, the importance of feedback loops. And some things are more tactical, so a house view on what we think about the differences between the Anthropic and the OpenAI strategies. And these house views live in an API which you can access, bots can access. There's an MCP that is available. 
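A brief gloss on the Toulmin typology for the essay version: it decomposes an argument into a claim, the grounds (evidence) offered for it, the warrant linking grounds to claim, and optional backing, qualifier, and rebuttal. The sketch below shows the categorization and a toy completeness check; the field names follow Toulmin's standard terms, but the example argument and the `gaps()` helper are my own illustration, not how the Argument Engine actually works.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """The six components of Toulmin's model of argument."""
    claim: str                       # the position being argued for
    grounds: str                     # the data or evidence offered
    warrant: str                     # why the grounds support the claim
    backing: Optional[str] = None    # support for the warrant itself
    qualifier: Optional[str] = None  # how strongly the claim is held
    rebuttal: Optional[str] = None   # conditions under which it fails

    def gaps(self):
        """List missing optional components, a crude robustness check."""
        return [name for name in ("backing", "qualifier", "rebuttal")
                if getattr(self, name) is None]

arg = ToulminArgument(
    claim="A focused beachhead strategy will outgrow a ubiquity strategy",
    grounds="Anthropic's rapid ARR growth versus OpenAI's broad spread",
    warrant="Land-and-expand in a specialized market compounds faster",
    qualifier="likely, over the next few years",
)
print(arg.gaps())  # → ['backing', 'rebuttal']
```

An analysis along these lines surfaces exactly the kind of critical friction described: a flagged missing rebuttal is an invitation to state when the claim would fail, not an instruction to change it.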
They're also available as Claude skills for the team. And the main point is to run an argument against our established positions. That's not for answers. That's not to tell us what to do. It's for critical friction. And it's also to help us understand whether we might need to update a house view. So the main point here is that we are looking for generative, constructive frictions that rub against existing thinking. Now, this process goes around and around and around in a loop. And again, as people who write know: when do you stop? Normally it's a deadline, but when do you stop? We just agree to stop at a certain point. When you get to the late stage, I think we're in the world of classical drafting. So this is classical back-and-forth drafting. What am I saying? Am I saying it well? When we go off and get an illustration or a chart, does that change anything, change the way I need to think about something? This is the long process of word craft, sentence by sentence, word by word. Is the pacing right? Have I asked too many questions? Are these sentences too short, too punctuated? We just have to figure that out right at the end. And of course, we have tools. Stylometer, which is built from 60,000 words of my writing, is an advanced style guide. It's not just Grammarly or a spellchecker. It's available to the bot, it's available to Claude, to Claude Code, and there's an API for the team. It's a thesaurus and style guide. It identifies problems and ranks how serious they are, so a human editor can come in and act on them, or choose not to act on them. And of course, we still have the golden thread check that is going on. We might at this point also introduce more synthetic personas. I talked about how we like using synthetic personas. One of my favorites is a persona called R. Cukier, C-U-K-I-E-R. That's named after Ken Cukier, who is a senior editor at the Economist. He is renowned for his clarity around the frame of an argument. 
In fact, he's written a book called Framers, which is exactly about this. And that synthetic editor also, of course, comes from the background of Ken being one of the world's best editors. What frame are we actually taking here? And this is a useful final lens for us to use. So all of these loops are there. They can run from the early stage to the late, from the late back to the mid, late to late, mid to mid. They're there because this process is extremely iterative. It is not a pipeline. It is not processized, it is not industrialized, it's not mechanistic. So I want to come back to the initial challenge, which was: are we living in this world of cognitive surrender? Are we taking processes that need to take a lot of time and trying to speed them up? Because of course, one can use a chainsaw to cut down trees faster than with a hand axe. And is this the type of process that lends itself to that? Well, I should say that of course writing has a sort of infinitude to it, and it expresses itself in different ways. And I'm writing in a particular style, to a particular audience, in a particular way. And what works for me may be completely unsuitable for somebody who works in a different field. An academic writer, a poet, somebody working in literary fiction, somebody writing screenplays: these are all different types of writing, and they have their own craft and discipline. So I wouldn't speak for them, but I can speak for the kind of work that I do. And that initial challenge was that, you know, great thinking traditionally happens with care and self-reflection. And am I still doing that quality of thinking in a world where there are all these tools around me? Am I doing the quality of thinking that I could do with 10 uninterrupted days? Well, look, the truth is probably not, because I'm just not sure I'm going to get 10 uninterrupted days at any point in the next 10 years. Cognitive offloading is comforting. It's helpful. We do it with quite a lot of things. 
Cognitive surrender remains a risk, because it's all too easy. But anecdotally, and purely subjectively, what I have noticed is a higher degree of criticality, because I'm going through the process of criticism so much more regularly, and in so many more domains, over the past six months than in previous years. And one really interesting one: of course, I'm now shipping much more code through these coding agents. I'm having to think about engineering and development considerations, which may seem quite far away from the act of writing. But they are critical lenses that actually only really help the process that I'm going through. So I feel like I'm embracing the things that might make it better and staying wary about the things that could make it worse. I'm pretty certain I haven't got that balance right. This is really still about deliberate intent. It's about self-reflection and metacognition, thinking about your own capabilities, and keeping tools as tools, because there really aren't any easy shortcuts. Now, I've built some partial answers. There's the quiet practice. There's the fountain pen. There is the spoken draft. But the question stays open. Thanks for listening all the way to the end. If you want to know when the next conversation is released, just hit subscribe wherever you're listening. That's all for now, and I'll catch you next time.