
A
Welcome to the Thriving With Addiction podcast, where we explore how recovery is not just about surviving, but about truly living. Each week, we'll dive into the science stories and strategies that help people and families heal from addiction and build healthier, more resilient lives. I'm your host, Dr. John Avery.
B
Let's get started.
A
I'm John Avery, and welcome back to Thriving with Addiction. Today I'm joined by Tim Requarth. He is the author of the newsletter The Third, a neuroscientist's field notes on how AI is (or isn't) rewiring our brains, a contributing writer at Slate, and a columnist at the Transmitter. His work has also appeared in the New York Times, the Nation, the New Republic, Foreign Policy, and Scientific American, among others. He received his PhD in neuroscience from Columbia University, and for nine years he directed NeuWrite, an international network of workshops for scientists and writers. He's currently a research assistant professor of neuroscience and physiology and director of graduate science writing at the Vilcek Institute of Graduate Biomedical Sciences at the NYU Grossman School of Medicine, where he studies how AI affects cognition and education. Tim, welcome.
B
Hey, John. Thanks so much for having me.
A
Of course. It's great to talk with you. If my questions come from ChatGPT today, are we in trouble right off the bat?
B
I mean, it depends how good the questions are.
A
The first thing it wants me to ask you, and the first thing I want to know, is a little bit about your background. Tell me a little bit about you. You've had such an interesting career; I loved reading about it before I met you today. Tell me how you made your way to AI and your current position.
B
Yeah. So, you know, I've been teaching writing to scientists for years. I'm a scientist myself and I'm also a writer, and I know how important communicating is for scientists. That's my main charge at NYU: training the PhD students and the larger scientific community in effective communication principles. When ChatGPT hit the scene in November 2022, that greatly complicated my job. I used to sit in a dusty corner and tell people that they should follow good sentence-writing principles. Now, all of a sudden, I'm at the center of the university in terms of what people should be using AI for and what they should not be using it for. And that really launched for me a deep inquiry into what purposes writing serves for people, and more generally, what purposes certain kinds of cognitive, social, or emotional acts serve for people, acts that are now potentially replaceable by machines.
A
No, it's a great area of specialty and very timely. And you've made your career on good writing. A number of your essays have won awards, and I was hoping you could tell me a little bit about them. To start, I was especially touched by your essay "The Final 5%," about your family experience. Walk me through some of that early work that led you here.
B
Yeah, thanks so much for asking. That particular essay you're mentioning was probably one of the hardest things I've had to write, but also one of the things I'm most proud of. The brief story is that my brother was in a motorcycle accident, sustained a brain injury, and woke up from a coma with a personality change, which led him to get entangled with the criminal justice system and go to prison, which then initiated a downward cycle from there. This was many, many years ago, and at the same time, I was learning about the brain. It's a story that in some ways has defined how I approach writing: when I can't understand why things occur, I try to understand how. And that essay is really exploring what the limits of neuroscience are and what happens when they intersect with very messy social systems like criminal justice.
A
And that really resonated with me in the addiction field because we see traumatic brain injuries so underdiagnosed in people with substance use disorder. That's certainly true for people in forensic settings. And so to shine a light on that, I think that really impacted a lot of people. And the personal side, too. Was that difficult to write about just personally?
B
Yeah. It was an essay that was a long time coming, and I think it was my way of processing some of the events that had transpired. But, yes, I spent probably seven or eight, maybe even 10 years, off and on, writing it as events were unfolding. So certainly it was difficult, but cathartic, as they say.
A
And there's something that may be lost, actually, with ChatGPT in that process. Right. That sort of the friction that comes with writing meaningful work. ChatGPT is eliminating that a little bit. And your process is a good example of how that friction and that struggling to understand something and write really resulted in learning. And in some ways, it resulted in a writing career for you, too, it sounds like.
B
Yeah, I had certainly been interested in writing before that and had written many stories. But you're correct: if I had been in my 20s and wanting to process things, would I turn to writing? Would that even occur to me as something to do in the era of ChatGPT, or would I stick with it? So I think your point is very perceptive. From a neuroscience perspective, our brains are wired to avoid effort. We don't like effort, even though in the end it can feel good and we benefit from it; in the moment, we don't really like it. So we tend to make micro-decisions, ones we're maybe not even aware of, that have us avoid friction, as you say. And these tools are designed to exploit that, so it's kind of a self-reinforcing cycle.
A
That's right. And before we dive more into that, the other essay that stood out, which I think is also related, was the one you wrote on ultra-processed foods in Slate, even before all the recent discussion about it. You wrote an article titled "Is There Anything to the Panic Over Ultra-Processed Foods?" Again, one where you're struggling with the different dimensions that go into how we understand those foods, and again shared something personal about feeding your young child at home.
B
Yeah, I like to write from a point where there's a human element, and also to embrace uncertainty where it demands it. I have written about nutrition and other topics, and I wrote about COVID for many years as well for Slate. What I try to bring to readers is to wade into complicated evidence so you can embrace uncertainties where they exist and come to a place of more durable clarity.
A
Well, I think you definitely did that with that ultra-processed foods article. All right. And I think all of this background situates you nicely to tackle this AI dilemma.
B
Yeah, I sure didn't ask for it, but, you know, here I am, so.
A
No, your job asked for it; the way you dive into things asked for it. We appreciate your expertise in helping us understand this. And so, just definitionally, we've mentioned ChatGPT, but AI means what? Just so we know what we're talking about.
B
I mean, first, AI is a marketing term. In some ways it gets applied to everything now. But I think there's a major division between generative AI and predictive AI. Predictive AI is something that has existed for a long time, and it's involved in anything from, say, criminal justice, where they want to predict if someone's going to commit a crime again, to a credit score, where they want to predict if you're going to be a creditworthy borrower. I'm not sure if this is ever used in medicine, but one could imagine you'd want to predict specific aspects of patient care, whether a patient is going to relapse or whatever it is. So that's predictive AI, and that still goes on. But what is new, and what's newly powerful, is generative AI. That's things like ChatGPT, Claude, and Gemini. Those are more specifically language models, and they make predictions about what's the most likely word to occur next after a previous sequence of words. That's all that they really do. It's more complicated under the hood, but in some ways they're just really fancy autocomplete machines.
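[Editor's aside: the "fancy autocomplete" idea can be made concrete with a toy sketch. This is purely an illustration for readers, not anything the guest presented: a bigram model that counts which word follows which in a tiny made-up corpus, then predicts the most frequent successor. Real language models use neural networks trained on vast text, but the objective, predict the next token, is the same in spirit.]

```python
from collections import Counter, defaultdict

# Toy bigram "next word" predictor: count which word follows which in a
# tiny corpus, then predict the most frequent successor.

corpus = (
    "the brain adapts to effort . the brain avoids effort . "
    "the brain adapts to change ."
).split()

# next_word_counts[w] maps each word to a Counter of the words seen after it
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # "brain": it follows "the" every time
print(predict_next("brain"))  # "adapts": seen twice, vs. "avoids" once
```

A commercial chatbot differs in scale (billions of parameters, trillions of tokens) and in post-training steps like instruction tuning, but the core training signal is still next-token prediction.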
A
And how do you understand why they've really taken off the way they have? Have you noticed everyone using them these days?
B
So I think, first of all, a lot of people were surprised that you could do something as simple as build a machine that's really good at predicting the next word based off of the previous words it's heard, and capabilities, or apparent capabilities, emerged from that. And I think that's because there's actually a lot of knowledge encoded in language. So they can seem apparently smart, they can seem compassionate, they can seem understanding, they can seem friendly. As humans, we tend to anthropomorphize; we tend to see humanlike qualities in almost anything. You give a kid a drawing with two dots and a line and they see a face. We're just geared to see this. And these large language models sort of take advantage of that, so we see these humanlike qualities in them. But the mirage, I think, really surprised even experts at how good they were at it, and that
A
they can seem like a companion, someone to talk to that understands. And so quickly everyone wanted to take advantage of it and use it.
B
Yeah. And obviously making it a chatbot makes it very easy to use. You don't have to have coding skills, you don't need to know any special language. You can just chat with it. And I think it took OpenAI by surprise how popular it was three years ago.
A
So, full confession: I've used it and loved it since it came out. I feel like it's transformed how I work, how I think about patients, my academic output, how I ask questions on a podcast, and how I think about family relationships. Really quickly, before I could even know what was happening, it took on this large role in my life. And I've heard that from so many people. If you're not using AI, you're almost behind the game these days.
B
Yeah, I think some people have moral or ethical issues with this version of the technology, and I want to acknowledge that. There are still many people that don't use these tools, and one can make that decision on those grounds. But if you do decide to use these tools, there are other considerations. Like you're saying, they will change the nature of how you work. They will potentially change the nature of how you relate to people. And whether those changes are good, bad, or neutral is actively being studied and still up for debate. The important thing to know is that these are very powerful tools, and if you do begin using them, they will make some things in your life better, but they will probably have some off-target effects, as you might say in the medical field.
A
Right. And some of the things they can do better, I assume, are help us with our productivity, help us think through complex problems, and answer things that would take longer to research ourselves. What do you see, in general, as the benefits of this technology entering our lives?
B
You know, I think one of the biggest benefits is that it has an equity effect for people who are not as good at certain skills that are really necessary in the modern world. I'll give you an example. A friend of mine is a hairdresser. I'm in New York City, so landlords are always an issue, and she was having some issues with her landlord that she had been unable to resolve for a long time. She didn't have a lot of fluency in navigating courts and navigating landlord issues when AI came onto the scene. Once she learned a little bit of how to use it, she was able to successfully take her landlord to court and win. It's not rocket science to do that, but you have to know a few little things to make it work. She couldn't afford a lawyer, and ChatGPT helped her with that. That almost sounds like marketing copy, but the point is that you can see this across the board: it's always available, it's accessible, and so it can equalize expertise for people who could use it. Maybe that will only be a short-term benefit as these systems get more integrated into the world, but right now that's one of the things I see as probably the most useful social effect.
A
I agree completely. And how would you describe some of the risks, then, with this?
B
Yeah, so I think the risks are also pretty numerous, and it really depends on the person. For example, for younger kids, and I wrote about this recently, there's a type of toy called AI stuffies. Essentially they're stuffed animals with a speaker in them, connected up to ChatGPT, and they've been given some instructions like, talk like you're talking to a kid, and don't say certain things about X, Y, and Z, which they sort of follow. So right off the bat you have some issues: you can give a chatbot stuffy to a five-year-old, and with enough prodding it will begin talking about sexual things. Right there, that's a risk, though I feel like that one will be solved. What I think is a bigger risk is that a child can become very emotionally connected to these stuffies. Kids will talk with a lamppost, right? They'll make an imaginary friend out of anything. And here, instead of an imaginary friend, they're playing with a chatbot that is endlessly affirmative, that's never going to pose friction. How much of forming relationships, even when you're very young, is actually about negotiating what to share, what game to play, having differing needs and dealing with those? So I think the risk, and you can see it at that early age, but it applies at really all ages, especially developing minds, is that you can get an unrealistic expectation of what kind of friction is going to happen between two humans. Is that going to reduce people's desire for actual human connection? Is it going to have them retreat more into isolation? Is it going to leave them less equipped to handle social situations? That's a concern. That's one concern that I have.
I can go on and on, but that's, you know, I think that's a big one.
A
Yeah, no, for me that's core. Social media is sort of reward-driven and external-validation-driven, almost that slot-machine-like reinforcement of the likes and the way you interact with it. But AI is really that relief of the cognitive and emotional burden, and if you are relieved of that so consistently, then, well, I'm glad I got this in my 40s, because I worry, like you worry, that you're not going to be doing the hard work that's required to learn to have relationships, to understand yourself as a person in the world. And that's scary.
B
It is scary. And it may only affect a subset of people, so I don't want to overclaim that society is going to fall apart. But if these tools generate or exacerbate social issues in 2%, 3%, 5% of the population, that's a big deal.
A
And if we're looking under the hood, in terms of its impact on the brain, given some of the ways it operates, what would you say is going on?
B
Yeah, so from a neuroscience standpoint, there are a few things. You may have seen this study that went viral over the summer where people who used ChatGPT to write essays later had less neural connectivity in brain areas compared to people who did not use ChatGPT to write essays. It generated a lot of sensational headlines, but it pointed at a real thing, which is use it or lose it. That is true. If you rely on ChatGPT for tasks that you used to do yourself, you will lose the ability to do those tasks as well, or maybe at all. I think we can all intuitively understand this with GPS, right? When I moved to New York, I didn't have an iPhone, and I kind of found my way around. Very quickly after, I got an iPhone, and at this point, unless I'm in my own neighborhood, I rarely navigate the city without it. The stakes don't feel that high to me for that. I'm in a city, I will always have a phone with me, I can ask directions. It's not like I'm in the mountains orienting myself, and finding my way around is not core to my identity. But when we're talking about offloading judgment, decision-making, memory, emotional connection, the magnitude of concern is a step change. So when you start to offload those things and start to lose the ability to do them, what's left?
A
Right. What is left? What does that result in?
B
I mean, I think that is an open question. But this is something that I've spoken about before with other clinicians, which is that it resembles something like dependence at some point. I have seen that pattern in students and colleagues, where they set an intention not to use AI for a certain purpose or in a certain way, they violate that intention, they feel distress about it, but then they do it again. And I see that happening. I don't know what you call that, but it's clearly a problem, right?
A
Dependency, or it fits some of the definitions we use for different behavioral addictions. If you're offloading so much to it, you depend on it, and I imagine almost a withdrawal state if you don't have it. That's concerning. Tell me some of what you're seeing in your students as a consequence of this technology.
B
Yeah, I'll first say that I teach graduate students, so it's already a selected group, and I think the issues are much more varied and complicated in undergraduate students and even high school students. But in graduate students, even in that motivated, intellectual group, we're already seeing in a subset, a small group of people, something that, again, I'm not a clinician, but colloquially feels like dependence. It's a slippery slope. Oftentimes people will start relying on something like ChatGPT for a task no one would really argue with. Maybe English isn't your first language and you're just using it for usage. I would see that. Then it's to make things a little clearer. Okay, maybe that's not such a big deal. But pretty soon it's starting to write a paragraph, it's starting to write this and that. And these are scientific documents that the authors really care about, not just boilerplate emails. That goes on, and then people get to the point where they can't stand in front of a room and give a presentation without their phone in their hand, without ChatGPT on their phone. They can't go into a meeting without asking ChatGPT what to expect, what they should say, what they should do. And I want to add that they feel distress about this. As they report it, they don't feel that this is serving them, but they also don't really feel like they can stop at this point.
A
And that sounds a lot like addiction: you really want a behavior to stop, but you can't, the stakes feel increasingly high, and that causes distress. We hear a lot about the mental health struggles that evolve with social media, and some of the wormholes people go down on AI with mental health. But maybe use in and of itself, independent of what we're using it for, just that cognitive offloading and that stuckness feeling, could lead to mental health or other substance use issues, potentially.
B
That's true. And again, I don't know of any evidence on this yet. I think it might just be too early to say. But you're right: if it engenders feelings of despair or distress, that could lead to other issues.
A
And I've noticed myself trying to check the behavior. ChatGPT came out with this browser, and instead of just consulting it about my emails or my work, the browser integrates ChatGPT into everything I'm doing. And I was like, whoa, I've lost something here, and I had to delete it just to make sure I'm keeping up the rigor and learning the way I want. You almost need some checks in place to keep you from going that route.
B
Yeah, I think that's a great point. And unfortunately, at this stage, it's really up to individuals. There's very little regulation or thoughtful deployment of these tools; they're just being stuffed into every corner of our lives, and it's up to people to resist them. So on the one hand, I kind of understand the stance of, if you have concerns, just don't go near them. But the truth is that they do offer a lot of advantages. For example, I use them for many coding tasks that I have to do during research, and because I don't really care, as long as I have ways to check it and make sure it's working, I will happily have a bot code for me. But when it comes to writing, I am way more cautious, and I am cautious in two ways. The first is that I have some pretty strict process rules: I do first drafts myself, even though it's not pleasant, because I feel that's the kind of productive struggle that benefits me in the long run. The other thing, and I think this is an interesting corollary, is that I try not to use them for tasks I care about after about 4 p.m. People might say, well, that's a little arbitrary, that's a little strange. But here's the thing: it's really difficult to monitor your own cognition well enough to decide, oh, I'm using this for a purpose that's going to serve me, or a purpose that's going to undermine me. We're very bad at that generally, and we're especially bad at it when we're tired or under pressure or under stress. I know that's not a great time for me to be using these tools, because I will just overdo it. So, to the best of my ability, I try to plan my life so that if I'm going to use it for, say, feedback on a piece of writing, I'll do that when my defenses are at their highest.
A
Right. And in some ways you're describing the same triggers we talk about in addiction all the time, the hungry, angry, lonely, tired triggers, and also describing moderation management strategies: how to use something in a way that doesn't turn into risky use or a use disorder.
B
Yeah. And in some ways that's what's interesting to me: whether we call this addiction or dependence or problematic use or whatever, some of the same tools that are used in those fields are probably going to be useful for managing people's relationships with this technology.
A
And you're approaching a definition of what's healthy versus unhealthy AI use, I guess, and setting some parameters, so you can say: this is healthy, and this is where it goes into riskier territory. Do you find that students and other people are accepting of those parameters? Do people tend to agree when you talk to them about it?
B
No, I think there's actually variance in that. Because there aren't strong social norms around what constitutes healthy or unhealthy use, that too is in the eye of the beholder. I think some students, and some people generally, are far more comfortable offloading or relying on these tools, and others feel much less comfortable with it. I will say, among my younger students, I take a survey each year, and if we assume a certain degree of truthfulness, a large chunk of students do not use these tools. Maybe they'll use them for coding, but that's it. That may change. But I can say that there are already factions, or distributions; there's a wide variety in people's stances and attitudes and approaches to this, just like there is with many other technologies, or even substances.
A
How do you think it is going to impact education going forward?
B
Yeah, I think the effects on education are going to be complex. And I want to start by saying that the evidence so far is pretty clear that unfettered access to commercial chatbots tends to undermine learning. At the same time, AI chatbots that are designed specifically to scaffold and help people actually do help learning. So both of those things are true: AI is helpful and harmful to learning. The question then becomes, if you're a student and you only had access to institutionally issued learning bots, maybe AI would be basically a force for good in education. But when, in your next browser window or the next app over or on your phone, there's ChatGPT that will do anything you ask of it, that's asking a lot of students, to resist using it on some principle or some norm. So I think in the current state, the net effect is probably going to be negative, even though it does have some positive upsides.
A
You know, we've heard a lot of concern about people in writing, like yourself. What have you noticed in terms of that, as chatbots increasingly put out a lot of the content we read online?
B
Yeah, there is a small part of my soul that dies, because something that I've spent so long trying to perfect can now be done at 85% of that level, or whatever, by anyone. As of now, I'm not concerned that chatbots will ever replace the upper echelon of writers. There have been studies that have come out: any way you measure creativity, chatbots are very creative by many measures, but they tend not to reach the highest levels. Maybe that will change one day, I don't know, but for now it's not like that. Regardless, I think that's kind of missing the point, because it'll drive down the economics of being a writer, which were already pretty dismal. And if that stops being a viable career path, writing will stop being something that people do for money, or it will be restricted to people who are independently wealthy or can afford it, which it kind of already was. So I think it will greatly affect writing. It will probably affect people's taste in writing, too. It's like the Marshall McLuhan dictum: the medium is the message. I think chatbots and AI will shape the way that people ingest information and what they expect out of it. So I think it will definitely have an effect on many of the arts. And I don't love that.
A
Yeah, no, me neither. It creates a lot of fear for me, actually, and for my kids as they grow into this world. Having valued writing and research and viewed them as core to my identity, it does feel like something's dying when that's taken away. Yeah.
B
And I think this brings up these very deep values questions. It is a deeply human act, say, to create music. I'm a musician, and I love to play, and if robots and AI could perfectly play music, that wouldn't really matter to me, because I like to do it. There are some things that are just things humans like to do. Humans are curious. Humans like to create. Those are basic human urges, and they may manifest in different ways moving forward, but they are urges that we will need to satisfy. Right.
A
And in some ways, that's probably one of the antidotes to the addictive potential of these AI devices is to still have things in your life that are creative in which you're using your brain. And maybe it's not as much writing, but there's still plenty of ways to be creative and interact with folks that we have to make sure we don't lose, as these devices are so consuming.
B
Yeah. And again, I think there's probably room for heterogeneity and diversity in society. There are going to be some people that use these tools a lot and some people that maybe don't use them at all, at least for a while. And that could be a good thing. Not everybody has to adopt the same philosophy around these tools. I also think it's worth stating that writing is a useful cognitive tool. Even if I can have AI write something for me, I don't learn the material as well, I don't make the connections as well, and so my mind doesn't develop as well. A lot of times the purpose of writing, and you especially see this in education, but I think it's also true in many intellectual fields, medicine included, is that you understand through writing. The product is not the piece of writing so much as the changes in your brain. That's what happens in high school: we have people write an essay not because anybody wants to read the essay, but because we value the changes that writing it makes in the high schooler's brain. And if that's the purpose of writing, then we should keep doing it. It's an intellectual tool, and a very powerful one, and I think it would be unwise to abandon it simply because a technology can do it in a mediocre way.
A
And as I said earlier, that's why I'm glad I didn't have these tools until I was in my 40s: because I built those skills, and now I know how to use AI as an assist in the right ways and decrease the use when I notice it taking away some of what I value. We really have to protect that in youth, so that they know the difference between what it feels like with and without these devices. Yeah.
B
And I think if we've learned anything from all of the technology that is in schools now, it's that it can sometimes undermine the very purposes that education aims for. That's what makes it complicated. In an ideal world, where educators or whoever could come to an agreement about what deployment of these tools is beneficial, if we had institutional gatekeepers of some kind, then we could have this conversation. But when these are widely available to everybody, it's really not a conversation we can have, because high schoolers and college kids are going to do what they want.
A
Exactly. And we say the same about substances. Right. In some ways we're encouraging them to experiment, but to be wary if it turns into addiction, if it's hijacking the brain and they're missing out on core developmental experiences. And then to really value sober thinking and sober being, or being without AI, so that you can grow and mature in ways you won't be able to if AI is taking some of that emotional and cognitive load from you.
B
Yeah. And I mean, I think norms are powerful, and they can take hold over time. They very well may take root in the younger generations, but it's going to be an uphill battle because the tools are widely available and being pushed on everybody all the time.
A
Well, Tim, it was great talking to you today. The grad students at NYU are very lucky to have you guiding their academic development. And I'm sure a lot of us will take your wisdom and try to apply it to the people in our lives as well. So I appreciate you discussing all this with me.
B
Thanks, John. It was great to be here. Thanks so much for having me.
A
Thanks for listening to the Thriving With Addiction podcast. If you found today's episode helpful, please follow and subscribe wherever you listen to your podcasts and share it with someone who might benefit. You can also connect with me on Instagram, LinkedIn and YouTube or visit thrivingwithaddiction.com to learn more. Stay tuned for next week's episode. And remember, thriving is possible.
Podcast: Thriving with Addiction with Dr. Jonathan Avery
Host: Dr. Jonathan Avery
Guest: Tim Requarth (Neuroscientist, Writer, NYU Professor)
Date: March 10, 2026
In this insightful episode, Dr. Jonathan Avery speaks with neuroscientist and acclaimed writer Tim Requarth about how artificial intelligence—especially generative AI like ChatGPT—is changing the way we think, learn, and relate to others. Drawing from both neuroscience and personal experience, they explore AI’s influence on cognition, emotional health, social interaction, education, and even dependence—paralleling the challenges faced in addiction and recovery. The conversation is candid and relatable, offering nuanced perspectives on the promises and perils of AI as it becomes woven into daily life.
Definitions: Predictive vs. Generative AI ([08:11]–[09:40])
Why Generative AI Took Off ([09:40]–[11:18])
Benefits of Generative AI ([12:54]–[14:53])
Improves productivity and levels the playing field by making specialized knowledge accessible.
Equity case example: A non-expert successfully navigates a court case with ChatGPT’s help.
Quote: “ChatGPT did what you couldn't afford a lawyer to do.” – Tim Requarth [13:44]
Risks for Young People & Emotional Development ([14:59]–[17:52])
AI-integrated toys and chatbots can blur the lines between real and artificial relationships, creating unrealistic expectations of social interactions.
The absence of “friction” (difficult conversations or negotiation) in AI interactions can impair the development of critical social skills.
Quote: “How much of forming relationships is actually about... negotiating what to share, having differing needs, and dealing with those?” – Tim Requarth [15:53]
Dr. Avery voices concern over long-term effects:
“If you are relieved of that [cognitive and emotional burden] so consistently... you’re not going to be doing the hard work that’s required to learn to have relationships, to just sort of understand yourself as a person in the world.” [17:37]
Cognitive “Hijacking” and Dependence ([18:13]–[22:50])
Use-it-or-lose-it principle: Reliance on AI diminishes brain connectivity for tasks offloaded to machines. Parallels drawn to GPS reliance.
User experiences mimic behaviors of dependence or even addiction; students struggle to function academically and socially without AI assistance.
Quote: “They can't stand in front of a room and give a presentation without their phone in their hand, without ChatGPT on their phone. They... feel that this is not serving them, but they also don’t really feel like they can stop at this point.” – Tim Requarth [21:45]
Potential for Mental Health Impacts ([22:50]–[23:40])
Moderation Strategies and Triggers ([24:07]–[26:42])
Tim shares personal strategies, such as only using AI for specific tasks or avoiding it after 4pm, comparing triggers for overuse to those in addiction (“hungry, angry, lonely, tired”).
Moderation techniques, similar to addiction management, can help mitigate overuse.
Quote: “It’s really difficult to monitor your own cognition... We’re especially bad at that when we’re tired or under stress.” – Tim Requarth [25:37]
Healthy vs. Unhealthy AI Use ([26:42]–[28:01])
The need for boundaries and process rules is emphasized, especially as AI becomes integrated “in every corner of our lives.”
Social norms on healthy AI use remain unsettled, leading to broad variations in people’s attitudes and behaviors.
Impact on Education ([28:01]–[29:24])
Studies indicate “unfettered access to commercial chatbots tends to undermine learning,” while targeted, educational chatbots can enhance learning.
Splitting educational use from generic use is identified as a challenge given widespread access.
Quote: “AI is helpful and harmful to learning.” – Tim Requarth [28:24]
AI’s Effect on Writing and Creativity ([29:24]–[32:36])
AI is unlikely to replace the most creative writers, but it already undermines the economics and cultural value of writing as a profession.
A human’s need to create—be it music, writing, or art—remains vital, even if the “product” can be simulated by AI.
Quote: “I like to do it [make music]. I think there are some things humans just like to do... those are basic urges that we will need to satisfy.” – Tim Requarth [31:53]
The true value of writing, especially educational writing, is the intellectual transformation it offers the writer—not just the polished product.
Quote: “The product is not the piece of writing so much as the changes in your brain.” – Tim Requarth [33:38]
Societal Development and Formation of Norms ([33:57]–[36:16])
Social norms around technology adapt slowly, and the ubiquity and marketing of AI will make it an “uphill battle” to establish healthy boundaries.
Quote: “Norms are powerful... but it’s going to be an uphill battle because the tools are widely available and being pushed on everybody all the time.” – Tim Requarth [35:55]
| Topic | Timestamp |
|-------|-----------|
| Tim’s backstory and “The Final 5%” essay | 01:29 – 05:30 |
| Defining Generative AI & Its Rise | 08:11 – 11:18 |
| Benefits: Equity & Productivity | 12:54 – 14:53 |
| Risks: Children, Relationships, Social Skills | 14:59 – 17:52 |
| Cognitive “Hijacking” and Neuroscience | 18:13 – 22:50 |
| AI Dependence in Students | 21:11 – 22:50 |
| Moderation Strategies and Triggers | 24:07 – 26:42 |
| Healthy vs. Unhealthy AI Use | 26:42 – 28:01 |
| Impact on Education | 28:01 – 29:24 |
| AI’s effect on Writing and Creativity | 29:24 – 32:36 |
| Societal Development and Formation of Norms | 33:57 – 36:16 |
This episode delivers a nuanced, science-informed exploration of AI’s impact on our brains, behavior, and society. Tim Requarth and Dr. Avery bridge neuroscience, education, and everyday life to suggest that AI’s “hijacking” is less about machine takeover and more about the subtle, cumulative offloading of effort, skill, and authentic human connection. Their discussion is both a warning and a guide: AI offers great potential, but only when integrated with awareness, boundaries, and the preservation of expressly human forms of growth and connection.