Indeed Sponsor
Support for this show comes from Indeed. When the pressure's on and you need to hire the right person for the job, Indeed Sponsored Jobs has your back. Sponsored Jobs posted directly on Indeed are 95% more likely to report a hire than non-sponsored jobs. Join the 3.3 million employers worldwide that use Indeed to connect with quality talent that fits their needs. Spend less time searching and more time actually interviewing candidates who check all your boxes. Less stress, less time, more results when you need the right person to cut through the chaos. This is a job for Indeed Sponsored Jobs. And listeners of this show will get a $75 sponsored job credit to help your job get the premium status it deserves at Indeed.com/podcast. Just go to Indeed.com/podcast right now and support the show by saying you heard about Indeed on this podcast. That's Indeed.com/podcast. Terms and conditions apply. Need to hire? This is a job for Indeed Sponsored Jobs.
Windows Sponsor
Study and play come together on a Windows 11 PC, and for a limited time, college students get the best of both worlds. Get the Unreal College Deal: everything you need to study and play with select Windows 11 PCs. Eligible students get a year of Microsoft 365 Premium and a year of Xbox Game Pass Ultimate with a custom-color Xbox Wireless Controller. Learn more at windows.com/studentoffer. While supplies last; ends June 30th. Terms at aka.ms/collegepc.
Indeed Sponsor
When you need to build up your team to handle the growing chaos at work, use Indeed Sponsored Jobs. It gives your job post the boost it needs to be seen and helps you reach people with the right skills, certifications and more. Spend less time searching and more time actually interviewing candidates who check all your boxes. Listeners of this show will get a $75 sponsored job credit at Indeed.com/podcast. That's Indeed.com/podcast. Terms and conditions apply. Need a hiring hero? This is a job for Indeed Sponsored Jobs.
Ed Elson
Today's number 100,000. That's how many dollars it reportedly cost to get one ticket to the Met Gala. Meanwhile, the price of a table was $350,000, or as attendees call it, half a facelift.
Intro montage
Money, markets, matter.
If money is evil, then that building is hell.
The show goes on.
And the sell, sell.
Ed Elson
Welcome to Prof G Markets. I'm Ed Elson. It is May 12th. Let's check in on yesterday's market vitals. The major indices all climbed, led by a rally in chip stocks. The S&P 500 and the Nasdaq both hit new records. Those gains came despite President Trump's rejection of Iran's proposal to end the war; he also said the ceasefire was on, quote, massive life support. Brent crude climbed higher as hopes for peace faltered, and the yield on 10-year Treasuries rose. Okay, what else is happening? Since the 1800s, every generation has been smarter than their parents, except for Gen Z. That is what neuroscientist Dr. Jared Cooney Horvath told Congress last month. Today, 90% of college students and 84% of high schoolers use AI in class or for their homework. And according to OpenAI's own data, one of the most common use cases for AI is writing. Meanwhile, a recent study found that AI tool usage among business students was associated with weaker critical thinking skills. This data raises an important question: what do we lose when we outsource our work and our thinking to AI? After all, 900 million people use ChatGPT every week. In other words, is AI making all of us dumber? Now, you might remember that we discussed this question last week, and we've been investigating it a little bit more. But today we want to bring in two experts who are thinking about this, who understand these issues. So we're going to do something a little bit different: we are going to move away from the markets for today and focus on this question. We're joined by Cal Newport, professor of computer science at Georgetown University and New York Times bestselling author of eight books, including Slow Productivity and Deep Work. And we've also got Derek Thompson, host of the Plain English podcast and co-author of Abundance. Cal and Derek, thank you so much for joining us. Welcome to the show.
Cal, I'm going to start with you because you have written about this and you've talked about this idea of cognitive fitness and this potential reality that it's in decline. What do you make of what's happening on the ground in terms of AI usage and what it's actually doing to our brains?
Cal Newport
Well, I think AI has the real capacity to make us dumber. It's new enough, and usage of it is still growing, that we're not seeing the major effects yet. But I fear we are going to see them. The way I conceptualize this world of cognitive fitness is that social media and highly engaging tools on our phones started this trend: they moved us away from the more sustained, concentrated activities through which we strengthen our brains. AI is now taking aim at the other main cognitive activity that makes us stronger, which is writing. One of the major emerging uses of this tool is to alleviate the strain you feel when you look at a blank page and have to fill it. So if AI does, in fact, significantly reduce the amount of writing we do, whether it's super important or just a memo, I do think we're going to see a continued diminishment of our intelligence that began with highly distracting phones about 10 years ago.
Ed Elson
We'll get into what we do about this. But, Derek, do you agree with Cal?
Derek Thompson
Yeah, of course. Of course he's right. Maybe we'll explore some disagreements between me and Cal in a few minutes, but on this, I think he's hit it right on the money. I mean, if you doubt what Cal is saying and you use AI, pay attention to your own life. Pay attention to your own use of time. When you ask artificial intelligence to summarize an article, or a paper, or, God forbid, an entire book, do you understand that article, that paper, or that book as well as if you had read it? Of course not. Okay, now maybe you could argue: well, I saved time, because rather than read that one book, which might have taken me 10 hours, I can summarize 15 books, and that'll take me 10 hours to process or something. But even there, you're engaging at such a shallow level with each book that I'm not sure you really understand the degree to which they agree and disagree with each other. And consider what else you're depriving yourself of. If you can't read anything for more than five or 10 minutes at a time, you lose a skill that leads, over time, to the ability to make the sort of deep connections that I think are the basis of all true insightful thinking. So I absolutely think the risk here has at least two layers. One, you're depriving yourself of the experience of truly understanding something that you think you're trying to understand. And two, you fall out of a habit that is necessary to think deeply in the future. And to go back to Cal's first point, since I'm going achronologically: we're looking at things like the Flynn effect, and we're looking at things like test scores over time.

Well, if we're depleting the ability of fifth graders and sixth graders to think, and they continue to use AI in seventh grade and eighth grade and through 12th grade and through college, that's not just one year of losing the practice of deep reading and deep thinking. Now we're talking about a decade, a formative decade, in which you've chosen to essentially not work on the kind of muscles (I do like this fitness metaphor) that are so necessary in the long run for understanding something deeply, for being smart about it. So absolutely, I think Cal's onto something.
Ed Elson
I mean, it seems like there are two main forces we're identifying here. One is the screen in general and our increasing addiction to those screens. The other is AI and our dependency on it to solve harder, more nuanced, more difficult problems. Cal, going back to you: which is the more dominant force? Or is that even a relevant question right now?
Cal Newport
Well, the biggest impact so far has come from a decade of hyper-optimized engagement on a portable device that we have with us at all times. That has had a massive impact. Essentially what happened is that the machine learning algorithms behind short-form video platforms in particular built an approximation of the short-term reward centers in our brains, so they could deliver exactly the signal that resonates most strongly with those particular circuits. This makes the phone essentially irresistible: when it is with me, I have to take it out, I have to look at it. So over the last decade or so, that has done substantial damage to multiple generations' ability not just to sustain attention, but, again, to build the circuits you use to think deeply when the time requires it. Those circuits are built through the activities of reading and writing, privileged activities in the history of modern, post-Paleolithic humanity. AI is new on the scene, but I really feel it's going to be a catastrophic cousin to what we were already encountering with hyper-engaging content on a screen. That earlier assault really focused on reading: we no longer sit and concentrate on a book in a way that builds deep understanding. Writing is the pair to reading. Writing is where we take the circuits we etched with deep reading and apply them in reverse to create original thoughts of our own. We have to practice that muscle as well. And now, for the first time, we can begin to substantially outsource that activity. So I really think about reading and writing as the key activities here. This is not nostalgia. This is not, oh, we're talking about horse buggies in an era of automobiles. I really do think those are the activities on which the post-Paleolithic modern human brain was built. The brain that gave us theology, that gave us politics, that gave us philosophy.

The brain around which everything we hold dear was built depended substantially on reading and writing to shape it. So I'm really worried about what we've already lost with reading, and that we now have a new tool that's going to start to take writing off the table as well.
Ed Elson
It sounds like the argument you're making, because someone might push back and say: every technology in history has made our lives easier in some capacity. You invent the engine, you invent the car, it makes it easier to get around. And the argument would be that this makes it easier to do the job of critical thinking, in the same way that other technologies make other jobs easier. Cal, it sounds like what you're saying is that this is different, that critical thinking is on a different level, so essential to what it means to be human that this is actually a bad thing, unlike other technologies. Would that be the right characterization?
Cal Newport
Yes, I think that's right. If we use the fitness analogy: reading was a great technology to make us better at critical thinking. Writing was a great technology to make us better at critical thinking. But using something like AI is like bringing a forklift into the gym and saying, you know, we've been in here for years using weightlifting to try to get stronger; well, I figured out that with a forklift it'll be a lot easier, I don't have to lift the weight myself. You're actually being counterproductive to the real goal, which is strengthening the cognitive muscle. So no, I don't think this is a technology that's making us better at critical thinking. It's allowing us to sidestep the hard activities we previously used to make our brains stronger. The benefit being sold by this product is convenience in the moment, not a stronger brain or a stronger ability to think.
Ed Elson
Stay tuned for more of this panel right after the break. And by the way, we are heading out on tour at the end of the month. So for more info and to get tickets to a show near you, head to profgmarketstore.com.
Hostinger Sponsor
Support for the show comes from Hostinger. The biggest barrier to entry for most entrepreneurs isn't a lack of capital; it's the friction of starting. You can spend months in the strategizing phase, which is precious time that could instead be spent actually making moves. But these days, the rules have changed: AI is redefining who gets to build a business. So when you're building the next big thing, go live in minutes, not weeks, with Hostinger. Hostinger is an all-in-one platform that brings everything into one place: your domain, website, email marketing, AI tools and AI agents, so you can launch online without stitching together five different subscriptions. Start with a prompt and add your personal touch. You can create websites, online stores and custom apps without coding or design skills. Then use AI agents to automate tedious tasks and grow your business. Hostinger powers over 10 million websites, and there's a reason it earned a CNET Editors' Choice award. Turn your one day into day one. Go to hostinger.com/theprofg to bring your ideas online for under $3 a month, plus get an extra 20% off with promo code THEPROFG. That's less than the price of a cup of coffee per month. That's hostinger.com/theprofg, promo code THEPROFG, for an extra 20% off.
Depop Sponsor
You tell yourself no one wants your college-era band tees, but on Depop, people are searching for exactly what you've got. You once paid a small fortune for them at merch stands; now a teenager who calls them vintage will offer that same small fortune back. Sell them easily on Depop: just snap a few photos and we'll take care of the rest. Who knew your questionable music taste would be a money-making machine? Your style can make you cash. Start selling on Depop, where taste recognizes taste.
Disney+ Sponsor
Where is Daredevil? I'm right here. Don't miss the return of Marvel Television's Daredevil: Born Again. So what's next? I feel liberated. Unmedicated. We're gonna take this city back. An all-new season, now streaming only on Disney+. They're hunting us. It's time we started hunting them. I can work with them. This should be tons of fun. Marvel Television's Daredevil: Born Again, now streaming only on Disney+.
Ed Elson
We're back with Prof G Markets. So, if we all agree that this technology is making us dumber, and I'm not sure who disagrees with that at this point, it seems pretty clear to us. Derek, let's model this out, game-theory it out: where does this go in terms of the economy? If we are dependent on AI, but none of us can really come up with original ideas and we can't think critically about issues, do you think that steers the trajectory of our economy in perhaps a different direction?
Derek Thompson
Let me try to take this question at a really high level of abstraction, and then I'm going to zoom in on some specifics. I think that technology is use: the effect of AI is exquisitely dependent on how we use it. If you look at how artificial intelligence was recently employed by the Mayo Clinic in radiology to see pancreatic cancer, on average, 2.4 years before a doctor could see it in a scan, you cannot possibly argue that that is AI making people dumber.
Ed Elson
Yeah.
Derek Thompson
That is clearly making us better, smarter as a species at seeing pancreatic cancer. The use of artificial intelligence there is to supplement the human radiologist's eye in seeing pancreatic cancer, and that is obviously good. So I don't want to represent my opinion here, and maybe Cal agrees, as being "all AI is bad." But that's not the way artificial intelligence is being used in high schools and colleges. It's being used to cheat, and to cheat at a scale that is keeping students from learning how to learn. So I am very optimistic about how this technology is being employed in some industries, while at the same time I think Cal is absolutely right that if you look at the use of artificial intelligence in high school and college, I see practically no reason to be optimistic about that generation's ability to learn, to think deeply, to write by the time they graduate. So, technology is use. There are some wonderful use cases of artificial intelligence, but within the education system today, I think it is basically a tool for mass cheating that is, in fact, cheating students out of the ability to think in the long run.
Ed Elson
Yeah, you bring up an important point here, which is that we should probably distinguish who is getting dumber because of AI. And the reality is we're mainly talking about children, people who are in school, even high schoolers, who are using AI to do their homework, to cheat. And we're seeing, as you mentioned earlier, that math scores are going down, science scores are going down, standardized test scores are going down, even literacy rates are going down. So it sounds like maybe the point on which we would all agree is that AI has fundamentally transformed what it means to go to school. And that is the point that perhaps needs further and deeper exploration, deeper discussion, and perhaps some regulation. Derek, if this means that everyone cheats now, what do we do?
Derek Thompson
Yeah, if I were going to write a magazine piece about this, I think the way I would frame it, and I really like Cal's framing, so I'm borrowing this from him, is that for the last 10 to 20 years, we've been running this experiment of distraction in our schools. We have very clear correlative, and I think causal, evidence suggesting that phones are an enormous distraction that's responsible for the global, not just U.S. but global, decline in math scores, in literacy scores, and in other measures of one's capacity to maintain attention. Now, on top of this weapon of mass distraction, you add artificial intelligence, an extraordinary tool for synthesizing information that allows students to cheat at an extraordinary scale, which we know is happening in colleges and high schools. If you want to fix that, if you want to fix this weapon of mass distraction followed by this weapon of mass cheating, you have to solve it directly. Take the phones out of the classrooms, put them in pouches, run that experiment, certainly, to see if it works. And then, when it comes to testing knowledge, you have to move away from modes of testing that can be cheated toward modes that can't. One thing that can't be cheated is something a little more like the Oxford model, where most of the grade depends on in-class oral exams. You have this system, or culture, where you take the history class and learn about the Habsburg Empire, and rather than write an essay about the Habsburg Empire, which, much more likely, you'd just ask ChatGPT to write for you, you get up in front of the class and talk about the Habsburg Empire and the Holy Roman Empire, and people ask you questions, and you defend and prove your intelligence to the classroom, to the teacher. It's a little bit like this: my wife just finished her PhD in clinical psychology.

At the end of the PhD, what's the verb we use to describe the end of the PhD? You defend your dissertation. You get up in front of a group of experts, and you don't just give them the paper and say, read it, then give me my degree. You defend it. They ask you questions. They say, what about this methodology? What about Figure 1? And you say, well, here's why I used that methodology, and here's why Figure 1 looks the way it does. You prove in real time that you are the author of that paper, that you understand the work you did. And I just think that if we really want to get around the cheating epidemic, more of education probably has to absorb this Oxford model, this dissertation model, because it's much harder to cheat in an oral exam.
Ed Elson
It's a really interesting point. Cal, do you agree?
Cal Newport
That has to be right. I mean, this is what's happening in academia right now. It's a combination of the Oxford model and what I've long been advocating for, which is the explicit discussion and promotion of the ability to aim your mind's eye at complicated topics as the goal of school. It's something we should be talking about starting in grammar school and moving all the way through the university system: we are here not just to get content and reproduce content on tests, but to teach our minds to be comfortable thinking. That's a frame through which to see almost every activity we do. I would also add that specificity is really important here, so I'm going to throw in a sort of specificity constraint. What we're really talking about, if we're going to use my terminology: "AI" is the wrong term; it's way too broad. It includes things like the Mayo Clinic model Derek was talking about. That model, for example, has nothing to do with a large language model like the type produced by the frontier AI companies. Right? It's a prediction model that's custom-trained on labeled data sets of radiology scans. We've been doing this since the 90s, and we're making slow and steady progress. These sorts of AI models, very utilitarian and useful, aren't new and aren't currently experiencing a massive exponential takeoff in capabilities. But often the frontier AI companies will launder the results from these non-LLM models and mix them in with what they're doing. What we're really talking about here is large-language-model-based tools, in particular using them for the production of written text, or in some sense to aid thinking. And that's exactly where we get all the problems in the academic setting we've been discussing.
Ed Elson
How big of a problem do you consider this on a national economic scale? Because there's one side of this which is: we want to protect our kids, it's important that our kids have fulfilling, interesting school experiences and get a good education, et cetera, which I'm sure we all agree on. But there's another side, which is that we need children to have functioning brains for when they eventually lead the nation. And there might even be a China-versus-USA argument here: if students on the other side of the planet are being trained properly, their AI chatbots are being regulated properly, and they know how to use their brains, doesn't that mean that 50 years down the line they're going to beat us and outperform us on every metric? Is that an argument you see as relevant or important, Cal? Is that something that comes up in your conversations on this topic?
Cal Newport
Well, I have a relatively radical view on this. I'll be interested, you know, Derek is the economics expert here, so I'll be interested in his take on it. But I argue we have already seen the economic impact of this reduced cognitive fitness. This has already been a major storyline of the last 10 to 20 years. Given the technological advancements we've had at the intersection of the digital and the office, we should be seeing total factor productivity exploding, especially in non-industrial sectors like the knowledge sector. And we haven't. Right? A lot of different things have been playing into that, we had the economic crisis and other things going on, but total factor productivity in non-industrial sectors has been flatter and more uneven than you would expect. And I would argue this is already, in part, a result of massively increasing the distraction and context switching in our lives and in the workplace. I think one of the most telling statistics about the current office is that the average worker now checks an email inbox or chat channel once every three minutes. That is a disastrous cognitive context in which to use your brain to add value to information, which is the core activity of knowledge work. So I already think we're seeing a flatlining, sometimes called the productivity paradox of the 2000s and 2010s, because of this impact on cognitive fitness. So yes, if we go farther down this road and use LLM-produced writing to take that important strengthening activity off the plate in our educational system, it's not just about kids' brains and some abstract notion of "smartness equals good." I think the economic impact we may already have been feeling for 10 or 20 years is just going to get way worse. And it is something we really do have to care about from a national perspective.
Ed Elson
Derek, what are your views on these economic impacts?
Derek Thompson
Yeah, as I was listening to you and Cal talk, two different statistics popped into my head that juxtapose interestingly. One is that there are a lot of indications that Gen Z is the most materialistic generation we've ever seen in American history. If you ask various groups, bucketed by generational cohort, how much money counts as success in America, about $150,000 tends to be the norm in most generations, until you get to Gen Z, who say it's $400,000 to $500,000. The Institute for Family Studies recently looked at a Monitoring the Future survey that asked various questions about materialism among boys and girls in high school, and that line of materialism has just gone up and up and up. And I think, for the first time in the last 30 years, women are now higher on a certain measure of materialism. So on the one hand, you have this extraordinary desire among young people to be successful. They open their phones, they look at influencers, they see rich, successful, beautiful people living their rich, successful, beautiful lives. That's one train track coming along here. But there's this other parallel train track, and that is students cheating constantly in high school and college. In the short run, if you cheat on every test, you're cheating the test. In the long run, if you cheat on every test, you are cheating yourself. You are removing from yourself the ability to lift the weight. And if you want to be rich and successful, I myself know of absolutely no individual who is rich and successful who doesn't work unbelievably hard, who isn't very good at what I think of as cognitive time under tension. That is, to extend the fitness metaphor: if you do one rep of 150 pounds on a bench press and it takes one second, that's a certain amount of resistance.

But if you make that a 5- or even 10-second up and down, there's much more tension on the muscles. That's time under tension. And I think thinking has a similar principle: really great ideas benefit from the ability to sit with those ideas for a long period of time, to figure them out, to find the simplicity that, as Oliver Wendell Holmes said, is on the other side of complexity. You learn about something, and then, through your learning, you are able to make it simple and make it effective. If you are cheating yourself out of all these tests, you're cheating yourself out of the ability to become rich and successful. So one thing I'm afraid of is not just that these people who are cheating are going to lose out to the Chinese, or whatever, the Finns or the Danes. Maybe they are, maybe they aren't. They're going to lose out to people who can think, who are doing the work, who can sit with ideas, who do have, and are building, cognitive time under tension. And so I think a world in which you have a generation of people with extraordinary expectations of material success but underdeveloped abilities to actually achieve that success is a world in which you're setting up a generation for unbelievable disappointment, anxiety and depression. So this goes, I think, not just to the concept of national greatness, US versus China, although maybe it touches on that. It goes to: what do we want from our lives? What should people who want to be rich and successful want from their lives? They should want the ability to sit, the ability to sit with discomfort, to work hard, to enjoy complicated problems, to love thinking through them, because that's where your money is made. If you lose that, you really lose out on the ability to achieve what is, in effect, the new American dream.
Ed Elson
I guess the reason I'm so interested in the economics angle is that I feel like the argument against what we're saying is this Luddite argument: that you're anti-technology, anti-progress. And the thing that really resonates for me, to your point, Derek, is this: if you have a generation of people who have been trained since infancy to take shortcuts, to not sit with ideas, to not work hard, to just scroll and live a fleeting, imaginary version of success, and you never actually build the tools or the abilities to go out there and achieve it, then ultimately we'll have an entire generation, an entire nation, of basically lazy, non-thinking losers who can't get anything done, who can't come to a consensus and make decisions and build things. And I just wonder if that is the argument that needs to be made to those who would push back. There are certainly going to be people out there who would say Cal is just afraid of technology, Derek thinks AI is bad, they're anti-progress, anti-innovation. And I wonder if they're missing something, a productivity angle, which is that if you have a generation, an entire society, of dumb people, then just economically speaking, GDP is going to go down. I feel like that's the only outcome. Derek, does that resonate?
Derek Thompson
I guess I don't consider myself a Luddite, and I think I'm probably more positive about large language models as a technology than Cal. I want to be very clear about what it is that I think is bad, and I think here Cal and I don't just have intersecting Venn diagrams; it's the same Venn diagram. What I think is bad is not artificial intelligence. What I think is bad is using artificial intelligence to do the thinking for you and then representing as your own thinking the synthetic information you got from artificial intelligence when you prompted it. That is what is cheating. That is definitionally cheating. And my point is that in the short run, when you cheat, you are cheating the task. But in the long run, when you cheat, you are cheating yourself, because work is one damn task after another. And if you lose the ability to be comfortable with what I'm calling cognitive time under tension, well, then you are putting yourself at an extraordinary disadvantage in what's going to be a very, very competitive labor market. And that's my fear for students today: they are taking a shortcut that, in the long run, is going to atrophy muscles they're actually going to need in the labor force.
Ed Elson
Just as we wrap up here, Cal, what would be your advice to those people? I mean, I don't think we're going to see real regulation on this stuff. OpenAI even built a tool that detected AI-generated work, and they decided not to release it because they worried it would hurt usage. It doesn't seem like anyone else is going to solve this problem for you. So what would be your recommendation to people who don't want to fall behind?
Cal Newport
Well, I mean, I think time under tension, that's a good metaphor that Derek is pointing out. You should be thinking as an individual: if I want to be economically viable, don't listen to the voices saying, oh, you won't be replaced by AI, you'll be replaced by someone who uses AI better. Instead ask: what do I fundamentally do in my job? Where do I actually create new value in the world? If I'm pulling in a non-trivial salary in a knowledge work type of employment, it's not because I'm good at answering emails. It's not because I create PowerPoint slides really quickly. There must be some fundamental activity where I'm taking hard-won skills and knowledge and applying them to information to add new value. The harder I can think, the more I can sustain my focus, the better I am at that core activity that matters. So I've been arguing this since my book Deep Work a decade ago: don't lose sight of the fundamental cognitive activity that actually moves the needle on these knowledge work endeavors. If you cannot add original value to information through deep, skilled thought, then what you're doing is eminently replaceable. If you turn yourself into a sort of cybernetic LLM prompter, your unique value to the marketplace is going to plummet. You're putting yourself into a dangerous situation. So don't mistake busyness for productivity. Don't mistake speed for quality. What matters is: what is the high-value output I produce that I'm uniquely suited to do, and how do I get better at that activity? There are all sorts of ways technology can help you do it, but you have to be very wary of the ways technology makes you worse at it, because over the last 20 years it has had a way of sneaking in the back door and making you feel more productive, and then you look up and you're worse at what you do. So let first things be first.
Ed Elson
Cal Newport is a professor of Computer Science at Georgetown University and New York Times bestselling author of eight books, including Slow Productivity and Deep Work. Derek Thompson is host of the Plain English podcast and author of Abundance. Derek and Cal, this was fascinating. Thank you so much.
Derek Thompson
Thank you, sir.
Ed Elson
Thank you. Okay, that is it for today. We appreciate you joining us for another Prof G Markets panel. If you have a guest you think we should speak to, please drop us a line in the comments or email our producer, claire@marketsofgmedia.com. We hope to hear from you. This episode was produced by Claire Miller and Alison Weiss, edited by Joel Patterson, and engineered by Benjamin Spencer. Our video editor is Brad Williams. Our research team is Dan Shalon, Isabella Kinsel, Kristen O'Donoghue, and Mia Silverio. And our social producer is Jake McPherson. Thank you for listening to Prof G Markets from Prof G Media. If you liked what you heard, give us a follow. I'm Ed Elson. I will see you tomorrow.
Date: May 12, 2026
Hosts: Ed Elson, with guests Cal Newport and Derek Thompson
This episode of Prof G Markets steps away from its usual focus on financial markets and zeroes in on a pressing societal question: Is artificial intelligence making us all dumber? Host Ed Elson gathers insights from Georgetown Computer Science professor Cal Newport and journalist Derek Thompson (Plain English podcast, author of Abundance). The panel investigates the effects of AI and digital technologies on human cognitive fitness, especially among students, and the potential societal and economic implications.
"AI is now taking target on the other main cognitive activity that makes us stronger, which is writing... So if AI does, in fact, significantly reduce the amount of writing we do... I do think we’re going to see a continued diminishment of our intelligence that began with highly distracting phones about 10 years ago."
(Cal Newport, 04:50)
"When you ask artificial intelligence to summarize an article... do you understand that article... as well as if you had read it? Of course not... you’re engaging at such a shallow level..."
(Derek Thompson, 05:49)
"These [reading and writing] are privileged activities in the history of modern humanity... The brain that everything we hold dear was built around... depended on reading and writing to shape it."
(Cal Newport, 08:34)
"To use something like AI is like bringing a forklift into the gym... you’re actually being counterproductive to the actual goal, which is strengthening the cognitive muscle..."
(Cal Newport, 11:22)
"Within the education system today, like, I think it is basically a tool for mass cheating that is, in fact, cheating students out of the ability to think in the long run."
(Derek Thompson, 16:06)
"Much more likely, just ask ChatGPT to write it for you. You get up in front of the class and talk about the Habsburg Empire... you prove and defend your intelligence to the classroom, to the teacher."
(Derek Thompson, 18:25)
"A world in which you have a generation of people with extraordinary expectations... but underdeveloped abilities to actually achieve that success, that just seems like you’re setting up a generation for unbelievable disappointment, anxiety and depression."
(Derek Thompson, 25:29)
"If you turn yourself into a sort of cybernetic LLM prompter, your unique value to the marketplace is going to plummet."
(Cal Newport, 32:32)
| Timestamp | Quote | Speaker |
|---|---|---|
| 04:50 | “AI is now taking target on the other main cognitive activity... which is writing... I do think we’re going to see a continued diminishment of our intelligence...” | Cal Newport |
| 05:49 | “When you ask artificial intelligence to summarize an article... do you understand that article... as well as if you had read it? Of course not...” | Derek Thompson |
| 08:34 | “These [reading and writing] are privileged activities in the history of modern humanity... The brain that gave us theology, politics, that gave us philosophy... depended on reading and writing to shape it.” | Cal Newport |
| 11:22 | “To use something like AI is like bringing a forklift into the gym... you’re actually being counterproductive to the actual goal, which is strengthening the cognitive muscle...” | Cal Newport |
| 16:06 | “Within the education system today... it is basically a tool for mass cheating that is, in fact, cheating students out of the ability to think in the long run.” | Derek Thompson |
| 18:25 | “Much more likely, just ask ChatGPT to write it for you. You get up in front of the class and talk about the Habsburg Empire... you prove and defend your intelligence to the classroom, to the teacher.” | Derek Thompson |
| 23:43 | “We should be seeing the exploding total factor productivity... and we haven’t. I would argue this is in part a result already of massively increasing the distractions and context switching...” | Cal Newport |
| 25:29 | “A world in which you have a generation of people with extraordinary expectations... but underdeveloped abilities to actually achieve that success, that just seems like you’re setting up a generation for unbelievable disappointment, anxiety and depression.” | Derek Thompson |
| 32:32 | “If you turn yourself into a sort of cybernetic LLM prompter, your unique value to the marketplace is going to plummet.” | Cal Newport |
This episode makes a compelling case that how we use AI—especially in education and foundational cognitive tasks—matters profoundly. The panel urges active effort to bolster the harder, slower, but more valuable human skills of deep reading, rigorous thinking, and writing. The short-term convenience AI affords may carry long-term risks for individuals, economies, and society at large.