
A
Welcome to the LSE Events Podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.
B
So welcome and good evening, everyone. For those who don't know, my name is Larry Kramer. I'm the President and Vice Chancellor here at LSE, and it's my privilege to welcome you all to this very special event hosted by LSE's Data Science Institute. Tonight's event is really the end of a year of programming by the DSI that has been showcasing LSE research, exploring how the rapidly changing technological environment, and all the different new technologies and advances, are affecting our politics, our economies, and our societies more broadly. That research has been shared across the year through a series of short films and blogs, a special edition of our Research for the World magazine, and events across the year, and it was also featured in the 2025 LSE Festival. So tonight's panel, in a sense, is drawing the year to a close. It'll give three scholars a chance to discuss how their work at the intersection of artificial intelligence and the social sciences can help to ensure that advances in AI serve the good of the community and the greater good, rather than the less good or the bad. What we're going to do is explore how insights drawn from the social sciences can shape AI innovation, talk a little bit about the importance of research into the most consequential impacts of AI, in particular its impacts on our economies and societies, and about how AI tools and methodologies can transform social science research and investigation itself, in a kind of feedback loop. Now, as you know, we've been talking a lot at LSE about organizing our questions around those issues along five themes: democracy, political economy, society, sustainability, inequality, and the new technologies themselves. And that'll provide a lens here for the comments that our speakers will offer. I'll say a little bit about them in a moment, but just by way of an introduction, and why I think about it this way: AI isn't really a technology problem, right? AI is fundamentally about people. AI models are trained on tokens, trillions of them, but they're tokens that are meant to capture human behavior, communications, human interactions. And at the same time, of course, the effects of AI are on people: it's changing our economies, our societies, how we interact with each other, our institutions, and our ways of living and learning. Which is, of course, why the social sciences are so important if we're going to think about how to manage the potential downsides of AI and how to capture the upsides. And not just AI, of course, but other newly emerging technologies, and some old ones as well. So for this evening, we have really brilliant academics who know more about this subject than just about anyone, each bringing a slightly different lens to the topic. At my far right is Helen Margetts, who is Professor of Society and the Internet at the Oxford Internet Institute and a senior advisor and visiting professor here at LSE's Data Science Institute. She's written extensively about the relationship between technology, politics, public policy, and government. Next to her is Cosmina Dorobantu, who's a professor of practice at the Data Science Institute. She is, among other things, a winner of the 2025 AI and Robotics Research Community Award for her contributions to the UK's AI research and policy landscape. And then to my immediate right, Marion Dumas is Assistant Professorial Research Fellow at the Grantham Research Institute.
Her research interests encompass green innovation, the institutional processes underpinning decarbonization, and the interactions between reducing inequality and fighting climate change. So what we're going to do here is each of our panelists will spend a few minutes on opening remarks, after which I'll pose some questions to each of them to flesh those issues out a little more. And then we'll turn to give you an opportunity to ask questions. There is an online audience as well as those of you here in the room, and I'll do my best to take questions from both. So with that, I'm going to hand it over to Helen, who will start the conversation for us.
A
Thanks so much, Larry, and thanks everyone for coming. It's lovely to be here. So, as Larry said, AI is sold to us as a technological transformation, but it's really a social transformation. It's completely about people, and if you think about it, it only really matters if it affects people: if it makes people better off, if it improves people's well-being, or if it harms them in some way or puts people in danger. So AI is about us, and it's already bringing change to society, the economy, and democracy. Actually, it's been doing that for several years already, because AI has been powering the algorithms of big technology platforms for many years. But at the end of 2022, ChatGPT was launched, and after that a whole host of other large language models. Those large language models, which are basically AI, were the first time that we as people could interact directly with AI. So any impact that AI is having on society has really been turbocharged by that, and they have diffused incredibly rapidly, faster than any previous digital technology. So now, for example, half of people in the US use AI in the form of large language models, or so-called generative AI, and three quarters of young adults. Ten percent of people say that they're using it continuously, all the time; it's kind of always on. And obviously that is likely to be affecting all kinds of things about the economy, society, and democracy. And if we want to understand that impact, then we really need the social sciences. The social sciences have a really important job to do in an AI-powered world. And it's also going to give us as social scientists new work to do. We're going to have to rethink some very long-held social science concepts and frameworks. The logic of collective action, for example: how does that change in an AI-powered world? What's the role of the state? What are the policy instruments that governments can use, and how do they change when the AI companies that are developing these technologies have so much power? One of the most predicted things for AI and society is large-scale displacement of work. What will the meaning of work be in an environment where so many people don't have formal employment? There's work on all these things going on at LSE already, and I think we can expect to see a whole lot more in the future. What does it mean when people are using AI for care and companionship and emotional engagement? We are seeing increasing examples of that, and I'm sure we'll talk about it later. What does that mean for people's long-term well-being, for their social connection, and so on? And just finally, AI is also going to bring us a lot of really tricky policy questions. For example, take the large-scale displacement of work. What's that going to mean for fiscal policy? If there's large-scale displacement of work, income tax revenues will decline and unemployment benefits will rise. Some governments that are particularly dependent on labor taxation may face fiscal crises. What kind of policies will we need to keep government going, to keep public goods provided, in that environment? There's going to be a lot for policymakers to think about. Some commentators have even suggested that, because of the inequalities that will result, we will need a universal basic income. And working out what a universal basic income would be is, if you think about it, a terribly complex economic modeling task.
It's actually, incidentally, one that AI could really help with; we'll probably talk about that later. But it's also not the way that we tend to make policy. Think about it: at the moment, we're waiting for the budget tomorrow. We're waiting to hear if the two-child benefit cap is going to carry on or be done away with. That's a rather crude way of working out what a universal basic income is, right? It doesn't involve much AI. You know, one child, two children, three children; it's kind of lumpy. So policymakers, when they think about making policy, are going to need new kinds of expertise. The state is going to need new kinds of capacity resting on technological and data science expertise. The social sciences will be really well placed to develop those models, powered by both AI and the best of social science. And we're going to need to think about how to train the next generation of policymakers in that new environment. Lots to do for social science.
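To make concrete why costing a universal basic income is, as Helen says, a terribly complex modeling task even in its crudest form, here is a deliberately toy sketch in Python. Every number in it (population, benefit level, employment rates, average income, tax rate) is a hypothetical placeholder; a real model would add benefit withdrawal, consumption taxes, behavioral responses, and distributional detail.

```python
# A toy sketch of the UBI costing problem: even this stripped-down
# version shows how the benefit level, labor displacement, and the
# labor-tax base interact. All figures are hypothetical placeholders.

def ubi_fiscal_gap(population_m, ubi_per_year, employment_rate,
                   avg_income, income_tax_rate):
    """Annual fiscal gap (in billions) that a flat UBI would open up."""
    ubi_cost = population_m * 1e6 * ubi_per_year          # total transfer bill
    workers = population_m * 1e6 * employment_rate        # people still in work
    tax_revenue = workers * avg_income * income_tax_rate  # labor-tax receipts
    return (ubi_cost - tax_revenue) / 1e9

# Compare today's employment rate with a large-displacement scenario.
for emp in (0.60, 0.45):  # hypothetical: a quarter of jobs displaced
    gap = ubi_fiscal_gap(population_m=68, ubi_per_year=9_600,
                         employment_rate=emp, avg_income=37_000,
                         income_tax_rate=0.25)
    print(f"employment rate {emp:.0%}: fiscal gap of roughly £{gap:.0f}bn/year")
```

Even this toy version shows the interaction Helen points to: displacement shrinks the labor-tax base at exactly the moment the transfer bill has to be paid.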
B
Thank you. Cosmina?
C
Yeah, hi. Thank you so much for being here. Look, I started my career at Google twenty-some years ago, and I wanted to tell you a little bit about it because it really informs the way in which I think about AI. As you know, Larry, the early 2000s in Silicon Valley were a time of great hope and enthusiasm and optimism, and there were all these wild ideas floating around, from putting data centers on the moon to scanning all the books in the world to make them freely available to people. And at Google there were all sorts of people coming in to talk to us. We'd get an email saying, come to the cafeteria, Madeleine Albright is here, or Colin Powell, or Gorbachev, or you name it. And one day we had Thomas Friedman come in, who was and still is a New York Times columnist. At the time, this was in 2005, he had just written his book titled The World Is Flat. And when he came to speak to us, he wanted to introduce this idea from his book that somehow geographical divisions and borders were going to disappear in this brave new technological world. That's an idea that was highly appealing to Silicon Valley types at the time. Google's CEO at the time, Eric Schmidt, started speaking about it a lot, and declared in 2012, in front of an audience that included Angela Merkel and Brazil's president, that the web will dissolve national borders. Also in 2012, Eric embarked on a European tour, and he gave talks in places like Hanover, Germany, where he said propaganda will be harder to sell to the public; in times of war and suffering, it will be impossible to ignore the images that come out; it will be far easier for communities to mobilize against autocratic regimes. Now, Friedman's book and Eric Schmidt's speeches rode on a massive wave of techno-optimism, a time of great hope and unbounded possibilities, and theirs was a story that everyone wanted to believe. But of course, we are in 2025 now; we know what the world looks like, and it is actually vastly different from the way we imagined it back then. And I have this history at the back of my mind every time we're thinking about AI. In telling you those stories, I wouldn't want you to walk away from here saying, gosh, they got it so wrong, because ultimately, let's face it, none of us is particularly good at predicting the future. But I want you to walk away from here thinking: how can we do better? And this is something that I give a lot of thought to. One of the things that we got wrong 15, 20 years ago was that there was very, very little social science expertise inside those tech companies. We knew back then as economists that e-commerce transactions actually had a much harder time crossing borders than physical goods. Psychologists knew that people can't cope with a constant stream of horrific images and videos; they just tune out. Political scientists knew how autocratic regimes establish themselves. But the tech companies weren't talking to those people. And one of the questions at the back of my mind is: if we had managed, 15 or 20 years ago, to embed expertise into the tech companies, if we had social science experts feeding into how digital technologies were designed and marketed, would today's world look different? I'm convinced that the answer to that question is yes. So the mistake that I do not want us to repeat with AI is exactly this.
And the reason I'm sitting at LSE is that I feel a responsibility to bring the expertise, the decades, almost centuries of scholarship that we have into the tech companies themselves. So that is an absolutely necessary step if we want better technologies. And I'm spending a lot of my time these days trying to figure out how to bring those communities together, trying to understand how I can link social science expertise to the leading AI companies. And the good news is that there's an awful lot of openness and desire to collaborate on both sides.
B
Thank you. I have to say, I had a dinner with Eric Schmidt, I think it was around 2012, in which we raised this question with him. And his answer, literally, was: give me any problem in the world and a team of Google-quality engineers, and they can solve it better than anyone. And here we are. Marion?
D
Yes, hi. So I'm doing a little pivot to sustainability questions, and in these introductory remarks I'm going to focus on the research I've been conducting with my colleagues here at LSE, Eugenie Dugoua and Pia Andres, on the impact of AI innovation on the energy transition, and specifically on innovation for the energy transition. So first, the energy transition. One of the key drivers of the energy transition is the development of clean energy technologies and the process by which they catch up with fossil-based ones, which are very mature: they perform extremely well, they're super reliable, flexible, really good at what they do. And it takes a long time and a lot of support to bring the clean technology alternatives to that level. So we're thinking solar, we're thinking wind, but we're also thinking offshore wind, floating wind, electrolyzers, smart grids, hydrogen-based fuels. A lot of these technologies are not yet mature. Solar has made it, and that was on the basis of a lot of policy support. So we need innovation policy. Now, what does AI have to do with this? Well, AI, no pun intended, is a GPT, a general purpose technology. And general purpose technologies start irrigating the entire economy; they can have applications everywhere. So they can have fantastic applications in helping to develop clean technologies. We already have a lot of good examples, from forecasting solar irradiation so that solar plants can be a lot more productive, to designing auctions to make electricity markets work better, or smart grid optimization, looking for better chemicals for batteries, even automated chemistry labs to develop better batteries. But the catch is that they can also help the dirty technologies, since they're general purpose. So they'll assist robots in the exploration of new oil fields, or optimize internal combustion engines to make them more efficient, things like that. Therefore, it's not clear in the end whether AI helps or hurts from the point of view of the transition. So the first thing we show is that if the spillovers of AI, as we call them, are higher to clean technologies than they are to dirty ones, then that gives us a leg up in the transition; it makes the transition a bit less costly. And when we go to the data, the patent data, we find indeed that clean technologies are drawing on AI innovation much more than dirty ones. That is also the case for the earlier wave of ICT technologies over the last 20 years, before the AI uptake. We also look within companies that are working on multiple energy technologies, and within the same company, they use AI and ICT more for clean tech than for dirty tech. So it's not about the capabilities of companies; there seems to be something intrinsic to the technology that makes it more complementary to clean tech than to dirty tech, because clean tech needs more information, and there are many reasons why that would be the case. So that's good news. But how can we make use of that fact to further improve the effectiveness of innovation policy for sustainability? Well, the idea is that you can increase the exposure of innovators to AI and to AI innovation: energy innovators, but more broadly any innovator in industries that have a sustainability challenge, which could be agriculture and so on.
And we also see in the data that those that get more exposure, energy innovators that get more exposure, absorb more of it: they use AI more, and they start doing more clean tech than dirty tech. So innovation policy can ride that wave and increase the colocation of projects, knowledge exchange, and deliberate steering of who works with whom, creating these networks of knowledge. So that's the more general lesson: there are some challenges in society that require us to direct technological change, to direct innovation, to solve those problems. And then there's a lot of other innovation that happens because it happens. For those of us who are focused on trying to solve these specific problems, we can ride those waves and use spillovers more deliberately through various innovation policies. Now, you might have noticed that I've just been focusing on knowledge, and on how increasing knowledge in the area of digital technologies, AI and so on, is changing our ability to generate knowledge for clean technologies. I haven't been talking about AI's resource use, and at the risk of being provocative, I think that knowledge is the first place we should start. It's the first-order problem, and the resource use of AI is a bit second-order. The reason I say that is that any industry is going to need energy; AI is not special in that regard. And that's why we need to find planet-compatible ways of generating energy, to allow AI, but also to allow all the other things we do. There's nothing specific about that. So that's just to provoke the conversation. I have a lot more to say about AI's environmental footprint and whether or not we should be doing something about it, but that's how I'd put the contrast here between knowledge and actual resources.
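The spillover evidence Marion describes comes from patent data. The sketch below shows, in rough outline, what that kind of measurement can look like: for each energy patent, check whether its backward citations reach AI patents, then compare clean against dirty. The rows, column names, and the single CPC code used for tagging are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Illustrative sketch of a citation-based spillover measure.
# patents: one row per citing patent; cited_cpc holds the CPC codes
# of everything that patent cites. Data is invented for illustration.
patents = pd.DataFrame({
    "patent_id":   ["p1", "p2", "p3", "p4"],
    "energy_type": ["clean", "clean", "dirty", "dirty"],
    "cited_cpc":   [["G06N", "H01L"], ["F03D"], ["E21B"], ["F02D"]],
})

AI_CODES = {"G06N"}  # G06N covers machine learning / AI in the CPC scheme

# Does each patent draw on AI innovation via its citations?
patents["cites_ai"] = patents["cited_cpc"].apply(
    lambda codes: bool(AI_CODES.intersection(codes))
)

# Share of patents in each category that cite AI: the finding, as
# Marion describes it, is that the clean share is the higher one.
print(patents.groupby("energy_type")["cites_ai"].mean())
```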
B
So thank you. Okay, so I've got questions for each of you. Let me start with you, Cosmina, because you talked mostly about how social science can help AI. So let's take the opposite question: what can AI do for social science and social scientists?
C
Yeah, so what's really interesting about this is that the very technology that is causing this upheaval in our lives is also the technology that can actually help us understand our world better than ever before. In the hard sciences, we're seeing the scientific revolution brought about by AI, and there's great excitement about the possibilities of AI for the hard sciences. We've seen AI crack the 50-year-old protein folding problem, for example. We've seen it predict in a year the structure of about 200 million proteins, a task that would have taken a PhD student about a billion years to complete. And this is going to allow scientific discovery on an unprecedented scale. What we're not seeing yet is this revolution of AI-driven discovery in the social sciences, but we can see the potential for it. And there are three ideas that I've been thinking about and playing around with. One of them is that we can get AI to help us in the social sciences do what we already do, and we have a fair bit of work in this category, including here at LSE. One of the things that we do as social scientists, for example, is talk to people to understand their views, their beliefs, their behaviors. A lot of that is conducted through interviews, and there's a really nice paper that was published by LSE researchers on how you can conduct large-scale interviews with AI, and they're getting some really brilliant results. So that's one way: you can accelerate, do the same things that we do, but at a bigger scale and better. Now, the second idea that I've been thinking about is: how can you get AI to do what we cannot do? And that is something that excites me quite a bit. One thing that we're not particularly good at as humans is integrating very, very diverse sources of knowledge, and also understanding complexity. If you think, for example, of our policy world and the way we make policies: we know that investments in health have an effect on education. In the pandemic, we knew that virus transmission had an effect on the economy, and that economic policies had an effect on the pandemic itself. But our models are currently in silos, right? We had epidemiological models and we had economic models, but there were no links between them. Or take a holistic view. Helen mentioned that we're about to hear the budget. Last time I looked at this, in the UK we have about 52,000 budget lines. That means we have about 52,000 budget decisions that underpin how about a trillion pounds of funds get allocated in the UK each year. Now, what are the interdependencies between these budget lines and these policy decisions? How is the whole functioning together? How are they interacting with each other? That's something that we simply do not know, but it would be possible for us to know if we actually had models that built in those interdependencies. And I hope that it's going to allow us to work a little bit outside of our silos. In saying that, I don't want to just point fingers at policymakers, because within academia we have the exact same problem: we work within our departments, we have various sorts of specialist knowledge, and interdisciplinary research is incredibly hard to do. But these technologies might have ways of bringing us together, giving us access to knowledge that we didn't yet have.
And then the third idea that's on my mind these days is: can we build some tools that would make us better researchers? Again, in the hard sciences we're seeing co-scientist tools. Can we have co-social-scientists that would help us generate hypotheses, think with us about how to test them, think with us through research design, check our results, and so on? So those are the three areas that I'm thinking about in terms of what AI can do for us, really.
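Cosmina's point about the 52,000 budget lines is, at heart, a claim that policy decisions form a largely unmapped network. A minimal sketch of how one might begin to represent those interdependencies, with entirely invented budget lines and interaction weights:

```python
import networkx as nx

# Treat each budget line as a node and each known policy interaction
# as a weighted directed edge. All lines and weights are invented
# purely for illustration; a real model would have tens of thousands.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("early_years_education", "adult_skills",       0.4),
    ("adult_skills",          "employment_support", 0.5),
    ("public_health",         "acute_healthcare",  -0.6),  # prevention lowers acute demand
    ("employment_support",    "welfare_payments",  -0.3),
    ("acute_healthcare",      "welfare_payments",   0.2),
])

# Which budget lines does a change to one line ripple into,
# directly or indirectly?
for line in ("early_years_education", "public_health"):
    print(line, "->", sorted(nx.descendants(G, line)))
```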
B
So just to follow up, think about at least the last two of those in particular: it's using these tools to go well beyond what we've been able to do with the tools we have. But one of the common criticisms is that all the biases we already have in the work we do are just built into the AIs, except we can't see them in quite the same way. So what do we do about that? And this goes to both the inequality concern and the political economy concern. How do we prevent AI from leaving either the same people behind, or a whole different set of people behind? How do we know how that's going to work?
C
Yeah, so this is a massive issue with AI, and whenever I hear people talk just about the productivity improvements of AI, it scares me a little bit, because I don't think we'll get anywhere if we focus just on productivity and not on equity. Those questions about whom we would leave behind are quite crucial. We know that these are technologies that are going to make all the existing problems of digital exclusion worse. We know that they are biased, and I think the term artificial intelligence hides the fact that they are created by humans; they inherit our own biases. But I do think that there's hope there, because we could be putting those technologies in the service of laying those biases bare. And I think we tend to have higher standards for our technologies than we do for humans. So we expect driverless cars to be safer than human drivers, and I would hope that we would expect AI technologies to be less biased than us as humans. But if we don't make these things a priority, we're not going to get to them. And if we don't study whom we're leaving outside of this technological revolution, and if we don't understand that, we won't have a way to tackle it.
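A minimal sketch of the simplest form of "laying those biases bare": comparing a model's positive-decision rates across groups. The data is invented, and a real audit would also compare error rates and calibration across groups, not just raw outcome rates.

```python
import pandas as pd

# Invented model decisions for two demographic groups.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Demographic-parity gap: a first-pass red flag, not a verdict.
print(f"approval-rate gap between groups: {rates.max() - rates.min():.0%}")
```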
B
Yeah, I just want to throw in a thought, which is that I think the economic problem really requires us to think up a couple of levels from where the AI itself is, right? We have a system of capitalism now that's organized around the idea that labor is priced at the demand-supply frontier and capital keeps all of the surplus. And if you do that, we have a huge problem going forward. It's not getting rid of capitalism, but it is rethinking the way in which we divide that surplus. And that's the kind of political economy problem that I think we need to get to.
C
Yeah. And actually, just a few weeks ago we had an event here at LSE, which we did together with Anthropic, and the idea was to get people to think about the economic policies that we need for AI. We had this open submission process in which people from all over the world could submit their policy ideas and pitch them in front of a policy audience. And exactly that was one of the ideas: can we start thinking about a progressive tax system for capital? Of course, all the policymakers in the room are rolling their eyes and saying, you know, that's never going to be possible, capital moves, we're going to be doomed if we do that. And that's fine; I'm totally happy for people to disagree with this. But I do think that we need to start thinking about these and other ideas in earnest.
B
Yeah. Marion, let me turn to you. You said, in a way, that there's nothing special about AI and the use of energy. Of course, I think for many people what's special is the amount of energy it takes. This is a question we're getting all the time now as we think about incorporating AI into our own processes: how is that consistent with our commitments to minimizing our carbon footprint, and so on? So how do you think about AI's use of energy resources and its impact on warming and climate change? And how do we decide whether and at what level this technology is worth it, and whether and at what level it becomes harmful or wasteful just in terms of energy usage?
D
So my first point was that all industries use energy, and so any growth will create an increase in energy use. To some extent, that's the problem we have to solve, rather than pointing our fingers specifically at AI. That being said, I worry about us using AI mindlessly, in a way where we don't internalize the energy cost of each query we submit. With a lot of digital technologies, that's how the pricing structure works: uploading a photo to Google Photos, the marginal cost for us is nothing, we don't pay anything for doing that. It feels like it's free, but for the environment it's not free. So a pricing structure where you pay per token, essentially, would force us to think: is it worth it right now to make an AI-powered search or to make a prompt, or can I query this information in a cheaper way? That's on the side of the consumer; instead of subscription-based pricing that makes every little thing we do feel free, maybe that's not the right way to price this. And then, of course, carbon pricing is absolutely essential if we're going to have AI use and AI development that make sense for the common good, both to encourage us to use it in a way where it's useful and worthwhile rather than mindless, and of course to develop energy for AI, and AI processes, that are energy efficient, and so on and so forth. So that's how I would see it. This is stuff we already know pretty well from a theoretical point of view. But is there any discussion of it in policy? Well, we have a lot of discussion about carbon pricing, of course, but this issue that we're not paying per query might have to come a little higher up the agenda.

But then AI does require, locally, a lot of new resources, and the way we deploy the infrastructure to service AI could be really helpful, or really not. If you just think of a data center that agrees a power purchase agreement with an energy provider for the cheapest, closest-to-market current technology on offer, we're not using that niche market to test and improve a new clean technology, like fusion, for example; it's a bit of a wasted opportunity. On the other hand, instead of power purchase agreements that dedicate an energy resource to that data center, the deployment of data centers could help all of us pay for the fixed cost of cleaning up and modernizing the grid. But that again depends on the details of how utilities, energy regulators, those AI businesses, and so on negotiate the grid infrastructure investments and who pays for them. And there are a lot of different things developing in different parts of the world, in the US even across different states. Some of it is fairly narrow and siloed; some of it is much more forward-looking. And then there's water, and water is a problem everywhere. The cost of using water is externalized by almost everyone, almost everywhere, and locally that could be an enormous stress. At this point we're only in the domain of requiring companies to transparently report on their water use, so that we gain a better understanding of how it's going to impact local water resources. But we don't yet really have a clue as to the magnitude of the problem and how we're going to deal with it.
B
So just a quick follow-up. On the energy side, is that just a temporary, I mean maybe not immediate, but transitional issue? Once we get to a grid that's fully powered by renewables, does that problem disappear? And the reason I ask is the second question, because this wouldn't be true for water, which is still going to remain a scarce resource either way. Can we, should we, just leave this to markets to figure out? Basically, you're talking about the allocation of some kind of scarce resource, or about externalities. So can we just let the consumers and the utility companies work that out as a normal pricing problem?
D
Yeah, so on the side of the user, I worry a little bit: what if we just start using AI, consuming a lot of energy, to do the equivalent of the doom scrolling on social media that makes everybody miserable, except on ChatGPT or whatever? And on top of that, it's emitting tons of greenhouse gases, ruining our souls and ruining the planet at the same time. That's a rather pessimistic view, of course. So on the side of the user, I would argue that there is a moral conversation, an ethical conversation, an educational conversation to be had, for people to learn and discuss with each other what's a good use in their lives. And you all are specialists on that. But we should also pay the energy costs of making all those queries; that's why I was suggesting that. Then, on the side of supply, building up these grids and so on: yes, the market has a big role, but the market is incredibly regulated and organized by both energy regulators and utilities. For example, in Virginia, where there's a lot of data center build-out, the main utility there has created a specific rate for data centers, so that they pay most or all of the infrastructure build-out that they require. And that will then benefit everyone else, unless there are stranded assets, unless AI is a bubble and they just leave all that extra infrastructure for everyone else to pay for in the future. So it can be helpful, because instead of just consumers or small businesses shouldering all the fixed costs of building out clean smart grids, which is a huge investment we have to make right now, we're leveraging this economic development to help us do that.
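Marion's pay-per-token idea comes down to simple arithmetic: fold the energy and carbon cost of a query into its price. Every figure below is an illustrative assumption, since real per-query energy estimates are contested and model-dependent, but the sketch shows why the marginal cost feels free to the user.

```python
# Back-of-envelope per-query surcharge. All constants are assumptions.
WH_PER_QUERY = 3.0                # assumed energy per chatbot query (watt-hours)
GRID_CO2_G_PER_KWH = 200.0        # assumed grid carbon intensity (gCO2/kWh)
CARBON_PRICE_PER_TONNE = 85.0     # assumed carbon price (pounds per tonne CO2)
ELECTRICITY_PRICE_PER_KWH = 0.25  # assumed retail electricity price (pounds)

kwh = WH_PER_QUERY / 1000
energy_cost = kwh * ELECTRICITY_PRICE_PER_KWH
carbon_cost = kwh * GRID_CO2_G_PER_KWH / 1e6 * CARBON_PRICE_PER_TONNE

print(f"per-query surcharge: £{energy_cost + carbon_cost:.6f}")
print(f"per million queries: £{(energy_cost + carbon_cost) * 1e6:,.0f}")
```

Under these assumptions the surcharge is a tiny fraction of a penny per query, which is exactly Marion's Google Photos point: the cost is easy to externalize precisely because each individual use looks free.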
B
Thank you. So, Helen, let me come to you. Let's talk a little bit about democracy, which we haven't touched on yet. Another very common charge is that AI facilitates, or, as some will say, and I think you did say, turbocharges all the online harms we already worry about: doxing people, misinformation, disinformation, hate speech, harassment, et cetera. It seems pretty clear how AI can be used to exacerbate all that. Is there an argument about how it might actually counter that, or be good for democratic governance?
C
Hi, I'm interrupting this event to tell you about another awesome LSE podcast that we think you'd enjoy. LSE IQ asks social scientists and other experts to answer one intelligent question, like: why do people believe in conspiracy theories? Or: can we afford the super-rich? Come check us out. Just search for LSE IQ wherever you get your podcasts.
Now back to the event.
A
Yes, there is. And I think it's really incumbent on us as social scientists to start making that argument and to show how it could be good for democracy. Actually, AI, but also every generation of technology since the Internet, has had a really bad press when it comes to democracy. There are a lot of books now about the end of democracy, the decline of democracy, the twilight of democracy, Ctrl Alt Delete, how technology destroyed our democracy. And we're already seeing those come forward with AI: how AI undermines our democracy, and so on. And even the same tech leaders that Cosmina was talking about, today's tech leaders, are themselves being very pessimistic. Last year, 2024, was a year when half of the world's population voted in elections, and you had a lot of tech leaders talking about AI and democracy. They didn't mean democracy, of course; they meant the next election in whatever country they were in, which was the US. But they framed it, and as a political scientist it was quite exciting, because it felt like they were all talking about democracy. And Sam Altman in particular said that targeted, micro-targeted political persuasion was what was really going to do it for democracy. And he made this claim with absolutely zero empirical evidence. I mention that because I think it's really important that we don't just get told what we're going to get. We need to understand and research what we're going to get, and if necessary we need to think about how we can change it and shape it. Actually, a very brilliant PhD student of mine called Kobi Hackenburg, in what was actually his master's dissertation about micro-targeting, ran as large an experiment as could be afforded when you're a master's student, looking at what happened when people received micro-targeted messages, targeted at their demographics, about various policy issues. And he found two things. One was that GPT, which was the only easily available model at the time, was very good at persuasion: it was persuading people of the position it put forward. But micro-targeting made absolutely no difference; it was no better when it started micro-targeting. That study, which is published in PNAS if anyone wants to take a look at it, has now been massively scaled up. He's now working at the AI Security Institute, where they're doing quite a lot of research in that vein. We looked at model size, for example: are the largest models much better at persuasion? Actually, no. So if you're building a foundation model and you're trying to persuade people, size is not the thing to go for. But there are various other things which are important, for example the amount of information that people are provided with, and sometimes the amount of false information they're provided with. We need that kind of research. We need to try to really understand what's going to happen as these technologies are embedded in our democracy. And the other thing I would say: some of that research is done by social scientists, like my now PhD student, but we do need social science, and particularly political science, attention on that.
I'm sure there are political scientists in the audience, and if there are, they will know that with earlier generations of technology, political science has been really slow on this. They were very slow on social media, because there's been this enduring idea that these technologies are an added extra, that they don't really matter, that they're not really important. They carried on saying that about the Internet and social media for a long time. And then, sometime about 10 years ago, they flipped and said, oh no, it's ruined everything. We mustn't make those same mistakes again; we must really focus on how these technologies interact with democracy. And actually, there is some good news there. There are people doing research, and that research suggests that large language models are good at helping people to find common ground, and good at debunking conspiracy theories. There was one piece of research in particular, a large-scale policy forum experiment, a randomized controlled trial with 5,000 people; I think it was called the Habermas Machine, or something like that. It was trying to help these people find common ground, and there were human moderators and AI moderators. The AI moderators were much better at coming up with statements that there was wide agreement on, that didn't alienate minority groups, and that just generally seemed to reduce division in the group. I really do think we need more research like that from a political science perspective. Most of that research, and there's a reasonable corpus of it now, is based on large-scale experiments, which are rigorous and give us some idea of causality, that is, of what AI will do. But experiments are not the same as the real world, and I think we do need to kick off work that moves those experiments into real-world settings and starts thinking about how we can make that happen. Because after all, people like listening to music and shopping and watching videos and all of that; they don't tend to like politics quite as much. I hate to break it to the other political scientists here. So incentivizing people to interact with AI in that way will obviously be a challenge, but I do think there are a lot of possibilities there.
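At its core, the micro-targeting study Helen describes is a randomized comparison: does a message tailored to your demographics shift your attitude more than a generic one? A minimal sketch of that estimate on simulated data (not the study's data), just to show the shape of the analysis:

```python
import numpy as np
from scipy import stats

# Simulate attitude shifts under two randomly assigned conditions.
# The means are chosen so both messages persuade (shifts above zero)
# while targeting adds almost nothing, echoing the published pattern.
rng = np.random.default_rng(0)
generic = rng.normal(loc=0.30, scale=1.0, size=500)   # generic message
targeted = rng.normal(loc=0.31, scale=1.0, size=500)  # micro-targeted message

diff = targeted.mean() - generic.mean()
t_stat, p_value = stats.ttest_ind(targeted, generic)
print(f"extra effect of targeting: {diff:+.3f} (p = {p_value:.2f})")
```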
B
I mean, I'm curious, and you may think this is wrong, in which case tell me. One of the things that has always struck me as weird about information technology is that with any other kind of technology, biotechnology say, if we see that it can do harm, we don't seem to have too much trouble saying: you know what, we should just regulate that. We think cloning is going to be really bad, so we just say you can't do it, and we'll punish you if you do. We think it's really bad to engineer a virus or to build a nuclear bomb, so we say: the technology's there, but you can't do it. And for some reason, when we slip into information technology, it's: if we can do it, we have to figure out a way to live with it, and let's figure out how we can nudge people. I mean, is that wrong? And why is that?
A
Certainly it's something that we talk about a lot without doing, but it's not quite like that. I mean, the Internet has not been the complete wild west for a long time. But it is true that we consistently struggle with regulating these technologies. For example, the Online Safety Act came into law here last year, and several other countries, like Australia and Germany, and now the European Union, have introduced legislation like that. What worries me is this tendency that we have to agonize over it, as you say. We talked about the Online Safety Act for seven years while it was a bill, and meanwhile absolutely nothing happened. And I think it's really important that we don't put everything on regulation. There are other ways to shape what's going on: there are standards, there's education, all sorts of ways that we can shape the way that these technologies impact on us. We are going to really have to think about that with AI. A lot of people are developing very sustained relationships with AI chatbots, and developers in Silicon Valley predict that in five years' time something like half of your care, companionship, and emotional needs will come from AI chatbots. Look, we're not there yet. There probably aren't very many people here who are in love with their chatbot. But there is an increasing incidence of sustained social engagement. So that needs research, but it also needs policy, it needs regulation. For example, there have been several very high-profile cases where parents of teenagers believe that their teenagers were encouraged to commit suicide by either LLMs or these role-playing chatbots which are based on LLMs, like Character AI. Character AI has now decided to ban people under the age of 18 from using that application. But they've decided to do that; why them? As a society, we're going to have to make those kinds of decisions.
B
You know, back in 2013, how many of you saw the movie Her? I mean, it sort of saw this coming. I actually made the mistake of taking my then 13-year-old daughter with me to see it. Big mistake. It was like five agonizing minutes in that movie. So the question is, say a little more about how big a worry that is, and what kinds of things we can do to head off that particular kind of future nightmare. You started us with that point: the way in which people are increasingly absorbing their time and their energy, and getting their companionship and friendship, potentially, out of these machines, in ways that take us away from human interaction. So what is it, actually, beyond research, that we're going to do about that?
A
I think research is important because, as Jonathan Zittrain, the tech lawyer, says: first it seems too early, and then it will be too late. We really need to do this research now, because at the moment it's not completely ubiquitous, and we need to start to understand what the effects will be. And I don't think we've got any precedent for this, to understand what it will mean for people's goals, their identity, their social connection, these kinds of nebulous things. I don't see anything we can do, apart from research, to understand, for example, what it will do to people's connection with other people and institutions and so on. I have a new project on AI and dehumanization, and we started that project because we thought that sustained engagement with AI might make people ruder to other people: if they were interacting with machines all the time, they might get ruder, more misogynistic, perhaps start objectifying other people. But the more we work on the project, the more we're thinking that actually it might go the other way: we might start expecting people to be nicer. You know, the way large language models always say, oh, that's such a good question, you're so clever. We may start to expect people to be more like that. I suppose there could be some good effects of that, but it may also make us expect more of people and institutions. I don't know if any of you saw it, but there was a long article in one of the newspapers the other day about somebody in China who'd developed a possibly rather pathological relationship with DeepSeek about their health condition; they had chronic kidney disease. But one of the things they said to the journalist was that DeepSeek was much warmer to them about their condition, and that the doctors were more like machines. Now, this is not what we thought about when we started this project, but you can see it could happen. You know, a doctor has 10 minutes; GPT's got all day to talk to you about your health condition. And yes, we will have to think about regulation, but we need to understand a bit more about how it's going to play out.
B
I would find it so annoying if any time I talked to somebody they started with "that's such a good point". Okay, we've got plenty of time to get questions from the audience now. Where are the microphones? Just so I know where they are. And for people online, type your question into the Q&A box; I'll go to the online questions periodically. When you get the microphone, if you could just say who you are and ask a hopefully short question, not a long statement, and let's see what we can do. So we'll go there for the first question.
A
Hi, I'm Nata Zamund from the NHS. I was going to ask about AI in health, but I changed my mind after a comment Marion made. If you were going to do the pay-per-token model you were mentioning, aren't we risking monetizing information again? We've spent centuries demonetizing information, and if you're going to make people pay, we're going to produce a two-tier economy where only the wealthy can access AI-augmented research. So why don't we make the data centers more green, as opposed to making people pay for the information they can get by accessing AI?
D
Yeah, thank you. Yes, we can make them more green, but even renewable energy has a lot of impacts on landscapes, on biodiversity, on critical mineral resources, meaning mining in other countries and so on. So at some point, just pretending we have free resources runs us into problems. That's one answer. And you can always have rate differences. It's very nice that so many of these GPTs allow you to experiment for free; I get a lot out of Gemini just for free, and it's very nice. But beyond some level of use, I think it makes sense that we be a bit reflective about whether it's worthwhile to spend another hour asking the GPT if we should be worried about our anxiety, for example. And a lot of information is monetized in part because we also have to reward those who produce the information. A topic we haven't discussed here is injecting new information into AI systems, so that we're not just stuck talking about the same things over and over again, whatever has already been said. I'm sure you know a lot about that. But quality content still requires effort, even if we're in a world of LLMs, and so at some point we do need to pay for things.
B
So, I mean, it's also, by the way, the realpolitik of it: the best way to get the centers to be more green would be to make people pay if they're using the ones that aren't. And that'll create those pressures. So, right here.
A
Hi, Udi Shapiro, I'm a visiting professor here at Maths and the DSI. So when Google started, it had this tagline, don't be evil, and when it discovered that to make money it needs to do harm, they removed it. And Facebook, when they started, they just shared information among graduate students who had their photos online, and only later did they start manipulating that information. So AI will very soon face the choice of making money while doing harm. As a concrete example, what would prevent Coca-Cola and OpenAI from doing the following deal: for the next year, induce adolescents to drink more Coke, and take a billion dollars for it. Is that legal? Are we willing to live in a world in which Coke can pay OpenAI to induce adolescents to consume more Coke? That's my question. Take it away.
A
Well, I mean, there are going to be questions like that coming up in every single market, because, as somebody said, I think it was you, AI is a horizontal technology, a general purpose technology, as well as a vertical one. And that comes back to the question that I mentioned at the beginning about state capacity, or in this case regulatory capacity. Every single regulator is going to need to be able to scrutinize whether that kind of thing is going on in their market, and it's going to be very, very difficult. But it already is very difficult. All sorts of regulators who never expected to be regulating technology, like the Equality and Human Rights Commission, for example, or taxi regulators, they are all technology regulators now. And in this case they're going to have to be able to understand advertising, how to scrutinize it, and how to stop it happening.
C
But I think also, and it gets back to my earlier point, just having this conversation with the AI companies themselves helps, and it helps in terms of understanding what's happening. So one of the things that I'm thinking about now, for example, is that a lot of young people, it turns out, are getting their news from large language models. Rather than reading the papers, they go and ask them: tell me about this issue. Now, when we go and read a paper, we're familiar with the biases that it has, right? I would expect different content if I'm reading the Daily Mail than if I'm reading the Guardian. Now, because there was such a big deal about licensing, and media companies were so upset that companies were using their quality data to train their models, the AI companies are now making licensing deals with the media companies for their content. You don't see that as a large language model user, but in effect some models that you go to will be trained on sources like Al Jazeera and so on, while other models will be trained on what we would consider more mainstream for us, like the Guardian, the Telegraph, and so on. So here again, the partnerships those companies make determine the content that you see, but as a user you have no visibility into that, and very few people in this room have ever given this any thought. So just us being aware of what's happening within those companies, and also regulators stepping in when they need to, I think is absolutely crucial. But if we lock ourselves out of those conversations, out of this knowledge, I don't see how we're going to avoid making the same mistake twice.
B
I'm going to go to an online question next, but I just want to put out a question, really a democracy question, that underlies all of this. The core premise underlying liberal democracy is the notion that we're autonomous individuals, capable of making our own decisions and responsible for them. Persuasion is generally part of democratic discourse. With adolescents and kids we may draw a line, but for a lot of this, the assumption is of a kind of false consciousness, right? That this is persuading you in a way that strips away the notion that you're actually that kind of autonomous individual making up your own mind. I don't have an answer to that, but I do find it really troubling that we have to figure that out before we say we're not going to allow this technology to persuade you at all, while we're okay with traditional advertising, or conversations where other people do it. Again, no answer; I'm just putting it out there. I think it's something we all have to figure out, and we tend to gloss over it. Have you got any online questions?
A
Yes, I've got a question from an undergraduate student: what are we doing as political scientists and as a society to fight the rising number of derogatory deepfakes resulting from increased access to, and ease of using, AI?
B
The rising of what?
A
The result of increased access and ease.
B
Of using AI and what was the harm?
D
Deep fakes.
B
Deep fakes. Thank you.
A
Well, there is quite a lot of research on deepfakes. I'm one of the co-authors of the State of the Science report on the safety of AI, and that does have quite a lot in there. It's led by Yoshua Bengio, and it was put into action after the Bletchley safety summit in 2023. Having said that, the question is absolutely right. As political scientists, as social scientists, I would say it's not just a political science question. A lot of deepfakes are non-consensual sexual imagery, so it is going to be a question of regulation again. And on that international safety report there just aren't enough social scientists; I think I'm probably the only political scientist. The dominant narrative is very much about technical solutions to that problem, watermarking and things like that. I think you do need those things, but you do need to think about it in a regulatory way. With things like non-consensual imagery, we've had to keep changing the law with successive generations of technology, and we've got to do that. It's got to be informed by social scientists, and they've got to be there, as Cosmina said, together with the technologists, not kept separate, because it's a social problem that has to be tackled together.
B
Anybody else? I mean, that's back to that question. That strikes me as an easy one, right? You do a deepfake, you go to jail or you get fined, unless you have the consent of the person or you clearly indicate that it's a deepfake.
A
Well, yes. I mean, it should be easy, shouldn't it? Yeah.
B
And yet there's this notion: no, we've got to live with it and accept it, and oh my God, what are we going to do? It's so peculiar.
A
Yeah, yeah. Well, it's been true with a lot of other things. I mean, were you here when we suddenly discovered that upskirting, for example, was not illegal? It never ceases to amaze me that things targeted particularly at women somehow manage to carry on being okay. Sorry.
B
I actually didn't know that until just this moment. No.
A
Well, it is illegal now, but it wasn't.
D
Isn't it the case that with information technology it feels more amorphous? All the ways in which it can change an interaction, change what we're doing; it feels like it takes a long time to see the contours. So the deepfake seems like a clear one, though maybe even there it isn't; to most of us it would seem like a case where you could say pretty clearly that this is completely blurring the lines of reality. But the further you go in this direction, the more ambiguous the use, the harm, and the question of autonomy you raise become. And we feel like we need time to learn and to experiment.
B
Yeah. I want to take a question from the back. So there. Yeah. No, no, you. Yeah, she'll bring it. No, no, farther back. Going to the nosebleed seats.
D
Yes.
A
Hi.
B
Thank you. So the development of AI has the potential to massively centralize power, both in the AI companies and in the AI itself. And additionally, I think that has the risk that it just makes democratic decision making irrelevant. So I guess: number one, do you agree? And number two, what do we do to mitigate that risk?
C
It has this power to centralize things. I mean, why don't we have more AI companies, right? The more data that you have, and the more compute that you have, and so on, the bigger the models that you're going to be building. And I think the geopolitics are changing quite significantly as a result of it. So again, do we have solutions to this? Not very good ones. I think there was some promise when DeepSeek came along. And when you go to places like India, for example, there's some really, really interesting work being done there. The reason for that is that in India they have this digital government initiative, and they feel quite strongly about it and want it to be available to people, and they have a ton of languages. So in order to build AI applications and to continue to expand the digital government initiatives, they need to find a way to have large language models for those very small languages. So I think there are some really interesting developments happening. But in our part of the world, bigger and bigger and bigger is the solution to advancement nowadays, and I think that's a problem for multiple reasons.
A
I mean, it's a problem because there's a big move in Silicon Valley towards a kind of accelerationism, where you let AI run and you don't regulate it, and it's based on ideas of minimal government and minimal democratic decision making. And I do think that is something we should understand and worry about and try to do something about.
C
Francesca. I write on rethinking age, and I'm also an alumna here at LSE, in Social Anthropology. A question for Professor Helen Margetts: do you see any discrimination or benefits emerging, from the context you clearly set out at the beginning, for people who are growing into their later years?
A
Well, yes. I mean, wherever there's AI, there is the danger of inequity, and it's easy to see how that might arise here. For example, large language models have come along, and younger people have had much more of the kind of education and technological experience to be able to manage them, and maybe to be more scrutinizing of them and more wary of what can happen. The social media generation has become quite good at limiting itself. And I think it's going to be difficult for older people, particularly in some countries. In some contexts these things are going to feel like magic. Say you live in a country where there are no doctors, or you can't access medical care, or you can't access all kinds of things, and here's something which knows everything and can just tell you, in voice; you don't even have to be able to read. But there will be all sorts of inequities. It won't necessarily be particularly older people; it will be different for all sorts of different groups. I don't know, I'm trying to think whether it will be particularly difficult for older people.
B
I hope not.
A
Me too. Cosmina?
C
I was going to say that it is difficult for young people as well, actually. We talk a lot about the future of work, and whether AI is going to automate our jobs and so on. Recent graduates are finding themselves locked out of the job market, and there's a reason for that. The reason is that anybody can apply to any job at the moment: you just put a job description into a large language model and say, draft me a cover letter and a CV, and that's what it does. The average job posting on LinkedIn gets about 2,000 applications. That's a problem that AI has created, and you need AI to sort it, because we don't have enough human recruiters to read through 2,000 applications for each job. Now it is those algorithms that are screening out the young graduates, because if you don't have any job experience, you just get kicked out, so you never even get a chance at an interview. Now that's a problem that we know about. Are we doing anything about it? No. That's a massive issue, right? And it's a system problem. If we don't give these people a chance to train themselves and to enter the job market, what are we going to expect of the future? I'll tell you one other system-wide problem, and this is based on research here at LSE. It gets to the question of whom we are locking out. Professor Judy Wajcman has done some research looking at venture capital investments in the UK, and she has found that 0.8% of venture capital investment goes to female-founded AI startups: about 130 million, as opposed to 13.5 billion going to male-led startups. These are systemic questions, and if we don't understand them, if we don't bring them to light, we won't be able to fix them.
B
Here's an interesting fact. The average lifespan increased by about 30 years from 1900 till today, which is more than in the prior 2,000 years before that. Culturally, it takes a lot to figure out what to do with an extra 30 years of life, and we haven't come close to adapting to that. People are working much longer than all of our social and cultural systems are set up for. And lifespan is likely to increase by another 30 years within a much shorter period of time in the years ahead. So in some ways, AI offers opportunities to rethink the whole of life around that. There's some work at Stanford that I know of where they're thinking: imagine a society in which you do your basic education as a young child and into the years where you would now go to college, but you don't really go to college. We don't front-load the education; it's something you do gradually over the next 20 years, while you have your children and live a regular life. And you actually begin your career, in the way that we think of hard-working careers, in your 50s or your 60s, and you still have 30 or 40 years to work. Think what that could do. I mean, as a parent, I'm not sure if it's good or bad. But these opportunities are going to be before us, and we're going to have to figure it out. And AI would actually be the kind of thing that enables both the upskilling you do across those middle years, where we're not working all the time, and the capacity it would give older people to really be able to continue working hard. There are all sorts of possibilities that we're going to have to figure out over the next 10 or 20 years. It's kind of fantastic. Right in the middle there. And then we'll do another online question.
D
Thank you. My name is Irene. I'm currently a freelance journalist and an LSE alumna. My question pertains to something I heard Taiwan's digital minister speak about, which is how she integrated an AI software into a policymaking process. They used this software, I forget what it's called, but it's basically good at taking various votes or opinions and then illustrating to people the connective tissue between them, or the common ground they all have. They first piloted this software when they were trying to develop policy around the arrival of Uber in Taiwan, because people were afraid of how it would influence jobs for taxi drivers and maybe encourage the development of a gig economy. And it was so successful that they used it again, hundreds of times. So my question is, and I'm guessing none of you are Taiwanese, but do you think this is something that could be rolled out in other democracies? Or do you think it's a fluke specific to Taiwan that it works so well? And then also, specifically for Marianne, because you spoke about a lot of really technical subjects: do you think this is something that could be useful when developing policy around sustainability, when you're talking about energy grids and all these technologies? Do you think it could be possible to bring people into democratic policymaking around these emerging spaces?
A
Well, no, I'm not Taiwanese, but I have been to Taiwan and I'm a huge admirer of Taiwan. It is a particular sort of population: incredibly well educated, as I remember, and possibly more willing users of an application like that, very politically aware, in a way that we might not see in all countries. Look, there are applications like that, platforms like that, that have had varying degrees of success. AI does lend itself very well to that kind of platform. And as I was saying, there have been experiments showing AI moderators are better than human moderators at bringing people to some sort of consensus on very contentious issues and finding a way through, finding a statement that's palatable to the maximum number of people, if that's what they're told to do, of course, via some sort of reward model. So yes, I think that's a very famous example, and I think it is very hopeful. There have also been examples of large-scale citizen assemblies, on environmental issues in particular, in other countries, where AI can also play a role. It's one of the things I had in mind when I answered positively to Larry's question.
D
Yeah, on the environmental side, I appreciate your question. At the moment we have a completely different project on sustainability that's asking: we know that deliberation, engaging people through conversation, having them examine their considered preferences rather than habits or quickly reached conclusions, can change how people think about policy issues. But might it also change how they think about their own consumer behavior and their own consumer preferences, and how they engage with what they consume? So this is a Global School of Sustainability funded project where we're using AI-enabled deliberation to get people to talk about all the different aspects of food: of consuming food, of sharing food, and the health impacts, the environmental, the climate, the biodiversity, the animal sentience (we have a Centre for Animal Sentience here at LSE), but also the difficulties of modifying your diet. People can then maybe learn from each other and arrive at their considered preferences around what foods they want to eat and whether or not they might want to reduce animal-based products in their diet. For that we need the AI moderation, we need the online platforms to bring thousands of people together, well, in small groups, and also the AI to help us make sense of all the discourse that will be generated, which can help us understand the underlying conflicts and the common ground, and then help us shape food sustainability policy moving forward, which is one area of sustainability that's not very developed at this point.
B
Let's get another online question.
A
Given the possibility of large-scale labor displacement, what kinds of jobs or human capabilities do you believe are genuinely future-proof in an AI-intensive economy?
B
Love that question.
A
Professor of AI and society.
B
Soon we're going to have 7 billion of them. Oh, come on, you got to do better than that.
A
Okay, now I'm thinking.
C
No, look, it's a really good question, and I get asked this a lot. When I go and speak to people about the future of labor markets, there is usually someone coming into the elevator with me and asking, but what should I tell my child to do? And that's a really difficult question. What's really strange about these technologies is that we're seeing some of the professions that we're used to calling safe investments go: computer programmers, for example, or lawyers, or management consultants. This is new for us as a society. Now, are they really going? A lot of the studies we have at the moment look at task automation, at which tasks are getting automated, and there's a huge difference between task automation and job automation. It is possible that our jobs will change, and with every wave of technological innovation we've seen jobs change. But look, one of the things that I'm desperately trying to understand, and one of the things that I'm seeing at the moment, is what I hear when I talk to the leading AI companies. They have access to much more powerful models than we do; they have almost unlimited access to compute, and they don't have the API limits that we have. What they're saying is: if you could only see what we're seeing, you would be really worried. Now, of course, they're looking at computer programmers. They're seeing their models being much, much better at code, so they don't have to hire the same numbers of people. It's important to understand what's happening there, but I do think that's a little bit of a bubble. On the other side, part of our job is to understand the research that's happening at LSE and to see where the potential of these technologies is. We talked to someone recently who explained to us how the marked electoral registers work in the UK. These are effectively the official record of who has voted in the UK, and the way it works is that the political parties go and pick up physical copies of the registers and just transcribe them. There are a lot of mistakes and so on. So I feel like I live in a really strange world in which, on the one hand, some people are yet to discover spreadsheets, and on the other hand, somebody is saying we won't have any jobs. The truth is probably somewhere in the middle. One of our Nobel Prize-winning economists here at LSE, Philippe Aghion, would say, I think, that with every wave of technological innovation you actually need quite a lot of time for it to diffuse, because you still need to build the structures, you still need to get the skills and so on. So I don't think we're going to see massive disruptions within a year or two, but we will see quite a lot of change down the line.
B
Do you want to add anything?
A
Well, I don't think we should just see it as a wave; we do have the chance to shape it. It's like another Nobel Prize-winning LSE alumnus says: the reason that inequity comes when we automate things is that some people just aren't retrained or given an idea of what kind of skills they need to carry on having a role, and some people are. That's where the inequity comes from, and we do have to think about what we can do about it. Even if it does mean, I don't know, our children staying at home until they're, was it 40 or 50?
B
Not staying at home, they live full lives, just not career focused.
A
But how are they funded?
B
Well, it's just the inverse of the way we fund elderly people now. So it's still the same system.
A
Yeah, it's still the same system. But we're going to have to think about what sort of society we want, what kind of inequity we're willing to put up with, and how we want it to be.
B
Yeah. Again, I'm going to offer a couple of thoughts on this. I'm the chair, but I'm talking way too much. This is a question that obviously anybody who's running a university has to be thinking about a lot, and I would say there's a short-term answer and a longer-term answer. The short-term answer, which we're hearing from the people who employ our students, is that they don't care so much about the technical skills already. What they want is critical thinking, problem solving, learning to learn, collaboration, communication, and learning from failure. They want that set of trans-substantive cognitive skills, because as jobs evolve and change, they're still going to need people who can do those kinds of things. Now, you can teach that through any discipline, but in the long run, maybe even in the medium run, I think it will result in a resurgence of the social sciences, humanities, and arts as places to study and prepare for careers, which I think is a good development on the whole. Not that people will stop studying the sciences and technology fields either. So that's the short term. In the longer term, I think Helen touched on it: the reason we have to worry about this is because we have an overarching approach to our economies that focuses on people as consumers and incentivizes creating things that increase productivity and efficiency to drive down price. As long as that's the case, AIs are going to continue to eat whatever jobs there are out there. And unlike prior technologies, AI is adaptive, so it'll learn how to do the new jobs too. But that's a choice. We can choose to incentivize the creation of AI that is job-creative; we can incentivize companies to hire people. That requires making some choices about how we want to use AI and what we want to let it do, moving away from a focus on people as consumers to people also as workers, and constructing our policies to incentivize that. Those are just choices we'll make as we go along. And the last thing I'll note, here's a really interesting fact: there were some recent surveys done in the UK of university graduates. Only 3% regretted going to university; 97% were really happy they went. But something like 68% wish they had studied something different. And that's because you're being forced to make choices at this age, and of course, as your career evolves, it turns out there are different things you need to learn. So in the longer run, education may also move away from so much front-loaded education towards lifelong learning, where you'll keep educating yourself with adaptive skills as the world changes. All of those are things that can easily absorb AI, but they will depend on choices we make. Time for one more question. And then right here, because you've had your hand up so calmly all through. Yeah, in the gray jacket.
A
Yeah.
B
No, no, no. Yeah, one forward.
A
That's okay.
D
Hi there. Thank you very much for this. Martha Beath, I work in product and tech. I'm really curious. We've been speaking about AI as a monolith, largely about LLMs, and obviously there's a large diversity of models out there, particularly ones that are now exploring things like models that don't have as many problems with hallucinations or non-determinism. I'm curious how you think that would impact some of the answers you've given tonight, particularly around sustainability: small language models, for example, that need less compute, or some of the other alternatives that are now coming, particularly from things like Thinking Machines Lab.
D
For policy, I would say we need diversity, right. In terms of the sustainability impacts, most of the examples I have in mind, in terms of the impacts on clean technology, are not really LLMs. They're all sorts of different kinds of machine learning, algorithms that have helped in the research process or made these technologies optimize better, and so on and so forth. So there's already a long wave, or not so long, I would say six or seven years, of incorporation of a wide variety of models. But I think you're alluding to lower energy use with small language models. Overall, I think the response to the question about power is that hopefully the barriers to entry to creating AI models are not going to go up but come down, and we can have that diversity of models and of users and of applications and of people profiting from them.
A
Right.
B
And then, yeah, also just a final statement if you want. So we'll bring things to a close.
A
So I think we have to get better at incentivizing different kinds of models. We're so terrible at taxing these companies or shaping the market in any way at all, and I think we have to get better at that. Of course we should have different kinds of models, and if we want, we can have models which aren't going to hallucinate, although I don't like the word hallucination, because I think it sort of humanizes it. It's just getting things wrong, right? Why do we have to make it something magical, as if it's been eating magic mushrooms? So I think it is really important that we think of diversity of models. I always think of the very early days of GPT, when it very first came out. Some of the researchers who were working with me, or PhD students who are now doing the most exciting work on large language models, really had their heads in their hands, because there were open-source versions of these things before, which you could do really good research with. And it was like, why, why, why? It's so inefficient, so wasteful, to use these things to do that. I just wish we could get better at shaping the markets to make that happen.
C
I think large language models are sort of the latest hype, in a way, right? That's where all the discourse is. And I think our lives changed tremendously in November 2022. We weren't talking about this before, and then all of a sudden this is what everybody sort of wants us to talk about.
A
But we were talking about AI.
C
Yeah, yeah. But just to give you a calm and composed voice: we used to work quite closely with the president of the Royal Society, Sir Adrian Smith. He's a statistician, and he would always sit back in his chair and say, you know, when the hype is over, everybody's going to go back to calling it statistics, which is ultimately what it is.
B
Any final words?
C
No, that's fine. Thanks.
B
So listen, I hope you'll all agree with me that this was a really fascinating conversation. I want to encourage you all to visit our YouTube channel, where you can see DSI's series of short films on AI, technology, and society, hear some of the events from last year's LSE Festival, and so on. There's a lot more to this, and there will be a lot more to come. And let us thank the panelists for a really interesting conversation.
A
Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk/events to find out what's on next. We hope you join us at another LSE event soon.
Episode: AI, Technology and Society: Shaping the Future Together
Date: November 24, 2025
Host: Larry Kramer, President and Vice Chancellor, LSE
Panelists: Helen Margetts, Cosmina Dorobantu, Marianne Dumas
This episode brings together acclaimed academics to examine the interplay between artificial intelligence (AI), technology, and society. Their discussion covers the most pressing challenges and opportunities posed by AI, focusing especially on policy, democracy, sustainability, inequality, and the evolving role of the social sciences. The panel emphasizes not only how AI is transforming society but also how social science must adapt, and even shape, the technological future.
Helen Margetts recasts AI as fundamentally a "social transformation," affecting every facet of society, economy, and democracy—not just technological systems.
The rapid public adoption of generative AI (e.g., GPT models) means social scientists must rethink frameworks like the logic of collective action and the role of the state.
Margetts:
"The logic of collective action, for example. How does that change in an AI powered world?" (A, 07:11)
Implications for labor markets are significant, with large-scale displacement posing massive challenges for fiscal policy and possibly necessitating universal basic income.
Cosmina Dorobantu shares lessons from her early Google days, reflecting on how previous waves of techno-optimism ignored social science expertise—with problematic results for society.
Dorobantu argues for embedding social scientists within tech companies to anticipate and mitigate negative social impacts, so that the earlier mistake is not repeated with AI.
Marianne Dumas brings a sustainability focus, detailing research on how AI is a "general purpose technology" with both positive and negative spillovers for the energy transition.
Dumas finds evidence that AI disproportionately benefits clean technologies over dirty (fossil-based) ones, offering hope for targeted innovation policy.
However, she stresses that AI’s contribution to resource use—particularly energy and water—deserves attention, proposing both ethical reflection and new economic models (e.g., per-query pricing).
Dumas:
"If the spillovers of AI are higher to clean technology than to dirty ones, that gives us a leg up in the transition... And in the data, we find indeed that clean tech are drawing on AI much more than dirty ones." (D, 18:00)
Dumas:
"Any industry is going to need energy. AI is no special in that regard. That's why we need to find planet compatible ways of generating energy." (D, 21:40)
AI is not just a challenge but also a tool for social scientists:
Dorobantu:
"The very technology causing upheaval in our lives can actually really help us understand our world better than ever before." (C, 22:38)
Both Dorobantu and Kramer stress that pursuing productivity gains must not overshadow equity concerns:
Kramer:
"The economic problem requires us to think up a couple levels from where the AI itself is... we have a huge problem going forward." (B, 29:06)
Margetts notes media and tech pundits often predict dire threats to democracy but research hasn't always backed this up.
Microtargeting, widely feared as a democratic disruptor, didn't show significant effects compared to standard persuasive messaging in recent studies.
Audience Q: Risks of per-query pricing potentially re-monetizing access to information, deepening inequalities.
Audience Q: “What would stop OpenAI and a corporation from making deals to manipulate young consumers?”
Helen Margetts:
"AI is sold to us as a technological transformation, but it’s really a social transformation. It’s completely about people." (A, 04:41)
Cosmina Dorobantu:
"I’m spending a lot of my time these days trying to figure out how to bring those communities together… There’s an awful lot of openness and desire to collaborate on both sides." (C, 14:50)
Marianne Dumas:
"General purpose technologies start irrigating the entire economy..." (D, 16:00)
Larry Kramer:
"We have to rethink the way in which we divide that surplus. That’s the kind of political economy problem that I think we need to get to." (B, 29:06)
Audience Member:
"What would prevent Coca Cola and OpenAI to do the following deal: induce adolescents to drink more Coke… is it legal?" (A, 56:40)
Helen Margetts:
"We really need to do this research now because at the moment it's not complete, ubiquitous, and we need to start to understand what the effects will be…" (A, 50:10)
Cosmina Dorobantu:
"If we had managed 15 or 20 years ago to embed expertise into the tech companies… would today’s world look different? I’m convinced the answer is yes." (C, 14:38)
The panelists make a compelling case for integrating social science with technological innovation, warning against "techno-optimism" without critical reflection, and urging society to shape AI’s impacts—whether on democracy, the economy, sustainability, or human connection. They call for proactive research, robust regulation, and greater interdisciplinary collaboration, all while thoughtfully examining tradeoffs and future uncertainties. The episode closes with an invitation to continue engaging with LSE’s ongoing public debate about tech and society.