A
Welcome to the LSE Events Podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.
B
Okay.
C
Wow.
B
Good evening, everyone. For those who don't know me, my name is Larry Kramer. I'm the President and Vice Chancellor here at LSE, and it's my privilege to welcome you all to tonight's very special event: a conversation with Nick Clegg on his new book, How to Save the Internet. And I didn't have to look at the page to know that. I'll say a bit more about Nick in a moment, but I want first to acknowledge our host for this evening, which is the LSE program on Cohesive Capitalism. Cohesive Capitalism is a major new initiative here at LSE, aimed at shaping a new political economic paradigm to better serve the common good. It brings leading thinkers from across the social sciences together to explore the values, institutions and policies needed for a more democratic, inclusive and cohesive society. For anyone who knows anything about tonight's speaker, Nick, it's not surprising that he was in the sights of the initiative as someone we wanted to bring here. After a somewhat wayward youth at Cambridge, the University of Minnesota, working for Christopher Hitchens and Guy Speer, more study at the College of Europe, a stint as a reporter for the FT, and a variety of positions in the European Commission, Nick entered politics as a Liberal Democrat member of the European Parliament in 1999. What followed was a meteoric rise: into the House of Commons in 2005, and then leadership of the Liberal Democrats in 2007. As party leader, he moved the party into a position he described as radical centrist, supporting lower taxes, electoral reform, cuts in defense spending, and an increased focus on environmental issues. When the Conservative Party failed to achieve a majority in the 2010 election, it formed a coalition with the Liberal Democrats, who had won 57 seats, and Conservative leader David Cameron appointed Nick deputy prime minister.
Now, in that capacity, Nick made a controversial decision that affected everyone in this room, or pretty much everyone in this room, but also lost him his position. He abandoned his party's pledge to oppose increases in tuition fees, which had been a key issue that had won the party support from students, inasmuch as the government at that time was not prepared to continue covering costs and was instead looking at massive cuts in funding for instruction. That decision arguably saved universities at the time, which is why it affected all of us here. It nevertheless cost his party dearly in the 2015 election, leaving the Liberal Democrats with just eight seats and resulting in Nick's resignation as party leader. In 2018, Nick relocated to the United States to become Vice President of Global Affairs and Communications at Facebook, which, as you know, was renamed Meta in 2021. He was promoted to President for Global Affairs in 2022, but stepped down from that role to return home to the UK in 2025. Since returning home, Nick has spent time raising his sons; helping his wife, Miriam, a formidable lawyer and politician in her own right, launch a political party in Spain; helping a clueless American friend he met while living in Silicon Valley find his footing in London as President and Vice Chancellor of LSE; and writing a book, which is of course the subject of tonight's discussion. So we'll begin by inviting Nick to take a few minutes to lay out the basic thesis and argument of the book, after which he and I will spend some time teasing out the ideas further in conversation. And then after that we'll open things up to questions from the audience. After the event, there'll be a book sale right outside, and then, if you want, you can come back in and Nick will be available to sign the book up here on the stage. And with that, let me turn the mic over to tonight's guest, Sir Nick Clegg.
A
Well, thank you, Larry. I didn't know you were going to give that sort of potted history of my triumphs and disasters in public life. It's giving me a sort of hint of PTSD just listening to it. It's lovely. And in fact, there are some things I could quibble with as well about that potted history, but I will come back to that, maybe.
B
Well, you have to talk to Wikipedia. Yeah, right.
A
Firstly, it's lovely to be here. It's so nice to see so many of you. Thank you for giving up your time for this event. Why did I write the book, I think, is what you want me to just quickly dwell on. Well, I had this rather unusual, wholly accidental, not deliberate career of 20-odd years in public office and politics in Europe and Westminster, as Larry described, and then seven years in a senior capacity in one of these great big monster companies in Silicon Valley. And rather unusually, I think I was probably the only ex-politician in a senior executive position, certainly one of the only ex-European politicians in that capacity. And lots and lots of books are written about big tech, good or bad. There isn't a week that goes by without a flurry of new books being written about this. And I enjoy writing books, I've written a few, but I didn't really want to go through the whole palaver of writing a book just to go over a lot of that ground. So the thought crept up on me as I was doing the day job (there is the aforementioned Miriam, by the way, entering on Spanish time, only five minutes late). As I beetled around the world on behalf of Facebook/Meta, it crept up on me, as I say, that I was seeing a very powerful but somewhat under-analyzed and under-commented-on trend in all the interactions I was having with governments, with regulators, with lawmakers, with other tech companies: this constant, incremental fragmentation of the online world, what's commonly called the move towards a splinternet. And there are lots of reasons for that. There are geopolitical reasons, there are regulatory reasons, there are cultural reasons, there are technical reasons.
But the longer I worked for one of these big global platforms, the clearer it became. And interestingly, even though these US companies are extraordinarily American in outlook and in culture, perhaps even more so now in the way that they have become almost joined at the hip with the Trump 2 administration, they are quintessentially global entities. I should probably check the specific data, but I think well over 90% of the users of Meta's products, Facebook, Instagram, WhatsApp and so on, are outside the United States, yet about 95% of all of the energy in terms of how the company is governed and run is American. So you've got this great mismatch between the very US-centric culture of Silicon Valley and the very global scale at which their products and apps and services are used across the world. And so it is my view, as I describe in the book, that unless there is an active political decision by the major techno-democracies in the world, and I count the US, India and Europe, probably in that descending order of importance, unless there is a deliberate act by those major techno-democracies to establish new guardrails, new swim lanes, new common approaches to the governance of the online world, particularly in an era of AI, current trends will drive the Internet as we know it away from a relatively open experience, certainly outside China, where data flows seamlessly, where we can share content and data, from jokes to satire, from misinformation to short-form videos, effortlessly across the globe in a way which doesn't recognize geography. I think we will find over the next decade that that seamless, open experience will instead become one which is more balkanized, more fragmented, where there is more friction in being able to communicate from one jurisdiction to another. And I think all of those trends, like many trends, as no doubt we'll discuss, are only going to be accelerated by AI and by divergent responses to AI in society.
And so it's like the slow-boiling frog problem. The fragmentation of the Internet is something which happens slowly, and you only really notice it when it's too late to do anything about it. And that is basically why I thought I'd write this book, because I don't think enough attention has been devoted to this growing balkanization of the Internet. And as I set out in the last third of my book, the Internet as we know it really came about by accident. There was no blueprint, there was no common design, there was no top-down design that said: this is how the Internet shall be born. It was an incremental, almost organic process of innovation. A lot of it spun out from defense-funded research in Silicon Valley. It was then layer upon layer of innovations and different norms and standards. There was no overall central design to the Internet. So it's a magical thing, I think, that this extraordinary, all-encompassing phenomenon should have come about almost by accident. But I think it can die through neglect. And so my view is that, much though it has always flourished in a sense through benign neglect, the global governance of the Internet, in an age of AI and particularly in an age of political deglobalization, needs this deliberate act of political decision-making amongst the major techno-democracies. And the final point I'd make is the backdrop to all of this, to the book: these two great big forces that are colliding with each other and will continue to shape our world. The first is the globalization of technology; AI is only the latest manifestation of technological innovation that doesn't recognize geography, doesn't recognize borders. The second is the rapid deglobalization of politics everywhere. Whether it's Brexit, whether it's Modi, whether it's Trump, whether it's Erdogan, politics everywhere is about retrenchment.
Perhaps it's a thing to be debated in halls like this; perhaps the latter is a reaction to the former. But these two things are increasingly, I think, becoming uncomfortable bedfellows: the globalization of technology and the deglobalization of politics. And I think a lot of that is what is driving this fragmentation, which I believe needs, as I say, a deliberate act of political will to avert the worst in the years to come. So that's the book in what, four or five minutes? I think I managed it. So you don't need to read it now, but please do still buy it. Yeah, yeah, buy it. You don't have to read it. I mean, just buy it.
B
So, okay, so in some sense, and I'm going to slightly oversimplify this, but the kind of core value that has always driven Facebook was this notion that connectivity is just an unqualified good, that anything that increases it is good, and that making it easier for more people to connect with more people is the driving idea, even when it conflicts with almost any other value. And a lot of the controversies involving Facebook over the years were where it did conflict with other values. And, you know, there was this sort of acknowledgment of error, a claim that we would fix it, and then a retreat back to that same value. And that's been the kind of pattern. The reason I mention that is, as Nick knows, the book has come in for some harsh criticism here, at least in the British media, basically for being just a defense of Meta. And so I want to give you a chance to answer that, since that criticism has been circulating.
C
Yeah.
B
I mean, why are they wrong?
A
Well, the British press. Where do I start? I've never had the most fruitful relationship with large parts of it. Well, sorry, to be specific: the book is not actually about Meta. That's not what it's about. It's not about Facebook or Meta, though it certainly covers them in the first third, because it'd be rather odd for me, having spent so much time in Silicon Valley, not to talk about it. But that's actually not the central thesis or argument of the book.
B
Right. Although in some sense it's a defense of that approach, that value.
A
Sorry, to that extent, yes, it is definitely a defense of openness and the value of connectivity. And I suppose I do, by temperament and by disposition and by outlook, generally think (maybe I sound like the most woefully unfashionable globalist) that connecting people across jurisdictions and cultures and places is generally a good thing to do. I generally think history suggests that when we build barriers, reject each other, vilify each other, push away the other, it doesn't turn out very well. And I'm a child of globalization, and I unfashionably believe there's a lot about the period of post-war globalization which has brought untold bounties for us. And I think the Internet is very much part of that. So yes, I do start from an assertion in favor of openness and the ability to communicate with others. But perhaps much more important than that, because it's the corollary of the openness, is the ability, which is what social media has brought to billions of people around the world, to express themselves and to avail themselves of these tools for free, because it's paid for by advertising. That, I think, is on balance an unalloyed good. I think in that sense I am an unapologetic, old-fashioned liberal who believes that if you empower people to express themselves without having to wait for the say-so or seek the permission of religious or political or ethnic or other leaders, it is a good thing. And I think social media has emancipated self-expression on a vast, vast scale. That doesn't mean it hasn't also allowed a lot of bad people to say a lot of bad things, and we can no doubt come to that. But yes, I do start from the assertion that this technology, because of the way it's built and because of the way it's paid for, has had a very powerful democratizing impact on communication and self-expression around the world.
And I think anything which inhibits that is generally a bad thing, particularly when the inhibition comes from authoritarian or semi-authoritarian or self-interested political players trying to interfere with the way in which their citizens express themselves. And they might have very good reasons to do so. But at the end of the day, it is a conflict between the self-empowerment that I think these technologies can bring to people and various forms of political authority trying to control or limit what is said and what is done online. And I will generally always be in favor of the former and against the latter.
C
So.
A
Sorry, what was that? Oh, sorry. Well, the Meta point. No, the book's not really about Meta. Look, if you want a kiss-and-tell book where I say I've seen the belly of the beast and they're all demons and they should all be locked up, don't read it. That's not what the book's about. It's not actually what I think, and I wouldn't have written it if I did think that. It would be a rather silly thing to work there for seven years and then trash them; in that case, I shouldn't have worked there in the first place. So I'm afraid I unfashionably don't subscribe to the well-articulated and rather fashionable throwaway caricature of these companies and this technology. And so I challenge a lot of those assertions, some of which I think hold some water, some of which I think are manifestations of a moral panic about technology, which we always have as a society. I had a drink last night with Ian Livingstone, the wonderful British legend of the games and gaming industry. He was recalling to me, and I remember this as a kid, the moral panics around gaming and video games in the 1980s: if you looked at a particular game for more than 10 minutes, you were in danger of becoming an axe murderer. I'm being facetious here, but I unpack a lot of this in the book. I try and unpack it not in my name, still less in Meta's name. What I try and do is say: here are the claims made, for instance, about the link between the use of social media and political polarization; here is the counter-evidence. And I try and marshal academic evidence. And so I hope the book is of some interest, because it's not just repeating what I think are a lot of well-worn, unempirical platitudes about technology and society. It challenges some of those things, and I guess, of course, some people don't like that, because it's much more comfortable to say it's all the fault of Big Tech. Again, I'm being facetious here.
If only we could tweak the algorithms, all would be well, the sun would shine, democracy would flourish, Brexit would not have happened, Trump would not have been elected. All this kind of stuff. I just think all of that's nonsense. I think it is reducing highly complex phenomena to a sort of techno-determinism, the idea that somehow it's what you see on your feed that makes us feel and think and say things that we otherwise wouldn't. There's very little evidence for that, but you can judge for yourself. I try and lay it out as dispassionately as I can. But yeah, I suppose some people don't like it, because it is highly fashionable, and no doubt, if you're trying to sell books, quite lucrative these days, to publish books or write articles saying that it's all the fault of technology and of these algorithmic systems. I think there's a lot that is wrong with them, and there are a lot of things that I spent a lot of time changing, and I like to think changing for the better. So I'm not naive about these things. I just don't believe there is any evidence, and I don't think there has been down history. I mean, look at the moral panics when bicycles were invented: they were considered to be particularly subversive and dangerous for women, because they'd be, you know, sitting astride this bicycle thing. The radio was considered dangerous, as it was a favorite tool for fascists. Electricity was considered to be a deeply dangerous thing because it would light up your house and thieves would be able to see when you're at home. I mean, this is well documented, and if you look at some of the adjectives and adverbs that are used every time you have these new technologies, it's unnerving how similar they are over history. And quite rightly, a lot of those fears are then particularly focused on the effect on the young.
I think there are a lot of very legitimate concerns about the effect of social media, and the use of social media, on the young, none of which I duck in the book. In fact, I think personally governments could go a lot further than they currently have. But to give you a long-winded answer to your first question, and I will be shorter going forward: I give a nuanced view. I try and marshal the evidence and the counter-evidence; decide when you read it. But yeah, to your question, I guess some people don't like that, because a nuanced answer is irritating.
B
Okay, so let me. There was a lot in that answer. I want to touch on, I think, three of the different points that again come back to the core ideas: openness is better, right? Connectivity is what we're after; we should avoid things that limit that. So just to touch on three of the criticisms that are often made of it. One you touched on there at the end, which is the arguments about social media's impact on democracy and on young people, on their mental health. And okay, so the argument is that's overstated, that it's based more on correlation than causation. And I actually think that the really strong social science critiques of the book are consistent with that, in the sense that absolute causation hasn't been shown. But that's when the argument is made sort of as if social media is the sole cause. I think the more realistic claim, which I think you're agreeing is plausible, is that social media is at the very least a major accelerant, an amplifier. So then don't we have to hold tech leaders accountable for how their algorithms are exacerbating the problems? And then we're into the area of regulation, which seems to be counter to what you're trying to argue for.
A
Yes.
B
So you agree?
A
No, I basically agree. I basically agree, firstly, with the assertion, which, to be honest, is not the one often made. My point is absolutely not that technology, and all the time we spend on our phones, and the apps that kids particularly use, and the algorithmic recommendations that are fired at people, is somehow a risk-free exercise, or that it doesn't have a number of collateral effects which can be negative as well as positive. What I strongly believe is that there is an over-ascription, which is to say that people are over-ascribing to those technological factors a whole bunch of wider forces that can drive, whether it's political behavior, whether it's the shifts in the patterns of adolescent mental health and so on. So it's not that I'm saying it has no effect. Of course it must do. It is just sometimes described as the sole or the principal driver of the behavior. And that in turn, I think, leads to something which worries me, because it leads decision-makers and the population at large to think there's a technological fix to these things, and there very rarely is. You can improve technology, you can put guardrails in, you can regulate, we'll come to that in a minute, you can do all of those things. But I worry that sometimes it is described in such breathless terms that if only these amoral people in Silicon Valley would do something different, then all would be well. And that is the sort of falsehood that I think has taken root in recent years, which is simplistic, is not backed up by the evidence, and will leave a lot of people very disappointed. On regulation? No, no. I mean, one of the first things I did when I arrived in Silicon Valley in late 2018 was to persuade Mark Zuckerberg that Silicon Valley needed to stop this.
It was, I felt at the time, a ludicrous assertion which all the tech leaders were making against any kind of government intervention, any kind of regulation. And he then, you know, put his name to an op-ed, I think in the spring of 2019, as the first Silicon Valley leader to say: of course there's a role for government, of course there's a role for parliaments and so on. So I'm not against that at all. And in fact, when it comes to kids, I spent a lot of time trying, and failing, to persuade ministers in this country to go further than they have done in the legislation which has recently been put on the statute book in this country, the Online Safety Act, in terms of how you verify kids' ages in order to make sure that teens, particularly between the ages of 13 and 16 or 16 and 18, consume age-appropriate experiences. I think there's a whole lot further they could go. What I am arguing, though, and I don't think this is inconsistent in the book, is that if every country regulates in dramatically different ways a technology which, for all the reasons we've discussed, doesn't know borders, then you'll get this patchwork of regulation, which will lead to the sort of fragmentation that I don't think people want. So I'm all for sensible regulation. I just think it makes a lot more sense when that regulation is done in a more coherent and consistent way around the world.
B
Just say a few words about what would then count as sensible regulation, on the two issues: kids and youth, which I think you hinted at, and the democracy problem.
A
Well, on the youth thing. Look, at the end of the day, you have to decide as a society, and this is not something which should be left to tech bros. It shouldn't be left to the engineering companies or the Silicon Valley people. These are business people; they're not philosopher kings. We shouldn't be remitting our governance to them. We should decide as democracies: do we think, I don't know, that kids shouldn't use social media below the age of 16 or 15? I noticed yesterday the Danish government announced they're going to put a prohibition on using any social media for those aged 15 and below. I think Australia has passed legislation for 16 and below. You can pick your age. I mean, I personally think that saying, as they are in Australia, that 15-year-olds can't watch Ronaldo and Messi scoring goals on YouTube anymore is not going to work. But never mind. Let's say you have a societal consensus, arrived at through democratic process, that there should be no social media under, let's call it 13 or 14, something like that. Then you have a problem, and the problem is: how on earth do you police that, at a time when young people in particular are extraordinarily versatile at using lots of different apps at the same time? I think the average American teen is now estimated to use more than 40 apps every single month. And as you all know, for those of you who do that or have teens, you'll often use one social media app for your public identity, another messaging app for your private messages, another one for your music, and so on. People are using these apps in a very interchangeable way. So you have to come up with something that can police that new age limit, let's call it 14, across all of these apps. And that's the thing that this government and other governments around the world are failing to do.
Because if you ask the apps, which is now the law of this land, to basically come up with their own age verification processes, you get the mess we now have, which is that it's different on TikTok, it's different on Instagram, it's different on Pinterest, it's different on Snap. Parents have no idea what's going on. They've got different flows trying to verify age on different apps. In my view, there is a technical way that you can force on the whole industry, on all the apps, a simple way of verifying age: you do it at the app store level. In other words, you take the two app stores, which are the two choke points of the Internet, the iOS and Android app stores, and you work with them, basically mandate them, to ensure that at the point at which an app is downloaded, there is age verification, which then applies to all apps downloaded by that device through the app stores. That's rather techy, but it's rather important, because I think what we're now doing is that the politicians say, job's done, the newspapers are full of, ah yeah, we're biffing Silicon Valley on the nose and there's going to be this age verification. And meanwhile, families are left completely confused, particularly parents who don't use online stuff a lot, because they're having to jump through so many different hoops.
B
So that's one example. Now do democracy and misinformation.
A
So I think that's way harder. I think some of it is easy. Take, for instance, how politicians and campaigns use the targeting tools that a lot of these platforms have to target messages or political ads. Again, you just need to make a choice. Maybe you don't want to have political ads at all. I mean, actually, I tried and failed early on in my time at Facebook to persuade Mark Zuckerberg and the board to stop running political ads altogether, because I thought it was a total nightmare for the company: it just caused endless controversy and generated almost no revenue whatsoever. But for various American reasons that you'll be familiar with, Larry, this weird jurisprudence that seems to think that paid political adverts are a form of self-expression somehow linked to the First Amendment, you can explain that, I never got it myself, but never mind, it didn't happen. But again, you just decide. You don't want political ads, or you don't want targeting? Just decide, pass the law, and make sure it applies to everybody. I think misinformation is way, way, way harder, because, and it sounds terrible, we all in this room no doubt don't like misinformation, but misinformation is something that's half true and half false. It's someone talking complete nonsense. It's gossip. It's malicious stuff that is deliberately put out there. But it's not illegal. Misinformation is not illegal. So when people complain about, oh, I don't like this online, that online, this is all content that democratic governments and parliaments over generations have deemed to be perfectly legal. And here's the great problem with misinformation. One of the basic tenets of a free society is the right to talk rubbish, to talk crap, to talk nonsense. How on earth do you police that? How on earth do you police that in a society where there is absolutely no consensus about where the line would be drawn?
I guarantee you, you guys in this hallowed hall at the LSE might be able to come up with an understanding you all share about misinformation. But you try and get Nigel Farage to agree to that, and the millions of people who support him, and you will fail. Because one person's hate speech is another person's right to free expression. Again, this is all about content that is legal. I may not like it, I may agree with you and disagree with Nigel Farage, but you don't speak, and I don't speak, for society at large. And this is particularly acute in the United States, where the very definition of what is acceptable or not acceptable speech now splits the country right down the middle. And I personally think this is where the politicians need to look more in the mirror, rather than constantly shouting at Silicon Valley to come up with a Solomonic compromise which they themselves have failed to find. And so on misinformation, yeah, it doesn't sound nice, does it, misinformation? But you then double-click and really look at what it actually means, and then ask yourself whether you want that to be banned. And just one more thing. I'm really getting on a roll here, but it's a really important point. During the pandemic, governments, and I would have been in exactly the same position if I'd still been in government, were freaking out. Totally understandable. Just imagine you're in government when people are dropping like flies. The vaccine hadn't been invented. No one knew where this thing had come from. There'd been this extraordinary, unprecedented shutdown. And at the time, the Biden administration became very, very alarmed and aggravated by what they felt was content circulating online, whether on YouTube, whether on Twitter, whether on Facebook, which would discourage people from taking up vaccines. They called it content which would lead to vaccine hesitancy. That was the sort of jargon that came up.
So I used to receive frequent calls, often in the dead of night, from the lead coordinators of the COVID effort at the White House, yelling at me sometimes: you must take this piece of content down, because if people see this on Facebook, they're less likely to take up a vaccine. And I remember vividly one piece. It was a satirical thing. It had Leonardo DiCaprio sitting on a sofa looking at a TV screen, showing one of those US medicinal adverts with all those caveats at the end: if you take this medicine, you may grow a second nose, you may develop bulbous growths on your elbows, all that kind of weird stuff. So it was a satirical thing, someone saying: in a few years' time, that's what you're going to remember if you take this vaccine. But it was satire. And I remember saying to this guy who was yelling at me, I said: are you serious? You want us now, at the behest of the government, to remove satire from the Internet? You've got to be kidding. I didn't do it. I was very firm in my view that we couldn't and shouldn't do that. But it's just a little example. He said: no, but it's misinformation, because it's satire which is implying that there might be after-effects. And I said: I don't know, there might be. But in any event, you can't ask the private sector to suddenly start expunging satire. Now, it's an easy example for me to give you; there are many others that are, of course, way more edgy than that. But it is a little insight, I think, into the huge difficulty when people say: just get rid of misinformation. Because, you know, one person's misinformation is another person's right to have fun. It's way, way, way harder, I think, than age verification, which I think by contrast actually has a simpler solution, which we should be taking up more than we currently are.
B
And I'm going to resist the urge to fight with you on that one. But on the last point, I actually think, so first, we have libel law; we actually regulate speech all the time in other contexts. So you don't have to draw the line where you're getting into the really fuzzy cases. You can say that anybody who knowingly, or intentionally, or with reckless disregard, puts out speech they know is false, that we can punish. And then the only question is, can we hold the social media platform liable? And we can do that too, because we only need to worry about the stuff that goes viral. So there are ways to think about doing that.
A
Actually, can I say something just about that? Because it's so critical. Stop me, I know, stop me if it's not interesting. But I suspect you guys are here because you're into this stuff. This issue of the liability of the platforms is tremendously important, because there is this famous law in the US, Section 230 of the Communications Something Something Act.
B
Decency Act.
A
The Decency Act, right. It's 25 years old.
C
Right.
A
And this is what they say, this is the kind of legend, if you like, in Silicon Valley: that without Section 230, the Internet would never have evolved in the way that it did. Because what it more or less said, it's a two or three sentence clause, is that as a platform, where you're providing the pipework for human beings to say stuff and communicate with each other, you can't be held responsible for that content, because you're carrying the content but you're not generating it. Right? Like a telephone operator or a newsstand. What?
B
A newsstand.
A
A newsstand, yeah, or a telecoms operator can't be held responsible for what we say to each other on the phone. Now, that held for a long time, but I think it is collapsing at a very dramatic speed now. And I don't think Silicon Valley, or indeed governments around the world, have realized what's going to hit them, particularly Silicon Valley. Because, as many of you who use social media will see, more and more of the content that you now see is not actually a human being posting something and sending it to you or sharing it with you. It's content which is plucked from the farthest recesses of the Internet by an algorithmic machine and then recommended to you because the system thinks you might be interested in it. Or, which of course will become even more the case now, synthetically generated content, visual or otherwise, generated by these very powerful new generative AI content creation systems and then recommended at you. And at that point, I'm not a lawyer, Larry, I don't know what you think, but I kind of think: how on earth can the platforms then claim they've got Section 230 protection? Because it's not carrying one human's content to another human. It's the platform itself recommending content, often content which it has generated synthetically, and peppering it at you. And here's the great irony: the familiar critique of social media, you know, people like Professor Zuboff, surveillance capitalism and so on, I always thought was totally wide of the mark, because it had this sort of caricature of the systems being able to burrow into our neural pathways and control us completely, when in fact I think human beings under old social media had much more agency than that. Here's the irony: that critique, I think, is now becoming more and more true as social media mutates into a much more synthetically driven, automated experience.
I was in Silicon Valley just a couple of weeks ago, and I don't think folk in Silicon Valley have even got their heads around what it would be like to operate their platforms when, you know, distressed individual X is interacting with these systems, and you're seeing some of the litigation starting in US courts over some of the generative AI bots. I don't think they've got their heads around what it means to operate in a space now where more and more people are going to be using their services and they won't enjoy Section 230 protection, because they'll be inserting themselves directly into one half of the relationship which is being connected on their platform. So I think it's a massive deal.
B
Let's actually take a second on the surveillance capitalism point, because that's one of the other core critiques, and you touched on, I think, the heart of it. Right, which is, you mentioned earlier the ad-driven model, which you think is a good model. The critique of that is: these companies are hoovering up massive, massive amounts of information about each of us, and the model drives them to sell it, and they sell it to companies who can then use it to target us. Now, there's a sense in which that's what advertising has always been. But I think the argument is, one, the sales are for all sorts of uses that may or may not be good or bad, and two, the difference in degree is so great that we're not talking about just a quantitative difference from what old-fashioned advertising could do in terms of manipulating people. So it's not burrowing into your neural networks, but it is being vastly more sophisticated at manipulating people through this information that's hoovered up and then sold in ways that can be targeted very specifically. And that itself is a problem.
A
Yeah, I mean, I don't know. The ads I see on Facebook, I think, are pretty rubbish. Quite often they're not well targeted at all. I don't know sometimes why I get what I get. I mean, I don't think I snore. Do I snore? No? And yet I keep getting ads for snoring aids. Oh, no, no, no, no. I mean, they do hear everything. I'm being a little bit facetious here, but I think this kind of description of these amazing things, oh my God, they're so accurate. Yeah, maybe a bit, but they have.
C
To be a bit.
A
I mean, there's no point me receiving ads for a shoe shop in Melbourne, Australia, because I'm not going to go to a shoe shop in Melbourne, Australia. So to that extent, yeah, receiving relevant ads, fine. But I'll tell you two things why I just don't buy this. Firstly, the ads business, the commercial incentive on the platforms, actually incentivizes the platforms to do the opposite of what they're constantly alleged to do. Because the last thing you want, whether you're selling plastic ducks or garden furniture or washing-up powder or cars, is to have your ad next to some vile, revolting, still less illegal, piece of content. So I think the sort of Zuboff school totally misunderstands the advertising business model. If you're being paid by advertisers, and the platforms don't run ads for illicit drugs and biochemical weapons and so on, they've got all the rules you'd expect on not running such ads. They sometimes have controversies, around kitchen knives or weapons and things like that, which rightly blow up from time to time. But actually the overall incentive is to have bland content on the platform, because that's what the advertisers want, and they pay these people's lunch. That's the first thing. The second thing is, for those of you who look at the ads that you see while you're scrolling on screen, I mean, most people just flick right past them, but if you go on those three dots next to the ads, you've got way more control over what ads you see: why you see them, not seeing them, blocking them, saying you don't want to see them. You can now, I think as of last week, pay a little subscription and have no ads at all. You've got much more control over ads online than you do in many other formats. So look, they're powerful systems. They're based absolutely on targeting.
They're based absolutely on learning, from a considerable amount of data, something about the identities of people, sufficient to be able to then target their ads. All of that, of course, is going to become increasingly automated by AI, which is clearly going to have a massive disruptive effect, and already is, on the advertising industry. Because relatively soon, if you're, I don't know, a small or medium-sized company and you advertise on Facebook, you'll be able to say: I want to target my galoshes at men between the ages of 45 and 65 in the Midlands who like hiking, or something. And you will very quickly just be able to put that into the system. And not only will the systems be able to target the ad, they will be able to create the ad itself for you to play around with. So this thing is highly automated, but I think there's a level of visibility and transparency, and as I say, the fundamental commercial incentive is kind of weirdly the opposite of what the platforms are constantly being accused of. That's my view.
B
I have so many questions, but I want to get to some of the fun ones. In particular, you didn't write a tell-all book, but how do you explain, what do you make of, the turnabout love fest from the tech bros with the Trump administration?
A
Yeah, I mean, I just think it's extraordinary, isn't it?
B
I mean, we know all these people.
A
So, yeah, the first thing I'd say is, here was the weird thing. There was I, at a pretty senior level in Silicon Valley, and for the years that I was there, in all these discussions about how to deal with governments and politics and elections and so on, I was the only person who'd ever been elected to anything. But I was always the first to say: look, you need to keep a kind of healthy distance from politics, particularly if you're a social media platform, because you're dealing in the most controversial thing possible, which is human speech. You're not building widgets, you're not sending off rockets to the sky. It's speech. And that's why companies like that are never going to be more than a heartbeat away from controversy, because you're dealing with the thing that is, in many ways, the most important thing to all of us: it's communication, it's speech and so on, including political speech. And that was part of my job, to rebuild relationships with governments and parliaments and lawmakers and regulators and so on. But I always said: look, innovation and great engineering and all the rest of it, in my view, generally take place best at a certain arm's-length distance from politics. So when you saw all these tech bros sort of stampede their way to Mar-a-Lago and enter into all these endless deals with the Trump 2 administration... And you know Silicon Valley as well as I do. One of the curious things about it, for those of you who haven't visited, is that it's this centre of extraordinary innovation and wealth and ingenuity, and yet it's an exceptionally dull place to live. I mean, the sun shines all the time, but there's no culture, there's no street culture. Everyone just drives around in their car. Miriam and I tried dinner parties; people would turn up at 6.
Here you would stay much longer, but there, people turn up at 6, they drink a little thimbleful of wine and they bugger off by 8:30. So it's a really weird contrast between the sort of energy of the engineering and the absolute flatness of it as a place, socially and culturally. But also, the other thing, as you know, about the whole tech bro thing, is that they believe that history is made by great individuals, great men, which is why they've all got this fascination with ancient Rome: you know, great statues were erected for men who did great things. And the rest of us, the sort of Lilliputian folk down there, will admire these great men who will occupy Mars and do all these great things. That's the way they think. So they totally buy into the great man theory of history. And yet they are also slavishly herd-like in their behaviour. So when one or two of them go down to Mar-a-Lago, they go: oh shit, I'd better go down to Mar-a-Lago as well, because Jimmy next door is no doubt ingratiating himself more successfully with the government. So it's so weird. You get this hyper-individualistic culture combined with this absolute herd-like behaviour, which has then led to these extraordinary images of all of them lined up like a sort of school outing behind Trump at the inauguration, and all of them saying, oh sir, you're so marvellous, you know, around the table in the White House. Look, I hope that, much as the pendulum has swung from great animosity and suspicion between Silicon Valley and political power in D.C. and has now swung the other way, they find some kind of more sensible middle. In a great capitalist economy like the US, there's only one thing worse than having businesses and governments at each other's throats: it's having them in each other's pockets.
And I just think it's typical of the place; it's so faddy. You get these fads, and it's just swung totally the other way. What are they going to do if a Democrat president comes along next? Say, oh no, actually we agree with you? They can't keep doing that. Surely they've got to have a certain through line, where they don't keep swinging in this whiplash way from left to right and back again. So I hope over time they find a more sensible middle than is currently the case.
B
You're a nicer person than I am. I mean, I'm just going to say, I think, for most of them, in their heart of hearts, this is where they've always been. They felt super attacked for those years by people on the left, and Trump finally gave them permission to express what they really think. Well, not all of them, by the way. There are exceptions, you know what I mean?
A
Well, there aren't that many exceptions. But I'll tell you what is true, just as an ex-politician, and this is just a dispassionate political observation: the Democrats spectacularly mishandled Silicon Valley. I mean, to go from having basically the whole of Silicon Valley support the Democrat Party, as they did for years, to basically having all of them line up behind Donald Trump, just as an ex-politician, that is a spectacular own goal: that you haven't worked out a way of trying to split them a bit off from each other, keep at least some people on side. And whatever you say about big tech in Silicon Valley, these are very important entities. They generate a huge amount of donations for the Democrat Party, because I'll tell you what, Wall Street isn't giving to the Democrat Party anymore. So I agree with you that this kind of, oh gosh, the Biden administration wasn't nice to us, we'll be nice to the next lot, I agree with you, it's pretty unseemly. But just as a political fact, is the Democrat Party just going to try and ignore or berate the whole of Silicon Valley going forward? I don't think that's going to work very well for them. They're going to have to work out some way of having a better relationship with Silicon Valley in future, it seems to me.
D
Hi, I'm interrupting this event to tell you about another awesome LSE podcast that we think you'd enjoy. LSE IQ asks social scientists and other experts to answer one intelligent question, like: why do people believe in conspiracy theories? Or: can we afford the super rich? Come check us out. Just search for LSE IQ wherever you get your podcasts. Now back to the event.
B
Okay, so I want to ask two last questions and then we'll hopefully get some time for the audience. One on broader issues. So tech issues are just one of the arenas in which the US and China are now locked in a kind of cool war. Not quite a cold war yet. How do you think the two sides are faring against each other? And is there a way out of this?
A
So, to be much briefer than I've been: I think there is an assertion being made, day in, day out, by this new embrace between technological power in Silicon Valley and political power in D.C., and the glue in that embrace, you can hear it, they say it all the time, is that America is going to win against China in the race for AI supremacy. It's literally what they say, over and over again. And my assertion is: they can't win. They can't win and they won't win. And I think one of the biggest things that is going to shape the geopolitics of AI and Silicon Valley over the next few years is when the penny drops for the American technology and political elite that they can't beat China in AI. It won't happen. It's not like the Cold War. It's not like out-spending and out-muscling the Soviet Union and then having a sort of Berlin Wall moment when suddenly there's a victor and a vanquished in the conflict. The Chinese are way too good at this stuff. They've got a much, much more ambitious and, in my view, workable strategy on how to generate energy for this very energy-intensive industry. They've got some of the best AI data scientists on the planet, and they're producing legions of them from some of the best engineering schools in the world. They have none of the compunctions we've got about data privacy and data protection, so they can hoover up vast amounts of data. And as you know, at the moment, in the LLM paradigm of AI, which by the way might change, but right now is a very data-hungry paradigm, they are already leading. They're already beating the world in open-sourcing AI models, and if you look at the leading open-source models, almost all of them emanate from China. That is an extraordinary thing: the world's largest autocracy is doing more to democratize free access to some of these models than almost anywhere else, with the slight exceptions of Mistral in France and Meta in the US.
But if you look at the leaderboards of the open-source models, the Chinese are already leading. So you can't win. You can contest, you can rival, you can compete, and so on and so forth, but you're not going to win, in my view. That will lead to a dramatic shift, or at least should lead to a dramatic shift, in thinking in America, away from thinking that you can just unilaterally, belligerently beat China. Instead, I think the Americans will need to relearn the importance of partnership with India, Europe and others as this geostrategic technological standoff between China and the US evolves in the years ahead.
B
Okay, I gotta ask this last question, because we're sitting here at a British university. This is kind of off topic, but I mentioned in the introduction that you made this politically, what I regard as, quite courageous decision 15 years ago.
A
You're not going to go back to.
B
I am going back to it.
A
Larry, it's 15 years ago.
B
Well, we're not going back to that decision. And as I said, I think it saved people.
A
Right. I wouldn't have come.
D
No.
B
Well, my question is: given everything that's happening to us now and the current funding crisis that we're trying to deal with, give us some reflections on what the UK should do to sustain this fantastic higher education system that is rapidly being destroyed. Then we'll go to the audience.
A
Favorite topic.
B
I told him I was going to do this.
A
No, you did, actually. So, this sounds super glib, but it's not, actually. I actually think, in political terms, a choice needs to be made. You just have to make a choice. If you think that higher education is essential to this country's prosperity in the future, you have to just choose, and then we'll come to all the different levers, you have to choose how to fund it, and to fund it properly. Boy, have I been criticized every minute since for the choice we made. But at least we made a choice, and we made a very big choice. And the big choice we made, to increase tuition fees, then had one very important effect. Obviously it had not a great effect on my political career, but that is not important; it is to me, but not to you. Anyway, it exempted the university system from austerity in one fell swoop. That's basically what it did. The university sector was pretty well the only sector where you saw spending, or the influx of income, massively increase during those austerity years. So much so that many university leaders now refer to that as the golden age of British universities. I wish they'd bloody well said that at the time. At the time, they would privately come up to me and say, oh, we're on our knees, we're going to collapse. And I said, oh, okay, well, I'd better do something about it. I did something about it, and they fled for the hills the moment there was any controversy. I mean, thank God they've got courageous leaders like you now running universities. Because at the time, I tell you, it was just unbelievable how the whole university leadership went silent the moment they actually got what they'd been privately asking for. Anyway, that little bit of bitterness over. You have to make a choice. We made a choice. It's a controversial choice, but it was a choice.
It was a choice to significantly rebalance how much money went in from the taxpayer and how much money went in as deferred payments from graduates, on the basis that graduates generally earn much more than non-graduates do in the labour market. That was a choice. It wasn't a choice, by the way, which came out of nowhere. It was the Labour government, not that the Labour Party has ever chosen to admit to this, who introduced fees in the first place. They then, by the way, increased fees, this was even before I came into government, having said they wouldn't. They then commissioned the famous Browne review from Lord Browne, which we then inherited in the coalition, and which advocated no upper limit to fees whatsoever. And of course they've now increased fees, but only marginally. And so what they're doing now, which is the worst of all worlds, is that they recognize there's a huge problem, that the British university system is on its knees, and they've done this tiny little half-decision of a small increase in fees, which will make bugger-all difference. They then do totally countervailing things which are going to make the situation worse, by asking the university system to basically pay a levy on fees for foreign students. You've only got a number of variables, some big variables: taxpayer income; deferred graduate income, basically a graduate tax, and then of course there's all the detail about how you do that, and the interest rates and so on and so forth; foreign income; and numbers. What you can't do is carry on increasing numbers, make it harder for foreign students, not increase fees, and reduce taxpayer funds. You can't do that. Well, you can, but you will destroy the British university system. And I just think you need to make a choice. And I find it so frustrating. This government is going on about growth and prosperity, and this is one of the things we've still got a little bit left of in this country.
I mean, look at the great universities in London, yours prominently amongst them. It's one of the few crown jewels we have left. Honestly, I was seven years in Silicon Valley, and no one ever took any interest, any interest at all, in what happened in the United Kingdom amongst all these tech bros, except for two subjects. One: what did I think of Harry and Meghan? Which was basically not much at all. And secondly: oh, I've got my nephew at university there, or, I did my PhD there. The fabric, the connectivity, that was so valuable to us, and we're screwing it up. So the one thing I would say, and of course, dare I say it, I'm afraid I'm exhibit number one for why they're not making the choice, because everyone goes: oh well, look, you can't touch that third rail, you'll end up getting mashed up like Nick Clegg. But that is the fundamental problem: people aren't choosing. And I do have this rather old-fashioned view, and I've said this to them privately, I'll say it again publicly, or semi-publicly, here. I know it's hackneyed: to govern is to choose, and it's often to choose between a lot of very unpleasant, invidious choices. In my case it was particularly invidious because I wasn't frigging Prime Minister. I hadn't won the election; I had no right, I had no mandate, to apply my manifesto, particularly because the two larger parties at the time, Labour and Conservative, agreed with each other on this more than they did with the Lib Dems. So I had nowhere to go, I had no leverage. But setting that aside, you have to choose. And the longer we refuse to choose about universities, the more I think we will rue the day. In 30 years' time, or maybe less, we'll say: oh my God, why didn't we do more to save the university system? It's criminal, what's going on.
B
Rant over, but full agreement. Questions. So let's start over there. Do we have a mic up? We do. So we'll start over there, with either one of those guys.
C
Thank you, Nick. My name is Eliot, I'm an urban policy researcher. Thank you for your speech, but I also want to raise what happened in another scandal related to the Liberal Democrats, which is the Post Office scandal, caused by the errors of the technology system Horizon, where we kind of see the track record of relying on big tech. So the question I want to ask you is how impressed you are by Tony Blair's approach to technology companies and Silicon Valley, especially, you know, Larry Ellison, the billionaire owner of Oracle, who is trying to push for digital ID in this country via the Tony Blair Institute, and who is also trying to join another group of US tech companies trying to buy ownership of US TikTok, and Oracle wants to be in charge of the algorithm, by the way. Larry Ellison is also a big donor to the IDF, so maybe any value extracted from the data we create can be transferred to the IDF, which I will leave everyone else to judge. So my question is: given the track record of big tech companies, how would you approach the balance between tech power and people power? Thank you.
B
Wow.
A
Post Office, Post Office Horizon, Blair, ID cards, Ellison. That is a smorgasbord of good topics. What can I usefully say?
B
Pick one.
A
I'll pick one, which is, perhaps surprisingly, I mean, I've become more French in my thinking about tech sovereignty. I think the sort of uncritical over-reliance on American tech in this country is now a real danger to the sovereignty of this country. There isn't a single bit of the core tech stack which is not owned, run, administered by the US. Our phones are US-designed. All the software apps are basically US. The operating systems, the two of them, are US. Your data is stored on AWS, Azure and so on, on US cloud. Just imagine if they were French. It's really sometimes worth remembering this: just imagine what the Daily Mail and Farage and people in Parliament would say if all that dependency was on the French, even though they're just a few miles across the Channel. But because we've had this fantastic, intimate and positive relationship across the Atlantic in the post-war period, we've just become completely inured to the comprehensive loss of agency and sovereignty. And that would be fine if the intimacy of that relationship between the US and the UK was going to carry on, but it's not. It's ending. It's gone. The post-Cold War amity between Europe and the US has gone for good. It's not just Trump; it's much deeper than that. It's for a whole bunch of wider reasons: the end of the Cold War, the US, and we've been on notice of this for years in this country, Obama first said it, needing to pay more attention to the Pacific. There are also cultural reasons. I find it maddening that the British political and media establishment don't focus more on this, as they do in other European countries; they haven't got the solution, but they do at least worry about this complete, wholesale over-reliance. Look at this recent visit from Trump and this almost embarrassing, debasing way that we clung onto the coattails of Uncle Sam. It was just pathetic.
D
It was pathetic.
A
And all these ministers were saying, oh my gosh, some of these big tech companies are going to build a shed in Newcastle, I think. I mean, forgive me, but they are sheds. These data centres are literally sheds with racks of GPUs in them. Those GPUs, by the way, depreciate after about 18 months. But the idea: oh, hurrah, we've got a shed in Newcastle, and look, Nvidia has said we're a nice place. Oh my gosh, it feels so good to be British. It's pathetic. We have to learn to stand on our own two feet more.
A
That's such an LSE way to applaud. It's like being at a Lib Dem conference; it used to drive me mad. The other parties would be really clapping like this, and the Lib Dems would be like this. So, I actually think there is a big problem. I don't think there is an easy and immediate solution. But simply relying even more on the American tech stack to do everything for us, which is the direction of public policy at the moment, seems to be a dumb thing to do, in my view. Far more attention should be devoted to asking why on earth we, here and across Europe, constantly come up with actually good ideas and then let them go. Do you know that two of the most important companies on the planet, absolutely essential in the supply chain for AI, TSMC in Taiwan, which produces the chips, and ASML in the Netherlands, which produces the lithography machines for the chips, were both spin-outs from Philips, a European company? Do you know how many of these companies in Silicon Valley are now run by Europeans? Or DeepMind, which was founded by Europeans. So we not only ingest all that technology; when we have good people and good ideas, we ship them over there as well. This is crazy. At some point we've got to change. The Americans, quite rightly, on military matters, keep saying: you've got to stand more on your own two feet, you've got to spend more money. I've actually started to agree with them on that. Well, then we should also do that in the tech stack. And so, okay, I've made my point. I think you get it.
B
Okay, so what we're going to do is take three questions and then we'll finish. So let's go here for one: the guy in the dark jacket with the grey shirt, who had his hand up all through, and I kept saying, I'll get to you later, I'll get to you later. And this is, interestingly, all male hands.
A
No, no, no, no, no.
D
Up there.
B
Where?
A
Ladies, up there.
B
Ah, there we go. Okay, up there. They'll bring the mic to you in a minute, so go ahead. First question. There you are.
C
Hi, Nick, I'm George. I'm 17.
B
Speak louder.
C
My question for you is: when you look at the rise of Reform, and how Labour won 63% of seats with 34% of the vote at the last election, do you think the UK can kind of organically evolve from a two-party system to a multi-party system so long as first past the post remains in place?
B
Okay, great question, great question, love that. Second question was up there.
A
I love that question too. My name is Mark, I'm a master's student in political behaviour. I really understood the point that there might be an over-vilification of big tech, and that we might over-amplify the risk today with the hype, the scare, of new tech. But there is a risk; it's not everything, but it's here. In terms of concrete solutions, though, what would you advise when we think about the DSA regulations regarding disinformation, misinformation, or even the presence of illegal content online? What's your position? What do you think we should actually, concretely, do?
B
Yeah, and last question, where are you? There you are.
D
Hi, I'm Ivana. I have a very big concern about tech and Facebook and various other things. Basically, I'm Croatian, and in Croatia we have a government in power which is actually still communist. It sounds a bit odd, because it's apparently a democracy, but it's not. And so what we experience in Croatia is that Croatians who are not communists are actually blocked from saying anything which is against the government. And we are in the EU, just so you know. So I wonder, and you will probably know, how come they managed to block our free speech, which is absolutely nothing nasty or anything like that? It's just that they don't want us to expose their corruption, and they are very corrupt, I can tell you that much. So please, the fact checkers: how do they get paid? They get paid by the governments, and we don't have money; the public doesn't have money to pay them. So I think there is a massive issue here with, obviously, private and public, who is running what and who is paying for what. And I wonder if you agree with that, and is that a massive concern across the world as well? Thank you.
B
Thank you sir. Those are kind of related, but let's.
A
Well, I'm afraid I just can't. I just don't know the situation in Croatia. I don't know what level of censorship you're talking about. I'm sorry, it sounds very evasive, but I literally just don't know what content moderation is being forced upon the platforms, as you say, by the government. I just really, really don't know. You mentioned, I think, fact checkers. Fact checking is a super fraught thing, because a lot of people like the idea of it. Sounds good, doesn't it? Fact checkers: oh, they check the facts. The problem is the age-old thing: the same facts get interpreted in different ways. In the US, fact checking basically lost all legitimacy, because around half the population in the US basically thinks that fact checking was ideologically opposed to their interests. So the company is now trying to do a different version of it. They're trying a sort of crowdsourced, Wikipedia-style approach to misinformation, where users get involved in it. But the jury's out on whether that really works yet. So I'm thinking, let me.
B
Because, like I say, I see those two questions as related. So one concern is: what do you do about misinformation and disinformation?
A
The question wasn't about misinformation and disinformation, it was just general, wasn't it?
C
Yeah.
B
And one of the other concerns, though, is about the particular power of some governments, because of their ability and greater resources and access, to use social media to undermine democracy. So that's not, say, the Soviets; that's the actual government in a country, the Philippines or somewhere, using it to.
A
Well, look, these companies, and I had this constant issue: governments would ask us to censor content that we didn't want to censor. And because that was part of my role, I would then say no to demands in Turkey or Vietnam. I don't actually remember it ever happening with Croatia when I was there. But you definitely have authoritarian and semi-authoritarian governments, and you say no, and then they say, well, in that case we're going to block your apps altogether. Which is what happened recently in Thailand, and there were huge public demonstrations against that measure, and the government was forced to back down. But it's a huge, huge step for a company to have its services completely shut down, because it means that in resisting attempts by these semi-authoritarian governments to censor speech, you're depriving everybody of speech. So it's a super difficult judgment to make. And in democracies, or at least ostensible democracies, the companies quite understandably say, well, it's the government of the day, they've been elected; it's not our right to second-guess what a democratic government wants. I'll tell you a very good example where this comes up a lot, or came up a lot: India. In India there's a huge concern amongst the powers that be, which, as you know, is very much driven by the Hindu nationalist outlook of the Modi government, about inter-ethnic tensions and violence and so on. And all the platforms, the American platforms, would be put under huge pressure by the world's largest democracy to take down this content and take down that content. And after a while, as a private company, you kind of run out; you say, no, this is not in accordance with our content standards. And after a while they say, hey, we're the government, a democratically elected government; you have no right, you're not elected, you've got no legitimacy to second-guess what we want. And it's a super difficult thing.
I'm not asking you to sympathise with the platforms, but it's tricky, because you're dealing with some very conflicting priorities here. A lot of people, quite understandably, have huge misgivings about the unelected and unaccountable power of these platforms. But the problem is, they then want those unaccountable and unelected powers to be deployed against, you know, democratic governments who people might feel are doing the wrong thing. So it's a very tricky one. I don't know about Croatia specifically. On the general question, well, look, I'm afraid I'm going to have to give you the most annoying answer of the evening: please read my book. There are several chapters about what I actually suggest you should do about it, particularly when it comes to AI. I think there are a whole bunch of things that, as I say, the major techno-democracies can do together to establish some guardrails: on data use, on data flows, on transparency, on far greater user controls and user agency over their experience, and to be far more transparent about how these foundation models, which are the foundational layer of the AI stack, are trained in the first place. There's a lot that can and should be done. My point is that if everyone does their own thing in disparate ways, it not only becomes messy, it becomes very fragmented. Which is why, certainly when it comes to the regulation of AI, or the regulation of the effects of AI, the more that can be done by the major techno-democracies together, the better. Now, the question about electoral reform. George, was it? Yeah. So the first thing I'd say is it's going to be amazing to see what happens at the next election, because if you try and squeeze what we now have in this country, which is basically four-, five-, in some parts of the country six-party politics, through a two-party mincer, which is what first past the post is, you're going to get some crazy results. There was a local election result in Cornwall recently where the winner got 18% of the vote.
So, you know, if the vote gets distributed like that, you're just going to get this winner-takes-all effect, and I think it's going to be like Russian roulette in a lot of parts of the country. It's going to be super, super weird. Do I think the first past the post system will change organically? No. You need an Act of Parliament; you need a deliberate decision to change the electoral system. I completely failed in 2011 to persuade the British people, in that hapless referendum, to change the electoral system. And what I learned there was just a classic lesson in power politics. I thought, as it turned out very naively, that I should put on the ballot paper in 2011 not the electoral system that I actually preferred, which has always been a much more proportional system, but the system that the Labour Party had put in their manifesto at the recent election, because I knew there was no way that the Lib Dems on their own, with the support of the Independent newspaper's editorial board, were going to win the case in a country which is deeply conservative about its politics. So I thought, I have to get one of the other two larger parties to at least advocate for this. And of course I didn't expect, though I should have done, that the Labour Party, then suddenly in opposition, would decide that the last thing they were going to do was stick to their manifesto commitment and actually campaign for electoral reform. So we got completely massacred, because both right and left basically either campaigned against it or sat on their hands. So you have to have one of the major parties in politics over a barrel. You have to have them completely over a barrel. Now, how you do that, in what circumstances that happens, I don't know. But this system is wholly unfit, it always was, but it's wholly unfit for the multi-party choices that people are now demonstrating at the ballot box. You know, I think the sort of fragmentation on both right and left is here to stay.
Who knows, maybe it will require an electoral outcome at the next election which is just so crackers and so chaotic and so unrepresentative that maybe at that point people will realise that the game is up. And it will be very interesting to see whether Nigel Farage, who ostensibly is in favour of electoral reform, continues to be in favour of electoral reform if he's riding high in the polls. History suggests that parties always love electoral reform when they're in opposition, and then have all sorts of excuses why it's just not top of their in-tray when they get into government. But it's going to be really, really interesting to see what happens at the next election. It could be very, very, very chaotic.
B
Okay, so a couple of things before we end, which will be in a second. First, Nick and Miriam still have kids in school, so there's a book sale outside; go get one, and he'll stay on stage to sign copies for a little while, as I understand it. Also, Nick's rise in politics, which was somewhat meteoric, came about when there was a public debate, which was sort of the British public's first opportunity to see him in action, and he so demolished the other candidates who were there that it really launched him. And you got to see why tonight. So thank you so much. Thank you all for coming. And go buy a book.
A
Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk/events to find out what's on next. We hope you join us at another LSE event soon.
Podcast: LSE: Public Lectures and Events
Host: London School of Economics and Political Science
Date: October 8, 2025
Guest: Sir Nick Clegg
This episode features a thought-provoking conversation between Nick Clegg—former Deputy Prime Minister of the UK and ex-President of Global Affairs at Meta—and Larry Kramer, President and Vice Chancellor of LSE. Clegg discusses his new book, How to Save the Internet, which examines the growing fragmentation of the internet (“the splinternet”), the tension between technological globalization and political deglobalization, and his nuanced perspective on how to preserve a free, open, and global internet while addressing rising calls for regulation, safety, and sovereignty.
“I think it [the internet] can die through neglect… In an age of AI and particularly in an age of political deglobalization, [it] needs this deliberate act of political decision making amongst the major techno democracies.” —Nick Clegg (09:32)
“I am an unapologetic sort of old-fashioned liberal who believes that if you empower people to express themselves…it is a good thing. And I think social media has emancipated self expression on a vast, vast scale.” (15:02)
“It is highly fashionable and…lucrative…to say it’s all the fault of technology and algorithms. I think there’s a lot that is wrong with them…but I just don’t believe there is any evidence…that technology is the primary cause of society’s ills.” (18:44)
On Social Media’s Harm:
“There is an over-ascription…to those technological factors a whole bunch of wider forces that can drive, whether it’s political behavior, whether it’s the shifts in adolescent mental health.” (21:24)
On Regulation:
“One of the basic tenets of a free society is to talk rubbish…How on earth do you police that?” (28:29) - Example: The Biden administration’s push to remove vaccine-related satire during COVID, which Clegg resisted as overreach (30:40).
“How on earth can the platforms then claim they’ve got Section 230 protection because…they’re recommending content…sometimes generated synthetically…and peppering it at you?” (35:04)
“The ads I see on Facebook, I think, are pretty rubbish… Quite often they’re not well targeted at all.” (38:25)
“There’s only one thing worse than having businesses and governments at each other’s throats: it’s having them in each other’s pockets.” (45:37)
“They can’t win. They can contest, they can rival, they can compete… but you’re not going to win.” (50:54)
“To govern is to choose and it’s often to choose between a lot of very unpleasant, invidious choices… But you have to choose.” (54:00)
“If the vote gets distributed like that, you’re just going to get this winner-takes-all effect… It’s going to be like Russian roulette…” (68:30)
“… You’re dealing with very conflicting priorities here… people have huge misgivings about the unelected and unaccountable power of these platforms. But…they want those powers to be deployed against, you know, democratic governments.” (68:50)
“At some point we’ve got to… learn to stand on our own two feet more.” (63:10)
Nick Clegg’s conversation at LSE offers a balanced, insider view on the challenges facing the global internet: the risk of fragmentation, the need for regulatory coherence, and the tension between openness and safety. His defense of connectivity as a net good is matched by thoughtful acknowledgments of tech’s real harms and an insistence on nuanced, evidence-based regulation. The episode provides rich context for anyone interested in the intersection of technology, politics, and society as the digital world faces a decade of profound change.