Podcast Summary
LSE: Public Lectures and Events
Episode: AI, Technology and Society: Shaping the Future Together
Date: November 24, 2025
Host: Larry Kramer, President and Vice Chancellor, LSE
Panelists:
- Helen Margetts (Professor, Oxford Internet Institute; Visiting Professor, LSE DSI)
- Cosmina Dorobantu (Professor of Practice, LSE Data Science Institute)
- Marianne Dumas (Assistant Professorial Research Fellow, LSE Grantham Research Institute)
Episode Overview
This episode brings together acclaimed academics to examine the interplay between artificial intelligence (AI), technology, and society. Their discussion covers the most pressing challenges and opportunities posed by AI, focusing especially on policy, democracy, sustainability, inequality, and the evolving role of the social sciences. The panel emphasizes not only how AI is transforming society but also how social science must adapt to, and even shape, the technological future.
Key Discussion Points and Insights
1. AI as a Social Transformation, Not Just a Technological Shift
- Helen Margetts recasts AI as fundamentally a "social transformation," affecting every facet of society, economy, and democracy, not just technological systems.
- "AI isn't really a technology problem, right? AI is fundamentally about people." (B, 03:10)
- The rapid public adoption of generative AI (e.g., GPT models) means social scientists must rethink frameworks like the logic of collective action and the role of the state.
Notable Quotes:
- Margetts: "The logic of collective action, for example. How does that change in an AI-powered world?" (A, 07:11)
- Implications for labor markets are significant, with large-scale displacement posing massive challenges for fiscal policy and possibly necessitating universal basic income.
2. The Imperative for Social Science Engagement
- Cosmina Dorobantu shares lessons from her early Google days, reflecting on how previous waves of techno-optimism ignored social science expertise, with problematic results for society.
- Dorobantu argues for embedding social scientists within tech companies to anticipate and mitigate negative social impacts, so that an earlier mistake is not repeated with AI:
- "One of the things that we got wrong 15, 20 years ago was that there was very, very little social science expertise inside of those tech companies." (C, 13:00)
Notable Quotes:
- Dorobantu: "If we had managed 15, 20 years ago to embed expertise into the tech companies... would today's world look different? I'm convinced the answer is yes." (C, 14:40)
3. AI’s Role in the Energy Transition and Sustainability
- Marianne Dumas brings a sustainability focus, detailing research on how AI is a "general purpose technology" with both positive and negative spillovers for the energy transition.
- Dumas finds evidence that AI disproportionately benefits clean technologies over dirty (fossil-based) ones, offering hope for targeted innovation policy.
- However, she stresses that AI's contribution to resource use, particularly energy and water, deserves attention, proposing both ethical reflection and new economic models (e.g., per-query pricing).
Notable Quotes:
- Dumas: "If the spillovers of AI are higher to clean technology than to dirty ones, that gives us a leg up in the transition... And in the data, we find indeed that clean tech are drawing on AI much more than dirty ones." (D, 18:00)
- Dumas: "Any industry is going to need energy. AI is not special in that regard. That's why we need to find planet-compatible ways of generating energy." (D, 21:40)
4. How AI Can Transform Social Science (and Vice Versa)
- AI is not just a challenge but also a tool for social scientists:
- Speeding up and expanding traditional research methods (e.g., large-scale AI-facilitated interviewing)
- Integrating diverse data to understand complex interdependencies (e.g., policy modeling)
- Imagining future “co-social scientist” assistants
- Dorobantu: "The very technology causing upheaval in our lives can actually really help us understand our world better than ever before." (C, 22:38)
Addressing Bias and Equity:
- Both Dorobantu and Kramer stress that pursuing productivity gains must not overshadow equity concerns:
- "We tend to have higher standards of our technologies than we do of humans... if we don't study whom we're leaving outside... we won’t have a way to tackle it." (C, 28:30)
- Kramer: "The economic problem requires us to think up a couple levels from where the AI itself is... we have a huge problem going forward." (B, 29:06)
5. Democracy, Regulation, and the Double Edge of AI
AI and Democratic Governance
- Margetts notes that media and tech pundits often predict dire threats to democracy, but research has not always backed this up.
- Large language models can, in experiments, foster consensus and debunk conspiracy theories, but more real-world deployment and research are needed.
- Microtargeting, widely feared as a democratic disruptor, showed no significant advantage over standard persuasive messaging in recent studies.
Notable Quotes:
- Margetts: "We need social science... to show how [AI] could be good for democracy... research suggests that large language models are good at helping people to find common ground." (A, 40:00)
Discussion of Regulation
- The panel critiques slow, sometimes agonized regulatory responses. The seven-year debate over the UK’s Online Safety Act is cited as an example of regulatory lag and overreliance on formal lawmaking.
- "It's really important that we don't put everything on regulators... there are standards, there's education, there's all sorts of ways that we can shape the way these technologies impact us." (A, 46:51)
AI’s Social and Emotional Impact
- The rise of sustained engagement with AI—for companionship and emotional support—raises concerns about dehumanization, shifts in social norms, and the quality of human connection.
- "We need to understand a bit more about how it's gonna play out." (A, 50:10)
6. Open Audience Q&A
Monetizing Access & Resource Equity (53:31)
- Audience Q: Risks that per-query pricing could re-monetize access to information and deepen inequalities.
- Dumas: Weighs costs, the need for quality content, and "reflective" use of AI services, but agrees inequity must be addressed. (D, 54:13)
Regulating Harmful Commercial Influence (56:12)
- Audience Q: "What would stop OpenAI and a corporation from making deals to manipulate young consumers?"
- Margetts: Highlights urgent need for regulatory capacity across all sectors and new scrutiny tools. (A, 57:10)
- Dorobantu: Calls for deeper transparency and partnerships to understand LLM content and potential manipulations. (C, 58:29)
Deepfakes & Legal Responses (61:18)
- A shared regulatory and technical response to deepfake harms is needed, particularly for non-consensual sexual imagery. Current efforts are too technology-focused; the law must keep up.
- "It should be easy, shouldn't it?" (A, 63:43)
AI Power & Centralization (65:24)
- Concerns over AI’s potential to centralize power, rendering democratic decisions less relevant:
- Dorobantu: The more data and compute, the greater the centralization, and our current solutions are lacking. Points to local initiatives (e.g., India) as promising. (C, 65:56)
Intergenerational Inequity (67:56)
- Audience Q: Will elderly people be excluded by rapid AI adoption?
- Margetts: Risks of inequity are high, but vulnerable groups will vary. Young people face challenges too, with AI-driven job application filtering. (A, 68:08; C, 70:11)
Democratic Innovation via AI Moderation (74:05)
- The panel supports Taiwan’s AI-facilitated consensus-building as a model for inclusive policymaking, adaptable in other democracies and especially for sustainability policy and behavioral shifts.
- Margetts: "AI moderators were much better at coming up with statements that there was wide agreement on..." (A, 76:00)
- Dumas: Using AI-enabled deliberation for shaping sustainable consumer behaviors. (D, 78:22)
Future-Proof Jobs & Skills (79:43)
- Audience Q: “What jobs or capabilities are future-proof in an AI-intensive economy?”
- Dorobantu: Uncertainty is high; traditional “safe” jobs are at risk, but jobs will adapt. Task automation ≠ job destruction; transformation takes time. (C, 80:14)
- Kramer: Emphasizes critical thinking, collaboration, social sciences, and the need for lifelong learning—future-proofing via adaptability over specific technical skills. (B, 84:53)
AI Model Diversity & Energy (87:51)
- Reducing reliance on large language models by encouraging smaller, more targeted, and energy-efficient alternatives. Diversity of models is key to sustainability and innovation. (D, 88:17; A, 89:57; C, 91:29)
Memorable Quotes & Moments
- Helen Margetts: "AI is sold to us as a technological transformation, but it’s really a social transformation. It’s completely about people." (A, 04:41)
- Cosmina Dorobantu: "I’m spending a lot of my time these days trying to figure out how to bring those communities together… There’s an awful lot of openness and desire to collaborate on both sides." (C, 14:50)
- Marianne Dumas: "General purpose technologies start irrigating the entire economy..." (D, 16:00)
- Larry Kramer: "We have to rethink the way in which we divide that surplus. That’s the kind of political economy problem that I think we need to get to." (B, 29:06)
- Audience Member: "What would prevent Coca Cola and OpenAI to do the following deal: induce adolescents to drink more Coke… is it legal?" (A, 56:40)
- Helen Margetts: "We really need to do this research now because at the moment it's not completely ubiquitous, and we need to start to understand what the effects will be…" (A, 50:10)
- Cosmina Dorobantu: "If we had managed 15 or 20 years ago to embed expertise into the tech companies… would today’s world look different? I’m convinced the answer is yes." (C, 14:38)
Timestamps for Key Segments
- [00:17] Introduction by Larry Kramer
- [04:41] Helen Margetts: AI as societal transformation
- [10:57] Cosmina Dorobantu: Lessons from Google, the importance of social science
- [15:56] Marianne Dumas: AI, sustainability, innovation policy
- [22:35] Dorobantu: What AI can do for social science
- [27:36] Addressing AI bias and equity, policy discussion
- [38:58] Democracy: AI’s double-edge, empirical nuance
- [50:10] Emotional impacts & social engagement with AI
- [53:31–65:00] Audience Q&A: Per-query pricing, commercial harm, deepfakes, centralization
- [70:08] Bias and exclusion, intergenerational impacts
- [74:05] Participatory democracy with AI (Taiwan case study)
- [79:43] Future-proof skills, AI and labor markets
- [87:51] LLMs, sustainability, and the case for diverse models
Conclusion
The panelists make a compelling case for integrating social science with technological innovation, warning against "techno-optimism" without critical reflection, and urging society to shape AI’s impacts—whether on democracy, the economy, sustainability, or human connection. They call for proactive research, robust regulation, and greater interdisciplinary collaboration, all while thoughtfully examining tradeoffs and future uncertainties. The episode closes with an invitation to continue engaging with LSE’s ongoing public debate about tech and society.
