Podcast Summary: The Tech Policy Press Podcast
Episode: Promising Opportunities, Distinct Risks: AI and Digital Public Squares
Date: March 6, 2025
Host: Justin Hendrix
Guests: Audrey Tang (Taiwan’s Cyber Ambassador), Ravi Iyer (USC Marshall School’s Neely Center), Beth Goldberg (Jigsaw/Google, Yale School of Public Policy)
Overview
This episode explores the potential for artificial intelligence (AI), especially large language models (LLMs), to support healthier, more democratic online public squares. The discussion centers on a recent collaborative position paper, "AI and the Future of Digital Public Squares", authored by a diverse group of experts. The conversation ranges from practical implementations to ethical dilemmas and future directions for digital discourse and platform design.
Key Discussion Points and Insights
1. Perspectives on Democratic Digital Spaces
- Audrey Tang: Discusses Taiwan’s Sunflower Movement (2014) as a case study in creating digital and physical “public squares” for deliberative democracy, arguing that platform design, not the technology itself, drives polarization.
- Quote: “Polarization is not an inevitable feature of social media. It is a consequence, a direct consequence of how platforms are designed.” (04:06)
- She credits “pro-social media” tools like Polis with raising trust in government from 9% to over 70%.
- Ravi Iyer: Recounts his work as a social psychologist and his time at Meta, drawing attention to how algorithms incentivize divisiveness by rewarding attention-grabbing, often polarizing, content.
- Quote: “If you try to moderate your way out of it, you end up doing really awful things… The actual question that we should be asking is, are we incentivizing divisiveness on these systems?” (05:17)
- Stresses designing platforms to reward content people are proud of sharing, rather than whatever simply “wins” under current algorithms.
- Beth Goldberg: Shares Jigsaw’s interdisciplinary, collaborative approach to building online spaces that genuinely support democratic deliberation, and highlights recent efforts to use LLMs to improve both the scale and the quality of public dialogues.
- Quote: “Our digital public squares today…are not exactly these utopian ideals of pluralistic democratic discourse.” (08:54)
- Stresses collective, cross-sector design to develop and test solutions, such as piloting collective dialogue platforms with LLMs.
2. Risks and Opportunities with LLMs
- The panel addresses known LLM risks: bias, limited interpretability, and the potential to reinforce existing problems if systems are poorly designed.
- Goldberg: Emphasizes both technical mitigations (open source, explainability) and social ones (inclusivity, deep listening). Open platforms and interoperable tools allow for cultural adaptation and transparency.
- Quote: “Build in ways that are more interoperable and open source, so that people can learn, stress test, contextualize…” (12:48)
- Tang: Advocates open-source tools and “judicious use” of LLMs, applying them to summarization and paraphrasing rather than generating new, potentially hallucinatory content.
- Quote: “All the output bits come from the input. So you’re not forcing the language model to hallucinate.” (13:53)
- Iyer: Points to practical benefits: LLMs can identify and rank high-quality comments (those showing curiosity and thoughtfulness) far more cheaply than before, freeing platforms from optimizing for engagement metrics like likes and replies.
- Quote: “Now with LLMs, it’s a lot cheaper…You no longer have to live in a world where you’re optimizing for replies and likes.” (15:01)
3. Four Major Application Areas Proposed
A. Collective Dialogue Systems
(16:08–21:47)
- Platforms such as Polis and Remesh facilitate large-scale, iterative public deliberation, blending qualitative depth with quantitative reach.
- Goldberg: Describes a Jigsaw pilot in Kentucky using Polis+LLMs to scale town hall discussions, elevate consensus, and help decision-makers navigate complex input.
- Tang: LLMs can provide “transcultural capability”—paraphrasing and translating diverse viewpoints for mutual understanding, but warns against over-automating deliberation (“civic muscle doesn’t grow from it”).
- Iyer: Social media often starts with disagreement, but collective dialogue should surface what people agree on, fostering trust and reducing animosity.
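The mechanics behind tools like Polis can be illustrated with a minimal sketch. This is a simplification, not Polis’s actual implementation (which clusters participants via PCA and k-means over the full vote matrix); the data, group assignments, and threshold below are illustrative. The idea: given a participant-by-statement vote matrix and precomputed opinion groups, surface statements that attract agreement across otherwise divided groups.

```python
# Sketch of Polis-style consensus detection (simplified; real Polis
# derives opinion groups from the vote matrix itself).
# Votes: +1 agree, -1 disagree, 0 pass/unseen.
from typing import Dict, List

def group_agreement(votes: Dict[str, Dict[str, int]],
                    groups: Dict[str, List[str]],
                    statement: str) -> Dict[str, float]:
    """Fraction of each opinion group agreeing with a statement."""
    rates = {}
    for name, members in groups.items():
        cast = [votes[m].get(statement, 0) for m in members]
        cast = [v for v in cast if v != 0]          # ignore passes
        rates[name] = sum(v == 1 for v in cast) / len(cast) if cast else 0.0
    return rates

def consensus_statements(votes, groups, statements, threshold=0.6):
    """Statements that EVERY group agrees with at >= threshold."""
    return [s for s in statements
            if min(group_agreement(votes, groups, s).values()) >= threshold]

# Toy data: two opinion groups that split on s1/s2 but share s3.
votes = {
    "a": {"s1": 1, "s2": -1, "s3": 1},
    "b": {"s1": 1, "s2": -1, "s3": 1},
    "c": {"s1": -1, "s2": 1, "s3": 1},
    "d": {"s1": -1, "s2": 1, "s3": 1},
}
groups = {"g1": ["a", "b"], "g2": ["c", "d"]}
print(consensus_statements(votes, groups, ["s1", "s2", "s3"]))  # ['s3']
```

The point of the sketch is the one Iyer makes: a feed optimized this way starts from what divided groups share, rather than amplifying the statements each side cheers loudest.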
B. Bridging Systems
(22:53–26:41)
- Designed to identify and surface points of consensus across divides.
- Iyer: LLMs can identify both the topics and linguistic features (curiosity, compassion, nuance) that foster bridging.
- Quote: “LLMs can help create a space where we start with what we agree upon as opposed to what we disagree upon.” (23:15)
- Goldberg: Cautions against rewarding trivial consensus (“cat memes”), favoring content with genuine bridging language; Jigsaw is working with partners to test upranking of pro-social features.
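The ranking shift described in this section, rewarding bridging attributes rather than raw engagement, can be sketched as a toy re-ranker. The keyword scorer below is a crude stand-in for the LLM classifier the guests describe; the cue lists, weights, and data are all illustrative, not from any real platform.

```python
# Toy re-ranker: sort comments by bridging attributes (curiosity,
# compassion, nuance) rather than engagement. score_bridging is a
# keyword stand-in for an LLM classifier; all cues are illustrative.

BRIDGING_CUES = {
    "curiosity": ["why do you", "curious", "help me understand", "?"],
    "compassion": ["i hear you", "that sounds hard", "appreciate"],
    "nuance": ["on the other hand", "it depends", "partly"],
}

def score_bridging(text: str) -> float:
    """Fraction of bridging attributes detected in the text (0.0-1.0)."""
    t = text.lower()
    hits = sum(any(cue in t for cue in cues)
               for cues in BRIDGING_CUES.values())
    return hits / len(BRIDGING_CUES)

def rank(comments):
    """Uprank bridging comments; likes become only a tie-breaker."""
    return sorted(comments,
                  key=lambda c: (score_bridging(c["text"]), c["likes"]),
                  reverse=True)

comments = [
    {"text": "You people never get it.", "likes": 250},
    {"text": "I'm curious why you see it that way? It depends a lot "
             "on context, I think.", "likes": 3},
]
print(rank(comments)[0]["likes"])  # 3: the bridging comment wins despite low engagement
```

Swapping the sort key is the whole intervention: the divisive comment’s 250 likes no longer buy it the top slot.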
C. Community Moderation
(26:41–33:00)
- Transitioning from central moderation to community-driven feedback and norm-setting.
- Iyer: Prefers “feedback” over “moderation,” emphasizing legitimacy, scale, and constant behavioral signals.
- Quote: “It’s better for scale, it’s better for legitimacy...The paradigm of content moderation is not going to solve the problems we need it to solve.” (28:14)
- Tang: Taiwan's “pro-social media” experience shows that bridging systems can be integrated into the main feed, not just for fact-checks or viral divisive content.
- Goldberg: Community leaders (subreddit mods, Discord admins) want bespoke, nuanced tools; LLMs can automate both harmful content removal and the promotion of positive contributions.
D. Proof of Humanity Systems
(33:00–36:22)
- Addresses the challenge of distinguishing humans from bots/AI online while preserving privacy and anonymity.
- Tang: Advocates for spectrum solutions—“meronymic” (partial) identity, selective disclosure (e.g., age-only proof), and zero-knowledge cryptography.
- Quote: “Everybody will then be able to establish some sort of proof of humanity credentials without overly disclosing really anything else…” (35:02)
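Tang’s selective-disclosure idea can be illustrated with a simplified credential sketch. To be clear about the simplification: this uses a per-attribute MAC with a shared secret as a stand-in; real systems use digital signatures and zero-knowledge or anonymous-credential schemes, which hide far more (this code is not zero-knowledge). All names and values are illustrative. The point it shows: if an issuer attests to each attribute separately, a holder can reveal only the “over 18” claim without disclosing anything else.

```python
# Simplified selective-disclosure sketch: the issuer MACs each
# attribute separately, so the holder can present a single claim
# (e.g. "over_18") for verification without revealing the rest.
# Stand-in for real anonymous-credential / ZK schemes; illustrative only.
import hmac
import hashlib

ISSUER_KEY = b"demo-issuer-secret"   # held by the credential issuer

def issue(attributes: dict) -> dict:
    """Issuer: one MAC per attribute, so claims are separable."""
    return {k: hmac.new(ISSUER_KEY, f"{k}={v}".encode(),
                        hashlib.sha256).hexdigest()
            for k, v in attributes.items()}

def present(creds: dict, attributes: dict, claim: str):
    """Holder: disclose exactly one claim and its MAC, nothing else."""
    return claim, attributes[claim], creds[claim]

def verify(claim: str, value, mac: str) -> bool:
    """Verifier: recompute the MAC. (In practice the issuer's public
    key and a signature scheme replace the shared secret.)"""
    expected = hmac.new(ISSUER_KEY, f"{claim}={value}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

attrs = {"over_18": True, "name": "Alice", "nationality": "TW"}
creds = issue(attrs)
claim, value, mac = present(creds, attrs, "over_18")
print(verify(claim, value, mac))  # True, while name and nationality stay hidden
```

The structural point survives the simplification: verification succeeds on the one disclosed claim, and a forged or altered value fails, without the verifier ever seeing the other attributes.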
4. Responding to Bad Faith Actors
(36:22–41:48)
- The need to balance inclusiveness with the safety and dignity of marginalized groups.
- Iyer: Stopping the reward (attention, money) for divisive/bad faith actors is crucial; feedback and accountability mechanisms can “stop inviting them to our parties” online as we would offline.
- Tang: Describes “troll hugging”—publicly responding only to the constructive parts of even toxic posts, demonstrating that only pro-social behavior gets recognition.
- Quote: “My hobby is called troll hugging. So some people hug trees, but I hug trolls…” (39:39)
- Goldberg: Platform design choices—like pinning the most constructive comment or removing reply buttons—set the tone and discourage trolling by shaping user expectations.
5. Recommendations for Future Research & Deployment
(43:25–49:33)
- Need for transparency, diversity in development, and genuine user agency.
- Tang: Open-source modularity (“Lego bricks”) enables communities to build/contextualize their own solutions, fostering trust and reducing the risk of opaque “social engineering.”
- Iyer: Every system is engineered; the key question is whom it is engineered for. Systems should serve everyday users, not just maximize business metrics or outlier engagement.
- Quote: “The big question is who are you engineering for? … Can we engineer something that’s for—not for us—but for the user?” (46:51)
- Goldberg: Cites three Cs of agency (from Zoe Weinberg): Choice (switching tech), Context (making informed adjustments), and Control (over one's experience and data). Co-design with affected communities is vital to transparent, explainable, accessible systems.
Notable Quotes & Memorable Moments
- Tang (04:06): “Polarization is not an inevitable feature of social media. It is a consequence, a direct consequence of how platforms are designed.”
- Iyer (05:17): “Are we incentivizing divisiveness on these systems? Are we almost paying people with attention to be more divisive?”
- Goldberg (08:54): “Our digital public squares today…are not exactly these utopian ideals of pluralistic democratic discourse. Right? … But we wanted to really harness these opportunities with large language models.”
- Tang (13:53): “All the output bits come from the input. So you’re not forcing the language model to hallucinate.”
- Iyer (15:01): “Now with LLMs, it’s a lot cheaper…You no longer have to live in a world where you’re optimizing for replies and likes.”
- Goldberg (24:39): “We can reward you for those types of comments [curiosity, compassion, reasoning] and actually uprank those.”
- Tang (39:39): “My hobby is called troll hugging. So some people hug trees, but I hug trolls.”
Key Timestamps
- 00:41–01:02 — Guest introductions and paper context
- 02:04–04:59 — Audrey Tang’s experience in Taiwan, pro-social media
- 05:17–07:23 — Ravi Iyer on incentives, polarization, and algorithmic design
- 07:57–11:14 — Beth Goldberg on Jigsaw’s ethos and LLM opportunities
- 13:11–14:26 — Open source, transparency, and limited/human-centric LLM application
- 16:08–21:47 — Collective dialogue systems, practical examples, limits and human agency
- 22:53–26:41 — Bridging techniques, content features, and platform reward systems
- 26:41–33:00 — Community moderation, feedback systems, and moderator tooling
- 33:00–36:22 — Proof of humanity, identity, and privacy-preserving verification
- 36:22–41:48 — Approaches to managing disruptive or bad faith actors
- 43:25–49:33 — Reflections on engineering priorities, agency, openness, and recommendations for next steps
Tone and Language
The tone is earnest, optimistic yet pragmatic, and collaborative. The conversation balances technical, operational, and ethical considerations, with the speakers frequently circling back to human agency, transparency, and the importance of thoughtful—not merely technological—design.
Conclusion
The episode makes a compelling case that the future of online public squares—spaces for digital democracy—will be shaped as much by collective values, thoughtful design, and transparent collaboration as by technology itself. By deploying AI with these principles and engaging affected communities, there is promise for digital public spheres that foster agency, inclusion, and meaningful civic engagement over divisiveness and toxicity.
