Podcast Summary: O Assunto – “O julgamento das Big Techs e a responsabilidade do algoritmo”
Date: 12 February 2026
Host: Natuza Nery (G1)
Guest: Carolina Rossini (Boston University School of Law), with contributions from David Nemer (University of Virginia) and a specialist in European Union legislation
Main Theme
This episode discusses the historic trial in California where Big Tech companies Meta (Instagram) and Alphabet (YouTube) face a jury over alleged harms caused by addictive design and algorithms in their social platforms, with a focus on responsibility for the mental health crisis among young users. The conversation extends to global legal implications, algorithmic transparency, and the debate over whether social media can be considered addictive in a clinical sense.
Key Discussion Points & Insights
1. Case Background
- [00:01] The story begins with “KGM”, a young woman who developed severe mental health issues after years on social platforms starting at the age of six.
- [00:26] Her lawyers claim Instagram filters distorted her self-image, and that exposure to harmful content worsened her depression, anxiety, and suicidal thoughts.
- [00:52] She is suing Meta and Google (Alphabet), accusing them of deliberately engineering algorithms to foster addiction and maximize profit through “engagement”.
- [01:32] The core issue is not specific content but the platforms' operating models: infinite scroll, notifications, and variable rewards, features compared to slot machines.
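The "variable rewards" mechanic that the slot-machine comparison rests on is what behavioral psychology calls a variable-ratio reinforcement schedule: the payoff arrives after an unpredictable number of actions, which makes the behavior unusually persistent. A minimal Python sketch of that schedule (the function name, probability, and seed are illustrative assumptions, not anything from the case):

```python
import random

def variable_ratio_rewards(num_swipes, reward_prob, seed=0):
    """Simulate a variable-ratio reward schedule: each swipe 'pays off'
    with a fixed probability, so rewards arrive at unpredictable
    intervals, the pattern associated with slot machines."""
    rng = random.Random(seed)
    return [rng.random() < reward_prob for _ in range(num_swipes)]

# 100 swipes, ~15% of which surface something "rewarding".
rewards = variable_ratio_rewards(num_swipes=100, reward_prob=0.15)

# Measure the gaps between rewarding swipes: unlike a fixed schedule
# (every Nth swipe), the gap lengths vary unpredictably.
gaps, last = [], -1
for i, hit in enumerate(rewards):
    if hit:
        gaps.append(i - last)
        last = i
print(f"{sum(rewards)} rewarding swipes; distinct gap lengths: {sorted(set(gaps))}")
```

The point of the sketch is only that the user cannot predict which swipe will reward them, which is the property the plaintiff's lawyers analogize to gambling.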
2. Case Scope and Its Historic Nature
- [03:52] Carolina Rossini underscores the importance:
“For the first time, a jury in the United States will decide whether these large digital platforms […] can or cannot be held responsible for harms caused to children and adolescents by the use of their products.”
- This is a collective case, representing over 800 plaintiffs, with broader significance if successful.
- [05:31] The companies deny causality, attributing her mental health problems to family issues instead.
3. Legal Strategies and Loopholes
- [06:39] Big Techs lean on Section 230 of the Communications Decency Act, claiming immunity for user-generated content.
- [07:06] Rossini explains these protections, but highlights that in this case the plaintiffs argue harm stems from product design (algorithms/features), a crucial distinction separating it from content liability.
“The harm […] did not stem from the content alone, but from defects in the design of the product […].”
- [09:21] The analogy of algorithmic engagement to slot machines and tobacco is central to the plaintiff’s argument:
“Every time she swipes her finger […] she is placing a bet, and the bet is not for money but for mental stimulation.” (Plaintiff's lawyer)
4. Algorithmic Responsibility and Precedents
- [12:15] If the jury holds the companies accountable, the case could drive mass settlements (as with tobacco or opioids), influence regulatory momentum, and shape future court narratives around design negligence.
- [12:31] Rossini notes how regulatory ripple effects could strengthen the global push for tech accountability, especially as models of “duty of care” and product design responsibility evolve worldwide.
- [14:44] David Nemer adds:
“If the jury finds that the platforms can be held liable, that completely changes the logic of accountability...”
5. Global Legislative Movements
- [15:33] The EU’s Digital Services Act (DSA) is cited as a major policy shift pressuring tech platforms toward transparency and swift enforcement.
- [16:15] EU regulations now compel platforms to remove illegal content quickly (using Notice and Take Down procedures) and require more algorithmic transparency, especially for services affecting minors.
6. How the Algorithms Work
- [17:29] Rossini unpacks the basic operation of recommendation algorithms:
“These recommendation algorithms use machine learning systems, […] trained on massive volumes of behavioral data.”
- She explains that these AI systems optimize for engagement above all and have no inherent ability to distinguish adults from children unless explicit safeguards are built in.
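The single-objective optimization Rossini describes can be sketched in a few lines. This is a toy illustration, not any platform's actual code: the dot-product scorer, the embedding values, and the name `rank_by_engagement` are all assumptions made for the example. The key point is that nothing in the objective itself knows or cares whether the user is a child.

```python
import numpy as np

def rank_by_engagement(user_vector, item_matrix, top_k=3):
    """Rank candidate items purely by predicted engagement (here a
    dot-product between a user's behavioral embedding and each item's
    embedding). The objective contains no age signal: an adult and a
    child with similar behavior get the same recommendations unless an
    explicit safeguard is added outside this function."""
    scores = item_matrix @ user_vector          # predicted engagement per item
    order = np.argsort(scores)[::-1]            # highest predicted engagement first
    return order[:top_k], scores[order[:top_k]]

# Toy behavioral embeddings: 5 candidate items, 4 latent features.
user = np.array([0.9, 0.1, 0.4, 0.0])
items = np.array([
    [1.0, 0.0, 0.5, 0.0],   # item 0: closely matches past behavior
    [0.1, 0.9, 0.0, 0.2],
    [0.8, 0.2, 0.6, 0.1],   # item 2: also a strong behavioral match
    [0.0, 0.1, 0.1, 0.9],
    [0.3, 0.3, 0.3, 0.3],
])
top, top_scores = rank_by_engagement(user, items)
print("ranked items:", top, "scores:", top_scores)
```

Because the loop that retrains on “massive volumes of behavioral data” only rewards engagement, content that maximizes the score keeps getting surfaced; a guardrail (“separate who is a child”) would have to be an additional constraint imposed on top of this ranking.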
7. Social Media Addiction: Science, Gaps, and Concerns
- [19:32] Host asks about the scientific consensus around “addiction” to social networks.
- [19:40] Rossini admits there is no formal psychiatric classification for social media addiction, but mounting research (including brain imaging studies) demonstrates concerning patterns, especially among adolescents whose prefrontal cortices are not fully developed:
“There are already plenty of articles from psychology and product development that you can, yes, apply to what is happening in this case.”
Memorable Quotes & Notable Moments
- On the scale and seriousness of the trial:
“This trial is being regarded as a possible watershed, because depending on the outcome it could set a precedent for thousands of other lawsuits against technology companies.” (Plaintiff's lawyer, [01:57])
- The slot-machine analogy:
“The infinite scroll, the scrolling of the screen, operates like a slot machine.” (The young woman's lawyer, [01:32])
- Algorithmic design vs. content:
“The harm [...] did not stem from the content alone, but from defects in the design of the product that was delivered.” (Carolina Rossini, [07:06])
- Global legal implications:
“It will be a domino, a very interesting domino effect of compensating the people who suffered these harms.” (Carolina Rossini, [13:50])
- On algorithmic opacity and policy innovation:
“These algorithms are not published out there for us to analyze. [...] One important thing is that we are talking about a machine. If there is no safeguard, a guardrail or something like that which says, look, separate who is a child, separate who is an adolescent, they will not know who is who...” (Carolina Rossini, [17:29])
- Neuroscientific evidence and parenting:
“The prefrontal cortex, which only fully develops at age 25. I have a 15-year-old son, so I pay close attention to this story [...] No, you are not going. One, your brain is not fully formed. Two, you do not have the life experience to understand how manipulative this is.” (Carolina Rossini, [21:30])
Important Segment Timestamps
- [00:01] – KGM’s personal story and framing of the lawsuit
- [03:52] – Carolina Rossini explains the novelty and legal context of the case
- [07:06] – Core legal debate: Section 230 vs. product design liability
- [09:21] – Addiction analogy (slot machine, tobacco)
- [12:15] – Potential US and international legal precedents
- [15:33] – DSA and EU regulatory context
- [17:29] – Technical insight: how algorithms recommend content
- [19:32] – Discussion on social media addiction and developing brains
Final Takeaways
This episode highlights a potentially landmark legal battle, with far-reaching ramifications for how digital products are designed and regulated worldwide. As scientific, legal, and ethical debates coalesce around the impact of algorithm-driven engagement on youth mental health, courts and lawmakers are being forced to grapple with the responsibilities of tech giants in shaping information ecosystems. The podcast provides crucial context, expert insight, and a sobering look at the real-world consequences of seemingly “neutral” algorithms.
For further listening:
Find this and other essential episodes in O Assunto's playlist ‘This Is O Assunto’.
