Hard Fork: Can California Regulate A.I.? + Silicon Valley’s Super Babies + System Update!
Released on October 4, 2024
Hosts: Kevin Roose and Casey Newton, The New York Times
Episode Overview
In this episode of Hard Fork, hosts Kevin Roose and Casey Newton examine the evolving landscape of artificial intelligence (AI) regulation in California, explore the burgeoning field of fertility technology in Silicon Valley, and share updates on significant tech industry developments. Featuring insights from journalist Julia Black, the episode covers legislative changes, ethical considerations, and the intersection of technology and societal norms.
1. AI Regulation in California
California has recently taken a proactive stance in regulating AI, passing 18 new laws aimed at addressing potential harms associated with generative AI. However, the episode highlights a significant setback with Governor Gavin Newsom’s veto of the controversial Senate Bill 1047 (SB 1047), which sought to impose stringent regulations on foundational AI models.
Key Points:
- New AI Laws: Governor Newsom signed 18 laws focusing on mitigating risks such as the creation of non-consensual explicit images using AI, the expansion of child sexual abuse material (CSAM) statutes to include AI-generated content, and mandates for AI companies to disclose training data sources starting January 1, 2026 (09:29).
- SB 1047 Controversy: SB 1047 would have required safety testing of AI models with development costs exceeding $100 million and imposed legal liability for AI-induced harms causing more than $500 million in damages or loss of life. Amid intense industry opposition and lobbying against the bill, Governor Newsom vetoed it, arguing that its focus on the largest models would leave smaller, potentially risky models unregulated (14:00; 16:54).
- Implications for AI Regulation: The hosts discuss how California's rules are poised to set a precedent for national AI policy, given the state's market influence. They also examine the split between model-level and application-level regulation, predicting that comprehensive federal legislation is coming but may arrive under circumstances less favorable to the tech industry (12:03; 19:37).
Notable Quotes:
- Kevin Roose: "...we have to establish trust is more important than ever..." (00:34)
- Casey Newton: "We are up against a situation where... you have to identify that it's AI." (12:24)
- Julia Black: "Regulators are probably behind on this, are probably not working at the same speed as Silicon Valley innovators." (51:46)
2. Silicon Valley’s Super Babies: Fertility Technology
Joining the hosts is Julia Black from The Information, who provides an in-depth exploration of the surge in fertility technology investments within Silicon Valley. The conversation touches on the motivations behind pronatalism, ethical dilemmas, and the futuristic prospects of fertility tech.
Key Points:
- Pronatalism and Demographic Concerns: Driven by fears of declining birth rates and "flipped demographic pyramids," some Silicon Valley figures advocate for boosting fertility through technology to sustain economic growth and societal stability (31:17; 32:10).
- Innovations in Fertility Tech: Investments are pouring into startups offering advanced genetic testing, sperm freezing, artificial wombs, and in vitro gametogenesis (IVG), which could enable same-sex couples to have genetically related children (36:09; 39:29).
- Ethical and Social Implications: The episode raises concerns about the potential for eugenics, socio-economic disparities, and the ethical boundaries of genetic selection. Julia Black emphasizes the slippery slope from preventing severe genetic disorders to selecting for desirable traits like intelligence or appearance (43:55; 44:25).
- Intersection with Crypto and Decentralized Science: A notable trend is the overlap between crypto entrepreneurs and fertility tech, driven by interests in decentralized science (DeSci) and longevity. Influential figures like Brian Armstrong and Vitalik Buterin are investing in these startups, reflecting a blend of technological optimism and speculative investment (35:42; 36:56).
Notable Quotes:
- Julia Black: "...it’s one thing to make sure that your child doesn’t die of some horrific, rare disorder... but another thing when you start to get into the realm of characteristics..." (44:25)
- Kevin Roose: "AI is just too powerful. And we regulate every other industry that has that kind of power." (25:27)
- Casey Newton: "...it just needs to come to a place where we actually want that and where our lawmakers actually make that possible." (49:48)
3. System Updates
The episode also includes a series of system updates covering recent developments in the tech industry:
a. OpenAI’s $6.6 Billion Fundraise
OpenAI has secured a staggering $6.6 billion in a new funding round led by Thrive Capital, bringing its valuation to approximately $157 billion, nearly double its valuation from nine months earlier. Despite generating significant revenue through products like ChatGPT, OpenAI continues to burn through cash, projecting a loss of $5 billion this year. The new capital is intended to sustain its rapid growth and cover ongoing development costs.
Notable Points:
- Investors: Microsoft, Nvidia, SoftBank, and the UAE’s MGX participated, while Apple notably opted out, possibly due to concerns over internal dynamics and recent departures from OpenAI (53:30).
- Future Prospects: Discussions revolve around whether OpenAI can transition to profitability, drawing parallels to companies like Amazon and Uber that sustained long-term losses before achieving profitability (54:07).
b. Reddit’s Moderation Policy Changes
Reddit has implemented new rules requiring moderators to seek approval from administrators before changing the public or private status of their subreddits. This move aims to curb the platform-wide protest tactic where moderators made their communities private, thereby reducing Reddit’s advertising revenue.
Notable Points:
- Impact on Free Speech: The hosts liken Reddit’s restrictions to a rollback of speech rights on the platform, arguing that moderators now have fewer ways to organize and protest against company policies (59:42; 60:36).
c. Deepfake Security Concerns
A reported incident involved a deepfake caller impersonating a Ukrainian official and engaging Senator Benjamin Cardin in a misleading conversation about missile support. This underscores the escalating threats posed by AI-driven deepfakes, especially in political and security contexts.
Notable Points:
- Technological Advancements: With OpenAI releasing real-time voice APIs, the potential for sophisticated scams and misinformation campaigns has grown (61:10; 63:19).
- Preventative Measures: The discussion emphasizes the need for vigilance and verification methods, such as using code words to confirm identities during sensitive communications (62:08).
d. Sonos’ App Troubles
Sonos faces backlash over a recent app update that disrupted user experience by removing essential features. The company has unveiled a seven-point plan to regain user trust, including enhancing customer experience, rigorous pre-launch testing, and appointing a quality ombudsman. However, skepticism remains regarding the effectiveness of these measures.
Notable Points:
- User Frustration: The hosts express dissatisfaction with Sonos’ response, highlighting the gap between promises and tangible improvements (64:45; 67:03).
- Corporate Accountability: The episode underscores the importance of aligning executive incentives with customer satisfaction to ensure meaningful changes (69:06).
Conclusion
This episode of Hard Fork provides a comprehensive exploration of California’s pioneering yet contentious efforts to regulate AI, the innovative and ethically fraught advancements in Silicon Valley’s fertility technology sector, and critical updates on major tech companies grappling with growth, regulation, and user trust. The discussions illuminate the intricate balance between technological progress, societal impact, and regulatory frameworks, offering listeners a nuanced understanding of the current tech landscape.
Notable Quotes with Timestamps
- Kevin Roose: "So, if you are a listener listening to this and you have one of these devices, you are now no longer listening to the podcast. You are listening to Despacito." (05:03)
- Casey Newton: "We are not done with AI regulation in this state." (18:21)
- Julia Black: "Regulators are probably behind on this, are probably not working at the same speed as Silicon Valley innovators." (51:46)
- Kevin Roose: "AI is just too powerful. And we regulate every other industry that has that kind of power." (25:27)
- Kevin Roose: "We did get one email this week that I thought was very nice, which was from a person who said... that actually the real Kevin Roose is the hot Kevin Roose." (69:07)
Note: This summary omits advertisements, introductions, and non-content segments to focus solely on the substantive discussions and insights presented in the episode.
