Odd Lots – Meet the Politician the AI Industry Is Trying to Stop
Podcast: Odd Lots (Bloomberg)
Hosts: Joe Weisenthal, Tracy Alloway
Guest: Alex Boris, NY State Assembly Member, candidate for NY-12
Air Date: December 18, 2025
Episode Overview
This episode examines the intensifying intersection of artificial intelligence (AI) and American politics through the lens of Alex Boris, a New York State Assembly member running for US Congress who has become a direct target of the AI industry’s first major political super PAC. The conversation explores AI’s tangible impact on policy and day-to-day life, the political battle over AI regulation, the dynamics of tech lobbying, and Boris’s unusual background as both technologist and lawmaker.
Key Topics & Insights
1. AI Becomes a Central Political Issue
- The Hosts’ Take: AI’s influence in society and politics is now inescapable, touching everything from labor and energy to inequality, national security, and more.
- “There is hardly a political topic that in some way I feel like AI does not exacerbate.” — Joe Weisenthal [03:27]
- Recent News: Trump’s executive order establishing a national AI rule, pushing back against state-level regulations, is a pivotal development.
- “Trump issued an executive order for a national rule on AI, which a lot of people ... do not want.” — Tracy Alloway [03:44]
- Regulatory Debate: The industry’s argument for “balancing safety with innovation,” alongside national competitiveness (especially versus China), is fueling policy conflicts.
2. Meet Alex Boris: AI’s Political Target
- Background: Alex Boris is running in a crowded primary for NY’s 12th district. He was formerly a data scientist at Palantir, a company also backing the pro-AI super PAC that is targeting him.
- “I joined Palantir in 2014, I left in 2019... then went to a couple of startups... I had this through line of actually having government deliver on its promises throughout.” — Alex Boris [07:14]
- Broader Context: The super PAC, “Leading the Future,” is reportedly planning millions in ad buys specifically to defeat Boris over his RAISE Act (see below), painting him as “Public Enemy #1.”
- “They announced me as public enemy number one... planning to spend multiple millions against me. Last week they upped it to $10 million.” — Alex Boris [08:31]
Why Target Alex Boris?
- Legislative Threat: Boris’s RAISE Act would require advanced AI labs to disclose safety plans and incidents, with fines for violations.
- Industry Fear: Legislation like this threatens the AI sector’s autonomy and business model.
3. Inside the RAISE Act: AI Safety and Accountability
- Core Provisions:
- Requires “frontier labs” (e.g., Meta, Google, OpenAI, Anthropic) to have public safety plans, report critical incidents, and block releases that fail tests.
- Fines: $10 million (1st violation), $30 million (subsequent). Originally Boris wanted even stiffer (percentage-based) penalties.
- “The original version had 10% of their training costs... but we don't like an uncapped maximum fine in New York...” — Alex Boris [10:10]
- Triggers for Regulation:
- Applies only to companies spending $100 million+ on compute for a single final training run, emphasizing extremely advanced models.
- Also covers smaller “distilled” models when at least $5 million is spent on distillation, a technique Chinese labs are using to leapfrog US export controls.
- “This is the only bill I know of that would apply some regulatory scrutiny to, for example, DeepSeek.” — Alex Boris [12:37]
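The thresholds and fine schedule summarized above can be condensed into a toy sketch. The figures are the ones quoted in this episode summary; the bill text itself is more detailed, and the function names here are purely illustrative.

```python
# Toy sketch of the RAISE Act's coverage thresholds and fine schedule
# as summarized in this episode; the actual bill text is more detailed.

def is_covered(compute_spend_usd: float, is_distilled: bool = False) -> bool:
    """Covered if $100M+ is spent on a single final training run,
    or $5M+ on distilling a smaller model from a covered one."""
    threshold = 5_000_000 if is_distilled else 100_000_000
    return compute_spend_usd >= threshold

def fine_for_violation(violation_number: int) -> int:
    """$10M for the first violation, $30M for each subsequent one."""
    return 10_000_000 if violation_number == 1 else 30_000_000

print(is_covered(150_000_000))                   # True: frontier-scale run
print(is_covered(8_000_000, is_distilled=True))  # True: distilled-model trigger
print(fine_for_violation(2))                     # 30000000
```

The narrow thresholds are the point Boris emphasizes: smaller startups fall below both triggers and face no obligations at all.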
- Industry Pushback:
- Warnings that such regulation could lock out smaller innovators and cement big players’ advantages—a critique the hosts and Boris scrutinize.
- Boris’s Rationale:
- Claims compliance would cost each major lab roughly “one extra full-time employee”; the scale of industry opposition, he argues, suggests the bill has real regulatory bite.
4. Challenges of AI Regulation & Enforcement
- The Black Box Problem:
- It’s hard to diagnose bias or unintended consequences in large models. Tests exist, but intent is elusive; Boris argues for impact-based rather than intent-based rules.
- “So it's tough to know a model's intent, but you can know its impact.” — Alex Boris [13:23]
- International Disadvantage?
- Industry argues US rules could disadvantage domestic firms versus global, open source, or Chinese labs.
- Boris: Many “open” AI firms still monetize in the US and could be subject to injunctions if they seek American market access.
- “Deepseek open sourced the model, but they're still a company that wants to profit and they sell things on top of it... there is still a real reason for them to comply.” — Alex Boris [15:23]
- Trump’s Executive Order:
- Threatens to preempt state AI laws and withhold federal funds, even as states like New York already require basic disclosure (e.g., chatbot labeling, mental health warnings).
- “...requires [chatbots] to disclose that they are an AI model... and to alert for when there’s language that might indicate potential self harm.” — Alex Boris [17:04]
5. Political & Partisan Dynamics (Albany and Beyond)
- Bipartisan Anxiety, Patchwork Positions:
- While some on the hard right want unregulated speed and some on the hard left want AI stopped entirely, most legislators seek a “balance,” mirrored in the Assembly vote.
- “The RAISE Act is squarely within that realm. It passed with co-sponsors who are both Democrats and Republicans.” — Alex Boris [17:54]
- Trump’s Motives and Donor Shaping:
- Hosts probe why Trump favors national deregulation even as he’s protectionist elsewhere. Boris points to heavy donations to Trump from AI and tech execs (e.g., Marc Andreessen, Joe Lonsdale, Greg Brockman).
6. AI’s Broader Societal Impacts: From Schools to Scams
- Kids, Schools, and Workforce:
- The hope: Personalized tutors, better learning; “right now... pedagogy hasn’t caught up.”
- Labor concerns, environmental impacts, utility upgrades, and the role AI could play in each—especially as private investment swamps energy grids.
- “Our grid is extremely old... you have an unlimited set of private capital... Why aren't we using that to actually upgrade our grid?” — Alex Boris [24:09]
- Public Perception:
- Voters already see AI’s impact (“entry-level unemployment at 9%”, AI-powered toys, street entertainment).
- Stories of AI-embedded products aiming at kids provoke concern and regulatory reaction.
- Trust and Low-Level Scams:
- Scams, generated books, deepfake images, and “persistent abuse of technology” erode trust and show the immediate, non-sci-fi risks AI poses.
- “We are in a culture and economy, an era of like persistent low-level grift across almost every dimension of our lives.” — Joe Weisenthal [38:20]
7. Deepfakes and Solutions: The Metadata Approach
- Boris on Deepfakes:
- Technological solutions (like C2PA metadata for images/videos) can address verification—if widely adopted by creators and platforms.
- “There is a free open source metadata standard... that cryptographically proves whether that content was taken from a real device, generated by AI...” — Alex Boris [29:06]
- Legal Gaps:
- Even with verification, laws are needed (especially for deepfake porn and related abuses); Boris points to state-level action and the dangers of federal preemption.
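The provenance approach Boris describes can be illustrated with a minimal sketch: a signer binds a claim (for example, “taken from a real device” vs. “generated by AI”) to a file’s bytes, and a verifier checks that binding. This is a conceptual stand-in, not the real C2PA format: C2PA embeds certificate-signed manifests inside the media file, while this demo uses a detached HMAC tag with a shared demo key.

```python
# Conceptual sketch of content-provenance signing (NOT the real C2PA format).
# C2PA embeds a certificate-signed manifest in the media file itself; here a
# detached HMAC tag over the file bytes stands in for that signature.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a capture device's signing key

def sign_content(content: bytes, claim: dict) -> dict:
    """Produce a provenance 'manifest': a claim plus a tag binding it to the bytes."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"claim": claim, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sha256": digest, "tag": tag}

def verify_content(content: bytes, manifest: dict) -> bool:
    """Check that the bytes match the manifest and the tag is authentic."""
    if hashlib.sha256(content).hexdigest() != manifest["sha256"]:
        return False  # file was altered after signing
    payload = json.dumps(
        {"claim": manifest["claim"], "sha256": manifest["sha256"]}, sort_keys=True
    )
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])

photo = b"\x89PNG...raw image bytes..."
manifest = sign_content(photo, {"source": "camera", "generator": None})
print(verify_content(photo, manifest))          # True: untampered
print(verify_content(photo + b"x", manifest))   # False: bytes changed
```

In a real deployment, asymmetric (certificate-based) signatures let anyone verify without holding the signing key; that substitution changes the cryptography but not the verification flow shown here, which is what shifts the question from “can you spot the fake” to “can you verify the file.”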
8. From Palantir to Public Office: Techno-Policy Perspective
- Alex Boris at Palantir:
- Started as a data scientist, rose to co-lead government business; worked closely with government agencies on data integration, justice, healthcare, and economic analysis.
- “It's data integration and analysis. It's making different data sources... talk to each other.” — Alex Boris [31:37]
- Implementation ≠ Legislation:
- Emphasis on tracking post-law impact, not just passing bills.
- “The work isn't done when the bill is signed. Right. It's about the actual implementation.” — Alex Boris [34:08]
- Data-Driven Policy:
- Boris uses data to proactively assess what’s working and what isn’t. Examples: telemarketing fines quadrupled after his bill passed; his moped registration bill revealed unintended enforcement gaps.
9. On Crypto, District Focus, and Tech Literacy
- Crypto Regulation:
- Pushes for formal and transparent state-level crypto standards to avoid ceding control entirely to new federal rules.
- NY-12 Uniqueness:
- The district is a compact slice of midtown Manhattan, with “more Fortune 500 companies ... than 37 states.”
- Insider in Tech Policy:
- Boris’s computer science pedigree and industry background make him both unusual and a challenge for tech-industry-friendly PACs.
- “Wait, it's the guy with software patents who worked at Palantir, who has a master's in computer science. There is a disconnect there, I think.” — Alex Boris [46:09]
10. AI Optimism, Realism, and “Nerding Out”
- On AI’s Promise and Risk:
- AI is as dual-use and unpredictable as nuclear energy was in the 20th century: enormous promise and great peril.
- “It is the technology that has the widest bounds of what could potentially come from it... We’re at that moment right now...” — Alex Boris [26:22]
- Solving Deepfake Trust:
- Cryptographic standards for media authentication can shift the debate from “can you spot the fake” to “can you verify the file.”
- Legislative and technical solutions both needed.
Notable Quotes & Memorable Moments
- On AI’s Political Power:
- “There is hardly a political topic that in some way I feel like AI does not exacerbate.” — Joe Weisenthal [03:27]
- On Big Tech Lobbying:
- “I'm hoping if the campaign continues I can use up all $100 million that they've planned. But we'll see where it goes.” — Alex Boris [08:31]
- On Deepfake Solutions:
- “It's always been presented to us as like, oh, you'll have to learn how to see what's wrong with an AI image. Like, that's never going to work... There is a free open source metadata standard... that cryptographically proves whether that content was taken from a real device, generated by AI...” — Alex Boris [28:07–29:06]
- On Being a Tech-Savvy Politician:
- “Wait, it's the guy with software patents who worked at Palantir, who has a master's in computer science. There is a disconnect there, I think.” — Alex Boris [46:09]
- On AI’s Promise and Peril:
- “If you put yourself in the mindset of someone in the 1930s, you had one set of people saying nuclear fusion is coming... another set saying we're all going to be dead from nuclear bombs... [With AI] we’re at that moment right now.” — Alex Boris [26:16]
Timestamps for Key Segments
- [02:32] — Why AI is now a political powerhouse
- [05:07] — Introduction of Alex Boris and discussion of AI super PAC targeting
- [07:01] — Boris’s background, Palantir and transition into politics
- [08:17] — Details on the RAISE Act and industry backlash
- [12:59] — The “black box” challenge of AI safety and fairness
- [15:23] — International competitiveness and enforcement dilemmas
- [17:54] — Partisan landscape of AI politics in Albany
- [18:38] — Trump’s motivations and tech funding
- [23:26] — Everyday impacts of AI: education, environment, labor
- [28:07] — How to actually solve deepfakes (metadata standard)
- [30:32] — What Palantir actually does & its government work
- [34:08] — Translating tech implementation lessons to public policy
- [39:26] — Growing prevalence of scams, generated books, and low trust society
- [43:45] — Crypto regulation and the fate of NY’s “BitLicense”
- [45:27] — The uniqueness of NY 12th district
- [46:09] — Perceptions of tech-literate legislators and industry pushback
- [47:05] — The “Bloomberg Terminal for government data” thought experiment
- [48:13] — AI code generation tools as force multipliers for policy work
- [49:17] — Closing thanks, reflection on the importance of tech literacy in political debate
- [50:18–52:59] — Post-interview host reflections: risks of regulation, competition with China, and challenges ahead
Takeaways
- The AI industry’s first big political super PAC sees state-level regulation as a major threat and is using significant resources to shape the policy landscape.
- Alex Boris’s RAISE Act proposes robust safety and transparency rules targeted narrowly at advanced labs, sparking industry concern.
- Boris’s experience as a tech insider shapes a more nuanced, practical approach to both legislation and implementation—a sharp contrast to many politicians.
- While regulation poses risks (possibly cementing incumbency, harming US competitiveness), Boris and the hosts agree it’s a debate that cannot be ignored, with direct consequences for trust, privacy, and society at large.
For Further Exploration
- Follow up on state and federal regulatory efforts on AI.
- Examine how disclosure and metadata standards (like C2PA) are or are not being implemented.
- Watch for developments with the AI super PAC and its future political targets.
This episode is a must-listen for anyone interested in the politics of technology, the future of AI governance, and the real-world interplay between tech insiders and public policy.
