Hard Fork – "Data Centers in Space + A.I. Policy on the Right + A Gemini History Mystery"
The New York Times · November 14, 2025
Hosts: Kevin Roose, Casey Newton
Guests: Dean Ball (AI policy advisor), Mark Humphries (history professor)
Episode Overview
This episode of Hard Fork explores three remarkable trends at the cutting edge of tech:
- Data Centers in Space: Google’s "Project Suncatcher" and the industry’s plan to address Earth’s energy crunch by moving data centers to orbit.
- AI Policy on the Right: A deep-dive interview with former White House policy advisor Dean Ball on the Republican approach to AI regulation, industry influence, and ideological battles.
- A Gemini AI History Mystery: Historian Mark Humphries shares his head-turning experience with a mysterious new AI model that may be Google's Gemini 3, which outperformed experts at deciphering centuries-old documents.
Segment Breakdown
1. Data Centers in Space: Google’s Project Suncatcher and the Future of Compute
Timestamps: 02:28–17:24
Key Discussion Points
- The Problem with Earth-based Data Centers:
- Land, permits, and—most critically—energy are in short supply.
- Tech’s exponential demand for power (esp. GPUs for AI) is quickly outpacing existing infrastructure.
- Casey jokes, “Who wants to be on planet Earth right now?... I like an alternative." ([02:44])
- Project Suncatcher:
- Google’s moonshot to build scalable AI data centers in orbit, powered by perpetual solar energy.
- These orbital centers, as described, could harness the Sun’s energy at “up to eight times” the rate of “solar panels here on Earth” by staying in nearly constant sunlight (a dawn-dusk orbit). ([06:58])
- Designs resemble “a giant bird... the wings are these very thin solar panels... the center... clusters of computers.” ([07:44])
- Overcoming Challenges:
- Radiation: Google tested its TPU chips with proton beams, finding newer chips withstand far higher radiation than expected.
- “Their newer TPUs actually withstood radiation much better than they thought.” – Kevin ([09:25])
- Repairs: Solutions may involve space robots for remote maintenance.
- Latency: Comparable to Starlink; minor extra delay.
- Testing Roadmap & Competition:
- Google plans to test with prototype launches by 2027 (in partnership with Planet).
- Other players: Starcloud (Nvidia-backed startup), Axiom Space, and rumored Chinese and billionaire-driven efforts (Bezos, Schmidt).
- “Maybe we have to get off the planet to realize our ambitions...” – Casey ([05:26])
- Social & Political Reactions:
- Resistance to local data centers (“NIMBYs”) could evolve into “NOBs: Not On My Planet.”
- Concerns about space debris and the perception of tech elites “fleeing” terrestrial problems by exporting them to orbit.
Memorable Exchange:
- “I think we should make an offer to Google… if you get this project Suncatcher up into orbit, we will do a podcast episode where we go up there and cut the ribbon.” – Kevin ([17:00])
- “You were just dying to be exposed to massive levels of solar radiation.” – Casey
2. A.I. Policy on the Right: Dean Ball’s Firsthand View from the White House
Timestamps: 19:08–49:10
Key Discussion Points
- Dean Ball’s Background:
- Policy writer turned White House AI policy advisor, now at the Foundation for American Innovation and author of the "Hyperdimensional" Substack.
- The Landscape in DC:
- AI policy “nascent,” with the right (and left) sitting at a crossroads of “excitement, some worry, and some confusion.” ([23:19])
- Three guiding intuitions:
- AI is the biggest tech-economic opportunity in decades.
- With major opportunities come unfamiliar, possibly unique risks.
- AI is pivotal for American global leadership. ([24:10])
- Right-wing AI Policy Factions:
- Spectrum from “David Sacks view” (dismissive of doomsayers, wary of regulatory overreach) to “Steve Bannon view” (alarm over existential and societal risks).
- Additional camps:
- National security hawks—concerned about U.S. vs. China and tech supremacy.
- Child safety/online harms group—drawn from lessons of social media, focused on LLM psychosis and teen suicidality, especially with chatbots. ([28:14])
- Industry ("hyperscalers" like Microsoft, Google, AWS) have nuanced interests, often supporting export controls to remain competitive. Infrastructure building is seen as the primary “moat.”
- The “Woke AI” Executive Order:
- The order bars ideological bias in AI models procured by the federal government; it does not regulate consumer-facing products.
- “We do not want to procure models which have top down ideological biases engineered into them.” – Dean ([32:12])
- Analogous to existing federal compliance rules for software, not direct model retraining for the public.
- Federal vs. State AI Policy:
- Ball stresses that only the federal government should set AI standards, given their inherently interstate/global nature.
- California leads by default—“central regulator” role; SB53 (transparency for largest developers) is seen as relatively reasonable.
- “There are some things that inherently implicate interstate commerce... those have to be federal standards.” ([37:52])
- Big Tech and Catastrophic Risk:
- Despite perception that “Frontier Labs” (Anthropic, OpenAI, Google) get all they want, Ball argues many leaders are “earnest” about mitigating catastrophic risks, like biosecurity or AI-driven pandemics.
- The administration’s AI Action Plan aims for incremental risk reduction without stalling progress.
- On polarization: Ball predicts AI issues will splinter (data centers, security, child safety, etc.), mirroring the evolution of "internet policy" to sector-specific debates.
Notable Quotes:
- “Rather than [AI backlash] being a singular issue, it’s going to be... sloppification, not safe for kids, driving up your electricity prices, using all the water, it’s taking your job, and... it’s going to kill everyone. And also, by the way, it’s fake.” – Dean ([30:52])
- “If we can’t deal with catastrophic tail risks, then we do not have a legitimate government.” – Dean ([43:11])
3. A Gemini History Mystery: AI Tackles Historical Fur Trade Records
Timestamps: 51:14–73:04
Key Discussion Points
- The Setup:
- Waterloo-based historian Mark Humphries has been using LLMs to transcribe and analyze handwritten records from the 18th/19th-century fur trade.
- Major challenge: Historical documents suffer from poor and inconsistent handwriting, archaic tabular data, unfamiliar currencies (pounds, shillings, pence), and esoteric notation.
- The Mystery Experiment:
- In Google AI Studio, Mark noticed a dramatic leap in accuracy while A/B-testing a mystery model (likely Gemini 3).
- Error rate on handwritten transcription fell from 5% (with Gemini 2.5 Pro) to just 1%—matching professional human transcribers. ([59:20–60:37])
- Even more impressive: the model performed symbolic reasoning, e.g., reading a terse ledger entry, roughly “14 5 @ 1/4” totaled as “0 19 1,” and correctly interpreting it as 14 pounds 5 ounces of sugar at 1 shilling 4 pence per pound, for 19 shillings 1 penny, reconciling the mixed number bases with historical context. ([63:20])
- Significance:
- This was NOT pattern-matching or copy-pasting; it required abstract reasoning and robust numeracy—considered a major leap for LLMs.
- Implication: AI may cross a trust threshold for complex knowledge work in history—and by analogy, for other professions.
- Broader Importance:
- Suggests "scaling laws" (bigger models, more data/compute) still yield emergent breakthroughs rather than diminishing returns.
- Mark: “You’re getting to the point where [AI] just works. And as somebody who uses coding assistants all the time... that’s what we’re going to see here with knowledge work.” ([68:00])
Memorable Moments:
- “To figure out that 145 was 14 pounds and 5 ounces... the model had to be able to work backwards from a different currency system with a different base... That’s something... I had to think about for a second and realize: the model had done something that was mathematically correct and unexpected.” – Mark ([63:20])
- “I’m really interested in this Samuel Slit and why he needed 14 pounds of sugar. Like, take it easy, Sam.” – Casey ([67:57])
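The ledger arithmetic described in this segment can be sanity-checked in a few lines of Python. This is only an illustrative sketch: the weight (14 pounds 5 ounces) and unit price (1 shilling 4 pence per pound) come from the episode; the function names and the 19-shillings-1-penny total are our own back-calculation using pre-decimal units (12 pence per shilling, 16 ounces per pound).

```python
# Sanity-check the fur-trade ledger arithmetic discussed in the segment:
# 14 pounds 5 ounces of sugar at 1 shilling 4 pence per pound.
# Pre-decimal money: 1 shilling = 12 pence; weight: 1 pound = 16 ounces.

PENCE_PER_SHILLING = 12
OUNCES_PER_POUND = 16

def price_in_pence(lb, oz, shillings_per_lb, pence_per_lb):
    """Total cost in pence for a given weight at a per-pound price."""
    weight_lb = lb + oz / OUNCES_PER_POUND
    unit_pence = shillings_per_lb * PENCE_PER_SHILLING + pence_per_lb
    return weight_lb * unit_pence

def to_shillings_pence(total_pence):
    """Break a pence total back into (shillings, pence)."""
    return divmod(round(total_pence), PENCE_PER_SHILLING)

total = price_in_pence(14, 5, 1, 4)    # 14.3125 lb * 16d = 229.0 pence
print(to_shillings_pence(total))       # (19, 1) -> 19 shillings 1 penny
```

Working backwards from a shillings-and-pence total to the weight and unit price is exactly the kind of mixed-base inversion the hosts found surprising in the model.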
Notable Quotes by Segment (with Timestamps)
- On Ambition & Power Crunch:
- “We cannot provide enough electricity for the future we want to build on the planet that we live. We actually have to get off the planet to realize our ambitions.” – Casey ([05:26])
- On Regulatory Approaches:
- “Models... trained to be served to the entire world... those have to be federal standards because you can’t have competing standards.” – Dean ([37:52])
- “If you did try to use federal law to compel a developer to change the way they train models that they serve to the public, that is unambiguously unconstitutional.” – Dean ([36:43])
- On Historical Reasoning in AI:
- “What it looks like to me is it’s a form of symbolic reasoning... That’s something I had to think about for a second and realize: the model had done something that was mathematically correct and unexpected.” – Mark ([63:20])
- “I think that the interesting thing about history here is that it’s a very typical kind of knowledge work... [AI] takes information and synthesizes it... and you draw conclusions and analysis based on that. It can be 18th century sugar, but it can very easily be any other kind of widget.” – Mark ([68:00])
Key Segment Timestamps
- 02:28 – Data centers in space: constraints, physics, and radical proposals
- 06:04 – Project Suncatcher: technical overview
- 09:25 – Overcoming radiation risks
- 11:04 – Timeline and prototype launches
- 13:56 – Other contenders and the industry landscape
- 15:23 – The social and political dimension (“NOBs” vs. NIMBYs)
- 19:08 – State vs national AI policy, Dean Ball introduction
- 24:10 – White House “intuitions” about AI’s future and risks
- 30:52 – The coming “miasma” of AI-driven social backlash
- 32:12 – The real meaning of the “Woke AI” executive order
- 37:52 – State preemption and California’s de facto centrality
- 43:11 – On government and catastrophic risk management
- 59:20 – Mark Humphries’ “Gemini mystery model” experiment
- 63:20 – Symbolic reasoning: 18th century tabular math, unexpected AI abilities
- 67:14 – What these advances mean for knowledge work
Tone and Style
- Bantering, irreverent, and self-aware, but always anchored by clear technical and policy analysis.
- Technical explanations made accessible, with wry observations (“You were just dying to be exposed to massive levels of solar radiation.”).
- Frequent inside jokes (e.g., Casey’s obsession with obscure details of the fur trade), and a nod to the cultural and existential strangeness of the tech moment.
Summary Takeaways
- Space-based data centers may move from science fiction to “moonshot” reality as AI’s appetite for electricity threatens to outpace Earth’s capacity.
- AI policy in DC is fractious and uncertain, especially on the right, but the need for federal-level standards is recognized—and industry influence is omnipresent.
- Google’s next-gen Gemini AI potentially crosses a threshold into higher-order reasoning and reliability, with implications extending far beyond academic history.
- Across all segments: The future is here—and it’s weirder, grander, and more fraught than ever.
For more engaging summaries of “Hard Fork,” subscribe to The New York Times or visit nytimes.com/podcasts!
