Podcast Summary: Inevitable (MCJ)
Episode Title: AI Hits a Power Wall. StarCloud Launches Data Centers Into Orbit
Date: January 13, 2026
Host: Cody Simms
Guest: Philip Johnston, Co-founder & CEO of StarCloud
Overview
This episode explores a transformative new concept in the world of AI infrastructure: moving data centers into Earth’s orbit. Philip Johnston, CEO of StarCloud, shares how his company is building orbital data centers to take advantage of unlimited, low-cost solar power and unique cooling properties in space. The conversation covers the motivation for this radical shift, the technical and economic details of making compute in space viable, StarCloud’s progress, and what the future might look like for data centers beyond our planet.
Key Discussion Points & Insights
1. Why Build Data Centers in Space?
- AI and the Power Problem:
- Explosive AI compute growth stresses terrestrial power grids.
- Emerging priority: go where the power is rather than force data centers onto stressed grids.
- "Bringing Compute to Power":
- Instead of building new power sources for data centers, take data centers to where abundant energy exists—outer space.
- Comparisons to Bitcoin/Crypto:
- Crypto mining pioneered "stranded energy" use; AI, with even higher energy demands, is pushing further.
Memorable quote:
"It's a sharp reframing from delivering more power to data centers to bringing data centers to power in the most literal way possible." (Cody Simms, 01:02)
2. The Twitter "Yeah" Moment
- Elon Musk's Retweet:
- Elon’s amplification of a StarCloud tweet quoting investor Gavin Baker accelerates visibility.
- Philip describes the “crazy” scale of the response, with millions of impressions overnight.
- Market Validation:
- Musk and others signal the seriousness of the concept.
Quote:
"Immediately I got 4 million views and... like 4000 follows or something on X… it blew up my X feed." (Philip Johnston, 03:00)
3. StarCloud’s Progress: First-in-Space AI Compute
- First Orbital Data Center:
- Launched a spacecraft carrying an Nvidia H100 GPU (~100x the compute previously flown in space).
- Demonstrated running models like Andrej Karpathy’s NanoGPT and Google’s Gemini.
- Next Steps:
- Second craft coming with much greater power generation and compute (multiple H100s and other chips).
Quote:
"Yesterday... we've trained the first model, we trained NanoGPT from Andrej Karpathy... we ran the first version of Gemini in space." (Philip Johnston, 06:00)
4. Economics & Engineering: Why Space Makes Sense
- Launch Cost Revolution:
- Starship (SpaceX) brings fully reusable, high-capacity, low-cost launches: “10x to 100x cheaper.”
- Space Solar vs. Ground Solar:
- No need for costly land or permits.
- 1 m² of solar panel in space yields roughly 8x the energy it would on Earth (no night, seasons, or atmospheric losses).
- No need for battery storage due to constant sunlight in certain orbits.
- Primary Challenges Solved:
- Power: Cheap, constant solar.
- Cooling: Waste heat radiated into deep space via advanced deployable radiators.
Memorable explanation:
"If you move the data center to space, you don't lose 95% of the energy... you have a $500/kilo break-even point where it makes sense for data centers in space." (Philip Johnston, 10:56)
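The "8x" solar claim above can be sanity-checked with a back-of-envelope calculation. A minimal sketch in Python, using standard reference figures rather than numbers from the episode (the 20% ground capacity factor is an assumption typical of fixed-tilt utility solar):

```python
# Rough check of the space-vs-ground solar energy claim.
# All figures are standard reference values, not from the episode.
SOLAR_CONSTANT = 1361           # W/m^2 above the atmosphere
GROUND_PEAK = 1000              # W/m^2 at noon, clear sky, sea level
GROUND_CAPACITY_FACTOR = 0.20   # assumed: typical fixed-tilt utility solar

def annual_kwh_per_m2_space():
    # A dawn-dusk sun-synchronous orbit sees the Sun near-continuously.
    return SOLAR_CONSTANT * 8760 / 1000

def annual_kwh_per_m2_ground():
    # Capacity factor folds in night, weather, seasons, and panel tilt.
    return GROUND_PEAK * GROUND_CAPACITY_FACTOR * 8760 / 1000

ratio = annual_kwh_per_m2_space() / annual_kwh_per_m2_ground()
print(f"space/ground annual energy ratio ≈ {ratio:.1f}x")  # ≈ 6.8x
```

The pure capacity-factor gap gives roughly 7x; adding real-world soiling, inverter, and weather losses on the ground side plausibly closes the gap to the ~8x figure quoted in the episode.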
5. Space vs. Earth: Advantages and Constraints
- Earth-bound Data Center Constraints:
- Power grid bottlenecks, permitting delays, land use, necessity for water- or air-based cooling.
- Space-Specific Solutions:
- Direct access to solar and radiative cooling (infrared emissions).
- “No permitting,” instant and limitless scale, no competition for land/resources.
- Engineering Hurdles:
- Radiation shielding for chips.
- Dissipating waste heat (all cooling via radiative panels, no air/water).
- Launch mass and volume constraints.
Quote:
"Thermal [management] is most misunderstood... most people think space is cold—just put a data center there. But one of space's key advantages is we can scale almost indefinitely with radiative infrared cooling." (Philip Johnston, 22:12)
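The radiative-cooling point can be sanity-checked with the Stefan-Boltzmann law. A minimal sketch, assuming an ideal two-sided radiator at the 50°C the episode mentions, a 0.9 emissivity coating, and a hypothetical 100 kW heat load (the emissivity and heat load are illustrative assumptions, not StarCloud figures; sunlight and Earth infrared loading are ignored):

```python
# Idealized radiator sizing via the Stefan-Boltzmann law.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.90        # assumed high-emissivity radiator coating
T_RADIATOR = 323.15      # 50 C in kelvin, per the episode quote
HEAT_LOAD_W = 100_000    # assumed 100 kW of GPU waste heat

# Radiated flux from one face, ignoring solar and Earth IR absorption.
flux = EMISSIVITY * SIGMA * T_RADIATOR**4      # W/m^2 per face
area_two_sided = HEAT_LOAD_W / (2 * flux)      # panel radiates from both faces

print(f"flux ≈ {flux:.0f} W/m^2 per face")
print(f"area for 100 kW ≈ {area_two_sided:.0f} m^2 (two-sided)")
```

Even in this best case, rejecting 100 kW at 50°C takes on the order of 90 m² of deployed panel, which is why the episode treats deployable radiators as core intellectual property rather than a solved detail.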
6. Applications & Initial Market
- Short-Term Use Case:
- Edge compute for other orbiting satellites—rapid AI analysis of Earth observation data, reducing downlink bottlenecks and latency for actionable intelligence (e.g., ship tracking, wildfire detection).
- Communication:
- Optical terminals in space for high-data-rate connections between satellites.
- Long-Term Vision:
- Large-scale, general-purpose cloud compute in space for terrestrial customers, with eventual tens to hundreds of gigawatts of orbital compute.
Quote:
"Anyone who needs to get information about what's happening on Earth down quickly... we can run inference on that imagery on orbit, and then just downlink in real time the insight." (Philip Johnston, 25:00)
7. Business Model & Moats
- Partnering with Crusoe:
- Providing power/cooling/connectivity in orbit; customers bring their own compute stacks.
- StarCloud operates as a “power utility”—offering orbital “racks” at ultra-low cost per kWh.
- Intellectual Property:
- Focused on deployable, high-efficiency radiators and radiation shielding.
- Advantage is speed and specialization vs. slower-moving hyperscalers (AWS, Google, Microsoft, etc.).
Quote:
"Any high energy use case in space is going to require being able to dissipate heat in a vacuum... if somebody just want to buy [the radiator] as a component, I can see a world where we start selling that as well." (Philip Johnston, 30:26)
8. Industry & Competitive Landscape
- Hyperscalers and SpaceX:
- SpaceX and hyperscalers are actively exploring space-based compute; multiple winners expected due to vast market size.
- Not everyone will want to run their workloads on SpaceX/Starlink infrastructure.
Quote:
"There will be plenty of hyperscalers who don't want their actual inference loads being managed and run by SpaceX. That’s sort of the core story of StarCloud." (Cody Simms, 31:29)
9. Risks & Criticisms
- Main Existential Risks:
- Energy cost on Earth dramatically dropping (“ultra-cheap fusion”), or flattening compute demand.
- Launch cost reductions taking longer than anticipated.
- Technical skeptics mostly point to cooling—and StarCloud’s progress counters these critiques.
Quote:
"Thermal is completely solvable, as is radiation... if energy costs on Earth dropped to near-zero, that would probably mean we are not a viable business." (Philip Johnston, 33:09)
Notable Quotes & Timestamps
- On Elon's validation: "It's kind of crazy being retweeted by Elon..." (03:00, Philip Johnston)
- On Starship's impact: "Starship is completely revolutionary because it's the first one that has both a reusable booster and a reusable upper stage... It changes the fundamental economics completely." (07:47, Philip Johnston)
- On data center economics: "If you can move the data center to space, you don't lose 95% of the energy." (10:56, Philip Johnston)
- On cooling in space: "All of our heat loss must come through infrared radiation... our radiator will be just glowing in infrared if we keep it at about 50C." (20:13, Philip Johnston)
- On business focus: "Our core business is... being an energy provider, a low-cost energy provider." (28:19, Philip Johnston)
- On the market size: "It's by far the largest market opportunity of all time, times a billion. So I don't think there will just be one company doing it." (03:46, Philip Johnston)
Important Timestamps
- 00:00-02:10 – Introduction, setup of the core thesis: move data centers to power (space)
- 03:00 – Elon Musk retweet and the effect of viral attention
- 04:59-06:00 – What is StarCloud, and progress to date (first GPU data center in orbit)
- 10:00-12:36 – Economics of space-based solar, cooling, battery storage versus terrestrial data centers
- 13:29-15:38 – Orbit selection and power production, cost comparison to land-based solar
- 17:09-18:38 – Scaling constraints: power, chips per satellite, volumetric considerations
- 19:17-21:00 – Thermal management specifics; how radiative cooling works
- 22:16 – Most misunderstood technical constraint: thermal management
- 24:51-26:08 – Near-term applications: edge compute for satellite data (Earth observation, SAR, etc.)
- 27:26 – Progress update: first Nvidia H100 working in space
- 28:16 – Partnerships (Crusoe) and power-selling business model
- 33:09 – Risks, failure modes, and responses to critics
- 34:29 – Industry projections: 5-10 year outlook; gigawatt-scale orbital compute
Future Outlook
- 5 Years: Tens of gigawatts of orbital compute in production, serving both space-native workloads and terrestrial demand.
- 10 Years: A majority of new data center capacity may be launched directly into orbit, though orbital compute would still represent a minor fraction of the global installed base.
Action/Engagement
- Hiring: StarCloud is actively seeking electrical engineers, especially with power, electronics, and software backgrounds.
- Contact: Interested parties should reach out for roles or partnerships, especially those excited by the frontier of compute infrastructure.
This episode offers a fascinating deep-dive into the technological, business, and planetary implications of transitioning data centers to orbit, and is essential listening for anyone tracking the future of AI, energy, or space industries.
