This Week in Space 175: More AI in Space
Podcast: All TWiT.tv Shows – This Week in Space
Date: August 29, 2025
Hosts: Rod Pyle and Tariq Malik
Guest: Dr. Daniel Selva, Texas A&M University
Episode Overview
This episode dives into three major themes:
- SpaceX's Starship Test Flight 10 – Detailing the successes and lessons from the latest Starship launch.
- Revisiting the 'Wow! Signal' – Discussing recent research into one of SETI’s most mysterious signals.
- AI in Spaceflight – A deep conversation with Dr. Daniel Selva about his study on deploying AI helpers, specifically for diagnosing and handling spacecraft emergencies during extended missions.
The episode is witty, engaging, and informative, balancing lighthearted banter, technical depth, and big-picture implications for the future of space travel and technology.
Key Discussion Points and Insights
1. SpaceX Starship Flight 10: Progress & Hurdles
- [03:06–08:44] Detailed Recap:
- Starship’s 10th test flight was described as a turning point after several highly publicized failures in early 2025.
- “Three flights in a row, a failure. January, March, May. Each time they achieved less than what they achieved on their final flight in 2024.” (A, 04:22)
- This time, both booster and ship stages achieved near-planned success; the ship splashed down within 10 ft of its target.
- Thermal protection issues remain: a blowout in the engine bay, burn-through on a fin, and orange discoloration possibly tied to heat shield experiments.
- Satellite simulator deployment was demonstrated using Starship's “PEZ dispenser” system—successful, but with minor issues.
- The mission accomplished every milestone initially set for January, setting the stage for Flight 11, which could happen sooner than expected.
- “Very demoralizing year... and they came back and just knocked it out of the park.” (A, 08:44)
- [08:44–09:50] Implications for Artemis 3:
- Pressure mounts as NASA’s Artemis 3 moon return mission relies on Starship's readiness.
- “There's a very loud ticking sound in the background called Artemis 3.” (B, 08:44)
2. SETI’s “Wow! Signal”: Modern Forensics
- [08:51–11:28] New Theories:
- A University of Puerto Rico team revisits the 1977 “Wow! Signal” with modern analytical tools.
- Suggests a natural origin: possibly a magnetar flare or “soft gamma repeater” scattering energy onto interstellar clouds and then back to Earth.
- Tariq’s comic skepticism: “Isn't it easier just to say that it was aliens? That's all I got to say. Right.” (A, 10:08)
- The segment underscores how 21st-century technology lets us re-examine classic space mysteries with much greater sophistication.
3. China’s Accelerating Lunar Program
- [11:28–16:30] Context and Concerns:
- China has tested its lunar lander, abort system, and Long March 10 moon rocket.
- Their planned mission will require two launches—one for equipment, one for crew—a logistical feat, but “direct control over the space program” enables rapid progress, unlike the “longer, sustainable” U.S. Artemis approach.
- “They are showing that vision of commitment, whereas we don't even have anywhere near close to having the lander yet.” (A, 13:22)
- Raises political angst: will Congress fund a “runner-up mission” if China lands first? Panel doubts U.S. will abandon Artemis, but the geopolitical stakes are acknowledged.
Main Interview: Dr. Daniel Selva on AI Helpers in Space
Background & Career
- [18:45–21:35] Selva’s Journey:
- Trained in engineering in Spain and France, worked at Arianespace on Ariane 5 rockets, then earned a PhD at MIT, assistant professor at Cornell, now Texas A&M.
- “He got bored working on rockets, so just got a PhD at MIT. As one does.” (A, 19:50; lighthearted)
- Early student experience with aerospace, communications engineering, and running space associations.
- Interest in AI:
- Almost took a PhD in neuroscience due to his fascination with both biological and artificial intelligence.
- “I’ve always been fascinated by how intelligence emerges from bio stuff and, you know, non intelligent stuff.” (C, 22:58)
Research Study: AI Virtual Assistants for Spacecraft Emergency Resolution
The Rationale
- [24:13–25:47] Critical Need for Autonomy:
- On Mars missions, comm delays can reach 45 minutes round-trip—too long for rapid ground support.
- “The crew may need to be more autonomous for these long duration missions. AI agents just seem like a good solution…” (C, 24:43)
- AI's role: assist, not replace, astronauts, especially during mission anomalies.
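The 45-minute figure follows directly from the speed of light and the Earth–Mars distance. A quick back-of-the-envelope check (the distances below are approximate orbital figures, not values from the episode):

```python
# Round-trip light delay to Mars, illustrating why Mars crews need
# more autonomy than ISS crews, who get near-instant ground support.

C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_delay_minutes(distance_km: float) -> float:
    """One-way light travel time, doubled, expressed in minutes."""
    return 2 * distance_km / C_KM_S / 60

# Earth-Mars distance ranges from roughly 55 million km at closest
# approach to about 400 million km near superior conjunction.
for label, distance_km in [("closest", 55e6), ("farthest", 400e6)]:
    print(f"{label}: ~{round_trip_delay_minutes(distance_km):.1f} min round trip")
```

At the far end of the range this works out to roughly 44–45 minutes round trip, matching the delay cited in the episode.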
Experimental Design
- [27:00–29:53] Study Setup:
- Two main testbeds:
- Laboratory studies with students (short 2-hour runs).
- HERA habitat at NASA’s Johnson Space Center (professional subjects, 45-day missions).
- AI agent "Daphne" designed to spot and diagnose spacecraft anomalies (mostly related to life support).
- Comparison of anomaly resolution with and without AI aid; measured metrics included task performance, time, cognitive workload, and trust in automation.
Key Findings and Surprises
- [31:18–36:16] Contrasting Results:
- In the lab, Daphne dramatically improved performance and reduced workload for students.
- In the HERA environment with seasoned professionals, Daphne’s impact was minimal.
- Hypotheses: complexity of the scenario matters; experienced astronauts may rely more on simple procedures (“emergency chart”) than AI, especially in familiar situations.
- “If you have very simple anomalies or recurrent anomalies, you probably don't need something like Daphne.” (C, 33:35)
- “Where DAPHNE can help…is with more complex, perhaps unknown anomalies where you really don't know what's going on.” (C, 34:34)
Trust in AI & Human Factors
- [34:49–36:16, 48:41–50:45] Generational/Professional Divide:
- No clear lab vs. HERA trust difference, but notable individual differences (some pilots were less trusting).
- “We did see some significant differences...some people, depending on their level of familiarity...tend to trust AI more than others...” (C, 35:54)
- The collaborative approach—AI as assistant, not authority—deemed key by Selva.
Technical Details
- [43:59–45:41] Daphne’s Capabilities:
- Two main roles:
- Monitoring for known anomalies (based on procedural knowledge, not black-box learning).
- Conversational natural-language Q&A via constrained large language models (LLMs).
- “For the most part, we're talking about an AI system that is extremely constrained.” (C, 44:13)
- Constrained knowledge domains limit “hallucinations” versus general LLMs; crucial for mission safety.
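Daphne's first role, monitoring driven by procedural knowledge rather than learned models, can be sketched as a small rule engine. Everything below (parameter names, thresholds, diagnoses) is a hypothetical illustration, not Selva's actual system:

```python
# Minimal sketch of rule-based anomaly monitoring in the spirit of
# Daphne's first role. All rules, channel names, and thresholds here
# are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Rule:
    parameter: str   # telemetry channel, e.g. a life-support sensor
    low: float       # lower nominal bound
    high: float      # upper nominal bound
    diagnosis: str   # procedural knowledge tied to this rule

# Encoding rules from flight procedures, rather than learning them from
# data, keeps the monitor's behavior auditable for mission safety.
RULES = [
    Rule("cabin_co2_mmhg", 0.0, 4.0, "Check CO2 scrubber assembly"),
    Rule("cabin_o2_pct", 19.5, 23.5, "Inspect oxygen generation system"),
]

def monitor(telemetry: dict[str, float]) -> list[str]:
    """Return a diagnosis for every rule whose parameter is out of bounds."""
    alerts = []
    for rule in RULES:
        value = telemetry.get(rule.parameter)
        if value is not None and not (rule.low <= value <= rule.high):
            alerts.append(f"{rule.parameter}={value}: {rule.diagnosis}")
    return alerts

print(monitor({"cabin_co2_mmhg": 6.2, "cabin_o2_pct": 21.0}))
```

The same design choice shows up in the second role: constraining the conversational layer to a fixed knowledge domain is what limits hallucination relative to a general-purpose LLM.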
Implications for Space Operations
- [50:55–53:08] No AI-Only Missions Yet:
- Full “AI-controlled” spacecraft not imminent—human control remains critical for years.
- “There's another generation of AI that needs to come.” (C, 52:23)
- Lessons and Takeaways:
- “The context specificity...for lack of a better word, I think is one of the main takeaways.” (C, 53:16)
- A silver-bullet AI is unlikely; systems need tailoring, and human-AI integration is complex.
Fun Moments & Notable Quotes
- Quip about Starship’s complex terminology:
- “You're saying starship, then you're saying the super heavy. Oh wait...it’s very disconcerting to write about this stuff.” (B, 03:12)
- On generational jokes about AI and trust:
- “Okay, Boomer moment, on HERA or something, where it's like, I don't need to listen to this machine.” (A, 34:49)
- HAL 9000 jokes abound:
- “I suggest you replace the AE35 unit and allow it to fail.” (B, 36:18)
- “Sorry, I could not do that, Dave.” (C, 36:47)
- Sci-fi faves: Dr. Selva’s pick is the recent film ‘The Wild Robot’ over traditional choices like HAL 9000. (C, 55:17)
- On the future of AI in space:
- “Things are going to change very fast and it's very exciting...I can't wait really to see what that’s going to look like.” (C, 56:22)
Important Timestamps
- 00:28: Episode theme intro and what’s coming up.
- 03:06–08:44: Starship 10 review and implications for Artemis.
- 08:51–11:28: SETI’s “Wow! Signal” and modern reanalysis.
- 11:28–16:30: China’s moon plans and the geopolitical context.
- 18:45–21:35: Dr. Selva’s career journey.
- 24:13–25:47: Why AI agents are necessary for long-duration missions.
- 27:00–29:53: Study setup—how they tested Daphne, in lab and HERA.
- 31:18–36:16: Surprising results—the lab vs. HERA contrast.
- 43:59–45:41: Technicality—how Daphne avoids the "hallucination" problem.
- 50:55–53:08: Future AI autonomy and practical limitations.
Conclusion & Where to Learn More
- AI helpers in space are not science fiction; they’re being prototyped now, but context and user trust are critical.
- Dr. Selva: “We are going to see AI not only helping astronauts, but also helping mission control, helping the people that design the spacecraft, the people that operate the spacecraft.” (C, 56:22)
- Follow Dr. Selva’s work via the Texas A&M website or Google Scholar.
- For more space news, the hosts’ latest work can be found at space.com (Tariq Malik), in Ad Astra magazine (Rod Pyle), and through the National Space Society (NSS).
Memorable Moment
“If you have very simple anomalies or recurrent anomalies, you probably don't need something like Daphne...where Daphne can help...is with more complex, perhaps unknown anomalies where you really don't know what's going on and you need to figure it out.”
– Dr. Daniel Selva, [33:35–34:34]
Subscribe to This Week in Space for more timely tech and space insights!