Science Vs — "AI: Is It Ruining the Environment?"
Hosted by Rose Rimler
Release Date: November 13, 2025
Episode Overview
In this episode of Science Vs, host Rose Rimler, sitting in for Wendy Zuckerman, investigates the environmental impact of artificial intelligence (AI), focusing on its energy and water usage. The show dissects the claim that AI data centers are "ruining the environment," examines popular memes and viral statistics, and contrasts AI's ecological footprint with other industries and habits. Through interviews with climate journalists, AI researchers, and sustainability-focused engineers, the episode explores facts, context, and misconceptions around AI's real cost to the planet.
Key Discussion Points & Insights
1. Why Is AI’s Environmental Impact in the Spotlight?
- Growing concerns about AI’s energy- and water-intensive data centers.
- Community protests against new data center construction over resource competition ([01:03]).
- Cultural meme: Using AI for trivial tasks is mocked as "ruining the planet for a f***ing Instagram post" ([01:32]).
- Rebuttals from tech companies note that per-user impact is small, and some critics argue it’s less than other behaviors (e.g., eating meat, flying) ([01:32]).
2. How Do AI Data Centers Differ from Regular Computing?
- AI utilizes GPUs (graphics processing units) instead of CPUs (central processing units), enabling massive parallel processing at a higher energy cost ([05:12], [06:09]); a small code illustration follows the analogy below.
- Memorable Analogy:
- CPUs: "One paintball gun drawing a happy face pellet by pellet."
- GPUs: "200 paintball guns firing at once to create the Mona Lisa."
- — Rose Rimler ([05:57])
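As a loose illustration of that "many pellets at once" idea (not from the episode), here is a minimal Python sketch comparing one-at-a-time work to batched, vectorized work. NumPy on a CPU stands in for a GPU here; the point is the do-it-all-at-once pattern, not the hardware.

```python
import numpy as np

# One-at-a-time ("pellet by pellet"): multiply a million numbers in a loop,
# one multiplication per step.
values = np.random.rand(1_000_000)
scaled_loop = [v * 2.0 for v in values]

# Batched ("200 paintball guns firing at once"): the same work expressed as a
# single vectorized operation that parallel hardware can spread across many
# execution units simultaneously.
scaled_vectorized = values * 2.0

# Both approaches produce the same result; only the execution pattern differs.
assert np.allclose(scaled_loop, scaled_vectorized)
```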
3. Quantifying the Energy Use of AI Models
Interviewees: James O'Donnell and Casey Crownheart, MIT Technology Review
- Real numbers for energy use are hard to come by; big AI providers are secretive. "Companies ... not willing to share the details ... so, you know, we weren't going to get it from them." — James O'Donnell ([07:47])
- MIT team tested open-source models themselves. Takeaways:
- "What is AI's energy burden?... One of my biggest takeaways was just how much it depends."
— Casey Crownheart ([08:58]) - Different models, different sizes: Smaller models (8B parameters); larger ones up to 400B; leading products likely in the trillions ([10:44]).
- "What is AI's energy burden?... One of my biggest takeaways was just how much it depends."
How Much Energy Does a Prompt Use?
- Small language model ("Llama," 8B parameters):
- ~114 Joules (about one-tenth of a second in a microwave) ([12:19])
- Huge model:
- 8 seconds of microwave time per prompt
- Industry numbers:
- Text prompt for major models: 1–2 seconds microwave time
- "Probably on the order of zapping something in the microwave for less than 10 seconds." — Rose Rimler ([13:36])
- Image generation:
- Slightly less or comparable to large text model output; ~5.5 seconds microwave ([14:39])
- Video generation:
- Significantly more: "Over an hour in the microwave" for a 5-second, 16 fps clip ([15:57]); see the unit-conversion sketch after this list
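To put the "microwave seconds" framing into standard units, here is a minimal back-of-envelope sketch. The ~1,100 W microwave wattage is an assumption, not a figure from the episode; the per-prompt numbers are the ones quoted above.

```python
# Back-of-envelope conversion between "microwave seconds" and energy.
# Assumption (not from the episode): a typical microwave draws about 1,100 W.
MICROWAVE_WATTS = 1_100  # joules per second

def microwave_seconds_to_joules(seconds: float) -> float:
    """Energy (J) consumed by running the assumed microwave for `seconds`."""
    return MICROWAVE_WATTS * seconds

def joules_to_kwh(joules: float) -> float:
    """Convert joules to kilowatt-hours (1 kWh = 3.6 million J)."""
    return joules / 3_600_000

# Small open model (~114 J per prompt) -> roughly a tenth of a microwave-second.
print(114 / MICROWAVE_WATTS)                                  # ~0.10 s

# Huge model prompt: ~8 microwave-seconds -> ~8,800 J (~0.0024 kWh).
print(joules_to_kwh(microwave_seconds_to_joules(8)))

# 5-second AI video clip: "over an hour in the microwave",
# so at least ~4 MJ (~1.1 kWh) taking one hour as the floor.
print(joules_to_kwh(microwave_seconds_to_joules(3_600)))
```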
4. The Systemic Picture: AI’s Expanding Demand
- Daily usage: OpenAI reports 2.5 billion prompts per day ([17:21]).
- Organizational adoption: 78% of surveyed organizations use AI ([17:22]).
- Impact on grid:
- Data center electricity use tripled between 2014 and 2023 (Lawrence Berkeley National Lab) ([17:56]).
- By 2028, AI data centers could use as much power annually as a quarter of US households ([18:03]); a rough scale estimate follows this list
- "Imagine adding 25% more households to the US in 2028." — Rose Rimler ([18:23])
- The problem lies less with electricity use itself than with the fact that most grids still rely heavily on fossil fuels ([18:41]):
- Only about 9% of US power comes from renewables; roughly one-third comes from petroleum, one-third from natural gas, and 8% from coal ([19:10])
- Nuclear power is being considered, but new plants take decades to build and can't scale up fast enough to meet demand ([20:04])
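As a rough sanity check on the "quarter of US households" comparison, here is a hedged back-of-envelope sketch. The household count (~130 million) and average consumption (~10,500 kWh/year) are assumptions in the ballpark of published US averages, not numbers from the episode.

```python
# Rough scale check for "as much power annually as a quarter of US households."
# Assumptions (not from the episode): ~130 million US households, each using
# an average of ~10,500 kWh of electricity per year.
US_HOUSEHOLDS = 130e6
AVG_HOUSEHOLD_KWH_PER_YEAR = 10_500

# A quarter of households, converted from kWh/year to TWh/year (1 TWh = 1e9 kWh).
quarter_of_households_twh = US_HOUSEHOLDS * 0.25 * AVG_HOUSEHOLD_KWH_PER_YEAR / 1e9
print(f"~{quarter_of_households_twh:.0f} TWh per year")  # ~341 TWh/year
```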
5. Water Use: Debunking the Viral "AI Is Drinking All Our Water" Meme
Guest Expert: Shaolei Ren, UC Riverside
- Viral claim: Every AI use "wastes a bottle of water" ([27:23])
- Reality (estimates based on GPT-3 for reference):
- ~30 prompts exchanged = 0.5 liter (one bottle) of water ([28:34]).
- Only ~12% is drinking water used for on-site cooling; the remainder is water used at power plants (drawn from rivers and lakes, not potable supplies) ([29:06]).
- Water used evaporates, but may not replenish the local supply—regional water stress matters ([26:26], [27:02]).
- Total US data center water use: about 0.3% of the national supply (roughly Rhode Island's share) ([31:29]); a per-prompt sketch follows this list.
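A minimal sketch of what the episode's water numbers imply per prompt, assuming the ~30-prompts-per-half-liter figure and the ~12% on-site cooling share quoted above:

```python
# Per-prompt water footprint implied by the episode's numbers (GPT-3-era estimate).
LITERS_PER_30_PROMPTS = 0.5      # "one bottle" of water per ~30 prompts
ONSITE_COOLING_SHARE = 0.12      # ~12% is potable water used for on-site cooling

ml_per_prompt = LITERS_PER_30_PROMPTS / 30 * 1000
print(f"~{ml_per_prompt:.0f} mL per prompt total")                                  # ~17 mL
print(f"~{ml_per_prompt * ONSITE_COOLING_SHARE:.0f} mL of that is on-site cooling")  # ~2 mL
```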
6. Regional vs. National Impact & Future Outlook
- Effects can be severe in small towns with under-built water infrastructure ([30:28]).
- Water and energy demand expected to double in coming years—but proportional national impact remains modest ([31:56]).
- Real opportunity for reduction lies in a greener energy grid and smarter AI model engineering ([21:47], [31:44]).
7. Personal and Societal Responsibility: Who Should Be Accountable?
- Even environmentally-conscious experts report they still use AI for convenience ([33:10]).
- E.g.: Planning trips, editing text, technical research.
- “It's on the companies, it's on the government ... I'm not sure AI is the villain. I think the villain is our reprehensible and baffling inability to switch to renewable energy and to put any kind of real effort into getting off of fossil fuels.”
— Rose Rimler ([34:16])
- Key reason AI gets blamed: many don't yet see social or personal value in it, compared to flying or eating meat; 61% of the US public perceives AI as having "more drawbacks than benefits" ([35:23]).
Notable Quotes & Memorable Moments
- On Meme Culture and the “AI Water Bottle” Statistic:
- "So that's a distortion of the message that we show in the paper."
— Shaolei Ren ([28:28])
- "So that's a distortion of the message that we show in the paper."
- On Model Complexity:
- "It's like a switchboard that goes on for miles."
— Rose Rimler ([10:50])
- "It's like a switchboard that goes on for miles."
- On Individual vs. Systemic Impact:
- "I'm not sure AI is the villain. I think the villain is our reprehensible and baffling inability to switch to renewable energy..."
— Rose Rimler ([34:16])
- "I'm not sure AI is the villain. I think the villain is our reprehensible and baffling inability to switch to renewable energy..."
Timestamps for Key Segments
- [00:33] — Framing the controversy: AI’s environmental downsides introduced
- [05:12] — How AI’s architecture (GPUs) differs from regular computing
- [07:00] — MIT Tech Review's journalists on measuring energy use
- [12:19] — Explaining energy use in "microwave seconds"
- [13:55] — Text vs. image vs. video energy comparison
- [17:21] — The scale: daily prompts, grid impact, and tripling of power usage
- [18:58] — Fossil fuel dependency: what powers the grid
- [24:23] — Profile: Shaolei Ren and introduction to water cooling in data centers
- [28:28] — Debunking the viral "one prompt = one bottle" claim
- [31:29] — Data centers = Rhode Island’s worth of national water use
- [33:10] — Do the experts personally use AI?
- [34:16] — "The real villain is fossil fuels"
- [35:23] — Public ambivalence: 61% see more drawbacks than benefits
- [36:47] — Has learning the facts changed personal AI use?
Conclusion & Takeaways
- Individual use of AI for text and images is relatively modest per interaction; the worry is the aggregate scale as usage explodes.
- The environmental impact hinges less on AI itself and more on how the grid that powers it generates electricity. If powered by renewables, AI’s effect would shrink considerably.
- Water concerns are regional, and the viral "one email = one bottle" stat is a simplification.
- Biggest growth in harm is from increasing institutional adoption and demand—not memes or individual silly uses.
- Responsibility for managing AI’s footprint falls on governments and corporations to decarbonize energy and develop efficiency—individual restraint helps, but isn’t the solution.
- Public perception is skeptical—possibly because AI’s societal benefits are less tangible (yet) than those of established technologies or activities.
For more details and sources, check the episode transcript and show notes, including the MIT Technology Review article referenced throughout.
Quotations and key segments are labeled with timestamps and speaker attributions, directly reflecting the original tone and context of the discussion.
