Podcast Summary: Open Circuit – "The Latitude Stage: How AI Changes Our Digital Energy Footprint"
Podcast: Open Circuit
Host: Stephen Lacey (Latitude Media)
Guest: Vijay Gadepally (Senior Scientist at the MIT Lincoln Laboratory Supercomputing Center; CTO at Radium Cloud)
Release Date: August 20, 2025
Episode Overview
This episode, recorded live at Latitude Media’s Transition AI Conference, explores the rapidly rising energy consumption of artificial intelligence (AI) and its implications for our digital energy footprint. As AI shifts from task-based tools to “always-on” agentic companions embedded in daily life, host and guest dig into the mechanics, real-world use cases, and the urgent need for innovation around AI’s massive power appetite. The discussion breaks down the scale of electricity involved, identifies what is driving inefficiencies, and asks whether technical ingenuity can offset the surging demand on power grids.
Key Discussion Points and Insights
1. A Paradigm Shift: From Simple Tasks to Energy-Intensive Companions
- Main Idea: The move from basic AI tasks (e.g., search queries, email sorting) to persistent, conversational, agent-like systems is drastically increasing energy consumption.
- Notable Quote:
- “A single AI agent booking your flight can use as much electricity as running your dishwasher. New reasoning models burn up 90% more energy than traditional ones.” — Stephen Lacey [01:13]
- Everyday interactions like “please” and “thank you” with chatbots aggregate into huge computation—and electricity—costs at planetary scale.
2. Cultural Impact: The Attachment to AI Personalities
- Refers to waves of emotional attachment formed with AI, referencing the film Her and recent events where OpenAI had to restore older AI personalities after user backlash.
- Notable Quote:
- “When OpenAI released GPT-5 this month, it wiped out the personalities that people had become attached to. And they freaked out, forcing the company to reintegrate their old model.” — Stephen Lacey [05:35]
- Raises concerns about AI-induced psychosis and emotional dependence, connecting these trends to a societal scale-up of always-on AI.
3. The Mechanics: Why AI Is So Power Hungry
[07:28]
- Vijay’s Example:
- Booking a flight with an AI agent using a reasoning model (like DeepSeek R1) can use ~3 kWh per task.
- Equivalent to running a dishwasher—for one AI task.
- Reasons for Energy Intensity:
- Modern models (especially reasoning models) run repeated, complex inference cycles.
- Many daily workflows now trigger cloud-based AI models, even for simple queries, instead of running on-device computations.
- Quote:
- “It may seem like you’re asking the same question as maybe you did five years ago. But the amount of energy … is just going up. Sure, the fidelity of the answers is going up, but so is the amount of energy and compute.” — Vijay Gadepally [08:55]
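To put the section’s numbers in perspective, here is a back-of-the-envelope Python sketch. Only the ~3 kWh-per-task figure comes from the episode; the dishwasher cycle energy and daily task volume are illustrative assumptions.

```python
# Rough scale check on the episode's ~3 kWh-per-agentic-task figure.
# Only TASK_ENERGY_KWH comes from the episode; the other numbers are
# illustrative assumptions.

TASK_ENERGY_KWH = 3.0            # AI agent booking a flight (episode figure)
DISHWASHER_KWH_PER_CYCLE = 1.5   # assumed typical dishwasher cycle

cycles_equivalent = TASK_ENERGY_KWH / DISHWASHER_KWH_PER_CYCLE

# Scale to a hypothetical 10 million such agentic tasks per day:
TASKS_PER_DAY = 10_000_000
daily_mwh = TASK_ENERGY_KWH * TASKS_PER_DAY / 1000  # kWh -> MWh

print(f"One task ~= {cycles_equivalent:.1f} dishwasher cycles")
print(f"{TASKS_PER_DAY:,} tasks/day ~= {daily_mwh:,.0f} MWh/day")
```

Under these assumptions, 10 million agentic tasks per day would draw 30,000 MWh daily, which is why the hosts treat per-task energy as a planetary-scale concern rather than a rounding error.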
4. The Invisible Footprint: Everyday Apps, Enormous Hidden Costs
[10:48]
- Example: Vijay’s children’s storybook iPad app: generating one AI story for his son consumes roughly as much energy as the iPad itself, but most of that draw happens out of sight, in remote data centers.
- “It’s somewhere in Virginia or wherever you’re seeing a little cloud of smoke come out thanks to this little story that was created by a 5-year-old.” — Vijay Gadepally [11:26]
- Many tasks that once ran locally or at the “edge” now offload to energy-hungry AI in the cloud, making everyday digital life far more energy-intensive.
5. Why Energy Use Is Growing So Fast
- Past: The main constraint for AI’s expansion was access to enough compute (GPUs, chips).
- Present: The bottleneck is power—securing enough electricity and grid capacity for new data centers.
- Technical Innovations:
- Hardware efficiency (operations per watt) has improved 10x in six years.
- Recent breakthroughs stem from using lower-precision calculations, not just “Moore’s Law.”
- However, efficiency is being outpaced by explosive usage and sophistication.
- Quote:
- “We’ve roughly 10x’d the operations per watt in the last 5 to 6 years… It’s not really Moore’s Law. … We’ve become just a lot better with using different precisions.” — Vijay Gadepally [14:40]
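The “10x in six years” hardware claim implies a steep but quantifiable annual improvement rate. A quick sketch of the implied compound growth:

```python
# Implied annual efficiency improvement from the episode's hardware claim:
# roughly 10x more operations per watt over about six years
# (100 gigaops/watt -> 1 teraops/watt for commercial chips).

improvement_factor = 10.0
years = 6

annual_growth = improvement_factor ** (1 / years) - 1  # compound annual rate
print(f"Implied efficiency growth: {annual_growth:.1%}/year")
```

That works out to roughly 47% better operations-per-watt each year, which is impressive but, as the episode notes, still outpaced by growth in usage and model sophistication.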
6. Why Some ‘Efficient’ AI Models Are Still More Power-Hungry
[15:47]
- Reasoning models (DeepSeek R1, O1) vs. Standard LLMs:
- DeepSeek R1 is efficient relative to other reasoning models, but uses up to 90% more energy than traditional LLMs for the same prompt.
- Because “reasoning” models repeatedly activate large portions of the network internally before returning outputs, a small tweak in a prompt can cause massive unexpected jumps in compute/energy usage.
- Quote:
- “Using something like DeepSeek R1 versus a more traditional LLM, say Llama 3.3, used almost 90% more energy for the same task…” — Vijay Gadepally [18:00]
7. Big Picture: Can We Innovate Our Way Out of the AI Power Crunch?
[22:03+]
- Most AI systems are not built with efficiency in mind.
- Opportunities:
- Smarter, context-aware software that can adjust task length or answer detail depending on time-of-day energy cost or carbon intensity.
- Example: A carbon-aware LLM reduced emissions by 70% without hurting answer quality by shortening answers when power was dirty or expensive.
- Quote:
- “Why aren’t our LLMs … aware of the environment around them? They use so much energy and are instrumented less than a light bulb in your home.” — Vijay Gadepally [23:23]
- The host pushes back, recalling the late-’90s hype that the internet would consume half of America’s electricity, and asks whether a similar innovation curve can keep AI’s power hunger in check.
- Vijay is optimistic, saying we can avoid reckless growth if we design smarter.
- “We can be stupid and just keep building more stuff … or we can pause, think, and do so in a way that’s far more sustainable.” — Vijay [26:28]
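The carbon-aware LLM behavior described above could look something like the following minimal sketch. The episode describes only the behavior (shorter answers when power is dirty or expensive), not an implementation, so every name and threshold here is hypothetical.

```python
def response_token_budget(carbon_intensity: float,
                          clean_threshold: float = 200.0,
                          full_budget: int = 1024,
                          reduced_budget: int = 256) -> int:
    """Pick a max-token budget for an LLM response based on the grid's
    current carbon intensity (gCO2/kWh). All thresholds are illustrative.

    When power is clean, allow a full-length answer; when it is dirty,
    shorten the answer, trading verbosity for lower emissions.
    """
    if carbon_intensity <= clean_threshold:
        return full_budget
    return reduced_budget

# Example: a dirty-grid hour gets a much tighter budget.
print(response_token_budget(120.0))  # clean grid -> full budget
print(response_token_budget(450.0))  # dirty grid -> reduced budget
```

A production version would presumably use a real grid-signal feed and a smoother policy than a single threshold, but the point stands: the model’s output length is a tunable knob that today almost no system ties to energy conditions.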
8. Market Forces and Economic Incentives
[26:52]
- Hyperscalers and major AI players have powerful economic incentives to drive electricity and compute efficiency.
- As energy becomes a bigger part of AI’s cost structure, companies will be forced to innovate.
- Fast-falling token costs (from $20/M to $1/M tokens in months) are driving up overall usage, validating Jevons Paradox—efficiency gains lead to more overall consumption.
- Quote:
- “As much as … we’re trying to make more efficient models … utilization is just going to go up in a pretty significant way, at least for the next couple years as we AIfy everything.” — Vijay Gadepally [31:24]
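The Jevons-paradox point can be made concrete with a little arithmetic. The $20/M-to-$1/M token price drop is from the episode; the demand response is a hypothetical assumption.

```python
# Jevons paradox, numerically: token prices fell ~20x, but if usage
# responds strongly enough to cheaper tokens, total spend (and the
# energy behind it) still rises. Prices are from the episode; the
# usage multiplier is an assumed illustration.

old_price = 20.0   # $ per million tokens
new_price = 1.0    # $ per million tokens
price_drop = old_price / new_price          # tokens are 20x cheaper

usage_multiplier = 30.0                     # hypothetical demand response
total_spend_change = usage_multiplier / price_drop

# A 20x price cut plus 30x more usage means total spend still grows 1.5x.
print(f"Total spend changes by {total_spend_change:.1f}x")
```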
9. Advice for Energy and Grid Stakeholders
[32:03]
- Industry must rethink “flexibility” beyond binary on/off load management.
- There are big opportunities in throttling and modulating AI workloads to balance grid demand, rather than simply denying data centers power.
- Loads often peak only 1–2% of the time; most of the time, capacity is underutilized.
- By collaborating, energy providers and data centers can optimize demand and possibly turn data centers into grid contributors instead of just passive loads.
- Quote:
- “If we can start to design around that fact, be able to take care of the peaks … there are some huge opportunities. … The data center [could] not just be a load on the energy side, but also a contributor at times.” — Vijay Gadepally [33:17]
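Vijay’s throttling point, sketched as code. The 1–2% peak statistic is from the episode; the ramp shape and thresholds below are assumptions, included only to show that “flexibility” can be continuous rather than binary on/off.

```python
def throttle_factor(grid_load: float,
                    peak_threshold: float = 0.95,
                    min_speed: float = 0.6) -> float:
    """Compute-speed multiplier for a data center workload.

    Runs at full speed while grid load is below peak_threshold, then
    ramps linearly down to min_speed as load approaches 100% -- a
    continuous alternative to binary on/off curtailment. Thresholds
    are illustrative, not from the episode.
    """
    if grid_load <= peak_threshold:
        return 1.0
    over = grid_load - peak_threshold
    span = 1.0 - peak_threshold
    return 1.0 - (1.0 - min_speed) * (over / span)

print(throttle_factor(0.50))   # off-peak: full speed
print(throttle_factor(1.00))   # grid peak: slowed, not shut off
```

Because peaks occupy only a sliver of the year, even a modest slowdown during those hours could free substantial capacity without ever taking the data center offline.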
Memorable Moments & Notable Quotes with Timestamps
- “A single AI agent booking your flight can use as much electricity as running your dishwasher.” — Stephen Lacey [01:13]
- Always-on AI and the loneliness epidemic (reference to Zuckerberg and Her) [04:12–06:00]
- “My son’s AI storybook used as much energy to make a story as the whole iPad—except most of that was burned in a Virginia data center.” — Vijay Gadepally [11:26]
- DeepSeek R1 benchmark: “Used almost 90% more energy for the same task as a traditional LLM.” — Vijay Gadepally [18:00]
- Hardware curve:
- “From 100 gigaops/watt to 1 teraops/watt for commercial chips in six years.” — Vijay [14:40]
- “It’s not that economic incentives won’t always be there, but there will be years where the focus is on launching new products—costs become secondary temporarily. … But the next phase will be commoditization and the big cost is going to be energy.” — Vijay [27:55]
- “Jevons Paradox seems to be playing out right now.” — Vijay [31:19]
- Advice for industry: “Flexibility isn’t just running or not running a workload—throttling is huge. If you say it’s going to run a tad bit slower, that can be huge.” — Vijay [32:14]
Key Timestamps for Segment Navigation
- [01:13] Main theme introduction: AI’s surging energy use
- [03:51] Sam Altman & culture of “always-on” AI
- [07:28] How “agentic” AI workloads spike power use
- [10:48] Everyday apps and hidden energy costs
- [13:10] Power, not compute, is today’s growth bottleneck
- [14:40] Hardware efficiency leaps—but not a silver bullet
- [15:47] DeepSeek R1 and reasoning models’ real-world power draw
- [22:03] Where we’re still leaving massive efficiency unaddressed
- [25:35] Can we innovate our way out? Historical perspective
- [26:52] Hyperscaler incentives, energy as an AI cost driver
- [28:45] Projections for US data center power use
- [31:19] Jevons Paradox and utilization surge
- [32:14] Actionable advice for grid/energy industry
Conclusion
This episode challenges listeners to reconsider the booming energy costs being driven by the AI revolution—costs that stem not just from training but especially ongoing usage (“inference”). The conversation reveals both the urgency and the real-world optimism around technical and market solutions, arguing for collaborative, nuanced approaches to data center flexibility, smarter algorithms, and better integration between tech and energy sectors.
For those working at the intersection of AI and energy, the takeaway is clear: demand is surging, efficiency gains are real but insufficient on their own, and opportunities abound for creative, systemic solutions before AI overwhelms grid capacity.
