Podcast Summary — The Digital Executive | Episode 1223
Guest: Dr. Peter McAllister
Title: The Hidden Cost of AI
Date: March 30, 2026
Duration: Approx. 17 minutes
Overview:
This episode of The Digital Executive, hosted by Coruzant Technologies, features Dr. Peter McAllister—an innovation veteran from mining, healthcare, and supply chain sectors, now a data and AI business leader and author. McAllister discusses the inspiration and reasoning behind his sci-fi novel "The Code: If your AI loses its mind, can it take meds?", while tackling broader questions on responsible AI adoption, ethical leadership, and the urgent need to navigate the accelerating landscape of AI with human values and foresight.
Key Discussion Points & Insights
Dr. McAllister’s Career Journey
- Career Evolution
- Early fascination with technology, programming from age 9–10
- Earned a PhD in biotechnology with a focus on mathematical modeling
- Roles in mining (environmental impact), healthcare (implementing computer systems), and logistics (supply chain & pharmacy wholesaling)
- Transitioned from data analytics ("rear vision mirror") to AI-driven forecasting ("the windscreen")
- "[I like to live at the] collision between people, business and technology." (02:22)
- Writing fiction to unify diverse career experiences:
- "I started writing effectively as a way of getting all the crazy thoughts out of my head." (03:50)
- Novel writing brings together his tech, scientific, and creative passions.
Why Choose Fiction to Explore AI’s Challenges?
- Storytelling as a Human Imperative
- "We’re a storytelling species. We’ve always told stories from the beginning of our ability to communicate." (05:37)
- Fiction enables speculative ‘what ifs’ that would be less readily accepted in technical or business contexts
- Inspired by Arthur C. Clarke—reaching wider audiences with fiction than technical papers
- "He also wrote 2001: A Space Odyssey...reached a much larger audience and changed a thought process there through that medium of fiction." (06:24)
- Fiction allows exploration of dark humor and satirical takes on tech and society
- "I’ve got a relatively dark sense of humor, and I couldn’t otherwise let that go run wild in a business context or a technical book." (07:09)
Responsible Leadership in AI Adoption
- Dual Focus: Security & Business Value
- Security:
- "Some of the best AI minds in the hacker community...If we’re not on the next bus chasing [them], it’s going to get very messy very quickly." (08:43)
- Trust is critical—execs must rely on and empower their technical experts
- Business Process Risks:
- Beware "codifying a solution that is a workaround upon a workaround...putting that into an AI framework and then forgetting about what actually happens in the business." (09:41)
- The risk: "There is a significant chance you will make your organization dumber" by automating flawed processes. (09:28)
- Key advice:
- Focus on improving good processes with AI rather than trying to ‘fix’ fundamentally broken ones
- Listen to security/governance, but balance risk and innovation
- "You have business processes first and the technology helps you execute those better...rather than let’s just take what we’ve currently got and put it in a box." (11:29)
The Long View: AI’s Trajectory and Ethical Bargains
- Red Flag Analogy (Automobile Evolution)
- Early cars: required red flag bearers and had strict regulations
- "If you went to drive [a car] on a track or a road, you needed someone walking in front of you with a red flag..." (12:36)
- Over generations, cars became ubiquitous and deadly; roughly 1 million deaths per year is the cost society pays for their convenience
- "Effectively we’ve made a trade-off of having the value of the automobile in our society for a cost of around about a million people a year." (13:51)
- We’re now at a similar inflection point with AI—red flag warning stages, iteratively adjusting as problems surface
- "Anytime you use a chat GPT or similar, you’ll get that little red flag warning at the bottom which says this might be complete rubbish..." (15:02)
- Potential Futures: From Helpful to Harmful
- Tech utopia: AI could automate the majority of production, relegating humans to consumers
- Tech dystopia: Humans become "pets or subjugated by [the] beast" (AI), referencing The Matrix (15:48)
- McAllister’s Prediction:
- "Humanity ends up with this 51, 49% split between good and exploitation...sometimes we’re 51% good and sometimes 49% good." (16:05)
- Urges intentional decision-making now to steer toward a future we desire, rather than letting incremental choices lead us elsewhere
- "We need to make some decisions about where we want this to go, rather than being driven by a series of decisions as time goes by that may take us down a path that we’re not happy with." (16:48)
Notable Quotes & Memorable Moments
- On why fiction is powerful:
- "Fiction allows you also to explore a lot more what ifs and crazy ideas that people will accept and listen to and let you take them on a journey." — Dr. Peter McAllister, 05:49
- On the dangers of reckless AI deployments:
- "If you don’t use it wisely or correctly, there is a significant chance you will make your organization dumber." — McAllister, 09:28
- On the red flag analogy:
- "Right now we’re in a situation of...I think we’re at that red flag rule space. Anytime you use a chat GPT or similar, you’ll get that little red flag warning at the bottom..." — McAllister, 15:02
- On the stakes for humanity:
- "Humanity ends up with this 51, 49% split between good and exploitation...If we don’t sit down and think now about where we want to be...we will be on the journey with every individual decision being another correction point." — McAllister, 16:05 & 16:38
Timestamps for Key Segments
- Introduction & Career Journey — 02:08–04:35
- Why Fiction? The power of narrative — 05:32–07:09
- Responsible AI Leadership: Security, Risks & Best Practices — 08:19–11:42
- The Red Flag Analogy and Lessons from Automobiles — 12:27–14:50
- Potential AI Futures & Call for Ethical Deliberation — 15:00–16:48
Tone & Style
The conversation is thoughtful, candid, and occasionally wry—marked by McAllister’s pragmatic optimism and dry humor. His analogies bridge sci-fi speculation with real-world executive challenges, urging leaders to wed innovation with caution and ethical accountability.
Summary prepared for listeners seeking a comprehensive understanding of Dr. Peter McAllister’s insights on the present and future of responsible AI leadership and societal trade-offs.
