Podcast Summary
New Books Network: David Elliott on "Artificially Intelligent: The Very Human Story of AI"
Air Date: October 14, 2025
Host: Gregory McNiff
Guest: David Elliott (PhD candidate, University of Ottawa)
Book: Artificially Intelligent: The Very Human Story of AI (Aevo UTP, 2025)
Main Theme & Purpose
This episode features David Elliott, whose book offers a human-centered chronological narrative tracing the evolution of artificial intelligence from ancient mathematics to the current AI moment. Elliott and host Gregory McNiff discuss the philosophies, historical figures, scientific paradigms, societal impact, and pressing dilemmas around AI—making the story accessible, balanced, and relevant for everyday listeners. The book serves both as an introduction and a call to action for a broad audience to grapple with AI’s human, ethical, and political dimensions.
Key Discussion Points & Insights
1. The Book’s Audience and Mission
- Intent: Elliott wrote the book for general audiences—especially people new to AI or put off by fear-mongering or hyper-optimism in the field.
- "I wrote this book for my friends ... they're plumbers, they're marketing executives, they're politicians ... I wrote it for anybody who wants to learn more." (03:15, Elliott)
- Elliott aims for accessibility and fun, avoiding academic jargon, to encourage broad participation and understanding.
- The book is a call to collective action in shaping AI’s future.
2. Origins: The Algorithm, History, and Storytelling
- The story begins with Al-Khwarizmi, whose work in devising algorithms—step-by-step procedures—paved the way for mathematics and computing.
- "He designed a system that anybody could follow ... That was the founding of algebra." (05:52, Elliott)
- Algorithms are demystified from modern associations (“TikTok algorithm”, etc.) to their mathematical and historical roots.
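To make the “step-by-step procedure” concrete, here is an editorial sketch (not from the episode) of Al-Khwarizmi’s completing-the-square recipe for equations of the form x² + bx = c; the function name and comments are illustrative:

```python
import math

def solve_quadratic(b: float, c: float) -> float:
    """Solve x^2 + b*x = c for the positive root by completing the square,
    the step-by-step recipe al-Khwarizmi spelled out in words and geometry."""
    half_b = b / 2           # step 1: halve the coefficient of x
    square = half_b ** 2     # step 2: square it
    total = square + c       # step 3: add the constant term
    root = math.sqrt(total)  # step 4: take the square root
    return root - half_b     # step 5: subtract half the coefficient

# Al-Khwarizmi's own worked example: x^2 + 10x = 39  ->  x = 3
print(solve_quadratic(10, 39))
```

Anyone who can follow the five numbered steps gets the answer, with no understanding of why it works required—which is precisely what makes it an algorithm.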
3. Foundational Figures: Babbage, Lovelace, Lord Byron, Mary Shelley
- Ada Lovelace: Widely regarded as the world’s first computer programmer; she foresaw that computers could go beyond arithmetic, though she doubted they could ever be creative.
- "Ada saw the future ... realized this machine, if he [Babbage] built it, could be programmed to do anything." (13:23, Elliott)
- Lovelace’s father, Lord Byron, and Mary Shelley’s Frankenstein are intertwined with technological anxieties—paralleling AI’s own “creation story.”
- Lovelace’s famous objection: that machines could never create, only recombine what humans give them (20:31).
- "She concludes ... would be eventually very powerful, but never creative." (20:31, McNiff referencing Lovelace)
4. Industrial Lessons: The Luddites
- Luddites are often misunderstood as technophobes; in reality, their problem was not with technology, but its disruptive and exploitative impact on labor and society.
- "They weren't against the technology, they were against the way it was being implemented ... to deskill labor..." (21:26, Elliott)
- Elliott frames their struggle as one about power, community, and social values, not just machines—paralleling modern AI’s challenge.
5. Alan Turing: Machines, Intelligence, and Spirit
- Alan Turing, mathematician and codebreaker, laid philosophical and technological ground for AI.
- The universal Turing machine showed that a single general-purpose machine can carry out any algorithm—the theoretical foundation of the digital computer.
- Turing reframed questions from “can machines think?” to “can we construct machines society deems intelligent?”
- "The question of can machines think is irrelevant because we don't know what it means to think." (31:13, Elliott paraphrasing Turing)
- Turing’s test: If judges can’t reliably tell a human from a machine in conversation, our concepts of intelligence will eventually shift (34:51).
- "Intelligence is a human construct ... an emotional construction." (31:13, Elliott)
- Turing’s personal philosophy: separates intelligence (replicable by machines) from the non-replicable “human spirit” (36:27).
- "Intelligence is not, to him, a part of the spirit. Intelligence is part of the machine." (36:27, Elliott)
- Turing saw no theological reason why a machine could not, in principle, have a soul (38:53).
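The claim that a fixed rule table plus an unbounded tape suffices for any algorithm can be illustrated with a toy simulator (an editorial sketch, not from the book; the machine and all names are invented for illustration):

```python
def run_turing_machine(tape, rules, state="start", accept="halt"):
    """Tiny Turing-machine simulator: a finite rule table plus an
    unbounded tape is the whole model of computation."""
    tape = dict(enumerate(tape))  # sparse tape; "_" means blank
    head = 0
    while state != accept:
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]  # look up one rule
        tape[head] = write                           # write a symbol
        head += 1 if move == "R" else -1             # move the head
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Example machine: walk right, flipping every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("1011", flip))  # -> 0100
```

Swapping in a different rule table yields a different algorithm, which is the sense in which one universal machine can compute anything computable.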
6. The Chessboard of AI: Symbolic vs. Connectionist Traditions
- Chess became the quintessential measure of AI, aligning with the cultural interests of the era’s researchers (39:39).
- Symbolic AI: Sought to encode logical rules and representations.
- Connectionist AI (Neural Networks): Attempted to mimic the brain through interconnected “neurons,” learning from data by trial and error.
- "They thought that the way to create intelligence was to mimic logic ... where the connectionist school is more interested in mimicking the human brain." (44:00, Elliott)
- Geoffrey Hinton represents perseverance in neural networks—biding his time until data and computational power (GPUs) caught up and made deep learning practical.
- "His supervisor would say, there's not a computer powerful enough to run it and you don't have enough data. ... It took ... like 25 years or 30 years ... but the world changed around him." (43:36, Elliott)
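The two traditions can be contrasted in a few lines (an editorial sketch, not from the book): a symbolic program encodes the rule by hand, while a connectionist “neuron” starts with random weights and learns the same behavior from examples.

```python
import random

# Symbolic tradition: intelligence as explicit, hand-written logic.
def symbolic_and(a: int, b: int) -> int:
    return 1 if (a == 1 and b == 1) else 0

# Connectionist tradition: a single artificial neuron that learns AND
# from examples by trial and error -- no human writes the rule.
def train_perceptron(examples, epochs=20, lr=0.1, seed=0):
    rng = random.Random(seed)
    w1, w2, bias = rng.random(), rng.random(), rng.random()
    for _ in range(epochs):
        for (a, b), target in examples:
            out = 1 if (w1 * a + w2 * b + bias) > 0 else 0
            err = target - out      # nudge the weights toward
            w1 += lr * err * a      # whatever fits the data
            w2 += lr * err * b
            bias += lr * err
    return lambda a, b: 1 if (w1 * a + w2 * b + bias) > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned_and = train_perceptron(data)
assert all(learned_and(a, b) == symbolic_and(a, b) for (a, b), _ in data)
```

Both end up computing the same function; the difference is that the symbolic version is fully inspectable, while the learned weights are simply whatever the training process settled on—a small preview of the “black box” problem discussed later.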
7. The Data, Surveillance, and the New Paradigm
- Modern AI is built on unprecedented quantities of data and advanced GPUs. Data theft, biased datasets, and privacy loss are major issues.
- "With data ... a neural network works ... building its own rules ... That's what makes it so fascinating, is there's no human telling it ... It is creating its own rules." (45:58, Elliott)
- Biased data can produce discriminatory outcomes. Surveillance is increasingly normalized—companies gain data for both AI development and commercial interests.
- "AI is built on a foundation of data ... The change in how we collect data too, which is just wild." (55:22, Elliott)
- Discussion of "creep" from Google Glass public outrage (2012) to current normalization of smart glasses/data-capture (56:09).
- Notable quote: "We are not [listening on your phone]. But if you knew how we did it, you'd be even more scared." (57:44, Elliott quoting a Google employee)
8. The Black Box, Risks, and Morality
- Modern AI systems (e.g. GPT-3, AlphaZero) now operate as “black boxes”—their reasoning and internal representations are often opaque even to their creators.
- "We don't even understand how it's getting its answers ... This thing literally has a mind of its own that we don't understand." (58:35, McNiff)
- Elliott: We will likely never be able to fully reverse-engineer deep learning systems, akin to tracing thoughts in a human brain (59:51).
- “That doesn’t mean we should stop using it... but we need to think about where we're using these systems...” (59:51, Elliott)
- Risk-based regulation: The EU’s tiered approach is commended—minimal-risk uses are largely unregulated, high-risk uses require explainability and audits, and the most dangerous applications (e.g. killer drones, social credit systems) are prohibited (61:42).
9. Power, Politics, and the Corporate AI “Frankenstein”
- Big Tech, not nation-states, wields the most significant AI power; these companies are guided by profit, not always by the public good.
- "It is just ceding too much power to these individual visions ... and that's not how a democracy is supposed to work." (50:25, Elliott)
- Case study: Alphabet’s Sidewalk Labs smart-city project in Toronto (abandoned over privacy/trust issues) vs. Barcelona’s citizen-owned, lower-profile, more democratic smart-city approach (52:44).
- Elliott is wary of both unchecked corporate and government AI control—nuance and debate are essential.
10. Hopes and Portents: Human Agency in Shaping the Future
- The future is unwritten. History is full of “branching points” rather than predetermined inevitabilities.
- Elliott distinguishes “hope” (active engagement and potential agency) from “optimism” (expecting good outcomes).
- "I have hope that we can guide ourselves towards an optimistic future ... I choose to be hopeful." (63:06, Elliott)
- AI can liberate or oppress—the direction depends on public engagement and wise frameworks.
- "We get the opportunity to push this forward. ... I want to give the everyday person ... the opportunity to join this conversation and ... try and push this technology towards what it can be." (65:06, Elliott)
Memorable Quotes & Timestamps
- "I wrote this book for my friends ... for anybody who's been hearing all these words in the news ... and wants an accessible place to start." (03:15, Elliott)
- "He designed a system that anybody could follow ... that was the founding of algebra." (05:52, Elliott, on Al-Khwarizmi)
- "Ada saw the future ... realized that this machine ... could become the computer that we know today." (13:23, Elliott)
- "The question of can machines think is irrelevant because we don't know what it means to think." (31:13, Elliott paraphrasing Turing)
- "Intelligence is a human construct ... Deciding what is intelligent is really a construct of our own imagination." (31:13, Elliott)
- "AI is built on a foundation of data ... we've just slowly, over the last 15 years, become so used to our privacy being eroded..." (55:22–56:09, Elliott)
- "We don't even understand how it's getting its answers ... This thing literally has a mind of its own that we don't understand." (58:35, McNiff)
- "I am hopeful every year that the Ottawa Senators are going to do good, but I'm not optimistic about it ... I have hope that we can guide ourselves toward an optimistic future." (63:06, Elliott)
Important Timestamps
- [03:15] – Elliott explains the book's purpose/audience.
- [05:52] – Discussion of Al-Khwarizmi and the creation of algorithms.
- [13:23] – Ada Lovelace’s unique vision for computing.
- [20:31] – Lovelace’s “objection” and Turing’s response.
- [21:26] – The Luddites and fears about tech’s social/economic impacts.
- [31:13] – Turing’s nuanced approach to intelligence and the Turing Test.
- [36:27] – Turing’s concept of the human spirit versus AI.
- [43:36] – Geoffrey Hinton’s perseverance and the rise of deep learning.
- [45:58] – The role of big data and GPUs in neural network breakthroughs.
- [50:25] – Big Tech’s “Frankenstein” role and societal risks.
- [55:22–56:09] – Surveillance, normalization, and loss of privacy.
- [61:42] – Regulation and the European Union’s four-tier risk model.
- [63:06–65:06] – The book’s message of hope and agency.
Tone & Style
Elliott’s voice is balanced—sometimes playful, always accessible, and deeply rooted in his love for both scientific history and human potential. He avoids alarmism, advocating hope alongside realism: technology is messy, political, and always a human story.
For Further Exploration
- Details of AI’s influence in sentencing, employment, and governance.
- The push/pull between private sector, government, and citizen power over AI.
- The philosophical and practical limits of explainable AI.
- Calls for a wider, public debate and democratic frameworks for guiding AI.
Summary prepared to give non-listeners a vivid and comprehensive sense of this engaging, timely conversation about how AI’s very human story is being written today—and why it matters who gets to hold the pen.
