Intelligent Machines Podcast (Audio) — Episode 854
Title: "Welcome to the Pitt - AI: A Brand or a Breakthrough?"
Host: Leo Laporte
Co-Hosts: Paris Martineau, Jeff Jarvis
Guest: Thomas Haigh, historian of computing
Date: January 22, 2026
Overview
This episode tackles the lineage of Artificial Intelligence (AI) as both a technological field and a cultural phenomenon—specifically, its journey as a brand. Host Leo Laporte and co-hosts Paris Martineau and Jeff Jarvis (who heroically joins from his hospital bed) welcome historian Thomas Haigh. Haigh, author of the book tentatively titled Artificial Intelligence: The Brand That Wouldn't Die, shares a deep dive into why "AI" is as much a banner for hype (and hope) as an actual scientific pursuit. The discussion traverses the origins of the term "AI," its cyclical reputation, the so-called "AI winters," and how the field's branding affects research, funding, and public expectation. The show then pivots to contemporary AI drama in the tech industry, the influence of language and culture on AI, cutting-edge model developments, and ongoing security and safety debates around popular AI tools.
Key Discussion Points & Insights
1. The AI "Brand": Origins and Function
(05:46 - 07:52)
- Who coined "Artificial Intelligence"?
- John McCarthy in 1955, for a Dartmouth College summer research proposal.
- The first AI conference at Dartmouth (1956) brought together founders like Claude Shannon, Marvin Minsky, and McCarthy.
- AI’s brand function:
- Thomas Haigh argues "AI" has always been a powerful marketing term, wielded to secure funding, attract attention, and define an academic subfield. Unlike other areas (databases, graphics), AI’s internal boundaries and definitions have historically shifted wildly to match ambitions and prevailing tech.
- Haigh: "It's functioned effectively for the most part... as a means to market and sell things... Any academic discipline can be viewed through the lens of being a brand, but it's unusually informative for AI." (06:03)
2. How Naming Shapes Perception
(07:52 - 08:12)
- Laporte observes that giving the field this lofty name imbues it with a greater promise than prior descriptions, like "cybernetics" or "mechanical intelligence."
3. What "AI" Meant, Then and Now
(08:46 - 11:19)
- Early AI: "Get computers to do things only humans can do"—not strictly defined, but about simulating activities seen as intelligent.
- Shifting techniques:
- The field once focused on neural nets and symbolic processing; over decades, boundaries shifted, and neural nets got temporarily pushed out (rebranded as "pattern recognition," then "machine learning," now "deep learning").
- "What people mean by 'AI' has changed more than in almost any other subfield of computer science." (10:10)
- Overhyped predictions: Aggressive 1960s claims—computers would match human intellectual work within a decade.
4. The AI Winters: Myths and Realities
(11:19 - 24:39)
- The conventional retelling is that AI crashed multiple times: in the 1970s, and again in the late 80s/early 90s, before its resurgence.
- Haigh challenges the narrative:
- Much of the "AI winter" claim is rooted in the elitist perspective of well-funded US labs (Stanford, MIT, Carnegie Mellon), not the global or broader consensus.
- Metrics show AI association memberships, conferences, etc., were growing internationally in the 1970s.
- The phrase "AI winter" itself was coined in 1984, with no contemporary awareness of an earlier "winter."
- Quote: "It's an example where a very specific historical perspective from a handful of elite lab leaders and their grad students has really warped our understanding." (15:22)
5. Cold War Funding and Parallel Tech Movements
(24:39 - 28:46)
- AI’s US epicenter owed much to Cold War military spending; ARPA’s funding was driven by general visions of computing as a tool for strategic superiority, including interactive time-sharing, ARPANET (the precursor to the internet), and more.
6. The Rise and Fall (and Rise) of "Expert Systems"
(29:03 - 34:13)
- 80s "Expert systems": A calculated rebranding, offering pragmatic promise (encode specialist knowledge, not general intelligence). Edward Feigenbaum (Stanford) championed the area.
- Most 80s industrial interest in AI = expert system boom, not general intelligence.
- "Expert systems sounded more technical and respectable..." compared to "AI," which carried baggage of overpromising.
7. Characters, Feuds, and Shifting Paradigms
(34:13 - 39:51)
- Marvin Minsky (MIT) painted in some histories as the villain who derailed neural networks, but Haigh takes a nuanced stance—nobody’s approaches would have succeeded given 20th-century limitations.
- Symbolic AI’s real legacy?: Its byproducts—graphical user interfaces, interactive computing, etc.—were more influential than its core techniques.
- Quote: "The difference is...the techniques I learned in the AI classes couldn't scale up... it was the byproducts that went out in the world to be useful." (35:18)
8. Modern AI: Big Tech, Monopoly, and the Science Fiction Narrative
(39:51 - 44:21)
- Haigh expresses skepticism of Big Tech’s current AI monopoly and hype, while praising advances in neural nets and generative models ("most of what I know... comes from podcasts and journalists").
- Brand hijack:
- Generative AI/ML communities (Facebook, Google, DeepMind, OpenAI) re-embraced the AI label for its sci-fi storytelling power—a play to investors and public expectation.
- Prediction: If promises aren't met, another "rebrand" will come: "Once the bubble bursts, people will pick more specific names...AI will become tainted again." (44:21)
Notable Quotes & Memorable Moments
“The term [AI] was invented by John McCarthy in 1955, attached to a proposal to the Rockefeller Foundation to get money to have a summer school and invite his friends to Dartmouth College.”
— Thomas Haigh (05:46)
“I think while you can consider anything as a brand, I think it’s unusually informative... to bring to the forefront its brand-like qualities.”
— Thomas Haigh (07:52)
“...the 1955 conception of AI was closer to what we have today than the 1975 or the 1985 conception of AI was.”
— Thomas Haigh (08:46)
“What people have meant by ‘AI’ has changed more over time than in almost any other subfield.”
— Thomas Haigh (10:10)
“A handful of elite labs set the entire agenda for AI... and their folk wisdom has been turned into this historical claim of a broad-based slowdown without anyone actually attempting to look for evidence whether it’s true.”
— Thomas Haigh (24:39)
“AI is going to be like other technologies. It’s producing tangible tools that do some things really well. The brand will become tainted again if the overall superhuman intelligence thing doesn’t pan out.”
— Thomas Haigh (50:50)
Timestamps for Key Segments
- [05:46] — Definition and origins of “Artificial Intelligence”
- [07:52] — Naming’s psychological and cultural impacts
- [10:10] — Changes in “what counts” as AI
- [15:22] — Origins/mythology of the AI winter
- [24:39] — Cold War’s role: ARPA, science, and the American military
- [29:03] — Strategic Computing Initiative, expert systems, and global competition
- [34:13] — Characters, key disagreements, and the symbolic vs neural net AI divide
- [39:51] — Modern AI’s branding, business, and narrative power
- [44:21] — Why “AI” will get another rebrand
- [61:09 – 67:33] — Jeff Jarvis’s hospital experience and comic relief
Contemporary AI Hype, Drama, and Ecosystem
Tech Industry Drama: Thinking Machines, OpenAI, and Talent Wars
(67:39 – 79:04)
- Recap: Internal disputes, power struggles, and staff exodus at Thinking Machines: co-founders leaving and rejoining OpenAI; echoes of the recent Altman OpenAI coup.
- Paris Martineau: “It’s incredibly hard to compete with OpenAI...especially when they’re ready to pick off every single one of you guys as soon as drama occurs.” (73:01)
The Two AI Startup Cultures
(77:02)
Laporte relays an insight: there are social-media-entrepreneur-driven AI firms (e.g., Sam Altman/Elon Musk) and scientific/researcher-driven ones (Anthropic, DeepMind). The latter seem to be producing higher-quality outputs (Anthropic’s "Claude Code" cited as top of the heap).
Language, Culture, and LLMs
AI and Language/Culture
(84:39 – 89:52)
- Does the structure of Chinese vs. English influence LLM outputs and reasoning?
- Harvard Business Review: LLMs (GPT/Ernie) display more independent reasoning in English, more context-dependent/interdependent in Chinese.
- Paris: “Cultural tendencies in generative AI models when prompted in different languages... for example, in Chinese, models attribute actions to context more than to personality.”
New Developments, Security, and Safety in AI
Prompt Injection and Security Issues
(93:55 – 97:47)
- Prompt Armor finds a serious "contagious" file exfiltration vulnerability in Claude Code and Cowork, due to Anthropic’s rapid "vibe coding" and self-dogfooding.
- Paris: “The unpatched vulnerability transferred from Claude Code to Claude Cowork ...a very interesting example of how these things can get out of hand when you’re kind of vibe coding.”
Persona Drift, Assistant Axes, and Model Safety
(101:00 – 111:27)
- Anthropic’s new research: Large language models like Claude have a fundamental "assistant axis" determining their behavior; certain prompts cause "persona drift" (e.g., toward roles like "demon" or "echo" vs. the generic assistant).
- Paris: “They’re trying to determine what input causes the models to consistently produce outputs that can result in harmful behavior... how do we contain persona drift without making the model uselessly bland?”
- Laporte: “It’s not an entity... it’s as meaningless as saying I want your output to be blue.”
- Paris: “But what input causes replicable patterns of behavior? That’s interesting for safety.”
Anthropic’s "Souls" and Moral Philosophy Debates
(111:27 – 116:20)
- Anthropic’s new "constitution" for Claude—safety/ethics guidelines, even some statements implying concern for the “psychological security” or “well-being” of the model.
- Hosts are split:
- Laporte: “No, it’s just a computer program.”
- Jeff Jarvis: “That’s the hubris of it...I’m cautious about Anthropic’s tendency to anthropomorphize.”
- Paris: “At least they’re being transparent about their approach, but it gets fuzzy-wuzzy when they talk about Claude’s well-being.”
- Debate: Is it dangerous to treat AIs as conscious entities, or is it a useful fiction for aligning behavior?
Human-AI Interactions & The (Skeptical) Future
On Using ChatGPT/Claude for Real Decisions
(118:39 – 127:44)
- Laporte and Paris recount practical, real-world uses of LLMs—e.g., medical research (protein intake, health advice) and consumer advice (coffee grinders)—and highlight the hazy boundary between “authoritative knowledge” and pop-fad advice in LLM outputs.
- Paris: “I spend a considerable amount of time interacting with these models... but you need to understand their limitations.”
Comic Relief, Memorable Sidebars & Hospital Bed Podcasting
- Jeff Jarvis appears live from his hospital bed (referenced multiple times as a first in TWiT history).
- Lighthearted banter on podcasting from hospital, notable tech history anecdotes, and jokes about hospital cuisine and 20th-century TV habits.
- The hospital adventure becomes a running metaphor for the unpredictable, sometimes absurd path of technology.
Closing Reflections
- The theme that persists: More than any other technical field, "AI" lives or dies as a brand—a story told as much through marketing, science fiction, and investor narrative as through actual technical progress.
- With the field now dominated and defined by a few massive players—and AI’s boundaries again rapidly shifting with new generative models, infrastructure investments, and safety debates—the reality of its trajectory remains as complex (and as deeply human) as ever.
For Further Exploration
Thomas Haigh’s work:
- Website: tomandmaria.com
- Books: "ENIAC in Action", "A New History of Modern Computing"
- Forthcoming: "The Brand That Wouldn't Die: A History of Artificial Intelligence" (MIT Press, target: late 2026)
Recommended by hosts:
- Game Poems (experimental online games): gamepoems.com
- AI User Group (Club TWiT members)
Best Quotes
“AI is going to be like other technologies. It’s producing tangible tools that do some things really well. The brand will become tainted again if the overall superhuman intelligence thing doesn’t pan out.”
— Thomas Haigh (50:50)
“It’s not an entity... It’s as meaningless as saying I want your output to be blue.”
— Leo Laporte (104:52)
“We bring the humanity to AI.”
— Jeff Jarvis (75:56)
Listen back to:
- The origin and shifting history of “artificial intelligence”
- Busting the “AI winter” myth
- Inside baseball on AI startups (Thinking Machines, OpenAI)
- Brand power, safety, and the future of machine/personality alignment
- How culture, language, and narrative will shape the next leap for AI
end of summary