Podcast Summary:
New Books Network – Interview with Sam Illingworth and Rachel Forsyth
Book Discussed:
GenAI in Higher Education: Redefining Teaching and Learning (Bloomsbury, 2026)
Date: March 17, 2026
Host: Ketam Sulongkumar
Guests: Dr. Sam Illingworth (Edinburgh Napier University) & Dr. Rachel Forsyth (Lund University)
Overview
This episode of New Books Network features a conversation with Dr. Sam Illingworth and Dr. Rachel Forsyth about their book GenAI in Higher Education: Redefining Teaching and Learning. The discussion explores the disruption and potential of generative AI in higher education, focusing on critical AI literacy, institutional policies, assessment practices, and fostering trust and agency among students and educators.
The authors’ central thesis is that while AI technology is evolving rapidly, foundational pedagogical principles—not just technical know-how—must guide its integration into education. Their approach rests on four pillars: student-centeredness, trust, relevance, and agency.
Guest Introductions
Dr. Sam Illingworth
- Full Professor at Edinburgh Napier University, UK
- Specialist in critical AI literacy
- Former physicist turned qualitative researcher and poet
"One of the things that I do at the moment is looking at critical AI literacy. So really knowing when to use AI and when not to use AI." (02:38)
Dr. Rachel Forsyth
- Senior Educational Developer at Lund University, Sweden
- Focuses on pedagogical relationships and trust
- Former physicist and educator interested in improving teaching
"My current research is much more about how relationships are built in the classroom. And right now we're looking at how AI might affect those relationships and what teachers and students could be doing to make sure that they maintain trust." (03:20)
Key Discussion Points & Insights
1. Defining Generative AI in Higher Education
(05:03–07:52)
- Pattern Matching, Not True Creation: Dr. Illingworth describes generative AI as a pattern-matching machine, not creating anew but predicting based on vast, frequently biased datasets: "It's generating content...based on patterns that it has observed against its training data set...it's in almost all instances not necessarily creating something new. It's effectively guessing what's going to come next based on the patterns on which it has been trained." (05:03)
- Implications of Digitization & Commercialization: Dr. Forsyth notes that not all knowledge can be digitized, so some outputs are faked or incomplete. She prefers calling these systems “products or services” to highlight their commercialization and conditions of use: “A tool implies something useful...But I try now to say product and/or services. Because...it's been packaged up to be sold to us. And when something is sold to us, then it comes with conditions that we might...need to think about a lot.” (06:40)
2. First Steps for Educators Integrating GenAI
(08:28–11:37)
- Dialogue with Stakeholders: Both authors stress that the primary step is dialogue—with students and colleagues—to co-create meaningful, transparent AI usage in courses: "For me, the first step that people should be doing is having a dialogue with the people that are using it...student centeredness is the core." (08:28)
- Authenticity over Fear: Educators should not fear AI but should explore and test applications themselves, making judgments based on real classroom contexts: "Not to be scared of doing this...you do have to take a step of finding out a bit more about them and ideally testing their use." (10:15)
3. Institutional Policy Missteps and Best Practices
(12:06–15:21)
- Clear Yet Flexible Guidelines: Dr. Illingworth presents three institutional rules from Edinburgh Napier:
  - Don’t upload personal information.
  - Don’t upload confidential/internal documents.
  - Never use AI for decision-making (e.g., auto-grading or hiring).
  “Number three, never use AI for decision making purposes...never use it to automate assessment...if I'm on a hiring panel, never use AI to read through CVs and determine which people are there or they're not.” (12:06)
- Move Beyond Assessment Obsession: Dr. Forsyth argues that assessment dominates AI discussions, but the real focus should be broader educational values and fit in context: “Not to get too obsessed with assessment and examination here. That usually takes up all of the discussion time. And there's a much broader conversation that needs to be had.” (13:55)
4. Generative AI Detectors: Efficacy and Ethics
(15:54–20:11)
- Detectors are Deeply Flawed and Biased: Dr. Illingworth is highly critical: “I can tell you that without swearing. They're complete bs. They don't work. And they don't work for two reasons. They don't work because they're technologically inept, and more importantly than that, they're pedagogically inept as well.” (15:54)
- False positives are common, especially among non-native speakers and marginalized groups (as high as 63% in certain cases).
- Detectors erode trust between student and teacher, promoting suspicion over open inquiry.
- Trust & Context: Dr. Forsyth emphasizes security in assessment but agrees detectors break trust and are ultimately less effective than robust pedagogical strategies: "It takes the trust out of the relationship, which I think...is really not a good thing." (19:42)
5. Rethinking Assessment in the Age of GenAI
(21:39–26:40)
- Assessing Process over Product: Dr. Forsyth proposes shifting the focus to students’ learning processes, not just the produced artifacts: “We will much more assess the process...than the product itself. We have to be very, very clear about what it is that we want students to be able to do and how we can see that they can do it.” (22:09)
- Pedagogical Foundations Still Matter: Both highlight the enduring need for educator expertise and guidance, referencing constructivism and Vygotsky’s “more knowledgeable other”: “There needs to be an element of actual guidance in there...there needs to be a point in which there actually is still instruction.” (23:48)
  "The AI doesn't know what's important next and never will. You can't have an AI teacher or an AI tutor or AI coach. That can't be, because the definition of those words...means that it's a human job." (26:05)
6. AI Literacy for Students: Guidelines and Agency
(26:40–29:48)
- Don’t Outsource Judgment: Educators must teach students to engage critically with AI, preserving the friction necessary for real learning: "Don't outsource judgment. You need friction to be able to learn...it's in the construction of that output that real learning takes place." (27:26)
- Dialogue on Expectations: Teachers should emphasize the value of students’ own thinking, rather than dictating specific tech rules: "The most important thing for teachers to say to their students is I care about what you think and I want your thinking to develop." (28:23)
- Addressing Student Anxiety: Many students fear that even mentioning AI use will be equated with cheating, creating distance between them and their educators.
7. The Future: Rapid Change, Enduring Pedagogies
(29:48–32:40)
- Tech Changes, Pedagogy Endures: The authors acknowledge their book is technologically perishable but stress that its pedagogical underpinnings will remain relevant: "What we tried to think really carefully about, again, keeping coming back to the pedagogy...these are questions that will always be relevant...no matter how much the technology changes." (30:19)
- Bias Persistence: Despite ongoing tech developments, issues like systemic and dataset bias remain deeply entrenched.
Notable Quotes
- On Generative AI Detectors:
  "They're complete bs. They don't work...they're technologically inept, and more importantly...pedagogically inept as well."
  — Dr. Sam Illingworth (15:54)
- On Student Agency and Literacy:
  "Don't outsource judgment. You need friction to be able to learn."
  — Dr. Sam Illingworth (27:26)
- On Assessment:
  "We will much more assess the process...than the product itself."
  — Dr. Rachel Forsyth (22:09)
- On Institutional Policy:
  "Never, and I think this is the most important, never use AI for decision making purposes."
  — Dr. Sam Illingworth (12:06)
- On the Human Element:
  "You can't have an AI teacher or an AI tutor or AI coach. That can't be, because the definition of those words is, to me anyway, means that it's a human job."
  — Dr. Rachel Forsyth (26:05)
Resource Links & Further Engagement
(32:40–35:12)
- Open Online Courses: Authors have developed open, internationally available courses for staff and students (Coursera: Gen AI in Higher Education).
- Open Access Book: Thanks to Lund University, the book is available free worldwide.
- Contact: Both authors welcome contact—emails are in the book; Dr. Illingworth runs the "Slow AI" Substack newsletter for ongoing discussions on critical AI literacy.
Timestamps for Key Sections
- Defining GenAI & Concerns: 05:03–07:52
- First Steps for Educators: 08:28–11:37
- Institutional Policy Practices: 12:06–15:21
- AI Detection Tools: 15:54–20:11
- Rethinking Assessment: 21:39–26:40
- AI Literacy & Guidelines for Students: 26:40–29:48
- The Future of GenAI in Education: 29:48–32:40
- Open Resources & Contact: 32:40–35:12
Overall Tone and Takeaways
The conversation is warm, honest, and unafraid to point out both the pitfalls and opportunities of GenAI in higher education. The authors blend critical scrutiny with optimism, always returning to the value of dialogue, trust, student agency, and educator expertise as more essential than any specific tool. The episode provides practical advice, policy insights, and a framework for educators seeking to navigate the complex, fast-changing landscape of AI in academia.
For those beginning to explore generative AI in teaching and learning, this episode—and the book—offers foundational, learner-centered principles and actionable pathways forward in an uncertain future.
