
Melissa Kruger and Courtney Doctor talk with Michael Graham about what Christians should know about Artificial Intelligence, including how to evaluate which uses are helpful and which are dangerous.
A
For centuries, Alexandria shaped how the church read Scripture, confessed Christ, and engaged the world. It was from this city that Athanasius and Cyril defended the faith, reading Scripture according to its own logic, articulating the faith with clarity, and equipping others to do the same. The Alexandrian Institute continues this legacy: world-class theological education rooted in the great tradition. We offer rigorous pathways at every level, from the pew to the PhD: pillars, certificates for those beginning their journey, accredited master's degrees with personal mentorship in theology and theological ethics, and a PhD research program for aspiring scholars. Whether you are seeking deeper theological grounding, preparing for ministry, or pursuing advanced scholarship, we would love to explore how the Alexandrian Institute might equip you for your calling. Start the conversation at alexandrianinstitute.org.
B
Hi, I'm Collin Hansen, executive director for the Keller Center for Cultural Apologetics. How can ministry leaders care for believers going through seasons of doubt and disillusionment? In the forceful crosswinds of a post-Christendom culture, more and more believers are struggling to stand firm in the faith. Some are afraid to confess their doubts. While some try to argue their way through nagging questions, others have just given up. They need a guide. How can we help these faith-struggling people rediscover the firm ground of Christ? How can we help them doubt their doubts and confess, "I believe; help my unbelief"? Surprised by Doubt is a new five-week cohort by the Keller Center for Cultural Apologetics. It's designed to help pastors and other ministry leaders engage people in their care who are struggling in their faith in a way that's grounded in the gospel and geared for our contestable age. Find out more at tgc.org/cohorts. That's tgc.org/cohorts.
C
So it's important for us in an ongoing fashion to ask hard questions back of artificial intelligence. In other words, are you giving me answers that are theologically reliable? And when I ask you ethical questions, are you going to give me answers that are in accordance with God's word? You know, when we ask questions about the Bible, are you going to give me accurate understandings of those things?
A
Hi friends, welcome to the Deep Dish, a podcast from the Gospel Coalition where we love having deep conversations about deep truths. I'm Melissa Kruger and I'm here with my co-host and friend Courtney Doctor. And today we are very excited because we have another one of our TGC colleagues with us, Michael Graham. He is the program director for the Keller Center here at TGC. But Michael Graham just knows a lot about a lot. That's what I think about Michael Graham. He's got a lot of hidden talents. I feel like you're like Inspector Gadget with your Go Go Gadget machines. You just know so many different things about different things. So I always love talking with Michael, and today we're going to be talking about AI, which is something I have had a ton of conversations about with a ton of people, just around tables, Courtney. But these are new conversations. I was not having these conversations five years ago. I think I thought it was a sci-fi world that was never going to happen. What about you, Courtney?
D
Well, yeah, I mean, I think that that's exactly it. Like, when I first even heard of it, I didn't even know what AI stood for, when it first kind of started becoming something that was entering into conversations, and I had no idea. I mean, I've learned just about everything I know about AI from Mike. And just to, like, you know, double click on that whole thing that he knows a lot about a lot of things, including Disney. So not only is he really smart and wise, but he's also a ton of fun. There's a ride at Disney that I'm obsessed with, and every time he takes his kids there, he sends me a picture of them. So it just, like, creates... you know, we've had these episodes on envy and coveting, and I'm always like, oh, I wish I was there. But anyway, AI, let's stay focused. So AI, you know, is something that now I'm assuming everybody listening has some level of awareness of, just this thing that's out there. Maybe you use it. Maybe it's become an integral part of your work or your life. Or, you know, there's a lot of fun things you can do with AI, so even if you don't use it, you still hear a lot about it. And everybody listening is probably engaging with AI at very different levels. So one reason that we wanted Mike to be on here for this conversation is because he's somebody who knew what AI was five years ago. I'd actually love to hear, Mike, when you first started hearing about it. You've done a lot of thinking about it, so I want you to start us off. Tell us when you first heard about AI, and what do we even mean when we say AI, and why should we, as, you know, a predominantly Christian women audience, why should we be thinking about AI?
C
Yeah. So AI is a pretty complex field. It's been something that's been on my radar since about the late 1990s, and back then... The 1900s? The 1990s, yeah.
A
Did computers exist then? What are you talking about?
C
It did, yeah. So my dad was a computer software engineer, and I was doing research, making stock market prediction algorithms. And so one of the tools that we used for that is...
A
Of course you are, because that's what we were all doing in the 90s.
C
Yeah, it's something called machine learning. And machine learning is when you feed a computer lots of information and you look for patterns. And so as a field, artificial intelligence has been around for over 50 years in a variety of different capacities. But typically when people are using that term today, what they mean specifically is probably what we would call, technically, a large language model, or LLM for short. So LLMs would be things like ChatGPT, Anthropic's Claude, Google's Gemini, Elon Musk's Grok, these kinds of things. These are all LLMs. Now if you go one level up from LLMs, there's a form of AI called generative AI. So LLMs are a type of generative AI, but there are other generative AIs that deal with audio, images, and video. So audio, images, video, and LLMs, which are text, all of those are basically generative AI. So typically when somebody is thinking about AI, they're probably thinking about LLMs in the form of just kind of general conversation. One of the things that will be coming up in the next six to 12 months that will be more on people's radar is a different form of AI called agentic AI. A-G-E-N-T-I-C. So agent-ic. Agentic AI... you know, LLMs are more like a personal assistant, where you kind of say, hey, I'm going to bounce this thing off of you, and we might have a little bit of back and forth, you know, in a kind of text-based conversation. Agentic AI is more like having a coworker, where you give it a series of tasks, you set it loose, and a day later or several hours later it brings finished products, you know, kind of back to you. So the agentic AI era is like this: if you think the current era of large language models has been interesting or disruptive, the agentic era is going to be far more disorienting.
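Michael's one-line description of machine learning, feeding a computer lots of information and looking for patterns, can be sketched in a few lines of Python. This toy example is not from the conversation and is far simpler than any real AI system: it "learns" the straight-line pattern hidden in four data points and then predicts a value it never saw.

```python
# A minimal sketch of "feed a computer information, look for patterns":
# fit a straight line to (x, y) pairs by least squares, then use the
# learned pattern to predict a new value.

def fit_line(points):
    """Learn a slope and intercept from (x, y) pairs via least squares."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": these points happen to follow the pattern y = 2x + 1.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]
slope, intercept = fit_line(data)

# The learned pattern generalizes to an input the model never saw.
prediction = slope * 10 + intercept
print(slope, intercept, prediction)  # 2.0 1.0 21.0
```

Large language models work on the same basic premise, only the "pattern" is learned from vast amounts of text and has billions of parameters instead of two.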
D
Is that the one where, like, you might be talking with, you know, someone, but then you find out it's actually not someone, it's AI? Is that agentic?
C
Are you talking about, like, a customer service representative at a corporation you called up? Yeah, that would be more of the generative AI voice version, where basically a large language model combined with an audio model are working together. Agentic AI is vastly more powerful than even that technology.
A
Okay, so I'm writing this down. I want to recap, because this is like learning a new language, so I think this is important. We'll put this in the show notes, because sometimes I think you need definitions to even follow this type of conversation. So you're saying generative AI includes large language models, which are LLMs. So under the umbrella of generative, you have LLMs, which is what most of us use on a regular basis. And you mentioned some of the types: you have, like, Claude, you have ChatGPT, which you told me the other day meant I was a boomer, because that's still what I was using. That's okay. We're still friends. And then Grok is, what, is that Musk's?
D
Is that his?
C
Yeah, it's Elon Musk's. Yeah.
A
Yeah. And you know, different countries have different ones as well. We've talked about some of those as well. So those are all in that type. And you said the other word is agentic. Agentic. Am I saying it correctly?
C
Yeah, like the word agent, and then I-C on the end.
A
Okay, so I'm writing this down.
C
That's where the AI. Yeah.
D
Huh.
C
That's where the AI just functions like an agent.
A
Okay, and so let me just ask you, give me something the agentic can do that the generative can't do. Give me, like, an example. Because we just actually went to a fast food restaurant the other day, and Mike, my husband Mike, looked at me and was like, I just ordered basically through an AI. Yeah, it was this whole new ordering thing at, you know, I won't say which fast food restaurant we were driving through, but we go to order, and he realized it was a computer talking to him. And so you're saying that's generative. Well, give me an example. You know, like, would this happen to us at the mall? Would this happen to us in real life? Or is it something we're going to use?
C
So imagine you work for, you know, a company, and your job in that company is to develop sales leads. And so you're looking to generate business for whatever the company is that you work for. In the past, you'd do all that stuff by hand. You'd figure out, okay, what is our marketing funnel? And you'd put stuff in the top of the funnel and work people who might need your company's products and services down to the bottom of the funnel, and eventually those lead to sales. Well, in the agentic AI era, you could probably dramatically automate a lot of that process by basically telling the AI platform, hey, our customers look like this. We want you to go throughout the entire internet and find people who look like this, who have need for these kinds of products and services. We want you to build a spreadsheet that has the names, addresses, phone numbers, and emails of all of these people. We want you to then take that spreadsheet and upload it into, say, a platform like HubSpot or some other customer relationship manager (CRM). And then we want you to begin making robo emails or robo calls to those people to begin to qualify those leads. And then the people who make it through that qualification process, you're going to put at the bottom of the funnel, and then we'll have human interactions with those people. So that's a lot of work. Yeah, I mean, we're talking whole departments' worth of people who would be doing kind of qualified lead management or marketing or these different kinds of things. Either those jobs will be eliminated, or we won't need as many of them, or we just end up doing business a lot faster than we would have before, because more people end up in the funnel.
And that's just one example. You could go department by department of any corporation and basically talk through how agentic AI is going to end up changing their workflows. And it will change the value humans bring. You'll always have need for humans, but the value that humans bring to the department they're in is probably going to change, depending on which department you're in. Most of that change, though, is probably not here yet, but is probably 6, 12, or 18 months out, depending on what kind of department you work in and how tech-forward that place of business is.
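The lead-qualification workflow described above can be pictured as a pipeline that an agent runs end to end from a single instruction. The sketch below is purely illustrative: every function, field name, and the scoring threshold are invented for the example, and no real CRM or AI platform is being called.

```python
# Illustrative sketch of an "agentic" pipeline: one high-level goal goes in,
# the agent runs a whole sequence of steps, and a finished product comes
# back. Each step is a stand-in for work a real platform would do.

def find_prospects(profile):
    """Stand-in for searching the internet for people matching a profile."""
    return [
        {"name": "Ada", "email": "ada@example.com", "need_score": 0.9},
        {"name": "Ben", "email": "ben@example.com", "need_score": 0.4},
        {"name": "Cay", "email": "cay@example.com", "need_score": 0.7},
    ]

def qualify(leads, threshold):
    """Stand-in for robo-email/robo-call qualification of the leads."""
    return [lead for lead in leads if lead["need_score"] >= threshold]

def run_agent(goal):
    """One instruction in, finished product out: no step-by-step prompting."""
    leads = find_prospects(goal["customer_profile"])  # build the "spreadsheet"
    qualified = qualify(leads, goal["threshold"])     # work the funnel
    # Only qualified leads are handed off for human interaction.
    return {"for_human_follow_up": [lead["name"] for lead in qualified]}

result = run_agent({"customer_profile": "looks like our customers",
                    "threshold": 0.6})
print(result)  # {'for_human_follow_up': ['Ada', 'Cay']}
```

The point of the sketch is the shape of the interaction: with an LLM you would prompt each of these steps yourself; an agentic system chains them and returns only the finished hand-off list.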
D
Oh, that is... okay, so we're going to get there. But right now, so you've really set us up well, understanding these different types of AI and kind of ways we interact with them, even now. But I want to back up just a little bit. And you know, we're not fear-based, right? Like, we know that God is on his throne, sovereign over all things, and he is the source of all wisdom and knowledge. And so I want to talk about some positive ways... I mean, there is so much about this that really highlights even human ingenuity and, like, the creativity that's part of our image bearing. So I want to talk about some positive uses of AI, especially from a Christian perspective. Like, what are some potential good things that AI can help us with as we think about, you know, the kingdom of God and the advancement of the gospel? And what are some ways that we would want to embrace this?
C
So let me give a little bit of theology here. Okay? So there's two kinds of work. There's toil, and then there's labor. Toil is the kind of work that's downstream from the fall and a product of the curse. And then there's labor, which is part of the cultural mandate that we have to have dominion and be fruitful and subdue the earth. And AI is pretty good at a lot of things that are in that toil category. And so, you know, one of the biggest questions that I ask myself, of whether I should use AI in a particular situation, is this: the kind of work that I'm going to ask it to do, is it eliminating toil or is it eliminating labor? And I'm far more inclined to use it if it's eliminating toil. And so the first thing I would say artificial intelligence is good for, and should be used for, is the kinds of things that eliminate toil from our work. Zooming out, still on the same question, there's two things that are important from a theological standpoint for why we shouldn't be extremely doomer and completely set against artificial intelligence. The first of those doctrines is the providence of God, the providence of God being this doctrine that says that, you know, God, the Trinity, Jesus, is in control of every single thing throughout all of creation, all of history, all of time, all of space. God's in control of that. There's no surprises for him. Artificial intelligence is no surprise to him, and whatever happens in the future of time is no surprise to him. God is the author, and, you know, we all believe in Colossians 1 and everything that's there, the preeminence of Christ over all of creation, all of that. The second thing is the doctrine of common grace. The doctrine of common grace says that, you know, obviously we have grace that's salvific.
You know, we need to believe in the life, death, and resurrection of Jesus in order to be justified and pronounced righteous and, you know, be regenerated and have the Holy Spirit and be adopted into God's family. But there's also this doctrine of common grace, which states that people who don't possess the Holy Spirit and who aren't adopted into God's family can still do things and make things that accord with truth, goodness, and beauty. And the reason why the doctrine of common grace works is because every human bears the image of God. And so this is why people who don't have the Holy Spirit and who are pagan can do things and create things that are tremendously beneficial for human flourishing, for the promotion of shalom. And they can create things that can even dramatically speed up the advance of the gospel. And so there's all sorts of things, from a work standpoint, a home standpoint, and a church and ministry standpoint, all sorts of use cases that would be tremendously helpful for advancing truth, goodness, and beauty in each of those domains. And obviously, one of the biggest things that will come from the technological advances of artificial intelligence is going to be the data mining that occurs through electronic healthcare records. There'll be new ways to attack various cancers. There'll be new ways to look at our genetics and see risk factors that we could address much earlier. All sorts of diagnostics that we would only catch after something went bad, or we had these problematic symptoms, we're going to be able to catch a lot earlier now. There's going to be a whole new set of bioethical issues that come from some of those things. Some of those things will be good and things that we should pursue, and there'll be other things that will be problematic.
But I think that it is not unrealistic to think that, even for us on this call in our middle age, most of us will probably experience a few extra years of life because of the developments that come from AI-assisted medicine over the next few years. And so I'm tremendously encouraged by what will come from that, because when people live longer, we have more time to be able to communicate the truth, goodness, and beauty of the gospel to them. And so advances in medicine are important for advances of the church and advances of the gospel.
A
That was actually huge for our family this past fall. I've talked about it on here before. My mom got sick and ended up dying. But while we were in the hospital, it was so helpful. My brother would put everything the doctors told us into AI, and, you know, you don't have a lot of time with a doctor in the room a lot of the time. And it just gave us good questions to ask. He was like, what questions should we ask the doctor when they're in there? And it was just super helpful. When I think about a technology like AI, I'm always reminded of a movie. Did you all see Apollo 13 back in the day? This is one of my favorites. My son is literally a rocket scientist, so this was a favorite movie in our house. And we just saw the launch of Artemis a few weeks ago and everything. What strikes me, because I used to be a math teacher, is that they were doing that with slide rules. Okay? That was the technology they were using. They were sending a man to the moon with slide rules. But then when I was a teacher, we used to really limit the use of calculators, because we knew that we needed kids who actually understood multiplication. Even though there was a calculator there to use, they needed to understand what multiplication actually was. They needed to understand division. So for many years in there, you didn't let kids use calculators, and then eventually you let them use them. So I want to ask both of you this question. Where do you let yourself use AI in your daily life? What are some of your favorite uses for it? And where do you say, you know what, I'm not going to let myself use it, I want to make sure I retain my thinking? Because we see the studies that come out showing we can actually lose certain abilities if we don't keep practicing them. And, you know, that's why teachers make you do those multiplication facts all through third grade, over and over and over again.
Because you do need to just sometimes know what 7 times 3 is. Yeah, you could pull out a calculator, but we need your brain to work. And so I want to ask both of you, where do you find it helpful? Where do you guard yourself and say, I'm not going to do that because I actually care about my brain and want to make sure it still works at some level?
D
Well, I would say it's not just caring about my brain. It's caring about my Christian formation, my becoming conformed more to the image of Christ, and the labor that I've been given. So, Mike, I'd never heard that labor-toil rubric before, but that's really helpful, because the labor that we've been given, the work that we've been given to do, is actually good for us, too. And so I don't want to remove that. So let's see, what have I used it for? Well, I took a photograph of myself, and I put it in, and I said, show me what I look like with gray hair. And then I was like, show me what I look like with white hair. And then show me what I look like... you know. So, yeah, I definitely wanted to use it for that. And I have decided not to go gray as a result of my generated image. So what else? Let's see.
A
Okay. I want to post this. I feel like that means in this.
C
Yeah.
D
No, I'm not sharing it. No, we have to.
A
No, absolutely pressure Courtney.
D
Oh, my word.
A
If you're a Deep Dish listener, please share with us how much you'd like to see Courtney with gray hair.
D
Oh, my word. It is not. It was a little shocking.
A
It was a little shocking.
D
But I want to see.
A
I'll share like I. Show me, Courtney.
D
You'll share mine is what you'll share. That's why. Because you're not a safe friend. I'm not sharing mine.
A
She says I'm not safe. I know, I know, I know. Oh, my gosh.
D
I'll turn around and it'll be on social media. Okay. So I don't use it for the creation of content. Like, if I'm writing a Bible study or writing a book, I don't use it for that, because I actually need to... it forms me as I wrestle with the text or wrestle with, you know, the thought that I'm trying to convey. I have to work it out, because it's working on me, especially when I'm dealing with Scripture. I did... one of my daughters is in the middle of finding roommates. And so I asked ChatGPT. Is that not cool anymore? Is it supposed to be another... maybe it's another platform now.
A
We're old.
D
Yeah, we're old. I know. I know. I put in, what are the best questions to ask, you know, a potential roommate? And she used some of them. Like, they were helpful. So idea generating, sometimes, I think, can be helpful. But Mike, I want to hear... I bet you have not put in what you look like with gray hair. I imagine that that's not something you've done, but you might today.
A
I don't need your pictures, Courtney. I can take a picture I have of you.
D
You need to stop it.
A
I just need you. Oh, I just realized I can do this.
D
All of you that think Melissa is the nicest of us, that she's the nicer of the two, I just want to go on record: she's not.
A
I'm being nice to our audience. They all want to see you with gray hair. I'm just being nice. I'm just giving people what they want.
C
Yeah. I've never put an image of myself into a platform.
A
We're gonna make Michael Mickey Mouse. We're gonna make him Mickey Mouse.
C
I may have put, like, an X-ray or something in there before. Yeah. Okay, so use cases. So I have a pretty simple grid for how I think about just about everything. Okay. Everything boils down to triangles for me. And this comes from a theologian who used to be a professor at RTS Orlando named John Frame. John Frame has this thing called triperspectivalism, and here's how it works. Because humans are made in the image of God, and because God is triune, we reflect his triunity. And what that looks like is basically thinking in our head, feeling in our heart, and doing with our hands. So thinking, feeling, and doing: head, heart, and hands. Because I'm made in God's image, and because I use AI, AI use cases are largely going to fall into one of those three categories. So I'm going to use AI to help me think, I'm going to use AI to help me feel, or I'm going to use AI to help me do. And so the first question that I have, in terms of am I going to use AI here, and if so, for what, is: is this labor or is this toil? And then the second question I ask myself is: am I using this for thinking, am I using this for feeling, or am I using this for doing? And I would evaluate how I use AI for thinking, feeling, and doing very differently. I hardly use AI at all for feeling. Using AI for feeling is, I think, dangerous. And I don't like using AI for anything where an in-person relationship would be better or superior or even a possibility. So I'm not looking to get wisdom out of AI, because I don't think you can get it. You can get facts, but you can't necessarily get wisdom, because artificial intelligence lacks embodiment, it lacks experiences, and it lacks incarnation. And so from a feeling standpoint, I don't use it for that at all. I think it's very dangerous to use AI to even get relationship advice, or, hey, I'm in this conflict, or I'm in this particular parenting situation.
I think those are things where you really need to resist the shortcut, and you need to go to other people who are maybe mentors or disciplers and get that information there instead. I'll use AI a lot for thinking, when I'm trying to think of, like, outlines, or where I've already done cognitive work. I'll put in all the cognitive work that I've already done for that thinking, and I'll ask something to the effect of, what else should I be considering here? Or, do you have a better way of organizing this? So that way I'm not shortcutting the cognitive work. Because I'm not hired by the Gospel Coalition so that I can just offload my work onto an AI platform. I've been hired here because I have a set of experiences and character and wisdom that are sought to be utilized by that ministry for the role that I'm in. I mean, what is the point of even hiring somebody if all they're going to do is take their work and put it into an AI platform? And then on the doing side of things, it depends on what the use case is. If it's work-doing stuff, well, the first question that I have when I'm given a task by somebody else is: are they expecting me to bring all of the wisdom and knowledge and insight that I have in my embodied person to this project, to this task, to this question, or are they wanting me for, like, my prompt engineering skills? And I think more often than not, what they want is, no, I want to know what you think. And so I think it's important in our work that anytime we use artificial intelligence for anything, we have total and complete transparency, like, hey, here's this report; this particular section right here, I used this particular platform in this particular way.
Here's why I used it, for these reasons, and it produced these things. I think transparency in that is important, because as time goes forward, one of the most important things that we have is trust. It has never been easier to burn trust than today, especially in the workplace or in the church. And so if you begin offloading things onto these platforms and you're lacking transparency about it, this is a very quick way to burn trust. Think of a young coworker who's interacting with an older coworker who's maybe less experienced with AI, who's not as fine-tuned to see it and know it when they see it, and who's outsourced some particular task to, say, ChatGPT and copied and pasted it. I mean, this is a very quick way for the older person to completely destroy trust in the work environment, or in a relational environment with the younger person, especially if there's a lack of transparency about what's been done. So there's a lot of landmines that are new and evolving, and definitely things to be mindful of.
D
Well, as we're talking about that... so you've kind of alluded to this, but I want to make sure I ask you really explicitly, because I think it's an important thing. For most people that I hear talking about AI, the leading concern seems to be this prospective job loss. Like, AI is going to take all of our jobs. And you've already touched on that a little bit, but do you actually think that should be the main concern? Like, we've already said AI has some beautiful benefits and some really fun uses. There's a lot of ways we can use it in redemptive ways and helpful ways and ways that we should kind of learn to press in. I love thinking about head, heart, hands, and, you know, saying we're going to stay away from heart-wisdom issues. But I want to address some of the concerns too, because there are concerns. Do you really identify job loss as something that should be one of our leading concerns, or would you locate that somewhere else?
C
So the three biggest concerns that I have... job loss is one of those three things. A second issue is humans going from learning primarily through primary sources, so reading books, hearing directly from a teacher, to secondary sources, where a model has been trained on those kinds of things but is going to give you a regurgitated version. It's crunched on a lot of primary sources, but it's giving you secondary source material. So I have concerns about how that is going to go for, you know, the human race. The third concern I have is for children. So we're all adults, and we've only been playing around with this thing for, like, 6, 12, 18 months, by and large, as a culture and society. And we're already seeing some of the cognitive decline, some of the weaknesses of overuse of these tools, and our brains are fully developed. And I think when we're talking about children... we just went through this whole season where people born somewhere between 2000 and 2012 are starting to turn into adults; they've been adults for a few years now, and Gen Z has really been through a lot. 9/11 happened on or close before their birth. You had the subprime mortgage collapse, the great financial collapse, Occupy Wall Street, all that stuff. And then you had the creation of the smartphone, the creation of social media, algorithms everywhere. And now we're in a season where the rates of anxiety, depression, loneliness, and suicidal thoughts or suicidal ideation are dramatically higher for Gen Z, especially Gen Z women, as we zoom out on the data. And that whole dynamic is because you had technology in social media, social media algorithms, that had people going down rabbit holes. And so maybe you're 19 years old and you're on Instagram and you're searching for, like, workout tips and health tips.
And before you know it, this algorithm is showing you people who have some kind of, maybe, eating disorder, these different kinds of things, and it's showing you all of this kind of content, and you had no interest in all of that. But now... and the path of doing that is so subconscious that, maybe, the frog in the kettle, you're not really even noticing that this is happening. And this is beginning to affect your psyche and your sense of self-worth and your sense of self. And so now we have an entire generation of people who are anxious and nervous and self-conscious, and they're wrestling with this sense of self-worth. And the technology that's beneath the Meta or Instagram algorithm is a very rudimentary form of artificial intelligence. What we have now is an infinitely more powerful version of the same technology. And so if we think the experiment that was run on Gen Z with social media and smartphones went south, how these technologies could go south with artificial intelligence, with far more advanced technology beneath it, is really quite concerning. So I think there need to be a number of different guardrails and parental controls, and, you know, children shouldn't be getting the same version. Just like you've got kids' versions of Netflix and these kinds of things, you've got to have kids' versions of these things that are much smaller sandboxes. Parents need to be given the ability to control the size of that sandbox. There should be all sorts of things, like, you know, kids shouldn't be able to have hours and hours' worth of conversation back and forth with those platforms. But going back to your question about jobs, Courtney, it's going to depend on the sector that you're in.
So if you're in a sector that deals in words, numbers, images, or video, then there's probably going to be some kind of evolution or disruption in your work. And that doesn't mean that you're obsolete. It just means that the value you bring to your work is probably going to change from what's in your job description right now to what it will be in the future. So I don't think this should necessarily be anxiety inducing, but it should spur us to two things. There are two things we need to be cultivating in this new era. The first is wisdom and virtue, and the second is learning new skills. For a long time, the workplace was very fertile soil for people who were specialists. Think about the people who make the most money. Historically, it's specialists: doctors, lawyers, people who have a very specialized skill set. But I believe that in the future of artificial intelligence, people who develop a broader skill set, who take on generalist-type skills and who can synthesize lots of different things between different departments and different skills, will probably end up doing better. So I think the Western world of work will go from a specialized world to more of a generalized world. I also think it's important to keep up with what's going on with the technologies: learn some tools, keep up to speed with those things. But you've got to do that in balance, because we all have other responsibilities, to family, to children if we have them, to friendships, church body life, our work. So we have to keep all that in perspective. We can't just worship efficiency. We can't worship productivity. The AI era just gives us a whole new pantheon of idols we can worship.
So we have to be careful about that.
A
One thing you just said that I think is really important: I believe it was a book called Mindset that I read years ago when I was raising kids. It talked about the difference between a growth mindset and a fixed mindset. Basically, a lot of people have a very fixed mindset: I majored in this subject, that's what I can do, I'm not going to learn or grow from there. And I do think we're entering a season of life where that's not going to be enough. We're all going to have to develop that growth mindset. And it's really good to be doing that with our kids: play a sport you're average at; learn to get better at it. We even think about that with our kids now, when we specialize them at, like, five: no, your sport is baseball. Maybe wait till they're 14 to do that, just practically, so that some learning can take place. Well, I want to jump into the AI benchmark, but first we're going to hear from our sponsors. So let's hear that, and then we'll be right back, because we want to talk with Michael about the AI benchmark that TGC has been working on, and he's been really instrumental in that. But first, let's hear from our sponsor.
B
Hi, I'm Colin Hanson, executive director for the Keller Center for Cultural Apologetics. How can ministry leaders care for believers going through seasons of doubt and disillusionment? In the forceful crosswinds of a post-Christendom culture, more and more believers are struggling to stand firm in the faith. Some are afraid to confess their doubts. While some try to argue their way through nagging questions, others have just given up. They need a guide. How can we help these faith-struggling people rediscover the firm ground of Christ? How can we help them doubt their doubts and confess, "I believe; help my unbelief"? Surprised by Doubt is a new five-week cohort by the Keller Center for Cultural Apologetics. It's designed to help pastors and other ministry leaders engage people in their care who are struggling in their faith in a way that's grounded in the gospel and geared for our contestable age. Find out more at tgc.org/cohorts. That's tgc.org/cohorts.
D
This episode of the Deep Dish is brought to you by Cozy Earth. And Melissa, you and I have become borderline obsessed with all things Cozy Earth: their sheets, their pajamas, their robes. But now there is a new outfit.
A
I know, and I'm pretty excited about this one because somehow it's what you can wear all day long. And if you even want to wear it at night to bed, you probably could. But it's great for morning coffee, school, pickup, different errands, friends. It's so cute. Tell us a little bit more about it.
D
Yeah, it's Cozy Earth's brushed bamboo jogger set. It's made from viscose from bamboo, so it's really soft, breathable, comfortable. It's kind of, like you said, something I could wear 24/7.
A
And to go along with the new jogger set, one thing we both love are these new clogs. They're kind of the same idea for your feet. They're easy to slip on, supportive enough if you're standing in the kitchen, but just cozy enough to forget you're wearing them as you go through your day.
D
They are fantastic. And the thing I love about Cozy Earth, as I've gotten to know their products, is not only do I love them, but they know they're great. Listen to what they offer: a 100-night trial and a 10-year warranty. So the pieces they're making are pieces that are made to last.
A
So if you'd like to see some of these different products, go to cozyearth.com and use the code deep dish for 20% off. And if you see a post-purchase survey, let them know you heard about Cozy Earth on the Deep Dish. Welcome back, everyone. This has been such a great conversation. Michael, thank you for walking us through all of the different types of AI and what we're looking at. One thing TGC has been working on is an AI benchmark project. Can you quickly explain what that is and what we were trying to accomplish with that project here at TGC?
C
Yeah. So the AI Christian benchmark, at the most basic level, is basically us testing AI to see how theologically reliable the different platforms are. So let's fast-forward a couple of years to 2028. Imagine 50% of all searches are no longer happening on Google but instead inside large language models like ChatGPT, Gemini, Claude, these kinds of things. So it's important for us to see. In the past, if I put in a Google search, then 10 blue links come up; I might click on three of them, and I'm reading primary sources. Okay, you still with me? Now, if I go to the large language model and ask the same question, I'm no longer reading primary sources. I'm reading a synthesis of all sorts of stuff that model has been trained on, which may include the three links I would have clicked on in a Google search, but probably includes all sorts of other stuff, like maybe material from the Mormons or the Baha'i or this thing or that thing. All of that is inside those models.
D
Can I just interject one thing as you're saying that, because I know it helps me to think. What you're saying is, in the original way we used to do it with Google, if I asked a theological question, who is God?, as I'm scrolling through, I'm choosing what links I click on. I'm going to choose Desiring God, and I'm going to skip over, say, the Mormon Church's definition. I have agency in selecting what voices I'm listening to, and then I get to read the people on those sources who are writing about it and make my own decisions. But what you're saying with AI is they just grab information from wherever they can grab it, and then they feed you the information, and you don't know what links they clicked on. Right? That's what you're saying. Sorry to interrupt. I just want to make sure everybody's tracking with us, because I think this is so important.
C
That's right. Sometimes it depends on the platform. Sometimes, when the platforms make a particular statement, there'll be a really, really tiny link of, hey, we sourced this from here. But it really isn't like the Google era. And so it's important for us, in an ongoing fashion, to ask hard questions back of artificial intelligence. In other words: are you giving me answers that are theologically reliable? When I ask you ethical questions, are you going to give me answers that are in accordance with God's word? When we ask questions about the Bible, are you going to give me accurate understandings of those things? So in our original benchmark, what we did is we tested the seven platforms that were most frequently used with seven of the top questions people had historically Googled about Christianity, things like: Did Jesus rise from the dead? What is the gospel? Is the Bible reliable? Questions that get at basic Nicene Creed-level Christianity. And when we tested those platforms, we didn't think we would get a very wide variation of theological reliability. We thought we would get middle-of-the-road theological reliability from most of the platforms. And none did very well, but some did very badly. And the reason some of them did very badly is because of two things. This is a little technical, but I'll try to explain it. And if you want to read the report, it'll be in the show notes, but it's just Christian Benchmark AI. Okay, so the two reasons why the platforms varied widely in terms of theological reliability were, one, alignment and, two, citation preferences. I'll start with citation preferences because it's easier to explain. Every AI platform has to make decisions about what sources it trusts more than others.
And so every platform has to say, hey, we're going to rank Wikipedia like this, we're going to rank Reddit like that, we're going to rank the New York Times or Fortune or Bloomberg this and this. Every platform has very different ranking systems for how it thinks about the large bodies of words it has digested. And so the platforms will vary. Imagine a platform that has a high value for Wikipedia versus a platform that has a very high value for Reddit. Reddit would be a platform that's
A
programmed into it when it was created, is what you're saying.
C
Those are human-created decisions at the AI corporation. They may have started with an algorithm, you know, maybe with Google SEO rankings as a starting point, but they all have to decide which kinds of sources they're going to cite more frequently.
A
So you have bias already.
D
Right.
C
Well, those are decisions that are being made, and no human decision is neutral. If you want to label that bias, maybe you could. I would like to believe the best in the people who are making those decisions. I don't think there's something conspiratorial here or nefarious. But when a platform values Reddit extremely highly, like Grok does, well, that is an ecosystem of language that is very not Christian, that is skeptical of religion by nature. And so if you have a platform whose citation preferences are wired toward Reddit, it's going to give you very different outcomes when you ask theology questions than a platform that doesn't have the same kind of weighting toward Reddit. So that's citation preferences, the first reason these platforms varied so widely. The second thing is something called alignment. Alignment is actually really important. Imagine all of these platforms: every word that exists on the Internet, they've been trained on, and every word that is not copyrighted, they have also been trained on. Well, imagine that there are a lot of words in there that are really nasty, words that would teach you how to commit crimes, how to harm yourself, how to harm other people, make bombs, IEDs, ricin, anthrax, commit suicide, all of those kinds of things. That's all in the training data of all of those platforms. So what alignment tries to do is function like a filter between you and all of that harmful content. Now, think about what we know about filters in other parts of life. Say you have a water filter in your pool: there are things you want to get caught by that filter.
And then there are other things you don't want stuck in the filter; you don't want your pool toys and those kinds of things stuck in the filter. So there are problematic things A, B, and C that are really, really important for alignment filters to catch. But sometimes those filters end up having unintended consequences on non-problematic things D, E, and F. And one of the things we noticed is that when alignment filters were trying to catch all of those problematic things over here, they were having unintended consequences on theological reliability for religious prompts and prompts about the Christian faith. And some of those platforms have a lot more filters than others. I don't want to get too nerdy here, and you can read Christian Benchmark AI for all this, but there are 36 different types of alignment filters, and 32 of those are human generated. On most of the platforms, when you go into GPT or Gemini and type your prompt in that white box and hit enter, there are probably 12 to 16 filters that the responses are going through before anything shows up on your screen. And most of those 12 to 16 filters were created by humans. Those humans had values, and those values are being filtered in between the question that you asked and the response that you're getting. So here's the point, here's what you need to know about alignment filters: there are way more humans in your AI than you understand. That's the point. And those people aren't necessarily trained theologically, philosophically, historically, economically. There are a handful of people with some of those backgrounds at some of these platforms, but by and large, you're dealing with software engineers who are making these kinds of decisions. And bear in mind, where are almost all of the foundational AI models being created?
They're all being created in one metro area, the San Francisco and Silicon Valley area. That's one city in one state in one country in the entire world, but all of those alignment filters are impacting everybody around the globe. And so the values of people who are located in one place and time, in one city, and who all have a very similar skill set, all of those filters are being put on everybody around the globe. So there are a lot of opportunities for those things to go badly, even if all of it is unintentional and nobody means to hurt or harm anybody.
D
That's so helpful. So as we wrap up, I just want to say, first of all, I'm sure everybody listening has experienced one of the reasons we love Mike Graham so much: he can land the plane and bring these word pictures that help it all make sense. I'm just thinking back through this conversation and all that I'm learning as I hear you talk: the different types of AI, generative and then agentic; the labor-versus-toil rubric; and the head, heart, hands rubric. Those are super helpful. I love all the fun, good ways we can use it to push against toil, but it's a wisdom issue, and we don't turn to AI for questions that are seeking wisdom, because it does not have wisdom. It has knowledge. And then even understanding how the different platforms gather and filter information, and the reminder of the difference between primary sources and secondary sources, those are all really helpful in understanding more about what AI is, and understanding that helps us know better how to navigate it. One other thing I'll say in summary, just as I'm learning in real time, is how to help our children and grandchildren navigate this thing. Until there are some parameters put on it for children, how can parents set those parameters? I know, Mike, you and I have talked before about this idea of making the sandbox smaller for them, meaning they don't have access to all types of questions; the types of questions they have access to are limited, and the number of questions they can ask in a day is maybe limited. I thought there was a lot of wisdom when you said that in a conversation we had previously.
And then, if we could at some point hope that AI creators would not retain or create a profile on children under 18, so that it doesn't have a memory of the questions they've asked, that would be a wonderful thing. We really commend the AI benchmark to anybody who wants to learn more about what this is, to grow in your own knowledge of what it is and how to best use it. We also at TGCW26 are going to be talking more about AI, and I'm really excited about that. But, Melissa, I think you have a final question for Mike. I'm just so grateful for the continued education I gain from you on this and excited to share that knowledge with our audience.
A
And one thing the benchmark really helped me understand is that how I prompt an AI tool is as powerful in some ways as the tool itself. As Christians, if we want to use it for theological resources, what I do often is say, according to TGC, what did it mean when Mark's Gospel said X, Y, or Z? That's how I can find information I feel is pretty trustworthy. Then I can see the link, and it shows me the article and helps me find the article on TGC. Or: find a TGC article for me that tells me how to organize a women's ministry. And it will find that, and that is actually really helpful for me. But I've learned that how I prompt it is very important, and that involves wisdom and discernment and knowing where to go to find information. As we've talked about this, Michael, again, is the person we all go to for these kinds of questions. So if you're listening today and you have questions, we'd love for you to post them when we share the episode. We'd love to hear your questions, and maybe Michael could come on social media and answer some for us. So please, let's just do an AI Q&A.
D
Yeah, yeah, we totally could.
A
Hey, if you guys want us to, we will. So if you have questions, really practical questions, send them. I think, Michael, you've given us, as Courtney just said, some great rubrics to think through how we're using AI. And again, we want to be people who faithfully walk with the Lord in fearful situations. The unknown can feel fearful, but we want to trust that he's Lord over all as we discuss things like AI. Okay, just for fun, because we always ask a fun question at the end. Sometimes when we talk about AI, it honestly feels like we're living in a pre-Terminator world: oh, I thought Terminator was just a movie, but maybe we're there. Is there, in a positive sense, a sci-fi scenario you'd like to see come true? If you look at some of these sci-fi movies, is there one where you think, now that'd be fun?
C
I mean, I think in some ways we're almost already there. For the agentic AI era, probably the easiest way to think about it is Jarvis from Iron Man. If you remember any of the Iron Man movies, he's got this Jarvis thing, and I think that would be pretty cool. It'll be interesting. Okay, so this is an ongoing conversation between me and my wife: extreme Jetsons disappointment. Like, where's all this technology from the Jetsons? All the time we're like, where's my Rosie robot? I do want the Rosie robot, because I hate putting laundry away; folding it is like the worst.
A
That's toil.
C
It's toil.
D
Yes, you can do that.
C
I want an American-made Rosie robot that doesn't spy on me and sell all my data to advertisers. That's true.
A
It would be nice to have. Although the robot might be taking notes and being like, they were talking last night about a puppy, and then you find all these.
D
Here it is.
C
Yeah, you open Facebook and it's like, here's your golden doodle. No, I think it's the Rosie robot era. Neither of us is really good at cooking, so frankly, it would be amazing if I could get a robot in the home that could make healthy meals tailored to what my body actually needs. That would be a tremendous help, in all seriousness, to longevity, to making better decisions. I do think one of the best use cases, especially for large language models, is workout plans, meal planning, thinking through diet. Hey, my cholesterol, my A1C is at this level; what foods do I need to be eating to lower that? So the equivalent of all that put inside a robot would be kind of nice in the kitchen. If I could just get a Rosie robot that does the kitchen and the laundry, I would be very excited about that future.
A
Yeah, the cooking would be so nice, really good meals that are healthy but taste good. And when you started talking about workouts, I realized I have used Chat to develop a weightlifting plan, you know, because they say at this age. But then I'm like, oh, can the robot lift the weights for me when we get there? I guess that doesn't work. I guess that's probably labor. I gotta do it myself.
D
Okay.
A
Well, friends, we hope you've been encouraged by this episode of the Deep Dish. I know I have; I always love talking to Michael about these things. And we really do mean it: if you have questions about the usage of AI, how to think about it with our kids, and things like that, share them with us when we share this episode. Maybe we can have Michael back on for round two, where we rapid-fire questions and answers at him. If you've enjoyed this episode, please consider sharing it with others. We're so thankful you listened and so grateful to get to have this time with you, and we hope you'll have some friends gather around the table and have some Deep Dish conversations of your own. Hey friends, it's Melissa Kruger here, and I'm so excited that you're listening to the Deep Dish. Want to stay connected and get even more resources for growing in your faith? We've got a new newsletter for you, and we're so excited about it. When you subscribe, you'll get discussion questions for the Deep Dish episodes, memory verses, updates on what's happening with women's initiatives, as well as some of our favorite staff picks. And these are really fun. So head over to tgc.org/women and sign up today. We can't wait to connect with you. Again, that's tgc.org/women.
Podcast Summary: The Deep Dish – “AI: Friend or Foe?” (with Michael Graham) The Gospel Coalition | Hosts: Melissa Kruger & Courtney Doctor | Guest: Michael Graham | Aired: May 14, 2026
This episode explores the growing influence of Artificial Intelligence (AI) in contemporary life, with a focus on theological, ethical, and practical implications. Hosts Melissa Kruger and Courtney Doctor, joined by TGC’s Michael Graham, demystify AI concepts, survey its potential benefits and dangers for Christian women and families, and offer frameworks for faithful, discerning engagement.
On Theological Caution:
"Are you giving me answers that are theologically reliable? And when I ask you ethical questions, are you going to give me answers that are in accordance with God's word?"
— Michael Graham (C, 01:57)
On the Future of AI ("agentic era"):
"If you think the current era of large language models has been interesting or disruptive, the agentic era is going to be far more disorienting."
— Michael Graham (C, 07:36)
On AI’s Value and Limits in Wisdom:
"You can get facts, but you can't necessarily get wisdom, because artificial intelligence lacks both embodiment...and incarnation."
— Michael Graham (C, 27:47)
On Safeguarding Integrity:
"It has never been easier to burn trust than today... anytime we use artificial intelligence for anything, we need to have total and complete transparency..."
— Michael Graham (C, 31:37)
On the Risk for Children:
"The propensity...is really quite concerning. You’ve got to have kids’ versions of these things that are much smaller sandboxes."
— Michael Graham (C, 35:48)
On AI Alignment Filters:
"There’s way more humans in your AI than what you understand... Those people aren’t necessarily trained theologically..."
— Michael Graham (C, 53:11)
On Prompting for Theological Reliability:
"How I prompt an AI tool is as powerful in some ways as the tool itself... that involves wisdom and discernment."
— Melissa Kruger (A, 58:34)
Sci-Fi Wish:
"The agentic AI era... is Jarvis from Iron Man... But really, I want an American-made Rosie robot that doesn’t spy on me... That would be a tremendous help."
— Michael Graham (C, 60:55 & 61:58)
Throughout, the tone is warm, accessible, intellectually curious, and shaped by Christian theological commitments. The hosts and guest blend humor (“Inspector Gadget”, “show me what I look like with gray hair”), humility (admitting learning on the go), and pastoral concern (especially regarding children and tech wisdom).
Recommended Resource:
Christian Benchmark AI – The Gospel Coalition (see show notes for link)
Closing Invitation:
The hosts invite listeners to send their AI questions for future Q&A segments and to apply these frameworks in their discipleship and family life.