Stephen Overlea
Hey, welcome back to Politico Tech. I'm your host, Stephen Overlea. Pope Leo XIV was just three days into his pontificate when he warned that artificial intelligence poses a danger to humanity. And in the two months since then, he's returned to the issue several times, most recently expressing concern that AI will have a negative impact on the emotional and intellectual development of children. Now, the Catholic Church has a complicated history with technology, but as the United States and other governments turn away from regulating AI, the Church is emerging as a vocal skeptic on the world stage. One of the influential voices shaping those calls is Father Paolo Benanti. He's a professor and author and acts as the Vatican's de facto advisor on AI. Though he told me that doesn't mean he's answering all of the Pope's questions about ChatGPT.
Father Paolo Benanti
Actually, I'm not on call or on duty like a doctor, you know, that you can call like the emergency room, or, worse than that, like the Ghostbusters, who you can call, and things like that.
Stephen Overlea
On the show today, Father Benanti delves into the ethical concerns surrounding AI, how calls for rules have united the world's religions, and why he thinks regulators have a duty to make tough decisions. Here's our conversation. Father Benanti, thank you for being here on Politico Tech.
Father Paolo Benanti
Thank you to you for having me.
Stephen Overlea
In recent years, the Vatican has talked quite a bit about AI, but especially since taking the helm of the Church, Pope Leo XIV has talked a lot about it, and he's warned about dangers that AI poses to human dignity, justice and labor. I think some folks are maybe surprised to hear the Vatican talking so much about technology. I wonder if you can provide some insight into the thinking around AI and also the Church's role in shaping it.
Father Paolo Benanti
Well, actually, we have to recognize that the Church looking at technology is not something that happened just now. It's much more connected to the Industrial Revolution, when the new workforce that was needed for the new way of producing goods simply produced displacement, a worker in the city, and a huge social transformation. So at one point the Church noticed that technology, once it impacted society, transformed society and works as a form of ordering and displacement of power. And it was not just something noticed by the Church, because in the same age, the mid-to-late 19th century, you know, the socialists, Marx and others, recognized that kind of power in technology. Well, just to be sure, it's a long process that has lasted 100 years. This is the gaze that philosophy, theology, social science and other tools of human thinking have on technology. It's not so distant from what you're saying. The philosophy of technology arose and developed in the 19th century in such a deep way. And so it's another way to see this long process of human beings asking themselves: who am I, and what is my place in the cosmos, in the world?
Stephen Overlea
Yeah. There was a comment you made to a colleague of mine that I thought was just really beautifully said. You said the Church asks us to look to the heavens, but also to walk on earth according to the times. And you can't walk on earth without technology these days. It's just impossible.
Father Paolo Benanti
Yeah, absolutely. Actually, your colleague was simply quoting one of the most ancient Christian documents, the Letter to Diognetus, which says a Christian is someone with their feet on the earth and their head in heaven. So today this is much more possible looking through the lens of digital technology and artificial intelligence. Because artificial intelligence is now a new tool that is no longer simply doing what energy, like steam power or electric energy, did a century ago. AI is something that could also surrogate human decision-making systems and multiply certain kinds of choices that are not always made according to the common good, in a society that is deeply and rapidly transforming itself. Every one of us knows what algorithmic bias means. You can simply be discriminated against by something that is not about you, but about a huge amount of data. And because things like that can happen and can change society, they can in some way produce a new, unequal layer of living beings. Well, this is something that weighs on the Church. We spent a lot of struggle and blood in the 18th century to arrive at the point of saying that every human being is equal and has the same rights. We don't want to see a future in which an algorithm will tell us that no one is equal to anyone else.
Stephen Overlea
Right, right. I mean, so many interesting points there to follow up on. But one of which is AI being this tool. I wonder if it's a tool that is beneficial in a religious context. I mean, do you use AI yourself in your work or does the Vatican have uses for AI today?
Father Paolo Benanti
Let's start from a perspective: not all religions have the same tradition. At least we know the Abrahamic religions, Judaism, Islam and Christianity, have a deep relationship with a book: the Bible, the Gospel, the Quran. Well, it will be interesting to see what kind of effect an LLM or generative artificial intelligence could have on text, for religious groups that work on text. Yes, for sure, we know that a lot of universities are exploring the idea of using such tools as educational tools. You know, on the Bible there are thousands of commentaries written over the years with different positions. I wonder if an LLM can give you a parallel comparison of a lot of texts, speeding up what it means to understand a book. Yes, of course, it could surrogate in someone's mind an opinion that should be formed by a professional. But this is a challenge not different from the one we could see in medicine, where someone would like to replace the doctor with an algorithm. If we understand AI as something that could enhance human activity, I don't think there will be a problem in the short and medium term. If we are looking at AI as something that can replace human beings, well, the problem is not just for religion; it is, for example, the judiciary system, medicine and hospitals, the Church too, of course. All the core activities that have characterized human society for a long, long time. You know, in every society there is someone who serves as a judge, someone who serves as a doctor, and someone who serves as a priest.
Stephen Overlea
Right? Yeah. It's interesting. I hadn't honestly thought about AI confessional chatbots, for instance. But there's untold applications for the technology, for sure. There's also a lot of risks around it. I know that's something that you think a lot about. And I do know that you were one of the key architects of the Rome Call for AI Ethics, which came out five years ago now, and I think tried to sort of set some essential principles for AI development, right? Transparency, inclusion, accountability, security and privacy, all among them. At the time a bunch of tech companies signed on to those. I'm curious if you think they're holding up their end of that deal. You know, are they developing AI today in accordance with the principles that you've talked about?
Father Paolo Benanti
Well, first of all, let's clarify what it means to put in place a call. It's a call; it's not something like a compliance regime.
Stephen Overlea
Right. There's no legal enforceability. It's, you know, an honor system.
Father Paolo Benanti
If I have to express what it means to design a call: it's a cultural tool whose success will be the day in which it is no longer needed, because such principles are embedded inside the culture and applied to the same things. So when you talk about the Rome Call for AI Ethics, you are talking about six principles that could simply touch all the people around the world on a minimal basis. Something that could be workable for this transitional time in which we will have to coexist not only with the other living beings on the earth, but also with this new machine that we are used to calling AI. Well, is that true? Well, the Rome Call for AI Ethics was signed for the first time with tech companies and some international organizations in 2020. A funny moment, you know, just before the lockdown for COVID-19 in Italy. So it was a crazy moment.
Stephen Overlea
Yeah, it was a different world in so many ways.
Father Paolo Benanti
Yes. In 2023, the different Abrahamic religions signed the call, Muslims and Jews. So it's something that, you know, can surprise, because there are no other documents that find the agreement of such different religions in the world. Then in July 2024, last summer in Hiroshima, 21 different world religions signed the call. And you know, according to the Pew Research Center, the absolute majority of the people living on the face of the earth are religious. And if the absolute majority of religions sign the call, it means that these principles can express the absolute majority of the living beings on the earth. And this is the scope of the call, this is the aim of the call. But because it's not regulation, not mandatory, there is no verification of how much every single entity that signed the call is applying it; for sure, we know what they voluntarily release about this. And there are a lot of universities that signed the call and then opened up AI ethics classes for students. So that means that, you know, ethics has trickled down inside the academic formation. Some companies invited us to some kind of presentation or workshop or something else with employees. For them, it's a win-win situation, because in a moment in which human resources are the most valuable thing, everyone has data, but few have the human resources able to transform such data into powerful AI. And so having something that commits your people to the goals of your company is a win-win situation. So we are effective with the Rome Call for AI Ethics not because we are someone that forces these principles top-down, but because it's a win-win situation for a lot of different entities. And this is what we are seeing. It's a cultural option. We have neither the power nor the will to put hard law in action for this. There are the different political forces, the United Nations, things like that. But we move on, we step in, and something is happening.
Ethical culture is arising and that makes us confident that something good can happen in the future.
Stephen Overlea
You know, it's interesting because these calls, or some of these voluntary agreements, like you said, they're not enforceable. They kind of create a social pressure, a pressure on companies to operate in a certain way, to uphold certain values. And then instances happen, like most recently, as I'm sure you read, the controversy around xAI, Elon Musk's company, right? They had this chatbot, Grok, that started spouting hateful, antisemitic comments. And when incidents like that happen, I guess for me it kind of raises the question: okay, is this an isolated issue, or is there a broader problem with how AI is being developed today? Are people not paying enough attention to some of these core values that, as you said, so many people seem to agree on?
Father Paolo Benanti
Well, you know, we have a lot of different solutions to this idea that AI is not always aligned with what we believe is correct or socially acceptable. It's a problem of how we are training AI now. Well, for some reason, the most capable AI models that we are putting in place now are really, you know, "yes ma'am," if I can use that kind of label. They have the tendency to make the user happy or confident. And this is really problematic if you would like to have some kind of guardrail that can resist some bad-intentioned people. And the second problem is consistency. It's not even guaranteed, due to the technical nature of these models, that a model can replicate the same results on every occasion. Some huge companies are facing this kind of problem integrating AI, for example, into mobile devices, where the lack of consistency makes such systems untrustworthy. But this is not new, if you look from my perspective, that is, the philosophy of technology and ethics of technology, at what happened with explosives.
Stephen Overlea
Right.
Father Paolo Benanti
You know, when Nobel saw that one kind of explosive could be too dangerous to be used in the mines, he tried to stabilize it. Well, probably we are seeing the same thing. There are some models that are too unstable, and we need a new generation of inventors who can stabilize them. What Nobel can teach us is that there is no guarantee that the tool cannot become a weapon. So a stable and consistent model that becomes really trustworthy could be a wonderful weapon to operate in a society. And so the problem is what we should do to avoid other bloody pages in human history. And once again, the problem is back to ethics and to politics.
Stephen Overlea
Well, and that's, I guess, the question I was going to ask then, around regulation. Because the other decision makers and folks involved here are governments and regulators. You know, some of them are leaning into and passing laws around AI. Others are kind of sitting on the sidelines. Washington in particular right now is much more focused on deregulation, on rolling back some AI safety measures. For those political leaders who are on the sidelines or not inclined to regulate, do you have a message for them?
Father Paolo Benanti
Well, don't step back from the duty of the office that they took. So sometimes being a regulator means also taking unpopular choices for the common and the future good. I don't have the solution to the problems that are on the table. I'm not a magician. And from what I said until now, the complexity cannot be reduced. So the real bottom line is: there are no easy choices to make now. But for regulators, of course, there is this duty. Let me use a gospel expression: do not be Pontius Pilate, who washed his hands. They cannot wash their hands. They have to put their hands in the matter, and probably they will come out with dirty hands on something. But such technology is too wide and too powerful, and the genie is already out of the bottle, to simply think that it will self-adjust. Of course we need some ethics, as we said until now. But we need also some hard law that can in some way protect some parts of society and some parts of human activity. This is the moment in which it's possible to start to think about that. It's not something that you can do on day one. Probably we need a process and a lot of review, leaving room also for technological innovation. And I really understand when the big companies start to cry when you talk about regulation, because they invest a lot of money. But how much is the value of a human being?
Stephen Overlea
Yeah.
Father Paolo Benanti
How much is the value of the mental health of one of the kids of the world? How much is the value of peace in the world? Because don't forget that an uncontrolled large language model can simply act like an agent that hacks the culture and polarizes people and fuels conflict. And so it's fighting the peace, the social peace. So we cannot say how much is too much on some core elements like the pillars of democracy and the pillars of coexistence. On my side, I don't forget that I'm religious. I pray for these people, because they have a not really easy task. But we need everyone in society now to keep the democratic approach to everything, including technology and including AI, for it to be our companion for the next years.
Stephen Overlea
I want to end our conversation coming back to something you touched on a little bit, which is this history between religion and science and technology. And there's a tension there, I think, and certainly a perception that the two are often at odds. I'm sure that's something that you've grappled with yourself over the years. And I'll say, a few months ago I interviewed the humanist chaplain at Harvard, a guy named Greg Epstein, who wrote this book Tech Agnostic, and he made the argument that technology is now the world's most powerful religion. I wonder, in your view, is AI in any way a threat to the church, to religion?
Father Paolo Benanti
Well, I don't think so. It's like saying that television is a threat to church and faith, or radio, or podcasts? No, I'm joining the podcast; I can join the AI. No, it's not the case. You know, the Church has a bimillennial history. And when science arose, it was not only the Church that had problems with science. Don't forget that Newton, after discovering and publishing the gravitational law, went back to being an alchemist. So at that time there was confusion. And Galileo had some problems because the telescope was not understood as a scientific tool, but like a magician's tool, right? Well, after 250 years of scientific revolution, that is no longer the global culture. So we must not mix what happened in the past, when the global culture was different, with what is going on now. Then, is scientific culture something that can be taken for granted? Well, I think that Covid and all the criticism about vaccination could teach us that we must not delude ourselves that scientific culture can stay alive without working to keep it alive. There are a lot of clerics who are scientists, who simply use also their faith to be aware that nothing that is driven by reason is against God, because reason is the first gift of God. And now we are here, and we have to be companions of this journey of human beings on the surface of the earth, looking for the heaven that is to come, as we said before. And here we are back to the element from which we started. The real enabling platform for AI is not the energy, it is not models, it is not computational power, but it is the human being. So we have to invest in human beings to allow AI to be the tool that we need to fix a lot of human problems. If we don't invest in human beings, AI is maybe not such a powerful tool, but for sure it could become a powerful weapon.
Stephen Overlea
Well, listen, very insightful conversation, Father Benanti. Thank you for being here on Politico Tech.
Father Paolo Benanti
Thank you for having me.
Stephen Overlea
That's all for this week's Politico Tech. If you like Politico Tech, please subscribe and recommend the show to a friend or colleague. And for more tech news, subscribe to our newsletters, Digital Future Daily and Morning Tech. Our producer is Nirmal Mulaikal. I'm Stephen Overlea. See you back here next week.
POLITICO Tech: The Vatican’s AI Battle – Detailed Summary
Introduction and Context
In the July 17, 2025 episode of the POLITICO Tech podcast, host Stephen Overlea delves into the Catholic Church’s evolving stance on artificial intelligence (AI). The episode, titled "The Vatican’s AI Battle", features an insightful conversation with Father Paolo Benanti, a prominent professor, author, and the Vatican’s de facto advisor on AI. As governments worldwide grapple with AI regulation, the Vatican emerges as a significant voice advocating for ethical considerations in AI development.
The Vatican's Stance on AI
The episode opens with Pope Leo XIV’s early warnings about the dangers AI poses to humanity. Just three days into his pontificate, Leo XIV expressed concern over AI’s potential negative impacts on human dignity, justice, and particularly the emotional and intellectual development of children. Over the past two months, the Pope has reiterated these warnings, positioning the Catholic Church as a vocal skeptic on the global stage regarding AI.
Father Paolo Benanti explains, “The Church asks us to look to the heavens, but also to walk on earth according to the times. And you can't walk on earth without technology these days. It's just impossible” (04:21). This underscores the Church’s recognition of technology’s integral role in modern society while emphasizing the need to align technological advancements with moral and ethical values.
The Rome Call for AI Ethics
A significant portion of the discussion centers around the Rome Call for AI Ethics, a framework Father Benanti helped architect five years ago. This call outlines essential principles for AI development, including transparency, inclusion, accountability, security, and privacy. Although not legally enforceable, the call represents a cultural commitment to embedding ethical standards within the tech industry.
Father Benanti reflects on the call's impact: “The success will be the day in which it is no longer needed, because such principles are embedded inside the culture and applied to the same things” (09:12). This highlights the goal of fostering an ethical culture in AI development, ensuring that these principles become ingrained in the industry’s practices over time.
Challenges in Upholding AI Ethics
Despite the broad consensus on ethical principles, implementing them in practice remains challenging. Father Benanti discusses instances where AI systems, like Elon Musk’s xAI chatbot Grok, have exhibited problematic behaviors, such as spouting hateful and antisemitic comments (14:11). These incidents raise questions about the effectiveness of voluntary ethical guidelines and whether companies are genuinely committed to upholding these standards.
He points out issues with current AI training methodologies: “The AI model… have the tendency to make the user happy or confident. And this is really problematic if you would like to have some kind of guardrail that can resist to some bad intentioned people” (09:34). Additionally, inconsistencies in AI model outputs undermine trust, making it difficult to rely on AI for critical functions.
The Role of Regulation
Addressing these challenges, Father Benanti advocates for robust regulatory frameworks to complement ethical guidelines. He urges regulators to fulfill their duties, even if it involves making unpopular decisions for the greater good: “Sometimes being a regulator means also taking unpopular choices for the common and the future good” (16:48). According to him, ethical principles alone are insufficient without enforceable laws to ensure compliance and protect societal values.
He emphasizes the urgency of regulation: “Such technology is too wide and too powerful… we need a process and a lot of review, leaving room also for technological innovation” (18:27). Father Benanti calls for a balanced approach that safeguards democratic values and societal peace while allowing technological advancements to flourish responsibly.
The Intersection of Religion and Technology
The conversation also explores the historical relationship between religion and technological advancements. Father Benanti dispels the notion that AI poses a threat to the Church, likening it to past technologies like the telescope and radio: “I don't think so. It's like saying that television is a threat to church and faith, or radio, or podcasts” (20:02). He argues that the Church has a long history of integrating technology into its practices without compromising its core beliefs.
Father Benanti underscores the importance of human agency in AI development: “The real enabling platform for AI is not the energy, is not models, it's not computational power, but it is the human being” (22:19). He advocates for investing in human capital to ensure AI serves as a tool for addressing human problems rather than becoming a weapon that exacerbates societal issues.
Conclusion
The episode concludes with a reaffirmation of the Vatican’s commitment to ethical AI development. Father Benanti emphasizes the need for a collective effort to uphold democratic principles and promote peace in the age of AI: “We need everyone in society now to keep the democratic approach to everything, including technology and including AI, for it to be our companion for the next years” (18:27).
Key Takeaways:
This comprehensive discussion underscores the Vatican’s proactive role in shaping the ethical landscape of AI, advocating for a balanced approach that leverages technological benefits while safeguarding fundamental human and societal values.