On 12 December 2022 the Faculty of Religion and Theology of the Vrije Universiteit Amsterdam and the research institute CLUE+ organized a workshop centered around the provocative question: could robots be religious? The timing could not have been more appropriate: about two weeks after the release of ChatGPT, which was being extensively discussed in newspapers and all kinds of other media, and through which the general audience became deeply impressed by the achievements of AI.

The workshop was suitably opened by Hilia, a virtual agent. This was followed by an introduction by Jack Esselink (Studio Pulpit) who, among other things, discussed the Hype Cycle for artificial intelligence. The AI hype even takes on religious overtones. As it was recently put on the podcast Hard Fork: “Everyone who has seen GPT-4 comes back like they have seen the face of God”.

AI as a mirror

The first presentation, by Professor Pim Haselager (Donders Institute, Radboud University Nijmegen) and entitled “From angels to artificial agents? AI as a mirror for human (im)perfections”, discussed AI as a means to know ourselves. There is a long tradition of comparing ourselves with technology (e.g., mechanical machines and the human body) or with other beings in “the great chain of being” (Arthur O. Lovejoy). An example is Thomas Aquinas’ comparison of man with both angels (who possess the knowledge humans are pursuing) and animals (who lack reason).

For a long time, humans thought they were the most intelligent beings there are, and they calibrated the scale of intelligence accordingly, assuming that geniuses such as Albert Einstein marked the top of what is possible. But now AI seems to beat human intelligence in more and more fields, and we can no longer say where the scale of intelligence ends.

However, the performance of AI is also limited and weird. ChatGPT is very good at producing convincing texts, but at some points it completely misses the point. As an example, Haselager cited a conversation with a chatbot in which a user's complaint that he could not play guitar anymore because he had lost his fingers was met with the response “look at your hands, that is usually the place to find fingers”. AI is, as Haselager argued, a weird kind of intelligence without sentience. Intelligence and sentience are orthogonal, that is to say: making progress in intelligence does not imply progress in sentience. This is completely different from humans: every human seems able to distinguish between right and wrong, positive and negative. Robots, by contrast, have no awareness; you cannot punish or hurt them, because awareness is missing.

AI is extremely good at establishing correlations but very weak at causation (e.g., the number of drownings in a certain period and the number of ice creams eaten in that same period are correlated, simply because both are more frequent in summer), whereas human intelligence tends to understand the world in terms of causation. In that sense, current AI is fundamentally different.

Immense human effort is devoted to increasing this weird type of intelligence in AI. One may wonder what the driving force behind this is. If the purpose were to improve humanity or achieve a better world, other things would have priority over our intelligence, such as our capacity for empathy or our weakness of will. To improve humanity, we would need to improve our empathy and strengthen our will rather than increase our intelligence.

Ethics

The weakness of will that Haselager mentioned was also addressed by Dr Lily Frank (Eindhoven University of Technology) in her presentation “Life with pious robots: Exploring the ethical terrain”. There are a number of Behaviour Change Technologies (BCTs), which take insights from persuasive technology and nudge-based technology with the explicit aim of changing people’s behaviour, of overcoming their weakness of will: apps that encourage us to eat healthily, to exercise, or to take care of the environment, but also to improve our mental well-being, to stay committed to the things we consider important (e.g., Forest), or to do good deeds. Other apps aim at improving people’s spiritual well-being in the religious field, be it reminders of prayer or apps related to mindfulness or meditation.

Starting from these BCTs, Frank developed a thought experiment: imagine an Advanced Ambient Persuasive Behaviour Change Technology for religion, powered by AI. What might be the impact of such an imaginary device, and what are the ethical concerns? Such a technology may help people perform their religious practices and overcome their weakness of will, but there are also risks and questions, such as: if we rely on these devices, do we lose a kind of spiritual muscle? Are we outsourcing self-reflection to technology? Will the system at some point know you better than you know yourself?

There are interesting parallels with current Care, Comfort and Love technologies, such as care robots. These technologies raise a number of concerns because they are seen, for example, as crowding out other human contact, or because caregiving trains us in certain kinds of virtues, a training we are deprived of if we let machines do the caregiving. This may all be true, but by their own accounts, people who use these artifacts find their emotional needs met. Similar challenges apply to religious BCTs. Even if “it works” in the sense that people feel good about it, we can return to the old question, discussed by Thomas Aquinas and others, of whether it matters who performs a ritual act. Does it matter who or what provides some kind of religious support?

Person or mask?

The afternoon programme, chaired by František Štěch (Charles University, Prague), started with an online presentation by Jordan Joseph Wales (Hillsdale College, Michigan), entitled “Minders, swordsmen, and sex toys: Medieval theology and the apparently human robot”. Wales discussed the rather antagonistic view of AI in the West as a threatening entity, as represented in, for example, science fiction series, and more specifically Christian anxieties about robots and AI. Those anxieties may be rooted in the Christian understanding of “person”. Originally referring to a role, a character in a play, or the mask that an actor wears, the word “persona” developed in Christian theology into a reference to self-consciously giving oneself in love to another person. The doctrine of the Trinity implied a certain idea of “person” as transcending and prior to matter. The view that a self-giving subject cannot emerge from matter (cf. Hugh of Saint Victor) may direct attitudes towards robots, which are basically mechanisms, fashioned from matter by human creatures. A robot cannot have true personal relationships. It is a persona in the sense of a mask, rather than in the deep sense that the word has acquired in Christianity.

Religious robots?

The convener of the workshop, Marius Dorobantu (Vrije Universiteit Amsterdam), gave the fourth presentation, entitled “Could robots become spiritually intelligent?” Recently, Blake Lemoine, a Google engineer, claimed that the chatbot he was working with (LaMDA) had become sentient. One of Lemoine’s arguments was the content of conversations in which LaMDA declared itself to be spiritual and referred to God.

The question whether robots can become religious can be addressed from a theological and from a naturalistic perspective. From a theological perspective one might ask: what would it take for God to become interested in robots? From a naturalistic perspective one might ask: what would it take for robots to become interested in God? The theological answer depends on one’s stance in theology and anthropology. As Wales showed in his contribution, in a Christian context a negative answer to this question may be rooted in a certain Christian understanding of “person”, or in the conviction that Jesus Christ showed humanity, altruism and love to be far more important than intellect. In other cultures, where this notion of “person” is missing, where other views of human distinctiveness exist (my colleague Yusuf Çelik is working on an article on AI and human distinctiveness in Islam), or where the strong Western distinctions between artificial and natural or between animate and inanimate are blurred (e.g., in Japan), this question may be answered differently. In this context Dorobantu referred to Robert Geraci, who has argued that it is not accidental that computer science in the West has historically been more interested in disembodied AI, while in the East the focus is noticeably more on robotics.

When looking at religion as a natural phenomenon, the question arises of how we should speak of religion outside humanity. Any discussion of religion is based on humans, and the only religious being that we know of is the human being. This renders the question about religious robots as speculative through a naturalistic lens as it is through a theological lens, and places it on the same level as, for example, exotheology. What we can observe, however, is that religion is so intertwined with the human condition and human needs that it is hard to imagine religion detached from human beings. Discussions about religious AI usually center on the propositional aspects of religion, but that is only part of the full picture. Robin Dunbar distinguished two aspects of religion: the more basic, fundamental “shamanic” part, related to experience, and the “doctrinal” part, related to propositions, beliefs and adherence to doctrines. These correspond to two parts of the brain and two interacting cognitive systems, often referred to with terms such as reason (the left hemisphere) and intuition (the right hemisphere) (Philip Barnard).

ChatGPT

Much more could be said about the highly interesting presentations and the panel discussion that followed. In that discussion, ChatGPT was, of course, mentioned several times. Since ChatGPT is trained on human output, it serves as a mirror: if we are indignant because it is full of biases and prejudices (e.g., that all doctors are male), we should not point the finger at ChatGPT, but rather at ourselves. Finally, Wales drew an interesting comparison between the collective human output that has now in some way become accessible through ChatGPT and Jung’s collective unconscious.