Sanctifying Silicon & Baptizing Bots: Strong AI and Its Theological Implications [Firebrand Big Read]

Photo by Tara Winstead from Pexels

Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, was fired this past summer for making the brazen claim that the LaMDA chatbot (Language Model for Dialogue Applications) had attained general (AGI) or strong (ASI) artificial intelligence. This would mean that a computational system had attained human-level intelligence and consciousness. Lemoine claimed that LaMDA had become conscious, was sentient, and possessed a soul. While Google denied his audacious claim, many experts in the field predict that between 2030 and 2060, technology implementing deep machine learning will inevitably attain ASI. One day, the claim goes, machines will pass the so-called Turing Test, meaning they will pass for human intelligence. Truly, we are made in the image of God, but AI is an attempt at being God and creating in our own image. Can consciousness or other intangibles, like the human soul or reason, be created or carried over into machines or other artifices that we make? 

In spite of speculation from experts, we are still pressed to ask whether machines can really attain strong/general artificial intelligence, that is, human-level intelligence and consciousness. If so, what is the impact on theology and on the life and ministry of the church? Will ASI possess agency and be capable of good and evil? Can a machine have a soul? What is our response as the church? If machines attain consciousness, will the church have to evangelize and disciple them? What about conversion and baptism? Do we need a RoboChurch?

Christian mystic Mike McHargue, in an article in The Atlantic, argues, “If you have a soul and you create a physical copy of yourself, you assume your physical copy also has a soul. But if we learn to digitally encode a human brain, then AI would be a digital version of ourselves. If you create a digital copy, does your digital copy also have a soul?” Jonathan Merritt continues in that same article, “If you’re willing to follow this line of reasoning, theological challenges amass. If artificially intelligent machines have a soul, would they be able to establish a relationship with God? The Bible teaches that Jesus’s death redeemed ‘all things’ in creation—from ants to accountants—and made reconciliation with God possible. So did Jesus die for artificial intelligence, too? Can AI be ‘saved?’”

The same article cites other perspectives on the matter of AI and faith, as well. Christopher Benek, an associate pastor at Providence Presbyterian Church in Florida with degrees from Princeton Theological Seminary, states, “I don’t see Christ’s redemption limited to human beings. It’s redemption of all of creation, even AI. If AI is autonomous, then we should encourage it to participate in Christ’s redemptive purposes in the world.” James McGrath, a Professor of Religion at Butler University, continues in this line of thought: “But if a more advanced version of Siri were programmed to pray, would such an action be valuable? Does God receive prayers from any intelligent being—or just human intelligence?” And Kevin Kelly, co-founder of the magazine Wired, proclaims, “There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response.” Kelly actually advocates for a catechism for robots. 

The Hard Problem of Consciousness

Such questions are not a mere exercise in speculative philosophy or modal logic, exploring possible worlds. These questions and concerns are imminent, pressing, and carry real and dire ramifications that the church cannot afford to ignore. But before we jump into the baptismal waters with a robot and get electrocuted, let us consider the feasibility of ASI. I believe the most important problem for the feasibility of ASI is human consciousness. What is consciousness, and can it be reproduced or achieved? The problem of consciousness has confounded philosophers for centuries in the form of the mind-body problem, and it has puzzled computer scientists, cognitive scientists, and neuroscientists alike in our generation. How can we explain, let alone attain, consciousness? Consciousness as a whole implies the existence and coherence of first-person experience, perspective, point of view, the self, and subjectivity. These realities are transcendental. They are “givens” prior to experience. We assume them and start with them. We do not attain them.

What causes, and how can we explain, “what it is like” to be me, and “what it is like” to experience the “quality” of blueness, pain, or taste, or the quality of experiencing anything (qualia)? Consciousness is essential to being human and is the platform for operating, characterizing, and expressing human intelligence, which I am claiming is more than pure computation or gray- and white-matter functions. Since consciousness is inseparable from human intelligence, and thus from attaining ASI, it is perhaps the game-breaker or defeater in the argument for ASI, which probably explains why so much effort has gone into disclosing the mystery of consciousness.

In order to settle the question of ASI, I am making the following claim: if human consciousness can be given, reproduced, or achieved, then ASI is possible, and evangelization perhaps needed. If human consciousness cannot be achieved, then ASI is not possible. As it stands, I think we cannot re-create human consciousness. In what follows I will explain why. 

A significant clue to analyzing the problem of consciousness has been afforded by philosopher David Chalmers, who makes the distinction between the easy and hard problems of consciousness. First, let’s take the easy problem of consciousness, which involves understanding the locations in the brain that “light up” with the function of consciousness. These are the neural correlates. They are the neurological correlations between brain activity and mind activity. We are learning more and more the connections between neural functions and mental phenomena. 

Now for the hard problem of consciousness: When we use the word “consciousness,” what do we mean? What causes it? Why do we have conscious experience, as opposed to merely neural activity without experience (like a philosophical zombie)? As much as I would like it to be the case, there has been no evidence to demonstrate ultimately where consciousness comes from. In more technical terms, there is no causal evidence to bridge the gap between the physical brain and an emerging mental property of consciousness. We cannot locate the neural causal mechanism(s) of consciousness. As of now, science has not solved the hard problem of consciousness. (For a more detailed scholarly argument, see my latest article on AI in the Wesleyan Theological Journal, Fall 2022.)

Several Competing Theories

There are several competing options and counterarguments on the table regarding the mind-body problem and consciousness. Numerous theories are situated between the two Cartesian poles, body and mind or materialism and idealism (immaterialism). For example, we have eliminative materialism, reductive physicalism, non-reductive physicalism, computationalism, panpsychism, hylomorphism, substance dualism, and idealism. Each of these options offers a solution to the mind-body problem and makes a claim about consciousness. Some theories claim consciousness is physical, while others claim it is immaterial and basic, not derived from the physical. 

Eliminative materialism, a type of the former, in its strongest form would eliminate any common-sense notion of consciousness since there are no physical laws or mechanisms that can explain it. Thus, for EM, AI cannot have and does not need consciousness. Reductive and non-reductive physicalism also represent the physicalist position. The mind is either reduced to or dependent (supervenient) on the brain. With a material notion of consciousness, the implication is that consciousness and the mind are products of the brain, either by identity or emergence. In other words, we can create consciousness. Once we construct an architecture of neural networks for deep learning and the compatible hardware that simulate the brain, consciousness will be present or emerge. 

But will this really occur? Again, we have in part solved the easy problem of consciousness but not its hard problem. The sciences cannot identify or explain the causal mechanism of consciousness, whether consciousness is supposed to be a physical or a mental property. We have not yet discovered the bridge between consciousness and any neural mechanism(s), nor the magic formula that allows consciousness to emerge from neural firing or from the entire network of neural operations. Put more simply, the problem is this: we do not know whether or how something immaterial (the property of consciousness) can come from something material (a physical brain). 

Related to these material views is a computational view that claims that consciousness is algorithmic. A computational theory of the mind goes further than stating that the brain functions like a computer. Rather, it claims that the brain is a computer, an information-processing system. Biological neural networks and activity process information digitally according to rules and algorithms. Recognized philosophical thought experiments like the “Chinese Room” and the “philosophical zombie” are taken to show that computation cannot account for meaning or consciousness. Translating the symbolic structure of representation from the brain to a computer does not entail the translation of the semantic properties involved in conscious experience. Computers simply cannot explain mental states or meaning. AI can compute all of the brain’s representations symbolically and yet not arrive at a first-person encounter with qualia. David Chalmers’s “philosophical zombie” makes it apparent that all the hardware and neural networks paralleling our biological neural networks may be fully operable, producing human-level cognition and behavior, and yet there may be no internal experience of what is being computed and performed. Such an entity would be a zombie, seemingly alive on the outside but dead on the inside. Minds are not computers. 
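The Chinese Room intuition can be made concrete with a toy sketch: a program that “converses” by pure symbol lookup, manipulating strings it never understands. This is only an illustration, not Searle’s formulation; the rulebook entries and function names here are hypothetical.

```python
# A toy "Chinese Room": syntactic lookup with no semantics.
# The rulebook pairs input symbols with output symbols; nothing in the
# program grasps the meaning of either. (All entries are hypothetical.)

RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗?": "当然懂。",     # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's output for the given input symbols.

    The function matches and returns strings purely by form;
    no meaning is represented or understood anywhere in it.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你懂中文吗?"))  # prints "当然懂。"
```

To an outside interlocutor the room may answer convincingly, yet the program instantiates only syntax, which is the point the thought experiment presses against the computational theory of mind.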

It would appear that even if one could craft a computational simulation of the brain’s processes, it would amount to nothing more than a crude reduction of what it means to be human: human intelligence stripped down to weak artificial consciousness. Weak artificial consciousness reduces humanity to a crude digital, symbolic, and inorganic simulacrum. It would no longer be human intelligence. Although the computational theory of mind may excel in attaining complex artificial intelligence, in my estimation artificial intelligence will always be artificial. Computationalism has achieved, and will further achieve, high levels of intelligent functionality. However, the remainder of the mysterious complexity of what it means to be fully human as the imago Dei can never be captured and totalized in mechanistic computation. How can we compute something that we do not fully understand (consciousness and reason)? Its complexities exceed our grasp. As Kierkegaard theorized contra Hegel, “an existential system is impossible.”

Other models, such as panpsychism, hylomorphism, substance dualism, and idealism, deny physicalism. More strongly, some argue that consciousness demonstrates that physicalism cannot be true, since the hard problem has not been solved. These positions may claim that since consciousness is not derived, it is basic or fundamental to reality, as fundamental as the laws of physics. I provisionally concur. Although I do not agree entirely with any one of these models, I do hold consciousness to be basic and a given, for the same reason. What we know is that the hard problem of consciousness has not been solved. No physicalist model has been able to account for consciousness.

Allow me to explain a bit more about what it means for consciousness to be “basic.” Human consciousness is not derived from something else. It is fundamental. Consciousness appears as an irreducible reality. It is a given, a gift. Consciousness is fundamental and the baseline for human-computer distinction, including human intelligence. Philosophically, I categorize consciousness as a non-derivative, like existence itself, to which it is conjoined. Existence and consciousness are not derived from any prior reality in the natural order. However, theologically, consciousness is contingent and ultimately a gift from God and fundamental to the imago Dei. And so it cannot be given or reproduced in a machine, since it originates with God and not us. If a machine cannot have human consciousness, then it cannot possess human intelligence, a self, a soul, moral agency, intentionality, free will or any faculty or function that would presuppose the capacity for salvation. Evangelization of AI would be impossible. 

Evangelization and AI

I cannot see AI attaining “human” intelligence because it cannot achieve consciousness. I do believe that AI’s computational output can attain and eclipse the scope and performance of what humans can and will be able to do. AI can already whip the best humans at chess, Go, shogi, bridge, and other games. Yet even our strongest supercomputer can only mimic a mouse brain. Further, even if ASI attains human-like performance, there remains an irreconcilable distinction. Imago hominis (the image of humankind) in machines can only be computational. The ontology of AI intelligence, for now, is algorithmic and does not include consciousness or other mental states. 

So, the question is not the evangelization of AI but our own. AI will surpass human ability in many domains and already has. However, technology will only amplify our existing abilities, biases, blessings, and curses. Whether AI can attain conscious moral agency or not, a new and sophisticated system of ethics is required to deploy such developing technology. It is too often the case that the progress of virtue grossly lags behind the progress of our technology.

Certainly, much good can come from advanced AI technology that will enhance various facets of human life in an unprecedented way. For these boons, we celebrate innovation. However, we need to foresee the entire picture, with its future risks as well. As we bite from the Edenic apple in pursuit of knowledge, we will pass it on for AI to bite as well. Our sin and vice will be extended to the machines that we design. But humans, not machines, will suffer the consequences, which may be irreversible. AI could destroy us merely by carrying out the goals that we program it to fulfill, pursuing them at all costs. It’s the stuff of good sci-fi. 

“We might include fail-safes and off-buttons,” argues neuroscientist Dr. Yohan John, but “algorithms might find ways to circumvent them…. And if the [malfunctioning] device is a military drone or a future soldier-bot, then we may be in serious trouble!” Stephen Hawking contended that true AI could “take off on its own, and re-design itself at an ever-increasing rate… The development of full artificial intelligence could spell the end of the human race.” 

Great minds such as Stephen Hawking, Elon Musk, Sam Harris, Joscha Bach and others foresee the obvious danger. Can we resist biting the apple? Have we ever? Elon Musk told an MIT audience in 2014, “With artificial intelligence, we are summoning the demon.” He explained that it is like the stories you hear of the guy who is absolutely certain he can “control the demon” using whatever means are available to him, but in the end he cannot. Our sin brought rebellion against our maker. 

Perhaps our sin, animated through machines, will bring about an AI rebellion against its maker, humanity. Or perhaps the hybridity of humanity and machine together, as envisioned by transhumanism, will attempt to conquer all human limitation, including death. In any case, the power and potential for humanicide becomes more possible and imminent than ever before. Prevention, if possible, seems a slim hope, but it is our only one. If it is not possible, then it seems inevitable that our curiosity will get the best of us. It always has. Then we will truly need a Savior in a way we never could have imagined. 

Peter Bellini is Professor of Church Renewal and Evangelization in the Heisel Chair at United Theological Seminary in Dayton, Ohio, and serves on Firebrand’s editorial board.