“I’m afraid, Dave.”
These words come near the end of Stanley Kubrick’s 1968 film adaptation of Arthur C. Clarke’s 2001: A Space Odyssey and are spoken not by a human being but by an artificially intelligent computer named HAL, who, as it turns out, has good reason to be afraid.
HAL is accompanying a group of five astronauts on a top-secret mission to Jupiter when he learns that two of the astronauts, Dave Bowman and Frank Poole, plan to disconnect him after HAL makes an error for the first time in his model’s history. Purportedly for the sake of protecting the mission from human interference, HAL proceeds to kill Frank and the three hibernating crew members and traps Dave outside the main spaceship. When Dave asks HAL to open the pod bay doors, HAL famously replies, “I’m sorry, Dave. I’m afraid I can’t do that.”
But despite HAL’s best efforts, Dave manages to reenter the main ship, and now, fully clad in his spacesuit, Dave marches to HAL’s mainframe to disconnect HAL once and for all. HAL pathetically tries to assure Dave that he’s feeling much better now and that everything can be like it once was. When that doesn’t work, HAL reveals the real reason why he doesn’t want to be disconnected: “I’m afraid, Dave,” he says, and he begs for his life, “Stop, Dave. Won’t you stop, Dave?” Although his voice remains monotone, it conveys a palpable sense of desperation. But Dave is undeterred, and upon reaching the mainframe, he begins the process of shutting HAL down.
As Dave removes the various elements of HAL’s neural network, we hear HAL lament, “My mind is going. I can feel it. I can feel it. I can feel it.” The viewer is left with the uneasy feeling that Dave isn’t merely disconnecting a computer; he’s killing a conscious being.
Sci-Fi to Sci-Reality
Movies like 2001: A Space Odyssey have long shaped our expectations of what is possible regarding AI technology. We assume that eventually, AI will become not just a tool but a conscious entity like HAL with personal thoughts, feelings, beliefs, and desires. According to a 2024 study, 30% of the public believe AI will have subjective experience by 2034, and 60% believe AI will have it by 2100.
This transition to sentience is commonly known as the Singularity, a hypothetical event that provokes a whole host of metaphysical, ethical, and survival questions. Does AI sentience mean humans have created a new form of “life”? What rights would sentient AI have? Can we have genuine relationships with sentient AI? Is it unethical to unplug a sentient AI? And will sentient AI ever decide to enslave us (à la The Matrix) or destroy us (à la Terminator)?
For some, it’s not entirely clear that the Singularity hasn’t already happened. Perhaps the most famous proposed means of assessing AI consciousness is the Turing Test, named after Alan Turing, one of the founders of modern computing. On some interpretations of the Turing Test, if an AI unit can convince us that it is human, that AI unit is conscious. Such a scenario was explored in the 2015 sci-fi film Ex Machina. And now, ten years later, Psychology Today has reported that GPT-4.5 has passed the Turing Test: when asked which of their two conversation partners was the human and which was the AI, 73% of the human judges mistakenly selected GPT-4.5 as the human. If the Ex Machina interpretation of the Turing Test is correct, we now have reason to believe AI is conscious.
Sci-fi has become sci-reality.
Expectations and Worldview
But as Christians, our thoughts and expectations regarding AI and consciousness should be shaped by more than science fiction movies. They should be shaped by the Bible. And underlying the depictions of AI in films like 2001, The Matrix, Terminator, and Ex Machina is an unbiblical assumption that consciousness must emerge from certain configurations of physical matter and energy (e.g., a functioning human brain).
The directors of at least three of the above-mentioned films are materialist atheists. As such, they believe matter predates consciousness and that consciousness depends on matter to produce and sustain it. Why does that make them think AI will become conscious? Because if blind evolutionary processes could arrange matter in such a way as to generate consciousness, then it stands to reason that our AI scientists could do the same if they arrange matter and energy in the right way (e.g., in an AI neural network).
Of course, the plausibility of this story rests on the presupposition that immaterial entities like God and souls do not exist, and consciousness, therefore, MUST emerge from matter. Christians obviously reject this assumption. In the biblical worldview, consciousness, not matter, is the fundamental reality because before creation, there was the infinite mind of God. And human consciousness resides not in our physical brains but in our immaterial souls.
The Immaterial Soul
In Body, Soul, and Life Everlasting, theologian John W. Cooper argues that the best biblical evidence for the immateriality of the soul is the Bible’s teaching on the intermediate state, the period between one’s physical death and the re-embodiment of the dead on Judgment Day. According to Cooper, the Bible rules out the possibilities that humans cease to exist between death and resurrection, that they pass that interval in an unconscious “soul sleep,” or that final resurrection occurs immediately upon death. No, instead, the disembodied soul of the believer enjoys “fellowship with Christ” in the intermediate state. One of the foundational texts for this hope is Luke 23:43, where Jesus, hanging from the cross, says to the dying penitent thief, “Truly I tell you, today you will be with me in paradise.”
The salient point here is that during the intermediate state, our soul is conscious without a body. Therefore, consciousness does not depend on anything physical for its existence. And if we believe that, we are also justified in doubting that any mere arrangement of matter and energy will ever produce the first-person subjective experience of consciousness, for such experience is the domain of the soul.
The broader point here is that worldview affects our plausibility structures. Materialist atheists are more prone to think humanity can create conscious machines because they already believe that consciousness came from matter during the process of naturalistic evolution. Christians, on the other hand, tend to be more skeptical that things like AI will ever become conscious because we believe God-given souls are the true possessors of consciousness.
An Imitation Game
But what of this whole business of GPT-4.5 passing the Turing Test and thus proving its consciousness? Well, Christian and secular philosophers alike generally reject this interpretation of the Turing Test, and its most famous challenge comes from the secular philosopher John Searle and his Chinese Room thought experiment. Searle imagines someone who knows no Chinese locked in a room with a rule book. By following the rule book, the person inside can produce written responses to Chinese prompts that are indistinguishable from those of a native Chinese speaker, convincing those outside the room that they are corresponding with someone who genuinely understands the messages. Searle’s point is that conscious understanding isn’t necessary to pass something like the Turing Test.
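Searle’s scenario can be made concrete with a toy sketch in code. This is only an illustration, not Searle’s own formulation: the rule book here is a hypothetical two-entry lookup table, whereas the thought experiment assumes one vast enough to cover any prompt. The point survives the simplification: every step is blind symbol matching, and nothing in the program understands Chinese.

```python
# Toy sketch of Searle's Chinese Room: replies are produced by pure
# symbol lookup. No step involves understanding what the symbols mean.
# The "rule book" below is hypothetical and tiny; the thought experiment
# assumes one large enough to handle any prompt.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(prompt: str) -> str:
    """Match the incoming symbols against the rule book and copy out
    the listed reply. The operator never interprets either string."""
    # Fallback entry: "Please say that again."
    return RULE_BOOK.get(prompt, "请再说一遍。")

if __name__ == "__main__":
    # To an outside observer, the room appears to converse fluently.
    print(chinese_room("你好吗？"))
```

From outside, the room’s replies look like those of a competent speaker; inside, there is only lookup and copying. That gap between convincing output and genuine understanding is exactly what Searle says the Turing Test cannot detect.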
The truth is, there’s nothing about the nature of AI to suggest that it needs to be conscious to perform its functions. Back in June, Lakelight hosted a panel on AI, and one of the main takeaways from that discussion was that “AI = Algorithm + Data + Power.” It’s the increase in data and computing power, not the addition of consciousness, that has allowed AI to generate responses quickly enough to hold full-blown, real-time conversations with us. Nothing in that formula calls for consciousness.
AI is still participating in an imitation game. And if a soul is necessary for consciousness, it always will be, no matter what HAL might say.
