Artificial intelligence could be one of the greatest opportunities for humanity – but is it also our biggest threat? Opinion ranges from believing AI can transform health care, government and every aspect of our life, to a fear that it has the potential to go rogue and destroy us all. While the debate intensifies, we asked Selwyn fellow Professor Marta Halina to guide us through the origins of human sensations towards an answer to one of the most-asked questions: could AI develop its own consciousness?
Phenomenal consciousness, the first-person ‘what it’s like’ to be an organism, remains an intensely debated topic in philosophy, psychology, neuroscience and other sciences of the mind. Phenomenal states are characterised by the subjective quality of experience: the specific feeling of your body against a chair, the unique sensation of smelling freshly cut grass, or the pang of a stubbed toe. Such ‘raw feels’ or ‘qualia’ may include bodily sensations, perceptions or emotions. Crucially, such states seem inherently private and fundamentally distinct from third-person scientific descriptions.
The unique features of consciousness pose daunting challenges for assessing the inner lives of non-human animals and, increasingly, artificial intelligence (AI). When we ask whether a bat, an octopus, or an advanced AI is conscious, we are asking if it possesses such qualitative, first-person experiences. Thomas Nagel’s famous query, ‘What is it like to be a bat?’ vividly encapsulates this difficulty: how can we truly comprehend the subjective experience of an organism navigating the world through senses fundamentally different from ours, such as echolocation? Extending the question to sophisticated AI, whose architectures may differ radically from biological systems, deepens the mystery. Whether such systems could have experiences at all remains beyond our current understanding, though it is a growing area of concern.
Despite these challenges, many researchers today actively investigate consciousness as a biological phenomenon, particularly its evolution on Earth. Indeed, there is a growing conviction that these lines of inquiry might help reconcile subjective, first-person experiences with third-person accounts. My own research contributes to this expanding field by studying the biology and evolution of cognition and consciousness in animals, and applying insights from these areas to the assessment of artificial systems.
What do we know about how consciousness evolved? All animals today likely descended from a common ancestor (the Urmetazoan) over 600 million years ago. There is now an emerging consensus that consciousness evolved early in animal evolution, around 540-485 million years ago. This period, known as the Cambrian, is famous for its diversification of animal life. Before the Cambrian, animal forms were generally simple, with few tissue types and limited mobility. By the early Cambrian, animals had developed complex bodies, appendages, and sensors, including legs, antennae, spines, sophisticated mouthparts, image-forming eyes, and other novelties. Researchers propose that this radiation of life was accompanied by a range of subjective experiences, including a sense of self, complex perceptions (combining information from multiple senses), and rich evaluative experiences (such as feelings of pleasure and pain).
The emergence of a ‘sense of self’ serves as a compelling example of how researchers trace the evolutionary origins of subjective experience. The story begins with multicellular organisms. Unlike unicellular life forms, these are complex collectives of diverse cell types that demand sophisticated integration to achieve unified action. Nervous systems, researchers argue, were pivotal in providing this coordination, enabling capacities such as self-motion. The evolution of self-motion, however, introduced a critical new challenge: organisms needed to distinguish sensations arising from their own actions from those caused by external events. For instance, an organism had to discern whether rustling leaves signalled its own passage or the approach of a predator. The solution lay in coupling motor and sensory systems. While sensory signals inform appropriate motor commands, the motor system, in turn, alerts sensory areas about upcoming movements. This allows the organism to predict the sensory consequences of its own actions. The result is a continuous, internal mechanism that differentiates self-generated sensations from other-caused ones, thereby forming the basis for a rudimentary sense of self (a toy sketch of this comparison appears below).

Similar evolutionary explanations apply to other aspects of consciousness, such as perception and evaluation. The neuroscientist Bjorn Merker, for example, argues that vertebrates rely on their midbrain (the uppermost part of the brainstem) to generate an integrated ‘reality model’. This model simulates the organism’s body (including its motivational states) within the surrounding environment. Merker contends that such a simulation is not only necessary for acting effectively in a complex world but is also sufficient for phenomenal consciousness, as it generates a unified, first-person perspective of oneself in the world. The relevant midbrain structures are highly conserved across vertebrates and thought to have evolved during the Cambrian period.
Intriguingly, a functionally analogous structure, known as the ‘central complex’, has been identified in arthropods, including insects. This is particularly significant because arthropods were not merely present but were ecologically dominant predators during the Cambrian period. It is plausible, therefore, that these ancient arthropods possessed a central complex comparable to that found in modern insects, potentially supporting an early form of subjective experience.
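One way to picture the motor-sensory coupling described above is as a ‘forward model’: the motor system sends a copy of each command to the sensory system, which predicts the sensation that movement should produce, and any mismatch between prediction and actual input is attributed to the outside world. The toy sketch below is purely illustrative; the one-to-one prediction rule, the threshold and the names are assumptions for the sake of the example, not a description of any real nervous system or study.

# Toy sketch of self/other attribution via a forward model (illustrative only).
# A copy of the motor command is used to predict its sensory consequences;
# a small prediction error is treated as self-generated, a large one as external.

def predict_sensation(motor_command: float) -> float:
    # Hypothetical forward model: assume the strength of a movement maps
    # directly onto the strength of the sensation it produces.
    return motor_command

def attribute_sensation(motor_command: float, actual_sensation: float,
                        threshold: float = 0.1) -> str:
    predicted = predict_sensation(motor_command)
    prediction_error = abs(actual_sensation - predicted)
    return "self-generated" if prediction_error < threshold else "externally caused"

# The leaves rustle roughly as much as the organism's own movement predicts:
print(attribute_sensation(motor_command=0.5, actual_sensation=0.52))  # self-generated

# The organism is still, yet the leaves rustle: something else moved them.
print(attribute_sensation(motor_command=0.0, actual_sensation=0.6))   # externally caused

Real nervous systems learn and continuously update such predictions rather than applying a fixed rule; the point of the sketch is only the comparison between expected and actual input that underlies a rudimentary sense of self.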
The growing recognition that a suite of neural and behavioural markers for consciousness appeared early in animal evolution carries significant ethical implications. For many, the moral consideration afforded to other organisms hinges on their capacity for conscious experience, particularly states such as pleasure or pain. If consciousness is indeed an ancient trait shared broadly across diverse taxa (vertebrates, octopuses and even insects), this significantly widens the potential scope of our ethical responsibilities to include these organisms. Conversely, if consciousness were a more recent evolutionary development, perhaps exclusive to mammals or even just humans, then our ethical obligations towards many other animals might be less extensive. While a minority of scholars maintain this ‘latecomer’ perspective on the evolution of consciousness, the philosophical and scientific consensus increasingly favours an ‘early origin’ view.
Early origin views of consciousness suggest that subjective experiences are not exclusive to humans or even mammals, but might extend to a wider range of animals, potentially including invertebrates, such as insects and octopuses. This expanded perspective on biological consciousness prompts the question: if our views about consciousness in relatively simple organisms are shifting, should we also reconsider the possibility of consciousness in complex artificial systems?
Philosopher Peter Godfrey-Smith, while aligning with an early origin perspective on consciousness, draws a conclusion that pushes against this extension to AI. He argues that the very reasons consciousness might have emerged early in biological evolution also make its emergence in current AI unlikely.
Godfrey-Smith’s argument hinges on a distinction between the broad, coarse-grained functions AI might replicate (such as problem solving) and the specific, fine-grained functions he deems essential for phenomenal experience. He argues that consciousness, having evolved biologically, is inextricably linked to a particular fine-grained functional profile. This profile, Godfrey-Smith holds, is rooted in the metabolic activities distinctive to biological life, which are driven by the unique material properties of molecular environments: nanoscale molecules in random motion dissolved in water. Such an environment enables fundamental processes, including attraction, repulsion, diffusion and spontaneous motion.
Godfrey-Smith suggests that phenomenal consciousness requires biological material; in other words, qualia cannot be instantiated in non-biological systems like contemporary computers. This requirement, however, stems not from any mysterious property of biological matter itself, but from its unique capacity to support the particular fine-grained functions essential for consciousness. It is precisely these functions, he emphasises, that materials such as the metal and silicon chips characteristic of current AI cannot replicate. Thus, insofar as AI systems lack the ability to engage in the types of fine-grained activities we find in living systems, they will remain devoid of consciousness, regardless of their sophistication in performing coarse-grained tasks.