The Human Normativity of AI Sentience and Morality
What the questions of AI sentience and moral status reveal about conceptual confusion.
What prevents the literal applicability of concepts of thought, reason and inference to our calculating devices are not deficiencies in computational power, which may be overcome by fifth‐generation computers. Rather, it is the fact that machines are not alive. They have no biography, let alone autobiography. The concepts of growth, maturation and death have no application to them, nor do those of nutrition, health and reproduction. It makes no sense to attribute to a machine will or passion, desire or suffering. The concepts of thinking and reasoning, however, are woven into this rich web of psychological faculties. It is only of a living creature that we can say that it manifests those complex patterns of behaviour and reaction within the ramifying context of a form of life that constitute the grounds, in appropriate circumstances, for the ascription of even part of the network of psychological concepts.
P. M. S. Hacker - Wittgenstein: Meaning and Mind
To many of those familiar with Wittgenstein’s later work, in particular his Philosophical Investigations, the contemporary discourse around AI sentience and moral status is, for lack of a better descriptor, frustrating. Wittgenstein famously attempted to demolish much dogmatic thinking, dogmas that still exist to this day in scientific and philosophical discourse, the AI debates included. His radical investigations are only known to some, however, and often only vaguely so, and his achievements are misunderstood by many. I will briefly give an overview of what the questions of AI sentience and moral status are, before proceeding with some lessons from Wittgenstein’s Philosophical Investigations, which are subsequently applied to the AI questions. As should be evident, most of what is said here is owed to the work of Wittgenstein and Hacker.
AI Sentience and Moral Status
Robert Long recently provided an overview of experts who say AI could soon merit moral status. By this is meant “that AI systems might soon be conscious, sentient, or agentic in a morally relevant way.” The list of experts is long and growing, and includes notable neuroscientists, philosophers and AI researchers such as Anil Seth, David Chalmers, Nick Bostrom and Yoshua Bengio. Among these and others, the beliefs range from not impossible (as in e.g. Seth’s phrasing: “it is unwise to dismiss the possibility altogether”) and upwards, with nearly everyone adding some qualifier like may or might to their statements. Understandably so, for AI sentience lies at the nexus of a web of ideas that force us to reckon with what it means to be human.
Morality is a conceptual cluster commonly considered normative, i.e. what counts as ‘moral’, ‘right’, ‘wrong’ etc. is contextually decided by how we, humans, use these terms. This means that disagreement is to be expected. What is less appreciated is the normativity of AI sentience, and I will argue that basing the question of AI moral status on AI sentience is misguided, particularly when we are confused about what “AI sentience” means.
Long is the co-author of the 2023 paper Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, which assesses a range of theories of consciousness, derives from them a set of ‘indicator properties’ of consciousness, and then evaluates existing AI systems against these indicators. Their analysis “suggests that no current AI systems are conscious, but also [...] that there are no obvious technical barriers to building AI systems which satisfy these indicators.” I have no particular qualms about their assessments and conclusions, but their presuppositions betray a common conceptual confusion that underlies most beliefs in the possibility of AI sentience, the clarification of which undermines their analysis. The paper states that “[...] we adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” From the presupposition of computational functionalism, the conceptual confusion engendered by AI sentience can readily be exemplified. Before doing so explicitly, we need to understand some of the lessons of Wittgenstein’s Philosophical Investigations.
Lessons from the Philosophical Investigations
Where does our investigation get its importance from, since it seems only to destroy everything interesting, that is, all that is great and important? (As it were all the buildings, leaving behind only bits of stone and rubble.) What we are destroying is nothing but houses of cards, and we are clearing up the ground of language on which they stand. The results of philosophy are the uncovering of one or another piece of plain nonsense and of bumps that the understanding has got by running its head up against the limits of language. These bumps make us see the value of the discovery.
Ludwig Wittgenstein - Philosophical Investigations
Many problems in philosophy, psychology, neuroscience, metaphysics and so on are only problems in virtue of conceptual confusion. This stems from what is, if not a human fault, at the very least a modern one: analyzing concepts outside the contexts in which they acquire their meaning. We analyze ‘mind’ in isolation, blind to how we then leap across contexts of use that give the concept different meanings. The same polysemy (multiplicity of meaning) applies to ‘mind’ as to ‘duck’ (compare the exclamation “Duck!” in the context of an incoming projectile and in the presence of a quacking bird).
One of the great insights of Wittgenstein is that meaning comes from use. Since use is no bounded, final process, all concepts are as such vague, related by family resemblance. Meaning cannot be encapsulated in rules without exception. This means that ‘mind’ and ‘brain’ overlap but are not equal, and both depend on a whole web of other terms and concepts for their meaning. Their connection is revealed in their contexts of overlap, but if our method of analysis involves artificially separating the concepts, cleaving them apart, the region of overlap evaporates and we are left with a gap. We analyze the concepts ‘mind’ and ‘brain’ divorced from the contexts of their everyday use and arrive at hard problems of how one can give rise to the other.
A consequence coupled to the preceding two points (due to the Private Language Argument) is that essences or objects, whether mental or physical, can play no role in determining meaning. There can be no finitely enumerable definition of any concept, a fact which precludes the possibility of generalizing a concept to make of it an essence. This means that we are easily confused by the apparent ‘crystalline purity’ of our language: a concept ‘mind’, so clearly encapsulated by a name, seems to imply that the meaning of the name is equally clear and crystalline, that it is some absolute and unchangeable object, and the same holds for all concepts, even ‘world’ and ‘reality’. But this is an illusion we fall for: the conceptual ideality apparent in language belies the provisionality and vagueness of concepts manifested in reality. Both ‘object’ and ‘process’, and even ‘essence’, are themselves vague concepts. Think of how the concept ‘electron’ has changed over the past couple of centuries: we are led to believe we were always approximating towards a fixed element of reality, while in fact we were unknowingly shaping and reshaping the concept and, along with it, its concomitant theories. Science, logic and language only ever approximate the order of reality, an order that is only apparently crystalline.
We might begin to feel unsteady now, given that any concept is vague and dependent on other concepts that are vague, themselves dependent on yet more vague concepts, and so on. But this unsteadiness is another artefact of viewing language separated from its use, for when we use language there is no unsteadiness. We talk and use words and signs in a multitude of ways, rarely encountering issues in getting our meaning across, for our use of words gets support from the circumstances they are used in. The unsteadiness is only apparent when we take concepts out of their natural habitat. The world is held together like stones in an arch.
Language and the conceptual web, isolated from the living contexts in which they have been shaped and in which they are applied, may give the impression of gaps that need closing. The chasm between an expectation and its fulfillment, or an order and its execution, cannot be bridged in language, but only in an experience of which these linguistic pieces are contextual parts, just as the chasm between ‘brain’ and ‘mind’ cannot be bridged except in an experience, a context, in which their relationship is clear. Both these words denote vague concepts that are abstractions of our experience, valid in contexts that overlap but are not equal. To experience, to be a human being, is to have a mind, to also have a brain, but also much more besides. There is no absolute essence that is you that any concept may contain.
‘But surely the chair I am sitting on is real, and it is an object, and it is there whether I like it or not!’ You are having an experience (even this way of phrasing it reveals some confusing aspects of language, as if experience is something we have or own, like a birthday party or a toy), from which you abstract parts, likely for reasons of intelligibility, convenience, survival and so on. A portion of this experience you have labelled as a chair, a label you have learnt as part of many different contexts throughout your life. The chair is in flux, just as experience is, its materials deteriorating slowly over time, its shape slowly shifting, etc. You still think of it as a single, definite chair. You label it an object, a label you have also learnt in many different contexts. The rigid names of ‘chair’ and ‘object’ thus both refer to a shifting portion of your experience, and their ‘rigidity’ is only apparent, a consequence of their name and not their reality. To ask whether it is there whether you like it or not is nonsensical, for anything you might say refers to an experience of which you are necessarily part. That it will be there even if you leave the room is a grammatical artefact. Any evaluation of whether the chair is there takes place on the stage of your experience, whether directly or indirectly. The world is held together like stones in an arch, but the arch is ever in flux, and so the stones are always shifting to keep the arch stable.
Computational Functionalism
It seems to us sometimes as though the phenomena of personal experience were in a way phenomena in the upper strata of the atmosphere as opposed to the material phenomena which happen on the ground. There are views according to which these phenomena in the upper strata arise when the material phenomena reach a certain degree of complexity. E.g., that the mental phenomena, sense experience, volition, etc., emerge when a type of animal body of a certain complexity has been evolved.
Ludwig Wittgenstein - The Blue and Brown Books
From these lessons, we may start to see the shape of the error in such approaches as ‘computational functionalism’. The idea that complex computation is necessary and sufficient for consciousness is built on what Whitehead called misplaced concreteness, what I have spoken of as the ontic projection fallacy (see e.g. Experience and Immersion), and what Bennett & Hacker (2022) speak of as the mereological fallacy. It rests on an inversion whereby we mistake abstract parts of our experience and reality for the primary grounds of experience and reality, mistaking some common denominator in many phenomena for the key to all the phenomena. While some of what humans do may be called computation, and while some of what neurons do may be called computation, to equate the one with the other, and in doing so think we have bridged two realms of reality, is a vast confusion. What it is to be human, parts of which are captured by saying we are “conscious”, certainly shares a conceptual overlap with parts of what computation is, and the same can be said about neurons. What it is to be human also shares conceptual overlap with parts of what neurons are. None of this justifies the leap of reducing being human to consciousness, consciousness to computation or the brain, or even the brain to computation. An added qualifier of “complexity” does nothing to change this.
One influential argument for the presupposition of computational functionalism in Butlin, Long et al.’s paper is the substrate independence argument (Chalmers, 1995): “if a person’s neurons were gradually replaced by functionally-equivalent artificial prostheses, their behaviour would stay the same, so it is implausible that they would undergo any radical change in conscious experience.” Not only does this argument presuppose physicalism and reductionism (that conscious experience is reducible to physical neurons), which are artefacts of particular methods in science and explanation and not facts of reality (see the first phase of this project for much more on this theme), it also presupposes that artificial prostheses can replace neurons, which relies entirely on a mechanical view of neurons. Neurons are also mechanical, also physical, and more besides, just as being human is also to think, and more besides. Much of this argument falls to the same kind of conceptual considerations as above. The non-sense of both substrate independence and AI sentience is shown by clarifying the conceptual confusion underlying the presuppositions that seem to make AI sentience a valid proposal.
‘But isn’t it conceivable that some AI system could be conscious, that there could be something it is like to be an AI system?’ We are tricked into thinking that to be conscious is some essence, that there is some core to what being conscious means, that there is some core to what it is like to experience, and we wrongly cleave this core from everything else that consciousness and experience mutually depend on, and then think that this core is something that can be posited of other things that may share some concepts with what it is to be human. But this core is illusory. This is of course not to say that our experience is illusory, or that consciousness is illusory, but that it is a categorial, though common, mistake to think that these concepts point to something that can be separated from their use in human contexts, an essence that may be posited or applied unproblematically outside these contexts, in the way locomotion is shared with many animals and computation with many machines. But consciousness and experience do not work that way, though our language may give that impression.
Only of a human being and what resembles (behaves like) a living human being can one say: it has sensations; it sees, is blind; hears, is deaf; is conscious or unconscious.
Ludwig Wittgenstein - Philosophical Investigations
‘But doesn’t all the preceding imply that if AI systems were to behave and operate far more as humans do (which might be one path to what we conceive of as AGI), they should then be considered conscious or thinking?’ Hacker (2019) provides the following answer to a related question, but it is equally applicable here: “There is surely no ‘correct’ answer to this question. It calls for a decision, not a discovery.” What concepts like thinking, reasoning and consciousness apply to has hitherto been decided entirely by human contexts of use, though speculation about their applicability in both the animal and mechanical domains has long been popular. But this speculation does not alter the human contexts that define their use. These concepts cannot be separated from the conceptual web of contextual use and family resemblance that yields their meaning. It is the whole of this web that supplies the meaning of any one concept. Should AI systems come to behave and operate in ways that overlap far more with the human psychological conceptual web, there would still be no discovery about sentience to be made: AI sentience is a normative question, one we will have to make a decision about. What possible purpose could be served by the decision to call AI conscious or thinking? As the question of moral status will reveal, one purpose might very well be as a tactic to lower the evaluative standard of moral and legal responsibility.
AI Moral Status
Thinking is a capacity of the animate, manifest in the behaviour and action characteristic of its form of life. We need neither hope nor fear that computers may think; the good and evil they bring us is not of their making. If, for some strange and perverse reason we wished to create artificially a thinking thing, as opposed to a device that will save us the trouble of thinking, we would have to start, as it were, with animality, not rationality. Desire and suffering are the roots of thought, not mechanical computation. Artificial intelligence is no more a form of intelligence than fool’s gold is a kind of gold or counterfeit money a form of legitimate currency.
P. M. S. Hacker - Wittgenstein: Meaning and Mind
None of this means that the discussions about AI morality can be disregarded or that they are unimportant. What Hacker is arguing, and what I am saying, is that we better know what the terms and concepts we use in such discussions mean, and how our language works in general, for if we are unclear about their meaning and operation we quickly end up in a position of puzzlement. From the above it should be clear that the question of AI sentience is “a decision, not a discovery”, and that this normative decision should be in the negative if it is to be based on an understanding of how our concepts and language work. If the premises 1) AI can be sentient, and 2) sentience merits moral status, in combination lead to 3) AI merits moral status, then the conceptual clarifications above showing that premise 1 is non-sense are sufficient reason to reject conclusion 3.¹
What we should be extremely wary of is what role is ultimately afforded the humans and corporations behind AI systems. In part, what the entanglement between sentience and moral status leads to is the strengthening of AI systems as independent and intermediary nodes in the network of moral and legal consideration and responsibility, nodes that may serve as artificial grounds for plausible deniability. Yes, this is cynical, but we should obviously be curious and critical about who may stand to gain from affording AI systems an independent moral status. If the existence of AI systems as moral intermediaries is normative, we better know how this normative process works and make a decision based on that, and not on any misguided theory that posits their existence as non-normative. We cannot allow AI sentience to gain any bearing on the question of AI moral status, for this may serve to lower the evaluative standard of risk and of moral and legal responsibility to which the creators (and users) of AI systems are held. Thus, there are both conceptual and moral reasons to treat the question of moral status separately from sentience, and with the non-sense of the question of AI sentience revealed, we should furthermore treat these systems morally as we would any tool or technology: as extensions of ourselves, with the moral implications thereof.
This essay is part of the second phase of The Magical Flower of Winter, a project that now turns its focus from outlining a metamodern view of reality as a whole towards the metacrisis. The thread that links these two phases is how the former can be considered an attempt at providing a world view that may better help us deal with the latter. The first phase can best be accessed through its introduction.
References
Bennett, M. R. & Hacker, P. M. S. (2022). Philosophical Foundations of Neuroscience. Wiley-Blackwell. [2003]
Butlin, P., Long, R., et al. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. https://arxiv.org/pdf/2308.08708
Chalmers, D. (1995). Absent qualia, fading qualia, dancing qualia. In Metzinger, T. (Ed.), Conscious Experience.
Hacker, P. M. S. (2019). Wittgenstein: Meaning and Mind (Vol. 3 of An Analytical Commentary on the Philosophical Investigations, Part I: Essays). Wiley-Blackwell. [1993]
Wittgenstein, L. (1958). The Blue and Brown Books. Blackwell.
Wittgenstein, L. (2009). Philosophical Investigations (Ed. Hacker, P. M. S. & Schulte, J., Tr. Anscombe, G. E. M., Hacker, P. M. S. & Schulte, J.). Wiley-Blackwell. [1953]
¹ This sentence added for clarification, based on feedback.