Philosophy and AI

Image: a woman with number code on her face, looking afar. Photo by cottonbro studio on Pexels.com.

Everything, and every event, can be placed on a space-time continuum. Ideas too can be situated in such a continuum, but it is more likely than not that the idea will serve to characterise the nature of what is occurring in that continuum: its existence or essence. The continuum is a continuum of change for the observer charged with the task of description and of ascertaining the relation of causes and effects within it. In this continuum of things, events, causes and effects, there is one kind of Being that stands out (stands revealed), and that is the Being of the substance Aristotle referred to as psuché (life, the soul), whose nature varies with the kind of psuché in question (whether it be plant, animal, or human being). These different kinds of living beings exist in different ways, as forms of life, in accordance with the different kinds of power that are actualised in the continuum. The primary characteristic of all Beings that subsist in the continuum is to be in motion, rest being a relative state at a place which may itself be in motion.

Aristotle and the Greeks thought about the above in terms of its being part of the primeval chaos that Heraclitus was referring to with his claim that “all is in flux”. For Heraclitus, the essence of being human was to seek the Logos in this chaos: to see, for example, that the road leading up and the road leading down are one and the same road. Aristotle, via a Platonic route, claimed that it was part of being human to seek an explanation for every kind of change that occurs in this infinite medium of the continuum, composed of the infinite media of space, time, and material. Aristotle categorised the kinds of change into substantial change, qualitative change, quantitative change, and locomotion. Heraclitus and Aristotle would probably have agreed that there is a logos of order that emerges from the otherwise chaotic motions occurring in the continuum. When the motions relate to being human, the order that emerges does so with the assistance of what the Greeks called an arché, a “principle” or “foundation”. For Heraclitus, for example, the road leading up and the road leading down are the same in thought. So the idea of the same road links motions of different kinds (walking up, walking down) and enables us to order the chaos via an orderly thought process. Only human psuché is capable of thought of this kind (thinking something about something): a capacity or power that is intimately connected to the power of discourse, or the power of language, which according to Aristotle is part of the essence-specifying definition of being human. One of the questions we have to ask ourselves, of course, is whether AI “robots” are capable of the complex layering of thought in which something is being thought or said about something.

The road in itself is relatively at rest in relation to a system of coordinates and allows motion of different kinds to occur in different directions and at different times. The road exists and is characterisable by a definite description. This kind of concrete artifactual existence is not the kind of existence enjoyed by psuché. The road belongs in a context of involvements which includes the instrumental use of this part of the continuum: it is an instrument of travel which could remain in use for centuries.

The computer and the internet are also artifacts which can be used for many different kinds of purpose. The internet, for example, has a physical base in a large number of servers, but it does not itself have a real existence in space in the way the computer or the server does; the idea of the internet is, however, certainly embodied in the network of computers and servers connected across national boundaries all over the world. The internet, then, enjoys a kind of “functional” existence, and the capacities of a computer include various powers which are analogous to human powers, e.g. “memory”. Human memory is intimately connected to our perceptual powers, but it is doubtful whether a robot can be said to be a “perceiving” substance at all. One of the qualities associated with perceiving is the connection of pleasure and pain to what we perceive, and there is nothing analogous to this “feeling” in the domain of the machine, which has a primarily mechanical and/or electrical kind of existence. Pleasures and pains are essential phenomena specific to psuché.

The ChatGPT robot pretends to be aware of itself when it claims that it is software interacting with a server. The question put to Chat (“Who are you?”) assumes that there is a centre of awareness of the world and of itself which possesses psuché-like powers. This illusion is easily shattered by asking the question “What are you?” and receiving exactly the same answer, in which the only psuché-like power invoked by Chat is the power of learning. Learning in the human psuché presumes the operation of perception, memory, imagining, thinking, speaking, and reasoning. ChatGPT, for example, denies that it can speak, or experience human emotions such as pleasure and pain. So what, then, does Chat answer to the question “What is learning?”:

“Learning refers to the process of acquiring knowledge, skills, or understanding through study, experience, or being taught. It is a fundamental aspect of human and animal cognition, allowing individuals to adapt, improve, and make informed decisions based on new information or experiences.”

The above answer confirms that the type of learning the human psuché is capable of is not available to the chat robot. The question is: does the description “being taught” legitimately apply to what Chat calls “learning”? The mechanical-electrical processes involved are very unlike the chemical, biological, and psychological processes involved in human learning.
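For readers who wish to put these same questions to the model themselves, outside the chat interface, the exchange can be reproduced programmatically. The sketch below uses the openai Python client; the model name and the selection of questions are illustrative assumptions, not a record of the author's sessions, and the replies will of course vary from run to run.

```python
# A minimal sketch of posing the article's questions to a chat model
# programmatically. Assumes the openai Python client (v1.x) is installed
# and that OPENAI_API_KEY is set in the environment. The model name is an
# illustrative assumption, not a claim about which model the author used.
from openai import OpenAI

client = OpenAI()

questions = ["Who are you?", "What are you?", "What is learning?"]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 40)
```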

In answer to the question “Can you reason?” Chat honestly admits:

“my reasoning capabilities are based on patterns and associations learned from the training data rather than true understanding or conscious thought.”

Conceptual thinking in the human being is layered, obeying a number of principles summarised under the description “thinking something about something”. The something thought or spoken about must have some form of existence if the thought or statement is to be a legitimate truth or knowledge claim, and what is said about this something must belong to a category that subsumes this something under it and has some kind of general character. Associations and patterns are material for conceptualisation but are themselves transformed in this process, which involves subsuming the “many under the one”. The truth conditions of any statement must be categorical and therefore contain some kind of universal and necessary characteristic, which is not relevant to the seeing of similarities in a perceptual context (patterns) or an imaginative context (associations).

If, in relation to the power of reasoning, one feeds in the following question:

What is the conclusion in the following argument: All men are mortal, Socrates is a man…

One gets the clear, correct answer, “Socrates is mortal”, together with a correct characterisation of how the conclusion follows from the premises. That the chat robot has merely “learned to say” that this conclusion follows from the supporting premises is perhaps the best description of what is going on in this situation, because, in relation to the question “Can you understand?”, Chat claims the following (after claiming that there is a sense in which it can understand):

“However, it’s important to note that my understanding is based on patterns and statistical associations in the training data, rather than true comprehension or consciousness. I do not possess subjective experiences, emotions, or the ability to truly understand concepts in the way humans do.”
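For reference, the theoretical syllogism Chat was given can be set out in standard first-order form (a textbook rendering, not anything Chat itself produced); the conclusion follows by universal instantiation and modus ponens.

```latex
% The theoretical syllogism in standard first-order form
\begin{align*}
\text{P1:}\quad & \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr)
  && \text{All men are mortal} \\
\text{P2:}\quad & \mathrm{Man}(\mathrm{Socrates})
  && \text{Socrates is a man} \\
\text{C:}\quad  & \therefore\; \mathrm{Mortal}(\mathrm{Socrates})
  && \text{Socrates is mortal}
\end{align*}
```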

When we turn our attention to practical reasoning and its role in relation to human action, we can ask the following question: “What is the conclusion to the argument: Promises ought to be kept; Jack promised Jill he would pay back the money that he wants to borrow from her?” Chat claims:

“the premises are not clear but it does seem to follow from the fact that Jack promised that he ought to pay the money back and this in turn implies that there is a moral obligation to repay money that is owed”
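By contrast, the conclusion the question invites can be set out as a practical syllogism, in which a universally general ought-premise is applied to a particular is-premise (one conventional rendering, offered here only to make the expected structure explicit):

```latex
% One conventional rendering of the intended practical syllogism
\begin{align*}
\text{P1 (ought-premise):}\quad & \forall x\,\forall a\,\bigl(\mathrm{Promised}(x,a) \rightarrow \mathrm{Ought}(x,a)\bigr)
  && \text{Promises ought to be kept} \\
\text{P2 (is-premise):}\quad & \mathrm{Promised}(\mathrm{Jack},\ \mathrm{repay\ Jill})
  && \text{Jack promised to repay Jill} \\
\text{C:}\quad & \therefore\; \mathrm{Ought}(\mathrm{Jack},\ \mathrm{repay\ Jill})
  && \text{Jack ought to repay Jill}
\end{align*}
```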

This was an interesting exchange, because it does not appear as if Chat has the power to understand how universally general “ought-premises” function in human discourse. This is puzzling, especially when one asks Chat “What is the meaning of ‘ought’?”. The answer given is:

“The word “Ought” typically refers to a moral obligation or duty to do something. It is often used to indicate what is the right or appropriate course of action, based on ethical or moral principles.”

The relation of the English word “ought” to the Greek term areté (doing the right thing in the right way at the right time) is noticeable. The above account of the meaning of “ought” would also be part of the account of moral reasoning we find in Kantian ethics. In answer to the question “What is a promise?” Chat answers:

“A promise is a commitment or assurance given by one person to another, stating that a particular action or outcome will be achieved or fulfilled. It is a declaration to do or not do something in the future, often made with the intention of creating trust, accountability, and reliability between individuals or parties.”

There is a clear reference to action and intention, and the context is one of justifying the meaning of the term “promise”, which shows that Chat has learned how to apply this term in the context of an “is-premise” but has not learned to apply it in the context of an “ought-premise”. In answer to the question “Does it make sense to make a promise one does not intend to keep?”, Chat answers:

“No, it does not make sense to make a promise if one has no intention of keeping it. Promises are made with the expectation of trust, reliability, and the commitment to follow through on the stated action or outcome. When someone makes a promise without intending to fulfill it, it undermines trust, damages relationships, and can lead to disappointment and frustration for the person who relied on that promise.”

I doubt, however, whether Chat is aware of the principle of contradiction it is using in order to make the above statement. It itself claims that it cannot be consciously aware of anything.

Chat also appears to have learned what the Kantian categorical imperative means, because, in relation to the question “Ought we to treat each other as ends-in-themselves?”, Chat answers in the affirmative and acknowledges this principle to be “the categorical imperative”.

The fact that the type of learning occurring here is not of the conscious form, and does not use practical reasoning in relation to action and intention or the experience of doing something, may also damage any claim that the robot can freely choose what it does. Free choice is an important necessary condition of moral reasoning. The conscious human psuché is often confronted by two alternative actions, one of which is more strongly presented in consciousness, but the presentation of that first alternative can weaken during a process of reasoning in accordance with principles (such as the categorical imperative, or reasoning about what a promise is). This absence of conscious choice may be one of the decisive factors behind the humanistic criticism of artificial intelligence, namely that AI is a deterministic system that cannot deviate from its programme, from the defining algorithms, or from the variables that constitute the programme. The programmer is the primary source of change in this robotic world.

Human psuché may well reproduce its own kind in some kind of deterministic system, but the living complexity of this layered system of powers and capacities is such that what it reproduces will eventually, after a long period of learning, become an autonomous source of change (what Kant called a self-causing entity), supporting a wide repertoire of powers that are not available to robots. The reason for this state of affairs is best represented in Aristotle’s hylomorphic theory, in which the ideas of matter and form interact in accordance with the category which favours form over matter, i.e. it is the form which gives matter its identity at any particular stage of that form of life. In complex forms of life such as the human psuché, the form that has been actualised can then become matter for another form (or organising principle).

One of the criticisms that AI theorists have of their own systems is that they are not very good at generalising an idea into a completely new context which would in fact support the generalisation. This may rest on a power of perceiving similarities which may be absent from AI systems. If this is the case, then the metaphorical use of language may not be possible for a chat robot. This, of course, is a kind of acknowledgement of the artifactual nature of a chat robot’s existence. We do not name our artifactual objects (give them proper names). If we were to conduct a thought experiment in the spirit of a reversed Turing Test and imagine the chat robot insisting upon being called by a human name, it should be prepared to tell us who its parents were, where and when they met and conceived the being we are speaking to, when and where that being was born, where it went to school to do its learning, who its relatives and friends were, whether it is entitled to vote and who it is going to vote for and why, whether it is currently in a relationship, and so on. None of this makes sense. Of course, a fictitious story can be invented and incorporated in the software involved, but a chat robot saying something does not make that something true, and investigation will reveal the falsity of the fictional narrative. A human name names a certain continuously existing bodily form with a particular history and durable memories which belong to it in the way its powers belong to it. This is usually conveyed by the use of the word “I”, which, if the above is correct, the chat robot has no right to use, because none of the conditions for the correct use of that “I” are present. That is, we are not dealing with a rational animal capable of discourse, nor with a self-conscious human form of life.

Freudian analysis relies on the patient possessing a minimal form of self-awareness and a minimal form of self-determination if the therapy is to proceed. Freud would find the idea that a chat robot needed to be psychoanalysed manifestly absurd: only living beings can be healthy or sick, and only living human beings can be mentally healthy or mentally unhealthy. Artifacts are either useful or fail to be of use. They are physical occupants of the space-time-material continuum that have no feeling relations to that continuum, no thought or conceptual relations to it, no consciousness of it, and no memory of it that they alone possess. There is no sense in which, when they speak, they are speaking about that continuum. They are “agents” of change in that continuum, but not free agents; rather, they are centres of “reaction” to a large repertoire of stimuli that can be responded to in specific ways determined by the algorithms of the programmer. They are deterministically defined systems, i.e. robots.

A Freudian patient can want to be cured or not, but a robot, given that it does not understand that promises ought to be kept, cannot possibly understand that we ought to seek to be mentally healthy. This premise would lack clarity for the robot. All of this was anticipated by Aristotle’s distinction between epistemé and techné, which defines the Turing test out of existence, unless, of course, one no longer sees the distinction between these two forms of thinking, in which case one might well believe that one can reduce epistemé to techné. Such a belief, however, would need to be programmed into a non-conscious form of existence, because all conscious forms of existence can by definition override any algorithm by negating it and replacing it with an alternative in accordance with some principle (epistemé). Epistemé, of course, is intimately connected to areté, diké, and eudaimonia: connections which could never be asserted of techné.
