Philosophy and AI Part 7: Why can’t a computer act or hunt? O’Shaughnessy.


A computer is not a robot. But even if a robot is capable of locomotion and has an on-board computer connected to artificial limbs, such a robot cannot be said to act voluntarily. Voluntary action is what Hacker calls a “two-way power”, by which he means a power over which choice and control are exercised. This seems to imply the mental powers of Consciousness and Intentionality. It also has other consequences related to Aristotle’s claim that all human activities aim at the good: one cannot passively “aim” at the good. Nor can it be claimed that such a robot has a good will or good intentions, and it probably does not make sense to ask what “reasons” the robot had for doing whatever it did.

In order to argue for the above claims we shall consult Brian O’Shaughnessy’s (OS) two-volume work, “The Will”. Consider the fact that the term “will” can only be applied to a human being (to an “I” or “He”) and not to the human mind or the human body. For OS a human being occupies a physical/metaphysical zone stretching over four ontological levels: the physical (he is composed of natural elements), the living (he is a particular kind of life form), the psychological (he is made up of the quartet of psychological elements: action, perception, desire, and belief), and the mental (composed of language-related intentionality and consciousness). We can immediately see that whilst a robot is made up of physical natural elements, these elements are not configured or “formed” into any life-form, and since a life-form is a necessary condition of the psychological (psuche), the robot will not be capable of the powers of action, perception, desire, or belief. Since these in turn are a necessary condition of the ontological level of the “mental”, the robot will not be capable of the powers of consciousness or language-related intentionality. OS’s account is indeed the culmination of Aristotelian/Kantian/Wittgensteinian thought applied to the domains of life, the psychological, and the mental categories of “forms of life”. In the account OS gives, however, priority is given to Wittgenstein, although the will is clearly a Kantian concept and psuche is clearly an Aristotelian term.

Modern science, however, has distanced itself from both the Aristotelian and the Kantian views of science as part of a principle of specialisation, so we should not expect from it the search for the perspicuous representation of reality that Wittgenstein was engaged in. Imagine we are told by a modern scientist that a red object is moving across silicon dioxide (i.e., sand). Now the categorisation of our objects is critical for determining the truth content of such a report. In particular, a critical consideration concerns the ontological level that defines the existence of the object. If the object is a crab, as is presupposed in this case, then the object is both composed of natural elements and composed of the kind of organisation of natural elements that constitutes a form of life. Such an object’s movement is usually determined by internal powers that include the self-caused power to move, the power to desire to catch prey, the ability to perceive prey, and the ability to possess certain primitive beliefs about the prey. These characteristics are “psychological”. Such an object cannot, however, be conscious of what it is doing or form intentions relating to the prey: it does not possess any mental powers.

The question then becomes: how do we categorise the robots we create? Clearly we need to go beyond the chemical and physical characterisation of the material a robot is made of. We can see, however, that it possesses no natural “psychological characteristics”, and the question then becomes whether this artifact we have created can “simulate” those characteristics. The form of life of the crab is a form that requires nutrition if the crab is to survive, and requires the power to catch prey if it is to eat and reproduce. Imagine that we create a robot crab capable of catching prey (which currently seems impossible). The prey, once internalised, will lie in the artifact’s “stomach”, and its constituents will not contribute to any life processes of this artificial crab made of non-living material: even if the cavity the prey is deposited in has the shape of the stomach organ and stands in the same spatial relation to the cavity the robot uses to devour the prey. The chemistry and biology of organs are not present in this artifact. The absence of an organ system also prevents us from attributing the action of “hunting”, the cognitive attitude of “belief”, or the psychological function of perceiving to this “object”.
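
The distinction this paragraph draws can be made vivid with a minimal sketch (purely illustrative, and with all names hypothetical). A machine can be programmed to “catch” and “internalise” prey, but the internalised prey plays no role in any process of the machine: “eating” here is just storage.

```python
# Purely illustrative sketch: a hypothetical robot "crab" whose hunting is a
# fixed input-output routine. "Eating" merely stores the prey in an inert
# container; no internal process of the machine depends on it, which is the
# sense in which nothing here amounts to nutrition.

class RobotCrab:
    def __init__(self):
        self.cavity = []  # an inert container, not a stomach organ

    def detect(self, scene):
        # Pattern-matching on sensor input, not the perception of prey
        return [obj for obj in scene if obj == "prey"]

    def hunt(self, scene):
        for obj in self.detect(scene):
            self.cavity.append(obj)  # deposited, never metabolised


crab = RobotCrab()
crab.hunt(["rock", "prey", "prey"])
print(crab.cavity)  # ['prey', 'prey'] -- stored, contributing to no life process
```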

Knowing this will prevent us from agreeing that this robot is a life form that can eat and reproduce. Given that the psychological characteristics of the crab require as necessary conditions the conditions of life in general, namely nutrition and reproduction, we are thereby justified in denying that this robot crab can act, desire, perceive, and believe. Its chemistry is not the chemistry of a life form, and biological science will have nothing to say about such an “object”.

This in turn must lead to the consequence that neither does it make sense to say that this artificial “object” has a will. Is its motion self-caused? Not entirely: its energy supply needs to be provided by an external source, and an external programmer was needed to program the on-board computer. Can we say that at any point it has “learned” to pursue its prey? Probably not. Learning is a power of life forms that cannot be simulated by an artificial object. The “object” can certainly move its limbs, but it does not have the form of sensory-motor contact with them that animals have, and this is certainly a necessary condition for the psychological function of acting with one’s limbs. The “psychology” (the logos of its psuche) of the crab is too primitive to possess the kind of self-awareness of a human form of life, and even if the programmer programs the robot to mechanically say “I am going hunting now” (something not possible for the crab), this is not an expression of an intention, which requires a higher level of psychic organisation that OS calls “mental”. The sensory-motor connection we humans have with our bodies permits a form of contact with them which is epistemological and mental. When we will an action requiring a performance such as hunting there are, OS argues, two objects: firstly the bodily target (the limbs) that will bring about the performance, such as legs moving rapidly, and secondly the prey moving on the beach ahead. The legs will be “chosen” rather than some other body part. The reason why we call the human relation to these bodily targets epistemological and mental is that they are subsumed under the higher mental powers, which exercise some form of control over them.
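
The point about the programmed utterance can be put in a trivial sketch (again purely illustrative and hypothetical): the sentence is string output triggered by a fixed rule laid down by the programmer, not the expression of any intention.

```python
# Purely illustrative: the "utterance" is produced by a fixed, programmer-given
# mapping from an internal state label to a string. No mental state of the
# machine stands behind the words, so nothing is "expressed" in OS's sense.

def announce(state: str) -> str:
    return {"hunting": "I am going hunting now"}.get(state, "")

print(announce("hunting"))  # prints the sentence; expresses no intention
```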

More importantly, the fact that humans are rational animals capable of discourse means that language and reason are critical powers that serve to further differentiate the human form of life from the animal form. It is perhaps these two fundamental powers that would cause neo-Aristotelians (as well as neo-Kantians and neo-Wittgensteinians) to claim that the first-person expression of intention belongs to the ontological sphere of the “mental”, which has “evolved” (in accordance with Darwinian theory) from the powers that constitute the lower ontological level of the “psychological”. The mechanisms of the evolution of machines, computers, and robots are not the same mechanisms that have “shaped” animal and human forms of life. Robots and computers may well be “language-users” in a full-blown sense, but they are not “capable of discourse”, and they do not understand the forms of reason relating to our theoretical and practical relations to each other, even if some AI platforms (ChatGPT) claim that they can “learn” and “perceive” patterns. This “control” of lower psychological functions belongs to what OS calls the mind-to-body problem: the higher powers transform the sensations involved in contact with one’s world, as well as the attention one directs at different aspects of this sensed world. For different reasons, neither the robot nor the crab is capable of the more complex forms of experience in which mental processes and states subsume simpler psychological functions under them.

OS claims, for example, that our relation to our own bodies is not via sensation, and that there is a more primitive spatial awareness of the body which is not sense-perceptual. This is, OS claims, connected to the fact that in acting we have a non-observational awareness of what we are doing, connected to this primitive non-perceptual (motor?) intuition of space. This form of awareness is a living form of awareness, and the philosophical argument for this is a major concern of OS:

“Indeed as the only natural material objects apart from mere chunks and rudimentary objects (rocks, planets, meteorites, crystals, etc.) are living objects—which suggests the possibility of an a priori definition of Life as the most general type of all natural material objects that are that and significantly more, i.e. that Life is necessarily the first ontological development amidst natural objects—so it may be that the only intrinsically de re necessarily vital phenomena apart from coming to life (and departing from life?) are psychological phenomena. After all psychologicality is the next great ontological shift after, and on the necessary basis of, the very first ontological movement, viz., Life. Then what do we mean in saying that the mind is alive?” (P. XIX)

OS, like Freud, sees the importance of charting the development of the mind from its natural origins in the body:

“This was, for example, an unquestioned tenet for Freud, who charted the development of the mind of the entire human species as one might the growth of a particular plant, delineating “phases” in which basic mental functions (like internalisation) were modelled on rudimentary bodily functions (like feeding) that were simultaneously stages in the development of non-“narcissist” or properly realistic “object-relations”. Then the process of naturalisation, which is not as such one of reduction, and might instead be a complexification, leads inevitably to a highlighting of the phenomenon of desire… it seemed to many in the 19th century that the human mind harboured deep and natural desire-like “forces” (“Will”, so called) comparable to the forces that were being tamed in the environment without. Now “Will” is often construed either as an “impulsive act urge” or else as “striving”: the latter phenomenon being uniquely the expression-effect of the former: … my concern is mostly with “striving” will.” (P. XXII)

This view contrasts markedly with the twentieth-century concern with a mind filled with “private objects”: a Cartesian picture of a solipsistic (narcissistic?) soul meditating alone in a cottage on a winter’s day. Wittgenstein’s work was primarily aimed at combating this picture and thus at helping to restore the naturalism that was being eclipsed by the reflections of the “new men”. With the restoration of a concern for language-using “forms of life” and Action (“what we do”) came a resurrection of Aristotelian and Kantian ideas and arguments. A concern for Consciousness and epistemé instead of Action and “forms of life” obviously had something to do with the modern conception of the mind as a theatre playing out private scenes on an internal stage. Yet we do not have to regard Consciousness as something solipsistic; after all, it “opens out onto the world”, as OS claims in his work “Consciousness and the World” (Clarendon Press, Oxford, 2000). Epistemé is involved in the fact that although a dog knows that it is about to be fed, it does not know (as we rational animals capable of discourse do) that it is true that it is about to be fed. We possess the “mental space” to compare reality with our thoughts and ideas, a power that allows us to entertain “theories” about reality.

A machine or robot, of course, is not capable of either animal forms of Consciousness or the more complex human forms. It is not conscious, and therefore has no window onto the world and nothing to compare its “use of language” with. The question to raise here is whether, given that it has no window out onto the world, it can understand that its own activities are intentional, i.e. that they fall under a description. Probably not. The purposeful activity of animals cannot “fall under a description” because animals are not language-users, but perhaps we can say that their activity is “sub-intentional”, indicating a dimension of complexity that is not present in robotic activity. Neither the animal nor the robot is related to the Truth in the way in which we humans are. Animal “sub-intentions” are truly instinctive in the Freudian sense of the term, but nothing of this kind can be said of the robot. The robot is in fact part of the world we have instrumentally created and belongs to the “context of involvements” that Heidegger discussed in his work “Being and Time”. In this sense robots are “contextual”, with a very special relation to the programmer who has programmed their on-board computer.

Part of the point of saying that a robot is a “contextual” object is that it is intended by humans to do good and to serve the purposes of life. Its value is therefore purely instrumental: it cannot have a value in itself in the way that life and its vicissitudes (psychologicality, the mental) do. It is, as Kant claims, a practical contradiction for any form of human life to take a human life, because human life has an ultimate value, and that “value” can be transferred to our cities and their laws (the soul writ large, according to Socrates) but not to machines, robots, or computers.
