Philosophy and AI Part 4: Review of Weizenbaum’s “Computer Power and Human Reason”: Power and Language

Weizenbaum begins his chapter entitled “Where the Power of the Computer Comes From” with the observation that machines in general follow rigidly defined laws blindly and deterministically. There is no space analogous to consciousness, as there is in a human being, in which the thought of doing something can be interrupted by a negation of that thought: in reaching for an orange, for example, I am struck by the thought “It will soon be time to eat lunch”, and my action is interrupted. In a mechanical “electrical system”, in other words, there is no space for the negation that Sartre thought was definitive of human consciousness. Now whilst we can try to conceptualise what is going on in a computer in sensory-motor terms, the fact remains that we are not dealing with a system in which chemistry plays an essential role (in a nervous system, whether one nerve fires in response to an impulse from another is determined by a chemical interaction at the synapse between them). For Aristotelian hylomorphism, with its four-cause schema of explanation, this is a fundamental difference between an organic system and an inorganic, artifactual system. For Aristotle, what something is made of is organised by its form, and that form, in turn, is critical for the powers it can exercise.

The Nobel prize-winning brain researcher Gerald Edelman, in characterising the system of the brain, claimed that although the brain is the most complex “object” in the universe, its form is defined by a certain organisation of the chemical elements carbon, hydrogen, oxygen, nitrogen, sulphur, phosphorus and a few trace metals. The form or “principle” (arche) of this kind of organisation is fundamentally chemical, and although our nervous system is important in both sensing and acting, it requires an organic environment with a blood supply providing the chemistry necessary for electrical impulses to reach their destinations. Witness the catastrophic effect of a blood clot in the brain.

Aristotle claims that animal forms of life maintain themselves and grow through the organic process of nutrition, which differs from the form of psuche found in plant life because animals have a limb-configuration system and organ-system requiring a more complex form of nutrition, one that can sustain a complex sensory-motor system. Edelman in fact claims that brain researchers would not be able to conduct meaningful research unless they used something like Freudian theory as a framework for such research. Exactly what Edelman means here is not clear, but we do know that in his unpublished “Project for a Scientific Psychology” Freud categorised neurones into three kinds:

a phi system, which can fire and produce experiences that are not remembered because they are not chemically transformed in any way;

a psi system, which fires and produces experiences that can be remembered because the neurones involved are chemically transformed;

and an omega system of “perceptual neurones”, which transform a fundamentally quantitative system into a qualitative one and are linked both to feelings of pleasure/pain and to forms of consciousness that can sustain images with a perceptual quality related to the psychological processes/powers of wish and anxiety.

The formation of Freudian secondary processes is founded upon what Freud calls “primary process function” images (cathected with a desire for wish-fulfilment), and these secondary processes are subject to what Freud calls “reality-testing” (a primary function of consciousness). Language is then factored into this account via the verbal image, which adds another dimension of reality into the equation, namely thought-reality, which Freud claims is the highest and most secure form of cognitive process. Paul Ricoeur, in his excellent work “Freud and Philosophy: An Essay on Interpretation”, comments on the psychological secondary-process agencies of the ego and the superego, which, he argues:

“learns not to cathect motor images or the ideas of desired objects” (trans. D. Savage, New Haven: Yale University Press, 1970, p. 79).

Ricoeur also points out that:

“the psychical apparatus of “The Interpretation of Dreams” functions without any anatomical reference: it is a psychical apparatus.” (ibid., p. 87)

This comes in a chapter in which the language of meaning is contrasted with the language of force (power?), the latter being described in Freud’s account of the energetics of the psychical apparatus. For Freud, the secondary process gets its power partly from the primary process and partly from its remembered and temporally structured interactions with the external world. Secondary processes also aim to replace the primary system and its hallucinatory wishes with a reality- and thought-based system, which includes inhibiting the discharge of energy into activities based on the primary process.

This distinction between a biological energetics system (functioning in accordance with the energy-regulation and pleasure-pain principles) and a psychological system (functioning in accordance with the pleasure-pain and reality principles integrated into a unity) cannot be applied to mechanical systems, simply because, whilst computers might possess sensors, they do not possess biologically constituted sensory systems, and whilst they can be said to do things, they cannot be said to act in the way that humans do. In other words, where a computer gets its power and where a living system gets its power are two fundamentally different kinds of source. This naturally also affects what computers do with their power and what humans do with theirs, although a computer can admittedly be designed to imitate human power.

In “The Interpretation of Dreams” Freud characterises consciousness as a sense-organ whose telos or purpose is the perception of psychic qualities. Consciousness is, of course, oriented toward the external world, but it is also oriented toward pre-conscious thought processes. Consciousness, in other words, is hyper-cathected, and this hyper-cathexis fundamentally transforms instinct and the energy involved into something qualitatively meaningful and capable of meaningful communication. On these premises a machine can never be conscious, because consciousness is a complex function that categorically belongs to forms of life with a sufficiently complex limb and organ system.

Freud’s descriptions and explanations are in accordance with hylomorphic principles. He is often described as a deterministic psychologist, and whilst he does focus on biological and psychological principles, these are not conceived solely in accordance with the kind of law of cause and effect that regulates mechanical systems: teleological, efficient and formal causes also play important roles, consistent with allowing consciousness the possibility of, for example, choosing a secondary-process activity instead of one based on an unrealistic, wish-cathected primary-process activity that is negated in a cognitive thought process.

Weizenbaum claims that machines may be transducers and transmitters of power, and whilst computers are machines (and this description is therefore true of them), computers are also transmitters of information. He then proceeds to discuss computer games and how they are constructed in a computer “language” which, he argues, is constituted differently from our natural languages, which, it is also argued, suffer from ambiguity of meaning. A machine instruction cannot, of course, be ambiguous: an ambiguous program quite simply would not work. A computer language, it is argued, cannot use what Ricoeur calls “symbolic language”, which is defined as having a “double meaning”, i.e. a manifest meaning that refers to an underlying latent meaning. This certainly mirrors the account above of the relation of psychological to biological levels of psuche.

Weizenbaum apologises for not discussing the idea of “meaning” in his account of the formal, unambiguous language that lies behind the operation of the Turing machine, which uses a program to perform its function, e.g. of transmitting information.

ChatGPT defines information as necessarily connected to meaning initially, but then pivots to the following:

“The concept of information is closely related to data, which refers to raw and unprocessed facts or symbols. Data becomes information when it is interpreted, organized, or contextualized in a way that it becomes meaningful and useful. The processing and interpretation of data involve extracting patterns, analyzing relationships, and applying knowledge or understanding to derive insights or make decisions.”
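The distinction drawn in this quotation can be given a concrete, if minimal, illustration. The following sketch is our own hypothetical example (the readings and the summarise function are invented for the purpose, not anyone’s actual API): the same raw symbols only become “information” once units, context and a pattern have been attached to them.

```python
# "Raw, unprocessed facts or symbols": numbers with no context attached.
readings_celsius = [18.2, 18.9, 19.4, 21.0, 23.7, 26.1]

def summarise(readings):
    """Interpret the raw readings: attach units, context, and a pattern."""
    rising = all(b >= a for a, b in zip(readings, readings[1:]))
    return {
        "unit": "degrees Celsius",
        "mean": sum(readings) / len(readings),
        "trend": "rising" if rising else "mixed",
    }

# The same facts, organised and contextualised, become "information".
print(summarise(readings_celsius))
# {'unit': 'degrees Celsius', 'mean': 21.21..., 'trend': 'rising'}
```

Whether this mechanical organising of data amounts to the interpretation or understanding that the quotation invokes is, of course, exactly the question at issue.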

The OED defines information in the following way:

“Facts or knowledge provided or learned as a result of research or study.”

We referred earlier to ChatGPT’s claim that it has been taught, or has learned, certain things, but the question left hanging in the air is whether what goes on when ChatGPT uses its programs to acquire and organise data can be called “research” or “study”. When we perceive something consciously, patterns of recognition are to some extent involved, according to Gestalt psychology, and wholes are perceived which are more than the sum of their parts. We go, as Bruner claimed, “beyond the information given”, and this is how knowledge or understanding organises the pattern or data. The data, that is, can be perceptual data, but this data can then be organised both conceptually and by principles. What we see here are two different levels of meaning that are related to each other as matter is to form.

Weizenbaum categorically states that:

“A formal language is a game” (p. 49)

For Wittgenstein’s later position, language was not a game but an activity necessarily related to discourse (spoken language). Speakers follow grammatical rules, Wittgenstein argued, and an analogy with a move in chess was used to illuminate a move in a language-game. Weizenbaum’s account of the task of explaining a particular configuration of the chess board in terms of particular historical moves unfortunately eliminates the conceptual component of this activity. Conceptual thinking requires not just the fact that a particular pawn was moved closer to a King but the reason why the move was made, e.g. a general principle of the kind “whenever one has the opportunity to limit the movement of the King, one shall take that opportunity”. This is a general principle for use on more than one occasion. Another general principle of chess might be “control the centre of the board”, and this too is a general conceptual principle that chess players learn as part of their training in chess strategy. Here we can see the clear difference between a rule which is also a conceptual truth, e.g. “the bishop can only move diagonally”, and a strategic principle which goes far beyond the information given (of a particular configuration of pieces on a board); the sketch below illustrates this difference.
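As a minimal sketch of this distinction (the coordinates and function names below are hypothetical illustrations of our own, not drawn from Weizenbaum or from any chess library), a constitutive rule can be checked mechanically, whilst a strategic principle merely supplies a reason for preferring one already-legal move to another:

```python
def is_legal_bishop_move(src, dst):
    """A constitutive rule, a conceptual truth about what a bishop IS:
    the bishop can only move diagonally. Coordinates are (file, rank), 0-7."""
    dx, dy = abs(src[0] - dst[0]), abs(src[1] - dst[1])
    return dx == dy and dx != 0

def centre_control_score(dst):
    """A strategic principle: 'control the centre of the board'.
    Not a legality condition at all, but a ground for choosing among
    legal moves."""
    centre = {(3, 3), (3, 4), (4, 3), (4, 4)}
    return 1 if dst in centre else 0

move = ((2, 0), (4, 2))               # c1 -> e3, a diagonal move
print(is_legal_bishop_move(*move))    # True: the move conforms to the rule
print(centre_control_score(move[1]))  # 0: legal, yet strategically indifferent
```

The first function settles what counts as a bishop move at all; the second presupposes legality and expresses a general reason of the kind a player learns in strategic training.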

For Weizenbaum, whether the configuration of the board was composed of “legal” moves appears to be the primary problem. Language requires an alphabet, the author argues, and he creates a game with three symbols and a set of formation rules which can be used by a Turing machine. These formation rules are then related to the computer’s behavioural rules. The presence of “calculation” is very important for the machine and its program. For Wittgenstein, by contrast, calculating is one possible language-game amongst others, with no special status, and he urged that language-games should not be confused with, or reduced to, each other.
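To give a feel for the kind of construction Weizenbaum describes, here is a minimal sketch assuming a three-symbol alphabet and a formation rule of our own invention (not the symbols or rules in Weizenbaum’s text). The point it illustrates is his: well-formedness can be decided purely mechanically, with no appeal to meaning.

```python
ALPHABET = {"a", "b", "c"}

def is_well_formed(s):
    """Formation rule (our hypothetical example): a string is well formed
    iff it consists of n 'a's, then exactly one 'b', then exactly n 'c's
    (e.g. 'b', 'abc', 'aabcc'). Checking this requires no understanding
    of meaning, only of shape."""
    if set(s) - ALPHABET:
        return False                  # uses a symbol outside the alphabet
    if s.count("b") != 1:
        return False
    left, right = s.split("b")
    return set(left) <= {"a"} and set(right) <= {"c"} and len(left) == len(right)

for s in ["b", "abc", "aabcc", "abcc", "cab"]:
    print(s, is_well_formed(s))
# b True / abc True / aabcc True / abcc False / cab False
```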

There are, as Aristotle observed, “many meanings of Being”, and each one is capable of “perspicuous representation”, to use a Wittgensteinian term. Each one will be explicable in terms of concepts and principles that justify/explain the cognitive activity in question. Realising this means that speaking is not a form of calculating, nor is it related to calculation, which is a very specific form of organising data: a form that has very little to do with substantial and qualitative judgments. These “forms” of substantial and qualitative judgement have, we wish to argue, everything to do with the categories of understanding and the logical principles of Judgement (Kant).

We have witnessed the growth of the computer’s power in a way in which we have not directly witnessed the growth of the power of the human brain. We need a number of sciences and a number of theories (including psychological theories) to chart the “meaning” of the growth and articulation of our brains, e.g. the absence or presence of developed frontal lobes. The development of the power and function of computers has, on the other hand, been historically observed, and we do not need an understanding of psuche (biology and psychology) in order to understand what is occurring in this mechanical process of change. Observation and experiment are, on the other hand, needed to understand the functions of the various parts of the brain.

Hughlings Jackson was a researcher whom Freud admired and whose theories he embraced: theories about the higher and lower systems of the brain, beginning with the upper part of the brain stem, where energy and power are generated; continuing with the middle part of the brain, where emotions, needs and wishes to achieve certain goals are controlled; and ending with a third level that includes the cortex, which is responsible for the processing of sensory impressions, the control of the muscles, memory and thought (Stellan Sjödin, Hjärnan, Jönköping: Brain Books, 1995).

For Jackson the left hemisphere was the dominant sphere, regulating language and the will-function. The rear end of the cortex receives “information” from the external environment and processes it. The frontal end of the brain is responsible for processing alternative courses of action, solving problems, giving orders and planning future courses of action. All these “parts” of the brain are related in various ways to the diverse powers that human beings possess. It is, however, primarily the person that is the bearer of these powers, and not a part of the person such as the brain. The powers require not just the parts of the brain but an organ system and a certain limb configuration to be actualised. Not many of these “brain functions” or “psychological powers” can be ascribed to computers, or indeed to brains, except perhaps metaphorically, so it is a fundamental error to claim, for example, that a brain can think, speak, understand, see, feel, plan, solve problems or give orders: only a person can do these things, according to P. M. S. Hacker (Human Nature: The Categorical Framework). To insist otherwise is to commit what he calls a mereological fallacy, which attributes to a part of a thing what is only true of the whole. Similarly, Wittgenstein would, in his later work, claim that it is only of a person that we can say the above things. Attributing these qualities to a mechanical device is to commit an as yet unnamed fallacy, one which fails to recognise the fundamental difference between a living process/function and a mechanical process/function. The failure to recognise human agency as part of the categorical framework lies behind both fallacies. One power that appears to be common to both machine and man, however, is the power of calculation. There does not seem to be a problem with saying that my computer is doing a mathematical calculation, or should we even here insist that the term is metaphorical?

The history of the development of the computer is the history of the inventors of the computer. Attributing agency, as we do to animals struggling in nature to survive and to pass on genetic material to coming generations, may be as much a measure of their “intelligence” as a measure of the random selection of their genes. The meeting of physiological and safety needs is not necessary for computers, and they do not “reproduce” without the intervention of human agency. Indeed, the very concept of “need” may be irrelevant to machines: when we say they need oil, lubrication or programming, we must therefore be speaking metaphorically. Wants are related to needs. Can we meaningfully say that a computer wants programming? Can a computer “use” its powers in the way a human or animal agent can (more or less intelligently)?
