Philosophy and AI: Review of Weizenbaum's “Computer Power and Human Reason” Part 5: Kant, Science, and the fallacy of anthropomorphization

[Image: Apollo Data Tape by NASA Goddard Photo and Video, licensed under CC-BY 2.0]

The author wishes to rely on a truncated definition of “information” that is disconnected from truth and knowledge and resembles something that relates more to human intuition than to the human conceptualisation of intuitions:

“Thus, however informal a notion of what information is we may appeal to, we must agree that the symbols we mean to discuss here are capable of carrying information.”(P.74).

ChatGPT defines a symbol in a way which is difficult to reconcile with Weizenbaum's use of the term:

“A symbol is a representation of an idea, concept, or object that carries meaning beyond its literal form. It is a visual, verbal, or abstract representation that stands for something else, often with cultural or contextual significance. Symbols are used in various forms of communication, such as language, art, literature, religion, and culture. Symbols can take many forms, including words, images, gestures, sounds, or objects. For example, a red traffic light is a symbol that represents the instruction to stop, even though it is simply a colored light. Similarly, a national flag represents a country and its values, serving as a symbol of national identity. Symbols can be conventional, where they are widely recognized and agreed upon within a particular culture or community. For instance, a heart shape is commonly understood to symbolize love or affection. Symbols can also be personal or subjective, where they hold specific meaning to an individual or a smaller group. One of the defining characteristics of symbols is that they are arbitrary, meaning there is no inherent or logical connection between the symbol and what it represents. The association between a symbol and its meaning is established through cultural, social, or contextual conventions. These conventions may change over time, leading to variations in the interpretation and significance of symbols across different cultures or periods. Symbols play a crucial role in human communication by condensing complex ideas or concepts into a concise and recognizable form. They provide a powerful means of conveying meaning, evoking emotions, and transmitting cultural values.”

This use of the term “symbol” is, as a matter of fact, closer to Ricoeur's use, and is also closer to the world of human experience and action than the quantitative idea proposed by the author, namely that of a symbol that carries information about the state of a machine. Ricoeur, of course, is concerned with the great cosmic, poetic and religious symbols such as the sun, love, and evil, which are fundamentally related to Being and the human form of life.

Weizenbaum has throughout this work referred to Turing machines but has not taken up the matter of the so-called Turing Test, which states that when we can no longer tell the difference between the responses a computer gives to a stimulus and the responses a human gives, then we will be forced to agree that the computer is capable of thinking like a human mind and can therefore be said to have a mind. This is the so-called computer theory of thought, and John Searle has provided us with a decisive philosophical argument refuting this claim. Searle urges us to construct a thought experiment in which a human behaves exactly as a computer does in relation to a task such as translating a Chinese sentence into English. The human is given a set of instruction manuals that simulate the information a computer has and manipulates in this task. Let us imagine the human uses these manuals and correctly translates a Chinese sentence into an English sentence. Here the responses of the human and the machine are identical, but we are not entitled to say, Searle argues, that the human translator understands Chinese. He is merely doing as the computer does, namely, following instructions. Understanding is an important power of thought. This argument can be used in modified form with respect to speaking, reasoning, remembering and a whole repertoire of human mental powers.
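The point of Searle's argument can be made vivid with a small sketch of pure rule-following. The code below is not from Weizenbaum or Searle; the rule book and its entries are invented for illustration. It produces correct translations in the cases its rules happen to cover while consulting no meanings at all:

```python
# A minimal sketch of Searle-style rule-following (illustrative only).
# The "rule book" plays the role of the instruction manuals in the
# thought experiment: the program maps Chinese sentences to English
# ones by lookup, without consulting any meanings.

RULE_BOOK = {
    "你好": "Hello",
    "谢谢你": "Thank you",
    "今天下雨": "It is raining today",
}

def follow_rules(chinese_sentence: str) -> str:
    """Return whatever the rule book prescribes for the input string.

    No understanding is involved: the function knows neither what the
    input means nor what its output means; it only matches shapes.
    """
    return RULE_BOOK.get(chinese_sentence, "<no rule covers this sentence>")

if __name__ == "__main__":
    print(follow_rules("你好"))      # "Hello" -- correct, yet uncomprehending
    print(follow_rules("今天下雨"))  # "It is raining today"
```

Correct output, on this picture, is no evidence of understanding, which is precisely the inference Weizenbaum also warns against in the passage quoted next.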

Weizenbaum, to some extent, acknowledges the force of these arguments when he claims:

“A computer's successful performance is often taken as evidence that it or its programmer understand a theory of its performance. Such an inference is unnecessary and, more often than not, is quite mistaken.” (P.110)

When, however, it comes to imagining particular events and scenarios, such as is involved in the design and creation of computer games, we are in the world, as Kant would put the matter, of sensibility and intuition, and the conceptually based law of cause and effect largely determines what is going on in the creation of the game. If the game involves shooting and killing there will also be an instinctive component relating to the vicarious experience which the game represents for the player. What are the consequences for the programmer of living in this world of the imagination, particulars and vicarious experiences? Weizenbaum claims the following:

“Wherever computer centres have become established, that is to say, in countless places in the US, as well as in virtually all other industrial regions of the world, bright young men of dishevelled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed, waiting to fire their fingers, already poised to strike at the buttons and keys on which their attention seems to be as riveted as a gambler's on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible they sleep on cots near the computer. But only for a few hours-then back to the console or the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move. They exist, at least when so engaged, only through and for the computers. These are computer bums, compulsive programmers. They are an international phenomenon.”

This could be an anthropological study of a generation of the “new men” who have abandoned the form of life of previous generations in favour of the vicarious “form of life” described above. The description is presumably the result of observations over a long period of time. The author uses the term “compulsive” in relation to the people featured in the above account, and this is an insightful diagnosis given the usual association of obsessive compulsiveness with aggression. Otherwise this could also be a scene from one of the rings of Dante's hell. Weizenbaum uses the word “hacking” to describe the “work” of these obsessed compulsive programmers, and points out that the term “hacker” means one who cuts irregularly, without skill or purpose. Yet, paradoxically, the author wishes to insist that these “hackers” are superb technicians who wish to master their machines. The author continues by comparing the pathological profile of the programmer he has provided with that of the compulsive gambler who uses the knowledge of statistics and “psychology”(?) to engage in his activities. The compulsive gambler leads a more organised form of life than the hacker, it is argued, because for the hacker the game (the analogue of being at the gambling table) is everything and winning or losing the game is not that important. The compulsive programmer, the author argues, is the mad scientist who has been provided with a theatre, his computer, in which he orchestrates his fantasies.

Weizenbaum, in the chapter entitled “Science and the Compulsive Programmer”, proceeds to outline a philosophical view of science which believes that it has a methodological right to distort the reality it observes and experiments upon, and furthermore to proclaim this distortion to be a “complete and exhaustive” explanation/justification of reality. Part of this picture is seeing an equivalence between animal and human behaviour, with the only difference between them being accounted for by the complexity of the environments they live in. What the author calls the inner life of man has disappeared in such stimulus-response scenarios, and there is nothing in the behaviour of the scientist to suggest that he might have missed something of importance. The author then suggests that we view man as an “information processor” as part of a theory of human nature which is defined in terms of:

“…any grammatically correct text that uses a set of terms somehow symbolically related to reality.”(P.141)

This is then amended to include laws and their systematic relation to each other. We use our theories, it is argued, to build models which ought to contain the most essential elements of what it is they are “modelling”. Models are then tested against reality, suggesting that the theories on which they were based were hypotheses and not laws regulating concepts and objects. The context being referred to here is a context of discovery in which it is reasonable to suppose that the premises are inductive hypotheses awaiting confirmation or falsification. Such a context must rely heavily on the perceptive powers of observation and the active powers of experimenting with the relationships between variables. Theories that belong to the context of explanation/justification, on the other hand, are used very differently: they are used, namely, to justify and explain how particulars are related in reality via concepts, principles and laws which serve as major premises in arguments leading to secure conclusions. The postulate of man being an information-processing creature, then, is not a principle by which we can judge much of his behaviour, but rather an attempt to illegitimately generalise one narrow aspect of his activity beyond the information given.

Memory is a cognitive function that enables us to “go beyond the information given”, but given that the basic elements of human memory are sensations and thought-elements that represent reality, these terms can only be metaphorically applied to the activity of machines. Characterising humans as information processors is clearly a thinly disguised attempt to place machines and humans in the same category, and thereby to give substance to the myriad of metaphorical terms we use to describe machine activity. The differences between being powered electrically and neurophysiologically are differences that relate to these two systems being different kinds of system with different kinds of activities. The author appears to defend his position on the grounds that we do not, as he claims, have a theory of how humans understand language, and until we do we cannot justify any claims that machines are fundamentally and essentially different entities from human beings.

Putting together the accounts of Plato, Aristotle, Kant, Freud and Wittgenstein and their followers would seem at the very least to approach what Wittgenstein characterised as a “perspicuous representation” of psuché (in particular the human form of life) as determined philosophically by the logical principles of identity, noncontradiction, sufficient reason and grammatical statements revelatory of the essence of what is being discussed. Whether or not calling such a perspicuous representation a “theory” is of importance depends, of course, upon whether one conceives of a theory as hypotheses related to a model in a context of exploration/discovery or, alternatively, whether it is better to conceive of a theory as a perspicuous representation in a context of explanation/justification. In the case of this latter context we are more concerned with questions relating to the right we have to use a particular statement or concept than with whether we can relate that concept or statement to some observed aspect of reality in an attempt to verify or confirm a hypothesis.

In the chapter entitled “Artificial Intelligence”, the author proposes the task of building a computer that can learn as a child does. The idea is that this robot, which is neither alive nor conscious, will not be able to perceive as we do, but will nevertheless be able to “learn” as we do. The designer of course will use the “model” of man as an “information-processor”, which is a hypothesis about the nature of man that ignores almost the entire thousands-of-years-old philosophical tradition of reflecting upon our nature and form of life. The claim is that we will thereby have a language-understanding machine: a highly questionable claim. The author is aware of the difficulties associated with making claims such as this and agrees that even if man is an information-processor he does not process information in the way in which machines do.

A red-herring discussion of intelligence quotient is then introduced, resulting in the position that we cannot calculate an upper limit for machine intelligence and, furthermore, that the artificial intelligentsia argue that there is no realm of human thought over which the computer cannot range. This ignores the arguments that Searle produced relating to the differences that exist between human and machine activity. There are periodic admissions of the limitations of machine intelligence in comprehending the kind of knowledge humans have of their bodies, but this is characterised merely in terms of “information lost”, which may not be important if one does not possess a human body.

Knowledge of the lessons that are learned via the treatment of human beings by other human beings is also not possible for machine learning. Language is obviously involved in such interaction, and the functionality of human language differs fundamentally from the functionality of machine language. In the latter case remembered information concerns “stored” information, which can only be metaphorically referred to as “memory”. But the discussion spirals out of control when it is maintained that because of the complexity of the computer it is possible to talk about it as an “organism”. Now, returning to Edelman's discussion of the brain, there are many very complex formations of the elements of carbon, hydrogen, oxygen, nitrogen, sulphur, phosphorus and a few trace metals, but the way in which a complex object such as a computer is constituted of its elements is very different from the way in which an object such as a brain is constituted. It would of course be absurd to say that, merely because of the complexity of each of these systems, we can call a computer a brain or a brain a computer. The fundamental reason may in the end be that the constitution of organic tissues and structures obeys very different principles, and this in itself is sufficient reason to refuse to call a computer an organism.

When the programmer reconstructs the world imaginatively in the creation of his game, he is not working conceptually with the understanding at every juncture but only at those points where he “imagines” one particular cause to give rise to another very particular effect. The principle/law of cause and effect is being used here, but otherwise he is assembling a configuration of particular events which are simulations of perception. The author then suggests that a computer can learn to protect its parts before protecting other parts of the world with which it is associated, and it is further argued that this might amount to some form of self-consciousness. This, of course, is absurd, principally because a computer cannot possess the life and death instincts which are essential elements of the living organism, however we program it to react to threats. The chemistry and biology of fear cannot be simulated by electrical circuits. The author reiterates that he is prepared to think of the complex computer as a “kind of animal”, which is clearly a category mistake involving the fallacy of anthropomorphising non-living parts of the world. A computer is not born and does not die, and this is part of the reason why we do not consider it to possess life. It cannot breathe or cry or laugh or do any of the myriad things that constitute the human form of life. We do not register its birth or its death in archives, and computers do not get married and reproduce. The list of differences just goes on and on. One of the motivations for these absurd discussions is the author's claim that we can never have any final understanding of any theoretical term. Now “life” is a theoretical term which we all understood until a group of “new men” came along and claimed that we do not understand life, and that because of this we might as well say that a machine is alive. Neither of these claims is true. Metaphor is essentially a relation between something we do understand and something we are searching for an explanation of (a linguistic form operating in a context of exploration/discovery). Logically there has to be something that we fully understand before we can claim that something else is like this thing. “Man is a wolf” is a metaphor that means to focus on a likeness between animal species. Here there is a fundamental truth expressed in Aristotle's “Man is a rational animal capable of discourse”, and this definition focuses on three essential elements of human nature which are related non-metaphorically. The knowledge of this essence-specifying definition is presupposed in the above metaphorical assertion. There is, on the contrary, no basis for the assertion that man is machine-like unless one commits the fallacy of anthropomorphisation.

The author then claims that information is “stored” in the muscles and joints of the human being. One question that can, and should, be asked is whether this information is electrical, chemical or sensation-like. This claim is then associated with a further claim that a computer can, in principle, simulate “the entire network of cells that constitutes the human body”. This qualification, “in principle”, is then related to the assertion that we do not possess the neurophysiological knowledge to design such a computer and will not do so for hundreds of years. The fact of the matter is that we do possess enough philosophical knowledge to know that such an impossibility is not a scientific problem but rather a philosophical problem, one that is resolved by invoking the fallacy of anthropomorphisation. In other words this “possibility, in principle” is in fact not conceptually possible. The counterargument against this position is attributed to the artificial intelligentsia, who assert that the difference between human and computer thought is “unproven”. One could only accept such a position if one believed that the principles of noncontradiction and sufficient reason are not “proof”. This of course is the position of the “new men”.

The author, in this chapter entitled “Artificial Intelligence”, engages in a discussion of the intuitive nature of the right hemisphere of the brain and the conceptual/logical nature of the left hemisphere. The author does not recognise the historical footprint of the Philosopher Kant, who sees intuition as that with which we are in immediate contact, and conceptual understanding as something mediated by the concepts of the understanding/judgement. Intuitions without concepts are blind and concepts without intuitions are empty, Kant claimed, on the basis of very little knowledge of the brain but in accordance with hylomorphic principles. The anti-rationalism of the artificial intelligentsia has been evident in several chapters and is again confirmed here when it is asserted that the artificial intelligentsia believe that

“every attempt to solve life's problems by entirely rational means always fails.” (P.221)

A false choice of contrary alternatives is presented as evidence for the above, namely that the left hemisphere can operate alone, independently of experience. Without any knowledge of the structures and functions of the brain, philosophers since Socrates have urged that we transcend unnecessary appetites and emotions by examining them conceptually and rationally in the light of their place in our conception of what we believe a life ought to be like (areté, diké, arché, eudaimonia). Weizenbaum rejects the above account, not via philosophical recourse to a rational world-view, but rather by an appeal to calculating reason which somehow mysteriously acknowledges the awe we feel in the presence of the “spectacle of the whole man” (P.221). Such a spectacle would, of course, need to be conceptually mediated and explained/justified by means of rational principles and grammatical remarks.

A discussion of Heisenberg's uncertainty principle ensues, and we then witness a frontal attack on the Philosopher Leibniz and his claim that if we knew the position and velocity of every elementary particle in the universe we would be able to predict the entire future of the universe. Heisenberg, according to Weizenbaum, proved that we can never know the velocity and position of every particle, because of the micro-size of the instruments needed, which would themselves be subject to the random Brownian motion explained by Einstein. This is a dispute conducted among those who concentrate their theories on the quantitative aspects of nature with calculating reason, and it is not clear how this kind of reasoning has any relevance to the conditions of the possibility of other types of judgement, such as the substantial and qualitative judgements which would be used, for example, to characterise the essence-specifying definition of man as a rational animal capable of discourse. Wittgenstein, in turn, would object to the generalisation of the language-games being used in calculative reasoning beyond the scope of their proper application. The follower of Kant would acknowledge that the prediction of the particular physical states of the universe in the future is an uncertain venture if these states are to be decided on the grounds of microcosmic elements, and we ought to recall in this context that Kant was a formidable scientific presence during the Enlightenment.
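For reference, the relation at issue can be stated compactly. This is the standard textbook form of Heisenberg's principle, not a formula quoted by Weizenbaum:

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}$$

The more precisely a particle's position is fixed, the less precisely its momentum can be known, and vice versa, which is what blocks the Leibnizian programme of predicting the entire future of the universe from complete knowledge of positions and velocities.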

For Kant the quantitative, qualitative, and substantial aspects of scientific activity were seamlessly integrated in his metaphysical account of Natural Science. We encounter this “perspicuous representation” in an essay entitled “The Unity of Kant's Thought in his Philosophy of Corporeal Nature”. The essay begins with an account of what Kant called the transcendental unity of apperception which, it is claimed, is the same as consciousness, an active state of mind intimately connected with thought in the form of “I think”. This is a very different state of mind from that of sensibility, which is a passive form of experience that essentially merely “receives” intuitions from various sources. This act of apperception has the function of taking up a manifold of intuitive representations:

“synthesizing the manifold of sensible intuition is exactly what is meant by saying that apperception is an act of spontaneity. For the moment, let us say that such synthesizing activity of the mind means that unity can be bestowed upon a manifold of perceptions by the mind's going through that manifold, taking it up, and connecting it according to a concept which serves as a rule. For example the concept of cause and effect can serve as a rule for synthesizing a manifold, e.g. the perceptions involved in observing a stove heating a room.” (Metaphysical Foundations of Natural Science, translated by Ellington, J., Hackett Publishing, Indiana, 1985)

The complex relation of the sensible part of the mind to the conceptually mediated understanding which is responsible for thought is outlined here. The imagination is involved in this process of connecting the sensible representations to the conceptual representation of an object. This is part of an account that explains or justifies the role of knowledge in our lives, a role that cannot be reduced to calculation or the activity of the imagination. A computer has no biologically based chemical sensory system of the kind that lies at the foundation of all our experience. Programmers might attempt to simulate the consequences of such a system, but such a simulation could never become aware of itself in the form of self-consciousness that only higher forms of life possess. The embodiment of humans, with a system of organs connected to a configuration of limbs, is the hylomorphic philosophical foundation for the essence-specifying definition of man as a rational animal capable of discourse.

Kant sees the categories of the understanding as judgement-functions which both constitute and regulate thought, yet are necessarily related in various ways, not just to the sensations that are part of sensible intuition, but also to the a priori forms of intuition, namely space and time. The computer may be a part of the space-time continuum, but it is neither aware of the space it is in nor aware of the passage of time (the present, the past, and the future). This awareness of space and time may well be achieved principally through measurement and is therefore constitutive of the quantitative judgements we make, judgements which are intimately related to mathematics. Every judgement might be made on the foundations of our intuitive awareness of space and time, but both substantial judgements (essence-specifying judgements) and qualitative judgements are conceptually mediated. Even quantitative judgements, if they are going to become part of the canon of knowledge, may need to relate to the concept of cause and effect, and knowledge claims must be conceptually mediated and related to principles of reason.

Kant has the following account of the different levels of the activity of science, which Ellington refers to as the architectonic structure of the Kantian account:

“When a rock is thrown in a direction parallel to the ground, we know by experience that its path is a curvilinear line ending on the ground some yards away: how many yards away depends on how strong the pitcher is. The exact nature of the curvilinear path depends on the mass of the rock, the velocity it attains by means of the force the pitcher imparts when he throws it, the resistance of the air through which it passes and the pull of gravity upon it. When these things are known, we can plot the exact path by laws of physics, which are generalisations from many experiments. But we are also told that if the air exerted no resistance and if gravity exerted no pull, then the rock would keep on going forever in a straight line…This is Newton's first law of motion…Thus Newton's law seems to be of a character different from that of the aforementioned laws of determining the paths of projectiles. Furthermore there are Philosophers who tell us that every change has a cause. This law is even more general than Newton's first law of motion, for this one covers not only the case of material bodies that stay put or else keep going in a straight line with uniform velocity unless some external cause acts on them, but also the case of living things that act according to an internal cause… (a lion rushes after an antelope not because a big puff of wind propels him but because he has a desire to eat.)” (P.xi)
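As a concrete illustration of the first, empirical level of this architectonic (the formula below is not given by Ellington or Weizenbaum, and it neglects air resistance), the path of the thrown rock is the familiar parabola of elementary mechanics:

$$x(t) = v_0 t \cos\theta, \qquad y(t) = v_0 t \sin\theta - \tfrac{1}{2} g t^{2},$$

where $v_0$ is the speed the pitcher imparts, $\theta$ the angle of the throw, and $g$ the acceleration due to gravity. Newton's first law, by contrast, licenses the counterfactual in the passage: with no air resistance and no gravitational pull ($g = 0$), the equations reduce to uniform motion in a straight line.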

Now Weizenbaum has pointed out, in his description of the life compulsive programmers or hackers lead, that they prefer food to be brought to them, and the desire for food seems to be overwhelmed by their compulsive activity, making them more like the computers they use than they perhaps imagine. What we see above is an architectonic of activity that is constituted and regulated by laws (arché) ranging from the experiential to the transcendental to the metaphysical. Here we can clearly see how seamlessly the world of thought is connected to the world of sensibility, and that the most important aspect of this process is organised by the categories of the understanding/judgement and the principles of reason embedded in a context of explanation/justification. Quantitative judgements play their role, as does mathematical calculative reasoning, but there is no confusion or attempt to reduce different forms of judgement to one quantitative form.

Gödel's incompleteness theorem is then used to call into question even the major premises of mathematical and logical thinking, on the grounds that they cannot be proved, thus confusing the logical difference between grounds, which are conditions, and what these conditions are conditions of. One ground or major premise often contains assumptions relating to other “hidden” premises, or is related to other grounds in ways which one may fail to appreciate.

The author notes that cultures differ from each other, but fails to note that the kinds of civilisation-building activities that build the infrastructure of such civilisations/cultures are generic, e.g. the use of tools to build and make artifacts and the use of language (and the grammar of that language). The Greek norms of areté (doing the right thing in the right way at the right time) and diké (getting what one deserves) are also essential parts of enduring cultures ruled by law and principles. Concentrating on the empirical-experiential differences one can find when comparing civilisations and cultures is not a ground for impeaching the validity of essence-defining activities. The way in which the Japanese relate to each other in certain social contexts does not change the fact that when they lose something of great value to them they will be sad (or pathologically angry), and when they achieve a goal after a long period of attempting to achieve it they will be happy (or manifest a limited repertoire of pathological responses). There may be cultural differences in expressing these emotions or pathologies, but neither emotions nor pathologies can be reduced to the behaviour expressing them: their circumstances and accompanying physiology are just as important factors as is the grammar of the language related to these emotions and pathologies (the way in which they are related to language-games).
