Machine learning vs human learning according to ChatGPT

[Image: code projected over a woman. Photo by ThisIsEngineering on Pexels.com]

I asked ChatGPT how machine learning differs from human learning and received the following excellent reply:

“Human learning and machine learning differ in several ways, including the way they acquire and process information, their ability to generalize, and their flexibility in adapting to new situations. Here are some of the main differences:

  1. Acquisition of information: Humans acquire information through experience, observation, and communication, while machines acquire information through data and algorithms. Humans can learn from a wide variety of sources, including personal experience, language, and culture, while machines are limited to the data they are given.
  2. Processing of information: Humans have a highly developed cognitive system that allows them to process information in a complex and nuanced way. Humans can reason, infer, and apply their knowledge to new situations. Machines, on the other hand, process information in a more rigid and structured way, relying on pre-defined algorithms to make decisions.
  3. Generalization: Humans can generalize from their past experiences and apply their knowledge to new situations, even if they have never encountered them before. Machines, however, are limited to the data they have been trained on and may struggle to generalize to new situations.
  4. Adaptability: Humans are highly adaptable and can quickly adjust their behavior to changing circumstances. Machines, however, are often limited to the specific task they have been trained for and may struggle to adapt to new situations.
  5. Creativity: Humans are capable of creativity and can generate novel ideas and solutions to problems. Machines, on the other hand, are limited to the algorithms and data they have been given and may struggle to come up with truly innovative solutions.

In summary, while human learning and machine learning share some similarities, such as the ability to acquire knowledge and improve over time, they differ in the way they acquire and process information, their ability to generalize and adapt, and their capacity for creativity.”
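ChatGPT’s third point, about generalisation, is easy to make concrete. The sketch below is my own illustration (not anything the bot produced), using numpy: a model fitted to data from one range can look competent inside that range and fail badly outside it, whereas a human who grasps the underlying concept generalises effortlessly.

```python
# Minimal sketch (my own, not from the bot) of the generalisation
# limit: a model fitted to one range of data extrapolates poorly.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, 200)         # training inputs in [0, 3]
y_train = np.sin(x_train)                    # the true relationship

# Fit a degree-5 polynomial: a stand-in for "learning from data".
coeffs = np.polyfit(x_train, y_train, deg=5)

# Inside the training range the fit is close to the truth...
print(np.polyval(coeffs, 1.5), np.sin(1.5))  # nearly equal

# ...but outside it the prediction diverges badly, while a human who
# grasps the concept "sine wave" has no trouble at x = 8.
print(np.polyval(coeffs, 8.0), np.sin(8.0))  # far apart
```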

All the initial ridiculous claims for the equality or superiority of artificial intelligence over human intelligence have now disappeared into a pool of nothingness. The key question this answer raises (especially in reference to the human ability to generalise) is whether the bot can think conceptually. So I asked whether it could and received the following message:

“An error occurred. If this issue persists please contact us through our help center at help.openai.com.”

I expected the answer “I cannot think, I am merely a programme located on several different servers”. Conceptualising, when it is directed at something in the external world, involves seeing something as something. So I asked whether the bot can see and received the following answer:

“As an AI language model, I don’t have physical eyes or visual perception like humans do. I exist purely as a program running on computer servers. However, I can “understand” text-based inputs and generate text-based outputs, so in that sense, I can “see” text.”

This might be one of the reasons why it cannot generalise and use generalisations creatively in the way we do. Note the metaphorical use of “see”: but does it understand what it is saying here, i.e. does it understand that it is speaking metaphorically?
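It is worth being concrete about what “seeing” text amounts to for a language model. The sketch below is my own illustration (the bot never names its tooling); it assumes the open-source tiktoken tokeniser that OpenAI publishes. What the model receives is a sequence of integer token IDs: a lookup between strings and numbers, with no perceptual content anywhere.

```python
# A minimal sketch of how a language model "sees" text: as integer
# token IDs rather than anything visual. Assumes the open-source
# tiktoken library (pip install tiktoken); this is an illustration,
# not the bot's own account of its inner workings.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a tokeniser used by recent OpenAI models

text = "Can you see?"
token_ids = enc.encode(text)   # a plain list of integers
print(token_ids)

# Decoding maps the integers back to text; at no point is there
# perceptual content, only a mapping between strings and numbers.
print(enc.decode(token_ids))   # "Can you see?"
```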

Seeing something as being like something else is a power we possess and use in metaphorical thinking and speech, and it appears that this is another power the bot does not possess. I asked whether the bot can think metaphorically and received the following response:

“As an AI language model, I can recognize and understand metaphors based on patterns and associations within the language data that I have been trained on. I can even generate metaphors on my own based on the patterns and associations I have learned. However, it is important to note that my understanding of metaphors is based purely on linguistic patterns and associations, rather than any underlying emotional or experiential understanding of what the metaphor represents.”

“Understanding” and recognising metaphors are two different powers. The former requires as a condition an understanding of what the metaphor represents, which the bot admits it cannot have. Julian Jaynes claims that metaphor is a key function of language; indeed, he believes it is constitutive of both language and consciousness, both of which have key connections to experience. The bot has, for example, no experience of the “I” it is talking about, but is “trained” to refer to itself as “I” and to respond to questions that refer to it as “you”. There are clearly limits, then, to whether some of what the bot says can be meant truthfully.

Kant claimed that concepts without intuitions are empty, and that concepts are generalisations which fall under the categories of judgement and understanding. If a bot cannot experience anything, because it does not possess a sensory system, then there are two reasons why it cannot “think” conceptually. It can indeed operate in accordance with patterns and associations which the programmer “represents” in binary code, but if conceptualisation is the most important root of knowledge, this raises the question of whether the bot is being truthful when it claims it “knows” things. Belief must be rooted in experience: an experience organised by the a priori intuitions of space and time, which are essential features of the “patterns and associations” the bot refers to.

Space and time are essential constituents of the conceptual category of causation, and this suggests that the bot must also have difficulty understanding causation. Indeed, we can imagine it saying, like Hume, “I see no causation, only a pattern of one event following another and associated in some way.” The category of causation is a principle, and knowledge is composed of principles, such as the principles of noncontradiction and sufficient reason, that determine our power of reasoning. The chatbot is certainly, in a sense, “capable of discourse”, but the question to raise is whether the fact that it cannot see, or see something as something, or understand categories and principles, entails that it cannot be the “rational artefact” everyone believes AI machines to be. We are clearly dealing with some kind of system, but we need to ask more urgently: “What kind of system?”
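The Humean point can be put in the bot’s own vocabulary of “patterns and associations”. The following is a small sketch of my own (nothing the bot produced): from sequence statistics alone one can recover that one event regularly follows another, but nothing in the numbers marks the regularity as causal.

```python
# Illustrative sketch (my own, not the bot's output): association
# statistics record that one event tends to follow another, but
# contain no marker distinguishing causation from mere regularity.
from collections import Counter

# A toy event stream: B reliably follows A, C occurs here and there.
events = ["A", "B", "C", "A", "B", "A", "B", "C", "A", "B"]

# Count how often each event is immediately followed by each other event.
follows = Counter(zip(events, events[1:]))

for (first, second), count in follows.items():
    print(f"{first} -> {second}: {count} times")

# The counts show "A -> B" as a strong regularity, yet nothing here
# says whether A causes B, B is scheduled after A, or both share a cause.
```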

If I ask the bot this question, I get another error message. Here we have reached the limits of its “understanding”. Clearly, the “productive” reasoning of the programmer cannot capture the categorical and principle-based knowledge humans possess.
