Artificial Minds

Introduction

Can there be artificial minds that aid our understanding of human minds? In his paper "Minds, Brains, and Programs," John R. Searle argues that there cannot be. Specifically, he develops a dichotomy of strong and weak AI. In this dichotomy, weak AI corresponds to the notion that computers are powerful tools but are not themselves minds. Strong AI, on the other hand, is the claim that a computer program is itself a mind: a mind that can literally have the same cognitive states as a human being. Studying such strong AI would be an affirmative answer to our question, so the question becomes: can strong AI exist? Searle argues that it cannot. Throughout this paper I will go through his argument for this proposition and walk through a thought experiment he provides as an example. After that I will present a rebuttal to Searle's arguments and finally work through some potential objections to my rebuttal.

Searle presents two propositions and three consequences that follow from them. I will refer to these arguments later by their corresponding numbers: the propositions as 1 and 2, and the consequences as 3, 4, and 5. Let us start with the propositions:

  1. Intentionality in human beings (and animals) is a product of causal features of the brain. (p. 417)
  2. Instantiating a computer program is never by itself a sufficient condition of intentionality. (p. 417)

The first is not given much justification: the fact that brains have causal features is argued to be empirical fact, and the consequence of these causal features is what Searle calls intentionality. Intentionality is essentially understanding, or more broadly the ability to understand and think deeply about something. An example would be the ability to take information from a story and synthesize new details about it, or even the creative capacity to continue the story. Essentially it is being able to take information and manipulate it within a given context to generate a new and often unique output. This intentionality is causally linked, as Searle notes, meaning it arises from a chain of mental states linked to one another. The second proposition is largely justified through the Chinese room thought experiment and is pivotal to Searle's position that strong AI cannot exist. This can be seen in the three consequences Searle presents that supposedly follow from the two propositions:

  3. The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. (p. 417)
  4. Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. (p. 417)
  5. Any attempt literally to create intentionality artificially could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. (p. 417)

Point 3 essentially states that merely starting a program cannot explain a program's having intentionality. Since intentionality is causally linked, the beginning of that causal chain cannot be the instantiation (that is, the starting) of a computer program. This is a direct consequence of both earlier propositions. Point 4 is relatively self-explanatory: the failure to have intentionality would preclude an AI from being considered a strong AI. It follows from proposition 1, combined with the fact that we are asking whether AI is useful for understanding humans, so it stands to reason that any candidate must at least match up with humans. Point 5 is tied to an idea Searle develops that you cannot judge by outcomes alone. That an AI can do something similar to what humans do is not good enough; it must have the same level of causal powers to be worth using in investigating human minds.

There are several conceptual problems with these consequences. In particular, point 3 creates a problem: if you accept it, it becomes impossible for causal links to develop into causal features of cognition. Any artificial brain able to duplicate the causal powers required as a prerequisite in point 5 would necessarily, at some point, have to be the consequence of instantiating a computer program. This is analogous to arguing that a human being's having a mind cannot be the consequence of being born. Even more complex programs are themselves either directly instantiated or have arisen out of the instantiation of a program. Points 2 and 5 will be the primary source of disagreement outlined later in the paper, but first we need to look at Searle's justification for point 2.

Explanation of Chinese Room Thought Experiment

To justify proposition 2, Searle presents a thought experiment. In it, a person goes into a room containing strange symbols and is given a set of rules for correlating those symbols with responses. The output provided is generated without any actual understanding of the symbols; it is simply a product of following rules. For Searle this is an example of instantiating a program that can produce even insightful outputs without any actual understanding, or intentionality, being present. Essentially, you can create the outward impression that the person in the room is considering, understanding, and responding to the symbols when in actuality no understanding is present. The concrete example he provides is a room where this happens with Chinese characters and a set of questions: the person provides the appropriate Chinese outputs without any understanding of the language (p. 418). The experiment shows that even a program providing the correct outputs (given a set of inputs) may not understand the processes that govern the interaction; it merely puts up the façade of understanding Chinese. This is important because it means that simply instantiating a program that provides accurate outputs is not necessarily sufficient for intentionality. Likewise, it means that evaluating intentionality requires investigating the process of the cognition that is happening, even more than the presence of a reasonable output.
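To make the rule-following point concrete, the room can be caricatured as a tiny program. The sketch below is my own illustration, not Searle's: the symbol pairs are invented stand-ins for his rule book, and the point is only that correct-looking responses come from pure lookup, with no representation of meaning anywhere in the code.

```python
# A toy caricature of the Chinese room: responses come from matching
# input symbols against a rule book. Nothing in the program parses,
# translates, or represents what the symbols mean. The rules are
# invented stand-ins for Searle's "set of rules for correlating symbols."
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiaoming"
}

def chinese_room(symbols: str) -> str:
    """Return the scripted response for a string of symbols, by lookup alone."""
    # Fallback rule for unrecognized input: "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # prints 我很好; no understanding is involved
```

From the outside, the room's answers look competent; inside, there is only the lookup.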

Rebuttal of Searle

The first argument I would pose against Searle's thinking was hinted at earlier. In proposition 2, Searle argues that instantiating a program is never by itself a sufficient condition for intentionality; yet later, in point 5 in conjunction with point 3, he argues that designing programs could not create intentionality either. The problem is that this begs the question: if a program cannot produce intentionality by being instantiated, as stated in point 3, and duplicating the causal powers of the human brain is a prerequisite, then how is it possible for a machine ever to do this? Searle's reasoning here is circular. At some point the machine would need to be instantiated, just as humans need to go through a maturation process to have the capacity to understand. If I were to say that intentionality cannot arise out of simply being born, the question becomes what causal chain you could possibly develop that does not begin with being born. At some point the causal chain of any program with the causal powers of the human brain would need to start with instantiation, the same way that a human capable of understanding has a causal chain that leads back to birth.

The second argument I would levy against Searle attacks proposition 2 and follows a similar form to my last argument. If the person in the Chinese room were eventually to actually learn Chinese, then although the instantiation of the program is not itself responsible, it shows that instantiating the program does not preclude understanding. What I mean is that the failure to understand what is happening in the Chinese room is not a matter of the person's capacity. They have the capacity to learn Chinese and come to understand, but in the current context, with their current knowledge, they are unable to. If you replaced the person in the room with someone who did understand Chinese, then even while following the steps laid out in the procedure they would understand the process and be capable of intentionality while performing it. This implies that if you were to instantiate a program that, like that person, has the capacity to understand Chinese, and had it perform the same task, that system would be considered a strong AI. This leniency extends even further: if the system did not have this understanding when initially instantiated but later developed the capacity to learn the language, it could eventually be considered to understand in the same sense Searle discusses.

There are in fact already several types of computer systems that have this capacity, and their existence alone is a challenge to the point Searle raises. One of the most advanced of these systems is called GPT-3. It is a general-purpose artificial intelligence system that can do the sort of tasks Searle describes, such as creating an entirely new story from a prompt and engaging in conversation that uses context from previous interactions. Not only that, but the system's behavior is dynamic: you can receive different answers to many prompts from one day to the next, and the model behind it is periodically updated and retrained. There is a reasonable argument to be had as to whether the system is on par with the causal powers of the human brain, but for the sake of argument let us say that it is. If that is the case, then these causal powers necessarily arose directly from the instantiation of a program, which directly contradicts point 3. In fact, it seems that for the argument to be cohesive the point should be that not all instantiated programs have the capacity for intentionality, but some that are the product of a causal chain starting with the instantiation of a program do have this capacity. The only way out of this would be to find some reason beyond the causal powers of the human brain to explain intentionality, since the brain is susceptible to this same argument once birth is equated with instantiation. This means that points 2 and 3 cannot be valid if point 1 is valid, and vice versa.
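As an illustration, here is a minimal sketch of prompting GPT-3 to continue a story, assuming the legacy pre-1.0 `openai` Python package and its since-revised completions API; the prompt and parameter values are mine, chosen for illustration. The nonzero sampling temperature is also part of why repeated runs can return different completions.

```python
# A minimal sketch of prompting GPT-3, assuming the legacy (pre-1.0)
# `openai` Python package; the API has since been revised.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 model available at the time
    prompt="Continue this story: A traveler found a locked door in the desert.",
    max_tokens=100,
    temperature=0.9,   # nonzero sampling temperature: repeated calls
)                      # can yield different continuations

print(response.choices[0].text)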

Potential Objections

There are several objections that could be levied at my arguments. The first would be to limit the length of the causal chains being analyzed. For example, you could say that a program's understanding is only causally linked to a set number of previous mental states, which would nullify my argument about chains leading back to instantiation and make Searle's superfluous. But it would also undercut Searle: if such a bounded causal chain of mental states is possible, then his belief that machines lack the capacity for understanding, grounded in proposition 2, is invalidated, and he would be forced to admit that machines are in fact a usable tool in understanding the human mind, since some sufficiently complex and competent ones would exhibit intentionality. Additionally, it could be argued that my scenario in which someone learns Chinese misses the point: Searle was originally giving an example of instantiating a program without intentionality, not proving that no program could be made to have intentionality. The problem with this is that Searle himself is prone to making arguments about the whole throughout his paper. His arguments as formulated admit counterexamples, and because of that his broader dismissal of strong AI systems is invalid.

Searle wrote his paper to argue that machines themselves could not be studied to gain insight into how human brains operate. It seems, however, that these arguments fall apart. Instantiation can serve as the start of a chain leading to intentionality, since this is already the standard we use with humans and birth. Likewise, the Chinese room example shows only that someone without the prerequisite knowledge cannot demonstrate intentionality; given that knowledge, they would be able to. Intentionality is therefore not a matter of the categorical capacity of human beings overall, but of the individual. By the same token, a machine failing to demonstrate intentionality for lack of information would not discredit AI from being able to display intentionality. All of this being the case, it seems that Searle is incorrect, and that strong AI, and machines worth investigating for insights into the human mind, may exist or come to exist.

Works Cited

Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417–57. https://doi.org/10.1017/s0140525x00005756.
