The Mind in the Brain, the Brain in a Robot: Strong AI in an Artificial Neural Network Brain Replica Housed in an Autonomous, Sensory-Endowed Robot
The advent of digital computers and contemporary neuroscience has fundamentally changed possible approaches to artificial intelligence (AI). Mankind’s perpetually evolving technological capacity inevitably leads to faster processors, more complex systems, and, as a consequence, more intelligent machines. This technology, when applied to neuroscience, allows us to peer deeper into the brain and develop a more detailed neural map of our cognitive structures. The eventual outcome of this science is a comprehensive map of the specific architecture of the human brain.
Assuming a supervenience physicalist view of the mind, this map could be used as a model after which to design an artificial neural network replicating the brain’s neural circuitry, which in turn is endowed with the brain’s emergent mental properties. Housed in a robot with artificial sensory organs, a metabolic system for energy autonomy, and a makeshift nervous system, this ANN brain replica would have the necessary inputs and cognitive structures for a conscious intentional existence akin to our own.
Strong AI & Searle’s Objection
In his paper “Minds, Brains, and Programs,” Searle distinguishes between what he calls “strong AI” and “weak or cautious AI.” Weak AI is powerful enough to formulate and test hypotheses about the mind in a precise manner, but cannot be said to be a mind or consciousness in itself. It is in this incapability that Searle makes the distinction between weak AI and strong AI. “According to strong AI,” posits Searle, “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (235). Because in strong AI the system can literally be said to have human cognitive states, such a system is not a tool that helps to explain the human mind but is the explanation in itself.
An operational artificial neural network (ANN) which replicates the architecture of the brain fits this description. As this is a literal replica of the brain’s cognitive structure, it would be, in itself, a coded explanation and replica of human cognition. Despite this, Searle rejects the idea of an ANN brain replica as strong AI:
The basic hypothesis [of strong AI], or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn’t bother with AI. (244)
Searle’s dismissal of an architectural replica of the brain as strong AI is based on a confusion about the mind-brain relationship. Unlike a program, which is written independently of hardware and loaded onto it after the fact, the mind is an emergent feature of the specific, microscopic architecture of the brain that is causally reducible to and ontologically dependent on it. In contrast, software is both causally and ontologically irreducible to the hardware on which it runs. The brain creates the mind, but hardware does not create software. This is a fundamental difference between the mind-brain system and the software-computer system. The two are not strictly analogous.
As for Searle’s suggestion that AI wouldn’t be bothered with if it required an understanding of the brain, he is again mistaken. As Giambattista Vico offered, “we can know for certain only that which we ourselves have made or created” (136-137). In other words, in order to fully understand something, humans must create or make it. In order to fully understand the brain, we must ourselves build one. There is an intrinsic learning experience which arises from this process. Before the process of constructing a replica of the brain can begin, however, the mind’s relation to its physical antecedent deserves further exploration.
Folk Psychology, Eliminative Materialism, & Supervenience
Lay people use a vast vernacular to report information about their minds and to make predictions and assumptions about the minds of others (Nichols 1). This naïve psychology is commonly referred to as folk psychology, and it is employed by the majority of the population without a second thought. Rather than describing the brain function behind a particular behavior or attitude, folk psychology teaches us to report using metaphor, analogy, and common language. Additionally and importantly, abstract ideas such as beliefs and desires fall under the blanket of folk psychology.
The past several decades, however, have seen the doctrine of eliminative materialism question the emphasis and use of so-called folk psychology. Eliminative materialists contend that folk psychology is nothing more than a lay theory of mind which will “eventually be displaced […] by completed neuroscience” (P.M. Churchland Eliminative 382). Stich reports, somewhat sardonically, that “what eliminativism claims is that the intentional states and processes that are alluded to in our everyday descriptions and explanations of people’s mental lives and their actions are myths” (2). While the striking frankness of this statement is meant to alarm the lay reader and cause them to dismiss the radical doctrine of eliminativism, Stich does accurately portray the eliminativist attitude. Indeed, eliminativists espouse that “there is only the physical brain – that we think and feel, make decisions, and plan as a result of activity and processes in the physical brain” (P.S. Churchland). The eliminativist viewpoint, then, boils down to this: there are only brain states, and mental states are mere illusions.
Eliminative materialists are correct to so intensely interrogate folk psychology, but are mistaken in their complete abandonment of it. The language which is used to define the brain states one experiences is simply representative, in a metaphorical sense, of those brain states. Heil nicely summarizes a similar position: “Let us […] focus just on the propositional attitudes beliefs, desires, intentions, and the like. It is natural to assume that these, like their sensuous cousins, are ‘inner’ states. Such states are states of your mind. If minds are brains, then they are states of your brain” (420). In other words, these folk psychological predications are just naïve representations of what are indeed brain states. The language, even unbeknownst to the speaker, represents that fact.
As an example of a folk psychological espousal, Ravenscroft offers: “the smell of freshly baked bread made Sally feel hungry” (3). This presumption has a neuroscientific basis that the folk psychological language is used to represent. The physical presence of bread molecules in Sally’s nostrils caused a brain state, which in turn caused the brain state that is hunger. This brain state would then motivate Sally to eat. So then, the declaration “I am hungry” is not in itself false, as it is simply a linguistic representation of a physical brain state. It is important to note that the scent Sally smelled is a causal property of the bread’s chemical composition only insofar as the presence of the molecules of that composition in Sally’s nostrils caused a brain state, or internal representation, with the mental property of smell. The scent Sally smelled is a causally reducible mental property of the brain state Sally experienced.
Supervenience physicalism, like eliminative materialism, offers that there is only the physical world and, as a consequence, only the physical brain. However, supervenience physicalists offer one additional criterion: there are certain properties – mental or psychological properties, for instance – which supervene on their corresponding physical properties. For our purposes, these physical properties are brain states on which mental properties supervene. The mental property of scent emerges from the brain state caused by the presence of molecules in one’s nostrils, for instance. Humphreys states that many “supervenience accounts” offer the idea that “if A supervenes upon B, then A is nothing but B” and “that if A supervenes upon B, because A’s existence is necessitated by B’s existence, all that we need in terms of ontology is B” (338). At first glance, this system is strikingly similar to eliminative materialism; however, unlike eliminativists, who deny the existence of A, supervenience physicalists accept that although A is ontologically dependent on B, it does in fact exist.
To better understand mental states as emergent properties of physical brain states, one must examine the complex biological neural network that composes the human brain. Assuming a supervenience physicalist view of the mind, this examination affords the conceivability of an ANN which replicates the human brain, and therefore brain states and emergent mental states.
A Biological Neural Network
P.M. Churchland describes the brain as “a microscopic matrix of 100 billion neurons and 100 trillion synaptic connections” (Into 29). Each of these 100 billion neurons is connected to those surrounding it by some number of the 100 trillion synapses, creating a vast and complex network of electrical activity. At any time, a given neuron may be either excited or inhibited – that is, firing or not. Drew further explains the neuronal process of inputs and outputs: “neurons receive input from other neurons, which may lead to either excitation or inhibition. When the net excitation achieves a threshold value, the neuron fires and the process repeats itself. As such, a neuron’s output always bears the same relationship to its input” (3).
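The threshold behavior Drew describes can be sketched as a simple computational unit of the kind used in artificial neural networks. The following is a minimal illustrative sketch, not the author’s model: the weights and threshold values are hypothetical, chosen only to show excitatory (positive) and inhibitory (negative) inputs summing toward a firing threshold.

```python
def neuron_fires(inputs, weights, threshold):
    """Return True if net excitation meets or exceeds the firing threshold.

    Each input is weighted: positive weights model excitatory synapses,
    negative weights model inhibitory ones. The neuron fires (outputs True)
    only when the weighted sum reaches the threshold, so its output always
    bears the same relationship to its input.
    """
    net_excitation = sum(i * w for i, w in zip(inputs, weights))
    return net_excitation >= threshold

# Three hypothetical presynaptic neurons: two excitatory, one inhibitory.
weights = [1.0, 0.5, -1.0]

print(neuron_fires([1, 1, 0], weights, 1.2))  # net 1.5 >= 1.2 -> True
print(neuron_fires([1, 1, 1], weights, 1.2))  # net 0.5 <  1.2 -> False
```

Because the same inputs always yield the same output, a network of such units is deterministic in exactly the way the quoted passage suggests, which is what makes replicating the brain’s circuitry in an ANN conceivable in principle.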