The Mind in the Brain, the Brain in a Robot: Strong AI in an Artificial Neural Network Brain Replica Housed in an Autonomous, Sensory Endowed Robot
2010, Vol. 2 No. 10 | pg. 3/3
One could not expect to turn on an ANN brain replica and have the machine begin carrying out cognitive processes. An infant born with absolutely no sensory ability – a complete absence of sensory input – would conceivably have no cognitive processes. With no input, the infant would not be able to react, learn, or form a sense of self; seemingly, such a being would not have consciousness. The same is true for a computer: a computer with no inputs can generate no outputs. Most importantly, the same can be said for an ANN brain replica – such an ANN, like the human brain, requires at least some initial sensory inputs to carry out cognitive processes.
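The input-dependence point can be made concrete with a toy sketch. A single artificial neuron's output is purely a function of its inputs, so a unit deprived of any sensory signal can settle only at a fixed, uninformative value. The weights and input values below are arbitrary illustrations, not part of any actual brain-replica design:

```python
import math

# Toy illustration: an artificial neuron's output is entirely a
# function of its inputs. All weights here are arbitrary assumptions.
def neuron(inputs, weights):
    """Weighted sum of inputs passed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))

weights = [0.5, -0.3, 0.8]
silent = neuron([0.0, 0.0, 0.0], weights)  # no sensory signal at all
active = neuron([1.0, 0.2, 0.7], weights)  # sensory signal present
# With no input, the unit is frozen at a constant resting value;
# only input can move it, so no input means no informative output.
```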
Heil states that “a desktop computer, unlike a conscious creature, has limited input-output channels. Conscious creatures typically possess intricate sensory systems that include organs for sensing external objects and events, and a nervous system organized so as to monitor internal bodily states and processes” (206). In that light, it is necessary to house the ANN brain replica within a functional humanoid robot capable of sensing, moving about, and interacting with its environment. Searle alludes to just such a robot as a response from Berkeley and Stanford to his Chinese room counterexample:
Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of the human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just a computer with inputs and outputs. (244)

This description perfectly embodies the necessary features of the robot being discussed here: a replica of the human brain programmed with its neuronal circuitry in a “unified system” capable of interacting with its environment. However, there is no denying the obvious differences between a synthetic robot and the organic human body.
Evolution has trained the human brain to interact with the body in a particular way. In order for the ANN brain replica-robot system to function properly, the brain must be retrained to interact with its new “body.” At this point in development we would have a comprehensive map of the biological brain. Neuroscientists would therefore understand how the body’s sensory organs communicate with the biological neural network. This presents robo-technicians with three challenges in building the described robot: implementing sensory organs, implementing a metabolic system, and connecting them both to the ANN brain replica through a makeshift nervous system.
Technology has already managed to approximate many of our sensory organs. Artificial hearing and speech devices are in mainstream use in living humans, and the accuracy of this vocal and auditory technology in replicating the human senses continues to improve. Bionic eyes, or visual prostheses, are also currently being explored and expanded upon. Though crude, these devices have been successfully used to return sight to the blind. Camera systems, though they do not replicate human vision precisely, could also be implemented as artificial sight mechanisms, and strides in 3D technology in recent years make this option seem all the more feasible. Touch technology has been around for years and has recently entered the mainstream. Capacitive sensors could conceivably be designed to coat the frame of the robot, giving it a sense of touch that would allow it to physically interact with its environment. Taste and smell are both chemistry-based senses and could likely be simulated by research in that field, though it is not obvious whether a robot would have much use for either.
These sensory mechanisms would provide the ANN brain replica with the necessary inputs to experience phenomenological consciousness. Properly connected, there is no reason an awareness of the environment should not arise from these sensory organs, inevitably leading to a human-like self-awareness. As the system perceives and collects information about its environment, it would, like the brain after which it is modeled, perceive a perceiver. In his argument for a computational theory of mind, Pinker states:
Self-knowledge, including the ability to use a mirror, is no more mysterious than any other topic in perception and memory. If I have a mental database for people, what’s to prevent it from containing an entry for myself? […] A robot that could recognize itself in a mirror would not be much more difficult to build than a robot that could recognize anything at all. (134)
Pinker’s reduction of self-awareness to a visual event simplifies a complex idea. Imagine the robot faced with its reflection in a mirror. Certainly it would sense, as any “person” would, a causal link between its motions and the motions reflected in the image being perceived. However, the thought I am moving contains a striking feature which comes before any other: the concept of I implies a sense of self – it implies both an actor being viewed (the robot in the reflection) and a perceiver perceiving said actor (the robot itself). It is in this perceptual awareness that the robot must be considered self-aware.
Much of human activity is motivated by the need and desire for energy, which humans satisfy by consuming food and processing it via an internal metabolic system. Although inorganic, a robot has the same need for energy as a human does: without energy, a robot, or any machine, cannot function. If the robot in question is meant to be truly autonomous – that is, not to depend on human operators for inputted data or energy – it must be endowed with an artificial metabolic system capable of powering the ANN brain replica, the robotic body, and the sensory organs. Ieropoulos gives both solar and wind energy as examples of possible sources (191).
The metabolic system involved in processing such energy would be responsible for regulating energy input and output, distributing it throughout the machine, maintaining workable energy levels, and determining which areas take highest priority in times of energy shortage. These tasks are similar to the functions of the human metabolic system, differing only in the method by which energy is collected. Ieropoulos does offer an additional artificial metabolic option, currently in an experimental phase: converting real food to energy via a “microbial fuel cell,” which employs microbes to reduce sugar to energy (191). He goes on to describe “a class of robot system, which demonstrates energetic autonomy by converting natural raw chemical substrate (such as carrots or apples) into power for essential elements of behavior including motion, sensing, and computation” (191). Such energy systems are still in the early stages of development and cannot yet sustain a lengthy or useful charge, but the technology is advancing and could one day be implemented in a robot such as the one under discussion.
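The regulatory tasks just described can be pictured as a simple rationing policy. The following is a minimal, hypothetical sketch (the subsystem names, priorities, and demand figures are all invented for illustration and belong to no cited design) of a controller that distributes a limited energy budget, favoring vital subsystems during a shortage:

```python
# Hypothetical sketch of an artificial metabolic controller.
# Subsystem names, demands, and priorities are illustrative assumptions.
def allocate_energy(budget, demands):
    """Distribute a limited energy budget across subsystems.

    `demands` is a list of (name, priority, requested) tuples; lower
    priority numbers are served first, mimicking the way a metabolic
    system protects vital functions when energy runs short.
    """
    allocation = {}
    remaining = budget
    for name, _, requested in sorted(demands, key=lambda d: d[1]):
        granted = min(requested, remaining)
        allocation[name] = granted
        remaining -= granted
    return allocation

# In a shortage, the ANN brain replica is powered before locomotion.
demands = [
    ("ann_brain", 0, 40),  # highest priority: cognition
    ("sensors", 1, 20),
    ("motors", 2, 50),     # lowest priority: motion
]
full = allocate_energy(100, demands)   # motors get only the remainder
shortage = allocate_energy(30, demands)  # only cognition is powered
```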
Having a map of the brain’s circuitry makes it all the more feasible to give the robot a functional nervous system through which to collect data about, and exercise control over, its internal and external structures. Heil describes the nervous system as “overlapping networks of afferent and efferent nerves running to and from the brain, providing connections between the brain and assorted sensors” (207). He goes on to describe the way in which sensory information travels along this network from sensors to particular brain areas as electro-chemical signals (207).
An artificial nervous system, then, can be constructed as a connectionist extension linking into the ANN brain replica in such a way that the proper senses and functions of the body are connected to their respective brain areas. This makeshift nervous system would not only provide the ANN brain replica with essential information about the state of the robotic body, but would also allow it to navigate that body about its environment. Collectively, this robot would be a self-sufficient, fully autonomous machine with cognitive states identical to those of humans. However, the intentional merit of those cognitive states is a contentious subject with frightening implications.
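The afferent half of such wiring can be pictured as a routing table from sensors to regions of the ANN brain replica. A minimal sketch follows; every sensor and region name in it is an invented placeholder, not a claim about actual neuroanatomy or any real system:

```python
# Hypothetical sketch of a makeshift nervous system: afferent signals
# are routed from each sensor to the brain region that processes them.
# All sensor and region names are illustrative assumptions.
AFFERENT_MAP = {
    "camera_left": "visual_cortex",
    "camera_right": "visual_cortex",
    "microphone": "auditory_cortex",
    "touch_panel": "somatosensory_cortex",
    "battery_gauge": "homeostatic_core",  # internal bodily state
}

def route_signals(readings):
    """Group raw sensor readings by the brain region they feed,
    mimicking afferent nerves running from sensors to the brain."""
    inbox = {}
    for sensor, value in readings.items():
        region = AFFERENT_MAP[sensor]
        inbox.setdefault(region, []).append((sensor, value))
    return inbox

inbox = route_signals(
    {"camera_left": 0.8, "microphone": 0.2, "battery_gauge": 0.5}
)
```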
Searle & Intentionality
To recap: described above is a connectionist replica of the human brain – an artificial neural network that replicates human cognition – stored in the head of a humanoid robot, which features sensory organs, a metabolic system, and a makeshift nervous system connecting it all together. This robot is capable of autonomously moving about and interacting with its environment. From this description, it would seem difficult to avoid ascribing intentionality to the actions of such a “being.” However, in reply to a similar response to the Chinese room, Searle contends the opposite: that a robot of this nature, despite its anatomy and autonomy, can perform no actions which have the inherent “of-ness or about-ness” of intentionality.
Searle concedes that, given only the behavioral information about such a robot, we would need to conclude that it indeed has intentionality (245); to do otherwise, by the same standard, would risk denying that feature to ourselves. Searle explains his oddly behaviorist view of intentionality further, while offering why he would deny it to such a machine:
The attributions of intentionality that we make to the robot in this example […] are simply based on the assumption that if the robot looks and behaves sufficiently like us, then […] it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions, we would not attribute intentionality to it, especially if we knew it had a formal program. (245)
What Searle fails to understand here is that it is not the behavior of the robot which leads one to conclude that it has intentionality, but rather its architecture. Indeed, human-like behavior would, in this case, be a necessary condition of intentionality, but certainly not a sufficient one. It is the structure of the robot – its brain and body, modeled after those of its human creators, in whom that same structure gives rise to intentional behavior – that causes us to ascribe intentionality to the machine. If we deprive a machine which is a cognitive replica of ourselves of intentionality, then we must conclude that no human has it.
If Searle is so suspicious of the intentionality of a machine of this nature, he must be equally suspicious of the other beings whose intentionality he can only infer from behavior – humans. He is further mistaken in his claim that such a robot has a “formal program.” A program which maps all of the data of the brain is no more a formal program than the brain itself. We must therefore make the leap: we must ascribe intentionality to this machine.
Life, Implications, and Concluding Remarks
Defining life is a difficult challenge. Such a definition must embrace the entire, seemingly infinite population of beings – gigantic and microscopic, stationary and mobile, sexual and asexual – which populates the earth. Clark reviews a number of definitions of life, but their authors reach no single consensus (117-118). From this body of definitions, we can select the three most pervasive qualities required of living things:
These three criteria may seem cold and disconnected, a simplification of a miracle, but they are indeed the necessary conditions for all living things. There is nothing that lives that does not have these properties. Additionally, there is nothing that does not have these properties that lives.
But what about the machine outlined in this exploration? A machine which has a brain identical to the human brain; a machine which has sensory organs and a nervous system; a machine which has a metabolic mechanism for energy autonomy; a machine that, by virtue of the fact that it was designed after its human creators, can replicate itself asexually. Must we ascribe life to this machine?
More importantly, if we must, what does that say about human life? If the above outline one day becomes a reality – if in the future we build a robot that is humanity’s cognitive twin – how will the value of its life be determined? Certainly most would shy away from ever calling a machine human, and many will likely have just as much trouble declaring it alive. But if a machine such as the one conceived in these pages were ever created, it would be a reality, and an issue, that we would have to face. It would fundamentally alter philosophy’s ethical projects and potentially injure humanity’s view of itself. Perhaps, then, to avoid having to face these questions, humanity should leave strong AI as a conceivability, allowing our brains to ponder it but never to create it.
Works Cited
Choi, Charles Q. "Cat Brain Inspires Computers of the Future." 16 April 2010. Tech News Daily. 19 April 2010.
Churchland, Paul M. "Eliminative materialism and the propositional attitudes." Heil, John. Philosophy of Mind: A Guide and Anthology. New York: Oxford University Press, 2004. 382-400.
Churchland, Paul M. "Into the brain: where philosophy should go from here." Topoi 25 (2006): 29-32.
Clark, Andy. Mindware: An Introduction to the Philosophy of Cognitive Science. New York: Oxford University Press, 2001.
Drew, Phillip J. and John R.T. Monson. "Artificial neural networks." University of Hull Academic Surgical Unit (2000): 3-11.
Heil, John. Philosophy of Mind: A Guide and Anthology. New York: Oxford University Press, 2004.
Humphreys, Paul. "Emergence, Not Supervenience." Philosophy of Science (1996): 337-345.
Ieropoulos, Ioannis, John Greenman and Chris Melhuish. "Imitating Metabolism: Energy Autonomy in Biologically Inspired Robots." In Proceedings of the AISB '03, Second International Symposium on Imitation in Animals and Artifacts, Aberystwyth, Wales (2003): 191-194.
Kaiser, Marcus. "Brain architecture: A design for natural computation." Royal Society (2007): 1-13.
Mastermind. Dir. Sophie and Pierre Faye Desandoun. Perf. Patricia Churchland. 2005.
Nichols, Shaun. "Folk Psychology." Encyclopedia of Cognitive Science (2002).
Pinker, Steven. How the Mind Works. New York: W.W. Norton & Company, 2009.
Pylyshyn, Z.W. "The `causal power' of machines." Behavioral and Brain Sciences 3 (1980): 442-444.
Ravenscroft, Ian. "Folk Psychology as a Theory." 23 February 2004. Stanford Encyclopedia of Philosophy. 9 April 2010.
Searle, John R. "Minds, Brains, and Programs." Heil, John. Philosophy of Mind: A Guide and Anthology. New York: Oxford University Press, 2004. 235-252.
Stich, Stephen and Ian Ravenscroft. "What is Folk Psychology?" Technical Report, Rutgers University Center for Cognitive Science 5 (1993): 1-23.
Vico, Giambattista. Opere, Vol. 1 (1853): 136-137.