The Mind in the Brain, the Brain in a Robot: Strong AI in an Artificial Neural Network Brain Replica Housed in an Autonomous, Sensory Endowed Robot
A simple analogy helps to illustrate this process. Imagine a wall of light switches, each of which is either on or off at any given moment and each of which is connected to those surrounding it by electrical wiring. They are wired such that once a light switch turns on, it sends an electrical output to the surrounding switches wired to it. Each surrounding switch has a certain input energy threshold which, when met by incoming energy from neighboring switches, causes that switch to turn on, outputting an energy equivalent to the threshold. If the input subsides and no longer meets the threshold, the switch turns off. The process, once started, produces a network of electrical activity that is a simplified representation of the neuronal, synaptic, and axonal activity of the brain.
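The light-switch analogy can be sketched as a small simulation. This is an illustrative toy model only: the ring wiring, unit energy values, and threshold of 1 are assumptions chosen for simplicity, not drawn from the essay.

```python
# Each "switch" turns on when the summed input from its wired neighbors
# meets its threshold, and turns off again when that input subsides.

def step(states, neighbors, thresholds):
    """Compute the next on/off state of every switch from the current states."""
    next_states = []
    for i in range(len(states)):
        # Energy arriving at switch i: one unit from each neighbor that is on.
        incoming = sum(states[j] for j in neighbors[i])
        next_states.append(1 if incoming >= thresholds[i] else 0)
    return next_states

# Four switches wired in a ring; switch 0 starts on, each threshold is 1.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
thresholds = [1, 1, 1, 1]
states = [1, 0, 0, 0]

for _ in range(3):
    states = step(states, neighbors, thresholds)
    print(states)  # activity propagates around the ring
```

Even this minimal network exhibits ongoing activity once started, which is the point of the analogy: structured wiring plus simple threshold rules yields a self-sustaining pattern of electrical activity.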
In the brain, this biological neural network is laid out in a particular way. It is not just a random mass of neurons and synapses, but rather an organized collection of cortical areas, referred to as Brodmann areas, each with its own unique properties – including function. Kaiser explains the cortical areas as “brain modules which are defined by structural (microscopic) architecture. Observing the thickness and cell types of the cortical layers, several cortical areas can be distinguished” (2). Each of these cortical areas has a unique purpose, such as visual and other sensory processing, which is enabled by its particular structure.

Knowing that the brain is divided into different areas is not enough; the specific pattern of how these areas, and the brain as a whole, are composed – the microscopic organization and density of the biological neural network within – is the most pressing, and most ambitious, question. To answer it would be to construct a comprehensive neural map of the brain, one that offers profound opportunities for architects of artificial neural networks. However, as Kaiser states, “there is not enough information about connectivity in the human brain that would allow network analysis” (3), but the future may be more promising.
Contemporary neuroscience is decades away from completing a comprehensive neural map of the human brain. The daunting specificity and precision of the brain’s architecture and its respective behavioral and phenomenological emergent properties make such a task nearly unfathomable by today’s standards. However, the continuing development of the technology and tools of the science makes the ambitious goal of mapping the neural underpinnings of human activity a more realistic prospect. In fact, if one considers the vast and rapid technological advancement of the past century, homing in on the specific architecture of the brain becomes a distinctly conceivable future reality.
P.M. Churchland is similarly optimistic about the future of neuroscience: “A new generation of techniques and machines of observation has given us eyes to see into the encrypted details of neuronal activity […] and a new generation of theories has given us at least an opening grip on how the brain’s massive but microscopic matrix might perform the breathtaking feats of real time cognition” (1). His voice embodies an excitement about the accelerated growth of scientific technology over the past decade. This technology has been valuable not in itself, but in the mysteries it has helped to unravel and the theories which have been formulated as a result. As the technology continues to evolve, so too will the complexity of the mysteries we can solve and the theories that arise from them.
The future of neuroscience inevitably leads to its completion: a comprehensive neural map of the human brain, a map of each microscopic neuronal detail which makes up this mysterious biological network. Though far out, the progress of neuroscientific technology will indeed get us there. Once this advent is achieved, it is conceivable that such a structure could be replicated by computer technology in an artificial neural network.
Replicating the Brain through Connectionism
Consider again what the brain actually is: a complicated system of on/off switches wired together by axons and synapses. At its simplest level, this model is strikingly similar to computer programs coded in binary: a series of 0s and 1s represents either an on or off state, just as a neuron can either be firing (on) or inhibited (off). Clark extends such an analogy: “With regard to the very special class of machines known as computers, the claim is that the brain […] actually is some such device. […] Neural tissues, cell assemblies, and all the rest are just nature’s wet and sticky way of building […] computing machinery” (8). The underlying principles of this computing machinery, or “meatware” as Clark puts it, can be extended further than binary computer programs; completed neuroscience will give us a specific map of the brain that will serve as a model for an artificial neural network replicating its architecture. Based on the view of supervenience physicalism, one must conclude that if one can replicate the brain, the mind and the mental states which supervene on that brain would be replicated as well. This conceivability leads one to ponder the possibility of an artificial consciousness – of strong AI.
The connectionist position applies the principles of the brain’s neuronal circuitry to create learning machines. An artificial neural network of simple on/off processors “linked in parallel by a daunting mass of wiring and connectivity” is meant to mimic the basic function of neurons, axons, and synapses in the brain (Clark 62). These processors signal to one another when they are excited, or on, or otherwise remain inhibited, or off, akin to the fashion in which neurons communicate. Clark further draws this comparison: “in both cases, the simple processing elements are generally sensitive to only local influences. Each element takes inputs from a small group of “neighbors” and passes output to a small (sometimes overlapping) group of neighbors” (62).
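Clark’s description of locally connected processing elements can be sketched as follows. The window width, weights, and threshold are illustrative assumptions, not parameters from any model Clark discusses.

```python
# Each unit reads only a small, overlapping "neighborhood" of the inputs
# below it and fires (outputs 1) when the weighted local sum crosses its
# threshold -- otherwise it remains inhibited (outputs 0).

def local_unit(inputs, weights, threshold):
    """One simple processor: fire iff the weighted local input meets threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def layer(signal, weights, threshold, width=3):
    """Each unit sees only an overlapping window of the layer below."""
    return [
        local_unit(signal[i:i + width], weights, threshold)
        for i in range(len(signal) - width + 1)
    ]

signal = [0, 1, 1, 1, 0, 0, 1, 0]
output = layer(signal, weights=[0.5, 1.0, 0.5], threshold=1.5)
print(output)  # only units whose local neighborhood is active fire
```

The essential point is that no unit sees the whole input: as in the brain, each element is “sensitive to only local influences,” yet global patterns of activity emerge from many such local interactions running in parallel.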
Present connectionist networks do not come close to matching the complexity of the brain itself, but simply use the principles of the brain’s inner workings to mimic “intelligence.” However, the connectionist view is compatible with this more distant theoretical possibility. Less complex connectionist “brains” have already been built, though they suffer from shortcomings. Choi reports:
One of the world’s most sophisticated supercomputers […] can simulate 1 billion neurons and 10 trillion synapses, exceeding the scale of a cat brain. Still, it is a massive machine with more than 140,000 central processing units that needs a million watts of electricity and it still performs 100 to 1,000 times slower than a cat’s brain. (Choi)
Though this machine is impractical and functionally deficient, its form is an obvious precursor to a connectionist future and engages the possibility of a mammalian ANN brain replica. Kaiser offers: “As the ‘programme’ of the brain is implemented in its wiring organization, the topology of the brain might inspire theoretical work in the organization of parallel processing and integration” (10). Assuming, as predicted in the previous section, that this topology referenced by Kaiser can be comprehensively mapped by completed neuroscience, then a connectionist artificial neural network could replicate that topology and the “programme” that emerges from it.
Such is the theory of Pylyshyn, specifically regarding language: “If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now” (424-444). What his comment implies is that if each neuron in the brain were to be replaced with a processor, leading to a silicon replica of the brain, cognitive processes would continue as normal. Such a replication could be implemented by one of two methods: a physical ANN composed of processors, as Pylyshyn alludes to, or an ANN running in simulation on a powerful digital computer.
The troubles of creating a physical ANN of this nature are numerous. Though a theoretical possibility, it is difficult to imagine 100 billion processors connected by 100 trillion wires in any sufficiently well-organized fashion at a usable size. However, the processors employed in such a construction could perhaps be organized on a circuit board with connections proportional in length to the axons and synapses of the brain. Because the specific organization is so important and a physical construction would very likely be prone to imperfections, it is likely that a physical network would fall short in terms of functionality. Furthermore, such a physical network would be particularly vulnerable to damage and wear because of its trillions of elements.
However, recent developments in microchip technology presage a promising future. Researchers at the University of Michigan have developed a new type of microchip called the “memristor” which works much like a synapse in the brain. According to Choi, “these circuit elements […] carry memories of their past: when you turn off voltage to the device, memristors remember how much was applied beforehand and for how long” (Choi). Such memristors are being hailed in their infancy as a possible brain replication technology of the future. Still, a simulated ANN brain replica has many advantages.
According to Kaiser, many current “ANNs are made up of computer simulations of neurons” (5). Running an ANN in simulation on an extremely powerful digital computer is a much more practical, cost-effective implementation of brain replication. Such a computer would need to be immensely powerful, as the processing power required for such an application exceeds the capabilities of our present machines. However, assuming our computational capacity continues to grow as Moore’s law predicts, before long our computers will meet such requirements. The architectural brain model would be designed as a piece of software. Coded in that software would be the specific, microscopic details of neural structure: the location and population of neurons, the proportional synaptic and axonal lengths connecting them, and the various other elements that compose the biological brain.
The software would also include the specific details about each of the 100 billion neurons, such as the firing threshold. This software would then be loaded onto a suitable digital computer and run as an operating system. An important clarification, however, must be made: the computer is not the brain, but is more of a skull which houses the brain. Just as importantly, the software is not the mind. The software is the brain in simulation in the computer, and any “mind-ness” that emerges does so not as software but as a property of the running brain simulation. The benefits of this method are numerous – it allows for more customizability, simulated experimentation, and guess-and-check testing, and gives rise to a nearly infinite array of potential applications.
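A fragment of the structural detail described above might be encoded as follows. This is a hedged sketch under stated assumptions: the field names (`position`, `threshold`, `length_um`) and the two-neuron example are hypothetical, and a real brain map would of course be vastly larger and richer.

```python
# A toy data structure for the brain-simulation software: each neuron's
# location, firing threshold, and outgoing connections with proportional
# lengths. All field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: int       # index of the downstream neuron
    length_um: float  # proportional axonal/synaptic length, in micrometers

@dataclass
class Neuron:
    position: tuple          # (x, y, z) location in the brain model
    threshold: float         # firing threshold for this neuron
    synapses: list = field(default_factory=list)

# A two-neuron fragment of what would, in full, be a 100-billion-neuron map.
brain_model = [
    Neuron(position=(0.0, 0.0, 0.0), threshold=1.0,
           synapses=[Synapse(target=1, length_um=120.0)]),
    Neuron(position=(0.1, 0.0, 0.0), threshold=0.8),
]

print(len(brain_model), brain_model[0].synapses[0].target)
```

Representing the brain as data in this way is what underwrites the benefits listed above: thresholds and wiring can be adjusted, copied, and experimented upon in software in ways a physical network would never permit.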