Searle in Two Quotes

Today, I am reading John Searle’s Minds, Brains and Science, which is essentially an edited transcript of his 1984 Reith Lectures. I came across two quotes worth sharing, one for its humor and the other for its insight. Enjoy!

Various replies have been suggested to this [the Chinese Room] argument by workers in artificial intelligence and in psychology, as well as philosophy. They all have something in common; they are all inadequate. And there is an obvious reason why they have to be inadequate, since the argument rests on a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.

I think that he is almost certainly right here, but the manner in which he formulates this paragraph is nothing short of comedic perfection. My own thoughts on the subject can be found in my article “Minds and Computers.”

Suppose no one knew how clocks worked. Suppose it was frightfully difficult to figure out how they worked, because, though there were plenty around, no one knew how to build one, and efforts to figure out how they worked tended to destroy the clock. Now suppose a group of researchers said, ‘We will understand how clocks work if we design a machine that is functionally the equivalent of a clock, that keeps time just as well as a clock.’ So they designed an hour glass, and claimed: ‘Now we understand how clocks work,’ or perhaps: ‘If only we could get the hour glass to be just as accurate as a clock we would at last understand how clocks work.’ Substitute ‘brain’ for ‘clock’ in this parable, and substitute ‘digital computer program’ for ‘hour glass’ and the notion of intelligence for the notion of keeping time and you have the contemporary situation in much (not all!) of artificial intelligence and cognitive science.


Minds and Computers

In everyday conversation, brains are often equated with “computers.” This intellectual laziness, as it were, has led almost an entire generation of academics to assert that minds are nothing more than programs run on the machinery of the brain. In this post, I hope to clear up a few confusions and oversights related to this position. I do not claim to be the first to say these things, but I’d like to round up a few of these disconnected views and add my own thoughts where they seem useful.

The issue is that to claim the brain is like something—say, a computer—is to presuppose that we have a real idea of how the brain works. That presupposition is utterly and completely false. We certainly know a lot about the brain, but most of it concerns small, isolated events (action potentials) or very coarse-grained images (e.g., fMRI), so anyone who claims to have a complete picture of how the brain processes, integrates, and distributes information is, to say the least, very misguided.

There is certainly a lot more about the brain that we have learned since these comparisons were first put forth in the literature. We have learned more about how networks of neurons function coherently to produce meaningful representations. We have learned about oscillations in the brain that help unify otherwise disjoint brain functions (gamma waves are especially exciting in this regard). We have learned about local field potentials, the aggregate electrical activity of neuronal populations (related to what EEG records at the scalp), and how they help modulate neighborhoods of neurons. In this sense the brain is still a kind of information processor, albeit a far more complex one than we once thought it to be, and this is precisely where the comparison with computers breaks down.

Computers are designed with distinct functional units that attempt to minimize interference from surrounding units. Neurons, the functional units of the brain, certainly do this to an extent—if they did not, the careful modulation of membrane potentials necessary for coherent communication would be impossible to maintain. They are nevertheless heavily influenced by the signals and associated field potentials passing through whatever region of the brain they occupy. Neurons could not function properly in isolation (by “function,” I mean in a way that is conducive to conscious experience); they require complex interactions among themselves—and with the body they represent—the likes of which we do not see in their silicon counterparts.

The most demanding of these interactions is what is sometimes referred to, in chemical terms, as “dynamic constancy.” The brain is not a static system. It is constantly changing: in order to understand a single word, much less a sentence, it must alter synaptic connections. Sometimes this involves strengthening a handful of them through the accumulation of calcium ions in pre- or post-synaptic terminals*. Other times it involves the generation of entirely new synapses through complex genetic regulatory mechanisms. It is astounding to realize that many everyday actions require the synchronized activation and deactivation of entire networks of genes with remarkable precision, often several times over. Nonetheless, all of this constant modification leaves, at the end of the day, more or less the same brain that started the day; hence “dynamic constancy.”

We may one day be able to “create” something that is capable of conscious reflection, and we may even call it a computer when the time comes, but it will not be anything recognizable as a computer by today’s standards. Likely it will incorporate some aspects of biological matter; perhaps actual living cells will turn out to be the only solution. The point of all of this is that minds are not merely computer programs; in the brain, there is no distinction between program and hardware. The mind is the brain, and the brain is the mind, and nothing more.

*It is especially exciting to learn that the durations of various synaptic modulating processes correlate closely with discoveries made quite independently in cognitive psychology. For example, estimates of the duration of monosynaptic facilitation, which can rely on the accumulation of calcium ions mentioned above, match up almost perfectly with estimates of the durations of various forms of working memory.

The Argument from Fading Qualia

In studies of consciousness, it is often maintained that purely artificial systems—say, those made of silicon chips—could never implement a conscious state; on this view, consciousness is the exclusive province of biological systems. In response, David Chalmers constructed the following thought experiment:

Suppose that there are two functionally identical systems: one, a brain, made of neurons, and another made of silicon chips in place of those neurons, each residing inside a skull and properly connected to peripheral systems. All of the causal relations instantiated by the neuronal version are also instantiated by the silicon version. From the outside, then, the functional role of each in controlling behaviour would be effectively identical. The question is what it would be like to be each of these systems. Naturally, we accept that the neuronal version would have a fully rich conscious experience. If it were me right now, it would be seeing and feeling my fingers pressing each black key as I type this post. What of the silicon version? If we rely on intuition, it seems natural to say that it isn’t “experiencing” anything. It might look like it is experiencing something. It might even say that it is, if we asked it. Still, we would cling to the notion that it is not.

Now, imagine an intermediate state between a fully neuronal brain and this fully silicon “brain.” In this system, half of the functional units in the “brain” are neurons, and the other half are silicon. All of the causal relations from the original neuronal version, however, are still in place. (We can imagine a complex system of interfaces between neuron and silicon involving all kinds of sensors and effectors to accomplish this, but these practical details are irrelevant here. All that matters is that there is nothing contradictory in the picture.) Now comes the crucial question of the thought experiment: what would it be like to be this neuron-silicon hybrid? All of the actions that would be accomplished by a neuronal brain are still being accomplished—the only difference is the physical substrate realizing them. Would it be conscious, but only slightly so? If so, this system might see gray where I see black; yet it would still say that it is seeing black, because of the functional connections underlying its behaviour. We are left with a system that retains some aspects of consciousness but is systematically wrong about its own conscious state.

The conclusion that Chalmers would like us to draw is that there is no room for a functionally identical system to differ in its conscious states. Accepting the fading-qualia scenario would commit us to a “strong dissociation between consciousness and cognition” of a kind we have never seen outside of certain neuropathological conditions. The difference, though, is that neuropathology causes such problems by disrupting function. In the case of the intermediate system, all functional aspects are upheld, so we have no reason to believe that its experience could be altered.

Explicitly, Chalmers uses this conclusion to support his psychophysical principle of organizational invariance, which states that any functionally identical copy of a conscious brain, regardless of physical substrate, will have experiences qualitatively identical to those of the original system. This has strong implications for many areas. In particular, it opens the door to the construction of “Strong AI,” or artificial intelligence that possesses a conscious mind, which is a very exciting possibility for humanity.

(The relevant supporting text can be found in David Chalmers’ The Conscious Mind, pages 253 to 263.)
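
For readers who think in code, here is a minimal Python sketch of what “functionally identical, regardless of physical substrate” means, taken purely as an analogy. Every name below (CognitiveSystem, NeuronalBrain, SiliconBrain) is a hypothetical illustration of mine, not anything from Chalmers: two implementations realize the same input-output organization on different substrates, and from the outside they cannot be told apart. The sketch deliberately says nothing about which of them, if either, has experiences; that is exactly the question the thought experiment addresses.

# Hypothetical illustration only: organizational invariance pictured as two
# implementations of the same functional organization on different substrates.
from abc import ABC, abstractmethod


class CognitiveSystem(ABC):
    """Anything that maps a stimulus to a behavioural report."""

    @abstractmethod
    def respond(self, stimulus: str) -> str:
        ...


class NeuronalBrain(CognitiveSystem):
    """Stand-in for the biological system."""

    def respond(self, stimulus: str) -> str:
        return f"I see {stimulus}"


class SiliconBrain(CognitiveSystem):
    """Stand-in for the chip-for-neuron replacement with the same causal organization."""

    def respond(self, stimulus: str) -> str:
        return f"I see {stimulus}"


# From the outside, behaviour is indistinguishable; whether either system
# experiences anything is precisely what this code cannot capture.
for system in (NeuronalBrain(), SiliconBrain()):
    assert system.respond("the black keys") == "I see the black keys"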