The Argument from Fading Qualia

In studies of consciousness, it is often maintained that purely artificial systems, such as those built from silicon chips, could never implement a conscious state; on this view, consciousness is the exclusive domain of biological systems. In response, David Chalmers constructed the following thought experiment:

Suppose that there are two functionally identical systems: one, a brain, made of neurons, and another made of silicon chips in place of those neurons, each residing inside a skull and properly connected to peripheral systems. All of the causal relations instantiated by the neuronal version are also instantiated by the silicon version, so from the outside, the functional role of each in controlling behaviour is effectively identical. The question is what it would be like to be each of these systems. Naturally, we accept that the neuronal system would have a fully rich conscious experience. If it were me right now, it would be seeing and feeling my fingers pressing each black key as I type this post. What of the silicon version? Relying on intuition, it seems natural to say that it isn’t “experiencing” anything. It might look like it is experiencing something; it might even say that it is if we asked it. Still, we would cling to the notion that it is not.

Now, imagine an intermediate state between a fully neuronal brain and this fully silicon “brain.” In this system, half of the functional units in the “brain” are neurons, and the other half are silicon chips. All of the causal relations from the original neuronal version, however, are still in place. (We can imagine a complex system of interfaces between neuron and silicon involving all kinds of sensors and effectors to accomplish this, but these practical details are irrelevant here. All that matters is that there is nothing contradictory in this picture.) Now comes the crucial question of the thought experiment: what would it be like to be this neuron-silicon hybrid? Everything that would be accomplished by a fully neuronal brain is still being accomplished; the only difference is the physical substrate realizing it. Would the hybrid be conscious, but only slightly so? If so, this system might see gray where I see black; however, because the functional connections underlying its behaviour are intact, it would still report seeing black. We are left with a system that retains some aspects of consciousness but is systematically wrong about its own conscious state.

The conclusion that Chalmers would like us to draw is that there is no room, in a functionally identical system, for changes in conscious states. If the hybrid’s qualia really did fade, we would have a “strong dissociation between consciousness and cognition” of a kind we have never seen outside of certain neuropathological conditions. The difference, though, is that neuropathology produces such dissociations by disrupting function. In the intermediate case, all functional aspects are preserved, so we have no reason to believe that its experience is altered at all.

Explicitly, Chalmers uses this conclusion to support his principle of organizational invariance, a psychophysical law stating that any functionally identical copy of a conscious brain, regardless of physical substrate, will have experiences qualitatively identical to those of the original system. This has strong implications for many areas. In particular, it opens the door to “strong AI”: artificial intelligence that possesses a conscious mind, which is an exciting possibility for humanity.

(The relevant supporting text can be found in David Chalmers’ The Conscious Mind, pages 253 to 263.)