The Conscious Mind


David Chalmers’ The Conscious Mind is an interesting turn in the search for a fundamental theory of mind. It may come as a surprise that a fundamentally dualist approach underlies a current academic theory. That said, this book, as has been noted elsewhere, can be divided into two more or less self-contained sections. The first offers a firm refutation of the reductive materialist approach that seems to dominate the field. The second represents Chalmers’ attempt to propose foundations for a new theory that does not rely on the false assumptions of reductive materialism.

The first section relies on five key arguments, two of which I will comment on here.

The first of these is the (in)famous argument from philosophical zombies. This argument is based on a thought experiment in which one conceives of an identical copy of a human being, except that this copy does not actually experience anything. That is, there is nothing that it is like to be this copy. From the outside, it certainly appears conscious, and would even say that it is conscious upon inquiry; nonetheless, it would lack anything that would recognizably be a conscious mind from the first-person perspective. The point about this argument that I believe is often overlooked is that it only seeks to refute logical supervenience, not the more familiar natural supervenience. Logical supervenience holds that a higher-level property is fully entailed by lower-level properties. Natural supervenience holds merely that there is a lawful connection between lower-level and higher-level properties. Put another way, logical supervenience implies that, given the lower-level facts, the higher-level facts could not have been any other way, in any possible world. Natural supervenience, on the other hand, leaves this possibility open. For example, it is perfectly reasonable to assert that the law of gravity could have been different from what it is. In another possible world, with all the same physical facts, an object on an equivalent earth might accelerate in free fall at 15 meters per second squared instead of 9.8 meters per second squared, as it would on our earth. The full force of this argument, then, only says that the physical facts alone do not explain conscious experience; there are further fundamental laws that we need to call upon (laws that Chalmers later calls “psychophysical”). This argument alone can be seen as a sufficient refutation of reductive materialism, since the latter does not assert any further fundamental laws.

The second argument, which I personally find more indicative of the problem, comes from asymmetric epistemology, which I have previously remarked on in altered form. At its simplest, this argument relies on the fundamental difference between physical and phenomenal explanation: the former is given in the third person, while the latter is given in the first person. Even if we knew all the physical facts about the universe, we would not be in a position to infer that experiences are associated with any objects. We could certainly observe that some organisms claim to be conscious, but this would be indirect evidence at best. This, I think, is the most fundamental argument against reductive materialism because, as I’ve asked before, given this asymmetry, why would we ever expect a materialistic account to reveal consciousness to us?

Now, on to the second part, which concerns itself with the construction of a theory of consciousness that is free of the influence of materialism. At its heart, this theory differs insofar as it postulates phenomenal experience as a fundamental concept, much as mass-energy and space-time are fundamental in the physical sciences. Chalmers tries to distance this view from panpsychism, which states that everything is conscious (though he later admits that this possibility is not too unreasonable, or even unlikely). The most distinctive idea presented is his principle of organizational invariance, which states that any copy of a conscious being with the same abstract causal structure will have qualitatively identical phenomenal experiences. In short, his theory could be seen as a sort of non-reductive functionalism that takes experience as fundamental. Experience is ubiquitous, then, but only as long as the appropriate causal relations are in place. What counts as appropriate causal relations remains to be seen.

The remainder of the second part is less convincing, but is useful nonetheless as an intellectual enterprise. That said, his treatment of the interpretation of quantum physics leaves much to be desired.

All in all, this book was more than worth the time I invested in it. Even if his theories turn out to be false, they will be no less pivotal in our quest for understanding one of the most puzzling, mysterious, and all around frustrating aspects of life. As Chalmers himself states from the outset, “If some ideas in this book are useful to others in constructing a better theory, the attempt will have been worthwhile.” As for me, I think that his sentiment has been affirmed.

Five out of five stars, recommended to any and all who seek a better understanding of their conscious experiences.

The Argument from Fading Qualia

In studies of consciousness, it is often maintained that purely artificial systems—say, those made of silicon chips—could never implement a conscious state. Consciousness is purely the realm of biological systems. In response to this, David Chalmers constructed the following thought experiment:

Suppose that there are two functionally identical systems: one (a brain) made of neurons, and another made of silicon chips in place of those neurons, each residing inside a skull and properly connected to peripheral systems. All of the causal relations instantiated by the neuronal version are also instantiated by the silicon version. From the outside, then, the functional role of each in controlling behaviour would be effectively identical. The question lies in what it would be like to be each of these systems. Naturally, we accept that the neuronal construction would have a fully rich conscious experience. If it were me right now, it would be seeing and feeling my fingers pressing each black key as I type this post. What of the silicon version? Relying on intuition, it seems natural to say that it isn’t “experiencing” anything. It might look like it’s experiencing something. It might even say that it is if we asked it. Still, we would cling to the notion that it is not.

Now, imagine an intermediate state between a fully neuronal brain and this fully silicon “brain.” In this system, half of the functional units in the “brain” are neurons, and the other half are silicon. All of the causal relations from the original neuronal version, however, are still in place. (We can imagine a complex system of interfaces between neuron and silicon involving all kinds of sensors and effectors to accomplish this, but these details of practicality are irrelevant here. All that matters is that there is nothing contradictory in this picture.) Now comes the crucial question of the thought experiment: what would it be like to be this neuron-silicon hybrid? All of the actions that would be accomplished by a neuronal brain are still being accomplished; the only difference is the physical substrate that realizes them. Would it be conscious, but only slightly so? If so, this system might see gray where I see black; however, it would still say that it is seeing black, due to the functional connections underlying its behaviour. We are left with a system that maintains some aspects of consciousness, but that is systematically wrong about its own conscious state.

The conclusion that Chalmers would like us to draw here is that there is no room in a functionally identical system for changes in conscious states. Any such change would produce a “strong dissociation between consciousness and cognition” that we have never seen outside of certain neuropathological conditions. The difference here, though, is that neuropathology causes these problems by disrupting function. In the case of the intermediate, all functional aspects are upheld, so we have no reason to believe that there is any possibility of altered experience.

Explicitly, Chalmers uses this conclusion to support his psychophysical principle of organizational invariance, which states that any functionally identical copy of a conscious brain, regardless of physical substrate, will have experiences qualitatively identical to those of the original system. This has strong implications for many areas. In particular, it opens the door to the construction of “Strong AI,” or artificial intelligence that possesses a conscious mind, which is a very exciting possibility for humanity.

(The relevant supporting text can be found in David Chalmers’ The Conscious Mind, pages 253 to 263)

Chalmers on Physics and Phenomenology

“Physics requires information states but cares only about their relations, not their intrinsic nature; phenomenology requires information states, but cares only about their intrinsic nature. This view postulates a single basic set of information states unifying the two. We might say that internal aspects of these states are phenomenal, and the external aspects are physical. Or as a slogan: Experience is information from the inside; physics is information from the outside.”

The above comes from David Chalmers’ The Conscious Mind and provides a brief account of his attempt to reconcile the phenomenal and physical aspects of the most basic of entities. (To clarify that he is speaking of basic entities: his assertion, taken to an extreme, postulates nothing more than information states as actually existing at a fundamental level.) The view could be translated up to macroscopic structures with careful consideration; he notes the difficulty of such a task in the surrounding text, though I think he overstates the problem. I’ll take up this problem below. (Note that not all of my ideas follow directly from Chalmers’ thesis; I have incorporated ideas from elsewhere, notably Damasio.)

On scaling up from, say, a cell to a full-fledged brain, we start to get successively larger functional units, units with their own informational states, forming a sort of nested hierarchy of phenomenology all the way to the uppermost level: one full self, in the ordinary sense of the word. A problem in this process that he remarks upon is the “jaggedness” that would seemingly result from summing up smaller phenomenal (or proto-phenomenal, if you prefer) sub-units into one coherent whole. In my estimation, this is not necessarily a problem, much in the way that when we sum up individual atoms (or even molecules, to give a better sense of the scale) into physical objects, we do not experience macroscopic objects as “jagged” in any way, but rather as continuous, complete objects. With modern tools of magnification we can peek into the jagged quality of physical objects below the level of everyday experience, but our natural tools of observation (i.e., our eyes) lack the resolution to pick out the underlying jaggedness. In other words, the jaggedness is there, but we do not notice it due to the limited resolution of our perceptual systems. It may be that conscious experience is similar: it, too, possesses a level of jaggedness, but this eludes introspective observation due to the high-level nature of introspection itself. On this view, the implied jaggedness does not detract from Chalmers’ related assertions.

As for the strength of Chalmers’ overall argument, I cannot say. On the surface, it seems plausible, though many would disagree with me on that. At the very least, he has advanced thinking on the matter in a fundamental way. I’ll post a fuller critique of the theory later on.

Chalmers and Damasio

I am currently reading David Chalmers’ The Conscious Mind. In reading, I am finding parallels between his theory and the one Antonio Damasio presents in Self Comes to Mind, which I read a few weeks ago. Damasio’s theory, in brief, rests on biological value and homeostasis. Reduced to its smallest scale, this manifests as even the single-celled organism’s apparent “will to live.” Call it what you like, something along these lines seems to be present in species that far predate the emergence of what we think of as consciousness and intentional qualities. Certainly, few would postulate that a single-celled organism really experiences any sort of desire to live, but could this be a primitive, or primordial (per Damasio), expression of what would later become full-blown consciousness? I’m only a hundred pages or so into Chalmers’ work, but I feel like I may be onto something. I’ll update when I’ve read more.