Memory and Personal Identity

Today in metaphysics I had to write up an impromptu response to the question: “Is memory important to Personal Identity?” I only had thirty minutes to write and not much time to prepare, so it’s a little rough, but I am nonetheless satisfied with what resulted, so I’ve reprinted it here:


Memory is not vitally important to personal identity. This is not to say that it is not useful in our everyday determination of “who’s who” (more on that later), but we can show that it is possible to maintain personal identity without maintaining memory.

Consider the case of Tom, or case 1. Tom is about to be tortured. But the torturer, being a slightly nice guy, proposes the following: before torturing Tom, he will wipe all of Tom’s memories. Should this make Tom feel any better? “Of course not,” I would expect Tom to reply. “I’ll still be tortured; I just won’t remember that it was me who was tortured!” So we are not making the situation any better—we are actually making it worse! Not only are we going to torture Tom, we are also going to turn him into an amnesiac! If this case is persuasive, then we have shown that memory is not necessary for personal identity.

But I think we can go one step further and show that it is not sufficient either. Consider case 2: Exactly the same as before, except this time we will not merely erase Tom’s memories, but instead transfer them to another body, say, Jane’s body. Now, who would Tom, pre-memory transfer, want us to torture, Tom’s body or Jane’s body? I think that, thinking only of himself, he should want us to torture Jane’s body, and here’s why: this case is no different from the first. If removal of memory is not enough to remove personal identity, as case 1 seems to show, then how could implantation of memory create a person? If it were able to do so, then we would have two “Toms” at the end of the procedure: one amnesiac Tom in Tom’s body and one “normal” Tom in Jane’s body—but this seems obviously mistaken: Tom can only be in one place at a time! So which Tom is illusory? Well, if we stick with our judgment for case 1, then it seems we have to say that the “Tom” in Jane’s body is the illusory one. It’s not really Tom; Jane just thinks that she is Tom. Tom is still in Tom’s original body. If all of the preceding is true, then we have shown that memory is neither necessary nor sufficient for personal identity.

At this point, it would be prudent to evaluate just what it is that we are saying, and just what we are not saying. The preceding argument aims to show that memory is not important from a metaphysical standpoint—but this says nothing about the epistemic standpoint. In real life, memory is often all that we have to go on for determining personal identity. How do I know that I am the same person as I was last week? Because I have the memories of what I did last week! If we agree with the preceding argument, though, it would seem as if we were contradicting ourselves, since we cannot determine our own identity based solely on our memories. Well, okay, maybe we can’t: for all we know, some mad scientist implanted false memories into my brain while I slept, and I am not who I think I am. This is not outside the realm of possibility, but it seems pretty unlikely. So, inferring to the best explanation (that, barring unusual circumstances, memory goes hand-in-hand with personal identity), I conclude that I am most likely the same person as my memory tells me I was last week. It’s not certain, but it is very likely. It’s important to note here, though, that these are all epistemic worries. They tell us nothing about the metaphysics of personal identity. I can use memory as a good “indicator” of personal identity, so in that sense it is very important to our conception of personal identity, but that does not mean that the two are inextricably linked. As an analogy, if I hear a dog bark, I usually infer that a dog is nearby, but for all I know someone is merely playing a recording of a dog’s bark and there is, in fact, no dog nearby. In this analogy, personal identity is the dog itself, and the recorded bark is like the memory of Tom in Jane’s body: an indicator that can mislead.


There’s a technical point about memory that I did not have time to address in my original response, but which I want to bring up here. In order for something really to be a memory, it has to stand in a causal relation to the event that it recalls—that is, the event that it recalls must itself have caused the memory to exist. So in this sense, I probably have many things in my head that I would call “memories” but that are not truly memories. Perhaps I am misremembering something, or perhaps I have heard a story of my childhood so many times that, even though, unbeknownst to me, my own memory of the event is gone, I have recreated the scene in sufficient detail to picture it vividly. On this definition of memory, Jane never really had memories of Tom’s life. They felt like memories to Jane, but since they were caused not by the events in Tom’s life but by the torturer’s memory implantation, they are not, in this strict sense, true memories. If you adopt this more nuanced view of memory, then a memory theory of personal identity may be more plausible. Unfortunately, though, you can still show, as per case 1, that memory is not necessary for personal identity.

Personal Identity, Brains and Fission Cases

When it comes to personal identity, the following question needs answering: what does it take for person A at time 1 to be the very same person as person B at time 2? To put it more concretely: right now I am sitting in front of my laptop, typing this post. In, say, ten minutes, there will be a person sitting in front of this laptop, publishing this post. What has to be true of that person for us to say that that person, ten minutes from now, is me? Now, it seems to me that this is a rather strange question for us to be asking, and it may be that we are simply confusing ourselves when we ask it—but let us assume for the present discussion that it is a coherent question to ask, as many contemporary philosophers certainly have, so that we may examine one answer that has been suggested.

The brain view, a slightly more refined version of the body view, says that in order to determine whether or not we have the same person at two different time points, we need to determine whether or not they have the same brain (accordingly, the body view says that we need to track the body—but this, for obvious reasons, can lead us astray). Neuroscience tells us quite assuredly that the brain is, in some way, the seat of what makes a person a person. Inside the brain lies all of the machinery required for memory, learning, personality, and all of the other traits and abilities that ordinarily allow us to identify the people around us as being who we think they are. The problem, however, with simply examining these surface-level features is that they can be mimicked; they can be replicated in a copy, leading us to the false conclusion that the copy is the real thing, just as if we were merely to examine outward bodily features. If we track the causal history of the brain itself, however, we should be able to figure out who is who in a more concrete manner.

So far, so good. We have what seems to be a good thesis: track the brain, track the person. Now we would like to refine the view even further. Is the whole brain necessary for personal identity, or only part of it? We know that in many respects the brain is redundant, having two more-or-less identical copies of each cortical structure—might we need only half of a brain to maintain personal identity? We are not necessarily constrained by specifics here, so let us make a simplifying assumption: each cortical hemisphere is an exact mirror image of the opposite hemisphere (there seems to be nothing in nature that rules this out).

Now consider the following thought experiments: At time 1, Fred is a normal, healthy person. At time 2, he suffers a sudden, catastrophic loss of one of his cortical hemispheres. We now need to ask ourselves, is Fred-2 the same person as Fred-1? Common sense seems to tell us that he is, so perhaps on the brain view one hemisphere is indeed sufficient for maintaining personal identity. Now let us start over: at time 2, instead of Fred simply losing half of his brain, imagine that his brain is removed from his body, half of it is destroyed, and then the remaining half is implanted into the brainless body of Steve. After sufficient recovery from the operation, Steve’s body wakes back up—but who has woken up? On the brain view from before, we would have to say that Fred wakes up in Steve’s body. After all, it is the brain, not the body, that truly matters here. Alright, one more twist. Imagine this time that at time 2, Fred’s brain is again removed from his body, but this time the left half of his brain is implanted into one brainless body, while the right half is implanted into a separate brainless body. I have provided a schematic below to clarify the situation:

[Schematic: Fred’s brain is split in two; the left hemisphere is implanted into one brainless body, the right hemisphere into another.]

We have one body, Lefty, and another body, Righty (the names merely allow us to keep track of which body gets which half of the brain). After sufficient time for recovery, both bodies awaken. Now we again have to ask: who is waking up in each body? We have three options here, it would seem: 1) Fred, the same Fred as Fred-1, is waking up in both bodies; 2) Lefty is Fred-1, but Righty is not (or vice-versa); or 3) neither of the people who wake up is Fred-1; Fred-1 died when the transplant took place. If we remain faithful to our previous conclusions, it would seem that we have to go with choice 1: both Lefty and Righty are equally Fred-1. But this can’t possibly be the case! How can Fred be in two spatial locations at the same time? Is he experiencing both bodies’ perceptions at the same time? If so, how? This simply seems impossible, and I am inclined to agree. Okay, how about option 2? Perhaps Fred-1 is now in Lefty’s body—but wait, what reason do we have for him being in Lefty’s body rather than Righty’s body? Both bodies, as per our simplifying assumption, have exactly the same half of a brain. So much for option 2. We’re now left with a final choice: neither Lefty nor Righty is Fred-1. Fred-1 is dead, no longer in existence.

But if we accept this conclusion, and it seems that we must, what does this say for our first two cases? Is Fred-2 no longer Fred-1 simply because he has lost half of his brain? Something tells us that he has to be the same person. Obviously he is not exactly the same (he now has only half of a brain), but intuition seems to maintain that he is nonetheless still the same person—are we wrong?

I am not sure where exactly I stand on this issue at the moment, but I do have one thought that I think is promising. If the brain is truly duplicated in each hemisphere, but only one is needed for personhood, might there have been two people in Fred-1’s body (that is, one per hemisphere)? We may want to redefine a “person” as two of these “hemisphere-persons” in this case, which leaves us with the following: Fred-1 did not die, but half of him did. Fred-1, in the strictest sense, no longer exists, but part of him does. Returning to the final case, then, none of our original options really suffice. Instead, we would say that half of Fred-1 is in Lefty, while half of Fred-1 is in Righty.

This may not seem to be too strange a conclusion, seeing that each body indeed has half of a brain, but when it comes to identity, it is at least a little weird. We like to think of personal identity as an all-or-nothing relation. You either have the same person or you don’t, nothing in between. It’s not the case that after ten years of life, I am only 80% me. No, I am still me—the same person as I was before, even if my desires, beliefs, etc. have changed a little or a lot in the intervening time period. Should we re-evaluate this intuitive answer?



Brief Thoughts on the Analogy with Vitalism

It is sometimes asserted that consciousness and many of its aspects are illusory. Some of this, I believe, may turn out to be true. For example, our naive conception of conscious will seems to be at least partially illusory (see here and here for my views on that). Some, however, claim that even such fundamental notions as our sense of self are illusory, ultimately claiming that consciousness as a whole will one day be explained away as an illusion. I, as you may have guessed, take serious issue with this claim.

A common line of argument taken to support the claim draws an analogy between the present situation and the endeavor to explain life some hundred years ago. In those days there were the vitalists: theorists who could not imagine that dead matter and its interactions could account for all that there is to complex life—there had to be something extra, some elan vital, or life force, underlying it all. Even if we explain heredity, reproduction, growth, etc., we will still be missing something—namely, life itself. We now know that they were wrong, dead wrong, and that it is, in fact, simply dead matter, interacting in specific ways, that leads to the formation of complex, living organisms.

Now take the assertion made by many contemporary philosophers of mind: even if we explain vision, intelligence, emotion, etc. (i.e., the so-called “easy problems” of consciousness), then we will still be missing something—namely consciousness itself. If we explain all of those things that can be worked out computationally, then we will be missing the very thing that we sought to explain in the first place: subjective, conscious minds. But is this necessarily the case? Might consciousness simply fade away the more and more we know about these other processes? Maybe consciousness is simply the elan vital of philosophy of mind, say proponents of this analogy.

This could not be further from the truth. When it came to explaining life, a higher-level property such as the elan vital was postulated to account for something whose cause we did not know: the difference between living and dead matter. When it comes to consciousness, we are not postulating something above and beyond what we already know. In the words of Descartes, “I think, therefore I am.” His dualism may have been misguided, but he was spot on in stating that there is nothing that we know with more certainty than that we are conscious, that each of us is a self. In this way, the analogy is deeply flawed. Something can only be explained away if it was merely postulated in order to explain some phenomenon that we know to be true. Consciousness, however, was never postulated; it is the very phenomenon that we set out to explain. We may be wrong about the details, and in all likelihood we are, but we are not, as a matter of fact, wrong when we state that we are conscious beings, that consciousness is a real, existing phenomenon that demands explanation in its own right.

Predictability, Determinism and Free Will

In ordinary language, the concepts of predictability and determination are taken to mean roughly the same thing: if something is predictable, then it has definite causes that determine it to be the way it is; conversely, if something has definite causes that determine it to be the way it is, then it is, in principle, predictable. In philosophy, however, these are distinct concepts. Something that is deterministic need not be, in principle, predictable, and again, conversely, something that is predictable need not be deterministic. I will use two examples to illustrate this point, remarking on the second statement first, as I think it is the less significant of the two.

First, we will examine quantum physics. We would like quantum physics to be deterministic, and we may even have good reason to suggest that it must be, but at this point we cannot say with any certainty that it is, in fact, deterministic. Still, even supposing that it is not deterministic, we can use probability-based models to predict, with sufficiently high precision, what the results, or outputs, of a quantum system will be.

Second, and I think more importantly, we can look to the universe at large. If we assume that the universe is entirely deterministic—which, again, we cannot say with any certainty, though we have good reason to think that it is—then it does not follow that everything in the universe need be predictable, even in principle. We could say that a super-being with all the information about every single particle and its momentum could, theoretically, predict the state of the universe at any given time, but if we add materialism to this deterministic universe, the suggestion becomes meaningless. So let us think of it this way: if we want to model a system, we can represent each part of that system in a computer program. In order to do this, we will need to map each bit of information onto its own bit of computer code, in a one-to-one fashion. Put simply, if we want to model a system with ten components, we will need ten bits of computer code, each mapping one of the ten components*. But we cannot do this with the universe at large. By definition, we would need to map every single particle in the universe onto its own bit of computer code—but how? We have already exhausted every single particle in the universe by defining our system to be modeled; we simply have no particles left to make up the computer code for our program. Going back to our system of ten components: if our universe contains only ten particles, then we cannot model this system except by using the system itself as the model, but then we aren’t really modeling it, we are just watching the original system play out naturally. In this way, we can see that even if our universe as a whole is deterministic, we still cannot, in principle, predict everything that is going to happen, because we, in principle, lack the means to do so (excluding the existence of non-physical super-beings).
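The counting argument can be sketched in a few lines of code. This is a toy illustration of my own, using the simplified one-to-one mapping above and, as per the footnote, ignoring the extra storage needed to encode the laws themselves:

```python
# Toy sketch of the self-modeling argument: a model needs at least one
# storage slot per component of the system it represents (one-to-one
# mapping; the storage for the laws themselves is ignored here).

def storage_needed(n_components: int) -> int:
    # One slot per component, in a one-to-one fashion.
    return n_components

def can_model_within(universe_size: int, system_size: int) -> bool:
    # A model fits inside the universe only if, after setting aside the
    # system being modeled, enough particles remain to serve as storage.
    leftover = universe_size - system_size
    return storage_needed(system_size) <= leftover

# Modeling a 10-component subsystem inside a 100-particle universe: fine.
print(can_model_within(100, 10))  # True

# Modeling a 10-particle universe from inside itself: impossible, since
# no particles are left over to hold the model.
print(can_model_within(10, 10))   # False
```

The second call is the crux: when the system to be modeled is the whole universe, the leftover storage is zero by definition, so no internal modeler can exist, whatever its ingenuity.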

To drive this home, I am going to borrow a quote from Richard Feynman:

It’s again this chess game business. If you were in just a corner where only a few pieces were involved, you could work out exactly what’s going to happen. And you can always do that when there’s only a few pieces, so you know you understand it. And yet, in the real game, it’s so many pieces you can’t figure out what’s going to happen. So there was a kind of hierarchy of different complexities. It’s hard to believe—it’s incredible, in fact most people don’t believe—that the behaviour of, say, me, one yack-yack, and you, nodding and all this stuff is the result of lots and lots of atoms all obeying these very simple rules.

To conclude, in a way, I want to remark on the relation between determinism, predictability, and our naive conception of free will. Part of the naive conception of free will is that we can, in principle, act in unpredictable ways: it simply is not the case that someone external to me could predict my own behaviour with perfect precision. Often, the view of determinism, and its lay equivocation with predictability, is seen as an attack on this conception of free will. But using the argument above, we see that this need not be the case. We will never be able to predict the state of the universe at large, and if we cannot do so, we may always be misdefining one of the variables that we use to predict a local, closed system (i.e., for the purpose of this example, a human brain). Determinism does, in fact, have profound implications for free will if it turns out to be true, but they are much more subtle than they might seem at first glance.

*This is an oversimplification. We would also need computer coding for each of the laws describing the relations between the different components, but we will see that we need not even invoke these to illustrate the point.

Indeterminism: What It Is, and What It Isn’t

I want to briefly remark on the concept of indeterminism:

It is sometimes stated that we have two choices: determinism in the strict sense, or probabilistic indeterminism. This could not be further from the truth. Simply because a system is not strictly deterministic does not mean that the only other option is probability, or “lawlessness,” as some have put it. Agent causation is another option (note: it is possible to redefine and subsume agent causation under one of the two former options, but it is not necessary to do so).

That said, it seems to me that the attempt to formulate the problem in this way is not mere carelessness but, in fact, a deliberate attempt by strict determinists to belittle their opponents. Most anti-determinists do not propose that simple probabilistic indeterminism is the right way to go, but rather endorse some form of agent causation, as mentioned above. If you can convince your audience, however, that your opponents are arguing for nothing more than “simple indeterminism” (i.e., the probabilistic form of indeterminism), then you avoid having to actually take on your opponents’ arguments, seemingly strengthening your own position. It is worth noting that some of the arguments that get labeled as indeterminism in this way are actually only arguments against the strictest form of determinism.

This kind of rhetoric is highly counterproductive, and should be attacked whenever it is identified.

Brief Thoughts on Res Cogitans and Res Extensa

Descartes ultimately distinguished between two sorts of substances: those that are extended in space (res extensa) and those that are purely mental (res cogitans). However, physics now tells us that, at their most basic, all those “things”—or “particles,” if you will—that we once labeled as extended are not really extended at all. Atomic and subatomic particles are more accurately described as points of localized mass-energy than as spheres with discrete spatiotemporal dimensions. In light of this, Descartes’ dilemma can be, in a way, resolved: he viewed mental contents as distinct and incapable of scientific description because they lacked physical extension that could be measured. We have now seen, however, that the very “things” we once praised for their apparent extension (i.e., the property that we believed allowed them to be studied scientifically) are not really extended at all. Thus, it could be argued that lack of physical extension is not sufficient grounds for excluding res cogitans, the mental, from scientific inquiry.

(Note: I am not denying any distinction between ordinary physical events and mental events. There certainly is a distinction. I am merely proposing that this view of the distinction may be false, though this is certainly not new.)


A Brief Overview of Integrated Information Theories of Consciousness

I have posted before on the proposed relationship between information theory and conscious, phenomenal states. For a brief background, consider the following: information states have two fundamental attributes, one intrinsic and the other extrinsic, the latter of which can also be called “relational.” Take, for example, a string of bits, say 11001101. In this string, a sequence of 1s and 0s comes to mean something when it is called upon. The individual 1s and 0s can be labelled as the intrinsic elements. The extrinsic aspect, then, refers to the internal structure of the string, which is where the term relational comes in. Each element has a definite position within the string—there is a 1 in the first position, and a 0 in the third position—which marks where it is relative to all of the other elements. We can apply this to consciousness research, some say, by thinking of the intrinsic elements as the subjective side of the issue, or what it is like to be something. The relational parts, on the other hand, represent the third-person perspective that we take when we study physics (that is, when we study the relations between fundamental things; this is why, as I have commented before, physics is utterly hopeless when it comes to understanding phenomenal consciousness).
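The intrinsic/relational distinction can be made concrete with a toy sketch (my own illustration, not drawn from any formal presentation of the theory):

```python
# The "intrinsic" aspect of a bit string is the bare value of each
# element; the "relational" aspect is where each element sits relative
# to the others.

bits = "11001101"

# Intrinsic: the values alone, ignoring position.
intrinsic = [int(b) for b in bits]

# Relational: each value paired with its position in the string
# (1-indexed, matching "a 1 in the first position").
relational = [(i + 1, int(b)) for i, b in enumerate(bits)]

print(intrinsic)      # [1, 1, 0, 0, 1, 1, 0, 1]
print(relational[0])  # (1, 1)  -- a 1 in the first position
print(relational[2])  # (3, 0)  -- a 0 in the third position
```

Note that the intrinsic list alone cannot recover the string if you shuffle it; only the relational pairing preserves the structure that makes 11001101 mean something.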

Now, the view of Integrated Information theorists takes this a step further. (If we did not clarify the relationship between information and consciousness, then it might seem as if we were saying that anything and everything that has an information state—e.g., a thermostat—is therefore conscious in some, perhaps limited, fashion. Some, the panpsychists, do say this, but it is not necessarily the view from Integrated Information.) They claim that the phenomenon of consciousness, while in some ways fundamental to information states, also depends on the integration and differentiation of those information states. Our brains, along with those of many mammals and “lower” species, do an excellent job of fulfilling these requirements. Through less-than-clear mechanisms, our brains are able both to synchronize activity at a global level and to keep information well stratified throughout the layers and structures contained therein. For a counterexample, think of the brain during an epileptic seizure: electrical signals fire at multiple locations simultaneously. It could be said that this represents a form of integration, but the situation also clearly does away with any sort of differentiation. As predicted, seizures are generally accompanied by a loss of consciousness, or at best a diminished conscious state.

It is still hard to see how exactly the tenets of this theory might explain the “why” of consciousness, but it presents, at the least, some interesting ways to think about the “how.”

Determinism, or Indeterminism: That is not the question.

In my Metaphysics class today, the following argument was put up for scrutiny:

1) If determinism is true, then no one acts freely, ever.

2) If indeterminism is true, then no one acts freely, ever.

3) Either indeterminism is true, or determinism is true.

4) Therefore, no one ever acts freely, ever.

5) If no one ever acts freely, ever, then no one is ever responsible for their actions.

Premise 1, in brief, relies on the assumption that if the world is deterministic, then everything that happened today was a necessary consequence of what happened millions of years ago. If everything that happened today was a necessary consequence of events in the distant past, then no person has any control over the present—it is all set in stone, as it were. Free will requires a certain amount of control over present actions, so if this control is absent, then so is free will.

Premise 2, on the other hand, relies on a purely probabilistic definition of indeterminism. If events are indeterministic, which is to say that they are merely an odds game with event A having a 40% probability, and event B having a 60% probability, then we still lack any sort of “control” over the situation. Which event occurs is largely arbitrary, relying only on some unknown odds, written in the sky or otherwise.
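Premise 2’s picture of indeterminism can be rendered as a literal odds game. The following is a toy simulation of my own, using the 40/60 odds from the example above:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# On the purely probabilistic reading, which event occurs is just a
# draw from fixed odds that no agent controls.
draws = random.choices(["A", "B"], weights=[0.4, 0.6], k=10_000)

freq_a = draws.count("A") / len(draws)
print(round(freq_a, 2))  # close to 0.4, yet no single draw was up to anyone
```

The long-run frequencies are perfectly well behaved, which is exactly the point: predictable statistics at the aggregate level leave each individual outcome arbitrary, and arbitrariness is no more "control" than necessity is.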

This is not to say that these are the only ways in which premises 1 and 2 can be formulated, but this is how they were presented in this case.

Most of the objections raised, both in my class and in the literature (from what I’ve seen), have attempted to disprove either premise 1 or premise 2. That is, they argue that there can be free will under determinism, or that there can be free will under indeterminism. Most of these amount to some re-formulation of free will. I will not be taking either of these positions. Instead, I will attack premise 3: that the world is either deterministic or indeterministic.

The core of my argument rests on the claim that premise 3 presents a false dilemma: it is either determinism or indeterminism, but not both. I assert that it is, indeed, both, or at the very least that we are not in a position to rule this possibility out. Current physics, which is where most of these theories claim to find their support, does not itself claim to have sorted this issue out. We know that under certain circumstances, such as at microscopic scales, the world behaves in an apparently indeterministic way. Under other circumstances, such as at macroscopic scales, the world behaves in an apparently deterministic way. Many propose that we can link these two and show that it is really one, and not the other, in virtue of a fundamental property of nature: namely parsimony—or, that the universe is, at its most fundamental, simple (simple in the sense that it can all be reduced to more or less the same thing). But what they miss is that it does not have to be this way. There is, in fact, no law that says that the universe must be simple. It may very well turn out that the universe is complicated, perhaps even too complicated for us to understand, in the proper sense of the word.

(The following is mere speculation; I have absolutely no empirical basis for the ideas below. I still, personally, find a great deal of plausibility in them, but you have been warned!)

Building off of this, and the fact that most of the arguments that place free will either in a purely deterministic or a purely indeterministic light typically have to resort to a reformulation of free will itself, I now assert that free will is only a coherent construct in a world that is both deterministic and indeterministic. What I propose is the following, which relates this more specifically to the theme of this blog: free will can only exist in conscious creatures. This may seem unnecessary to state in so many words, but the following should provide reasons for it. Complex brains are, in a general sense, specialized organs for planning and deliberation. Given that the microscopic events of this world are largely indeterministic, and that the macroscopic events are largely deterministic, we can postulate the following: brains serve to make sense of a vast multitude of indeterminacy. Through the process of evolution, and, to steal a phrase from a neuroscientist I once knew, thanks to the goddess of molecular evolution, they came to be in a position to turn underlying indeterminacy into coherent, conscious actions. This is not an appeal to a “collapse-of-the-wave-function” view of consciousness, to be clear. Rather, it is an attempt to reconcile the disparate aspects of reality into one coherent framework.

We can use this argument to strike down some of the objections raised to both purely deterministic and purely indeterministic accounts of free will. One variety of the former asserts that if you could not have acted otherwise, then you could not have acted freely, as stated above. If there is some underlying indeterminacy, however, this is clearly not the case: there are, in fact, a multitude of different ways in which you could have acted. Aha! But doesn’t this just reduce to a variety of the argument from indeterminacy—that actions are merely arbitrary instantiations of probabilities? This is where the deterministic aspect of reality kicks in. Once the most basic underlying facts about the world are set, in a probabilistic fashion, determinism takes over. For this, I draw on an idea put forth by John Searle, downward causation, though in no way do I claim to restate his argument. The higher-order functions of the brain, namely consciousness, do indeed have “causes” that exist as smaller, microscopic bits, but these higher-order functions also have the ability to rain down causation on those smaller bits, much in the way that higher-level economic forces can influence the activities of lower-level commodities. Neither of these can be “smoothly reduced,” as Searle puts it, to the other, but that does not imply that one or the other does not exist, or fails to play a meaningful role. In fact, Searle notes that reduction of one thing to another typically serves the purpose of showing that one of those things does not exist, not the other way around, as is often claimed.

This may seem counter-intuitive, and in some ways it does have to reformulate the popular idea of free will. In particular, it draws a distinction between free will at its most basic, on the one hand, and conscious will on the other. Conscious will, the idea that you are consciously in control of all of your actions and thoughts, is demonstrably false: a handful of psychological experiments demonstrating non-conscious biases and predispositions shows this very simply. But this is not what we are talking about when we say free will, or so I claim. Free will is much more general than the limited notion of conscious will. At its most basic, it requires that you be capable of acting in ways that rely on intentional stances. Even if you are not consciously aware of your decisions to act in certain ways, it is still you who is making them. You are your brain, and everything that comes along with it. Simply because something is non-conscious does not make it any less a part of you. This may clash with the popular account of who you are, but at the end of the day, you are made up of more non-conscious pieces than conscious pieces, so restricting our definition of free will to the conscious pieces makes little sense. Now, this is not to say that our conscious feeling of free will is irrelevant, but it is a different matter entirely, more an epistemic question than a metaphysical one.

Minds and Computers

In everyday conversation, brains are often equated with “computers.” This intellectual laziness, as it were, has led almost an entire generation of academics to assert that minds are nothing more than programs, run on the machinery of the brain. In this post, I hope to clear up a few confusions and oversights related to this position. I do not claim to be the first to say these things, but I would like to round up a few of these disconnected views and add my own thoughts where they seem useful.

The issue is that to claim that the brain is like something—say, a computer—is to presuppose that we have some idea of how the brain really works. We do not. We certainly know a lot about the brain, but most of it concerns small, isolated events (action potentials) or very coarse-grained images (e.g., fMRI), so anyone who claims to have a complete view of how the brain processes, integrates, and distributes information is very misguided, to say the least.

There is certainly a lot more about the brain that we have learned since this analogy was first put forth in the literature. We have learned how networks of neurons function coherently to produce meaningful representations. We have learned about oscillations in the brain that help unify otherwise disjoint brain functions (gamma waves are especially exciting in this regard). We have learned about local field potentials, which are related to the signals recorded by EEG, and how they help modulate neighborhoods of neurons. In this sense, the brain is still a kind of information processor, but one far more complex than we once thought it to be, and this is precisely where the analogy with the computer breaks down.

Computers are designed with distinct functional units that attempt to minimize interference from surrounding units. Neurons, the functional units of the brain, certainly do this to an extent—if they did not, the careful modulation of membrane potentials necessary for coherent communication would be impossible to maintain. They are nevertheless heavily influenced by the signals and associated field potentials passing through the regions of the brain they occupy. Neurons could not function properly in isolation (by “function,” I mean in a way that is conducive to conscious experience); they require complex interactions among themselves—and with the body they represent—the likes of which we do not see in their silicon counterparts. The most important of these relations is what might be called, in chemical terms, “dynamic constancy.” The brain is not a static system. It is constantly changing: in order to understand a single word, much less a sentence, it must alter synaptic connections. Sometimes this involves strengthening a handful of them through the accumulation of calcium ions in pre- or post-synaptic terminals*. Other times it involves the generation of entirely new synapses through complex genetic regulatory mechanisms. It is astounding to realize that many everyday actions require the synchronized activation and deactivation of entire networks of genes, often several times over. Nonetheless, all of this constant modification leaves, at the end of the day, more or less the same brain that started the day: hence “dynamic constancy.”

We may one day be able to “create” something capable of conscious reflection, and we may even call it a computer when the time comes, but it will not be anything recognizable as a computer by today’s standards. Likely it will incorporate some aspects of biological matter; perhaps actual living cells will turn out to be the only solution. The point of all of this is that minds are not merely computer programs: in the brain, the distinction between program and hardware does not exist. The mind is the brain, and the brain is the mind, and nothing more.

*It is especially exciting to learn that the durations of various synaptic modulating processes line up remarkably well with discoveries made quite independently in cognitive psychology. For example, estimates of the duration of mono-synaptic facilitation, which can rely on the accumulation of calcium ions as mentioned above, tend to match estimates of the durations of various forms of working memory.

An Introduction to the Problem

The problems of consciousness are many and varied, but I think it would prove useful to put forth a rough description nonetheless.

What it comes down to is the following: there is something that it is like to be a conscious being. You can describe the brain as much as you like and abstract away from the details, but all of these purely physical, third-person accounts seem inevitably to miss some crucial element, namely consciousness itself. Brains can instantiate causal relations; we know that. Brains can control behaviour; we know that, too. But how do physical brains give rise to subjective, mental contents? This is the most fundamental problem, the one from which all of the remaining issues derive.

Everywhere else we look in the universe, we see physical entities and nothing more; we see objectivity. In a way, the entirety of scientific discovery depends, in crude form, on this, so how are we even to approach the problem at hand? All of our traditional tools of measurement, explanation, and prediction rely on roughly deterministic, objective natures. Consciousness alone stands in the face of this, or at least it appears to. We would like a theory that accounts for this, but how to find one is not at all clear. In fact, it could be said that conscious existence is one of the few remaining problems in science for which we do not even know how to ask the question. This should help explain all of the contradictory accounts that are thrown around on a day-to-day basis. It helps explain why so many resorted, in desperation, to dualism: life would be a lot simpler if there were some mysterious substance just out of reach that could do exactly what we need it to. But this is probably not the case, and most modern theorists understand this. Equally, it helps explain a great deal of the contemporary resistance to the downfall of materialism (or physicalism, if you prefer). Materialism is all that we know, it seems; are we not lost if we give up on it? From a more reflective point of view, though, and with time, I think these worries can be overcome.

To the materialist’s cry of desperation, think of this: physicalism is not all that you know. You know your own conscious mind far more soundly, even if in an irritatingly limited manner. To the dualist: not all avenues of inquiry have been exhausted. In fact, we have likely barely scratched the surface of this enormously complex phenomenon. Where we go from here, I do not know, and it is probably anyone’s guess, but we at least have some idea of what we need to think about, and the work of those before us will allow us to avoid their pitfalls and misfortunes. This is, I think, the greatest question that lies before us as a species, and I think we may, finally, be prepared to tackle it head on.
