Consciousness, Self, and the Prefrontal Cortex

There is a basic question that must be addressed when pondering the nature of consciousness, and that is: why have consciousness at all? The brain processes a great deal of information below the level of conscious awareness, from the visual to the auditory to the tactile, and integrates all of it before any of it can be brought into conscious awareness. Yet conscious awareness itself seems far more limited in the amount of information it can handle at a time, on the order of 5 to 9 “chunks”. So why rely on conscious awareness as heavily as we do? From this angle, at least, it certainly seems much less capable than non-conscious processing; yet, given its apparent efficacy in raising humanity to the heights of culture and insight that we enjoy today, it surely has something essential to offer us.

The prefrontal cortex is the most recent structure to appear in the evolution of the brain, and it is the structure that shows the greatest development between humans and our closest biological relatives. Furthermore, it is known to mediate many of the abilities considered distinctly human, such as planning, reflection, and empathy, all of which apparently require conscious awareness. Surprisingly, however, the vast majority of the projections that the prefrontal cortex sends back to more primitive, sub-cortical structures are inhibitory: they function largely to suppress activity in those regions. This has led several researchers to rethink the concept of free will and, somewhat amusingly, refer to it rather as “free won’t,” in that we are mainly choosing what not to do out of all the responses recommended by sub-cortical structures. And this is where we might find a reason for conscious awareness.

Consciousness provides a crucial ingredient for dealing with the world in the way that the prefrontal cortex specializes in doing: it decouples behavior from the moment-to-moment sensory perceptions incessantly presenting themselves to sub-cortical brain regions. Instead of constantly responding to each and every stimulus as it comes in, consciousness introduces a disconnect that allows reality apart from oneself to be treated as perceived, and thus as distinct from the self and manipulable. Non-conscious responses do not require perception in the same way that conscious processes do. In order to consciously ponder a course of action while planning, you need a virtual representation to work with, and for that you need some distance between yourself and the object being represented. Everyday perceptions, such as the visual field in front of you, may function in a very similar manner: a stimulus presents itself, is processed by sub-cortical structures, and a course of action is then offered up to conscious awareness to be adopted or discarded by conscious reflection. There is a whiff of “opponent processing” in this narrative, something that comes up often in systems biology: two structures working in opposite directions in order to better center on a single desired outcome. Non-conscious, sub-cortical processing is largely reactive, leading at times to extreme, reflexive responses; conscious prefrontal processing, on the other hand, divorced from the constant demands of the environment, is more receptive to multiple courses of action, but can sometimes leave us unable to settle on any of them. With the two working toward opposing aims, however, behavior that is reactive enough to survive, yet receptive enough to make one a functioning member of society, can be attained.
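
To make the opponent-processing picture a bit more concrete, here is a deliberately toy sketch. Nothing in it is meant to be neurobiologically accurate; the function names, scores, and threshold are invented for illustration only. A “reactive” stage proposes responses to a stimulus, and an “inhibitory” stage vetoes most of them, which is roughly the “free won’t” idea described above.

```python
# Toy illustration of "free won't": a reactive stage proposes responses,
# an inhibitory stage suppresses most of them. Purely illustrative; the
# names, scores, and threshold are invented, not a model of real circuitry.

def subcortical_proposals(stimulus):
    """Reactive stage: a stimulus immediately suggests candidate responses
    (the stimulus itself is ignored in this toy)."""
    return {
        "flinch": 0.9,            # fast, reflexive
        "shout": 0.6,
        "pause_and_assess": 0.3,  # slow, deliberative
    }

def prefrontal_veto(proposals, tolerance=0.5):
    """Inhibitory stage: veto responses more reflexive than the current
    context tolerates, leaving the remainder to choose from."""
    return {act: urgency for act, urgency in proposals.items()
            if urgency <= tolerance}

permitted = prefrontal_veto(subcortical_proposals("loud noise"))
print(permitted)  # {'pause_and_assess': 0.3}: the "choice" is what survives inhibition
```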

This is far from a coherent theory or hypothesis, but the parallels between the roles of sub-cortical and non-conscious processes on the one hand, and prefrontal and conscious processes on the other, along with the connections between the two, are surely going to be important in mapping human consciousness.

Personal Identity, Brains and Fission Cases

When it comes to personal identity, the following question needs answering: what does it take for person A at time 1 to be the very same person as person B at time 2? Perhaps more clearly, right now I am sitting in front of my laptop, typing this post. In, say, ten minutes, there will be a person sitting in front of this laptop, publishing this post. What has to be true of that person for us to say that that person, ten minutes from now, is me? Now, it seems to me that this is a rather strange question for us to be asking, and it may be that we are simply confusing ourselves when we ask it—but let us assume for the present discussion that it is a coherent question to ask, as many contemporary philosophers certainly have, so that we may examine one answer that has been suggested.

The brain view, a slightly more refined version of the body view, says that in order to determine whether or not we have the same person at two different time points, we need to determine whether or not they have the same brain (the body view, correspondingly, says that we need to track the body—but this, for obvious reasons, can lead us astray). Neuroscience tells us with considerable confidence that the brain is, in some way, the seat of what makes a person a person. Inside the brain lies all of the machinery required for memory, learning, personality, and all of the other traits and abilities that ordinarily allow us to identify the people around us as being who we think they are. The problem with simply examining these surface-level features, however, is that they can be mimicked or replicated in a copy, leading us to the false conclusion that the copy is the real thing, just as if we were merely to examine outward bodily features. If we track the causal history of the brain itself, however, we should be able to figure out who is who in a more concrete manner.

So far, so good. We have what seems to be a good thesis: track the brain, track the person. Now we would like to refine the view even further. Is the whole brain necessary for personal identity, or only part of it? We know that in many respects the brain is redundant, having two more-or-less identical copies of each cortical structure—might we need only half of a brain to maintain personal identity? We are not necessarily constrained by the specifics here, so let us make a simplifying assumption: each cortical hemisphere is an exact mirror image of the opposite hemisphere (there seems to be nothing in nature that makes this impossible).

Now consider the following thought experiments: At time 1, Fred is a normal, healthy person. At time 2, he suffers a sudden, catastrophic loss of one of his cortical hemispheres. We now need to ask ourselves: is Fred-2 the same person as Fred-1? Common sense seems to tell us that he is, so perhaps on the brain view one hemisphere is indeed sufficient for maintaining personal identity. Now let us start over: at time 2, instead of Fred simply losing half of his brain, imagine that his brain is removed from his body, half of it is destroyed, and the remaining half is implanted into the brainless body of Steve. After sufficient recovery from the operation, Steve’s body wakes back up—but who has woken up? On the brain view from before, we would have to say that Fred wakes up in Steve’s body. After all, it is the brain, not the body, that truly matters here. Alright, one more twist. Imagine this time that at time 2, Fred’s brain is again removed from his body, but now the left half of his brain is implanted into one brainless body, while the right half is implanted into a separate brainless body. The schematic below clarifies the situation:

[Schematic: Fred’s left hemisphere is implanted into one brainless body (Lefty), his right hemisphere into another (Righty).]

We have one body, Lefty, and another body, Righty (the names merely allow us to keep track of which body gets which half of the brain). After sufficient time for recovery, both bodies awaken. Now we again have to ask: who is waking up in each body? We have three options here, it would seem: 1) Fred, the same Fred as Fred-1, is waking up in both bodies; 2) Lefty is Fred-1, but Righty is not (or vice-versa); or 3) neither of the people who wake up is Fred-1; Fred-1 died when the transplant took place. If we remain faithful to our previous conclusions, it would seem that we have to go with choice 1: both Lefty and Righty are equally Fred-1. But this can’t possibly be the case! How can Fred be in two spatial locations at the same time? Is he experiencing both bodies’ perceptions at the same time? If so, how? This simply seems to be impossible, and I am inclined to agree. Okay, how about option 2? Perhaps Fred-1 is now in Lefty’s body—but wait, what reason do we have for him being in Lefty’s body rather than Righty’s body? Both bodies, as per our simplifying assumption, have exactly the same half of a brain as the other. So much for option 2. We’re now left with a final choice: neither Lefty nor Righty is Fred-1. Fred-1 is dead, no longer in existence.

But if we accept this conclusion, and it seems that we must, what does this say about our first two cases? Is Fred-2 no longer Fred-1 simply because he has lost half of his brain? Something tells us that he has to be the same person. Obviously he is not exactly the same, since he now has only half of a brain, but intuition seems to maintain that he is nonetheless still the same person—are we wrong?

I am not sure where exactly I stand on this issue at the moment, but I do have one thought that I think is promising. If the brain is truly duplicated in each hemisphere, but only one is needed for personhood, might there have been two people in Fred-1’s body (that is, one per hemisphere)? We may want to redefine a “person” as two of these “hemisphere-persons” in this case, which leaves us with the following: Fred-1 did not die, but half of him did. Fred-1, in the strictest sense, no longer exists, but part of him does. Returning to the final case, then, none of our original options really suffice. Instead, we would say that half of Fred-1 is in Lefty, while half of Fred-1 is in Righty.

This may not seem too strange a conclusion, seeing that each body does indeed have half of a brain, but when it comes to identity it is at least a little weird. We like to think of personal identity as an all-or-nothing relation: you either have the same person or you don’t, nothing in between. It’s not the case that after ten years of life, I am only 80% me. No, I am still me—the same person as I was before, even if my desires, beliefs, etc. have changed a little or a lot in the intervening time period. Should we re-evaluate this intuitive answer?

Conversations on Consciousness

Since the spring semester is now in full swing, it has been increasingly difficult for me to devote the time I would like to a full-length treatment of consciousness. With this in mind, I picked up a copy of Susan Blackmore’s Conversations on Consciousness the other day, and I have not regretted it. It serves as an informal introduction to several competing theories and views on the scientific study of the brain and conscious experience. The set-up of the book is simple: interviews with the researchers and philosophers themselves, explaining their theories in their own colloquial way. Add to this a few questions on free will and the fate of consciousness after death, and you get twenty of the most interesting conversations that I’ve ever had the pleasure to read. They allow, among other things, the chance to see these researchers as actual people, and not simply as the objective reporters of experimental results that they are far too often made out to be.

I have only one complaint about the book, which you may have guessed given the author, Susan Blackmore. Put simply, Blackmore endorses a few very fringe (and some, myself included, would say absurd) ideas about the nature of consciousness. Normally, this would not be an issue—everyone is entitled to their own opinions, and I often disagree with the authors that I read. The complaint comes, however, with the questions that Blackmore chooses to ask her interviewees. The book carries the subtitle “What the Best Minds Think about the Brain, Free Will, and What it Means to be Human.” At times, though, it seems like it would read more accurately as “What the Best Minds Think about My Ideas about the Brain, Free Will, and What it Means to be Human.” This does not dominate, and most of the time the book stays true to its stated purpose (I thank the interviewees for most of this; they do a good job of sticking to what they believe are the real issues), but it is, at times, distracting. It is not enough to keep this from being a great book, well worth reading, but it should be noted nonetheless.

I would like to give this five out of five stars—it was great fun to read—but the above complaint makes me unable to do so. That said, I would give the interviewees five out of five stars and Blackmore four out of five stars, so let’s say that the book as a whole gets four and a half out of five stars.

The Rediscovery of the Mind

I finished reading Searle’s The Rediscovery of the Mind last week, and it was quite an exciting read, to say the least. In a field full of confusing, frustrating, and downright baffling theories and assertions, a little bit of no-nonsense pseudo-polemic writing can be a breath of fresh air, and this book is just that.

At its heart, this book is an argument for Searle’s own theory of mind, Biological Naturalism, which can be summed up as saying that the brain, under the right conditions, gives rise to conscious experience in the same way that water, under the right conditions, gives rise to liquidity. Even more fundamentally, however, Searle uses this book to remind us all of what we are really doing when we propose theories of mind, hopefully in a way that helps us recognize the obvious mistakes we make all too often.

Searle closes the book with a near-perfect set of guidelines, which I have reprinted below because, even if you aren’t able to read the book in its entirety, I think you should consider them:

In spite of our modern arrogance about how much we know, in spite of the assurance and universality of our science, where the mind is concerned we are characteristically confused and in disagreement. Like the proverbial blind men and the elephant, we grasp onto some alleged feature and pronounce it the essence of the mental. ‘There are invisible sentences in there!’ (the language of thought). ‘There is a computer program in there!’ (cognitivism). ‘There are only causal relations in there!’ (functionalism). ‘There is nothing in there!’ (eliminativism). And so, depressingly, on.

Just as bad, we let our research methods dictate the subject matter, rather than the converse. Like the drunk who loses his car keys in the dark bushes but looks for them under the streetlight, ‘because the light is better here,’ we try to find out how humans might resemble our computational models rather than trying to figure out how the conscious human mind actually works. I am frequently asked, ‘But how could you study consciousness scientifically? How could there be a theory?’

I do not believe there is any simple or single path to the rediscovery of the mind. Some rough guidelines are:

First, we ought to stop saying things that are obviously false. The serious acceptance of this maxim might revolutionize the study of the mind.

Second, we ought to keep reminding ourselves of what we know for sure. For example, we know for sure that inside our skulls there is a brain, sometimes it is conscious, and brain processes cause consciousness in all its forms.

Third, we ought to keep asking ourselves what actual facts in the world are supposed to correspond to the claims we make about the mind. It does not matter whether ‘true’ means corresponds to the facts, because ‘corresponds to the facts’ does mean corresponds to the facts, and any discipline that aims at describing how the world is aims for this correspondence. If you keep asking yourself this question in the light of the knowledge that the brain is the only thing in there, and the brain causes consciousness, I believe you will come up with the results I have reached in this chapter, and indeed many of the results I have come up with in this book.

But that is only to take a first step on the road back to the mind. A fourth and final guideline is that we need to rediscover the social character of the mind.

If this closing passage appeals to you, I would recommend that you read this book in its entirety. I certainly found it well worth the effort (and it is an effort). Five out of five stars—I’ll be revisiting this many times.

Predictability, Determinism and Free Will

In ordinary language, the concepts of predictability and determinism are taken to mean roughly the same thing: if something is predictable, then it has definite causes that determine it to be the way it is; conversely, if something has definite causes that determine it to be the way it is, then it is, in principle, predictable. In philosophy, however, these are distinct concepts. Something that is deterministic need not be, in principle, predictable, and, conversely, something that is predictable need not be deterministic. I will use two examples to illustrate this point, remarking on the second statement first, as I think it is the less significant of the two.

First, we will examine quantum physics. We would like quantum physics to be deterministic, and may even have good reason to suggest that it must be, but at this point we cannot say with any certainty that it is, in fact, deterministic. Still, even supposing that it is not deterministic, we can use probability-based models to predict, with sufficiently high precision, what the results, or outputs, of a quantum system will be.
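
As a small illustration of that last point, consider the sketch below. The 50/50 outcome is an invented stand-in, not a model of any particular quantum experiment; the point is only that individual outcomes can be unpredictable while the aggregate statistics remain predictable to high precision.

```python
# Individually unpredictable outcomes, statistically predictable aggregates.
# The 50/50 "measurement" is an invented stand-in, purely for illustration.
import random

def measure():
    return random.choice([0, 1])  # no way to predict any single outcome

trials = [measure() for _ in range(100_000)]
print(sum(trials) / len(trials))  # reliably close to 0.5
```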

Second, and I think more importantly, we can look to the universe at large. If we assume that the universe is entirely deterministic—which, again, we cannot say with any certainty, but have good reason to think is the case—then it does not follow that everything in the universe is predictable, even in principle. We could say that a super-being with all the information about every single particle and its momentum could, theoretically, predict the state of the universe at any given time, but if we add materialism to this deterministic universe, this suggestion becomes meaningless. So let us think of it this way: if we want to model a system, we can represent each part of that system in a computer program. In order to do this, we will need to map each bit of information onto its own bit of computer code, in a one-to-one fashion. Put simply, if we want to model a system with 10 components, we will need 10 bits of computer code, each mapping one of the 10 components*. But we cannot do this with the universe at large. By definition, we would need to map every single particle in the universe onto its own bit of computer code—how can we do this? We have already exhausted every single particle in the universe by defining our system to be modeled; we simply have no particles left to make up the computer code for our program. Going back to our system of 10 components: if our universe contains only 10 particles, then we cannot model this system except by using the system itself as the model, but then we aren’t really modeling it, we are just watching the original system play out naturally. In this way, we can see that, even if our universe as a whole is deterministic, we still cannot, in principle, predict everything that is going to happen, because we lack, in principle, the means to do so, excluding the existence of non-physical super-beings.
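
The bookkeeping behind this argument can be sketched in a few lines. The numbers and the update rule below are arbitrary, chosen only for illustration: a deterministic simulation needs at least one stored variable per component it models, so a universe of 10 particles has nothing left over out of which to build a model of itself.

```python
# Sketch of the one-to-one mapping point: modelling a deterministic system of
# N components requires at least N units of storage in the modeller.
# The numbers and the update rule are arbitrary, chosen only for illustration.

N = 10
system = list(range(N))   # the "universe": 10 components
model = list(system)      # the model needs its own copy: 10 further units

def step(state):
    # some fixed deterministic rule
    return [(x * 3 + 1) % 17 for x in state]

prediction = step(model)  # predicting the system means running the rule on the copy...
print(prediction)
# ...but if the universe itself contains only N particles, there is nothing
# left over out of which to build `model` in the first place.
```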

To drive this home, I am going to borrow a quote from Richard Feynman:

It’s again this chess game business. If you were in just a corner where only a few pieces were involved, you could work out exactly what’s going to happen. And you can always do that when there’s only a few pieces, so you know you understand it. And yet, in the real game, it’s so many pieces you can’t figure out what’s going to happen. So there was a kind of hierarchy of different complexities. It’s hard to believe—it’s incredible, in fact most people don’t believe—that the behaviour of, say, me, one yack-yack, and you, nodding and all this stuff is the result of lots and lots of atoms all obeying these very simple rules.

To conclude, in a way, I want to remark on the relation between determinism, predictability, and our naive conception of free will. Part of the naive conception of free will is that we can, in principle, act in unpredictable ways: it simply is not the case that someone external to me could predict my behaviour with perfect precision. Often, determinism, and its lay conflation with predictability, is seen as an attack on this conception of free will. But using the argument above, we see this need not be the case. We will never be able to predict the state of the universe at large, and if we cannot do so, we may always be misspecifying one of the variables that we use to predict a local, closed system (for the purpose of this example, a human brain). Determinism does, in fact, have profound implications for free will if it turns out to be true, but they are much more subtle than they might seem at first glance.

*This is an oversimplification. We would also need computer coding for each of the laws describing the relations between the different components, but we will see that we need not even invoke these to illustrate the point.

On Schools of Thought in the Sciences

Joseph Schumpeter:

A man expressing his political will and the same man expressing a theory in the lecture hall are two different people . . . Especially in my case, ladies and gentlemen, because I never wish to conclude. If I have a function, then it is not to close, but rather to open doors, and I never felt the urge to create something like [my own] school [of thought] . . . Quite a few people are upset about this point of view, because there are [many] who feel they are the leaders of such schools, who feel like fighters for total light against total darkness. That gets expressed in the harsh criticisms that one school levies against the other. But it doesn’t make any sense to fight about these things. One shouldn’t fight about things that life is going to eliminate anyhow at some point. In science momentary success is not as important as it is in the economy and in politics. We can only say that if something prevails in science, it has proven its right to exist; and if it isn’t worth anything, then it’s going to die anyway. I for myself completely accept the verdict of coming generations.

Minds, Brains and Science

John Searle’s Minds, Brains and Science is a collection of the six Reith Lectures that he gave in 1984 on the relation between our conscious, meaningful, phenomenal experiences and the backdrop of nonconscious, meaningless, objective physical reality against which all of the former inevitably play out. Essentially, the problem is this: We experience things, but everywhere else we look in the universe, we do not see experiences. How do we explain this seemingly trivial fact and make consciousness fit in with everything else we know?

This is, inevitably, a question that he is unable to answer definitively, but the lectures nonetheless make for a successful and worthwhile work. They were intended for a lay audience, so they are largely non-technical, but as far as I can tell, none of the necessary content has been lost. Searle is still able to explicate each of the sub-problems and arguments very well. He minimizes caricaturing his opponents while keeping the focus on what he thinks are the real issues. All the while, he manages to introduce several new ideas. His remarks on the social sciences and the freedom of the will are especially noteworthy.

In its entirety, the read comes down to less than a hundred pages, making it a perfect introduction to the problem at large. At the same time, it retains enough depth to catch the eye of even the most weathered philosopher-scientist—and for that, I give it five out of five stars.

Searle in Two Quotes

Today, I am reading John Searle’s Minds, Brains and Science, which is essentially an edited transcript of his 1984 Reith Lectures. I read two quotes that I thought were worth sharing, one for its humor, and the other for its insight. Enjoy!

Various replies have been suggested to this [the Chinese Room] argument by workers in artificial intelligence and in psychology, as well as philosophy. They all have something in common; they are all inadequate. And there is an obvious reason why they have to be inadequate, since the argument rests on a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.

I think that he is almost certainly right here, but the manner in which he formulates this paragraph is nothing short of comedic perfection. My own thoughts on the subject can be found in my article “Minds and Computers.”

Suppose no one knew how clocks worked. Suppose it was frightfully difficult to figure out how they worked, because, though there were plenty around, no one knew how to build one, and efforts to figure out how they worked tended to destroy the clock. Now suppose a group of researchers said, ‘We will understand how clocks work if we design a machine that is functionally the equivalent of a clock, that keeps time just as well as a clock.’ So they designed an hour glass, and claimed: ‘Now we understand how clocks work,’ or perhaps: ‘If only we could get the hour glass to be just as accurate as a clock we would at last understand how clocks work.’ Substitute ‘brain’ for ‘clock’ in this parable, and substitute ‘digital computer program’ for ‘hour glass’ and the notion of intelligence for the notion of keeping time and you have the contemporary situation in much (not all!) of artificial intelligence and cognitive science.

Determinism, or Indeterminism: That is not the question.

In my Metaphysics class today, the following argument was put up for scrutiny:

1) If determinism is true, then no one acts freely, ever.

2) If indeterminism is true, then no one acts freely, ever.

3) Either indeterminism is true, or determinism is true.

4) Therefore, no one ever acts freely, ever.

5) If no one ever acts freely, ever, then no one is ever responsible for their actions.
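
Purely for clarity, the propositional skeleton of premises 1 through 4 can be written out formally. The following is a minimal Lean sketch, where D, I, and F are placeholder propositions I have introduced for determinism, indeterminism, and “someone acts freely”; whatever one makes of the premises, the step from premises 1 through 3 to line 4 is just case analysis on the disjunction.

```lean
-- Propositional skeleton of the argument (D, I, F are placeholder names).
example (D I F : Prop)
    (p1 : D → ¬F)   -- (1) determinism rules out free action
    (p2 : I → ¬F)   -- (2) indeterminism rules out free action
    (p3 : D ∨ I)    -- (3) one of the two holds
    : ¬F :=         -- (4) therefore, no one acts freely
  p3.elim p1 p2     -- case analysis on the disjunction
```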

Premise 1, in brief, relies on the assumption that if the world is deterministic, then everything that happened today was a necessary consequence of what happened millions of years ago. If everything that happened today was a necessary consequence of events in the distant past, then no person has any control over the present—it is all set in stone, as it were. Free will requires a certain amount of control over present actions, so if this control is absent, then so is free will.

Premise 2, on the other hand, relies on a purely probabilistic definition of indeterminism. If events are indeterministic, which is to say that they are merely an odds game with event A having a 40% probability, and event B having a 60% probability, then we still lack any sort of “control” over the situation. Which event occurs is largely arbitrary, relying only on some unknown odds, written in the sky or otherwise.

This is not to say that these are the only ways in which premises 1 and 2 can be formulated, but this is how they were presented in this case.

Most of the objections raised, both in my class and in the literature (from what I have seen), have attempted to disprove either premise 1 or premise 2. That is, they argue that there can be free will under determinism, or that there can be free will under indeterminism. Most of these amount to some re-formulation of free will. I will not be taking either of these positions. Instead, I will attack premise 3: that the world is either deterministic or indeterministic.

The core of my argument rests on the claim that premise 3 presents a false dilemma: it says the world is either deterministic or indeterministic, but not both. I assert that it is, indeed, both, or at the very least that we are not in a position to rule this possibility out. Current physics, which is where most of these theories claim to find their support, does not itself claim to have sorted this issue out. We know that under certain circumstances, such as at microscopic scales, the world behaves in an apparently indeterministic way. Under other circumstances, such as at macroscopic scales, the world behaves in an apparently deterministic way. Many propose that we can link these two and show that it is really one and not the other, in virtue of a fundamental property of nature: namely parsimony—that is, that the universe is, at its most fundamental, simple (simple in the sense that it can all be reduced to more or less the same thing). But what they miss is that it does not have to be this way. There is, in fact, no law that says that the universe must be simple. It may very well turn out that the universe is complicated, perhaps even too complicated for us to understand, in the proper sense of the word.

(What follows is mere speculation; I have absolutely no empirical basis for the ideas below. I still, personally, find a great deal of plausibility in them, but you have been warned!)

Building off of this, and the fact that most of the arguments that place free will either in a purely deterministic or a purely indeterministic light typically have to resort to a reformulation of free will itself, I now assert that free will is only a coherent construct in a world that is both deterministic and indeterministic. What I propose is the following, which relates this more specifically to the theme of this blog: free will can only exist in conscious creatures. This may seem unnecessary to state in so many words, but the following should provide reasons for it. Complex brains are, in a general sense, specialized organs for planning and deliberation. Given that the microscopic events of this world are largely indeterministic, and that the macroscopic events are largely deterministic, we can postulate the following: brains serve to make sense of a vast multitude of indeterminacy. Through the process of evolution, and, to steal a phrase from a neuroscientist I once knew, thanks to the goddess of molecular evolution, they came to be in a position to turn underlying indeterminacy into coherent, conscious actions. This is not an appeal to a “collapse-of-the-wave-function” view of consciousness, to be clear. Rather, it is an attempt to reconcile the disparate aspects of reality into one coherent framework.

We can use this argument to strike down some of the objections raised to both purely deterministic and purely indeterministic accounts of free will. One variety of the former asserts that if you could not have acted otherwise, then you could not have acted freely, as stated above. If there is some underlying indeterminacy, however, this is clearly not the case. There are, in fact, a multitude of different ways in which you could have acted. Aha! But this just reduces to a variety of the argument from indeterminacy—that actions are merely arbitrary instantiations of probabilities, right? But that is where the deterministic aspect of reality kicks in. Once the most basic underlying facts about the world are set, in a probabilistic fashion, then determinism takes over. For this, I draw on an idea put forth by John Searle: downward causality, but in no way do I claim to restate his argument. The higher-order functions of the brain, namely consciousness, do indeed have “causes” that exist as smaller, microscopic bits, but these higher-order functions also have the ability to rain down causation on these smaller bits, much in the way that higher-order theories of economics can influence the activities of lower level commodities. Neither of these can be “smoothly reduced,” as Searle puts it, to the other, but that does not imply that one or the other does not exist, or play a meaningful role. In fact, Searle says that typically, reduction of one thing to another serves the purpose of showing that one of those things does not exist, not the other way around, as is often claimed.

This may seem counter-intuitive, and in some ways, it does have to re-formulate the popular idea of free will. In particular, it draws a distinction between free will at its most basic on the one hand, and conscious will on the other. Conscious will, or the idea that you are consciously in control of all of your actions and thoughts, is inevitably false. A handful of psychological experiments demonstrating non-conscious biases and predispositions shows this very simply. But this is not what we are talking about when we say free will, or so I claim. Free will is much more general than the limited definition of conscious will. At its most basic, it requires that you be capable of acting in certain ways that rely on intentional stances. Even if you are not consciously aware of your decisions to act in certain ways, it is still you that is making them. You are your brain, and everything that comes along with it. Simply because something is non-conscious does not make it any less a part of you. It may clash with the popular account of who you are, but at the end of the day, you are made up of more non-conscious pieces than conscious pieces, so restricting our definition of free will to the conscious pieces seems to make little sense. Now, this is not to say that our conscious feeling of free will is irrelevant, but it is a different matter to bring up—specifically, it is more of an epistemic question than a metaphysical question.

Minds and Computers

In everyday conversation, brains are often equated with “computers.” This intellectual laziness, as it were, has led almost an entire generation of academics to assert that minds are nothing more than programs run on the machinery of the brain. In this post, I hope to clear up a few confusions and oversights related to this position. I do not claim to be the first to say these things, but I’d like to round up a few of these disconnected views and add my own thoughts where they seem useful.

The issue is that to claim that the brain is like something—say, a computer—is to assert that we have some idea of how the brain really works. This is utterly and completely false. We certainly know a lot about the brain, but most of it concerns either small, isolated events (action potentials) or very coarse-grained images (e.g., fMRI), so anyone who claims to have a complete view of how the brain processes, integrates, and distributes information is very misguided, to say the least.

There is certainly a lot more about the brain that we have learned since these comparisons were first put forth in the literature. We have learned more about how networks of neurons function coherently to produce meaningful representations. We have learned about oscillations in the brain that help unify otherwise disjoint brain functions (gamma waves are especially exciting in this regard). We have learned about local field potentials, such as those recorded by EEG, and how they help modulate neighborhoods of neurons. In this sense, the brain is still a kind of information processor, albeit a much more complex one than we once thought it to be, and this is where the analogy breaks down.

Computers are designed with distinct functional units that attempt to minimize interference from surrounding units. Neurons, the functional units of the brain, certainly do this to an extent—if they did not, the careful modulation of membrane potentials necessary for coherent communication would be impossible to maintain. They are nevertheless heavily influenced by every signal and associated field potential that passes through the region of the brain they occupy. Neurons could not function properly in isolation (by function properly, I mean in a way that is conducive to conscious experience); they require complex interactions among themselves, and with the body they represent, the likes of which we do not see in their silicon counterparts. The most complex relation required is what is often referred to, in chemical terms, as “dynamic constancy.” The brain is not a static system. It is constantly changing: in order to understand a single word, much less a sentence, it must alter synaptic connections. Sometimes this involves strengthening a handful of them through the accumulation of calcium ions in pre- or post-synaptic terminals*. Other times it involves the generation of entirely new synapses through complex genetic regulatory mechanisms. It is astounding to realize that many everyday actions require the synchronous activation and deactivation of entire networks of genes with great precision, often several times over. Nonetheless, all of this constant modification leaves, at the end of the day, more or less the same brain that started the day: hence, dynamic constancy.
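
To illustrate what “dynamic constancy” is meant to convey, here is a toy sketch only; the update and normalization rules are invented and not drawn from any specific synaptic model. Individual connection strengths change with every bit of activity, while a homeostatic step keeps the overall configuration roughly where it started.

```python
# Toy illustration of "dynamic constancy": individual weights are constantly
# nudged by activity, but a homeostatic normalization keeps the configuration
# roughly stable overall. The rules are invented purely for illustration.
import random

weights = [1.0] * 8
target_total = sum(weights)

for _ in range(1000):
    i = random.randrange(len(weights))
    weights[i] = max(0.0, weights[i] + random.uniform(-0.1, 0.1))  # activity-driven change
    scale = target_total / sum(weights)                            # homeostatic renormalization
    weights = [w * scale for w in weights]

print(round(sum(weights), 3))  # still ~8.0: more or less the same brain that started the day
```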

We may one day be able to “create” something that is capable of conscious reflection, and we may even call it a computer when the time comes, but it will not be anything recognizable as a computer by today’s standards. Likely it will incorporate some aspects of biological matter; perhaps actual living cells will be the only solution. The point of all of this is that minds are not merely computer programs; in the brain, the distinction between program and hardware does not exist. The mind is the brain, and the brain is the mind, and nothing more.

*It is especially exciting to learn that the durations of various synaptic modulating processes correlate almost perfectly with discoveries made quite independently in cognitive psychology. For example, estimates of the durations of mono-synaptic facilitation, which can rely on the accumulation of calcium ions as mentioned above, tend to match up almost perfectly with estimates of the durations of various forms of working memory.