On Schools of Thought in the Sciences

Joseph Schumpeter:

A man expressing his political will and the same man expressing a theory in the lecture hall are two different people . . . Especially in my case, ladies and gentlemen, because I never wish to conclude. If I have a function, then it is not to close, but rather to open doors, and I never felt the urge to create something like [my own] school [of thought] . . . Quite a few people are upset about this point of view, because there are [many] who feel they are the leaders of such schools, who feel like fighters for total light against total darkness. That gets expressed in the harsh criticisms that one school levies against the other. But it doesn’t make any sense to fight about these things. One shouldn’t fight about things that life is going to eliminate anyhow at some point. In science momentary success is not as important as it is in the economy and in politics. We can only say that if something prevails in science, it has proven its right to exist; and if it isn’t worth anything, then it’s going to die anyway. I for myself completely accept the verdict of coming generations.

 

Searle in Two Quotes

Today, I am reading John Searle’s Minds, Brains and Science, which is essentially an edited transcript of his 1984 Reith Lectures. I read two quotes that I thought were worth sharing, one for its humor, and the other for its insight. Enjoy!

Various replies have been suggested to this [the Chinese Room] argument by workers in artificial intelligence and in psychology, as well as philosophy. They all have something in common; they are all inadequate. And there is an obvious reason why they have to be inadequate, since the argument rests on a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.

I think that he is almost certainly right here, but the manner in which he formulates this paragraph is nothing short of comedic perfection. My own thoughts on the subject can be found in my article “Minds and Computers.”

Suppose no one knew how clocks worked. Suppose it was frightfully difficult to figure out how they worked, because, though there were plenty around, no one knew how to build one, and efforts to figure out how they worked tended to destroy the clock. Now suppose a group of researchers said, ‘We will understand how clocks work if we design a machine that is functionally the equivalent of a clock, that keeps time just as well as a clock.’ So they designed an hour glass, and claimed: ‘Now we understand how clocks work,’ or perhaps: ‘If only we could get the hour glass to be just as accurate as a clock we would at last understand how clocks work.’ Substitute ‘brain’ for ‘clock’ in this parable, and substitute ‘digital computer program’ for ‘hour glass’ and the notion of intelligence for the notion of keeping time and you have the contemporary situation in much (not all!) of artificial intelligence and cognitive science.

 

The Problem, continued

John Tyndall, 1868:

The passage from the physics of the brain to the corresponding facts of consciousness is unthinkable as a result of mechanics. Granted that a definite thought, and a definite molecular action in the brain, occur simultaneously; we do not possess the intellectual organ, nor apparently any rudiment of the organ, which would enable us to pass, by a process of reasoning, from the one phenomenon to the other. They appear together, but we do not know why. Were our minds and senses so expanded, strengthened, and illuminated, as to enable us to see and feel the very molecules of the brain; were we capable of following all their motions, all their groupings, all their electric discharges, if such there be; and were we intimately acquainted with the corresponding states of thought and feeling, we should be as far as ever from the solution of the problem, “How are these physical processes connected with the facts of consciousness?” The chasm between the two classes of phenomena would still remain intellectually impassable. Let the consciousness for love, for example, be associated with a right-handed spiral motion of the molecules of the brain, and the consciousness of hate with a left-handed spiral motion. We should then know, when we love, that the motion is in one direction, and, when we hate, that the motion is in the other; but the “WHY?” would remain as unanswerable as before.

Two Quotes

Today, I am sharing two quotes from Christof Koch’s Consciousness: Confessions of a Romantic Reductionist that struck me in a particularly meaningful way. I’ll leave you to interpret them as you will.

I also write in the face of a powerful professional edict against bringing in subjective, personal factors. This taboo is why scientific papers are penned in the desiccated third person: “It has been shown that. . . .” Anything to avoid the implication that research is done by flesh-and-blood creatures with less than pristine motivations and desires.

This second one, for background, refers to Francis Crick, co-discoverer of the double-helical structure of DNA, investigator of neurobiology, and, through and through, “a scientist to the bitter end.”

As a theoretician, Francis’s methods of inquiry were quiet thinking, daily reading of the relevant literature—he could absorb prodigious amounts of it—and the Socratic dialogue. He had an unquenchable thirst for details, numbers, and facts. He would ceaselessly put hypotheses together to explain something, then reject most of them himself. In the morning, he usually bombarded me with some bold new hypothesis that had come to him in the middle of the night, when he couldn’t sleep. I slept much more soundly and, therefore, lacked such nocturnal insights.

Chalmers on Physics and Phenomenology

“Physics requires information states but cares only about their relations, not their intrinsic nature; phenomenology requires information states, but cares only about their intrinsic nature. This view postulates a single basic set of information states unifying the two. We might say that internal aspects of these states are phenomenal, and the external aspects are physical. Or as a slogan: Experience is information from the inside; physics is information from the outside.”

The above comes from David Chalmers’s The Conscious Mind and gives a brief account of his attempt to reconcile the phenomenal and physical aspects of the most basic of entities. (To clarify that he is speaking of basic entities: his assertion, taken to its extreme, postulates nothing more than information states as actually existing at a fundamental level.) The view could, with careful consideration, be translated up to macroscopic structures; he notes the difficulty of such a task in the surrounding text, though I think that he overstates the problem. I’ll take up this problem below (note that not all of my ideas follow directly from Chalmers’s thesis; I have incorporated ideas from elsewhere, notably Damasio):

On scaling up from, say, a cell to a full-fledged brain, we start to get successively larger functional units—units with their own informational states—forming a sort of nested hierarchy of phenomenology all the way to the uppermost level: one full self, in the ordinary sense of the word. A problem in this process that he remarks upon is the “jaggedness” that would seemingly result from summing smaller “phenomenal” (or proto-phenomenal, if you prefer) sub-units into one coherent whole. In my estimation, this is not necessarily a problem, much as, when individual atoms (or molecules, to give a better sense of the scale) sum into physical objects, we do not experience those macroscopic objects as “jagged” in any way, but rather as continuous, complete objects. With modern tools of magnification we can peek beneath the level of everyday experience into the jagged quality of physical objects, but our natural tools of observation (i.e., eyes) lack the resolution to pick out the underlying jaggedness. In other words, the jaggedness is there, but we do not notice it because of the limited resolution of our perceptual systems. Conscious experience may be similar: it, too, possesses a level of jaggedness, but one that eludes our introspective observation due to the high-level nature of introspection itself. On this view, the implied jaggedness does not detract from Chalmers’s related assertions.

As for the strength of Chalmers’s overall argument, I cannot say. On the surface, it seems plausible, though many would disagree with me on that. At the very least, he has advanced thinking on the matter in a fundamental way. I’ll post a fuller critique of the theory later on.

I think that consciousness…

I think that consciousness has always been the most important topic in the philosophy of mind, and one of the most important topics in cognitive science as a whole, but it had been surprisingly neglected in recent years.

David Chalmers, expressing a sentiment that I share far too often.