John Searle’s Minds, Brains and Science is a collection of the six Reith Lectures that he gave in 1984 on the relation between our conscious, meaningful, phenomenal experiences and the backdrop of nonconscious, meaningless, objective physical reality against which all of the former inevitably play out. Essentially, the problem is this: We experience things, but everywhere else we look in the universe, we do not see experiences. How do we explain this seemingly trivial fact and make consciousness fit in with everything else we know?
This is inevitably a question that he cannot answer definitively, but the work is successful and worthwhile nonetheless. The lectures were intended for a lay audience, so they are largely non-technical, yet as far as I can tell, none of the essential content is lost. Searle still explicates each of the sub-problems and arguments very well. He avoids caricaturing his opponents while keeping the focus on what he thinks are the real issues, and along the way he introduces several new ideas. His remarks on the social sciences and the freedom of the will are especially noteworthy.
In its entirety, the book comes to fewer than a hundred pages, making it a perfect introduction to the problem at large. At the same time, it retains enough depth to catch the eye of even the most weathered philosopher-scientist, and for that, I give it five out of five stars.
Today, I am reading John Searle’s Minds, Brains and Science, which is essentially an edited transcript of his 1984 Reith Lectures. I read two quotes that I thought were worth sharing, one for its humor, and the other for its insight. Enjoy!
Various replies have been suggested to this [the Chinese Room] argument by workers in artificial intelligence and in psychology, as well as philosophy. They all have something in common; they are all inadequate. And there is an obvious reason why they have to be inadequate, since the argument rests on a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.
I think that he is almost certainly right here, but the manner in which he formulates this paragraph is nothing short of comedic perfection. My own thoughts on the subject can be found in my article “Minds and Computers.”
Suppose no one knew how clocks worked. Suppose it was frightfully difficult to figure out how they worked, because, though there were plenty around, no one knew how to build one, and efforts to figure out how they worked tended to destroy the clock. Now suppose a group of researchers said, ‘We will understand how clocks work if we design a machine that is functionally the equivalent of a clock, that keeps time just as well as a clock,’ So they designed an hour glass, and claimed: ‘Now we understand how clocks work,’ or perhaps: ‘If only we could get the hour glass to be just as accurate as a clock we would at last understand how clocks work.’ Substitute ‘brain’ for ‘clock’ in this parable, and substitute ‘digital computer program’ for ‘hour glass’ and the notion of intelligence for the notion of keeping time and you have the contemporary situation in much (not all!) of artificial intelligence and cognitive science.