For my presentation in England, I have been working on a paper entitled "Defending the Dangerous Idea." I've started work on the part of the paper having to do with intentionality. It may help to answer some objections that have come up on another thread:
I. The Argument from Intentionality
The first of the arguments that I presented is the Argument from Intentionality.
Physical states have physical characteristics, but how can it be a characteristic of, say, some physical state of my brain, that it is about the dogs Boots and Frisky, or about my late Uncle Stanley, or even about the number 2? Can’t we describe my brain, and its activities, without having any clue as to what my thoughts are about?
To consider this question, let us give a more detailed account of what intentionality is. Angus Menuge offers the following definition:
1) The representations are about something.
2) They characterize the thing in a certain way.
3) What they are about need not exist.
4) Where reference succeeds, the content may be false.
5) The content defines an intensional context in which the substitution of equivalents typically fails.
So, if I believe that Boots and Frisky are in the back yard, this belief has to be about those dogs; I must have some characterization of those dogs in mind that identifies them for me; my thoughts can be about them even if, unbeknownst to me, they have just died; my reference to those two dogs can succeed even if they have found their way into the house; and someone can believe that Boots and Frisky are in the back yard without believing that “the Repperts’ 13 year old beagle” and “the Repperts’ 8 year old mutt” are in the back yard.
It is important to draw a further distinction, a distinction between original intentionality, which is intrinsic to the person possessing the intentional state, and derived or borrowed intentionality, which is found in maps, words, or computers. Maps, for example, have the meaning that they have, not in themselves, but in relation to other things that possess original intentionality, such as human persons. There can be no question that physical systems possess derived intentionality. But if they possess derived intentionality in virtue of other things that may or may not be physical systems, this does not really solve the materialist’s problem.
It seems to me that intentionality, as I understand it, requires consciousness. There are systems that behave in ways such that, in order to predict their behavior, it behooves us to act as if they were intentional systems. If I am playing chess against a computer, and I am trying to figure out what to expect it to play, then I am probably going to look for the moves it thinks are good and expect the computer to play those. I act as if the computer were conscious, even though I know that it has no more consciousness than a tin can. Similarly, we can look at the bee dances and describe them in intentional terms; the motions the bees engage in enable the other bees to go where the pollen is, but it does not seem plausible to attribute a conscious awareness of what information is being sent in the course of the bee dance. We can look at the bees as if they were consciously giving one another information, but the intentionality is as-if intentionality, not the kind of original intentionality we find in conscious agents. As Colin McGinn writes:
I doubt that the self-same kind of content possessed by a conscious perceptual experience, say, could be possessed independently of consciousness; such content seems essentially conscious, shot through with subjectivity. This is because of the Janus-faced character of conscious content: it involves presence to the subject, and hence a subjective point of view. Remove the inward-looking face and you remove something integral—what the world seems like to the subject.
If we ask what the content of a word is, the content of that word has to be the content for some conscious agent; it is how that conscious agent understands the word.
In reading Carrier’s critique of my book we find, in his response to the argument from intentionality, terms being used that make sense to me from the point of view of my life as a conscious subject, but I am not at all sure what to make of them when we start thinking of them as elements in the life of something that is not conscious. Consider the following:
Returning to my earlier definition of aboutness, as long as we can know that "element A of model B is hypothesized to correspond to real item C in the universe" we have intentionality, we have a thought that is about a thing.
Because the verbal link that alone completely establishes aboutness--the fact of being "hypothesized"--is something that many purely mechanical computers do…
Language is a tool--it is a convention invented by humans. Reality does not tell us what a word means. We decide what aspects of reality a word will refer to. Emphasis here: we decide. We create the meaning for words however we want. The universe has nothing to do with it--except in the trivial sense that we (as computational machines) are a part of the universe.
Now simply consider the words "hypothesize" and "decide" that he uses in these passages.
I think I know what it means to decide something as a conscious agent. I am aware of choice 1 and choice 2, I deliberate about them, and then consciously choose 1 as opposed to 2, or vice versa. All of this requires that I be a conscious agent who knows what my thoughts are about. That is why I have been rather puzzled by Carrier’s explaining intentionality in terms like these; such terms mean something to me only if we know what our thoughts are about. The same thing goes for hypothesizing. I can form a hypothesis (such as: all the houses in this subdivision were built by the same builder) just in case I know what the terms of the hypothesis mean, in other words, only if I already possess intentionality. That is what these terms mean to me, and unless I’m really confused, this is what those terms mean to most people.