Monday, July 17, 2006

This is a redated post on Carrier on intentionality

Thursday, November 10, 2005
Carrier on The Argument from Intentionality
This is a version of part of the paper I presented in England. I am bumping it up to today to help clarify some of the intentionality issues we have been discussing.

I. The Argument from Intentionality
The first of the arguments that I presented is the Argument from Intentionality. Physical states have physical characteristics, but how can it be a characteristic of, say, some physical state of my brain that it is about my dogs Boots and Frisky, or about my late Uncle Stanley, or even about the number 2? Can’t we describe my brain, and its activities, without having any clue as to what my thoughts are about?
To consider this question, let us give a more detailed account of what intentionality is. Angus Menuge offers the following definition:
1) The representations are about something
2) They characterize the thing in a certain way
3) What they are about need not exist
4) Where reference succeeds, the content may be false
5) The content defines an intensional context in which the substitution of equivalents typically fails
So, if I believe that Boots and Frisky are in the back yard, this belief has to be about those dogs; I must have some characterization of those dogs in mind that identifies them for me; my thoughts can be about them even if, unbeknownst to me, they have just died; my reference to those two dogs can succeed even if they have found their way into the house (so that what I believe about them is false); and someone can believe that Boots and Frisky are in the back yard without believing that “the Repperts’ 13 year old beagle” and “the Repperts’ 8 year old mutt” are in the back yard.
It is important to draw a further distinction, a distinction between original intentionality, which is intrinsic to the person possessing the intentional state, and derived or borrowed intentionality, which is found in maps, words, or computers. Maps, for example, have the meaning that they have, not in themselves, but in relation to other things that possess original intentionality, such as human persons. There can be no question that physical systems possess derived intentionality. But if they possess derived intentionality in virtue of other things that may or may not be physical systems, this does not really solve the materialist’s problem.
The problem facing a physicalist account of intentionality is presented very forcefully by John Searle:
Any attempt to reduce intentionality to something nonmental will always fail because it leaves out intentionality. Suppose for example that you had a perfect causal account of the belief that water is wet. This account is given by stating the set of causal relations in which a system stands to water and to wetness and these relations are entirely specified without any mental component. The problem is obvious: a system could have all those relations and still not believe that water is wet. This is just an extension of the Chinese Room argument, but the moral it points to is general: You cannot reduce intentional content (or pains, or "qualia") to something else, because if you did they would be something else, and it is not something else. (Searle, The Rediscovery of the Mind, p. 51)
Admittedly, this is merely an assertion of something that needs to be brought out with further analysis. It seems to me that intentionality, as I understand it, requires consciousness. There are systems that behave in ways such that, in order to predict their behavior, it behooves us to act as if they were intentional systems. If I am playing chess against a computer, and I am trying to figure out what to expect it to play, then I am probably going to look for the moves I think are good and expect the computer to play those. I act as if the computer were conscious, even though I know that it has no more consciousness than a tin can. Similarly, we can look at bee dances and describe them in intentional terms; the motions the bees engage in enable the other bees to go where the pollen is, but it does not seem plausible to attribute to the bees a conscious awareness of what information is being sent in the course of the dance. We can look at the bees as if they were consciously giving one another information, but the intentionality is as-if intentionality, not the kind of original intentionality we find in conscious agents. As Colin McGinn writes:

I doubt that the self-same kind of content possessed by a conscious perceptual experience, say, could be possessed independently of consciousness; such content seems essentially conscious, shot through with subjectivity. This is because of the Janus-faced character of conscious content: it involves presence to the subject, and hence a subjective point of view. Remove the inward-looking face and you remove something integral—what the world seems like to the subject.

If we ask what the content of a word is, the content of that word must be the content for some conscious agent: how that conscious agent understands the word. There may be other concepts of content, but those concepts, it seems to me, are parasitical on the concept of content that I use in referring to states of mind found in a conscious agent. Put another way, my paradigm for understanding these concepts is my life as a conscious agent. If we make these words refer to something that occurs without consciousness, it seems that we are using them by way of analogy with their use in connection with our conscious life.

The intentionality I am immediately familiar with is that of my own intentional states. That's the only template, the only paradigm I have. I wouldn't say that animals are not conscious, and if I found good evidence that animals could reason it would not undermine my argument, since I've never been a materialist about animals to begin with. Creatures other than myself could have intentional states, and no doubt do have them, if the evidence suggests that what it is like to be in the intentional state they are in is similar to what it is like to be in the intentional state that I am in.

In Carrier’s critique of my book, and particularly in his response to the argument from intentionality, we find terms being used that make sense to me from the point of view of my life as a conscious subject, but I am not at all sure what to make of them when we start thinking of them as elements in the life of something that is not conscious. His main definition of “aboutness” is this:
Cognitive science has established that the brain is a computer that constructs and runs virtual models. All conscious states of mind consist of or connect with one or more virtual models. The relation these virtual models have to the world is that of corresponding or not corresponding to actual systems in the world. Intentionality is an assignment (verbal or attentional) of a relation between the virtual models and the (hypothesized) real systems. Assignment of relation is a decision (conscious or not), and such decisions, as well as virtual models and actual systems, and patterns of correspondence between them, all can and do exist on naturalism, yet these four things are all that are needed for Proposition 1 to be true.
Or consider the following:
Returning to my earlier definition of aboutness, as long as we can know that "element A of model B is hypothesized to correspond to real item C in the universe" we have intentionality, we have a thought that is about a thing.
Or
Because the verbal link that alone completely establishes aboutness--the fact of being "hypothesized"--is something that many purely mechanical computers do.
Or again
Language is a tool--it is a convention invented by humans. Reality does not tell us what a word means. We decide what aspects of reality a word will refer to. Emphasis here: we decide. We create the meaning for words however we want. The universe has nothing to do with it--except in the trivial sense that we (as computational machines) are a part of the universe.
Now simply consider the words “hypothesize” and “decide” that he uses in these passages. I think I know what it means to decide something as a conscious agent. I am aware of choice 1 and choice 2, I deliberate about them, and then consciously choose 1 as opposed to 2, or vice versa. All of this requires that I be a conscious agent who knows what my thoughts are about. That is why I have been rather puzzled by Carrier’s explaining intentionality in terms like these; such terms mean something to me only if we know what our thoughts are about. The same thing goes for hypothesizing. I can form a hypothesis (such as that all the houses in this subdivision were built by the same builder) just in case I know what the terms of the hypothesis mean; in other words, only if I already possess intentionality. That is what these terms mean to me, and unless I’m really confused, this is what they mean to most people.
Again, we have to take a look at the idea of a model. What is a model? A model is something that is supposed to resemble something else. But if we explain “X is about Y” at least partially in terms of “X is a model for Y,” I really don’t think we’ve gotten anywhere. How can X be a model for Y if it isn’t about Y in the first place?

Nevertheless, we may be able to work through the critique and find how he proposes to naturalize the concepts.
Material state A is about material state B just in case “this system contains a pattern corresponding to a pattern in that system, in such a way that computations performed on this system are believed to match and predict behavior in that system.”
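To fix ideas, here is a rough sketch, in Python, of what such a model/world correspondence might look like. The names and structure below are my own illustrative inventions, not Carrier's, and the sketch is only a toy; notice that the "believed to match and predict" clause shows up only as a bare number attached to the model, and nothing in the structure itself says what that confidence is confidence about.

# A toy rendering of a "virtual model / real system" correspondence account.
# Every name here (WorldSystem, VirtualModel, corresponds, ...) is my own
# illustrative invention, not Carrier's.

from dataclasses import dataclass

@dataclass
class WorldSystem:
    """A (hypothesized) real system in the world, described by some pattern."""
    name: str
    pattern: dict            # e.g. {"location": "back yard", "species": "beagle"}

@dataclass
class VirtualModel:
    """A brain-side model whose pattern is supposed to correspond to the
    pattern of some real system."""
    name: str
    pattern: dict
    target: WorldSystem      # the real system the model is assigned to
    confidence: float = 0.0  # the "believed to match and predict" part; nothing
                             # in this structure says what the confidence is
                             # confidence *about*

def corresponds(model: VirtualModel) -> bool:
    """Check whether every feature in the model's pattern matches the target."""
    return all(model.target.pattern.get(k) == v for k, v in model.pattern.items())

boots = WorldSystem("Boots", {"location": "back yard", "species": "beagle"})
model_of_boots = VirtualModel("Boots-model",
                              {"location": "back yard", "species": "beagle"},
                              target=boots, confidence=0.9)
print(corresponds(model_of_boots))   # True: the patterns match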
In correspondence with me Carrier said this:
As I explain in my critique, science already has a good explanation on hand for attentionality (how our brain focuses attention on one object over others). Combine that with a belief (a sensation of motivational confidence) that the object B that we have our attention on will behave as our model A predicts it will, and we have every element of intentionality.
But I am afraid I don’t see that this naturalization works. My objection is that in order for confidence to play the role it needs to play in Carrier's account of intentionality, that confidence has to be confidence that I have an accurate map; but confidence that P is true is a propositional attitude, which presupposes intentionality. In other words, Carrier is trying to bake an intentional cake with physical yeast and flour. But when the ingredients are examined closely, we find that some intentional ingredients have been smuggled in through the back door.

Here is another illustration:
The fact that one thought is about another thought (or thing) reduces to this (summarizing what I have argued several times above already): (a) there is a physical pattern in our brain of synaptic connections physically binding together every datum about the object of thought (let's say, Madell's "Uncle George"), (b) including a whole array of sensory memories, desires, emotions, other thoughts, and so on, (c) which our brain has calculated (by various computational strategies) are relevant to (they describe or relate to) that object (Uncle George), (d) which of course means a hypothesized object (we will never really know directly that there even is an Uncle George: we only hypothesize his existence based on an analysis, conscious and subconscious, of a large array of data), and (e) when our cerebral cortex detects this physical pattern as obtaining between two pieces of data (like the synaptic region that identifies Uncle George's face and that which generates our evidentially-based hypothesis that the entity with that face lives down the street), we "feel" the connection as an "aboutness" (just as when certain photons hit our eyes and electrical signals are sent to our brain we "feel" the impact as a "greenness").
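Steps (a) through (e) can be set out in the same toy fashion. Again, every name below is a hypothetical illustration of the structure of the steps as I read them, not Carrier's own formulation:

# A toy sketch of steps (a)-(e); all names are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Datum:
    kind: str       # "sensory memory", "desire", "emotion", "hypothesis", ...
    content: str

@dataclass
class BoundObject:
    """(a)-(b): a pattern binding together every datum concerning the object."""
    label: str                 # e.g. "Uncle George"
    data: list                 # the bound data
    hypothesized: bool = True  # (d): the object itself is only hypothesized

def bind_relevant(label: str, pool: list) -> BoundObject:
    """(c): gather the data the brain has calculated to be relevant to the
    object. Here 'relevant' is cashed out simply as: the datum already
    mentions the label."""
    return BoundObject(label, [d for d in pool if label in d.content])

def detect_link(obj: BoundObject, datum: Datum) -> bool:
    """(e): the detected connection between the bound pattern and a datum,
    which on this account is 'felt' as aboutness."""
    return datum in obj.data

pool = [Datum("sensory memory", "Uncle George's face"),
        Datum("hypothesis", "Uncle George lives down the street"),
        Datum("emotion", "fondness for Boots")]

george = bind_relevant("Uncle George", pool)
print(detect_link(george, pool[0]))   # True
print(detect_link(george, pool[2]))   # False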
Now did you notice the word “about” in step A of Carrier’s account of intentionality? If there is something in the brain that binds together everything about Uncle George, and that is supposed to explain how my thought can be about Uncle George, then it seems pretty clear to me that we are explaining intentionality in terms of intentionality.

What I think is the deepest problem in assigning intentionality to physical systems is that when we do so, norms of rationality are applied in determining what intentional states exist, but normative truths are not entailed by physical facts. In the realm of ethics, add up all the physical, chemical, biological, psychological, and sociological facts about a murder for hire, and nothing in that description will entail that it was a wrongful act. Similarly, scientific information about what is will not tell you what an agent ought to believe, but we need to know what an agent ought to believe in order to figure out what he or she does believe. According to Searle, for example, intentionality cannot be found in natural selection, because “intentional standards are inherently normative,” but “there is nothing normative about Darwinian evolution.” So any attempt to naturalize intentionality will end up bringing intentionality in through the back door, just as Carrier’s account does. When you encounter a new or unfamiliar attempt to account for intentionality naturalistically, look it over very carefully, and you should be able to find out where the bodies are buried.
posted by Victor Reppert @ 3:56 PM

3 Comments:
At 10:22 PM, Dogtown said…

I hate to continue to be a one-trick pony, but I think reading Dretske would help Carrier out of some of his problems. I think Carrier is on the right track, but he needs some more theoretical machinery to discharge his obligations to use physical yeast and flour when baking his intentional cake.


At 10:38 PM, Steven Carr said…

Victor writes 'There can be no question that physical systems possess derived intentionality.'

How can a system which lacks all intentionality gain intentionality?

What intentionality does an unconscious man have? None whatsoever. Even his mind lacks all intentionality.

So how does an unconscious man regain intentionality?

What exists that could kick-start the intentionality of an unconscious mind? Increasing brain activity?

Rather more likely than the idea that an unconscious man has a soul that can intend to regain intentionality.


At 10:04 AM, Victor Reppert said…

Dogtown: I'm sure Dretske buries the bodies deeper than does Carrier. But I'm convinced they too will be found. (Actually, the phrase was used by Bill Hasker when we had a faculty discussion group at Notre Dame on Explaining Behavior, by Dretske).

I realize this, as it stands, is a bald assertion.

