One well-known attempt to “bake an intentional cake out of physical yeast and flour” is Fred Dretske’s Knowledge and the Flow of Information. According to Dretske, “Perhaps…the intentionality of our cognitive attitudes…a feature that some philosophers take to be distinctive of the mental, is a manifestation of their underlying information-theoretic structures.”
Angus Menuge, in Agents Under Fire (Rowman and Littlefield, 2004), pp. 179-180, offers two lines of response.
“The first is to note that the move does not really work but is yet another example of relocating and hiding the original problem. If the transmission of information is to do any work in explaining human cognition and behavior, this information cannot be viewed as mere uninterpreted signals. We must suppose that the information has content. For example, if certain signals from the visual cortex of a tourist near Banff National Park carry the information that a grizzly bear is ten feet away, then the fact clearly explains the tourist’s recollecting the ranger’s advice, “Do not run; if attacked, lie face down until the bear moves away.” But physical signals are not self-interpreting. Indeed, the pattern of events occurring in the visual cortex might be interpreted in an infinite number of ways, only a few of which are relevant to the encounter with a grizzly. The fact is that the salient information in these signals is recoverable by an interpretive agent who understands them. Understanding is, however, an intentional state.”
"Likewise, merely understanding the information that a grizzly is nearby explains nothing without supposing that one not only has a desire to stay alive but also has instrumental beliefs about how to do so….For information to do the kind of work that it needs to do to explain human actions, intentional attitudes toward that information will have to be involved."
Menuge goes on to argue that the information involved in cognition is a prime example of the sort of specified complexity that is best explained by intelligent design.
7 comments:
Well, the way I run the argument, I just point out that, on my understanding, teleological explanations can be basic explanations. If being "told how someone does something" means being given a mechanistic explanation of how this is supposed to work, then the request appears to be question-begging. All I do at this point is point out that we can understand this better if we allow teleological explanations to be basic explanations. That's my explanatory dualism. If you now say that the explanation I am offering isn't a physical explanation, then it is you, and not me, who is making the claim.
I think Mike Wiest has been arguing that the kinds of arguments we use to show that there is something non-physical presume that there are some things physics couldn't discover even if they were true, and Wiest wants to say that QM leaves the door open for "physics" to say a lot of things that would traditionally have been thought to be religious in nature.
Going back to the case of the tourist, the tourist recognizes the intentional content of the instructions for dealing with bears. Part of the explanation will have to do with the brain, but the brain story will underdetermine the propositional content.
In the case of the car, we know there has to be a mechanistic how, and we are not being given one. In the case of the mind we don't know in advance that there has to be a mechanistic how.
If we ask "How did God create the universe? By what means and mechanisms?", it seems to me that we are asking a question that doesn't make a whole lot of sense when we are talking about an omnipotent being.
Dretske uses "information theory" in the technical (Shannon) sense, not in any colloquial sense. He spends two chapters reviewing Shannon's theory, taking pains to point out that it is not a semantic theory. In Chapter 3, he begins by claiming that while semantics is not important for (Shannon) communication theory, that doesn't mean communication theory can't contribute to semantics. The remaining six chapters spell out his theory of content/meaning/semantics using communication theory as a starting point.
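For readers unfamiliar with the technical sense: Shannon's theory quantifies information purely in terms of probabilities, with no appeal to meaning. A standard statement of the background definitions (my summary of the standard theory, not a quotation from Dretske) is:

```latex
% Surprisal (self-information) of an event s with probability p(s):
I(s) = -\log_2 p(s) \quad \text{(measured in bits)}

% Entropy (average information) of a source S with outcomes s_i:
H(S) = -\sum_i p(s_i)\,\log_2 p(s_i)
```

Nothing in these formulas mentions what a signal is *about*, which is exactly the point Dretske concedes before building his semantic theory on top of them.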
I said "Starting point", not final theory. He adds a bunch of apparatus to this information theoretic starting point in order to get structures with semantic/intentional contents. It is ultimately fairly simple, drawing on a distinction between analog and digital representational formats as well as learning periods for concept formation.
So, first, Dretske does mean information theory in the Shannon sense. Second, he unequivocally does not think information is sufficient for intentionality. In the index, under 'Information', the first subcategory is 'As distinguished from meaning', which is discussed at least six times in the book. If I had a dollar for every time I have read that misunderstanding of Dretske...
Dretske is the Darwin of intentionality.
Ahab, you might want to check out my new blog here.
HV, hopefully the summaries at the new blog will be adequate. It isn't simple enough to give a bumper sticker (at least I don't know it well enough to do so), and he doesn't use 'digital/analog' distinction in the sense used in signal processing theory. Unfortunately, he uses this well-established terminology in an entirely different way.
I don't think he has solved the problem of intentionality, any more than Darwin solved the problem of the origin of species. What he did was establish a set of good ideas that can be used as a template for thinking about the problem productively.
Also, note his book does not address the problem of consciousness at all. For Dretske, and most philosophers, the problem of intentionality is the problem of propositional contents (roughly, contents that can refer to other states of affairs, have truth values, and which can have empty extensions (e.g., "The present King of France")). Hence, anyone primarily interested in consciousness, and not intentional contents as understood by contemporary philosophers (Dennett, Fodor, Millikan, Dretske), would probably be disappointed in the book and shouldn't buy it. That is not to say intentional contents are irrelevant for consciousness, but that is not the problem he addresses. I think this is a good move on the philosophers' part. There are certainly features of our cognitive structures that are captured by intentional contents, even though not all features of our mind are captured.
It would be interesting to see specific criticisms of some of the philosophers you have read on this topic. I can't claim to be an expert, but have spent a lot of time reading the Churchlands, Dretske, and a little on Searle, Dennett, Millikan, and Fodor. IMO, Dretske is the best in the lot. Searle is the worst because he is not always self-consistent (he is a naturalist but with a questionably coherent brand of nonreductionism and he provides little in the way of positive theories), Dennett is interesting but still locked into a weird kind of positivistic mindset, the Churchlands have spent too much time defending the merits of reductionism (and since they are eliminativists wrt propositional contents they haven't contributed anything to this topic other than skepticism), and Fodor and Millikan are both very good (and very close cousins of Dretske).
I have read Chalmers extensively, but that is on consciousness which I'm not addressing. He is excellent in his clarity of writing, though I think history will show him to be wrong in his confident demarcation of things into "easy" and "hard" problems. Yesterday's hard problems are today's trivial solutions.
Chalmers put KFI in the top ten works in philosophy of mind of the past 30 years here.
Check out my blog for a little teaser trailer for what we're in for. I think it is the best thing in 30 years, perhaps 40.
Angus Menuge's argument seems confused.
Dretske explicitly includes an epistemic agent in his definition of informational semantics:
Informational content: a signal r carries the information that a is F = the conditional probability of a's being F, given r (and k), is 1 (but, given k alone, less than 1). The variable k represents the background knowledge of the epistemic subject. This definition clearly states that the content a signal carries is relative to the knowledge of the specific epistemic agent.
One of Dretske's formulations of knowledge is given by something like: An epistemic subject s knows that a is F = s's belief that a is F is caused (or causally sustained) by the information that a is F.
Thus these two definitions of informational content and what an epistemic agent knows are mutually recursive. I am not convinced that this mutual recursion bottoms out and avoids vicious circularity.
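The two definitions quoted above can be put in symbols to make the circularity worry explicit (the notation is mine, not Dretske's):

```latex
% Informational content (Dretske):
% a signal r carries the information that a is F iff
P(Fa \mid r, k) = 1 \quad\text{and}\quad P(Fa \mid k) < 1

% Knowledge (Dretske):
% a subject s knows that a is F iff s's belief that Fa is
% caused (or causally sustained) by the information that Fa.
%
% Since k is s's background knowledge, content is defined
% relative to knowledge, while knowledge is defined in terms
% of content -- the mutual recursion noted above.
```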
As for interpreting agents, I'm not sure a vending machine is not an interpreting agent. Its coin validator interprets any input (coins) as being of one kind or another, valid or invalid tender, and responds accordingly. And as anyone who has lost his change to one knows, they're autonomous agents to boot.
Source: Information and Information Flow: An Introduction, by Manuel Bremer and Daniel Cohnitz, p. 128