Wednesday, June 29, 2005

Menuge on Dretske

One well-known attempt to “bake an intentional cake out of physical yeast and flour” is Fred Dretske’s Knowledge and the Flow of Information. According to Dretske, “Perhaps…the intentionality of our cognitive attitudes…a feature that some philosophers take to be distinctive of the mental, is a manifestation of their underlying information-theoretic structures.”

Angus Menuge, in Agents Under Fire (Rowman and Littlefield, 2004), pp. 179-180, offers two lines of response.

“The first is to note that the move does not really work but is yet another example of relocating and hiding the original problem. If the transmission of information is to do any work in explaining human cognition and behavior, this information cannot be viewed as mere uninterpreted signals. We must suppose that the information has content. For example, if certain signals from the visual cortex of a tourist near Banff National Park carry the information that a grizzly bear is ten feet away, then the fact clearly explains the tourist’s recollecting the ranger’s advice, “Do not run; if attacked, lie face down until the bear moves away.” But physical signals are not self-interpreting. Indeed, the pattern of events occurring in the visual cortex might be interpreted in an infinite number of ways, only a few of which are relevant to the encounter with a grizzly. The fact is that the salient information in these signals is recoverable by an interpretive agent who understands them. Understanding is, however, an intentional state.”

"Likewise, merely understanding the information that a grizzly is nearby explains nothing without supposing that one not only has a desire to stay alive but also has instrumental beliefs about how to do so….For information to do the kind of work that it needs to do to explain human actions, intentional attitudes toward that information will have to be involved."

Menuge goes on to argue that the information involved in cognition is a prime example of the sort of specified complexity that is best explained by intelligent design.


Ahab said...

Could you summarize Menuge's explanation of how the tourist is able to interpret the information from the visual cortex?
I'm curious to hear it because simply saying this is due to intelligent design still tells me absolutely nothing about how the tourist is able to do this.

Victor Reppert said...

Well, the way I run the argument, I just point out that, on my understanding, teleological explanations can be basic explanations. If what it means to be "told how someone does something" is to be given a mechanistic explanation of how this is supposed to work, then the request appears to be question-begging. All I do at this point is point out that we can do better at understanding this if we allow teleological explanations to be basic explanations. That's my explanatory dualism. If you now say that the explanation I am offering isn't a physical explanation, then it is you, and not I, who is making the claim.

I think Mike Wiest has been trying to argue that using the kinds of arguments that we use to show that there is something non-physical is to presume that there are some things that physics couldn't discover, even if they were true, and Wiest wants to say that QM leaves the door open for "physics" to say a lot of things that would traditionally have been thought to be religious in nature.

Ahab said...

Scientists are not averse to using teleological-like explanations when they are called for. For instance, they are widely used in historical sciences like evolutionary biology.
But it seems to me that you are the one begging the question, by assuming that teleological explanations can adequately answer "how" questions.
Suppose I ask, "How is the energy generated by the car's motor transferred into the turning of its wheels?" If you said, "That is the way Henry Ford designed his cars," I would surely be correct in thinking that you hadn't really given me the answer I needed to understand how the car functions in the way it does.
In the same way, to say that, "People are able to experience intentionality because the intelligent designer wanted them to have this ability", gives me no information about how we are able to think.

Victor Reppert said...

Going back to the case of the tourist, the tourist recognizes the intentional content of the instructions for dealing with bears. Part of the explanation will have to do with the brain, but the brain story will underdetermine the propositional content.

In the case of the car, we know there has to be a mechanistic how, and we are not being given one. In the case of the mind we don't know in advance that there has to be a mechanistic how.

If we ask, "How did God create the universe? By what means and mechanisms?", it seems to me that we are asking a question that doesn't make a whole lot of sense when we are talking about an omnipotent being.

Ahab said...

I don't quite understand why you've attached 'mechanistic' to 'how'.
I just want to know how the mind of the tourist is able to recognize that the bear is dangerous and react accordingly. How is he able to take the information given him by the park ranger and base his actions upon that information?
And how does ID provide me with the answer to that question?

HV said...

In the quote from Dretske, the term "information-theoretic" is used. It is good to keep in mind that there are two meanings of "information" in this sort of discussion. "Information" as used in Claude Shannon's information theory is a technical term having to do with the statistical properties of signals. It is an objectively quantifiable characteristic, but it says nothing about content or meaning. In other words, two different signals can have the same information measure in Shannon's sense, while one carries meaningful content (information in the normal sense) and the other doesn't.
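Henry's point can be made concrete with a minimal sketch (Python used purely for illustration; the example strings are invented): two signals with identical symbol statistics receive exactly the same Shannon measure, regardless of whether either means anything to a reader.

```python
from collections import Counter
from math import log2

def shannon_entropy(signal: str) -> float:
    """Shannon's H: average information per symbol, in bits."""
    counts = Counter(signal)
    n = len(signal)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Two signals with identical symbol statistics:
meaningful = "a grizzly is near"         # content, for an English reader
scrambled = "".join(sorted(meaningful))  # same symbols, no content

# Same Shannon measure, very different meaning.
assert abs(shannon_entropy(meaningful) - shannon_entropy(scrambled)) < 1e-9
```

The measure depends only on symbol frequencies, so any permutation of a signal scores identically; nothing in the formula distinguishes the meaningful string from the scrambled one.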

Henry Verheggen

Giordano Sagredo said...

Dretske uses "information theory" in the technical (Shannon), not in any colloquial sense. He spends two chapters reviewing Shannon's theory, taking pains to point out that it is not a semantic theory. In Chapter 3, he begins by claiming that while semantics is not important for (Shannon) communication theory, that doesn't mean communication theory can't contribute to semantics. The remaining 6 chapters spell out his theory of content/meaning/semantics using communication theory as a starting point.

I said "Starting point", not final theory. He adds a bunch of apparatus to this information theoretic starting point in order to get structures with semantic/intentional contents. It is ultimately fairly simple, drawing on a distinction between analog and digital representational formats as well as learning periods for concept formation.

So, first, Dretske does mean information theory in the Shannon sense. Second, he unequivocally does not think information is sufficient for intentionality. In the index, under 'Information', the first subcategory is 'As distinguished from meaning', which is discussed at least six times in the book. If I had a dollar for every time I have read that misunderstanding of Dretske...

Dretske is the Darwin of intentionality.

Ahab said...

That's one nice benefit of engaging in discussions like this. I'd never heard of Dretske before. But I now have his book on order and should receive it by Wed. of next week.
Looking forward to what promises to be an interesting read.

Giordano Sagredo said...

Ahab, you might want to check out my new blog here.

HV said...

If I had a dollar for every guy who thinks he has succeeded in showing how the intentional can be derived from the non-intentional... And if the US government could take back all the money it has spent on such claims, it would pay off the national debt. I have read dozens, perhaps hundreds, of papers and books on this subject going back to the 1960s, and have yet to be persuaded. If Dretske has figured this out and it is simple, I would like to hear how he has done it before I go and order the book. I am thoroughly familiar with digital and analog communications systems, so technical jargon is acceptable.

Henry Verheggen

Giordano Sagredo said...

HV, hopefully the summaries at the new blog will be adequate. It isn't simple enough to fit on a bumper sticker (at least I don't know it well enough to do so), and he doesn't use the 'digital/analog' distinction in the sense used in signal processing theory. Unfortunately, he uses this well-established terminology in an entirely different way.

I don't think he has solved the problem of intentionality, any more than Darwin solved the problem of the origin of species. What he did was establish a set of good ideas that can be used as a template for thinking about the problem productively.

Also, note that his book does not address the problem of consciousness at all. For Dretske, and most philosophers, the problem of intentionality is the problem of propositional contents (roughly, contents that can refer to other states of affairs, have truth values, and can have empty extensions (e.g., "The present King of France")). Hence, anyone primarily interested in consciousness, and not intentional contents as understood by contemporary philosophers (Dennett, Fodor, Millikan, Dretske), would probably be disappointed in the book and shouldn't buy it. That is not to say intentional contents are irrelevant for consciousness, but that is not the problem he addresses. I think this is a good move on the philosophers' part. There are certainly features of our cognitive structures that are captured by intentional contents, even though not all features of our mind are captured.

It would be interesting to see specific criticisms of some of the philosophers you have read on this topic. I can't claim to be an expert, but have spent a lot of time reading the Churchlands, Dretske, and a little on Searle, Dennett, Millikan, and Fodor. IMO, Dretske is the best in the lot. Searle is the worst because he is not always self-consistent (he is a naturalist but with a questionably coherent brand of nonreductionism and he provides little in the way of positive theories), Dennett is interesting but still locked into a weird kind of positivistic mindset, the Churchlands have spent too much time defending the merits of reductionism (and since they are eliminativists wrt propositional contents they haven't contributed anything to this topic other than skepticism), and Fodor and Millikan are both very good (and very close cousins of Dretske).

I have read Chalmers extensively, but that is on consciousness which I'm not addressing. He is excellent in his clarity of writing, though I think history will show him to be wrong in his confident demarcation of things into "easy" and "hard" problems. Yesterday's hard problems are today's trivial solutions.

Giordano Sagredo said...

Chalmers put KFI in his list of the top ten works in philosophy of mind of the past 30 years here.

Check out my blog for a little teaser trailer of what we're in for. I think it is the best thing in 30 years, perhaps 40.

Ahab said...

dogtown wrote:
Ahab, you might want to check out my new blog here.

Thanks dogtown, I've already placed the link in my Favorites folder. Again, I look forward to reading the book and the comments on your blog site.

mrlee said...

Angus Menuge seems confused.

Dretske explicitly includes an epistemic agent in his definition of informational semantics:

Informational content: A signal r carries the information that a is F = the conditional probability of a's being F, given r (and k), is 1 (but, given k alone, less than 1).

Here the variable k represents the background knowledge of the epistemic subject. This definition clearly states that the content a signal carries is relative to the knowledge of the specific epistemic agent.
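Dretske's condition is just a constraint on conditional probabilities, so it can be checked mechanically. Here is a toy sketch (the joint distribution and all names are invented for illustration, with the background knowledge k folded into the prior):

```python
# Toy joint distribution over (signal r occurs, a is F).
# The probabilities are invented purely for illustration.
joint = {
    (True, True): 0.3,    # r occurs and a is F
    (True, False): 0.0,   # r never occurs unless a is F
    (False, True): 0.2,
    (False, False): 0.5,
}

def p(event, joint):
    """Total probability of the outcomes satisfying `event`."""
    return sum(pr for outcome, pr in joint.items() if event(outcome))

def carries_information(joint):
    """Dretske's condition: P(F | r) = 1 while P(F) alone is < 1."""
    p_r = p(lambda o: o[0], joint)
    p_f_given_r = p(lambda o: o[0] and o[1], joint) / p_r
    p_f = p(lambda o: o[1], joint)
    return p_f_given_r == 1.0 and p_f < 1.0

assert carries_information(joint)  # this signal carries "a is F"
```

Note how strict the condition is: give the (True, False) cell any probability mass at all, so that r can occur without a being F, and P(F | r) drops below 1 and the signal no longer carries the information.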

One of Dretske's formulations of knowledge is given by something like: An epistemic subject s knows that a is F = s's belief that a is F is caused (or causally sustained) by the information that a is F.

Thus these two definitions of informational content and what an epistemic agent knows are mutually recursive. I am not convinced that this mutual recursion bottoms out and avoids vicious circularity.

As for interpreting agents, I'm not sure a vending machine is not an interpreting agent. Its coin validator interprets any input (coins) as being of one kind or another, valid or invalid tender, and responds accordingly. And as anyone who has lost his change to one knows, they're autonomous agents to boot.


Information and Information Flow: An Introduction, by Manuel Bremer and Daniel Cohnitz, p. 128
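The vending-machine point above can be sketched as a minimal classifier (the Coin type, function name, and tolerance windows are all invented for illustration, loosely around a US quarter's 5.67 g weight and 24.26 mm diameter): the machine's "interpretation" bottoms out in purely physical measurements and threshold checks.

```python
from dataclasses import dataclass

@dataclass
class Coin:
    weight_g: float
    diameter_mm: float

def validates_as_quarter(coin: Coin) -> bool:
    """Toy coin validator: classify input by weight/diameter windows.

    The tolerance windows here are invented for illustration.
    """
    return 5.5 <= coin.weight_g <= 5.8 and 24.0 <= coin.diameter_mm <= 24.5

assert validates_as_quarter(Coin(5.67, 24.26))    # genuine quarter: accepted
assert not validates_as_quarter(Coin(5.0, 23.0))  # slug: rejected
```

Whether sorting inputs into "valid" and "invalid" by such measurements counts as interpretation, or only as the kind of uninterpreted signal-processing Menuge has in mind, is exactly what is at issue in the thread.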
