Thursday, May 04, 2006

Against the San Diego mafia

Does a map of the streets of San Diego have propositional contents? I am frankly not sure. If you think it does, then you'll think that Churchland's theory is just an implementation of propositional attitudes. If it does not, then you'd be an eliminativist. Note this isn't the same as saying that we can make statements (in language) about the map that are true or false. We can do the same with our phenomenal experience, which you have admitted has nonpropositional contents. The key question, then, is: is the map's representational format propositional or nonpropositional?

Churchland thinks map contents are nonpropositional, and that neural state spaces have the same type of content as maps. To the extent that we can judge a map's accuracy, it is based on the relative locations of points on the map, not the properties of individual points.
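One toy way to picture that "relative locations" idea (my own illustrative sketch, with invented place names and coordinates, not Churchland's actual formalism): judge a map by whether the ratios of its pairwise distances match the world's, not by the coordinates of any single point.

```python
import math

# Hypothetical "world" coordinates (invented for illustration).
world = {"harbor": (0.0, 0.0), "park": (3.0, 4.0), "zoo": (6.0, 0.0)}

# A "map" of the same places, drawn at half scale and shifted: no single
# point has the world's coordinates, yet the relational structure matches.
map_points = {"harbor": (10.0, 10.0), "park": (11.5, 12.0), "zoo": (13.0, 10.0)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def distance_ratios(points):
    # Accuracy here is judged by ratios of pairwise distances,
    # not by any property of an individual point.
    names = sorted(points)
    pairs = [(names[i], names[j])
             for i in range(len(names)) for j in range(i + 1, len(names))]
    ds = [dist(points[a], points[b]) for a, b in pairs]
    return [d / ds[0] for d in ds]

print(distance_ratios(world))       # relational "shape" of the world
print(distance_ratios(map_points))  # same shape, so an accurate map
```

On this way of scoring things, the map above counts as accurate even though every individual point is "wrong", which is exactly the contrast with evaluating individual sentence-like states for truth.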

I think you are missing the real question: does a map (or whatever else you want to put in there) satisfy the three requirements for a successful successor to propositional attitudes laid down by Baker? Unless, that is, you think those requirements are somehow unacceptable. I think the end-of-the-world consequences that are typically attributed to EM actually do follow unless the replacements have these characteristics.

To do the job required by Baker these states have to pick out propositions. If we want to accuse Bush of lying about WMDs, we have to posit a relationship between President Bush and the proposition "Saddam has weapons of mass destruction." If no relationship obtains, then we can't call him a liar. Or a truth-teller.

If there is no relation between persons and propositions, then we cannot be said to lie or tell the truth, we cannot be said to make assertions, and we cannot perform rational inferences, including the rational inferences that are supposed to establish eliminativism itself. We would, in fact, not know what the sentences we are now asserting mean.

Do maps lie? How do we explain the difference between a lying map and some other kind of inaccurate map? You can look at the map all day and not find out. In order to answer that question you have to look at the states of the person who made the map. And I don't see any alternative other than to ask whether the mapmaker believes that the map is accurate, or not.

I'm still arguing that either the "successor" states do the work of propositional attitudes as per Baker's criteria, in which case they are propositional attitudes, or they don't do that work, in which case epistemic Armageddon ensues.


Blue Devil Knight said...

Churchland's theory is a theory about how to get representational contents in brains. I ignored the three Baker points because she is attacking EM assuming it is the theory that there are no mental contents. This is the single most common (and ignorant) view of Churchland that I find.

You can disagree with the adequacy of Churchland's theory of mental content, but that is what it is (and obviously I was just briefly summarizing a 100 page book chapter of his). To lie is to have one map of the world in your mind, but say otherwise with the goal of misleading someone. It's like telling someone to take a left when your map shows you that they should take a right.

You and Baker obviously really want to import the semantic properties of public linguistic expressions (e.g., truth, falsity, inferential relations) into the head, as somehow being essential to our internal, nonpublic, nonlinguistic life. This may well be a monumental error of judgment. An EM-ist gets to preserve all those semantic features in language, but without a priori assuming these features are merely reflections of the exact same features in some second, internal, language of thought.

I understand what EM undermines: an interesting and speculative model of our cognitive contents that has formed the basis for lots of further speculation to which many people get attached. I don't share this attachment. I think it will be a surprising discovery if the contents of our minds have all these funky features of our public language.

The behavioral and neuronal evidence presently underdetermines which theory is best to explain animal behavior, so I'm agnostic. I do think truth, inference, and the like exist, and they're in the medium that you are reading right now (and, I hope, in this sentence).

If philosophers had started with monkeys and newts, rather than people and language (again, a recent and idiosyncratic expression of a single species' neuronal information processing systems), philosophy of mind would probably look a lot different (and a lot better) right now.

Blue Devil Knight said...

I said:
I do think truth, inference, and the like exist, and they're in the medium that you are reading right now (and, I hope, in this sentence).

As stated, this is obviously not enough. Clearly, a sentence scrawled by an ant crawling on the sand ('I am') is not true or false. An EMer has to say that the truth of sentences in the world (produced by intelligent agents) is a novel, idiosyncratic, recent development on the evolutionary scene. The relationship between the rich, internal milieu of neuronal information processing states, and the relatively anemic contents expressed by public propositions, is not something I will pretend to understand. A motivated EMer like Churchland needs either to show that his map theory applies to external sentences (and so that propositional truth doesn't exist at all), or, if truth values do exist, to show how the truth of linguistic utterances is somehow parasitic on the contents of cognitive maps. I think both projects are feasible, but frankly don't have the motivation to pursue them (being agnostic and all).

This would be no small project, as it would entail building models of language use, and the relation of surface linguistic constructions (words, phonemes, sentences) to the semantics of the underlying maps. Churchland would need to show that grammatical constructions are themselves built up from internal maps of language space, and whether this would be a 'mere implementation' of propositional attitude psychology is impossible to determine without knowing the details.

Anonymous said...

blue devil knight,
Would the “calculations” the brain makes when someone catches a ball be analogous to the role of propositional language?
I don’t believe the brain is actually calculating out complicated equations in the act of catching. But it is possible to translate whatever it is doing into mathematical formulas.
Mathematics and language would be more like tools which enable us to interact with the world, but the mind doesn’t actually need to be structured propositionally or mathematically in order to use propositional language or mathematics.
Or am I completely misunderstanding what Churchland is trying to do?

Blue Devil Knight said...

Tim, that is a good example to think about. I have been focusing on sensory representation, but the topic of brains building models actually comes up much more explicitly in the motor control literature.

Our brain seems to use continuous control structures for such skilled motor control. I think you are right to wonder what this implies for propositional attitude psychology. We can also use a set of continuous, nonlinear, differential equations to describe the activity of a single neuron in a dish, but we don't want to say that the neuron knows differential equations, or has propositional attitudes! The difference is that in forming models of the world to aid in motor control (e.g., a model of the ball's trajectory so that we can hit it with a bat, as our visual system isn't fast enough to do it alone), the EMers want to say the brain is representing the world.

The main difference between the single neuron case (where we obviously are just using math to describe its dynamics, not to say any of those variables are the content of its states), and the higher-level motor control case, is that in the latter the brain seems to actually actively build models of the world (e.g., the physics of moving baseballs and your own body), and then uses this model to help control behavior. So we are not merely interacting with the world, but brains have evolved means to disengage from the world and model things that are not there, that we cannot see, etc..
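A toy version of such an internal model (my own illustrative sketch; the function name, physics, and numbers are all invented for illustration, not drawn from the motor control literature): instead of waiting for vision to report each position of the ball, the system simulates the ball's trajectory forward in time and predicts where it will land, so the catcher can start moving immediately.

```python
# Toy internal "forward model": predict a ball's landing point by
# simulating its physics, rather than perceptually tracking it.
# All parameters are invented for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(x0, y0, vx, vy, dt=0.001):
    """Step a simple projectile model forward until it hits the ground;
    return the predicted horizontal landing position."""
    x, y = x0, y0
    while y > 0.0:
        vy -= G * dt          # gravity slows, then reverses, vertical speed
        x += vx * dt
        y += vy * dt
    return x

# The "catcher" can run toward the predicted spot well before the ball
# gets anywhere near the ground.
landing = predict_landing(x0=0.0, y0=2.0, vx=8.0, vy=6.0)
print(f"run to x of about {landing:.1f} m")
```

The philosophical point is that nothing in the brain need look like these equations; the question is whether running some such disengaged simulation counts as representing the ball's future trajectory.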

Now, what exactly it means for a brain to build a model, and whether the content of a model's representational states is propositional or nonpropositional, are questions at the cutting edge of neurophilosophy.

The best neurophilosophical work, to date, on this topic is done by Rick Grush, as can be found in this paper. For a more empirical/neuroscientific approach, this book looks excellent, though I haven't actually read it yet! For a more technical, paper-length introduction to this vast literature, this paper is a good entry-point.

One interesting argument is that, since nervous systems originally evolved because they made a behavioral difference to the organism, we should look to the motor control systems for the elementary aspects of neuronal representation, and then see if this architecture might not have been iterated during evolution to give rise to the more subtle cognitive processes that emerged. This idea isn't crazy, and for all we know it is right.

Blue Devil Knight said...
This comment has been removed by a blog administrator.
Blue Devil Knight said...

It is important to note: neuroscientists couldn't give a damn about this debate, and it's a good thing, too. Qua neuroscientist, I don't care if my model of how the rat represents touch is propositional or not. We try to build the best explanations of what we are studying, the ones that best fit our data: concern with the philosophers' categories and armchair, speculative models of animal behavior is just not on our minds. No neuroscientist I know knows who Dretske is. Some have heard of the Churchlands and Fodor, but none of them give a damn about what they have said (much less Menuge, Baker, and the like).

And this is a great thing. Imagine if Einstein had been a die-hard Kantian about space. Imagine if Darwin had genuflected to the philosophico-religious pressures of his day. Science works best by letting the data and imagination go in seemingly crazy directions (quantum mechanics certainly seemed crazy at first!), within certain fairly broad constraints (e.g., we now constrain ourselves to methodological naturalism because history has shown us that the alternative leads to intellectual armageddon).

So, I don't want to mislead anyone that scientists care about this crap. They don't. I am an anomaly because I worked with the Churchlands for a few years in grad school, so I periodically defend their views. My recommendation to anyone interested in how brains process information: get a degree in electrical engineering and/or neuroscience. Studying philosophy will not help you understand how brains process information: for that, we need to study brains.

That said, if you like the abstract arguments about contents, whether they are propositional or not, and what this means for the future of humankind, and don't mind that brain science is too young to have empirically grounded discussions of these things, and you can imagine yourself spending forty hours a week arguing and thinking about such things, again in a field that traditionally eschews getting your hands dirty with data, then you might consider philosophy as a profession.

Victor Reppert said...

I don't know why you say that Baker thinks that Churchland says there are no mental contents. She is very clear in claiming that Churchland is saying that there will be successors to the eliminated concepts; she has serious doubts that those successors will do the job that Churchland wants them to do. Baker also explicitly denies that the sentences-in-the-brain model will be justified by science; she thinks connectionist models will probably be the most promising from a scientific standpoint. But she thinks the science-to-metaphysics move is far from obviously correct.

Right now the Churchlands do not have an elimination on offer, and they see no reason to stop talking as if they had propositional attitudes. It may end up that the eliminativist model developed will be the best model of the brain, but it will not really be possible to take it as the truth about the mind. To use van Fraassen's terminology, it will be empirically adequate but not true.

Blue Devil Knight said...

Two of Baker's three criteria stipulate that they can't refer to the content of mental states in their account. If she thinks that EMers have a theory of content, then why would she stipulate this?

The Churchlands do have an elimination on offer: it is Paul's state space semantics. We could argue about whether it works, for all the reasons I gave above. Also, as you've pointed out, he needs more than just a map. I emailed him a while ago about what I call the 'three points problem': I can randomly write out three points on a piece of paper, and their metric relations will map onto all sorts of metric relations among things in the world, but this doesn't mean they are representing those things. It is the same problem you get with a brute informational semantics (Dretske's project is basically an attempt to preserve the importance of information, but without making it sufficient for content).
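The three points problem can be made vivid in a few lines of code (an illustrative sketch of the objection, with randomly generated, invented coordinates): three points scribbled at random will share their metric relations, up to scale, with many triples of things in the world, so metric structure alone can't fix what, if anything, they represent.

```python
import itertools
import math
import random

random.seed(0)  # fixed seed so the example is reproducible

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shape(points):
    """Pairwise distances normalized by the largest: the triangle's
    scale-free 'shape', which is all a bare metric mapping preserves."""
    ds = sorted(dist(p, q) for p, q in itertools.combinations(points, 2))
    return tuple(d / ds[-1] for d in ds)

# Three points scribbled "at random" on a page.
scribble = [(random.random(), random.random()) for _ in range(3)]

# A sprawl of "world" landmarks (invented coordinates).
world = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

# Count triples of landmarks whose shape approximately matches the scribble.
target = shape(scribble)
matches = [t for t in itertools.combinations(world, 3)
           if all(abs(a - b) < 0.05 for a, b in zip(shape(t), target))]
print(len(matches), "matching triples out of", 30 * 29 * 28 // 6)
```

Typically many triples match, which is the point: the scribble's metric relations map onto all of them indiscriminately, so something beyond metric structure has to do the representational work.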

I'll let you know what his response to the three points problem is, if I hear back from him.