Monday, March 27, 2006

Some Responses to Lippard

His comments are in bold; mine are not.

The conclusion that rationality is *undermined* doesn't follow--at best the conclusion is that the connection between the physical causes and the rational inferences is at best a contingent one that is in need of explanation, which I think is a valid conclusion. But it's one that is in the process of being answered as we learn about how the brain and perceptual systems work, how language develops, and how the mind evolved.

I read Ramachandran's Brief Tour of Consciousness. Maybe the tour was too brief, but I did not see any real progress there toward the solution to these problems, nor did I see any real recognition of the fundamental philosophical difficulties involved. It seems to me that brain science can provide one of two things. One is that it can provide correlations between brain states and mental states. But sophisticated dualists like Hasker and Taliaferro never said that we should not expect to find these. The other thing that brain science might provide is intertheoretic reductions. But I question the coherence of these attempts. I would need expertise in brain science to know just what the correlations are, but that is all you get from the neurophysiological "hard data." On the other hand, I believe that I can, as a philosopher, question the coherence and adequacy of the intertheoretic reductions that scientists may offer, and in doing so I am not guilty of "armchair science."

Some of my comments in response to Carrier, which originally appeared on Vallicella's blog and later on mine, are relevant here:

Carrier gives me two options for developing my argument. Either I prove conclusively that a naturalistic account of reasoning is impossible, or I conduct an exhaustive study of the findings of brain science and show that reasoning probably cannot be accounted for in terms of brain function. It seems to me that there is a third option available. I can show that we are dealing with a conceptual chasm that cannot simply be overcome by straightforward problem-solving. An example would be the attempt to get an "ought" from an "is." Moore argued that for any set of "is" statements concerning a situation, the question of whether this or that action ought to have been done is left open. To generate any confidence that you can get an "ought" from an "is," it simply won't do to come up with one theory after another purporting to do so. We need to be given some reason to think that these theories can surmount the conceptual problem Moore and others have posed.

Another way of putting my point is to say that reason presents a problem analogous to what David Chalmers called the hard problem of consciousness. When we consider seriously what reasoning is, when we reject all attempts at “bait and switch” in which reasoning is re-described in a way that makes it scientifically tractable but also unrecognizable in the final analysis as reasoning, we find something that looks for all the world to be radically resistant to physicalistic analysis.

So I maintain that there is a logico-conceptual chasm between the various elements of reason, and the material world as understood mechanistically. Bridging the chasm isn’t going to simply be a matter of exploring the territory on one side of the chasm.

If the fact that the brain operates in accordance with physical law undermined rationality, then the fact that computers operate in accordance with physical law would undermine their ability to perform logical inferences and computations.

Are computers aware of mathematical relationships? In fact, I don't think Lippard really argued that computers like the one I am typing on are real examples of rationality in the sense that I have been discussing. I thought, rather, that he held there could be possible computers that would possess rationality in the relevant sense. Perhaps Jim can clarify.
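To make the distinction concrete: a program can "apply" modus ponens by blind symbol manipulation, with no awareness of the rule it follows. Here is a minimal sketch (the function and symbol names are my own, purely illustrative):

```python
# Purely mechanical application of modus ponens: the program shuffles
# uninterpreted symbols; nothing here is "aware" of any logical law.

def modus_ponens(facts, conditionals):
    """Repeatedly apply 'from P and P -> Q, infer Q' until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in conditionals:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# 'Socrates is a man' plus 'if Socrates is a man, Socrates is mortal'
print(sorted(modus_ponens({"man"}, [("man", "mortal")])))  # ['man', 'mortal']
```

The machine's state transitions track the inference rule, but whether that tracking amounts to reasoning in the relevant sense is exactly what is at issue.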

The real question is *how* brains came to be able to engage in rational inferences in virtue of the way that they physically operate, not *whether* they do. Gilson (and Victor) argue that they could only have this ability by being divinely designed to do so--a thesis that doesn't seem to be particularly fruitful for scientific exploration.

Is value for the scientific enterprise a criterion of truth? No doubt the fact that it is wrong to inflict pain on people in order to find out how the body responds to pain slows down the scientific enterprise, but Lippard will have to agree that it is true nonetheless.


Steven Carr said...

'One is that it can provide correlations between brain states and mental states. But sophisticated dualists like Hasker and Taliaferro never said that we should not expect to find these.'

Suppose Victor is thinking the sentence 'Unicorns might exist on the planet Zarg'

What correlation can there be between his mental state and the atoms in his brain, or in his liver?

Hasker thinks we should expect to find a correlation between the atoms in the brain and the mental state of thinking about the existence of unicorns on the planet Zarg.

But surely the atoms in the brain behave no differently from atoms in the liver, or in the kidneys.

So what does Hasker think would correlate to the concept of the planet Zarg?

Which bit of Victor's 'Dangerous Idea' book outlines how we should expect to see such a correlation?

Victor Reppert said...

If there are no correlations between mental states and brain states, then that undermines the materialist argument, not mine. It was Lippard who was pointing out the correlations between mental states and brain states.

Steven Carr said...

Victor ducks the question of how, even in principle, his and Hasker's theories predict a correlation between the atoms in the brain and his thoughts, while predicting no correlation between the atoms in his liver and his thoughts.

What is different about atoms in the brain compared to atoms in the liver?

An obvious question, but Victor thinks all matter must be explained at the level of subatomic particles, in which case there is no difference.

Don Jr. said...

Steven, I've never seen a person so active on another's blog yet so negligent toward his own. Since you have so much to say, I would love (honestly) to see you provide some arguments (or just thoughts) on your own blog.

Regarding your comments, I have to ask (unfortunately, as usual), "What's your point?" If you realize what it is, could you please state it clearly?

Lippard said...

"On the other hand, I believe that I can, as a philosopher, question the coherence and adequacy of the intertheoretic reductions that scientists may offer, and in doing so I am not guilty of "armchair science.""

I don't think you can reasonably question the coherence and adequacy of those intertheoretic reductions without learning the science involved. It's not clear to me whether or not you agree.

"Are computers aware of mathematical relationships?"

I think that's a distinct issue. We engage in unconscious inferences and computations all the time; the reflective awareness is an additional capacity.

You don't have to consciously know and apply the rule of modus ponens to be able to do it. Most people don't.

Don Jr. said...

So you're maintaining, Jim, that computers don't just do what they've been programmed to do but that they actually think in the same way we do? Do chalkboards—at least those that display arguments in modus ponens form—also think? If I create a biomechanical arm that can repeatedly write "A implies B; A; therefore, B," does that arm reason?

You're right, in a sense, that one need not be conscious of what one is doing to apply the rule of modus ponens (heck, a chalkboard could display that rule correctly). That's the point! When we speak of reasoning—at least for humans—we are conscious of the laws of logic. That those laws play a causal role in our thinking is part of our reasoning. This is exactly why the computer analogy doesn't work. (Isn't it amazing that computers never get an answer wrong? They must check their work very thoroughly.)

Lippard said...
In general, no, I don't think computers work the same way human brains do, except for parallels between, e.g., theorem provers and humans doing logical inferences, and certain kinds of unconscious processing. Some parallel distributed computing systems have gotten a bit more analogous, but I'm not sure what has been going on in AI and robotics lately. (I haven't read much of anything on connectionism or what used to be called "parallel distributed processing" in over a decade.)

But I don't see the strong AI program as impossible. I think conscious reasoning is closely tied to linguistic capability, and that machines--given the right sort of perceptual apparatus, representational primitives, learning systems, and ability to move and manipulate objects--could be built to have real, referring representations and language capacity. Once you have that, I see no reason why you couldn't build meta-representation and conscious reasoning on top of it.

My point was really that reasoning, deliberation, and rational behavior don't necessarily have to be conscious. In fact, there is empirical evidence that much of our reasoning is not done consciously, and further that sometimes we even confabulate conscious explanations for unconscious behavior after the fact (e.g., Gazzaniga's _The Social Brain_ reports on some experiments with split-brain patients where the confabulation is demonstrable, and Ramachandran's _Phantoms in the Brain_ regarding anosognosia patients).