I've been revising a paper of mine that I presented in England in response to Carrier, and I have added this section on what I call the brain fallacy:
But more than that, here again we find Carrier explaining one kind of mental activity in terms of another mental activity and then explaining it "naturalistically" by saying "the brain" does it. My argument is, first and foremost, that something exists whose activities are to be fundamentally explained in intentional and teleological terms. In order for talk about the brain to play its proper role in a physicalistic (non-intentional and non-teleological in the final analysis) analysis of mental events, we have to be sure that we are describing a brain that is mechanistic and part of a causally closed physical world. What I wrote in response to Keith Parsons in Philosophia Christi applies here as well. (Parsons had argued that we could simply take all the characteristics that I wanted to attribute to the non-physical mind and attribute them to the brain.)
But we should be careful about exactly what is meant by the term "brain." The "brain" is supposed to be "physical," and we also have to be careful about what we mean by "physical." If by physical we mean that it occupies space, then there is nothing in my argument that suggests that I need to deny this possibility. I would just prefer to call the part of the brain that does not function mechanistically the soul, since, as I understand it, there is more packed into the notion of the physical than just the occupation of space. If, on the other hand, for something to be physical (hence part of the brain) it has to function mechanistically, that is, intentional and teleological considerations cannot be basic explanations for the activity of the brain, then Parsons' suggestion (and Carrier's as well - VR) is incoherent.
I think that many people fail to see the difficulties posed by the arguments from reason because they think they can just engage in some brain-talk (well, the brain does this, the brain does that, etc.) and call that good. I call that the brain fallacy. The question should always be, "If we view the brain as a mechanistic system in the full sense, does it make sense to attribute this characteristic to the brain?" Using brain-talk doesn't mean that the work of physicalistic analysis has really been done.
12 comments:
“If we view the brain as a mechanistic system in the full sense, does it make sense to attribute this characteristic to the brain?”
What does "this characteristic" refer to?
Characteristics of the mind:
intentionality
qualia
Etc.
OK, I'll bite.
"Characteristics of the mind:
intentionality
qualia"
Yes, I would attribute intentionality to mechanistic systems. It's easy to write a program where you could say it's trying to find a certain value. Qualia would be a much better bet for this sort of argument. However, "qualia" is a very non-specific term that covers several distinct phenomena, so you would have to be specific about which do and do not apply to mechanistic systems. Do some robots experience sight? I'd argue that they do; they receive information from sensors and then use that information to navigate their environment. Do robots experience pain? Not yet. I'm not sure whether we will see it in robots in the future or not; I haven't really researched why we experience pain (as it relates to the brain), so whether it is feasible or not is an open question for me.
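To make the point concrete, here is a minimal Python sketch (my illustration, not anything from the thread; the function name and sample data are made up) of the kind of program we casually describe as "trying to find" a value:

    def find_target(values, target):
        # A loop we would colloquially describe as "trying to find" the target.
        # Whether that talk attributes genuine intentionality is what is at issue.
        for index, value in enumerate(values):
            if value == target:
                return index  # "found what it was looking for"
        return None  # "gave up"

    print(find_target([3, 7, 42, 9], 42))  # prints 2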
curious,
"Yes, I would attribute intentionality to mechanistic systems. It's easy to write a program where you could say its trying to find a certain value."
I'm not sure the philosophical meaning of intentionality is the same as its popular definition, which seems to be the one you are describing.
We are not talking about "the desire or will to do something."
From SEP:
"Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs. The puzzles of intentionality lie at the interface between the philosophy of mind and the philosophy of language."
I'll admit that I'm having a hard time understanding what this fully means and implies.
ozero,
Thanks for clarifying. I hadn't realized the distinction. It sounds a lot like classes in programming. However, like you said, the implications of the definition are not clear-cut or obvious. For all I know, I am missing another distinction within the definition.
I was wanting Victor to answer my question since he wrote it.
cautiouslycurious,
"Yes, I would attribute intentionality to mechanistic systems."
"Mechanistic" systems are by definition lacking in intrinsic goal-orientation, i.e. intentionality. So what you are saying is: "Yes, I would attribute intentionality to [systems that have no intentionality]."
"It's easy to write a program where you could say it's trying to find a certain value."
But you need to distinguish between intrinsic intentionality, and derived intentionality. The symbols in a program (such as "1") don't mean anything at all if you consider just their physical properties; any meaning is assigned (and "held" in place) by an external mind. Their meaning is entirely derived.
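The point about derived meaning can be illustrated with a short Python sketch (illustrative only): the very same bytes count as one number or another depending entirely on the interpretation an outside reader imposes on them.

    import struct

    # The same four bytes, considered purely physically, fix no meaning on their own;
    # which number they "stand for" depends on the reading imposed from outside.
    raw = b'\x00\x00\x80\x3f'
    as_int = struct.unpack('<i', raw)[0]    # read as a little-endian 32-bit integer
    as_float = struct.unpack('<f', raw)[0]  # read as a little-endian 32-bit float
    print(as_int)    # 1065353216
    print(as_float)  # 1.0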
"Do some robots experience sight? I'd argue that they do; they receive information from sensors and then use that information to navigate their environment."
Qualia are the first-person, subjective "what it is like"-ness. Current robots are zombies, as far as that goes. They do not have first-person experience. Whether such a thing is possible in principle or not is one of the main debates.
But one could make an argument that qualia are not properties of physical things. Indeed, they aren't. Objectively, apart from an observer, an apple does not have the color red; that's just a property our minds interpret it as having.
So: nothing physical has qualitative properties
And: first-person experience (qualia) is qualitative
Therefore: qualia are not physical
Martin,
Alas, more definitional complications. If intentionality refers to intrinsic meaning rather than derived meaning, then I would not say that mechanistic systems have it, but then I would be a non-realist about the concept altogether. However, they are perfectly capable of dealing with derived meaning. When a robot reaches a fork in the decision tree, it is perfectly able to implement the command that its decision tree has reached. So, no, I don't think computers have intrinsic meaning, but I don't think brains do either. But insofar as brains have intentionality, I think computers do.
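A toy Python sketch (again only illustrative; the rule and labels are made up) of the kind of decision-tree fork described here; the labels mean nothing to the program itself, only to us:

    # A toy decision "tree" with a single fork: the robot simply executes whatever
    # branch the stored rule selects; any meaning of the labels is supplied by us.
    decision_tree = {"obstacle_ahead": {True: "turn_left", False: "go_forward"}}

    def decide(sensor_reading):
        return decision_tree["obstacle_ahead"][sensor_reading]

    print(decide(True))   # turn_left
    print(decide(False))  # go_forward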
As for qualia, I think that zombie arguments tend to be misused. A zombie as typically characterized would be someone who is reduced to base urges, to feed; they don't have feelings like compassion and empathy. However, they still have qualia. They still experience sight, hunger, and pain (e.g. sensitivity to light). They don't feel pain through their extremities, but their brains still operate similarly to ours (hence why headshots are so effective). If computers are like zombies, then they do have qualia, unless you stipulate that a zombie is someone who doesn't have it (like it's missing a soul or something), which would assume the conclusion outright. How do you know that computers don't have qualia? What is the test for qualia?
It seems like the examples given shouldn't be applied to mechanistic systems, but I also doubt that atheists/skeptics/etc. would apply them to the brain. I too would be interested in an example of the brain fallacy.
BeingItself,
"[T]his characteristic", lacking a proper antecedent, means "such-and-such characteristic", or something to that effect. That is how I, at least, made sense out of that poorly constructed sentence.
The "characteristic" might be "believes p" or "infers p from q."
If you think programming will make it easy to argue that the brain has, say, a belief that p, then I would appeal to the distinction between original and derived intentionality.
Might be? Which one is it? What did you mean when you wrote it?
Let's try "infers p from q." I am talking about attributing mental characteristics to an agent.