Saturday, October 10, 2009

Ham-Fisted Empiricism: Hasker on externalism and the AFR

It is of course true that a belief, in order to be justified, needs to have been formed and sustained by a reliable epistemic practice. But in the case of rational inference, what is the practice supposed to be? The reader is referred, once again, to the description of a reasoning process given a paragraph back. Is this not, in fact, a reasonably accurate description of the way we actually view and experience the practice of rational inference and assessment? It is, furthermore, a description which enables us to understand why in many cases a practice is reliable—and why the reliability varies considerably depending on the specific character of the inference drawn and also on the logical capabilities of the epistemic subject. And on the other hand, isn’t it a severe distortion of our actual inferential practice to view the process of reasoning as taking place in a “black box,” as the externalist view in effect invites us to do? Epistemological externalism has its greatest plausibility in cases where the warrant for our beliefs depends crucially on matters not accessible to reflection—for instance, on the proper functioning of our sensory faculties. Rational inference, by contrast, is the paradigmatic example of a situation in which the factors relevant to warrant are accessible to reflection; for this reason, examples based on rational insight have always formed the prime examples for internalist epistemologies.

There is also this question for the thoroughgoing externalist: How are we to satisfy ourselves as to which inferential practices are reliable? By hypothesis, we are precluded from appealing to rational insight to validate our conclusions about this. One might say that we have learned to distinguish good reasoning from bad reasoning by noticing that good inference-patterns generally give rise to true conclusions, while bad inference-patterns often give rise to falsehood. (This of course assumes that our judgments about particular facts, especially facts revealed through sense perception, are not in question here—an assumption I will grant for the present.) But this sort of “logical empiricism” is at best a very crude method for assessing the goodness of arguments. There are plenty of invalid arguments with true conclusions, and plenty of valid arguments with false conclusions. There are even good inductive arguments with all true premises in which the conclusions are false. These are just the distinctions which the science of logic exists to help us with; basing the science on the kind of ham-fisted empiricism described above is a hopeless enterprise.

William Hasker, The Emergent Self (Cornell, 1999), pp. 74-75. From the chapter "Why the Physical Isn't Closed."


Doctor Logic said...

basing the science on the kind of ham-fisted empiricism described above is a hopeless enterprise.

So it's a good thing that no one on this planet does so. :P

Doesn't it work like this?

1) We all assume our rational faculties are reliable.

2) We look at the world and find, to very high precision, that the world is physical.

3) As long as the physical world discovered in (2) doesn't contradict (1), there's no problem.

Now, I understand that the AfR is sometimes taken to suggest that (2) does contradict (1), but the AfR doesn't do that by any stretch of the imagination. At best, the AfR convinces some people that they lack an explanation for how (2) could be consistent with (1). But the AfR only gets that far by ignoring the mechanism of abstraction.

Unknown said...

What is "the mechanism of abstraction"?

Doctor Logic said...


When we take abstraction out of human thinking, there's not much left, so abstraction is vital to any consideration of reason.

However, AfR proponents don't really spend much time defining abstraction. If one doesn't define abstraction, it will be very difficult to see how a machine could abstract. Actually, even knowing what abstraction is, it's still not obvious. However, there are known mechanisms that abstract. Auto-associative neural nets and Bayesian networks are, not surprisingly, examples.

Until you guys tackle abstraction, you're all basically saying that machines that obviously lack a mechanism for abstraction (e.g., Deep Blue) will not be able to think like humans (who have brains composed of abstraction machines). I would agree, but that's hardly an argument that materialism is wrong or that strong AI's can't exist.

Blue Devil Knight said...

I think it is pretty common amongst naturalists (Hasker's likely target) to be an externalist about empirical knowledge (or knowledge gained via perceptual contact with the world), but not about logic or mathematics.

On the other hand, there is certainly nothing strange to me in thinking that an animal can follow inference rules without having conscious access to said rules. It's like Lewis Carroll's riddle: if an animal follows modus ponens, does it need an extra rule that says 'Follow modus ponens'?

Could codifying and making explicit such rules be a uniquely human endeavor, one made possible not by our ability to make logical inferences, but by our ability to talk about them?

Note I'm not endorsing this, as I don't think even humans engage in particularly logical thought in the absence of language. I think language allows us to groom our public inferential practices and refine them over time, perhaps as envisioned by Robert Brandom.