IV. Argument from the Psychological Relevance of Logical Laws
My fourth argument concerned the role of logical laws in mental causation. In order for mental causation to be what we ordinarily suppose it to be, it is not only necessary that mental states be causally efficacious in virtue of their content; it is also necessary that the laws of logic be relevant to the production of the conclusion. That is, if we conclude “Socrates is mortal” from “All men are mortal” and “Socrates is a man,” then not only must we understand the meanings of those expressions, and not only must these meanings play a central role in the performance of the inference, but what Lewis calls the ground-and-consequent relationship between the propositions must also play a central role in these rational inferences. We must know that the argument is structured in such a way that in arguments of that form the conclusion always follows from the premises. We do not simply know something that is the case at one moment in time; we know something that must be true at all moments of time, in every possible world. But how could a physical brain, which stands in physical relations to other objects and whose activities are determined, insofar as they are determined at all, by the laws of physics and not the laws of logic, come to know, not merely that something is true, but that it could not fail to be true regardless of whatever else is true in the world?
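Put in standard predicate-logic notation (a rendering of convenience, not the way Lewis states the point), the form at issue is:

$$
\forall x\,\big(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\big),\quad \mathrm{Man}(s) \;\vdash\; \mathrm{Mortal}(s)
$$

and its validity does not depend on which predicates or which individual we plug in. That is the sense in which the relationship holds not merely here and now, but in every possible world.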
We can certainly imagine, for example, a possible world in which the laws of physics are different from the way they are in the actual world. We can imagine that instead of living in a universe in which dead people tend to stay dead, we find them rising out of their graves on a regular basis on the third day after they are buried. But we cannot imagine a world in which, once we know which cat and which mat we mean, it could possibly be the case that the cat is both on the mat and not on the mat. Now, can we imagine there being a world in which 2 + 2 is really 5 and not 4? I think not.
It is one thing to suggest that brains might be able to “track” states of affairs in the physical world. It is another thing to suggest that a physical system can be aware, not only that something is the case, but that it must be the case; not only that it is the case, but that it could not fail to be the case. Brain states stand in physical relations to the rest of the world and are related to that world through cause and effect, responding to changes in the world around us. How can these brain states be knowings of what must be true in all possible worlds?
Consider the difficulty of going from what is to what ought to be in ethics. Many philosophers have agreed that you can pile up the physical truths, and all the other descriptive truths from chemistry, biology, psychology, and sociology, as high as you like about, say, the killings of Nicole Brown Simpson and Ronald Goldman, and you could never, by any examination of these, come to the conclusion that these acts were really morally wrong (as opposed to being merely widely disapproved of and criminalized by the legal system). Even the atheist philosopher J. L. Mackie argued that if there were truths of moral necessity, these truths, and our ability to know them, would not fit well into the naturalistic world-view, and that if they existed, they would support a theistic world-view. Mackie could and did, of course, deny moral objectivity, but my claim is that objective logical truths present an even more serious problem for naturalism, because the naturalist cannot simply say they don’t exist on pain of undermining the very natural science on which his world-view rests.
Arguing that such knowledge is trivial because it merely concerns the “relations of ideas” and does not tell us anything about the world outside our minds seems to me an inadequate response. If, for example, the laws of logic are about the relations of ideas, then they are not only about ideas I have thought already; they are also true of thoughts I haven’t even had yet. And if contradictions can’t be true because this is how my ideas happen to relate to one another, and it is a contingent fact that my ideas relate to one another in this way, then nothing guarantees that they won’t relate differently tomorrow.
Carrier responds somewhat differently. He says:
For logical laws are just like physical laws, because physical laws describe the way the universe works, and logical laws describe the way reason works—or, to avoid begging the question, logical laws describe the way a truth-finding machine works, in the very same way that the laws of aerodynamics describe the way a flying-machine works, or the laws of ballistics describe the way guns shoot their targets. The only difference between logical laws and physical laws is the fact that physical laws describe physics and logical laws describe logic. But that is a difference both trivial and obvious.

What this amounts to, it seems to me, is a denial of the absolute necessity of logic. If the laws of logic just tell us how truth-finding machines work, then if the world were different, a truth-finding machine would work differently. I would insist on a critical distinction between the truths of mathematics, which are true regardless of whether anybody thinks them or not, and laws governing how either a person or a computer ought to perform computations. I would ask, “What is it about reality that makes one set of computations correct and another set of computations incorrect?”
William Vallicella provides an argument against the claim that the laws of logic are empirical generalizations:
1. The laws of logic are empirical generalizations. (Assumption for reductio).
2. Empirical generalizations, if true, are merely contingently true. (By definition of ‘empirical generalization’: empirical generalizations record what happens to be the case, but might have not been the case.)
3. The laws of logic, if true, are merely contingently true. (1 and 2)
4. If proposition p is contingently true, then it is possible that p be false. (True by definition)
5. The laws of logic, if true, are possibly false. (From 3 and 4)
6. LNC is possibly false: there are logically possible worlds in which p & ~p is true.
7. But (6) is absurd (self-contradictory): it amounts to saying that it is logically possible that the very criterion of logical possibility, namely LNC, be false. Therefore 1 is false, and its contradictory, the claim that the laws of logic are not empirical generalizations, is true.

Logic, I maintain, picks out features of reality that must exist in any possible world. We know, and have insight into, these realities, and this is what permits us to think. A naturalistic view of the universe, according to which there is nothing in existence that is not in a particular time and a particular place, is hard-pressed to reconcile its theory of the world with the idea that we as humans can access not only what is, but also what must be.
13 comments:
The law of noncontradiction is true if you assume first-order sentential logic is the only game in town. Since Godel and QM and Frege we know this isn't the case.
Using natural language, consider the liar's paradox. Is it true or false? Most would say, neither. Using the resources of sentential logic, we can say:
~(L v ~L), which is equivalent to
~L & ~~L, which is the same as
L&~L. Logicians could formulate the Godel sentence, essentially expressing 'This well-formed formula of first-order logic is false', thereby generating a formal claim that is undecidable using the standard axiomatic systems.
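As a quick illustration (just a brute-force check of the two classical options, nothing more), treat the liar as imposing the constraint L <-> ~L on its own truth value:

```python
# A minimal sketch: the liar sentence says of itself that it is false,
# i.e. it imposes the constraint L <-> not L on its own truth value.
def satisfies_liar(value: bool) -> bool:
    return value == (not value)

classical_solutions = [v for v in (True, False) if satisfies_liar(v)]
print(classical_solutions)  # [] -- neither True nor False works
```

Neither assignment satisfies the constraint, which is why, under bivalence, we end up with the L&~L collapse above.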
Further, some quantum logics reject the law of the excluded middle (i.e., that (A v ~A) is a tautology), which also has repercussions for the law of noncontradiction.
It is because of such instances that paraconsistent logical systems, in which it is possible for both A and ~A to be true, have been developed (by Priest and others).
These considerations suggest that logical truths depend on the target system being modelled. If I am interested in the number of water droplets in a glass, then I need mathematics in which 1+1=1 to get the right answer.
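To make the droplet example concrete (a toy sketch with conventions of my own choosing, not a serious formal system), compare 'combining' blocks with 'combining' droplets that coalesce:

```python
# Toy sketch: "addition" for blocks versus for coalescing water droplets.
def combine_blocks(a: int, b: int) -> int:
    return a + b                      # ordinary arithmetic: 1 + 1 = 2

def combine_droplets(a: int, b: int) -> int:
    # Pouring any positive number of droplets together yields one droplet.
    return 1 if (a + b) > 0 else 0

print(combine_blocks(1, 1))    # 2
print(combine_droplets(1, 1))  # 1
```

Which arithmetic is the right one depends entirely on the target system being modelled.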
In the world we live in, in which things don't have incompatible properties at the same time (my cat is not both alive and dead), traditional logical systems work very well. If I were a quark, my pet might well be Schrödinger's cat, and the law of noncontradiction might be inadequate for rational discussions of my reality.
Quine foreshadowed all this in 'Two Dogmas of Empiricism' and other writings.
I fail to see how the appeal to paraconsistent logics solves the naturalist's ontological problem here. If you have any logic at all, you have truths that are not true at any particular time or place, but true independently of where you are. So I'll take whatever logic you throw at me and argue that it makes epistemological and ontological commitments that are very hard to square with naturalism.
My post was directed at Vallicella's argument, specifically. The one you pose now is different.
On your claim about the temporality (or lack thereof) of logical truths: this is interesting.
I thought this sounded familiar, and just found some old posts on your blog. I pretty much agree with what Dogtown says in an old post here.
My first comment above (on this new thread) actually expands on one of Dogtown's points about mathematical knowledge.
I think that truth/falsity is not a property of most objects in the world (e.g., the rain is not true or false). It is a property of a certain special subset of objects: cognitive and public symbolic structures (e.g., the sentence 'It is raining' can be true or false, and 'It was raining five years ago in Africa' can also be true or false, even though I am not there spatially or temporally). What determines whether these structures have the property of being true or false is the properties of the domain being modelled (e.g., if it is snowing, 'It is snowing' is true).
Whether a mathematical claim is true depends on the axiomatic system you are working in, or the physical domain you are modelling (e.g., water-droplet world versus toy-block world: 1+1=2 in one but not the other). I don't think axiomatic systems existed before humans (or other suitably cognitive agents), nor do I think models of the physical world existed before them. Our models can discover the truth, and be true, and we can say that 10 million years ago 1+1=2 (for classical block-type objects) -- which is to say, not that there were such propositions back then, but that the world was structured that long ago so that this relation among properties (not predicates) held.
Have a nice weekend all.
Vic, I don't suppose you have asked yourself what you mean by "Logical Laws" or asked yourself how the brain/mind functions as a whole?
Take something even simpler to begin with, "mental states." A "mental state" is first of all not something anyone is conceived with in the womb, since brains and neurons had to evolve in species after species and then have to develop in each individual organism from conception onward. Each organism's brain/mind has to continue to develop much further -- taking in data at a furious pace each second, too fast to consciously acknowledge (even adults take in far more data from the world than they can consciously acknowledge) -- and only then do "mental states" appear in an organism. Probably no mental state arises completely independently to begin with, but arises already linked to some things rather than others. And still later, as the connections whittle down or get refined via trial and error and practice, we are taught such things as how to connect our mental states and our ideas with speech and even with the written word.
Only then can we read what others have written or speak with them and consider what they are saying to us, which may then alter our own brain/mind views.
But the question remains how the world's input is taken in each minute -- in words, speech, vision, hearing -- and then silently, unconsciously PROCESSED, so that a reaction to such things, or an agreement with what you hear, comes out; and all of that PROCESSING is happening silently and unconsciously BEFORE we have conscious reactions, BEFORE we think each thought or type each word in response, etc. True, many such reactions come about almost immediately, since the adult brain/mind has already learned and ingrained zillions of brain/mind-routes during a lifetime of learning, reactions, and counter-reactions. The brain/mind in that sense is like a trained boxer, ready to counterpunch, raring to "go think."
As for "Logical Laws," the only universal proofs they have provided that convince all philosophers equally well, are also among the most simplistic of proofs, such as "A does not equal B," or "NON-A does not equal A." The fact that one thing is not another is probably an act of distinction that even animals who choose to get out of the rain can recognize to some basic extent, or animals who move out of the shade into the sunlight.
As for explanations of greater and greater scope that involve thinking about myriads of diverse input, philosophers, politicians and theologians continue to disagree, even after employing all the logic, reasoning ability, and rhetorical flourishes and metaphorical and analogical poetry at their disposal.
Another thing that is amazing, of course, is that even after the most involved, meticulously logical and rational debates, some are still able to convince themselves that to change their beliefs and believe otherwise than they presently do would mean that they themselves were "possibly bound for eternal hell." And keeping that "logical thought" foremost in one's head (or perhaps unconsciously) seems to be pretty good at stopping most other types of thoughts from popping up.
Vic,
Is Carrier really saying that the laws of logic are contingent? I understand that he's left himself open to that reading, but IMNSHO that would be a pretty desperate blunder, and I'm hesitant to attribute it to him unless he says that in so many words.
BDK's observations are muddled.
(1) First-order logic isn't simply sentential -- it's predicate logic.
(2) Most logicians don't accept "This sentence is false" as meaningful.
(3) Gödel sentences don't arise in first-order logic, which lacks the capacity for self-reference and is provably consistent and complete.
(4) "Quantum logics" are semantically deviant in a way that dissolves the appearance of contradiction between them and classical bivalent logics.
(5) Paraconsistent logics are technically ingenious but provide no reason to give up classical bivalent logic; at best they're useful in particular contexts, e.g., when one is trying to construct an expert system where there may be hidden contradictions in the (massive) dataset and therefore wants to block the trivial derivation of an arbitrary wff by reductio (a sketch of the derivation being blocked appears after this list).
(6) The "1+1 = 1" example trades on the ambiguity between pure and applied arithmetical statements.
(7) It's interesting to note that Quine's views on the revisability of logic evolved somewhat over time, as his negative appraisal of deviant logics in Philosophy of Logic reveals. By the time I met him in 1990 he was talking about "stimulus analyticity" with a straight face. But this is beside the point: Benson Mates refuted Quine's arguments against analyticity in his paper "Analytic Sentences," which is even better than the widely read (and pertinent) Strawson and Grice paper.
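To make the point in (5) concrete, here is one standard route -- sketched in ordinary classical notation -- by which a single hidden contradiction licenses any wff B whatsoever; it is this "explosion" that paraconsistent systems are built to block (a reductio route is just as short):

$$
\begin{array}{lll}
1. & A & \text{premise (one half of the hidden contradiction)}\\
2. & \neg A & \text{premise (the other half)}\\
3. & A \lor B & \text{from 1, disjunction introduction, for an arbitrary wff } B\\
4. & B & \text{from 2 and 3, disjunctive syllogism}
\end{array}
$$

A paraconsistent system restricts one of these moves (typically disjunctive syllogism), so that a buried contradiction in a large dataset does not certify every query.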
Victor,
Thanks for the link to Carrier's page. All I can say is that anyone who wants to debate Carrier should be sure to include the nature of logic, since it's a subject where Carrier's barbarian-level physicalism is a complete flop. I think this was my favorite part:
Reppert again offers no evidence that the laws of logic don't actually derive from the laws of physics. As I have already explained, obviously they do: logical operations in computational physics are certainly an inevitable emergent fact of physical laws: because the laws of physics are as they are, the laws of logical operations are as they are. If the physical laws were relevantly different, so would the logical laws be.
It's been a long time since I've read something that silly from someone who has that much to say. This, from the man who is "no less a philosopher than Aristotle" in "knowledge, education, and qualifications." You can find the original quoted here.
I see he's reworded that bit on his website now, though he still claims that his knowledge, education and qualifications are comparable to Aristotle's "in every relevant respect."
Maybe not.
Tim is right that I muddled some of my terminology. My point wasn't that the counterexamples I cited would work in propositional logic, but that when you expand beyond the propositional calculus, things get more complicated for Vallicella's argument.
As for the liar's paradox, you could say it isn't meaningful, but that seems ad hoc. I think it is meaningful, but neither true nor false. That is, it has an intension, but is either of indeterminate truth value or is both true and false. It is such examples which paraconsistent logics are constructed to help us treat logically rather than dismiss as not meaningful.
The water droplets case shows that what is true in one axiomatic system is not true simpliciter. To suggest the example merely shows that in 'applied mathematics' things sometimes work differently than in pure mathematics seems wrongheaded. Statements in applied mathematics are mathematical propositions. That is, there is some formal system within which water droplet maths work out.
It would be more accurate to say that for some applications, different formal systems are appropriate. We don't, a priori, know which formal systems will work in a particular application. Kitcher discusses the empirical influence on "pure" mathematical truths in his book The Nature of Mathematical Knowledge.
I am not sure what it means to get out of the rejection of (A v ~A) in quantum logics by reference to their 'deviant' semantics. To get more concrete: if we lived in a world where Schrödinger's cat was in a superposition of alive/dead, would the flat claim 'Either it is alive or not alive' be a trivial logical truth, a tautology? Clearly not. You need a third alternative: it is either alive, not alive, or the superposition, i.e.,:
A v ~ A v (A|~A)
where '|' stands for the superposition. Hence, if it turns out that (A|~A), then it follows that ~(A v ~A), i.e., A and ~A (the above uses disjunctive syllogism, so would need to be supplemented with additional premises about when that is a valid inference rule).
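As a toy check (using strong Kleene three-valued tables as a stand-in, not any particular quantum logic), adding a third value for the superposed case is enough to keep (A v ~A) from coming out true on every row:

```python
# Toy sketch: strong Kleene three-valued connectives, with U standing in
# for the superposed/indeterminate case.
T, U, F = 1.0, 0.5, 0.0

def neg(a: float) -> float:
    return 1.0 - a

def disj(a: float, b: float) -> float:
    return max(a, b)

for a in (T, U, F):
    print(a, disj(a, neg(a)))
# 1.0 1.0
# 0.5 0.5   <- excluded middle fails on the indeterminate row
# 0.0 1.0
```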
At any rate, my claim wasn't that it is impossible to cash out QM using standard logic (Schrödinger's equation is a perfectly normal differential equation, after all, and complex analysis can be cashed out in terms of standard set theory), but that it is possible to do otherwise. That is all I need to show. I haven't seen a non-question-begging response that shows otherwise.
I agree that the response to Quine he cites is excellent: Quine's argument against the analytic/synthetic distinction is not compelling (unlike his argument against the second dogma).
Note that while I think Vallicella's argument doesn't work, I do agree that not all of mathematics is an empirical generalization. We can construct arbitrary formal systems which induce a set of possible and impossible consequences. It is an empirical question which formal systems are best for our world (if our world were the water-droplet world and all we cared about was 'how many' (not the volume), then the axiomatic systems used in most mathematical modelling today would be useless).
BDK,
Thanks for the response. I'm sorry if my terse comment came across as rude; it wasn't intended that way.
A lot depends on what you mean by "moving beyond" propositional (or predicate?) calculus. Formal systems that are created by enriching a simpler system -- say, the classical propositional calculus -- with additional vocabulary and suitable formation and transformation rules may retain every theorem of the simpler system and be provably consistent. Simple modal extensions of the propositional calculus are like this. But such conservative extensions of classical bivalent propositional logic pose no challenge to Vallicella's argument, since ~(P & ~P) is still a theorem in them.
About the liar paradox, you write:
I think it is meaningful, but neither true nor false.

By "meaningful" I meant "either true or false." Of course this is a technical sense of "meaningful," but it is widespread. It does dissolve the paradox, and I see nothing ad hoc about it. That said, there is also something attractive about Prior's suggestion that "This sentence is false" means the same thing as "It is true that this sentence is false," which in turn means the same thing as "This sentence is true and this sentence is false," and that in consequence the liar sentence is straightforwardly false. In neither case is there any problem for bivalent logic.
The water droplet case shows that the physical act of combining drops of water doesn't obey the field axioms. Since "1+1 = 2" doesn't pertain to water drops, nothing about water drops can falsify it. Statements in "applied mathematics" are claims that certain sorts of entities or operations in the physical world are well modeled by certain entities or operations in pure mathematics. To say that they "are mathematical statements" is true if you mean "applied," in which case the question of their truth is a function of whether the properties of the physical operations are relevantly isomorphic to the properties of the abstract operators. But it's false if you mean that they are on the same logical footing as the statements of pure mathematics.
I have Kitcher's book; I just think that he is completely wrong about the nature of mathematical truth. James Robert Brown's position is closer to what I think is the truth of the matter. We can go into that if you like.
In Putnam's Quantum Logic, the truth table for "v" isn't the same as the truth table for "v" in classical bivalent logic. That's an example of what I mean by semantic deviance. It means that you can't take the fact that the inscription
(P v ~P)
isn't a theorem in Putnam's QL, whereas the inscription
(P v ~P)
is a theorem in CBL, to indicate that the two systems are in conflict.
I think you're overreading the quantum mechanical question. The very most you could get out of it would be that "Alive" and "Dead" are contrary predicates rather than contradictories. I'm not persuaded that we need to go even that far, but waive that; there's no longer any challenge to CBL here.
On the other things, I think we are broadly in agreement.
Tim, thanks for the useful comments. I am a total dilettante in philosophy of mathematics. Frankly, as a naturalist, I consider it a major problem. I am grateful you dropped in and took the time to offer some of your expertise.
As for the water droplets case, I am not sure who to believe. I thought it over last night, and by my reasoning above, F=ma and F=ma^2 would both be 'mathematical truths'. Clearly this isn't the case. They are empirical truths, as you claim.
My idea was that, given an axiomatic system supplemented with special terms from a science (e.g., F, m, a), the laws of physics could be incorporated into that system (much as has been done in standard formulations of QM). My thought was that there is no reason to think we couldn't do the same with water droplets.
On the other hand, this doesn't mean that F=ma is a 'logical' truth. How about this: "pure" logic is a similar axiomatic system, but stripped of the special terms from science, using terms common to (almost all) science (e.g., the so-called logical terms). I think this is something like what Gila Sher argues, and I think I may believe it.
It is strange: I am in computational neuroscience, but trying to read the philosophy of mathematics is like trying to read Egyptian hieroglyphics. And not just because they tend to express everything using esoteric mathematical-logical notations.
If you have any suggestions for books that get at some of the above topics in a tractable way (Kitcher is readable, but I realize way out on the fringe), I'd love to hear them. Is Penelope Maddy's book on naturalism in mathematics worth reading?
BDK,
I'm sympathetic to the idea that pure logic is (in some broad sense) axiomatic. Unfortunately, once we cross the magic line from first-order to second-order logic and add appropriate vocabulary for representing mathematical operators, we have a system powerful enough to represent elementary mathematics. So far, no one has come up with a compelling proof of the consistency of that system (though you can do it using transfinite induction). So there are truths of the system (and they are truths) that are not syntactic consequences of the axioms -- that is, it is incomplete. We can't identify the semantic consequences of the axioms with their syntactic consequences. Gödel strikes again! It isn't a challenge to the necessary status of the laws of logic, but it's humbling.
That said, I'm inclined to the view that the systems of logic and mathematics are epistemically and perhaps even ontologically prior to those of physical science. This is compatible with the observation that they've grown up -- mathematics especially -- in close proximity to the sciences and that questions in the sciences have presented the motive for developing various branches of mathematics. Unlike F=ma, the theorems of mathematics are not vulnerable to empirical disconfirmation. The most that can be said is that as science evolves, some branches of mathematics (e.g. complex analysis) are more useful than others (e.g. real analysis) that used to be the standard tools in that scientific field.
Good books: I'd strongly recommend Stewart Shapiro's book Thinking About Mathematics, which gives a clear and accessible overview of a wide range of positions, Maddy's included. I also recommend James Robert Brown's iconoclastic book Philosophy of Mathematics. As a naturalist you'll find yourself uncomfortable with a lot of what he says, but it sounds like you're not the sort of person to run away from something you see as a challenge for your own views, so it's definitely worth a look.
Thanks for the references, Tim. I'll give them a read.
You said,
So far, no one has come up with a compelling proof of the consistency of that system
I thought Godel showed that you can't prove any system is consistent if it's powerful enough to 'contain' arithmetic (using only the resources of that system). You could create another system in which the first is provably consistent, but then you can't prove the consistency of the expanded system.
But, if you assume the system is consistent, then you get the true (but not syntactically provable) proposition. (These are his first and second theorems from his main paper, right?).
At any rate, I was not arguing for logicism. I was arguing that what makes logic special is the lack of special terms in its vocabulary (e.g., voltage, current, mass), and the desire to accurately capture the truth-preserving operations that work in all of the special sciences. I don't think this implies that logic isn't empirical, but it does imply that logic is much more general than particular claims in sciences. This is probably because it is constructed precisely to work well for the symbol systems we publicly use to express particular laws.
Hi BDK,
I wrote a long post in response to your question but lost it when I tried to submit it. Here’s another try.
You quote me, speaking of elementary mathematics:
So far, no one has come up with a compelling proof of the consistency of that system
Then you ask:
I thought Godel showed that you can't prove any system is consistent if it's powerful enough to 'contain' arithmetic (using only the resources of that system). You could create another system in which the first is provably consistent, but then you can't prove the consistency of the expanded system.
This is a common way of representing it, but the truth is more complicated.
First, the phrase “using only the resources of that system” needs to be finessed. It’s not of any great interest to show that you can prove, by the use of some formal system S, that S is consistent -- that is, to show that a sentence interpretable as “S is consistent” is derivable within S. Why? Because if S is inconsistent and has the ordinary logical apparatus, then every sentence is derivable within S, including the one interpretable as stating the system’s consistency. So the fact that such a sentence is derivable within S tells us nothing about whether S is actually consistent. So as a rule, for a system of this order of complexity, we’re always looking to prove consistency using resources that are epistemically more transparent than the resources of S itself.
Second, the phrase “expanded system” is tricky. What people are usually thinking of when they describe it the way you did is simple extension. Axiomatize first order arithmetic and call it A. Derive a Gödel sentence for A, and add that sentence to the axioms, calling the new system A+. This system is an extension of A in the simple sense. And A+ will itself be subject to the same procedure all over again. For such extensions, what you said is unproblematically true.
But this isn’t the only or the most interesting sort of case. Gentzen’s proof (“Die Widerspruchsfreiheit der reinen Zahlentheorie” [“The consistency of pure number theory”], Mathematische Annalen 112 (1936): 493-565) uses a pretty lean basis of primitive recursive arithmetic plus quantifier-free transfinite induction up to ε0. The system he uses is neither strictly stronger nor strictly weaker than ordinary first-order arithmetic, but it isn’t equivalent either: each system has some theorems that the other lacks. So it would be a mistake to say that it’s an extension of A in the same sense that A+ is an extension of A. As far as I know, no one has developed a parallel proof of the unprovability of consistency for Gentzen’s system.
The trouble is that transfinite induction, while used frequently by mathematicians, isn’t epistemically pellucid. Transfinite induction is often claimed to be non-finitistic, in the sense that it cannot be reduced to ordinary induction. While this is true of transfinite induction up to an arbitrary ordinal, it isn’t true for every particular ordinal: transfinite induction up to ω^ω is reducible to ordinary mathematical induction, and indeed transfinite induction up to ε0 is reducible to ordinary mathematical induction, but in the latter case the reduction cannot be carried out in ordinary arithmetic. In fact, ε0 is the first transfinite ordinal for which the reduction can’t be carried out in ordinary arithmetic. For this reason, Gentzen’s proof is often considered to be non-finitistic.
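For reference, ε0 here is the limit of the tower of ω-exponentials, and the least ordinal α satisfying ω^α = α:

$$
\varepsilon_0 \;=\; \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}, \qquad \omega^{\varepsilon_0} = \varepsilon_0 .
$$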
Here’s how Nagel and Newman sum it up in their book Gödel’s Proof:
[T]he prospect for finding for every deductive system (and, in particular, for a system in which the whole of arithmetic can be expressed) an absolute proof of consistency that satisfies the finitistic requirements of Hilbert’s proposal, though not logically impossible, is most unlikely.
And in a footnote here they elaborate:
The possibility of constructing a finitistic absolute proof of consistency for arithmetic is not excluded by Gödel’s results. Gödel showed that no such proof is possible that can be represented within arithmetic. His argument does not eliminate the possibility of strictly finitistic proofs that cannot be represented within arithmetic. But no one today appears to have a clear idea of what a finitistic proof would be like that is not capable of formulation within arithmetic.
Interesting: I hadn't heard of Gentzen's work.
Just to add to the noise, at Mathworld, they say:
Gerhard Gentzen showed that the consistency and completeness of arithmetic can be proved if transfinite induction is used. However, this approach does not allow proof of the consistency of all mathematics.