Saturday, July 28, 2007

A metamodel for philosophical arguments

I would like some responses to this metamodel for philosophical arguments. This is part of a paper I am writing for the Blackwell Companion to Natural Theology on, you guessed it, the argument from reason.
Before launching into the discussion of the argument from reason, some preamble about what philosophical arguments can be expected to do is in order. To do this we must consider the scope and limits of arguments. The most one can hope for, in presenting an argument, is that it will be a decisive argument in favor of one's conclusion. A decisive argument is an argument so strong that all inquirers ought to embrace its conclusion. Even when a decisive argument is present, some may remain unpersuaded, but these are cases of irrationality.
The difficulty here is that by this standard very few philosophical arguments can possibly succeed. This is largely because in assessing the question of, say, whether God exists, numerous considerations are relevant. Since we can concentrate on only one argument at a time, it is easy to get “tunnel vision” and consider only the piece of evidence that is advanced by the argument. But a person weighing the truth of theism must consider the total evidence. So I propose to advance a different concept of what an argument can do. I will assume, for the sake of argument, that people will differ in their initial estimates of the probability that God exists. The question I will then pose is whether the phenomenon picked out by the argument makes theism more likely, or makes atheism more likely. If it makes theism more likely to be true than it would otherwise be before we started thinking about the phenomenon in question, then the argument carries some weight in support of theism. If it makes atheism more likely, then it provides inductive support for atheism.
The model I am proposing is a Bayesian model with a subjectivist theory of prior probabilities. We begin by asking ourselves how likely we thought theism was before we started thinking about the argument in question. We then ask how likely the phenomenon is to exist given the hypothesis of theism. We then ask how likely the phenomenon is to exist whether or not theism is true. If the phenomenon is more likely to exist given theism than it is to exist whether or not theism is true, then the argument carries some inductive weight in favor of theism.
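A minimal numerical sketch (in Python, with made-up numbers purely for illustration) shows how the criterion behaves: by Bayes' theorem, the phenomenon R raises the probability of theism T exactly when P(R|T) exceeds P(R), whatever prior one walks in with.

```python
# Minimal sketch of the confirmation criterion. All numbers are
# hypothetical stipulations, chosen only to illustrate the model.

def posterior(prior_t, p_r_given_t, p_r_given_not_t):
    """Bayes' theorem: P(T|R) = P(R|T) * P(T) / P(R)."""
    p_r = p_r_given_t * prior_t + p_r_given_not_t * (1 - prior_t)
    return p_r_given_t * prior_t / p_r

# Two inquirers with different subjective priors for theism weigh the
# same phenomenon R, with stipulated P(R|T) = 0.8 and P(R|not-T) = 0.4.
for prior in (0.2, 0.7):
    post = posterior(prior, 0.8, 0.4)
    print(f"prior P(T) = {prior:.2f} -> posterior P(T|R) = {post:.2f}")

# Output: 0.20 -> 0.33 and 0.70 -> 0.82. The phenomenon raises each
# inquirer's probability for theism without forcing them to agree.
```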
It should be added that one can be an atheist and admit that there are some facts in the world that confirm theism. You can also be a theist and maintain that some atheistic arguments enhance the epistemic status of atheism. Some theists have made just this sort of claim on behalf of the argument from evil. That is, they are prepared to concede that the argument from evil does provide some epistemic support for atheism, but not enough epistemic support to make atheists out of them.

18 comments:

Anonymous said...

'It should be added that one can be an atheist and admit that there are some facts in the world that confirm theism. '

What is a confirming fact?

If I believe that nobody in the world is over 6 feet tall, and I walk out my door, and the first 3 people I see are 5 foot 10, 5 foot 11 and 5 foot 11.5, do those observations confirm my hypothesis that nobody is over 6 feet tall?

If I calculate that it is physically impossible for a human being to grow to 8 feet tall, and I come across a 15-year-old person who is 7 feet 11, should I recheck my calculations or regard him as a confirming instance of my theory that nobody can grow to 8 feet tall?

If an observation is consistent with an hypothesis, should everybody be persuaded that that observation *confirms* the hypothesis?

Consistency is surely not confirmation.

Johnny-Dee said...

This sounds much like Swinburne's approach except, of course, for your mentioning of subjective prior probabilities (why do you need to go that route--I think objective prior probabilities would work better). As I read it, your suggestion is that not all theistic arguments need to be C-inductive arguments. For an argument to be successful, it need only work as a P-inductive argument. Do you think that would be a fair characterization?

Victor Reppert said...

Yes, I think that terminology is good. The subjectivism about priors is significant, however, in that it is part of my way of dealing with the fact that intelligent, reasonable-seeming people come to opposite conclusions about religious belief. How do you explain that? People "come in" to the question of religion from different angles, and one approach would be to fix the "angles" from which they come in and argue that they ought to come at it from the correct priors. This is akin to the idea that some side or other possesses the objectively correct "burden of proof." The other approach is to acknowledge that intellectual predispositions are bound to differ based on one's personal experience, and to argue from wherever people are.

Has anyone solved the problem of the single case? Can anyone tell me how antecedently probable or improbable the resurrection of Jesus is? Some people think you need extraordinary evidence for it, others don't. http://www.infidels.org/library/modern/victor_reppert/miracles.html

Johnny-Dee said...

Vic, I think the problem of assigning objective prior probabilities is a very difficult one, and I'm not confident in declaring it to be "solved." However, I am inclined to accept the way Richard Swinburne proceeds in these matters--priors can be assessed in an objective fashion by considering certain features of a hypothesis such as its simplicity and explanatory scope. After all, not all prior probability assignments are equally rational, and one basis for making sense of that is to use features like simplicity and explanatory scope to show why one hypothesis is more likely to be true than another (prior to examining evidence). For example, if you see one set of footprints in the sand, we tend to think that the most rational hypothesis is that one person made those tracks. Of course, there are other explanations, such as that two people walked exactly in the same spot or that a cow wearing shoes walked through the sand, but these other hypotheses are much more complicated and thus more likely to be false (than our simpler one). Of course, in light of more evidence, these other hypotheses could quickly beat out the simpler one. It seems wrong to think that the differences in these prior probabilities are merely a matter of one's subjective opinion.
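To put toy numbers on the footprint case (every figure below is sheer stipulation, offered only to exhibit the structure): simplicity fixes the priors, and later evidence can still overturn the simplest hypothesis.

```python
# Toy Bayesian version of the footprint case; every number is stipulated.
# Simplicity-based priors: the simpler hypothesis starts out more probable.
priors = {"one walker": 0.90, "two walkers in step": 0.09, "shod cow": 0.01}

# Stipulated likelihoods of some new evidence E (say, a second, slightly
# offset line of prints turns up) under each hypothesis.
likelihoods = {"one walker": 0.05, "two walkers in step": 0.60, "shod cow": 0.05}

p_e = sum(priors[h] * likelihoods[h] for h in priors)  # total probability
for h in priors:
    post = priors[h] * likelihoods[h] / p_e  # Bayesian updating
    print(f"{h}: prior {priors[h]:.2f} -> posterior {post:.2f}")

# "one walker" falls from 0.90 to about 0.45, while "two walkers in step"
# climbs from 0.09 to about 0.54: evidence can beat out the simpler prior.
```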

So, how do I explain how different people--including very reasonable people--have such different assessments of the reasonableness of the existence of God? I would agree that part of the problem is that people are working with different assessments of the intrinsic probability that God exists. But the differences of opinion show that some people are correctly assigning prior probabilities and others are doing so incorrectly. I think another reason why smart people disagree about the reasonableness of theism is that they disagree about the significance of the evidence for and against theism. The seriousness of making a cumulative case using Bayesian updating is rarely appreciated, but it is gaining more attention.

One final thought (at least for this comment)--I would recommend Swinburne's book, The Resurrection of God Incarnate, as a place to begin looking at a way to assess a prior probability for something like the Resurrection. It certainly isn't easy, but I think it is something we can get an approximate value on.

exapologist said...

I think you two are right that Swinburne's inductive approach is the way to go (a ticky-tacky point: doesn't Swinburne construe C-inductive arguments as weaker than P-inductive arguments, as the former merely raise the probability of a hypothesis, while the latter make it more probable than not? I suspect that it was just a typo).

One question: suppose we grant Swinburne's objectivist construal of prior probabilities, where simplicity is a primary determinant. Now recall David Lewis' remarks about quantitative parsimony (postulating fewer entities) and qualitative parsimony (postulating fewer *kinds* of entities); specifically, about how it's not clear which sort of parsimony is more important. My worry is that David Lewis is right; if so, then even granting an objectivist construal of prior probability, and pending a principled reply to Lewis, we may have no way of determining which, of a range of hypotheses, has the higher prior probability -- at least in a number of interesting cases. What do you guys think?

Regards,

EA

Victor Reppert said...

I should perhaps mention in presenting this metamodel what Monty Python said about Camelot: It's only a model. And what I am trying to do is evaluate the arguments themselves. It's also an attempt to dodge the endless and tiresome "burden of proof" battles over theistic and antitheistic arguments. I guess I can do that either by being a subjectivist about priors or by "bracketing" the question of priors.

Most philosophers of religion that I know that are not card-carrying Swinburneans consider Swinburne's use of simplicity to be one of the weak points of his philosophy of religion. Parsons' God and the Burden of Proof has a discussion of this issue with respect to Swinburne's inductive cosmological argument. It would be interesting to see if anyone has responded to Parsons on Swinburne's behalf.

Brandon said...

One of the (many) problems I have with Bayesianism involving subjective priors being used this way in any field is that it is less a model of the success of arguments than a model of consistent belief revision given prior assessments. That is, all it does is say what would be required for consistency in a person's willingness to bet on theism. This seems to go to the opposite extreme from the compulsion model you set aside; that model accepts only arguments it would be a contradiction to reject, whereas this subjective Bayesian model doesn't reject anything that involves consistent changes in our willingness to affirm the position. Surely there is a middle ground, where we recognize non-demonstrative arguments but require them to do more than yield a consistent willingness-to-bet-on-H given our assumptions.

Jason Pratt said...

Anon: I very much sympathize with your problem, but it helps to know that analysts of this sort are supposed to be using a special definition for 'confirmation' (and for 'defeater', too, as it happens.) They don't mean deductive confirmation, they mean inductive confirmation. Think of it as being the same as the old meaning of confirmation: with firming. {s} Any firming at all, no matter how slight, involves a proportional degree of with-firming. Any evidence weighing in favor of the truth of the hypothesis, is 'confirming' evidence, in this sense. The danger, though, is that there will be a temptation to slip over into that stronger meaning of confirmation (or defeater, for that matter). I wish different terms would be used.

I tend to agree with Brandon's evaluation of problems in applying Bayesian models to questions like this, btw.

Regarding whether the burden of proof is on one side or another--the easiest way I can think of to end the dodging is to just accept the burden of proof for one's position. {s} (But I've written about that before.)

Regarding Exap's question: I would go so far as to say the problem may be even more fundamental than that. But I think the crit is a good one (and I nod at Victor's own reply, that Swinburne's principle of simplicity has some problems with practical application, at least "in a number of interesting cases" as Exap puts it).

Regarding Victor's original post, on the metamodel:

{{We then ask how likely the phenomenon is to exist given the hypothesis of theism. We then ask how likely the phenomenon is to exist whether or not theism is true. If the phenomenon is more likely to exist given theism than it is to exist whether or not theism is true, then the argument carries some inductive weight in favor of theism.}}

Okay, this is where I start having major-serious problems with the model. (I may have problems earlier, too, but this is where I think it completely falls apart.)

My problem is that the model requires us, here, to compare two disparate kinds of likelihood estimate to each other. (In passing, the single-case problem demonstrates why we really ought to be talking in terms of _likelihood_, not in terms of _probability_, as tempting as it may otherwise be to do so.)

My problem is not that we can’t come up with (at least intuitive, though maybe no more than intuitive!) likelihood estimates that the phenomenon exists given the truth of theism. And my problem is not that we can’t come up with (ditto) likelihood estimates that the phenomenon exists whether-or-not theism is true.

My problem is that trying to compare these two likelihood estimates requires keeping both of them effectively in play, when really one or the other is cancelling out the other one by mutual exclusion!

I don’t mean that it is impossible to compare likelihoods of a phenomenon occurring given h and given not-h. That’s fairly normal. But that isn’t what’s supposed to be happening here.

But I suspect that this is what’s being _imported_ here: P(R|T) is really being intuitively compared to P(R|notT), instead of to P(R) = P(R|T-or-notT). But it’s supposed to be compared to P(R), not to P(R|notT). (Note: R stands for the phenomenon of Reason, however that’s defined, since Victor is shooting for a Bayesian version of the AfR. T obviously stands for theism.)
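In fact the two quantities are formally linked, which may be part of why the importing is so tempting. By the law of total probability:

P(R) = P(R|T) x P(T) + P(R|notT) x P(notT)

so whenever 0 < P(T) < 1, P(R) is just a weighted average of P(R|T) and P(R|notT), and P(R|T) > P(R) holds exactly when P(R|T) > P(R|notT).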

But the moment we try actually comparing P(R|T) to P(R|T-or-notT), we’re going to run into a blank wall, because the two kinds of comparison are not commensurate with each other.

This can be demonstrated by rephrasing a bit. Try considering a likelihood estimate of Phenomenon being true whether Hypothesis is true or whether Hypothesis is not true, regardless. That may be possible, but it requires that in effect we weigh likelihoods of the phenomenon being true in _both_ cases and find that the likelihood is about the same either way. Ph would be this-likely to be true if H is true or if H is not-true, either way.

Fine. But then, we are supposed to compare this result with our estimation of the likelihood of the same Ph being true if H is true. Period.

But, we already _did_ that! We did that back when we came up with an _either way_ likelihood estimate. Didn’t we?! If we did, then it’s pointless to try to compare that result with the likelihood of what amounts to only half of the process reaching that result.

This is why I suspect (and routinely find hints of it, in practice) that the operation of likelihood comparison can only succeed in any cogent fashion by weighing likelihoods of Ph|H vs. Ph|not-H, to see which seems greater as a likelihood. The question of H is not regardless in _that_ estimation; but the question of H has been rendered regardless already in the _other_ estimation--and yet, we’re trying to sneak a regard for H back into the evaluation for comparison after all!

It seems to me, then, that the only way for it to cogently make sense as a comparison, is to compare H and not-H. And I suspect that this is what is actually happening; at the very least, there will be a strong temptation to do this, which must be avoided on pain of instantly invalidating the comparison being aimed at.

But then, there needs to be a clear concept of what an evaluation of P(R) is supposed to be that (a) isn't P(R|notH) and (b) also _doesn't_ involve already deciding that Ph will be the same whether H is true or false. Because once we do that, we've _already_ answered P(R|H) as part of P(R); which is what we were supposed to end up comparing _to_ P(R).

JRP

Sturgeon's Lawyer said...

The problem I perceive with this model is that it reverses the order of induction.

Given a phenomenon to be explained, and two competing hypotheses to explain it, one does not ask which of the two hypotheses would make the phenomenon more likely, for, whichever hypothesis is correct, the probability of the phenomenon is the same: 100%, because the phenomenon is an observed fact.

This is similar to the problem of estimating the probability of a unique event.

Anonymous said...

JASON
' Any firming at all, no matter how slight, involves a proportional degree of with-firming. Any evidence weighing in favor of the truth of the hypothesis, is 'confirming' evidence, in this sense'

So if you hypothesise that no human can ever grow to more than 8 feet tall, and walk out the door, and the first two people you see are 7 feet 11 and 7 feet 11 and one half, then that is *confirming* evidence that no human can grow to more than 8 feet tall?

Jason Pratt said...

Sadly, Anon, yes--in the merely inductive sense of 'confirming'.

Of course, everyone would agree that the first 8'1" man you wandered across would instantly disconfirm the hypothesis in a deductive fashion. (Much like the classic case of "all swans are white"--until you meet a black one.) This makes things goofier, because if one kind of evidence 'confirms' in a merely inductive fashion, but another kind of evidence disconfirms in a deductive fashion (but there's disconfirmation in the weaker inductive sense, too), then why in all bloody poo is the same term being used for two categorically different kinds of results!?

Like I said, I dearly wish the experts in the field would use another term. (Don't get me started on 'contingent' or 'rational', either. {g})

JRP

Jason Pratt said...

Sturg,

I have less problem with that. (Though I strenuously doubt it's a Bayesian application per se.) As a matter of actual fact, we are frequently presented with fait accompli (or whatever the plural is) and then must consider among various hypotheses for the accomplished fact, which hypothesis would be more likely to have accomplished the fact. Thus we conclude (but _NOT_ deductively) it was more probably H1, and less probably H2. This should not be treated as actually _discovering_ the truth of how that fact was accomplished, though. Unless we deductively remove H2 from the option pool, it remains a viable option, even if regarded to be less probable than H1; which means it might have been the generator of the result after all.

Relatedly, I'm trying to decide whether I want to work up a complaint about some things I've found in the first chapter of Dembski's _The Design Inference_. Unless he repairs his procedure pretty quickly, he's leaving himself open to at least a strong temptation to fatally beg the question: his "Law of Small Probability" is flagrantly tautological, once his own definitions are imported into the law. This, in turn, is related to other suspicious apparent blindspots in his procedure (so far), regarding active agents and their intentions. (I don't want to be too harsh; I'm not even through with the first chapter yet. But dang... it reminds me why I've never much cared to make use of the AfD in my work...)

JRP

Anonymous said...

Jason is obviously wrong.

If I calculate that no human can grow more than 8 feet tall, and the very first human I meet is 7 feet 11.9 inches, then this obviously disconfirms my theory, even though he is less than 8 feet tall.


Unless I just happen to have seen the tallest person in the world, my theory is in trouble.

Jason Pratt said...

{{If I calculate that no human can grow more than 8 feet tall, and the very first human I meet is 7 feet 11.9 inches, then this obviously disconfirms my theory, even though he is less than 8 feet tall.}}

Ah! So, your objection is that the word set 'confirms/disconfirms' should really be used in a way completely different, not only from the way it is (most commonly) used in deduction, but also from the far weaker inductive fashion.

Thus evidence that would not deductively disconfirm your calculation, nor inductively count against your calculation (thus not inductively disconfirming it either), but would actually fit within the expectations of your calculation, should be "obviously" considered disconfirming evidence. Because observations consistent with a theory, far from being treated as confirming evidence in any fashion, should obviously be treated as disconfirming evidence.

Riiiiiight... :P

Note to local specialists: this is what happens when you insist on using the same term set for categorically different (yet still topically closely related) concepts. After a while, people become so confused they may start to apply _reverse_ meanings to the terms! Oy.

Anon--the funny thing is, you're actually falling short (literally!) of the extent of your 'example'.

If you had calculated that no human being could possibly grow more than 8 feet tall, and you walk out the door and the first person you find is 8 feet tall: then either deductively or inductively your theory has _not_ been disconfirmed, and inductively it would weigh (though disproportionately small in comparison with the expected population number) as 'confirmation'; though _NOT_ in the deductive sense of 'confirmation'.

If you then proceeded to find that the next 40 million people you ran into were 8 feet tall, you still would not be disconfirmed in your theory in any sense, and you would be proportionately confirmed in your theory instead, inductively--though still not _deductively_ confirmed in your theory.

Similarly, taking the classic example of induction (though flipping it around to better parallel your 8-foot-limit example): if you reasoned that no swan could be sheerly black, and the first swan you met afterward was anything more colored than sheerly black (using a quasi-quantifiable parallel to the 8-foot limit), then your theory would be to that proportion inductively confirmed. It wouldn't be _deductively_ confirmed unless you managed to poll all swans in natural history, past, present, and future, but the more swans you ran across which were not sheerly black, the more your theory would count as being inductively confirmed--the risk being that people would start to treat your theory as being _deductively_ confirmed, which would be a category error.

I suspect your problem is really that Victor had written "It should be added that one can be an atheist and admit that there are some facts in the world that confirm theism." Rigorously, Victor would also add (and has also added) that, inductively speaking, one can be a theist and admit that there are some facts in the world that confirm atheism. He's talking about inductive 'confirmation', not deductive 'confirmation'. So long as a result falls within an expected parameter of a theory, it counts as inductive confirmation-by-weight.

But you expect (and not without some reason!) that people who make the (weak inductive) claim Victor is talking about, will insist that "everybody should be persuaded that [one such observation] *confirms* the hypothesis". [Anon's original emphasis]

Technically yes, everyone 'should be' persuaded; but only in the extremely limited and weak inductive sense, and _not_ exclusive to other evidence similarly 'confirming' atheism. If you are worrying that this will be promoted as being a _deductive_ sense, though--I agree, and did agree, that this is a legitimate worry.

But claiming that observations clearly fitting within the expectation parameters of the theory should count as _disconfirming_ evidence, is not the way to protect against that threat.

JRP

Anonymous said...

Jason is still struggling with the idea that something can be disconfirming evidence, even if it falls within the range of prophesied results.

Suppose Jason is walking around in an earthquake, and he predicts that none of the falling masonry will hit him.

The next second, a slate whizzes past his head, and hits the ground one foot away from him.

'Aha' says Jason to his friends. 'That confirms my theory that falling debris will not hit me.'

Would he be surprised if his friends start running away from the rest of the falling masonry, even after Jason's theory had been 'inductively confirmed'?

Jason Pratt said...

Anon is still struggling with a conflation of two categorically different meanings of 'confirmation'.

No, I wouldn't be surprised if my friends started running away from the falling masonry, for at least two reasons:

(a) my 'prediction' doesn't seem to have had anything to do with whether _they_ would get hit by falling masonry (which could be easily fixed in your example, but I thought I'd point it out {g});

(b) a close miss doesn't deductively confirm my theory, and its inductive confirmation would be very small in itself. Why should they take one miss as being evidence sufficient to weigh in favor over-against the risk that I was wrong in my prediction (or theory or whatever) after all?

To which I can add,

(c) there is a helpful but irrational (and strong) instinct to get the hell away from a proximity miss, after which reasons for doing so can be rationalized at leisure. Strictly speaking, though, a miss is still a miss. My prediction was that I would never be hit, not that I would never have anything land close to me. The instinctive urge will be to conflate those two claims, though. (Which appears to be what you have done as well.)

I can speak with some practical experience in this, as it happens, because I teach swordfighting; and one of the most difficult skills to train is the skill of ignoring the instinct to react to close misses, as well as when and how to recognize that a miss will still be a miss even though close. Instead of parrying a thrust that will miss, it would be better to be doing some stabbing on target! More often, novice swordfighters (and even some expert ones!) will waste time and energy parrying an incoming blow further away than necessary to guarantee a miss, and/or will fwoof their own swordtip offline when making a parry because they instinctively _feel_ safer doing so. But they would have been just as safe creating a close miss; and would have been better assured of planting their tip on-target in the riposte they afterward attempt by creating a close miss instead of making a windshield-wiper parry.

That being said, the problem with your reasoning can also be demonstrated by extending your own example. I would not be surprised if my friends ran away after seeing only one close miss. After seeing several hundred close misses, though, I would be surprised if they weren't prepared to grant my theory more inductive credence!

That reasonable distinction demonstrates what the point is supposed to be for inductive 'confirmation'.

(All of this is entirely apart from any evaluation of my reasons for making a prediction about masonry missing me. My friends might be running away because they think my reasons for expecting no hits, are full of crap. {g})

JRP

Anonymous said...

Jason writes 'b) a close miss doesn't deductively confirm my theory, and its inductive confirmation would be very small in itself.'

He still seems to think that this close miss is inductive confirmation, however small, for the claim that masonry will not hit him.

But now he is also saying that this close miss is inductive confirmation for the claim that masonry will hit him.


So there you are.

The same data is inductive evidence for both A and not-A.

It would be simpler just to concede that if a theory predicts A, seeing A happen is not always any confirmation of the theory.

It is often a disconfirmation, as we know that observations of A will vary randomly, so seeing an A close to the limit of accepted results means that there very well might be an A over the limit.

Just as the calculation that no human being can grow to over 8 feet tall would be in trouble if the very first person we saw was 7 feet 11.5 inches tall.

Although the observation supposedly confirms the theory that nobody is over 8 feet tall, we know that it is likely that there is going to be a person taller than the very first person we see.
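A toy calculation makes the disagreement precise (the height models below are stipulations, nothing more): whether the near-limit observation confirms or disconfirms the 8-foot-limit theory turns on the likelihood ratio P(observation|H) / P(observation|not-H), and that depends on what the rival hypothesis says about tall outliers.

```python
# Toy likelihood-ratio test for the 8-foot-limit dispute. The
# distributions and parameters are stipulated for illustration only.
from math import erf, exp, pi, sqrt

def npdf(x, mu, s):  # normal probability density
    return exp(-((x - mu) ** 2) / (2 * s * s)) / (s * sqrt(2 * pi))

def ncdf(x, mu, s):  # normal cumulative distribution
    return 0.5 * (1 + erf((x - mu) / (s * sqrt(2))))

obs = 95.9  # the 7 ft 11.9 in person, in inches

# H: heights follow Normal(69, 4), hard-capped at 96 in (renormalized).
p_obs_h = npdf(obs, 69, 4) / ncdf(96, 69, 4)

# Rival 1: the same Normal(69, 4) with no cap.
# Rival 2: a wider Normal(69, 5) with no cap, so tall outliers are likelier.
for p_obs_not_h in (npdf(obs, 69, 4), npdf(obs, 69, 5)):
    ratio = p_obs_h / p_obs_not_h
    print("confirms H" if ratio > 1 else "disconfirms H", f"(ratio {ratio:.3g})")

# Against rival 1 the ratio is a hair above 1 (a whisper of confirmation,
# Jason's reading); against rival 2 it is tiny (disconfirmation, Anon's).
```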

Jason Pratt said...

Sorry, the post ran off the bottom before I realized a comment had been made again.

{{But now he is also saying that this close miss is inductive confirmation for the claim that masonry will hit him.}}

Nope. I said what I wrote, which was very consistent with what I had already been saying. The fact that a close miss doesn’t deductively confirm my theory, is not remotely the same as a close miss being inductive disconfirmation for the claim that the masonry will not hit me--although as it happens it _would also_ be inductive confirmation of a distinctly different hypothesis, depending on what the terms were. More on that in a minute.

Now, if you had set up a compound claim, to the effect that I would not only never be hit by masonry, but the masonry would not even get close, _then_ a close miss would be disconfirmation of the claim, insofar as the claim is found to be intrinsically compound. (If one element can be removed without reducing the other element to nonsense, then the data might be disconfirmation for one element but not the other one--which in this example would be the case. On the other hand, if I insisted on the two elements being compound, then my whole claim per se would be disconfirmed, depending on my reasons for insisting on the two elements being compounded.)

Such a close miss would even be deductive disconfirmation, depending on how precisely ‘not even close’ was being claimed. If it was being loosely claimed, then the close miss might still be considered inductive disconfirmation for a person depending on whether a person perceived the miss as being close or not; but even then I would probably call it deductive disconfirmation.

But hit/not-hit is a binary mutual exclusion set; and a close miss is not a hit. It fits within confirmation, not disconfirmation--not without further qualification to the claim. So again, had the claim been something like ‘the spread of misses will be like a bell curve’, then the first close miss wouldn’t strictly be disconfirmation of that claim, rigorously speaking, but an increasing number of close misses in any non-bell distribution would be disconfirmation of the bell-shape part of the claim anyway--just as a similar disproportionate bunch of far misses would be!

But you didn’t specify any of that in your example; so it remains hit or miss. A miss is a miss, and a close miss is still a miss. An infinite number of close misses would all still be misses.

A small ‘weight’ of confirmation isn’t the same as disconfirmation. It’s just a small weight of confirmation. One close miss is only a little confirmation. It is also, as it happens, confirmation for a competing hypothesis, which my friends running away may be holding: most falling masonry will hit far, some will hit close, and some are likely to hit me. Inductive confirmation isn’t necessarily mutually exclusive among distinct hypotheses. Since my friends aren’t watching a bunch of masonry falling _only_ way over there, the data fits their own hypothesis just as much as it does mine; in which case we’re back to an intrinsic weighing of hypotheses themselves, not of data in relation to hypotheses. After numerous close misses, though, my friends might decide to reevaluate the weight of likelihood between hypotheses, even though until the end of the data stream (no more falling masonry) they won’t be in a position to call the results disconfirmation of their own tacit claim. (And even then, it would only be inductive disconfirmation, not deductive, I think.)
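Toy numbers (sheer stipulation again) make the point: let three exhaustive hypotheses each start with prior 1/3, and let the close-miss datum E have likelihoods P(E|A) = 0.5, P(E|A’) = 0.4, and P(E|A’’) = 0.0, where A’’ is, say, ‘all masonry lands far away’. Then P(E) = 0.3, so E confirms _both_ A and A’ at once; the weight it shifts comes out of the rival that predicted E badly, not out of either hypothesis it fits.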

Had you paid better attention, you wouldn’t have been thinking that a small weight of confirmation meant disconfirmation. It only means a small weight of confirmation, no less and no more. This is why I gave as an extension of the example, the case where bunches of masonry kept falling around me but never touching me. All those close misses add to a rational (though intuitive) evaluation of the likelihood of my claim being true, especially once it is perceived that somewhere in the misses there likely should have been a hit!

On the other hand, there may be a deductive constraint involved, if the evaluator considers it to be an impossibility that I could have rightly made that claim, on other grounds. e.g. if I had said that God told me no masonry would hit me, but the evaluator believed God to not possibly exist, then he couldn’t have believed my claim anyway. He might, maybe, be persuaded to reevaluate his reasons for thinking God’s existence to be impossible, after the masonry has finished falling and there were no hits--especially if he evaluates that in a bunch of close misses there should likely have been some hits--but if he rechecks his reasoning and perceives it to still be valid, then reasonably (whether his perception is correct or not) he would still not accept my claim and chalk up the explanation to something else.

{{So there you are.

The same data is inductive evidence for both A and not-A.}}

No, the same data happens to be inductive evidence for A and A’. The not-A for ‘no masonry will hit me’ is ‘masonry will hit me’ without a qualifier, and so far as a miss goes it fits A and not not-A. A’ can complexly include not-A among other things, and a close miss will tend to confirm the complex expectation the same way it would tend to confirm my (given per your example) simpler expectation. If there are no masonry hits at the end of the earthquake, though, and not-A was included as a certainty in A’, then A’ in its given complexity would be deductively disconfirmed (though other elements might be extracted out of it, especially where not dependent on not-A, for A1’, and _that_ might be confirmed as a hypothesis as much as my own claim, other things being equal.)

{{It would be simpler just to concede that if a theory predicts A, seeing A happen is not always any confirmation of the theory.}}

It would be simpler but wrong, insofar as inductive confirmation goes. You’re still thinking in terms of deductive confirmation.

Which, going back to my original complaint IN YOUR FAVOR (which you seem to be conveniently ignoring), is why I would prefer if the professionals would stop using ‘confirmation’ for inductive purposes and keep it for deductive purposes. _That_ would be the simpler thing to do.

{{It is often a disconfirmation, as we know that observations of A will vary randomly, so seeing an A close to the limit of accepted results means that there very well might be an A over the limit.}}

You’re confusing confirmation of multiple hypotheses with disconfirmation of a hypothesis. They aren’t necessarily the same thing.

JRP