I was looking at some exchanges I had with skeptics on Debunking Christianity, and one aspect of my views that was difficult to get across to them was the idea that, without attributing blatant irrationality to anyone, we can allow that different people will assess the antecedent probability of something like the Resurrection of Jesus in different ways. These people are accustomed to working in scientific contexts where Bayes' theorem is used as a forecasting tool, and I take it that in those contexts there are frequencies thought to determine the antecedent probability of an event. We can look at how frequently something has happened in the past, and we can determine how likely it is to occur in the future. There is therefore a single, determinable answer as to how likely something is to occur.
However, to do this, you have to subsume events within a reference class and ask how likely that type of event is to occur. In the case of historical events, though, all of them are in at least one sense completely unique. How frequent are Kennedy assassinations? The man could only be assassinated once. So, we receive a report that Kennedy was assassinated. We could argue that since the event was unprecedented, its antecedent probability was zero, while the probability of a false newspaper report is considerably higher than zero. Therefore, we ought to disbelieve the report and assume that it was erroneous.
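The reasoning in the report example can be made explicit with Bayes' theorem. The sketch below is illustrative only; the reliability figures are assumptions I have supplied, not measurements of anything.

```python
def posterior(prior, p_report_given_true, p_report_given_false):
    """Posterior probability that an event occurred, given a report of it.

    A straightforward application of Bayes' theorem for a binary hypothesis:
    P(event | report) = P(report | event) * P(event) / P(report).
    """
    numerator = prior * p_report_given_true
    denominator = numerator + (1 - prior) * p_report_given_false
    return numerator / denominator if denominator else 0.0

# If the antecedent probability is taken to be exactly zero, no report,
# however reliable, can move it:
print(posterior(0.0, 0.99, 0.01))   # 0.0

# With even a small nonzero prior, the same report carries real weight
# (here the 1-in-100 prior exactly balances the 1-in-100 false-report rate):
print(posterior(0.01, 0.99, 0.01))  # 0.5
```

So the entire dispute gets pushed back into the choice of prior, which is exactly where the reference-class problem bites.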
On the other hand, political leaders are assassinated from time to time, so if we subsume the Kennedy assassination into the reference class of assassinations of political leaders, it becomes considerably less improbable. If we subsume it under the category of assassinated Presidents, we know that of the 34 Presidents who preceded JFK, three of them were killed by an assassin's bullet.
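The reference-class point can be put numerically. A finite frequentist's prior is just a relative frequency, so the number you get depends entirely on which class you count over. This sketch uses only the figures from the paragraph above (three assassinations among the 34 Presidents preceding JFK, and zero prior Kennedy assassinations):

```python
def frequency_prior(occurrences, opportunities):
    """Finite-frequentist 'prior': relative frequency within a chosen reference class.

    Returns None when the class is empty -- the single-case problem in miniature.
    """
    if opportunities == 0:
        return None
    return occurrences / opportunities

# Reference class: Presidents preceding JFK (3 assassinated out of 34)
print(frequency_prior(3, 34))   # roughly 0.088

# Reference class: prior assassinations of Kennedy himself (none ever observed)
print(frequency_prior(0, 0))    # None -- there is no frequency to read off
```

Same event, two reference classes, and the "objective" prior swings from about nine percent to undefined. Nothing in the frequency data itself tells you which class is the right one.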
Extraordinary claims require extraordinary evidence. OK, can we measure the extraordinariness of the Kennedy assassination? How?
This is from the linked Stanford Encyclopedia essay on interpretations of probability:
Finite frequentism gives an operational definition of probability, and its problems begin there. For example, just as we want to allow that our thermometers could be ill-calibrated, and could thus give misleading measurements of temperature, so we want to allow that our ‘measurements’ of probabilities via frequencies could be misleading, as when a fair coin lands heads 9 out of 10 times. More than that, it seems to be built into the very notion of probability that such misleading results can arise. Indeed, in many cases, misleading results are guaranteed. Starting with a degenerate case: according to the finite frequentist, a coin that is never tossed, and that thus yields no actual outcomes whatsoever, lacks a probability for heads altogether; yet a coin that is never measured does not thereby lack a diameter. Perhaps even more troubling, a coin that is tossed exactly once yields a relative frequency of heads of either 0 or 1, whatever its bias. Famous enough to merit a name of its own, this is the so-called ‘problem of the single case’. In fact, many events are most naturally regarded as not merely unrepeated, but in a strong sense unrepeatable — the 2000 presidential election, the final game of the 2001 NBA play-offs, the Civil War, Kennedy's assassination, certain events in the very early history of the universe. Nonetheless, it seems natural to think of non-extreme probabilities attaching to some, and perhaps all, of them. Worse still, some cosmologists regard it as a genuinely chancy matter whether our universe is open or closed (apparently certain quantum fluctuations could, in principle, tip it one way or the other), yet whatever it is, it is ‘single-case’ in the strongest possible sense.
So, if we can't measure the extraordinariness of the Kennedy assassination, how can we measure the extraordinariness of the Resurrection?