[Image: Portrait of Thomas Bayes, courtesy of Wikipedia]
While I disagree with almost everything Professor Larry Moran wrote in reply to my post, Is Larry Moran a conspiracy theorist?, he did at least ask a good question: what counts as evidence? In his latest post, he forthrightly declares:
I don’t know how to define “valid evidence” and I doubt very much if there’s anyone else who can offer a rigorous definition.
This post of mine is an attempt at such a definition.
Let’s begin with “valid evidence,” and defer for the time being the question of what constitutes good evidence. The question of what counts as valid evidence for a hypothesis was answered over 250 years ago, by the English statistician and clergyman Thomas Bayes (pictured above, courtesy of Wikipedia). Broadly speaking, we can define something as valid evidence (E) for a hypothesis (H) if it renders H more probable. That is, E is valid evidence for H if the probability of H given E is higher than the prior probability of H before E is observed. The formula below expresses this very neatly:
P(H|E) = [P(E|H) / P(E)] × P(H)
where P(H|E) represents the probability of H given E, the quotient
P(E|H) / P(E)
expresses the impact of E on the probability of H, and P(H) stands for the prior probability of H, before E is observed. The quotient can be regarded as the level of support which E provides for H. So we can say that if the level of support is greater than 1 – or in other words, if the probability of the hypothesis increases in the light of evidence E – then E is valid evidence for H. It may not be good evidence, but it is at least valid evidence.
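For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the definition just given. It is purely illustrative: the function names and the numbers plugged in at the bottom are my own hypothetical choices, not anything taken from Bayes or from Professor Moran’s post.

```python
def posterior(prior_h, p_e_given_h, p_e):
    """Bayes' theorem: P(H|E) = [P(E|H) / P(E)] * P(H)."""
    return (p_e_given_h / p_e) * prior_h

def is_valid_evidence(p_e_given_h, p_e):
    """E is valid evidence for H if the support ratio P(E|H) / P(E) exceeds 1,
    i.e. if observing E raises the probability of H above its prior."""
    return (p_e_given_h / p_e) > 1

# Hypothetical numbers: the prior is 10%, and E is twice as likely to be
# observed if H is true (0.6) as it is overall (0.3).
prior = 0.10
print(posterior(prior, p_e_given_h=0.6, p_e=0.3))   # 0.2 -- the probability of H doubles
print(is_valid_evidence(p_e_given_h=0.6, p_e=0.3))  # True -- support ratio is 2, greater than 1
```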
All right. But how do we define good evidence? That’s a trickier question. Before putting forward my answer, I’d like to make a few points.
First, evidence isn’t the same thing as proof. This should be so obvious that I shouldn’t have to point it out. However, one often hears skeptics asking believers in the supernatural or paranormal: “How can you be sure you’re not mistaken?” The short answer is that we can’t be absolutely sure. So what? In real life, very few things are absolutely sure, but we still make decisions on the basis of where the totality of the evidence points.
Second, good evidence for a hypothesis must render that hypothesis reasonably probable, in absolute terms. I won’t attempt to provide a precise definition for “reasonably probable, in absolute terms” (10%? 30%? 50%?), but I think we would all agree that 1% is not “reasonably probable.” To illustrate my point, let’s suppose that the prior probability of a hypothesis H is very low: 0.0001%, or 1 in 1,000,000. However, after new evidence E becomes available, the probability of H given E shoots up to 0.01%, or 1 in 10,000. In other words, the new evidence renders the hypothesis 100 times more likely to be true than it was previously judged to be. That’s a very high level of support, but even after we take the new evidence into consideration, the probability of the hypothesis is still very low in absolute terms: only 1 in 10,000. I certainly wouldn’t call that good evidence. The moral of the story is that a high degree of support for a hypothesis does not necessarily constitute good evidence for that hypothesis.
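As a quick worked check of the numbers in the previous paragraph (a sketch only, using the same hypothetical figures), the support factor is large but the posterior probability remains tiny in absolute terms:

```python
prior = 1e-6            # 0.0001%, or 1 in 1,000,000
support = 100           # the new evidence makes H 100 times more probable
posterior = prior * support
print(posterior)        # 1e-4, i.e. 0.01%, or 1 in 10,000 -- still very improbable
```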
Third, good evidence for a hypothesis must provide a high level of support for that hypothesis, in addition to making it reasonably probable in absolute terms. To see why, let’s consider two pieces of evidence for a hypothesis. Before either piece of evidence becomes available, the prior probability of the hypothesis is rated at just 10%. The first piece of evidence raises the probability of that hypothesis from 10% to 50% – an increase of 40 percentage points. The second piece of evidence raises the probability of the hypothesis from 50% to 90% – also an increase of 40 percentage points. Which piece of evidence is better? I’d say the first, because it renders the hypothesis five times more probable than it was previously, whereas the second piece of evidence doesn’t even double the probability of the hypothesis.
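The same comparison can be expressed numerically (again, just a sketch of the example above): measuring support as the ratio of posterior to prior, rather than as the size of the jump in percentage points, is what makes the first piece of evidence come out ahead.

```python
# Support measured as the ratio of posterior probability to prior probability.
first_ratio  = 0.50 / 0.10   # 5.0 -- the hypothesis becomes five times more probable
second_ratio = 0.90 / 0.50   # 1.8 -- less than a doubling
print(first_ratio, second_ratio)
```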
Fourth, whenever we evaluate evidence, we need two or more competing hypotheses to evaluate it against. Thus when assessing evidence for a hypothesis, we need to ask not only how much it strengthens that hypothesis, but also to what degree it strengthens (or weakens) rival hypotheses. This is important, because a piece of evidence might be compatible with two different hypotheses, and might therefore strengthen both. The point I’m making here is that when deciding whether we need to revise our views about a hypothesis in the light of new evidence, we also need to look at the level of support that evidence provides for other hypotheses. In statistical jargon, the likelihood ratio is what determines the effect of new evidence on the odds of one hypothesis, relative to another.
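In the odds form of Bayes’ theorem, the posterior odds of hypothesis A against rival B equal the prior odds multiplied by the likelihood ratio P(E|A) / P(E|B). Here is a minimal sketch, with hypothetical numbers of my own choosing:

```python
def posterior_odds(prior_odds, p_e_given_a, p_e_given_b):
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_e_given_a / p_e_given_b
    return prior_odds * likelihood_ratio

# Hypothetical: A starts at odds of 1 to 4 against B, and the new evidence E
# is three times as likely under A as under B.
print(posterior_odds(prior_odds=0.25, p_e_given_a=0.9, p_e_given_b=0.3))  # 0.75, i.e. odds of 3 to 4
```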
Fifth, when evaluating a hypothesis, we need to compare it with its most plausible rivals. It would be grossly unfair if I were to argue that because evidence E provides strong confirmational support for hypothesis A over rival hypothesis B, we should therefore adopt hypothesis A, without even considering the much more plausible hypothesis C. That would be intellectually dishonest. What this principle also entails, however, is that we can safely ignore rival hypotheses which are wildly implausible and which receive little or no confirmational support from the new evidence. Skeptics who insist that we can never have enough evidence for the supernatural because there might always be some unknown naturalistic hypothesis out there somewhere that can account for the same evidence, are therefore being unreasonably stubborn. As a general rule of thumb, I would suggest that when evaluating the likelihood of paranormal or supernatural claims, we should confine our attention to the top half-dozen or so naturalistic rival hypotheses. If these all turn out to be duds, then it’s prudent to conclude (provisionally) that a naturalistic explanation isn’t available.
Sixth, when considering outlandish hypotheses (be they UFO abductions or miracles), we need to be able to quantify their improbability in advance, before we start looking at the evidence for these hypotheses. To do otherwise is intellectually dishonest. For instance, let’s suppose that John Smith says he’d believe in UFO abductions if he could actually film one on videocamera, and then lo and behold, one takes place in front of him while his videocamera is rolling. For a moment, Smith considers revising his belief that UFO abductions never happen, but then he recalls Sagan’s dictum that extraordinary claims require extraordinary evidence, and decides that his video evidence isn’t extraordinary enough: after all, he might have been under hypnosis while witnessing the alleged abduction, and some prankster might have mischievously slipped a fake video into his videocamera. Smith’s fatal error here was that he didn’t attempt to quantify the prior probability of a UFO abduction before recording the event. If he had, he would have been able to resolve his epistemic dilemma: should he revise his beliefs after recording the abduction on video, or shouldn’t he?
Finally, no general hypothesis positing the existence of occult or supernatural agents should be assigned a prior probability of less than 1 in 10^120 (where 10^120 is a 1 followed by 120 zeroes). This fraction can be considered as the “floor probability” for bizarre hypotheses of a general nature. Why? Because 10^120 has been calculated by Seth Lloyd as the number of base-level events (or elementary bit-operations) that have taken place in the history of the observable universe. Each non-bizarre (or “normal”) event can be considered as prima facie evidence against any general hypothesis appealing to occult or supernatural agents, and since the number of “normal” events occurring during the history of the observable universe is limited, the cumulative weight of the prima facie evidence against paranormal or supernatural phenomena is also limited. Using Laplace’s rule of succession (the rule behind his famous sunrise argument), we can say that given a very large number N of normal events and no abnormal events, the prior probability we should assign to the proposition that the next event we observe will be an abnormal one is approximately 1/N, or in this case, 1 divided by 10^120. (Please note that I’m talking only about general hypotheses here: a more specific hypothesis, such as a madcap alien abduction scheme launched by water-people from the planet Woo-woo, will of course have a much lower antecedent probability than the general hypothesis that there are aliens of some sort, out there somewhere, who occasionally abduct humans; consequently the evidence required to establish the former hypothesis will have to be correspondingly stronger than the evidence required for the latter.)
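The floor probability can be computed directly. Strictly, Laplace’s rule gives 1/(N + 2) as the probability that the next event breaks an unbroken run of N normal events, which for a number as large as Lloyd’s is effectively 1/N. A sketch of that arithmetic:

```python
from fractions import Fraction

# Laplace's rule of succession: after N observed events, none of them abnormal,
# the probability that the next event is abnormal is 1/(N + 2), ~ 1/N for huge N.
N = 10**120          # Seth Lloyd's count of elementary bit-operations in the observable universe
floor = Fraction(1, N + 2)
print(float(floor))  # ~1e-120
```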
Summing up: we can define good evidence for a hypothesis as evidence which provides strong confirmational support for that hypothesis, and which renders that hypothesis reasonably probable (but not certain), when evaluated against its most plausible rivals. And when evaluating bizarre (paranormal or supernatural) hypotheses of a general nature against their naturalistic rivals, the prior probability we should assign to the former is no lower than 1 in 10^120. (Indeed, some people might want to assign a higher floor of 1 in 10^20 for bizarre hypotheses, on the grounds that the number of events that could have been witnessed by the 100 billion-odd people who have ever lived over the course of their billion-second lives is only about 10^20, but we’ll waive that point here.)
We can now address the arguments in Professor Moran’s latest post. Let’s begin with vaccines.
Is the HPV vaccine Gardasil dangerous?
Professor Moran writes:
A few weeks ago the Toronto Star (Toronto, Ontario, Canada) published a front page article on the dangers of Gardasil, a vaccine against human papillomavirus (HPV) that’s recommended for adolescent girls. The article highlighted a number of anecdotal stories about girls who had developed various illnesses and disabilities that they attributed to the vaccine. The reporters thought this was evidence that the vaccine had serious side effects that were being covered up by the pharmaceutical industry…
It’s not hard to see where [reporters] David Bruser and Jesse McLean went wrong. They assumed that anecdotal evidence, or personal testimony, was evidence that Gardasil had serious side effects. They assumed this in spite of the fact that scientists and philosophers have been warning against this form of reasoning for 100 years. They assumed it in spite of the fact that there was abundant scientific evidence showing that Gardasil was safe. And they assumed it without bothering to investigate the stories.
Professor Moran is right, but for the wrong reason. To see why, let’s suppose that there were a number of stories in the press about girls living near a nuclear power plant, who had developed various mysterious illnesses. We would not be in the least reassured by government officials appearing on television and declaring that there was abundant scientific evidence showing that nuclear power plants were safe (even though in fact there is). Nor would we be impressed if these politicians pooh-poohed the press stories as anecdotal. If the illnesses were odd enough, and numerous enough, we’d tell the officials, “The incidence of rare illnesses among girls living near nuclear reactors constitutes striking evidence, which is not easily explained except by the hypothesis that the nuclear power plants are making the girls sick. Get up off your lazy backsides, and go and have a look!” (That’s how people talk to politicians in Australia, which is where I’m from.)
It’s true that correlation does not imply causation, and in the case described above, there might be some other cause at work: perhaps, by sheer coincidence, the nuclear power plants in the areas where the outbreaks have occurred are all located near toxic coal-fired power plants, which are really causing the illnesses. But a sufficiently strong correlation usually does imply the presence of a causal link. The question we then need to answer is: what kind of causal link?
Professor Moran evidently appreciates this point, for he continues:
Now, it’s possible that accumulating stories like those will eventually lead to further investigation and the discovery that there are, indeed, some rare side-effects that went undetected in the initial studies. When that happens, we will have evidence. But as long as there are better explanations for those stories they are not evidence of a serious problem with the vaccine.
Exactly. The critical question here is not: can we trust anecdotal evidence? Rather, the question we need to answer is: is there a hypothesis which better explains the evidence?
Did a man levitate in the seventeenth century?
Professor Moran then attempts to discredit the evidence I brought forward concerning a man known as St. Joseph of Cupertino, who levitated in the seventeenth century:
Torley says that there’s evidence of miracle and this is evidence of god(s). His “evidence” consists of reports by eighteenth century theologians that thousands of people witnessed St. Joseph of Cupertino flying through the air.
I reject the notion that this constitutes evidence that St. Cupertino could actually fly. There are far better explanations for the reported observations; namely, that they aren’t true. One of the characteristics of valid evidence has to be whether the purported explanation is a logical conclusion from the observation. In this case, is it more reasonable to assume that thousands of people saw St. Cupertio fly or is it more reasonable to assume that they all just imagined it, or that the second-hand reports are untrue? …
I don’t believe that St. Cupertino actually flew around parts of Italy in the 1600s because there are much more reasonable explanations for the reports that have been written.
Let’s begin with Moran’s statement: “There are far better explanations for the reported observations; namely, that they aren’t true.” Sorry, but that’s not an explanation of anything. At the very least, an explanation of an alleged supernatural event would have to account for why the witnesses thought they had seen something supernatural.
Moran then proceeds to disparage the miracle reports by referring to them as “second-hand” and by alleging a time-lag of 100 years between the events described and the earliest reports of them. But it turns out that a biography of St. Joseph of Cupertino was written as early as 1678, a mere 15 years after his death in 1663. In my last post, I also mentioned that there were thirteen volumes in the Vatican Archives, containing “numerous testimonies of witnesses (including princes, cardinals, bishops and doctors) who knew St Joseph personally and in many cases were eyewitnesses to the wonderful events of his life.” By definition, eyewitness testimony is first-hand, not second-hand.
I then quoted from an article by a modern biographer, Michael Grosso, who summarized the evidence for the levitations as follows:
The records show at least 150 sworn depositions of witnesses of high credentials: cardinals, bishops, surgeons, craftsmen, princes and princesses who personally lived by his word, popes, inquisitors, and countless variety of ordinary citizens and pilgrims. There are letters, diaries and biographies written by his superiors while living with him. Arcangelo di Rosmi recorded 70 incidents of levitation; and then decided it was enough…
…[T]he Church progressively tried to make him retreat to the most obscure corners of the Adriatic coast, ending finally under virtual house arrest in a small monastic community at Osimo. There was no decline effect in Joseph’s strange aerial behaviors; during his last six years in Osimo he was left alone to plunge into his interior life; the records are unanimous in saying that the ratti (raptures) were in abundance right up until his dying days. The cleric in charge of the community swore that he witnessed Joseph levitate to the ceiling of his cell thousands of times.
To repudiate the evidence for Joseph’s levitations would be to repudiate thirty-five years of history because the records of his life are quite detailed and entangled with other lives and documented historical events. We would have to assume colossal mendacity and unbelievable stupidity on the part of thousands of people, if we chose to reject this evidence.
In order to maintain that there are no first-hand reports of St. Joseph of Cupertino’s levitations, Professor Moran would have to maintain that there was a conspiracy on a colossal scale, involving hundreds or even thousands of eminent people who were prepared to perjure themselves by giving sworn testimony of a miracle they knew never happened. And in order to maintain that “they all just imagined it,” as Moran supposes, one would have to maintain that thousands of hallucinations took place, involving thousands of people at many different locations. The problem with both hypotheses is that their antecedent (or prior) probability is even lower than the prior probability of a miracle occurring, which (as I argued above) can be no lower than 1 in 10^120. For instance, let’s suppose that the prior probability of a large crowd of people all imagining that they saw a person levitate in the air for several hours (as St. Joseph is alleged to have often done) is 1 in 1,000, or 1 in 10^3. That’s a very generous estimate, as there are no similar reports of mass hallucinations ever having occurred, anywhere in the world, over a period of hours and under normal viewing conditions. Since the sightings took place on many different occasions at different locations, and involved different people, we can treat them as independent events – which means we can multiply their probabilities. Thus we can calculate that the probability of 40 such independent sightings is 1 in (10^3)^40, or 1 in 10^120. Since there were in fact thousands of sightings, the combined probability of these hallucinations having occurred is far, far lower than the threshold probability of 1 in 10^120 that we assigned to a miracle.
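The arithmetic in the paragraph above can be checked in a few lines (a sketch only, using the same generous 1-in-1,000 figure assumed there):

```python
from fractions import Fraction

p_single = Fraction(1, 10**3)            # generous prior for a single mass hallucination
p_forty  = p_single ** 40                # forty independent sightings: (1/10^3)^40
print(p_forty == Fraction(1, 10**120))   # True -- already at the 1-in-10^120 floor with only 40 sightings
```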
To be sure, one might hypothesize the existence of some unknown common cause for all of these independent sightings – for instance, high levels of cosmic radiation hitting the Earth (and especially Italy) in the seventeenth century. But as we saw above, the appeal to unknown causes is intellectually obstinate. We have to make up our minds, based on the evidence available to us. That means we might be wrong, of course. But that’s an epistemic risk we have to take. The alternative is intellectual paralysis.
Professor Moran also asks why “only Roman Catholic priests … can fly and why haven’t there been any sightings in modern times.” But I nowhere claimed that this miracle constituted evidence for the truth of any particular religion; I merely cited it as evidence for the supernatural. As for levitation sightings occurring in modern times, Professor Moran should be aware that there have been reports of more recent sightings, although the quality of the evidence for these levitations is nowhere near as good as the evidence relating to St. Joseph of Cupertino. I focused on him, simply because he was my best case. (There have also been reports of levitations associated with other Catholic mystics, as well as spiritualists and Indian yogis.)
The origin of life
In response to evolutionary biologist Eugene Koonin’s peer-reviewed estimate that the probability of life originating anywhere in the observable universe is 1 in 10^1,018, Professor Moran comments:
Eugene Koonin’s calculations are silly. I have no idea how to discuss them.
I don’t know how life originated. That statement gets me in trouble with many defenders of evolution because they think it concedes too much to the creationists. Frankly, I don’t care. It’s the truth and we need to be up front about it. Just because we don’t know doesn’t mean that a naturalistic origin of life is impossible. On the contrary, everything we do know is consistent with a spontaneous, natural, origin of life. It looks to me like it was a very rare event but it’s a big universe…
Life on Earth began about 3.5 billion years ago. It is not evidence of god(s)
The fact that a qualified biochemist has “no idea how to discuss” Dr. Koonin’s calculations, which passed peer review by a panel of four referees, speaks volumes. It means that Moran has no alternative naturalistic hypothesis. Moran might respond that we had no naturalistic explanation for magnetism either, until the late nineteenth century. But even people in ancient times could point to the proximate cause of magnetic attraction: they knew, for instance, that pieces of a rock called magnetite attracted shepherds’ iron staffs. Moreover, magnetism was an everyday occurrence. By contrast, we know of no adequate cause for the origin of life, and as far as we know, it occurred only once in the history of the universe: 3.5 billion years ago. In this case, it is not rational to infer that life had a natural origin; it is an article of faith.
The fine-tuning argument
Professor Moran writes:
The universe may not be as “fine-tuned” as most creationist believe. Anyone who has read Victor Stenger will know that it’s not an open-and-shut case [Fine-tuned Universe]. Let’s assume for the sake of argument that the universe is “moderately-tuned for life as we know it.” We don’t know how many other kinds of universe are possible and we don’t know how many different kinds of life are possible…
It seems to be extraordinarily difficult for believers to grasp the essence of the “puddle argument” described by Douglas Adams [Here’s the relevant quote from Adams: “…Imagine a puddle waking up one morning and thinking, ‘This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, must have been made to have me in it!'” – VJT.]
As I stated in my last post in response to Professor Moran, cosmologist Luke Barnes’ online essay, The Fine-Tuning of the Universe for Intelligent Life amply refutes Victor Stenger’s arguments. It is thus reasonably certain that our universe would be incapable of supporting life if its fundamental parameters were even slightly different. This inference would remain valid, even if it turned out that there were other, unknown values of the constants of Nature which would allow universes very different from our own to support life. All that the fine-tuning argument claims is that a lifeless universe would have resulted from fairly minor changes in the forces etc. with which we are familiar. That in itself is a highly remarkable fact, as the philosopher John Leslie explained, using his now-famous “fly-on-the-wall” analogy: “If a tiny group of flies is surrounded by a largish fly-free wall area then whether a bullet hits a fly in the group will be very sensitive to the direction in which the firer’s rifle points, even if other very different areas of the wall are thick with flies.”
Adams’ puddle analogy completely misses the point, because the puddle of water would still be a puddle, even if its shape were slightly different. It just wouldn’t be the same puddle, that’s all.
I will stop here, and let Professor Moran have the last word in this exchange, if he wishes.