Uncommon Descent Serving The Intelligent Design Community

A design inference from tennis: Is the fix in?

Here:

The conspiracy theorists were busy last month when the Cleveland Cavaliers — spurned by LeBron, desperate for some good fortune, represented by an endearing teenager afflicted with a rare disease — landed the top pick in the NBA Draft. It seemed too perfect for some (not least, Minnesota Timberwolves executive David Kahn) but the odds of that happening were 2.8 percent, almost a lock compared to the odds of Isner-Mahut II.

Question: How come it’s legitimate to reason this way in tennis but not in biology? Oh wait, if we start asking those kinds of questions, we’ll be right back in the Middle Ages when they were so ignorant that

Comments
Kairosfocus, this is becoming a little bizarre! I am not disagreeing with you! Let's go through your post:
Dr Liddle: This is a reversion to the already corrected, and is disappointing. Let's go over this one more time: the first default (notice I am NOT using the term null, as it seems to be a source of confusion) is NECESSITY. This is rejected if we have highly contingent outcomes, leading to a situation where on background knowledge chance or choice are the relevant causal factors.
Exactly. If we have highly contingent outcomes we REJECT something, namely Necessity. To put it differently, if the observed data are highly improbable under the hypothesis that Necessity produced the observed pattern, we reject Necessity. If I have this wrong, please tell me, but it seems to me I am saying exactly what you are saying.
We have whole fields of science in direct empirical support, that necessity expressed in a law is the best explanation for natural regularities. But, not for that which is exactly not going to have substantially the same outcome on the same initial conditions more or less: a dropped heavy object falls at g. (The classic example used over and over and over again on this topic.)
I have no disagreement with any of that.
The second default, if that failed, is chance. If we drop one fair (non-loaded) die, the outcome on tumbling — per various processes boiling down to classes of uncorrelated chains of cause and effect giving rise to scatter — will be more or less flat random across the set {1, 2, . . . 6}, as again has been cited as a classic over and over again. If we move up to two dice, the sum will however show a peak at 7, i.e. a statistical outcome based on chance may show a peak. Where we have a sufficiently complex set of possibilities [i.e. config space 10^150 - 10^300 or more] and we have results that come from narrow zones that are independently specifiable, especially on a particular desirable function, we have good reason to infer to choice. For, on overwhelming experience and analysis, sufficiently unusual outcomes will be unobservable on the scope of our solar system or the observed cosmos; due to being swamped out by the statistics. The classic example from statistical thermodynamics is that if you see a room where the O2 molecules are all clumped at one end, that is not a likely chance outcome, as the scattered at random diffused possibilities have such overwhelming statistical weight. But, on equally notoriously huge bases of observation, deliberate action by choice can put configurations into zones of interest that are otherwise utterly improbable.
Exactly. If what we observe is extremely improbable under the hypothesis of Chance, we reject chance as an explanation. Again, this seems to be what you are saying, and I wholeheartedly agree!
Using yet another repeatedly used example, ASCII characters of string length equivalent to this post have in them more possibilities than the observable cosmos could scan more than effectively a zero fraction of, so there is no reason to infer that this post would be hit on by noise on the Internet in the lifespan of the observed cosmos. But, I have typed it out by design in a few minutes. (And, making yet another repeatedly used comparative, DNA is similarly digitally coded complex information in zones of interest inexplicable on chance, the only other known source of contingency. And, on yet another long since stale-dated objection, natural selection as a culler out of the less successful, REMOVES variation, it does not add it, it is chance variation that is the claimed information source for body plan level macroevo.)
Yes, indeed, natural selection removes variation, it is not responsible for it. Again, I agree.
So, Dr Liddle, why is it that on being corrected several times, you so rapidly revert to the errors again corrected?
Because I don't see anywhere where I have said anything that does not agree with what you are saying here! If I have, it can only be because I have been unclear.
Do you not see that this looks extraordinarily like insistence on a strawman caricature of an objected to view?
Well, no, because I am happy to completely accept your account, in your own words. My only point, and it is such a little point I'm amazed that we are even discussing it (and you haven't even said you disagree), is that the way the analysis is set up is by a series of stages under which we REJECT a series of hypotheses (first Necessity, then Chance) if the observed data are very improbable under those hypotheses. (Please can you tell me whether or not you disagree with this, because it is all I am seeing, and seems to me exactly what you are saying above.) And my tiny (but essential for progress) point is that in Fisherian statistics, which is what Dembski and you are using, the hypotheses that are rejected when the pattern is improbable under them are called "null hypotheses". A silly term, perhaps, but that's what we use. That's all I'm saying - that the technical term for the Chance and Necessity hypotheses is "Null" aka H0, and the technical term for Design is the "Alternate Hypothesis" aka H1, in other words, what you are left with if you have excluded everything else.
GEM of TKI PS: Re MF (who uncivilly insists on ignoring anything I have to say, even while hosting a blog in which I am routinely subjected to the nastiest personal attacks that face to face would be well worth a punch in the nose . . . if you picked the wrong sort of person to play that nastiness with), what I will say is that the many clever objections to reasoning by elimination too often overlook the issue of the match between opportunities to observe samples from an underlying population and the likelihood of samples catching very special zones that are small fractions of low relative statistical weight. My usual example is to do a thought experiment based on a darts and charts exercise. Draw a normal curve on a large sheet of paper, breaking it up into stripes of equal width, and carrying out the tails to the point where they get really thin. Mount a step ladder and drop darts from a height where they will be more or less evenly distributed across the sheet with the chart on it. After a suitable number of drops, count holes in the stripes, which will be more or less proportional to the relative areas. One or two drops could be anywhere, but if they are inside the curve will overwhelmingly likely be in the bulk of it, not the far tails. But as you drop more and more hits, up to about 30, you will get a pattern that begins to pick up the relative area of the stripes, and the tails will therefore be represented by relatively few hits. The far tails, which are tiny relatively speaking, and are special independently specifiable zones, will receive very few or no hits, within any reasonable number of drops. So, we see the root of the Fisherian reasoning, which is plainly sound: with statistical distributions, the relative statistical weight dominates outcomes within reasonable resources to sample. So, if you are found in a suspicious zone, that is not likely to be a matter of chance but choice.
Absolutely. And that "suspicious zone" is called the "rejection region". We agree. And what is "rejected" is the null hypothesis. What is considered supported is the Alternative Hypothesis. Therefore, in the ID version, Design is the Alternative Hypothesis, and Chance and/or Necessity is the Null. I think you have just misread the conversation I was having with Mung. I assume you agree with the above, as you seem to be familiar with the quirks of Fisherian statistical nomenclature.
The rest is dressing up a basic common sense insight in mathematics. Or, better yet, statistical thermodynamics. The design inference type approach refines this common sense and gives a systematic way to address what it means to be found in special and rare zones in extremely large config spaces.
Yes indeed. The only point at issue is the name we give the hypothesis we regard as supported if a pattern falls into one of those "special and rare" zones. Now that you know what this spat is about, I'm sure you will agree that what we call it is the "Alternate Hypothesis" aka H1, and the hypothesis that would then be rejected is the "Null" aka H0. Yes?
I am now drawing the conclusion that the torrent of objections and special pleadings and reversions to repeatedly corrected errors that we so often see, are not because of the inherent difficulty of understanding this sort of reasoning, but because the implications cut sharply across worldview expectations and agendas.
Well, luckily for me, it will be clear to you by now that your premise is mistaken, due to some kind of communication error that I assume is now sorted out :) Cheers, Lizzie
Elizabeth Liddle
June 29, 2011 at 05:44 AM PDT
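The retain-or-reject logic described in the comment above can be written out in a few lines. The sketch below is only an illustration under assumed numbers (a fair-coin chance null, the conventional 0.05 cutoff, and a made-up observation of 62 heads in 100 tosses); it is not taken from Dembski's papers or from any commenter's code.

from math import comb

def p_value_at_least(heads, tosses, p=0.5):
    # One-sided p-value: probability of seeing at least `heads` heads in `tosses`
    # fair tosses, i.e. how improbable the observation is under the chance null.
    return sum(comb(tosses, k) * p**k * (1 - p)**(tosses - k)
               for k in range(heads, tosses + 1))

ALPHA = 0.05                        # conventional rejection criterion (assumed)
observed_heads, tosses = 62, 100    # made-up observation

p = p_value_at_least(observed_heads, tosses)
if p < ALPHA:
    print(f"p = {p:.4f} < {ALPHA}: reject the null (chance); the alternative is supported")
else:
    print(f"p = {p:.4f} >= {ALPHA}: retain the null; the alternative is neither supported nor ruled out")

Note the asymmetry being discussed in the thread: failing to reject only "retains the null"; it never positively rules out the alternative.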
Dr Liddle: This is a reversion to the already corrected, and is disappointing. Let's go over this one more time: the first default (notice I am NOT using the term null, as it seems to be a source of confusion) is NECESSITY. This is rejected if we have highly contingent outcomes, leading to a situation where on background knowledge chance or choice are the relevant causal factors. We have whole fields of science in direct empirical support, that necessity expressed in a law is the best explanation for natural regularities. But, not for that which is exactly not going to have substantially the same outcome on the same initial conditions more or less: a dropped heavy object falls at g. (The classic example used over and over and over again on this topic.) The second default, if that failed, is chance. If we drop one fair (non-loaded) die, the outcome on tumbling -- per various processes boiling down to classes of uncorrelated chains of cause and effect giving rise to scatter -- will be more or less flat random across the set {1, 2, . . . 6}, as again has been cited as a classic over and over again. If we move up to two dice, the sum will however show a peak at 7, i.e. a statistical outcome based on chance may show a peak. Where we have a sufficiently complex set of possibilities [i.e. config space 10^150 - 10^300 or more] and we have results that come from narrow zones that are independently specifiable, especially on a particular desirable function, we have good reason to infer to choice. For, on overwhelming experience and analysis, sufficiently unusual outcomes will be unobservable on the scope of our solar system or the observed cosmos; due to being swamped out by the statistics. The classic example from statistical thermodynamics is that if you see a room where the O2 molecules are all clumped at one end, that is not a likely chance outcome, as the scattered at random diffused possibilities have such overwhelming statistical weight. But, on equally notoriously huge bases of observation, deliberate action by choice can put configurations into zones of interest that are otherwise utterly improbable. Using yet another repeatedly used example, ASCII characters of string length equivalent to this post have in them more possibilities than the observable cosmos could scan more than effectively a zero fraction of, so there is no reason to infer that this post would be hit on by noise on the Internet in the lifespan of the observed cosmos. But, I have typed it out by design in a few minutes. (And, making yet another repeatedly used comparative, DNA is similarly digitally coded complex information in zones of interest inexplicable on chance, the only other known source of contingency. And, on yet another long since stale-dated objection, natural selection as a culler out of the less successful, REMOVES variation, it does not add it, it is chance variation that is the claimed information source for body plan level macroevo.) So, Dr Liddle, why is it that on being corrected several times, you so rapidly revert to the errors again corrected? Do you not see that this looks extraordinarily like insistence on a strawman caricature of an objected to view? GEM of TKI PS: Re MF (who uncivilly insists on ignoring anything I have to say, even while hosting a blog in which I am routinely subjected to the nastiest personal attacks that face to face would be well worth a punch in the nose . . .
if you picked the wrong sort of person to play that nastiness with), what I will say is that the many clever objections to reasoning by elimination too often overlook the issue of the match between opportunities to observe samples from an underlying population and the likelihood of samples catching very special zones that are small fractions of low relative statistical weight. My usual example is to do a thought experiment based on a darts and charts exercise. Draw a normal curve on a large sheet of paper, breaking it up into stripes of equal width, and carrying out the tails to the point where they get really thin. Mount a step ladder and drop darts from a height where they will be more or less evenly distributed across the sheet with the chart on it. After a suitable number of drops, count holes in the stripes, which will be more or less proportional to the relative areas. One or two drops could be anywhere, but if they are inside the curve will overwhelmingly likely be in the bulk of it, not the far tails. But as you drop more and more hits, up to about 30, you will get a pattern that begins to pick up the relative area of the stripes, and the tails will therefore be represented by relatively few hits. The far tails, which are tiny relatively speaking, and are special independently specifiable zones, will receive very few or no hits, within any reasonable number of drops. So, we see the root of the Fisherian reasoning, which is plainly sound: with statistical distributions, the relative statistical weight dominates outcomes within reasonable resources to sample. So, if you are found in a suspicious zone, that is not likely to be a matter of chance but choice. The rest is dressing up a basic common sense insight in mathematics. Or, better yet, statistical thermodynamics. The design inference type approach refines this common sense and gives a systematic way to address what it means to be found in special and rare zones in extremely large config spaces. I am now drawing the conclusion that the torrent of objections and special pleadings and reversions to repeatedly corrected errors that we so often see, are not because of the inherent difficulty of understanding this sort of reasoning, but because the implications cut sharply across worldview expectations and agendas.
kairosfocus
June 29, 2011 at 03:46 AM PDT
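The darts-and-charts thought experiment above is easy to simulate. The sketch below is a rough illustration only, assuming NumPy is available; the sheet dimensions, stripe width, and the figure of 30 drops are taken loosely from the description above, and everything else is assumed.

import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Sheet: x in [-4, 4], y from 0 up to the peak of the curve; stripes of width 1.
N_DARTS = 30
x = rng.uniform(-4, 4, N_DARTS)
y = rng.uniform(0, normal_pdf(0), N_DARTS)

under_curve = y < normal_pdf(x)      # darts that land inside the curve
edges = np.arange(-4, 5)             # stripe boundaries
hits, _ = np.histogram(x[under_curve], bins=edges)

for lo, n in zip(edges[:-1], hits):
    print(f"stripe [{lo:+d}, {lo+1:+d}): {n} hit(s)")
# Typical result: the under-curve hits cluster in the central stripes, while the
# far-tail stripes [-4, -3) and [+3, +4) almost always collect none, which is the
# sampling point the comment is making.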
Mung, @ #64: Thank you for posting that. Yes, it is possible to reject a null without having specified an Alternative Hypothesis in detail. That is where the terminology becomes confusing, and perhaps that is where the communication difficulty has arisen. An Alternative Hypothesis (H1) can be expressed as the negation of the null, just as H0 can be expressed as the negation of H1. The important thing is that there is no Excluded Middle. That's why one is always expressed as Not The Other. So we could express the Design Hypothesis as either H0: Not-Design; H1: Design. Or we could express it as: H0: Chance; H1: Not Chance. Or even: H0: Chance or Necessity; H1: Neither Chance nor Necessity. It doesn't matter. A null hypothesis isn't called "null" because it has a "not" (or a "neither") in it! And the Alternative Hypothesis can be as vague as "not the null". So let's just call them A and B to avoid terminology problems for now: In Fisherian hypothesis testing, you set your two hypotheses (A and B) so that you can infer support for A if the probability of observing the observed data, given that B is true, is very low. However, if the probability of observing the observed data is quite high under B, we "retain B". We do not rule out A. So it is an asymmetrical test. We plot the distribution of possible data under B, and if the observed data is in one of the extreme tails of B, we conclude that "p<alpha" (where alpha is your rejection criterion) and A is supported. So the way you tell which is H0 and which is H1 when reading a report of Fisherian hypothesis testing is to ask yourself: which hypothesis is considered supported when the observed data are improbable under the other? That is your H1. The hypothesis that gets rejected is your H0. That is how we can tell that Dembski's EF, and indeed his CSI filter, cast Design (or, if you will, Not Chance Or Necessity) as H1. Because if the data are highly improbable under Chance or Necessity, Design (or Not Chance Or Necessity) is considered supported. It's all in Meyer :) Does that make sense now? If so, we can go on to discuss why it might be problematic for Dembski's method, but at first let us be clear on how the method is parsed in Fisherian terminology!
Elizabeth Liddle
June 29, 2011 at 01:10 AM PDT
#64 and #65 Mung Do you understand that null hypothesis significance testing is a conceptual nightmare and only hangs on in statistics because of tradition? It is one of Dembski's biggest mistakes to hitch the design inference to this. There are many papers on the internet describing this. Here is one. To quote: The null hypothesis significance test (NHST) should not even exist, much less thrive as the dominant method for presenting statistical evidence in the social sciences. It is intellectually bankrupt and deeply flawed on logical and practical grounds.
markf
June 28, 2011 at 11:01 PM PDT
William A. Dembski:
The Fisherian Approach to Design Inferences This is the approach I adopt and have developed. In this approach there are always two events: an event E that the world presents to us and an event T that includes E (i.e., the occurrence of E entails the occurrence of T) and that we are able to identify via an independently given pattern (i.e., a pattern that we can reproduce without having witnessed E). Think of E as an arrow and T as a fixed target. If E lands in T and the probability of T is sufficiently small, i.e., P(T) is close to zero, then, on my approach, a design inference is warranted. For the details, see my article at http://www.designinference.com/documents/2005.06.Specification.pdf titled “Specification: The Pattern That Signifies Intelligence.”
http://www.designinference.com/documents/2005.09.Primer_on_Probability.pdf
Mung
June 28, 2011 at 08:35 PM PDT
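Dembski's arrow-and-target description quoted above (an observed event E that lands inside an independently specifiable target T, with P(T) close to zero) can be put into a toy calculation. The target chosen below, "all 100 tosses show the same face", is an assumption made purely for illustration and is not drawn from the linked paper.

# Space: all 2**100 sequences of 100 fair coin tosses.
# Target T (an independently specifiable pattern): "every toss shows the same face",
# which contains exactly 2 of those sequences (all heads, all tails).
SEQ_LEN = 100
total_sequences = 2 ** SEQ_LEN
target_size = 2

p_T = target_size / total_sequences   # probability that a chance sequence lands in T
print(f"P(T) = {p_T:.3e}")            # about 1.6e-30

# An observed event E such as "100 heads in a row" entails T (the arrow is inside
# the target), and P(T) is tiny -- the situation in which, on the quoted account,
# a design inference is said to be warranted.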
William A. Dembski:
(4) Eliminating chance without comparison. Within the Bayesian approach, statistical evidence is inherently comparative—there’s no evidence for or against a hypothesis as such but only better or worse evidence for one hypothesis in relation to another. But that all statistical reasoning should be comparative in this way cannot be right. There exist cases where one and only one statistical hypothesis is relevant and needs to be assessed. Consider, for instance a fair coin (i.e., a perfectly symmetrical rigid disk with distinguishable sides) that you yourself are tossing. If you witness a thousand heads in a row (an overwhelmingly improbable event), you’ll be inclined to reject the only relevant chance hypothesis, namely, that the coin tosses are independent and identically distributed with uniform probability. Does it matter to your rejection of this chance hypothesis whether you’ve formulated an alternative hypothesis? I submit it does not.
http://www.designinference.com/documents/2005.09.Fisher_vs_Bayes.pdfMung
June 28, 2011 at 08:00 PM PDT
Dr Liddle: Yes, once the relevant criteria of empirically tested and reliable signs as the means of making those two rejections are also acknowledged. However, it must then also be faced that there are many direct positive confirming instances -- and a distinct absence of credible failures [e.g. a whole internet full of cases] -- where we can see whether CSI and/or FSCI serves as a reliable positive sign of choice contingency. Which it does. This is inference to best explanation on cumulative empirical evidence in light of positive induction, not mere elimination on what "must" be the "only" alternative. GEM of TKI
kairosfocus
June 28, 2011 at 05:28 PM PDT
No, I don't think you are being inconsistent, kairosfocus, and what you have said here is exactly in accordance with what you said earlier. If a pattern makes it through the filter it means we reject, in turn, necessity (because of high contingency), then chance (because of CSI). That allows the pattern to make it through to Design. Correct?
Elizabeth Liddle
June 28, 2011 at 04:59 PM PDT
PS: If you look at the per aspect EF chart, you will see that there are two decision nodes in succession in the per aspect chart as shown. The first default is that there is a mechanical necessity at work, rejected on finding high contingency. Thereafter, as just stated above, the second is that the result is by chance-driven, stochastic contingency. This is rejected on finding CSI. There is no inconsistency in my remarks, and that you think you see that shows that you are misreading what the flowchart and the remarks have been saying. Notice, again, there are two defaults, first necessity, then if that fails, there will be chance, and only if this fails by a threshold of specified complexity where chance is maximally unlikely to be able to account for an outcome, will the inference be to design. Indeed, the filter cheerfully accepts a high possibility of missing cases of design in order to be pretty sure when it does rule design. That is, it is an inference to best explanation, with a high degree of confidence demanded for ruling design. Once there is high contingency but you cannot surmount that threshold, it will default to chance as best explanation of a highly contingent outcome. If the outcome is pretty consistent once similar initial conditions obtain, the default is mechanical necessity. --> I do not know how to make this any clearer, so if someone out there can help I would be grateful.
kairosfocus
June 28, 2011 at 04:50 PM PDT
Well, obviously I have not managed to convey my point, because I'm sure if I had, you would agree with it! Let's try another tack: Do you agree that, if an observed pattern succeeds in making it right through the EF, the answer to each of the first two questions must have been "no"? (i.e. No to Law; No to Chance)
Elizabeth Liddle
June 28, 2011 at 04:45 PM PDT
Dr Liddle: Kindly, look at the diagrams as linked. Compare your statements, and I trust you will see why we find your descriptions in gross error. And gross error relative to easily ascertained facts. You may choose to disagree with the point in the EF, but surely where a start node in a flowchart lies, the flow through branch points [decision nodes], and the terminus in a stop point -- as the very use of the relevant shapes alone should tell -- is plain. GEM of TKI
kairosfocus
June 28, 2011 at 04:40 PM PDT
I am quite familiar with the structure and logic of the EF. Indeed, I pointed it out. All I am saying is that, in terms of Fisherian nomenclature (and it is a frequentist filter), Design is cast as the alternative hypothesis and non-Design as the null. That is an entirely neutral statement. One could regard it as a strength (although I personally think it leads to a flaw). But the fact is that if you set up a hypothesis so that you make your inference by rejecting some other hypothesis you are casting that other hypothesis (or hypotheses) as the null, and your own as the "alternative hypothesis" aka H1. Clearly, the filter is set up to allow us to REJECT Chance and Necessity, if the observed pattern passes through the stages, and infer Design. That means, in other words, that if the pattern falls in the REJECTION zone, we infer that our Hypothesis is supported. What is REJECTED in the rejection zone is the NULL. Ergo, Design is the Alternate Hypothesis. Honestly, this really is Stats 101! I'm astonished that it's controversial. And in fact, kf, you agreed with it upthread! You wrote (#37):
Without making undue references to simple statistics, we may look at two hyps to be rejected in succession in a context where three are reasonable, and backed up by direct observations.
(my bold) In other words, if the observed pattern falls in the "rejection region" we consider Design supported. No? Or have you changed your mind? What I tell my students: We never "reject" our H1 in frequentist stats - we merely "retain the null". In other words, even if we "retain the null" H1 remains possible, just not positively supported by the data. However, we may "reject the null". In that case we can consider our H1 supported. So if a "filter" is set up to "reject" a hypothesis, the hypothesis set up to be "rejected" is, technically, called "the null".
Elizabeth Liddle
June 28, 2011 at 04:13 PM PDT
Dr Liddle: I must confess my disappointment with the just above. First, in neither the 1998 or so simple presentation (a slightly different form is here -- nb under given circumstances if LAW is the driver, the outcome will be highly probable, if chance, it will be intermediate, if choice beyond a threshold of confident ruling, it will be highly improbable if it were assumed to be by chance) nor my more complex per aspect presentation is chance the first node of the filter, but a test for mechanical necessity. That is precisely because high contingency is the hallmark that allows one to reject necessity as a credible explanation. If something has highly consistent outcomes under similar start points one looks for a more or less deterministic law of nature to explain it, not to chance or choice. Once something is highly contingent, then the real decision must be made, and in that context the default is chance. Once something is within reasonable reach of a random walk on the accessible resources, it is held that one cannot safely conclude on the signs in hand, that it is not a chance occurrence. Only if a highly contingent outcome [thus, not necessity] is both complex and specific beyond a threshold [i.e. it must exhibit CSI, often in the form FSCI] will there be an inference to design. Or, using the log reduced form of the Chi metric: Chi_500 = I*S - 500, bits that are specific and complex beyond a 500 bit threshold. Only if this value is positive would choice be inferred as the best explanation; essentially on the gamut of the solar system. Here, raising the complexity threshold to 1,000 bits would put us beyond the credible reach of the observed cosmos. Why I am disappointed is that you have been presented with flowchart diagrams of the EF especially in the more complex per aspect form [cf Fig A as linked], over and over, and this is the second significant error of basic interpretation we are seeing from you on it. As you can also see, for an item to pass the Chi_500 type threshold, it would have to pass the nodes of the filter, so this is an equivalent way to make the decision. This is why Dr Dembski's remark some years back on "dispensing with" the explicit EF had a point. (Cf the discussion in the UD correctives. By now you should know that the objectors, as a rule, cannot be counted on to give a fair or accurate summary of any matter of consequence related to ID.) Please, make sure you have the structure and logic of the EF right for next time. GEM of TKI
kairosfocus
June 28, 2011 at 03:49 PM PDT
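The log-reduced Chi metric quoted in the comment above, Chi_500 = I*S - 500, is simple arithmetic once I and S are given. The sketch below only illustrates that arithmetic; the function name and the example figure of 7 bits per ASCII character are assumptions, and how I (information in bits) and S (a 0/1 specificity flag) would be assigned to a real object is exactly what the thread is arguing about.

def chi_500(info_bits, specific, threshold=500):
    # Log-reduced Chi metric as quoted in the comment above:
    # Chi = I*S - threshold, with S treated as 1 if the string is
    # independently specified and 0 otherwise.
    S = 1 if specific else 0
    return info_bits * S - threshold

# A 143-character ASCII string carries about 143 * 7 = 1001 bits of raw capacity.
print(chi_500(1001, specific=True))    # 501 > 0: design would be inferred on this rule
print(chi_500(1001, specific=False))   # -500 <= 0: defaults back to chance
print(chi_500(400, specific=True))     # -100 <= 0: below the threshold, chance retained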
Because he agrees with Dembski. Mung, you keep asserting that I am wrong, then you quote stuff that demonstrates my point! It's actually a pretty trivial point, and I didn't think it was going to be worth even making. I assumed that everyone agreed with it. I can't believe I'm explaining how the EF works on UD! OK, The Annotated Dembski, by Lizzie.
Given something we think might be designed, we refer it to the filter.
LizzieNotes: Our filter lets Designed things through, and keeps back non-Designed things.
If it successfully passes all three stages of the filter, then we are warranted asserting it is designed.
LizzieNotes: If our observed pattern gets through all the filtering stages, we can infer it was designed. (Well, didn't really need that, Dembski is admirably clear.)
Roughly speaking the filter asks three questions and in the following order: (1) Does a law explain it?
LizzieNotes: is our observed pattern probable under the distribution of patterns expected of the laws of physics and chemistry? Note that this may not sound like a null at first glance but the key word is "probable". Because it asks whether the pattern is "probable", not "improbable", we know it is a null. To support an H1 hypothesis (alternative) we need to show that it is "improbable" under some null.
(2) Does chance explain it?
LizzieNotes: This is the classic null: is our observed pattern probable under the null of "Chance". See Meyer, as quoted above.
(3) Does design explain it?
LizzieNotes: As this is where we end, if the observed pattern makes it through the filters, we answer "yes". This is because Design has fallen into the "rejection region", where neither "Law" nor "Chance" is a probable explanation. And that's absolutely fine. Dembski poses Design as the "alternate hypothesis" (H1) to the "null hypothesis" of no-Design, which he partitions into two sub nulls: "Law" (at which point the rejection region is fairly large, and encompasses both "Chance" and "Design") and "Chance", which is conventional Fisherian stuff. Are we in agreement yet? :) The slightly misleading part is "law". To give an example that might help make sense of this: Let's say someone tells us they saw someone toss 100 consecutive heads. There are three possible explanations: The coin had two heads; it was an incredibly lucky series of throws; the guy had a special tossing technique and managed to consistently ensure that the coin always landed heads. First of all we examine the coin. Scenario 1: it has two heads. What is the probability of 100 heads for a coin with two heads? Answer: 1. We can retain the null, and consider that the hypotheses that a lucky chance meant that coin landed always the same side up, or that the guy had a special throwing technique are unsupported. Although both these things could still be true. Scenario 2: it has a head on one side and tails on the other. Is there a law that governs how a coin with heads on one side and tails on the other will fall? No, there isn't. So we can actually skip that bit of the filter. What is the probability of 100 heads for a coin with heads on one side and tails on the other? Well, I make it .5^100. Very improbable. So we can reject the null of Chance. So we've made it through the filter. Design Did It. The man is a genius. And Design was always H1. A bit weird for the first stage, but nonetheless, that's how it works. It's easier for the CSI formula, because that lumps the two (Chance and Necessity) together, in effect, making them add up to "non-design".
Elizabeth Liddle
June 28, 2011 at 03:11 PM PDT
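The two coin scenarios above map directly onto the retain/reject decision being debated. A minimal sketch follows; the alpha value is an arbitrary assumption for illustration (Dembski's own universal probability bound is on the order of 10^-150), and the helper function is hypothetical.

def p_hundred_heads(p_heads):
    # Probability of 100 consecutive heads given the chance hypothesis that each
    # independent toss lands heads with probability p_heads.
    return p_heads ** 100

ALPHA = 1e-10   # arbitrary rejection threshold, assumed for illustration

# Scenario 1: two-headed coin. P(100 heads | two heads) = 1, so nothing is rejected.
print(p_hundred_heads(1.0))          # 1.0 -> retain; no further inference licensed

# Scenario 2: ordinary coin. P(100 heads | fair chance) = 0.5**100 ~ 7.9e-31 < ALPHA.
p = p_hundred_heads(0.5)
print(p, p < ALPHA)                  # ~7.9e-31, True -> reject the chance null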
Then whose beliefs are you posting? Dembski's.
Then why are you quoting Meyer?
Mung
June 28, 2011 at 02:33 PM PDT
Elizabeth Liddle:
...the EF, in two stages, first rejects “Chance” (i.e. Necessity or Design are H1) then rejects “Necessity and Chance” (i.e. Design is H1).
As I have been saying all along, that is not correct. First, there are three stages to the EF. Dembski:
Given something we think might be designed, we refer it to the filter. If it successfully passes all three stages of the filter, then we are warranted asserting it is designed. Roughly speaking the filter asks three questions and in the following order: (1) Does a law explain it? (2) Does chance explain it? (3) Does design explain it?
Design:
- purposefully directed contingency. That is, the intelligent, creative manipulation of possible outcomes (and usually of objects, forces, materials, processes and trends) towards goals.
Explanatory Filter:
For, while chance, necessity and agency may – and often do – jointly all act in a situation, we may for analysis focus on individual aspects. When we do so, we can see that observed regularities that create consistent, reliably observable patterns — e.g. the sun rises in the east each morning, water boils at sea level on Earth at 100 °C — are the signposts of mechanical necessity; and will thus exhibit low contingency. Where there is high contingency – e.g. which side of a die is uppermost – the cause is chance (= credibly undirected contingency) or design (= directed contingency).
https://uncommondescent.com/glossary/ For now I'm just going to post links. Perhaps summarize later: http://www.arn.org/docs/dembski/wd_explfilter.htm http://conservapedia.com/Explanatory_filter http://conservapedia.com/images/thumb/e/ef/Explanfilter.jpg/275px-Explanfilter.jpg
Mung
June 28, 2011 at 02:22 PM PDT
Dembski's.
Elizabeth Liddle
June 28, 2011 at 01:38 PM PDT
EL:
I don’t have “beliefs about how the design argument functions”.
Then whose beliefs are you posting?
Mung
June 28, 2011 at 01:36 PM PDT
Mung:
Hi Lizzie, I think you missed my point about Neyman and Pearson. So would you say that it is “chance” that is the null hypothesis, or something else?
"something else". i.e. "not design".
Does rejecting the chance hypothesis get us to design?
Rejecting the null hypothesis "not design" gets us to design.
You agree that Meyer relies heavily on Dembski, correct?
Yes.
How does “inference to the best explanation” fit in with your beliefs about how the design argument functions?
I don't have "beliefs about how the design argument functions". I am simply pointing out that the EF and CSI both cast Design as H1. Therefore they cast non-Design as H0. That is why "Design" is in the "rejection region" of Meyer's plot, and it is why the EF, in two stages, first rejects "Chance" (i.e. Necessity or Design are H1) then rejects "Necessity and Chance" (i.e. Design is H1). Design, is, in other words, in the rejection region of the distribution of patterns. It is H1. This isn't my "belief", Mung, you can read it straight off the page. It shouldn't even be controversial!
From your linked source:
In statistics, the only way of supporting your hypothesis is to refute the null hypothesis. Rather than trying to prove your idea (the alternate hypothesis) right you must show that the null hypothesis is likely to be wrong – you have to ‘refute’ or ‘nullify’ the null hypothesis. Unfortunately you have to assume that your alternate hypothesis is wrong until you find evidence to the contrary.
Now I understood you to say that design does not work like this. I guess I need to go back and re-read.
No, I'm saying that the EF and the CSI formula work exactly like this. They are set up to "refute" the null. And if you manage to "refute" the null (of no-design) you can consider your H1 (the "alternate hypothesis") supported. That's why the EF is a "Filter" - it filters out the null junk and leaves you with Design. Do we now agree that in the EF and the CSI formula, Design is cast as H1? In which case, obviously, "no-design" is the null.
Elizabeth Liddle
June 28, 2011 at 11:17 AM PDT
Hi Lizzie, I think you missed my point about Neyman and Pearson. So would you say that it is "chance" that is the null hypothesis, or something else? Does rejecting the chance hypothesis get us to design? You agree that Meyer relies heavily on Dembski, correct? How does "inference to the best explanation" fit in with your beliefs about how the design argument functions? From your linked source:
In statistics, the only way of supporting your hypothesis is to refute the null hypothesis. Rather than trying to prove your idea (the alternate hypothesis) right you must show that the null hypothesis is likely to be wrong – you have to ‘refute’ or ‘nullify’ the null hypothesis. Unfortunately you have to assume that your alternate hypothesis is wrong until you find evidence to the contrary.
Now I understood you to say that design does not work like this. I guess I need to go back and re-read.
Mung
June 28, 2011 at 10:43 AM PDT
Mung? Do you see what I'm saying here?
Elizabeth Liddle
June 28, 2011 at 10:41 AM PDT
Mung, you are still missing the point I have stated clearly several times: The whole Fisherian convention is that if we fail to support H1, we merely "retain the null", we do not reject H1. So Dembski is absolutely right to say that "When the Explanatory Filter fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer to this question is No." That is absolutely standard Fisherian inference from failure to support H1 - you merely "retain the null". You cannot be sure that H1 is not true, you merely remain without evidence that it is. In fact, everything you say, including your further quote from Meyer, makes it clear: in ID methodology, Design is cast as H1. Design is what falls in the "rejection region" under the null of "nothing going on here", to use Meyer's phrase. The passage from Meyer that you thought might "burst my bubble" does no such thing - it merely raises the bar for inclusion in the rejection region. And I agree with your quote in 46 - I think Fisherian hypothesis testing is unsuitable for the task, but it is nonetheless the one Dembski uses, and therefore runs a very large risk of a Type II error, especially given his tiny alpha (which, according to one source, which I am not equipped to critique, is still too large, and if the right value was used, would render Type II errors inevitable and the Filter/CSI useless). However, my own criticism of it is that because Design is cast as H1, it is absolutely vital to correctly calculate the null. And I think the filter gives us no way of calculating the null.
Elizabeth Liddle
June 28, 2011 at 12:49 AM PDT
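The Type I / Type II trade-off mentioned above can be seen in a small simulation. This is a sketch with made-up parameters (a biased coin standing in for "something other than the null", 50 tosses, and NumPy for the random draws); it illustrates only the general statistical point that shrinking alpha raises the miss rate, not anything specific to the EF or CSI.

from math import comb
import numpy as np

rng = np.random.default_rng(1)
TOSSES, TRIALS = 50, 20_000

def miss_rate(alpha, true_p=0.7):
    # Critical count: smallest k with P(at least k heads | fair coin) < alpha.
    tail, k_crit = 0.0, TOSSES + 1
    for k in range(TOSSES, -1, -1):
        tail += comb(TOSSES, k) * 0.5 ** TOSSES
        if tail >= alpha:
            k_crit = k + 1
            break
    # Fraction of genuinely biased-coin runs that fall short of the critical count,
    # i.e. the "fair coin" null is retained even though it is false: a Type II error.
    heads = rng.binomial(TOSSES, true_p, TRIALS)
    return float(np.mean(heads < k_crit))

for alpha in (0.05, 1e-4, 1e-8):
    print(f"alpha = {alpha:g}: Type II (miss) rate ~ {miss_rate(alpha):.2f}")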
Elizabeth Liddle:
In other words, always casting Chance as the null, and Pattern (or Design) as H1. So it seems that Meyer agrees with me
Sorry to burst your bubble, but you should have read on. Meyer, p. 189:
Pattern recognition can lead us to suspect that something more than chance is at work. But the presence of a pattern alone does not justify rejecting chance, any more than an improbable event alone, in the absence of a pattern, justifies chance elimination. ...Or to make the point using a logical distinction: the presence of a pattern is a necessary, but not by itself a sufficient condition of chance elimination.
Mung
June 27, 2011 at 07:24 PM PDT
...but Dembski goes with Fisher, so, hey...
Which is not to say that Dembski goes with Neyman and Pearson's extension of Fisher's ideas.
Dembski describes Fisherian significance testing, but nowadays this is not common practice for hypothesis testing which owes more to Neyman and Pearson -- who were strongly opposed to Fisher's approach. If you open almost any introductory statistics text book and turn to the section on hypothesis testing you will see that the student is told that they should define two hypotheses -- the null hypothesis (the one being tested) and an alternative hypothesis. In fact hypothesis testing is often considered to be a balance of two risks -- the risk of rejecting the null hypothesis when it is true versus the risk of rejecting the alternative hypothesis when it is true. HT: Mark Frank
Mung
June 27, 2011 at 07:11 PM PDT
Elizabeth Liddle:
Yes, but that is how you have to cast a null.
Well, in that case we haven't been talking past each other, lol. Because I took that to be precisely what you were asserting. If design is the alternative hypothesis (H1), as you claim, then the null hypothesis (H0) is "there is no design present here." The null is the logical negation of the alternative. Are we agreed so far? If so, what then do you make of Dembski's statement:
When the Explanatory Filter fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer to this question is No.
Mung
June 27, 2011 at 06:45 PM PDT
Mung:
Lizzie, it occurs to me that we may in some sense be talking past each other.
Yes indeed, Mung :) Right, now let's try to get ourselves face to face....
When I hear you say no design or not design I am taking that literally, as being a statement that the null hypothesis is that there is no design present.
Yes, but that is how you have to cast a null. It doesn't mean that if you "retain the null" you have concluded that there is no design present, merely that you have failed to show that design is present. It's a subtle point, but an important point. To take Meyer's example of the roulette wheel: if the statistician employed at the casino shows that the pattern observed is not particularly improbable under the null of "nothing is going on" (cheating, wonky table, whatever), then that does not rule out "something going on", but it does not allow the casino owner to infer that it is. It's just one of the weirdnesses of Fisherian statistics.
If as you claim design is the alternative, that would be the logical null hypothesis.
Well, I'm saying that the way ID tests are usually cast is with no-design as the null and Design as the alternative. As Meyer explains.
Is that what you mean, or do you mean by no design and not design that design may be there, but we just can’t tell. That seems illogical to me, but hey, stuff like that happens.
No, I mean what I said above. The null hypothesis is "no design". However, "retaining the null" doesn't mean "no design", it just means that design hasn't been demonstrated. Yeah, it's weird, and it's why Bayesian statistics often makes more sense, but Dembski goes with Fisher, so, hey :) It's interesting, and makes it different from, for example, PaV's Blood of St Januarius argument. However, it's also its biggest flaw, IMO. But first, let's agree that that is the way it is :) I have one vote from Meyer. I seem to have a vote from the OP. KF seems on board. Just waiting for you, Mung :)
Elizabeth Liddle
June 27, 2011 at 12:53 AM PDT
Lizzie, it occurs to me that we may in some sense be talking past each other. When I hear you say no design or not design I am taking that literally, as being a statement that the null hypothesis is that there is no design present. If as you claim design is the alternative, that would be the logical null hypothesis. Is that what you mean, or do you mean by no design and not design that design may be there, but we just can't tell? That seems illogical to me, but hey, stuff like that happens.
Mung
June 26, 2011 at 08:30 PM PDT
Elizabeth Liddle:
However, the CSI calculation is a frequentist calculation, as is the EF. And in both, Design is cast as the null. At least, you haven’t persuaded me that it isn’t, merely asserted it, and you haven’t addressed my careful post in which I demonstrated that it is.
I haven't even tried to persuade you that it's not. Why would I try to persuade you that design is not the null when you've been claiming that not design is the null and that design is the alternative hypothesis? Elizabeth Liddle:
That is how every formulation of CSI or the EF that I have seen is cast, and that is casting Design as H1, and no-design as the null.
Now I do have to say you are making no sense. In the same post you have asserted that both A and NOT A are true. You: And in both [CSI and the EF], Design is cast as the null. You: That is how every formulation of CSI or the EF that I have seen is cast, and that is casting Design as H1, and no-design as the null. So leaving that aside for now as an irreconcilable difference in your stated positions, let's return to Meyer in Ch. 8 of SitC:
The essentially negative character of the chance hypothesis is suggested by its other common name: the null hypothesis (i.e., the hypothesis to be nullified or refuted by alternative hypotheses of design or lawlike necessity).
Multiple alternative hypotheses. It's not "no design" as the null with the alternative being design. At some point we have to eliminate lawlike processes. One might call them patterns of high probability.
Mung
June 26, 2011 at 08:19 PM PDT
Elizabeth Liddle:
So it seems that Meyer agrees with me.
For now I'll skip over your previous posts and concentrate on SitC. Chapter 8: Chance Elimination and Pattern Recognition. The first thing that comes to mind is Chance Elimination. Are you now claiming that CHANCE is the null hypothesis? Because ALL ALONG you have been asserting that NOT DESIGN is the null hypothesis.
In other words, always casting Chance as the null, and Pattern (or Design) as H1.
So you're starting to come around? NOT DESIGN is NOT the null hypothesis? Meyer:
I wondered if it [intelligent cause] could provide a better explanation than the alternatives. Could it be inferred as the best explanation for the origin of the specified information in the cell? To answer this question, I would need to undertake a rigorous examination of the various alternatives.
Alternatives, not alternative.
...I was familiar with most of the main theories then current for explaining the origin of life. These theories exemplified a few basic strategies of explanation. Some relied heavily on chance - that is, on random processes or events. Some invoked lawlike processes - deterministic chemical reactions or forces of attraction. Other models combined these two approaches.
Since you're just starting Chapter 8, Chapter 7, where Meyer discusses "inference to the best explanation," should be fresh in your mind.
Mung
June 26, 2011 at 07:47 PM PDT
Heh. Having just gone off to bed with The Signature in the Cell, I just had to log back in again.... Mung, check out pages 178-193, on Dembski, Fisher (Fisher!) and Chance Elimination. Meyer goes through the basics of Fisherian statistical testing (i.e. frequentist stats) and rejection regions and all, making it quite clear that "design" is in the rejection region, then, on page 188, comes right out and says:
The Chance hypothesis in effect says "There is nothing going on in this event to indicate any regular or discernable causal factors". Since patterns signal the presence of deeper causal factors or regularities at work, the presence of patterns negates chance. Because patterns negate the null hypothesis (that "nothing is going on") and the null hypothesis is the chance hypothesis, patterns negate the chance hypothesis. Patterns negate the negation - the negation entailed in a substantive chance hypothesis.
Then he goes on a bit more about Dembski's refinement of pattern recognition, but always defining it as the pattern that falls in the "rejection region" after deciding "how improbable is too improbable?" for the chance hypothesis to explain. In other words, always casting Chance as the null, and Pattern (or Design) as H1. So it seems that Meyer agrees with me :)
Elizabeth Liddle
June 26, 2011 at 05:11 PM PDT