Uncommon Descent Serving The Intelligent Design Community

A design inference from tennis: Is the fix in?


Here:

The conspiracy theorists were busy last month when the Cleveland Cavaliers -- spurned by LeBron, desperate for some good fortune, represented by an endearing teenager afflicted with a rare disease -- landed the top pick in the NBA Draft. It seemed too perfect for some (not least, Minnesota Timberwolves executive David Kahn), but the odds of that happening were 2.8 percent, almost a lock compared to the odds of Isner-Mahut II.

Question: How come it's legitimate to reason this way in tennis but not in biology? Oh wait, if we start asking those kinds of questions, we'll be right back in the Middle Ages when they were so ignorant that . . .

Comments
WilliamRoache, the inference is automatic through the exclusion of chance and necessity across the entire material resources of the universe... another inference is available, but seeing your aversion for links... bornagain77
June 30, 2011 at 11:23 AM PDT
I'd like to just check that we have agreement on a single point, regarding the Design Inference: Design is inferred if an observed pattern is improbable under any other hypothesis. Mung, kairosfocus? Can we agree on this? If not, how would you amend what I have put in bold, above?Elizabeth Liddle
June 30, 2011 at 10:44 AM PDT
bornagain77, Thanks for the links etc. However I fail to see the relevance to my original question to Mung. Would it be possible for you to clarify your point and how it relates to my question, preferably without links and in your own words? Unless of course it is a link to a worked example of the Explanatory Filter for a biological entity!WilliamRoache
June 30, 2011 at 10:32 AM PDT
WilliamRoache, pardon, but perhaps this:

Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - short video
http://www.metacafe.com/watch/3995236

Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007
Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.
http://www.tbiomed.com/content/4/1/47

f/n: Stephen Meyer - Functional Proteins And Information For Body Plans - video
http://www.metacafe.com/watch/4050681

Intelligent Design: Required by Biological Life? - K.D. Kalinsky - pg. 11
Excerpt: It is estimated that the simplest life form would require at least 382 protein-coding genes. Using our estimate in Case Four of 700 bits of functional information required for the average protein, we obtain an estimate of about 267,000 bits for the simplest life form. Again, this is well above Inat and it is about 10^80,000 times more likely that ID (Intelligent Design) could produce the minimal genome than mindless natural processes.
http://www.newscholars.com/papers/ID%20Web%20Article.pdf

Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009.
Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. If you look at the chemistry, it gets even worse -- almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions -- they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail.
http://www.fourmilab.ch/documents/reading_list/indices/book_726.html

etc., etc.

The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009
To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
http://www.mdpi.com/1422-0067/10/1/247/pdf

Can We Falsify Any Of The Following Null Hypotheses (For Information Generation)?
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
http://mdpi.com/1422-0067/10/1/247/ag

Why the Quantum? It from Bit? A Participatory Universe?
Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of the gospel according to John: "In the beginning was the Word."
Anton Zeilinger - a leading expert in quantum teleportation
http://www.metanexus.net/Magazine/ArticleDetail/tabid/68/id/8638/Default.aspx
bornagain77
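As a quick numeric sanity check on the "about 500 bits" figure quoted in the Signature in the Cell review above, here is a minimal Python sketch. The particle count, state-change rate and time span are the round figures commonly used for this sort of bound; they are assumptions of this sketch, not numbers taken from the excerpt itself.

```python
# Rough check of the "about 500 bits" claim quoted above.
# All three inputs are assumed round figures, not values from the excerpt.
import math

particles = 1e80              # elementary particles in the observable universe (assumed)
state_changes_per_sec = 1e45  # roughly one change per Planck time (assumed)
seconds = 1e25                # a generous upper bound on available time (assumed)

total_events = particles * state_changes_per_sec * seconds  # ~10^150 elementary "trials"
print(f"~{math.log2(total_events):.0f} bits")               # ~498, i.e. roughly 500 bits
```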
June 30, 2011 at 03:00 AM PDT
Mung,
First, there are three stages to the EF.
It seems to me that much of the argument about the EF could be bypassed if you were simply to give an example of it in use. You obviously understand it in great detail. If you were to give a worked example for, say, anything at all biological, I imagine that would go a long way to clarifying the questions that Elizabeth has. Darts and coin tosses are all very well, but I understood the claim that design has been identified in biological systems to be founded upon the EF, yet I've actually never seen an example of the EF as it relates to a biological system. Biological systems are a little messier than easily measured coin tosses, and I'm fascinated to see how it's done. And darts are not biological! If such a biological example could be laid out on this thread, that would be very illustrative. Mung, up for it?WilliamRoache
June 30, 2011 at 01:13 AM PDT
Mung, I was not quoting from the paper. I read the paper, and the equations, and deduced the null. It's one of the things I do. You sometimes have to, when you read scientific papers, where the hypotheses are not always expressed in so many words and you have to figure them out from the math.

Dembski doesn't talk about a two-stage process in that paper, and his null seems to be dubbed "Chance". So I guess I could have called his Alternative Hypothesis "Not Chance". But that would have been misleading, given that elsewhere he grants "Necessity" as an alternative to Chance, and he is not putting "Necessity" in the Rejection Region. So let me be as neutral as I can, and say that Dembski's H1 is "the hypothesis that Dembski considers supported when a pattern falls in the rejection region", and Dembski's H0 is "the hypothesis that Dembski considers a sufficient explanation for the pattern if the pattern falls outside the rejection region". However, as the hypothesis that Dembski considers supported if a pattern falls in the rejection region is Design, we can, by simple substitution, conclude that Design is H1 and no-Design is H0.

Mung @ 92: I suggest you stop trying to anticipate "the way [my] argument [is] framing up". It's causing you to see spooks behind every bush. It's also stopping you reading the actual words I write! These are simply not controversial (and you would, I'm sure, readily agree with them if you were not scared I was going to pull a "Gotcha!" with your agreement!)

No, it is not "in doubt" whether a null is being specified. Dembski talks at length about how to specify the distribution under the null. If that isn't specifying the null, I don't know what is. And if it isn't, then he'd better go back and re-write his paper, because you can't specify a "rejection region" if you don't have a null to "reject"! And it is true that in null hypothesis testing you actually only have to specify one hypothesis, because the other is, by default, simply that the first is false. So Fisher is correct. But it remains worth specifying both hypotheses clearly, because you want to make sure that the inference you draw if you reject the null is the one you think you are drawing.
Dembski doesn’t even try to specify the null hypothesis.
Of course he does.
Dembski doesn’t even try to specify the alternate hypothesis.
Of course he does. The entire paper is about specifying the null, and defining the rejection region.
Or if he does it’s in ways much more subtle than you have yet to acknowledge.
Oh, it's subtle. As kf says, it's "Fisher on steroids" - not just any old piece of skirt (heh). But in terms of expressing it in words, since Dembski is clear about the inference he draws if a pattern falls in the rejection region ("Design"), the word description of the null is also clear ("no-Design"). What is subtle is not the names of the hypotheses, but the computation of the expected distribution under the null. Hence all the fancy math.

However, I will give you a heads-up on my "argument": my problem with Dembski's paper is that, in fact, he is not entitled to draw the inference "Design" from a pattern that falls in the rejection region. In other words, oddly, I agree with you: all his paper does is set up a null distribution and a rejection region. Dembski does not, in fact, spell out very clearly what is rejected if we reject the null. And I think he is wrong to reject "no-Design", because I don't actually think the pdf he computes for his null is "no-Design". I think it's something else.

But let us first agree that Dembski computes a pdf of a null, and concludes that if a pattern falls in the rejection region, he considers his hypothesis supported. Yes?Elizabeth Liddle
June 30, 2011 at 12:33 AM PDT
Mung: While there are detailed debates over Fisher, Neyman, Pearson and Bayes, there is a general practice of hypothesis testing on distributions that essentially boils down to looking at whether a sample comes up in the tails of a distribution, whether normal, chi-square, etc. I think it is fair comment to observe:

1: Since the presence of these random-variable distributions implies high contingency, we are already applying one of the design filter criteria implicitly.

2: The sort of far-skirt rejection regions being used are an example of a separately specifiable, relatively rare zone in a distribution that, on the presumed search scope [think of my darts and charts example], you should not expect to land in with a given sample, to 95 or 99 or 99.9% confidence, etc.

3: So, there is a conceptual link between the hypothesis testing that is commonplace practice in real-world investigations and the explanatory-filter type of inference.

4: Indeed, in the design-by-elimination paper of 2005, Dembski spoke of how the EF approach offered to help firm up some of the fuzzier, pre-theoretic concepts involved in hypothesis testing:
let's begin with a reality check. Often when the Bayesian literature tries to justify Bayesian methods against Fisherian methods, authors are quick to note that Fisherian methods dominate the scientific world. For instance, Richard Royall (who strictly speaking is a likelihood theorist rather than a Bayesian -- the distinction is not crucial to this discussion) writes: "Statistical hypothesis tests, as they are most commonly used in analyzing and reporting the results of scientific studies, do not proceed ... with a choice between two [or more] specified hypotheses being made ... [but follow] a more common procedure...." (Statistical Evidence: A Likelihood Paradigm, Chapman & Hall, 1997.) Royall then outlines that common procedure, which requires specifying a single chance hypothesis, using a test-statistic to identify a rejection region, checking whether the probability of that rejection region under the chance hypothesis falls below a given significance level, determining whether a sample (the data) falls within that rejection region, and if so rejecting the chance hypothesis. In other words, the sciences look to Ronald Fisher and not Thomas Bayes for their statistical methodology. Howson and Urbach, in Scientific Reasoning: The Bayesian Approach, likewise admit the underwhelming popularity of Bayesian methods among working scientists . . . .

Ronald Fisher, in formulating its theoretical underpinnings, left something to be desired. There are three main worries: (1) How does one make precise what it means for a rejection region to have "sufficiently small" probability with respect to a chance hypothesis? (2) How does one characterize rejection regions so that a chance hypothesis doesn't automatically get rejected in case it actually is operating? (3) Why should a sample that falls in a rejection region count as evidence against a chance hypothesis? . . . .

The strength of the evidence against a chance hypothesis when a sample falls within a rejection region therefore depends on how many samples are taken or might have been taken. These samples constitute what I call replicational resources. The more such samples, the greater the replicational resources. Significance levels therefore need to factor in replicational resources if samples that match these levels are to count as evidence against a chance hypothesis. But that's not enough. In addition to factoring in replicational resources, significance levels also need to factor in what I call specificational resources. The rejection region on which we've been focusing specified ten heads in a row. But surely if samples that fall within this rejection region could count as evidence against the coin being fair, then samples that fall within other rejection regions must likewise count as evidence against the coin being fair. For instance, consider the rejection region that specifies ten tails in a row . . . . But if that is the case, then what's to prevent the entire range of possible coin tosses from being swallowed up by rejection regions so that regardless what sequence of coin tosses is observed, it always ends up falling in some rejection region and therefore counting as evidence against the coin being fair? . . . . The way around this concern is to limit rejection regions to those that can be characterized by low complexity patterns (such a limitation has in fact been implicit when Fisherian methods are employed in practice).
Rejection regions, and specifications more generally, correspond to events and therefore have an associated probability or probabilistic complexity. But rejection regions are also patterns and as such have an associated complexity that measures the degree of complication of the patterns, or what I call its specificational complexity. Typically this form of complexity corresponds to a Kolmogorov compressibility measure or minimum description length . . . Note, specificational complexity arises very naturally -- it is not an artificial or ad hoc construct designed simply to shore up the Fisherian approach. Rather, it has been implicit right along, enabling Fisher's approach to flourish despite the inadequate theoretical underpinnings that Fisher provided for it. Replicational and specificational resources together constitute what I call probabilistic resources. Probabilistic resources resolve the first two worries raised above concerning Fisher's approach to statistical reasoning. Specifically, probabilistic resources enable us to set rationally justified significance levels, and they constrain the number of specifications, thereby preventing chance hypotheses from getting eliminated willy-nilly . . . .

Once one allows that the Fisherian approach is logically coherent and that one can eliminate chance hypotheses individually simply by checking whether samples fall within suitable rejection regions (or, more generally, outcomes match suitable specifications), then it is a simple matter to extend this reasoning to entire families of chance hypotheses, perform an eliminative induction (see chapter 31), and thereby eliminate all relevant chance hypotheses that might explain a sample. And from there it is but a small step to infer design . . . .

Does it matter to your rejection of this chance hypothesis whether you've formulated an alternative hypothesis? I submit it does not. To see this, ask yourself when do you start looking for alternative hypotheses in such scenarios. The answer is, precisely when a wildly improbable event like a thousand heads in a row occurs. So, it's not that you started out comparing two hypotheses, but rather that you started out with a single hypothesis, which, when it became problematic on account of a wild improbability (itself suggesting that Fisherian significance testing lurks here in the background), you then tacitly rejected it by inventing an alternative hypothesis. The alternative hypothesis in such scenarios is entirely ex post facto. It is invented merely to keep alive the Bayesian fiction that all statistical reasoning must be comparative . . .
5: In short, once we move beyond the impressive algebraic apparatus to the practical applications of Bayesian-type reasoning, we begin to see that it is not so neat, sweet and rigorous after all; indeed, the same sort of thinking that Bayesians like to criticise slips right back in through the back door, unbeknownst.

6: In any case, the basic ideas shown in the darts and charts exercise, once we factor in the reasonableness of the independent specification of zones of interest, are plainly sufficiently well warranted not to be so easily brushed aside as some suggest.

7: And, moving beyond a probability-calculation-based approach, once we look at config spaces and the scope of search resources available in the solar system or observed cosmos -- taking a leaf from our thermodynamics notes -- we have a reasonable criterion for scope of search, and for identifying in principle what it would have to mean for something to come from a specific, complex and narrow enough zone of interest.

8: In particular, 500 bits specifies a space of 48 orders of magnitude more possibilities than the P-time Q-states of the 10^57 or so atoms in our solar system, where it takes 10^30 P-times to go through the fastest type of chemical reactions.

9: Similarly, 1,000 bits is 150 orders of magnitude more than the states for the 10^80 or so atoms of the observed cosmos.

10: In short, on the available resources the possibilities cannot be sufficiently explored on those ambits to be sufficiently different from no search at all, i.e. to give any credible possibility of a random walk stumbling on any reasonably specific zone of interest.

11: And yet, 72 or 143 ASCII characters, respectively, are a very small amount of space indeed in which to specify a complex functional organisation or coded informational instructions for such a system. The infinite monkeys challenge returns with a vengeance. Indeed, the simplest credible C-chemistry, cell-based life forms will be of order 100 k bits and up, and novel body plans for complex organisms with organs, etc. will be of order 10 - 100+ Mn bits.

12: With a challenge like that, the proper burden of warrant is on those who claim that chance variations -- whether in still-warm ponds or in living organisms, culled out by trial and error with success rewarded by survival and propagation into the future -- can do the job; they must empirically show that their models, theories and speculations not only could work on paper but do work on the ground. That burden has not been met.

GEM of TKI

PS: I have alerted first-level responders, and hope that onward MF et al. will begin to realise what sort of bully-boy wolf-pack factions they are harbouring, and will police themselves. If I see signs of anything more serious than what has already gone on, or any further manifestations, I think that this will become first a web-abuse and then a police matter, perhaps an international police matter. (I suspect the cyber-bullies involved do not realise how stringent the applicable jurisdictions are.
But already, such fascistic thought-police bully-boys have underscored the relevance of the observation that evolutionary materialism is amoral/nihilistic and the effective "morality" therefore comes down to you are "free" to do whatever you think you can get away with; we see here the way that the IS-OUGHT gap inherent in such atheism benumbs the conscience and blinds the mind to just how wrongful and destructive one's behaviour is, after all you imagine yourself to be of the superior elites who by right of tooth and claw can do anything they can get away with. That's just what led to the social darwinist and atheistical holocausts of the past century, and silence in the face of such first signs is enabling behaviour, so do not imagine that being genteel is enough when blatant evil is afoot.)kairosfocus
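For readers who want to check the scale comparison in points 8 and 9 above, here is a rough sketch. The atom counts are the commenter's round figures; the per-second state-change rate and time span are assumed round figures of my own, so the printed "orders of magnitude" is indicative only and will shift with different rate and lifespan choices.

```python
# A rough sketch of the comparison in points 8-9 above: how many orders of
# magnitude a 500-bit (or 1,000-bit) configuration space exceeds an estimate
# of the states available to a blind search. The 10^45 changes/s and 10^17 s
# figures are assumptions of this sketch, not numbers from the comment.
import math

def excess_orders_of_magnitude(bits, atoms, changes_per_second=1e45, seconds=1e17):
    """log10(2^bits) minus log10(total atom-state changes available)."""
    log10_space = bits * math.log10(2)
    log10_states = math.log10(atoms) + math.log10(changes_per_second) + math.log10(seconds)
    return log10_space - log10_states

print(excess_orders_of_magnitude(500, atoms=1e57))    # solar system: roughly 30+ orders
print(excess_orders_of_magnitude(1000, atoms=1e80))   # observed cosmos: roughly 160 orders
```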
June 30, 2011 at 12:20 AM PDT
Elizabeth Liddle:
Well, how you specify your null is critical to the validity of your hypothesis testing.
Whether a null is being specified at all is in doubt. "This first step contrasts sharply with Fisher's total neglect of an alternative hypothesis." Why would I think it's any different for Dembski? Why would Dembski all of a sudden be trying to specify a null and an alternate? Did he turn his back on Fisher? Why do you think Dembski would agree with you that "how you specify your null is critical to the validity of your hypothesis testing" with regard to his work?

Here's how I see your argument framing up: how you specify your null is critical to the validity of your hypothesis testing; Dembski fails to properly specify the null; therefore the ID argument is not valid. Well, I don't accept your first premise. Or maybe your second. Who knows. We'll work it out. Maybe that's not even where you're going.

But here's what I say: Dembski doesn't even try to specify the null hypothesis. Dembski doesn't even try to specify the alternate hypothesis. Or if he does, it's in ways much more subtle than you have yet to acknowledge.Mung
June 29, 2011 at 06:43 PM PDT
Hi kairosfocus, I regret to hear that your family is being harassed. There may be laws in place to deal with the evildoers. Even more so if borders are involved. Since I have my doubts that EL and I will come to a resolution, aside from direct intervention by the designer himself (aka William Dembski), I'm taking a tangential course. If we had to say what the null was for the chi metric, what would it sound like? Surely it would not sound like "no design" or "not designed." So I'm trying to work my way towards putting Dembski's mathematical CSI measure (or yours) into English and then stating the negation of it. Something with a bit more meat on it than "no CSI" - lol. But maybe that's enough. The null is not "no design" but rather "no complex specified information." What are the factors we need? Semiotic agents, replicational resources, a relevant chance hypothesis (H), a target T, etc. Want to help me deconstruct it? I'm trying to come up with a way to move things along.Mung
June 29, 2011 at 06:30 PM PDT
So, according to this paper the null is “no Design”.
Provide the quote, from the paper, please.Mung
June 29, 2011 at 06:08 PM PDT
And, with two successive inferences made on different criteria. First necessity vs choice and/or chance, then chance vs choice; all, per aspect.
Yes indeed.Elizabeth Liddle
June 29, 2011 at 04:14 PM PDT
PS: I just saw some pretty nasty spam in my personal blogs, some of it trying to attack family members who have nothing to do with UD or online debates. That is a measure of what sort of amoral nihilism and ruthless factionism -- as Plato warned against -- we are dealing with. --> MF, you need to tell those who hang around with you to back off.kairosfocus
June 29, 2011 at 04:10 PM PDT
And, with two successive inferences made on different criteria. First necessity vs choice and/or chance, then chance vs choice; all, per aspect.kairosfocus
June 29, 2011 at 04:08 PM PDT
Fisher on steroids.
heh. Nice one :)Elizabeth Liddle
June 29, 2011 at 04:06 PM PDT
Fisher on steroids.kairosfocus
June 29, 2011 at 04:04 PM PDT
Mung:
So according to Dembski we need a pattern, but not just any kind of pattern. If there were a “null” hypothesis, wouldn’t it be “no detectable specification“?
Well, how you specify your null is critical to the validity of your hypothesis testing. Much of Dembski's paper is devoted to how best to specify the probability density function (of the expected data under the null), and how to decide on an appropriate rejection region (i.e. how to decide on "alpha").

The interesting thing about the CSI concept is that it incorporates its own alpha. It isn't the hypothesis as such; it's what you call a pattern that falls in a special rejection region, one so improbable under the null of "no Design" that we can declare it not possible within the probability resources of the universe. So, according to this paper the null is "no Design". Where specification comes in is in calculating the pdf of patterns under the null. It's a 2D pdf though, because you have two axes - complexity along one, and specificity along the other. So we have a rejection "volume" rather than a rejection "region". The "rejection volume" is the tiny corner of the "skirt", as kairosfocus put it, where not only is complexity high (lots of bits), but specificity is high too (the patterns belong to a small subset of similarly compressible patterns).

So yes, I agree with kf that it's a lot more complicated than your common-or-garden 1D pdf with an alpha of .05, the workhorse of the lab. A thoroughbred, maybe :) But it's still Fisherian (as Dembski says), and it still involves defining a rejection region under the null hypothesis that your H1 is not true.Elizabeth Liddle
June 29, 2011 at 03:56 PM PDT
Mung: Which, per the log reduction, gets us to the simplified case: Chi_500 = I*S - 500, in bits beyond a solar-system threshold, where I is an information measure and S is the dummy variable on specificity. 1,000 coins tossed at random will have a high Shannon-Hartley info metric, but no specificity, so I*S = 0. 1,000 coins set out in an ASCII code pattern will have a lower I value due to redundancy, but S is 1, and Chi_500 will be exceeded. You are maximally unlikely to see 1,000 coins spelling out a coherent message in English in ASCII by chance, but such a pattern will not at all be unlikely to have been caused by an intelligence. Or equivalently, sufficiently long code strings in this thread show the sign of FSCI and are most reasonably explained on design. We can be highly confident that, on the age of our solar system, such a coin string or its equivalent would never happen even once, even if the solar system were converted into coins on tables being tossed for its lifespan. I think a specific case like this is much clearer and more specific on what we mean. GEM of TKIkairosfocus
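A minimal sketch of the reduced metric as stated above, with illustrative information values plugged in; the ~700-bit figure for the redundancy-reduced ASCII case is an assumption of this sketch, not a number from the comment.

```python
# A minimal sketch of the reduced metric quoted above: Chi_500 = I*S - 500,
# where I is an information measure in bits and S is a dummy variable
# (1 if the pattern is independently specified/functional, else 0).
def chi_500(info_bits: float, specified: bool) -> float:
    """Bits beyond the 500-bit solar-system threshold (negative = below it)."""
    S = 1 if specified else 0
    return info_bits * S - 500

# 1,000 coins tossed at random: ~1,000 bits of Shannon information, no
# independent specification, so S = 0 and the metric stays below threshold.
print(chi_500(1000, specified=False))   # -500

# 1,000 coins laid out as ASCII text (the comment's example): assume a
# redundancy-reduced estimate of ~700 bits (illustrative figure only),
# but S = 1, so the 500-bit threshold is exceeded.
print(chi_500(700, specified=True))     # 200
```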
June 29, 2011 at 03:33 PM PDT
Dr Liddle: The design inference in general is a lot more complex than the sort of "it's in the far-skirt zone" inference that is common in at least simple statistical testing cases. The first-stage decision on whether the population of samples shows a natural regularity, which can then be traced to a suspected mechanical necessity driving it, is itself often a complex process. The observation that instead we have high contingency on similar start points raises the issue of what explains the contingency. And the note on chance or choice leads to the exploration of searches in configuration spaces, and of when we are dealing with inadequate resources to catch a zone of interest if relative statistical weights of clusters of configs are driving the outcome.

The default to chance is really a way of saying that there is no good reason to infer to choice, given the available resources and the credible capacities of chance. That's an inference to best explanation on warrant, with plausibility tipping this way or that, pivoting on complexity and specificity. The decision that if this is a lottery it is unwinnable on clean chance, so the outcome is rigged, brings to bear factors like the quantum-state resources of our solar system or even our whole cosmos, and its credible thermodynamic lifespan. These define an upper scope of search to compare to the gamut of a config space. Notice, this is moving AWAY from a probability-type estimate, to a search-space scope issue. If there is not a credible scope of search, and you are in a narrow zone of interest, the analysis points to being there by choice as now the more plausible explanation. And then this is backed up by empirical testing on the credibility of particular signs of causes. As in, use of symbolic codes with rules of meaning and vocabularies, the presence of multi-part functionality that requires well-matched parts, beyond an implied string length arrived at by identifying the chain of yes/no decisions needed to specify the object, etc.

In short, the sort of "did we arrive at the number of women in this firm by accident or by discrimination" decision is at best a restricted and simple -- though more familiar -- case. (For one, there is a presumption of high contingency and lack of an explaining "law"; what if the problem is that women on average lack the upper-body strength to succeed at this task, or have different career interests, etc.?) The real deal has in it much more sophisticated considerations, and its natural home is actually the underlying principles of statistical thermodynamics as used to ground the second law, which is now increasingly accepted as bleeding over fairly directly into information theory. Problem is, this jumps from the boiling water into the blazing fire below. GEM of TKIkairosfocus
June 29, 2011 at 03:24 PM PDT
William A. Dembski:
Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that χ = –log2[10^120 · φS(T) · P(T|H)] > 1, we therefore define specifications as any patterns T that satisfy this inequality. In other words, specifications are those patterns whose specified complexity is strictly greater than 1.
Mung
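For concreteness, here is a toy evaluation of the quoted inequality in Python, done in log space so the tiny P(T|H) does not underflow; the φS(T) and P(T|H) values used below are illustrative assumptions only, not figures from the paper.

```python
# Toy evaluation of the quoted inequality: chi = -log2(10^120 * phi_S(T) * P(T|H)) > 1,
# computed in log space to avoid floating-point underflow for very small P(T|H).
import math

LOG2_10 = math.log2(10)

def chi(phi_s: float, log2_p_t_given_h: float) -> float:
    """Specified complexity per the quoted definition, with P(T|H) supplied as log2."""
    return -(120 * LOG2_10 + math.log2(phi_s) + log2_p_t_given_h)

# Illustrative case: phi_S(T) = 1e5 comparably simple patterns, P(T|H) = 2^-500.
value = chi(phi_s=1e5, log2_p_t_given_h=-500)
print(value > 1, round(value, 1))   # True, ~84.8: the pattern would count as a specification
```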
June 29, 2011 at 03:12 PM PDT
Well, I could see why you might. I did quote your site, lol.Mung
June 29, 2011 at 02:58 PM PDT
OK, we are slowly reducing the blue water between us, I guess. Kairosfocus: yes, I understand that there is a two-stage rejection process. As long as we agree on that, it is fine. The normal terminology is to call the hypothesis that is put up for rejection the "null". But if you don't like the term, fine; we can call it something else. Let's use the term H0 for what I would call "the null" and H1 for "the alternative".

The important thing is to recognise, and I think we all do, that in frequentist hypothesis testing (in other words, where you plot a probability distribution function based on a frequency histogram) you test one hypothesis against a second, but asymmetrically. The test is asymmetrical because you plot the pdf of the data under the first hypothesis, and note where the "rejection region" is for that hypothesis. If your data fall in the rejection region, you consider your second hypothesis "supported" and your first "rejected". However, if the data do NOT fall in the rejection region, you do not thereby "reject" the second hypothesis; you merely "retain" the first hypothesis as viable. The first hypothesis - the one you plot the pdf of - is usually denoted H0, and the second H1.

So in any frequentist hypothesis test there has to be a strategic decision as to which hypothesis is going to be H1 and which H0. It's usually obvious which way round it should go. And in none of the ID tests described in this thread is H0 Design. Do we all agree on this?Elizabeth Liddle
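To make the asymmetry concrete, here is a minimal sketch using the standard textbook coin-toss setting (my own example, not one from the thread): H0 is the hypothesis whose distribution is plotted, a rejection region is fixed by alpha, and the data either land in it (H0 rejected, H1 "supported") or they do not (H0 merely retained).

```python
# Minimal sketch of the asymmetric frequentist test described above,
# assuming a fair-coin H0, a one-tailed test and alpha = 0.05.
from math import comb

def p_value_at_least(heads: int, tosses: int, p: float = 0.5) -> float:
    """P(X >= heads) under H0, where X ~ Binomial(tosses, p)."""
    return sum(comb(tosses, k) * p**k * (1 - p)**(tosses - k)
               for k in range(heads, tosses + 1))

alpha = 0.05
observed_heads, tosses = 16, 20
p = p_value_at_least(observed_heads, tosses)

if p < alpha:            # data fall in the rejection region of H0's distribution
    print(f"p = {p:.4f}: reject H0, H1 'supported'")
else:                    # otherwise H0 is merely retained, never 'accepted'
    print(f"p = {p:.4f}: retain H0")
```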
June 29, 2011 at 02:56 PM PDT
F/N: Perhaps we could put it this way -- why do lotteries have to be DESIGNED to be winnable? (Hint, if the acceptable target strings are too isolated in the space of possibilities, the available search resources, predictably, would be fruitlessly exhausted; i.e. the random walk search carried out would not sufficiently sample the field of possibilities to have a good enough chance to hit the zone of interest. That is, despite many dismissals, we are back to the good old infinite monkeys challenge.)kairosfocus
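A small sketch of the "unwinnable lottery" point above: for a sparse target in a large configuration space, the chance that a blind search of N samples ever hits the target is at most about N·|T|/|Ω|. The space, target and sample sizes below are illustrative assumptions, not figures from the comment.

```python
# Illustrative sketch: chance of a blind search hitting an isolated target zone.
# All three sizes below are assumptions chosen for illustration.
def hit_probability_bound(target_size: float, space_size: float, samples: float) -> float:
    """Union bound: P(at least one hit) <= samples * |target| / |space|."""
    return min(1.0, samples * target_size / space_size)

space = 2.0 ** 500      # a 500-bit configuration space
target = 2.0 ** 100     # a very generous target zone (still vanishingly sparse)
samples = 1e102         # an upper-end guess at available search events

print(hit_probability_bound(target, space, samples))  # ~4e-19: effectively unwinnable
```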
June 29, 2011 at 02:53 PM PDT
Mung - I apologise; I should not have got involved with this particular discussion. Right now I don't have anything like the time to do this subject justice - even my reference was a poor one.markf
June 29, 2011 at 01:32 PM PDT
ABSTRACT: Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch. - Specification: The Pattern that Signifies Intelligence
So according to Dembski we need a pattern, but not just any kind of pattern. If there were a "null" hypothesis, wouldn't it be "no detectable specification"?Mung
June 29, 2011 at 12:25 PM PDT
"The inference to the best explanation" corresponds approximately to what others have called "abduction," the method of hypothesis," "hypothetic inference," "the method of elimination," "eliminative induction," and "theoretical inference." I prefer my own terminology because I believe that it avoids most of the misleading suggestions of the alternative terminologies. In making this inference one infers, from the fact that a certain hypothesis would explain the evidence, to the truth of that hypothesis. In general, there will be several hypotheses which might explain the evidence, so one must be able to reject all such alternative hypotheses before one is warranted in making the inference. Thus one infers, from the premise that a given hypothesis would provide a "better" explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true.
http://www.informationphilosopher.com/knowledge/best_explanation.htmlMung
June 29, 2011 at 11:23 AM PDT
F/N: best explanation for our purposes . . .kairosfocus
June 29, 2011 at 09:31 AM PDT
markf:
Do you understand that null hypothesis significance testing is a conceptual nightmare and only hangs on in statistics because of tradition? It is one of Dembski’s biggest mistakes to hitch the design inference to this.
Hi mark. Thanks for weighing in. Please note, for the record, that I have been arguing against this characterization of Dembski and the EF. Elizabeth (and now you as well) has yet to comment on Neyman and Pearson, even though I quoted you:
Dembski describes Fisherian significance testing, but nowadays this is not common practice for hypothesis testing which owes more to Neyman and Pearson — who were strongly opposed to Fisher’s approach.
Which approach is Lizzie following, Neyman and Pearson, or Fisher? Shall we pretend it's not relevant to the current debate? Which approach does Dembski follow? Does Dembski follow Neyman-Pearson? I have yet to find anything he has written framed the way that Lizzie claims. That's why I've started citing him directly. I've found no mention of "not design" as the null hypothesis. I've found no mention of "design" as the alternate hypothesis to the null hypothesis. Feel free to quote Dembski if you think otherwise. From your linked source:
And now Denis (2005) comes along and does a great service by highlighting the under-appreciated fact that Fisher is not responsible for this state of affairs.
On Neyman-Pearson:
While Fisher hypothesized the value of only one parameter, Neyman-Pearson explicitly formulated two rival hypotheses, H(0) and H(A) -- a procedure suggested to them by William S. Gosset (Biography 12.1). This first step contrasts sharply with Fisher's total neglect of an alternative hypothesis. (here)
RegardsMung
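Since the Fisher versus Neyman-Pearson distinction keeps coming up, here is a minimal toy contrast in code (my own illustration, not drawn from either source): Fisher posits a single hypothesised value and reports a tail p-value; Neyman-Pearson states two rival hypotheses up front and compares their likelihoods against a pre-set alpha.

```python
# Toy contrast of the two framings discussed above, using a coin example
# with illustrative numbers (an assumption of this sketch, not from the thread).
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 16   # 16 heads in 20 tosses

# Fisher: a single hypothesised value (p = 0.5), judged by a tail p-value.
fisher_p = sum(binom_pmf(j, n, 0.5) for j in range(k, n + 1))

# Neyman-Pearson: two explicit rival hypotheses, H0: p = 0.5 vs HA: p = 0.8,
# compared via their likelihood ratio (with alpha fixed in advance).
likelihood_ratio = binom_pmf(k, n, 0.8) / binom_pmf(k, n, 0.5)

print(f"Fisher tail p-value under p=0.5: {fisher_p:.4f}")
print(f"N-P likelihood ratio HA/H0:      {likelihood_ratio:.1f}")
```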
June 29, 2011 at 09:31 AM PDT
Dr Liddle: Kindly look back at your remarks above that excited my comment:
An Alternative Hypothesis(H1) can be expressed as the negation of the null, just as H0 can be expressed as the negation of H1. The important thing is that there is no Excluded Middle. That’s why one is always expressed as Not The Other. So we could express the Design Hypothesis as either H0: Not-Design; H1: Design Or we could express it as: H0: Chance; H1: Not Chance. Or even: H0: Chance or Necessity; H1: Neither Chance nor Necessity. It doesn’t matter. A null hypothesis isn’t called “null” because it has a “not” (or a “neither”) in it! And the Alternative Hypothesis can be as vague as “not the null”.
Do you see my concern? There is no grand null there. There is a first default, necessity. On high contingency, it is rejected. On seeing high contingency, the two candidates, chance and choice, are compared on signs. On the strength of the signs, we infer to the default, chance, if the outcome is not sufficiently complex AND specific, and we infer to choice where there is CSI -- not on an arbitrary criterion but on an analysis [zone of interest in a search space, in the context of accessible resources to search it] backed up by empirical warrant. But what the chance default is really saying is the old Scotch verdict: case not proven. The BEST EXPLANATION is chance, but the possibility of design has not actually been eliminated. Which is one of Mung's points.

None of these causal-factor inferences is working off a sample of a population and a simple rejection region as such, though the analysis is related. That is, high contingency vs natural regularity is not directly and simply comparable to where your sample falls on a distribution relative to its tails. Similarly, while there is a zone in common, the issue on the zone is presence in a specifically describable and narrow zone in a config space large enough to swamp accessible resources [on the gamut of the solar system or observed cosmos, etc.], AND an analysis with a base in positive, direct induction from sign and known test cases of the phenomenon, such as FSCI. If anything, Fisherian-type testing under certain circumstances is a special case of the design inference, where the alternative hyp for the circumstances is in effect choice, not chance.

For good reason, I am distinctly uncomfortable with any simplistic conflation of two or more of the three factors; that is why there are three decision nodes in the flowchart. To make it worse, the chart is looking at aspects of a phenomenon, i.e. chance, necessity and choice can all be at work, on different aspects, and will leave different signs that we can identify and trace on a per-aspect basis. As simple a case as a swinging pendulum will show scatter, and that is then analysed as additional effects that are not specifically accounted for and are addressed under the concept of noise. GEM of TKIkairosfocus
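A schematic sketch of the per-aspect, three-node flow described above; the predicates and the 500-bit threshold stand in for the empirical judgements and analysis the comment describes, so this is an outline of the decision logic rather than a working detector.

```python
# Schematic sketch of the three-decision flow described in the comment,
# applied per aspect of a phenomenon. The inputs are placeholders for the
# empirical judgements the commenter describes, not an actual implementation.
def explanatory_filter(high_contingency: bool,
                       info_bits: float,
                       independently_specified: bool,
                       threshold_bits: float = 500) -> str:
    # Node 1: low contingency under similar starting conditions -> mechanical necessity.
    if not high_contingency:
        return "necessity (natural regularity)"
    # Nodes 2-3: high contingency -> chance is the default unless the outcome is
    # both complex (beyond the threshold) and independently specified.
    if independently_specified and info_bits > threshold_bits:
        return "design (choice)"
    return "chance (default; 'case not proven' rather than design positively excluded)"

print(explanatory_filter(high_contingency=False, info_bits=0, independently_specified=False))
print(explanatory_filter(high_contingency=True, info_bits=1000, independently_specified=True))
print(explanatory_filter(high_contingency=True, info_bits=1000, independently_specified=False))
```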
June 29, 2011 at 09:29 AM PDT
You are the most patient person I’ve ever seen on the internet.
Thanks Lizzie!Mung
June 29, 2011 at 09:20 AM PDT
Elizabeth, You are the most patient person I've ever seen on the internet.Prof. FX Gumby
June 29, 2011 at 07:39 AM PDT