Uncommon Descent Serving The Intelligent Design Community

A design inference from tennis: Is the fix in?


Here:

The conspiracy theorists were busy last month when the Cleveland Cavaliers — spurned by LeBron, desperate for some good fortune, represented by an endearing teenager afflicted with a rare disease — landed the top pick in the NBA Draft. It seemed too perfect for some (not least, Minnesota Timberwolves executive David Kahn) but the odds of that happening were 2.8 percent, almost a lock compared to the odds of Isner-Mahut II.

Question: How come it’s legitimate to reason this way in tennis but not in biology? Oh wait, if we start asking those kinds of questions, we’ll be right back in the Middle Ages when they were so ignorant that

Comments
So, here we have, subsumed under the null hypothesis Dembski calls “Chance”, processes that he once referred to as “Necessity”, and which kf would call “low contingency processes”
yes :) The former first stage of the EF. Not to revisit ground already covered, but perhaps you can see now where some of the objections were originating from. And necessity is subsumed under chance. In Debating Design, from 2004, for example, Dembski writes: "To sum up, in order for specified complexity to eliminate chance and detect design, it is not enough that the probability be small with respect to some arbitrarily chosen probability distribution. Rather, it must be small with respect to every probability distribution that might characterize the chance occurrence of the thing in question. If that is the case, then a design inference follows. The use of chance here is very broad and includes anything that can be captured mathematically by a stochastic process. It thus includes deterministic processes whose probabilities all collapse to zero and one (cf. necessities, regularities, and natural laws). It also includes nondeterministic processes, such as evolutionary processes that combine random variation and natural selection. Indeed, chance so construed characterizes all material mechanisms."Mung
July 2, 2011 at 01:42 PM PDT
Mung: This is a case where once information is a possibility, you already are implying high contingency. That is the first node has been passed on the high contingency side. Under rather similar initial conditions many outcomes are possible, tracing to chance and/or choice. Necessity being off the table the options to explain high contingency are chance and choice. Then, the issue of CSI tells the difference, on grounds repeatedly discussed: chance is going to be dominated by the stochastic patters reflective of relative statistical weights of different clusters of microstates. Where also, the sort of things that you get by choice -- like the text string in this post -- are strongly UNREPRESENTATIVE of the overwhelming bulk of the possibilities.. But, as I pointed out above, it is easy to misread this through overlooking -- let's be nice -- the implicit point that if high contingency is on the table, for a given aspect of an object, or process or phenomenon, then mechanical necessity issuing in natural regularity is not a very good explanation. That is a mechanical force of necessity would e.g. mean that objects tend to feel a down-pull of 9.8 N/kg, leading to a certain rate of fall if they are dropped. But, if one is determined not to see the obvious or to make objections, one could probably make the above sound like nonsense [nope, it is just a sketched outline that can be filled in in much more details], and if there is an attempt to cover all bases in a description then one can object that it is convoluted, not simple. To every approach there is an objection, for those determined to object . . . BTW, I addressed this above, giving as well the link to UD WAC 30 that addresses it, top right this and every UD page. GEM of TKIkairosfocus
July 2, 2011 at 12:52 PM PDT
PS: I found the comment by Dembski in which he says: I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection. Hehe, too funny. I was going to post that same quote and the link to it. Do you see where he says they are not mutually exclusive? I think I need to go buy cat food. It's such a nice day here today. I'll try to spend more time in this thread when I get back. I have a lot of catching up to do. You and markf both sort of hit on the same thing: markf: I absolutely agree that the null hypothesis (H0) for Dembski is “chance”. H1 can be expressed as either: * Not chance Or * Design or necessity you: As long as you aren’t worried about Necessity as an non-design alternative to Chance, that’s fine. After all, I don’t think Dembski uses the EF any more, does he? Question: How could something which exhibits a pattern we would attribute to necessity fall into Dembski's rejection region? To put it another way, doesn't the rejection region embody contingency, which is rather the opposite of necessity?Mung
July 2, 2011 at 11:55 AM PDT
Mr Frank: I am glad to see that you asked TWT to leave your blog as a forum; which is news to me. This commenter is responsible for the misbehaviour that I headlined, and the spurt of comments in which this occurred is also connected to another I am for the moment calling Y, who has very unsavoury connexions onwards. I will communicate onwards, and obviously I am not providing details here beyond the summary already given. It does seem that the spurt of abusive commentary has halted for the moment, as of my public notice and warning. This incident should make it very clear to all that "outing" behaviour is not a harmless form of teasing. As the frog in the Caribbean story said to the small boy approaching, stone in hand: "fun fe yuh is death to me!" Good day GEM of TKIkairosfocus
July 2, 2011 at 09:51 AM PDT
KF #151 I am sorry. To be precise I expect I can understand your posts and comments but I find it extremely hard work. No doubt others are prepared to work harder, are cleverer or are more in tune with your style. You may not believe me but that is the truth. I am still unclear as to what you want me to do. If this unsavoury comment came from "The Whole Truth" I asked him or her many weeks ago to stop posting on my blog and he/she did stop. I think you know that. I get the impression that this comment comprises a threat to you and/or your family. This is obviously serious and it is important that as few people as possible read or even know about the comment. May I suggest you minimise the risk by: 1) Deleting the offending comment 2) Banning that person from further comments (I imagine you have done both these) 3) Removing all references to that comment whereever you can, including your post and comments on UD (including this one) 4) Ceasing public discussion of the comment If you wish to take it up further feel free to contact me by personal e-mail (where the comment will get less public exposure). My e-mail address is: mark dot t dot frank at gmail dot com.markf
July 2, 2011 at 08:06 AM PDT
Dr Liddle: Thanks for the advice. And,the cyber-hug. I have had to further counsel my children to watch for stalkers and what to do. Also on online security, with special reference to those ever so prevalent social networking sites. And next week, I will have some unpleasant interviews, on top of everything else that seems to be at crisis stage. But, once someone has crossed that cyberstalking threshold and the stench of porn-skunk is in the air, I will see the battle through. I am publicly sworn at our wedding to defend my wife, and I am duty-bound as father to protect my children. Bydand GEM of TKIkairosfocus
July 2, 2011 at 06:52 AM PDT
Dr Liddle: Pardon, something of graver import came up. The EF and the use of CSI in the sort of reduced form I show are equivalent, once it is understood that the relevance of the concept Information, as measured on or reduced to strings of symbols with multiple alternative values in each place, implies high contingency. If information is a possibility, you have passed the first node; the question now is if the contingency is meaningful/functional, specific and complex in the sense repeatedly discussed. So, on your:
if the CSI is used as the criterion for rejection of “a Chance Hypothesis” Dembski appears to be including “Necessity” in that portmanteau null
. . . in actuality, the very fact that information is an open question means that necessity has been broken as first default. This is also not a process of simple elimination on a test, as there is an inference on empirically tested reliable sign involved. Similarly, when you say:
the null space for CSI must include both chance events (noise) and Law-like events (sine waves, crystals) etc, as these can all be found somewhere on the SI-KC grid
. . . the problem is that the first stage of analysis turns on something that is common to chance and choice but not necessity, so there cannot properly be any explicit clustering of chance and necessity that carries the import that they have been sliced away with one test. One may IMPLICITLY rule out necessity as information is a possibility, so high contingency is implicated, but that is different. First lo/hi contingency, THEN y/n on specified complexity or a comparable. In this context, the CSI metric therefore implies application of the EF, which is why it can be "dispensed with" in analytical terms. Though as well, it is important to observe the elaboration I supplied, on ASPECTS. For different aspects of an object, process or phenomenon can be traceable to each factor; the overall causal story may be complex. Take a falling die. As a massive and unsupported object, it suffers mechanical necessity of falling under g. It hits, rolls and tumbles. On several samples we find that the distribution is materially different from flat random, though it does vary stochastically as well. Then we look and see it has been subtly loaded, with a tiny dot of lead under the 1 pip. All three factors are involved, each on a different aspect, and the overall causal story has to account for all three. Just ask the Las Vegas Gaming Houses. Next you ask:
would you agree that CSI patterns are those that have high Shannon Information, High Compressiblity, and are extremely improbable under the null of “non-choice” (that seems a good way of phrasing it) given the finite probabilistic resources of our universe?
Shannon information is a bit loaded, as that strictly means average info per symbol, not the Hartley-Shannon metric: I = -log p, or more strictly Ij = log[(a posteriori prob of symbol xj)/(a priori prob of xj)], which reduces to -log(a priori prob of xj) where the a posteriori probability is 1. High compressibility, I take it, stands in for independently and simply describable, i.e. not by copying the same configuration like we have to do with a lottery number that wins. There is no null of non-choice. There is a possibility of necessity ruled out explicitly or implicitly on seeing high contingency. Then, the second default is chance, but that is obviously conditioned on necessity having already been ruled out. With chance as default, this is ruled out on observing that the sort of config, and the set from which it comes per its description, will be so narrow in the scope of possibilities that the resources of either the solar system [our effective cosmos] or the cosmos as a whole as we observe it would be inadequate to mount a significant sample by search. Remember you are trying to capture something that is extremely UNREPRESENTATIVE of the config space as a whole. If your search is at random or comparable thereto -- see M & D on active information and cost of search -- to credibly have a reasonable chance to catch something like that, your search has to be very extensive relative to the space. Needle in a haystack on steroids. The relevant thresholds are set precisely where such searches on the resources of the solar system or cosmos as a whole are grossly inadequate by a conservative margin. And I favour 1,000 bits as threshold because it makes the point so plain. Of course, I am aware you are real, as real as I am, real enough to be plagued by cyberstalkers. Back on topic, oops:
CSI seems to be a one stage inference, which is quite neat
. . . is an error. The use of CSI is based on implicitly ruling out necessity, but in analysing why it works that is needed to be understood. Also,
you’d certainly need decision trees, IMO, to construct that null space for real patterns, because you’d have to figure out just how deeply nested your contingencies could be under that null. As we travel “north east” on my plot, the depth of contingency must increase
A d-tree structure is not the right one here. What is happening - as the loaded die story shows -- is a complex inference to best explanation in light of expertise, not just a decision-making that comes out at an overall value. The flowchart, understood as an iterative exercise that iteratively explores the material aspects of significance of an object, system, process or phenomenon, is a better approach. That is, this is actually an exercise in scientific methodologies across the three major categories of causal factors, with relevant adjustments as we go. Hope this helps, GEM of TKIkairosfocus
July 2, 2011 at 06:45 AM PDT
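For readers who want the arithmetic behind the I = -log p measure invoked above, here is a minimal sketch using the thread's running coin example. Base-2 logs are used so the answers come out in bits, matching the thread's usage; the function name is illustrative only.

```python
# A minimal sketch of the information measure I = -log2(p), applied to the
# thread's coin example. The function name is illustrative only.

import math

def info_bits(p: float) -> float:
    """Information, in bits, of an outcome whose probability is p."""
    return -math.log2(p)

print(info_bits(0.5))           # one fair coin toss: 1.0 bit
print(info_bits(0.5 ** 1000))   # one specific sequence of 1,000 fair tosses: 1000.0 bits,
                                # which is the 1,000-bit threshold favoured above
```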
Kf:
Appreciated, and I certainly hope this will be reasonably resolved shortly. I hope your own situation was positively resolved.
Well, not exactly resolved, but it blew over. But there was a time when googling my name threw up all kinds of derogatory claims about me, and I was embarrassed (to say the least) about what anyone who knew me would think (of course they were all false). The worst was when someone sent me a fat letter full of abuse, and the envelope was so stuffed that it was intercepted by the police who checked it for anthrax (it was, fortunately, just pages and pages of handwritten rant). That's when things got scary. But even google forgets! Do what you have to do, and try not to let it distress you (easier said than done - I lost both weight - which I could afford! - and sleep - which I couldn't). It passes. :hug: LizzieElizabeth Liddle
July 2, 2011 at 05:49 AM PDT
Mr Mark F: Pardon, given the gravity of what is happening, I MUST BE DIRECT: kindly, drop the pretence of "misunderstanding." You are qualified in philosophy and have been in a major business, by your own admission; presumably for many years. So, we can presume a reasonable level of understanding. Beyond this, I have had more than enough responses to the post to know the message is quite clear -- right from the headline -- to people or reasonable intelligence, and the underlying event is an outrage that needs to be addressed on the substance, not on clever rhetorical side tracks. Nor, am I interested in a debate with you or anyone else; but in corrective action on your part, after you have explained yourself for harbouring the sort of character that has stooped to the behaviour I had to headline. In other words [and as I warned against on the record both here and at your blog], your blog has a cyber stalker who has been entertained there and has now gone on to even more extreme behaviours. A man who you have harboured at your blog site, who goes by the monicker The Whole Truth, who set up an attack blog, has indulged himself in outing behaviour that targets my wife and children. Who have nothing whatsoever to do with design theory, or debates over worldviews or related theology etc etc. But, there has been an attempt to out my wife by name and our children by allusion; echoing the notorious mafioso tactic of taking hostages by threat. That is a direct threat, and the linked factor of repeated unwholesome sexual references multiplies my reason for concern. Overnight, I have learned that the second participant in the wave of nasty outing themed spam, is involved in questionable sexually themed matters related to porn. That confirms the reasons for my concern that I am dealing with cyberstalking of horribly familiar patterns. Remember, too, I do not know whether these have confederates or fellow travellers here. That is a direct context in which next week, having first served notice of warning, I will be speaking with the local prosecutors' office and the local police. So, the time for word games is over. Please, explain yourself, and take corrective action. BYDAND GEM of TKIkairosfocus
July 2, 2011 at 05:47 AM PDT
Erratum (me at 147):
So what I have there (courtesy of Dembski) is simply a continuous version of the filter, where, as you travel north-east (increasing both SI and KC together), you require processes of ever-increasingly deeply nested contingency layers to produce.
Elizabeth Liddle
July 2, 2011 at 05:26 AM PDT
Dr Liddle: Appreciated, and I certainly hope this will be reasonably resolved shortly. I hope your own situation was positively resolved. What this started with was abusive commentary by TFT at MF's blog, hence my repeated request that MF explain himself. I then received was it blog comments at my personal blog by TFT announcing his own new attack blog, in the most vulgar and malicious terms, accusing me of being involved in a homosexual circle with UD's staff. It went downhill from there, and of course along the way some seemed to think it their right to do web searches, dig up my name and plaster it all over derogatory and vulgar comments. My objection to the use of my name, in recent years has been that this leads to spam waves in my email. Unfortunately, I must now add that it has led to outright cyberstalking. The headline for the already linked post in reply is an excerpt from a submitted KF blog posting that has to be seen to be believed, and this is a part of a cluster of about 20 abusive submitted comments. (At least, since my post yesterday,t he spate of abusive commentary seems to have stopped at least for the moment. At least, in my in-box and spam folder.) Now, my wife has almost no internet presence, and my wife and our children are utterly irrelevant to any debates over the explanatory filter the mathematics of CSI, or linked worldview and theological issues etc. So, you can understand my shock to see in the midst of pretty nasty commentary on my theology --
BTW, TFT apparently is unaware of what the three-way debate on universalism, inclusivism and exclusivism is in soteriology in systematic theology or the role of Paul in that debate as the one who if anything opens up the room for the inclusivist view, which I hold in a moderate form --
. . . TFT let out the Mafioso tactic, snide threat of a "greeting" to my wife and children, with an attempt to name my wife. Now, that is a threat, as anyone who has watched the old Mafia movies would know. Worse, the set of comments had in them repeated extremely unhealthy remarks on sexual matters, as already indicated and other things of similar order. There were two persons involved, one we are calling TFT, and let us just say person of interest Y. Y's comments plainly fit in with TFTs in timing and substance. Y happened to post his main contribution in the cluster of comments that were captured by Blogger's moderation feature, in response to a post I made on the Internet Porn statistics published by the Pink Cross Foundation. This foundation is a group of former Porn so-called stars, who expose the abuses and destructive nature of the porn industry so called. The picture they paint is unquestionably accurate and utterly horrific; porn is . . . I am at a loss for strong enough words. I have a mother, I had grandmas, I have a wife, I had a mother in law, I have sisters in law, I have second mothers of the heart of several races and nationalities, I have aunts, one semi-official big sister, many other sisters of the heart [I just responded by email to one who just got married], I have a daughter, I have daughters of the heart. I would NEVER want any of these degraded and abused like that. Period. Women and girls are not meat toys to be ogled, prodded, played with, misled, intimidated into all sorts of acts, be-drugged, ripped up physically and emotionally and spiritually for the delectation and profit of drooling demonised dupes. Period. Here is a clip from commenter Y's remark (which he has trumpeted to others elsewhere), in response to PC's observation that the most popular day for viewing web porn -- remember, they are saying this is implicated in 58% of divorces, according to the claim of divorce lawyers -- is Sunday:
If more people spent their Sundays at home watching porn, there'd be less money in the coffers of those houses of hate and ignorance called churches. That could only be good for the world [privacy violation immediately follows]
Now, in trumpeting the post I will not publish at my personal blog, Y gave a link to a sexually themed site he operates (the title is about "seduction . . ."). In following up that link, I came across a site where he advertises "intimate" photography, which on my opening up led with a picture of a young girl in a nudity context, at least this is distorted by the use of a hot tub. But, that is what he puts up in public. This clearly confirms to me that I am right to be seriously concerned about cyberstalking, and that the extensions of privacy violations from myself to an attempt to "out": my family, is a dangerous nuclear threshold escalation. I am taking the steps I indicated, and of course there is more evidence than I have said here. This is a watershed issue, and the time has come for the decent to stand on one side of the divide. the sort of abusive, hostile, privacy violating, vulgar and slanderous commentary routinely tolerated or even encouraged at anti-design sites must stop. Now. And, since this started at MF's blog, he needs to be a part of that stopping. In particular, note, I have no way of knowing if these online thugs have confederates or fellow travellers here. So, I have but little choice, other than to make sure prior complaint is on record,and to initiate investigatory proceedings as appropriate. (I do know that in this jurisdiction, the law embeds a principle that UK law will be applied where there is no local statute, and as I have cited in my onward linked KF blog post, that law is quite stringent. Indeed TFT, Y et al should reflect on the point that in the UK law, harassment aggravated by religious bigotry multiplies the relevant gaol sentence 14 times over, to the equivalent of what in the US would be a serious felony. they have made their anti-religious antipathy plain as a key motivating factor in their misbehaviour.) These men have crossed a nuclear threshold, one that cannot be called back. Bydand GEM of TKIkairosfocus
July 2, 2011 at 05:26 AM PDT
#139 KF I am sorry - I just noticed this. As you know I avoid being drawn into debates with you because I find your posts and comments extremely hard to understand (quite possibly because of my limitations). I am afraid this is also true of the post you linked to. It appears someone has made a very unpleasant and silly comment about you. I am sorry about that. I am unclear who did it, what exactly they did, what it has to do with me, and what you want me to do about it. Please can you reply in concise clear plain English - I am slow of study when it comes to your writing. Txmarkf
July 2, 2011 at 05:10 AM PDT
Hi, kairosfocus:
F/N 2: Dr Liddle, you do not come across as a sock-puppet, which is what Mg turned out to plainly be.
No, I'm not. Elizabeth Liddle is my real name, if you google it, a lot of the hits are me (not the pet sitter and not the deceased!). Quite a lot of first hits on google scholar are me too (not the first one, though).
You do not fit that profile, so the issue is, that there is something that seems to be blocking a meeting of minds. Even, after it SEEMS that minds have met, as the post I just clipped from indicates. For I am not seeing the two stage empirically and analytically referenced decision process that the diagram indicates and as has been repeatedly discussed, but an attempt to collapse two nodes into one, like it is being force-fitted into an alien framework. What is the problem, please, and why is it there?
Well, that's what I'm trying to sort out! Try my posts at 138 and 145, and see if they make sense :)
Is it that the classic inference on being in a far enough skirt to reject the null is a one-stage inference?
Well, CSI seems to be a one stage inference, which is quite neat. But I have no problems with a two-stager.
If that is it, please be assured that the design inference is a case reasoning complex decision node structure not a simple branch; it is in this sense irreducibly complex. (In today’s Object oriented computing world do they teach structured programming structures? Sequences, branches, loops of various kinds, case decision structures, and the like? Is this the problem, that flowcharting has advantages after all that should not be neglected in the rush to pseudocode everything? A picture is worth a thousand words and all that? [BTW, I find the UML diagrams interesting as a newish approach. Anybody out there want to do a UML version of the EF, that us old fogeys cannot easily do?])
Don't know much about UML, but yes, sequences, branches, loops, case decision structures etc are all still there. :) And yes, a picture is worth a thousand words, which is why I tried to paint at least a word picture of the null space in 138. But that doesn't mean it is an alternative to the decision tree stuff - you'd certainly need decision trees, IMO, to construct that null space for real patterns, because you'd have to figure out just how deeply nested your contingencies could be under that null. As we travel "north east" on my plot, the depth of contingency must increase. So what I have there (courtesy of Dembski) is simply a continuous version of the filter, where, as you travel north-west (increasing both SI and KC together), you require processes of ever- increasingly deeply nested contingency layers to produce. And so, if deeply nested contingency is the hall mark of "choice" processes, and "choice" is excluded under the null, if you find more than you'd expect under the null in that North East corner, you can make your Design Inference. I'm pretty sure that is what both you and Dembski are saying! See what you think. Going out on my boat shortly, so I'll catch you later :)Elizabeth Liddle
July 2, 2011 at 05:00 AM PDT
I take it, though, kf, you agree with Dembski when he says that:
Straight CSI is clearer as a criterion for design detection.
If so, would you agree that CSI patterns are those that have high Shannon Information, High Compressiblity, and are extremely improbable under the null of "non-choice" (that seems a good way of phrasing it) given the finite probabilistic resources of our universe?Elizabeth Liddle
July 2, 2011 at 04:42 AM PDT
kf: I apologise, yes it seems I misordered the sequence of the filter. However, the sequence of the nulls was not the issue I was raising. I fully accept that Necessity is the first null to be eliminated in a two stage process. But if the CSI is used as the criterion for rejection of "a Chance Hypothesis" Dembski appears to be including "Necessity" in that portmanteau null. I assume that's why he feels that the separate stages can (not must!) be dispensed with. After all, the null space for CSI must include both chance events (noise) and Law-like events (sine waves, crystals) etc, as these can all be found somewhere on the SI-KC grid.Elizabeth Liddle
July 2, 2011 at 04:35 AM PDT
F/N 2: Dr Liddle, you do not come across as a sock-puppet, which is what Mg turned out to plainly be. His drumbeat repetition of adequately answered-to points was joined to utter unresponsiveness on the merits and unwillingness to do event he most basic courtesies of discussion e.g. in the context of Dr Torley's long and patient explanations. (Anti evo et al, your attempts to turn the issue around would be amusing if they were not pathetic.) You do not fit that profile, so the issue is, that there is something that seems to be blocking a meeting of minds. Even, after it SEEMS that minds have met, as the post I just clipped from indicates. For I am not seeing the two stage empirically and analytically referenced decision process that the diagram indicates and as has been repeatedly discussed, but an attempt to collapse two nodes into one, like it is being force-fitted into an alien framework. What is the problem, please, and why is it there? Is it that the classic inference on being in a far enough skirt to reject the null is a one-stage inference? If that is it, please be assured that the design inference is a case reasoning complex decision node structure not a simple branch; it is in this sense irreducibly complex. (In today's Object oriented computing world do they teach structured programming structures? Sequences, branches, loops of various kinds, case decision structures, and the like? Is this the problem, that flowcharting has advantages after all that should not be neglected in the rush to pseudocode everything? A picture is worth a thousand words and all that? [BTW, I find the UML diagrams interesting as a newish approach. Anybody out there want to do a UML version of the EF, that us old fogeys cannot easily do?])kairosfocus
July 2, 2011 at 04:34 AM PDT
Oh, and kf, I'm not sure what has been going on with regard to the cyberstalking issue, but I just want to make it absolutely clear that you have my total sympathy. I've been on the receiving end of that kind of thing myself, and it is an experience I do not wish to repeat. There is no excuse for that kind of behaviour. It's appalling. I wish you and your family the very best. LizzieElizabeth Liddle
July 2, 2011 at 04:20 AM PDT
F/N: to add braces to belts, let me specify:
FIRST DECISION NODE:
DEFAULT, 1: Mechanical necessity leading to natural regularity, e.g. F = m*a, as in how a dropped heavy object falls at g.
REJECTED: If there is high contingency for the relevant aspect of the object, such as the variety of readings of a common die, from 1 to 6.
REMAINING ALTERNATIVES: Chance or choice, per empirical and analytical grounds.
SECOND DECISION NODE:
DEFAULT, 2: Chance, i.e. stochastic outcomes similar to what a die will tumble to and read, if it is fair.
REJECTED: If we find FSCI or the like, whereby the outcomes are from zones sufficiently isolated and independently specified in the space of possibilities, that chance is much less reasonable -- though strictly logically possible -- than choice. For instance, text in coherent English in this thread is explained on choice not chance.
REMAINING ALTERNATIVE: Such a phenomenon is best explained on choice, not chance.
kairosfocus
July 2, 2011 at 04:18 AM PDT
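Since the two decision nodes are laid out above only in prose, here is a minimal sketch of how such a filter could be coded. The contingency check, the caller-supplied specification predicate, and the 500-bit cutoff are crude stand-ins chosen for illustration; none of them is Dembski's or kairosfocus's operational definition.

```python
# A rough sketch of the two-node filter outlined above. The contingency check,
# the specification predicate, and the 500-bit cutoff are illustrative stand-ins.

import math
import random
from collections import Counter

THRESHOLD_BITS = 500  # the "solar system" threshold used in the thread

def total_info_bits(outcomes):
    """Estimate total information in a sequence from observed symbol frequencies."""
    counts = Counter(outcomes)
    n = len(outcomes)
    per_symbol = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_symbol * n

def explanatory_filter(outcomes, is_specified):
    # Node 1: default to necessity unless the aspect shows high contingency.
    if len(set(outcomes)) <= 1:
        return "necessity"
    # Node 2: default to chance unless the outcome is both specified and complex.
    if is_specified(outcomes) and total_info_bits(outcomes) > THRESHOLD_BITS:
        return "choice (design)"
    return "chance"

# A dropped object giving the same reading every time -> necessity.
print(explanatory_filter([9.8] * 100, is_specified=lambda s: False))
# 1,000 fair coin tosses: highly contingent but unspecified -> chance.
print(explanatory_filter([random.randint(0, 1) for _ in range(1000)],
                         is_specified=lambda s: False))
```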
Dr Liddle: Your just above, at 135 is again a bit disappointing, given the discussions that have already been gone over again and again in recent days. I particularly refer to:
Here are some choices of hypothesis pairs, with their corollaries: 1. Chance is the null and “not Chance” is H1. This is fine, but we can’t therefore infer “design” from a rejection of the null unless we assume that all not-Chance causes must be Design causes. In which case, you could rewrite H1 as “Design”. If you don’t assume this, then fine, but then you can’t infer Design. Could be something different, e.g. “Necessity”. 2. Chance is the null, and “not Chance” is H1. Then, if “Chance is rejected, “Necessity” is the new null and “not Necessity” is H1. And “not Necessity”, and, as the only alternative to Chance and Necessity is Design, you might as will write your H1 as “Design” in the first place. This is the EF. 3. Not Design is the null, and Design is H1. Now you lump Chance and Necessity in together as the null. as being the only two Not-Design possibilities. But they all boil down to the same thing, so pick the one you are happiest with.
I must again point out that -- as the two successive decision nodes in the flowchart shown here emphasise -- the whole EF process begins with NECESSITY as default, contrasted with high contingency. This is as can consistently be seen in both descriptions and diagrams, since the 1990's. This was already pointed out to you, complete with links and clips from Dr Dembski where he said in more or less these words, that the first default is necessity. It is in the context where we see a wide variety of possible and/or observed outcomes under sufficiently close starting conditions, that we see that such high contingency must be explained on chance and/or choice. For, we have already seen that the sign of natural regularity pointing to an underlying law of mechanical necessity like F = m*a is not relevant. Once we are in the regime of high contingency, we then observe certain differentiating signs: chance processes follow blind stochastic patterns such as are modelled on random variables. So, if we see the evidence of such a pattern, one may not safely infer on best explanation to choice not chance. This, even though choice can indeed imitate chance. It is when we find ourselves dealing with an independently specifiable zone in the field of possible configurations, and where at the same time, that set of possibilities is so vast as to swamp the resources of the solar system or the cosmos as a whole, that we pay attention to the contrasting capabilities of choice. As I just again put up, if the first 1,000 ASCII characters of this post were to be seen in a set of coins, then we have strong grounds to infer to choice not chance as best explanation. That is because, even though strictly, chance could toss up any particular outcome, the dominance of relative statistical weights of meaningless clusters of outcomes in light of available resources, once we pass 300 - 1,000 bits worth of configs [10^150 to 10^301], would make it maximally unlikely on the face of it that we would ever observe such a special config by chance. Indeed, this sort of reasoning on relative statistical weights of clusters of microstates is close to the heart of the statistical justification for the second law of thermodynamics. For instance, if you were to see a room in which all the O2 molecules were mysteriously clustered at one end, and a dead man were at the other, dead of asphyxiation, you might not know how tweredun, or whodunit -- could not be a human being under present circumstances -- but you would know that the man was dead by deliberate action. And, this shows by the way just how relevant a design inference is to scientific contexts of thought. So, Dr Liddle, your repeated error is to think of a single inferential decision to be made, not a structured pattern of decisions, where on empirically and analytically warranted signs, we first expect necessity, then on seeing that we find contingency, we then expect chance, and only conclude choice when we find that the pattern of the highly contingent outcome is well fitted to choice and ill fitted to chance. I find it both extremely puzzling and even frustrating to see us going back over this point again and again and again, when the diagram -- as long since drawn to your attention, repeatedly -- is direct and simple. Please explain. GEM of TKI
kairosfocus
July 2, 2011 at 04:10 AM PDT
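As a quick arithmetic check on the figures that recur above and elsewhere in the thread, the bit thresholds convert to configuration counts as follows; this is only the conversion, not a comment on the argument itself.

```python
# Converting bit thresholds to configuration counts: 2^500 is roughly 10^150
# and 2^1000 roughly 10^301, the figures that recur in this thread.

import math

for bits in (300, 500, 1000):
    print(f"2^{bits} ~ 10^{bits * math.log10(2):.1f}")
# 2^300 ~ 10^90.3, 2^500 ~ 10^150.5, 2^1000 ~ 10^301.0
```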
kairosfocus: I am NOT "getting my “understanding” of ID from the sort of critics who hang out at Anti-Evo etc". I'm reading Dembski's papers. And as I've said a few times, I'm not even disagreeing with you. Your inference that I am not posting in good faith is false. I have been absolutely up front about the fact that I don't think Dembski's argument works, but before we discuss that, I want to make sure we are on the same page as to what the argument actually is. Namely: That it is based on Fisherian hypothesis testing. That H1, however we phrase it, is the hypothesis we consider supported if our observed pattern falls in the rejection region under the null. That we can use either a two-stage filter (Chance, and Necessity, in turn, as the null), or subsume the null into a Chance hypothesis (as Dembski does in the paper I just linked to). That the null space has two dimensions - Shannon information, and Kolmogorov Compressiblity. And that if something falls in the skirt tail at the North East corner, where both Shannon Information and Kolmogorov Compressibility are both high, if the probability is low enough (i.e. below an alpha set as a function of the number of events in the universe) we can conclude Design. I got this nowhere except from Dembski's papers. I see nothing in your posts that conflicts with it. It seems to me a perfectly good, at least in principle, way of making an interesting Inference about causality from a pattern. Obviously I am a guest here, and if you do not want me to raise issues that you think have already been addressed, that your prerogative. But right now, the only issue I have raised is simply Mung's methodological parsing of Dembski's methods, not the methods themselves. tbh, I prefer the EF.Elizabeth Liddle
July 2, 2011 at 04:08 AM PDT
Mr Mark F: Pardon an intervention, but I believe in light of what you may read in the linked (starting with the headline, clipped from TFT) and in the overnight addenda at F/N2 and F/N 3, you have some explaining to do in the context of the behaviour of those who have been regular commenters in your blog. This matter is serious enough -- cf my just above to Dr Liddle -- that your habitual tactic of ignoring anything I have to say at UD is not good enough. GEM of TKIkairosfocus
July 2, 2011 at 03:43 AM PDT
Right. I re-read the paper. Well, I seem to have got it right. Dembski does indeed define the pdf as a 2D distribution of patterns under what he calls a "Chance" hypothesis. And we seem to have Shannon Information along one axis and Kolmogorov Compressibility along the other. So, let's take a look at the pdf: Let the east-west axis be the SI axis (east = more Shannon Information): A short string, or a longer string but consisting of only a small number of possible characters, will tend to be low in Shannon Information, whereas a longer string, especially if consisting of a large number of possible characters (e.g. English letters), will be higher in Shannon Information. And let the north-south axis be the compressibility axis (north = more compressible): A string that is easy to describe is highly compressible, while a string whose shortest description is itself is not compressible at all. And on the up-down axis we have frequency (which, later, we can divide by the volume under the curve to give probability). All values are positive. Now there are lots of low SI strings that are highly compressible (sine waves, for example). So we have a high peak up in the north-west corner of the plot. There are also lots of high SI strings that are not compressible at all (white noise, for instance). So we also have a peak at the south-east corner of the plot. However, we have only small numbers of low-SI, low-compressibility patterns, because if the pattern itself doesn't contain much information, then even if it is its own shortest description, that description will be quite short. So we have a low plain, near sea level, in the south-west corner. The interesting part is the north-east corner - here are patterns that have high SI (lots of bits) but are also fairly compressible. They won't be very compressible of course, because they are so rich in SI, so the extreme north-east corner, like the south-west corner, will be pretty well at sea level - near zero numbers of high-SI, highly compressible patterns. So now we have the topography of the pdf under the null. It's a saddle, interestingly, not a bell (that's because the two dimensions are not orthogonal - they are negatively correlated). However, if we take a diagonal section from the south-west corner to the north-east corner, we will in fact see a bell curve (actually, any diagonal SW-NE section will be a bell), and that's the one we are interested in. We are not interested in white noise (south-east) nor in sine waves (north-west). And nor are we actually interested in the very low probability patterns in the south-west, where SI is low and compressibility is low. That's the kind of pattern produced by clumping processes. But as we travel from the south-west in a north-easterly direction, the terrain rises, and we start to encounter some interesting patterns with greater frequency - patterns that have quite a lot of SI, but also quite a lot of compressibility. And these are quite common - snowflakes, vortices, fractals. So, here we have, subsumed under the null hypothesis Dembski calls "Chance", processes that he once referred to as "Necessity", and which kf would call "low contingency processes" - compressibility is high (a simple contingency statement will generate the string) but SI is also high (patterns are large, and may have many different possible features, so there are a lot of bits). But we continue to travel NE. Recall that this null is called "Chance" and that, if it is rejected, we infer "Design".
Dembski's contention is that as we continue to travel north-east under the null landscape of "Chance", the land will start to fall. We will reach a region in which compressibility is high, and SI is high, but there are very few, if any, patterns. But, lo and behold - we find some! First of all we find the Complete Works of Shakespeare. Then we find A Watch on a Heath. Then we find a living cell! These objects shouldn't be here! Under the null, this level of compressibility is not compatible with this level of Shannon Information! Sure, it's not very compressible, but it's a heck of a lot more compressible than we'd expect under the null! Indeed, under the null, the probability of finding such a thing is so low that we should have no more than an even chance of finding just one out of all the patterns in the universe! And here we have lots! Something Fishy Is Going On. Yes?
Elizabeth Liddle
July 2, 2011 at 03:37 AM PDT
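To make the two axes of that description concrete, here is a rough sketch that scores a few sample strings on both: per-character Shannon entropy for the SI axis, and a zlib compression ratio as a crude stand-in for Kolmogorov compressibility. Both proxies are assumptions for illustration only; neither is the measure Dembski actually defines.

```python
# Crude proxies for the two axes described above: per-character Shannon entropy
# for the SI axis, and a zlib compression ratio standing in (very loosely) for
# Kolmogorov compressibility.

import math
import random
import zlib
from collections import Counter

def entropy_bits_per_char(s: bytes) -> float:
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressibility(s: bytes) -> float:
    """1 - compressed/original size: larger means more compressible."""
    return 1 - len(zlib.compress(s, 9)) / len(s)

samples = {
    "white noise": bytes(random.getrandbits(8) for _ in range(4000)),
    "repetitive":  b"ABAB" * 1000,   # sine-wave-like regularity
    "text-ish":    b"the quick brown fox jumps over the lazy dog " * 90,
}

for name, s in samples.items():
    print(f"{name:12s} SI = {entropy_bits_per_char(s):.2f} bits/char,"
          f" compressibility = {compressibility(s):.2f}")
# Expected ordering: noise is high-SI / barely compressible, the repetitive
# string is low-SI / highly compressible, and the text sample sits in between
# on SI (its compressibility is inflated here because the sentence repeats).
```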
Dr Liddle: Pardon an intervention. Perhaps, you need to look at WAC 30, top right this and every UD page; on the EF. You will see that he is in effect saying that the conceptual framework of the EF needed updating (hence the per aspects approach I have used) and that the relevant part of the EF is captured in the CSI concept. That is, once one is dealing with high contingency, addressing the presence of CSI is equivalent for the relevant aspect of an object or process. This is exactly so, and a further update would address the log reduced form of the Chi metric, say: Chi_500 = I*S - 500, bits beyond the solar system threshold. And, for those who have so hastily forgotten that the Durston et al table of values for FSC will fit right into this form and yield 35 values based on information metrics for protein families published in the literature in 2007, let me clip the relevant UD CSI Newsflash post on that:
Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7 . . . .
In short one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA's * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
Or to use the case of 1,000 coins that was already drawn to your attention, a string of such coins tossed at random will have a high I-value on the Hartley-Shannon measure I = -log p, due to the quirks involved. But S = 0, as any value would be acceptable, i.e. it is not specific, coming from a describable target zone T (apart from painting the target around the value ex post facto, which is not an independent description). If on the other hand the same set of coins were to hold the successive bit values for the first 143 or so ASCII characters of this post, then we now have an independent description. And, a very specific set of values indeed, so S = 1. It would have a somewhat lower information-carrying bit value than in the first case, as English text has in it certain redundancies. However that would not make a material difference to the conclusion. Case 2 will pass the 500 bit threshold and would be deemed best explained on design. Indeed, if you were to convert the solar system into coins and tables and flippers and recorders, then try to get to the specific zone of interest as defined, by overwhelming likelihood on exhaustion of the P-time Q-state resources of the about 10^57 atoms involved, you would predictably fail. And yet, by intelligence, I wrote those first 20 or so words in a matter of minutes. This is an example of how the best explanation for FSCI is intelligence. And, it is not so hard to understand, or ill-defined, etc. etc. as ever so many objectors like to pretend. So, Dr Liddle, please understand our take on all this since March or so especially. What we keep on seeing from our end is drumbeat repetition of long since adequately answered and corrected talking points, backed up by trumpeting of patently false claims all over the Internet; accompanied by the worst sorts of personal attacks, as I will link on just now. That tells us, frankly, that we are not dealing with people who are interested in truth but in agendas. So, please, please, please, do not try to mainly get your "understanding" of ID from the sort of critics who hang out at Anti-Evo [see below and onward on fellow traveller Y], ATBC and even MF's blog [as in TFT who submitted the headlined comment in the linked below] etc., not to mention Wikipedia's hit-piece that so severely undermines their credibility. Such are increasingly showing themselves to be driven by deeply questionable agendas. FYI, right now, updating my own retort yesterday to a barefaced, Mafioso-style attempt to threaten my wife and children, I am finding out that someone else who was posting in pretty much the same manner and at the same time is associated with questionable photography of young girls. FYFI, one of my children happens to be a young girl. GEM of TKI
kairosfocus
July 2, 2011 at 03:33 AM PDT
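The Chi_500 = I*S - 500 arithmetic quoted above is easy to recompute; the snippet below just reproduces the three cited values (taking S = 1 for the functionally specific families) and the log2(20) ≈ 4.32 bits per amino acid figure.

```python
# Recomputing the figures quoted above: Chi_500 = I*S - 500, with S = 1 and I
# taken as the cited Durston fits values; raw capacity per amino acid is log2(20).

import math

CHI_THRESHOLD = 500  # bits

durston_fits = {          # (amino acids, fits), as quoted from Durston's Table 1
    "RecA":      (242, 832),
    "SecY":      (342, 688),
    "Corona S2": (445, 1285),
}

print(f"raw capacity per AA: {math.log2(20):.2f} bits")
for family, (aa, fits) in durston_fits.items():
    chi = fits * 1 - CHI_THRESHOLD   # S = 1: independently (functionally) specified
    print(f"{family:10s} {aa:4d} AA, {fits:5d} fits -> Chi_500 = {chi:+d} bits beyond")
# Matches the quoted figures: +332, +188 and +785 bits beyond the threshold.
```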
PS: I found the comment by Dembski in which he says:
I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection.
https://uncommondescent.com/intelligent-design/some-thanks-for-professor-olofsson/#comment-299021 And the link he gives is to the paper we've both just been looking at, so I think I'm up to date.Elizabeth Liddle
July 2, 2011 at 02:16 AM PDT
Mung: Good morning! I have my coffee beside me (although my cats have gone outside for their daily half hour butterfly chase). And the bruise is quite small. Right.
Hi Lizzie, Your problem is that you’re attempting to frame the null in terms of the alternative. It’s the other way around. ;)
Either is fine with me, as long one is Not The Other.
True, I never took Stats 101. It follows that I never failed Stats 101 :) . And I think Dembski went far beyond Stats 101.
Yes indeed, so you have a little catching up to do. That's fine. I didn't know what a standard deviation was until I was 49, and I've never looked back.
I have my “Statistics for Dummies” book, and some others as well:
Some people do like to set a cutoff probability before doing a hypothesis test; this is called an alpha level.
So if the rejection region is the alpha value, what’s the p-value?
Good question. It's amazing how many students fail this question on exam papers. But I won't :) the p value is the probability of your observed data under the null hypothesis. If that value is less than your alpha value (e.g. p=.05) you can reject the null.
The most important element that all hypothesis tests have in common is the p-value. All p-values have the same interpretation, no matter what test is done. So anywhere you see a p-value, you will know that a small p-value means the researcher found a “statistically significant” result, which means the null hypothesis was rejected.
Exactly. This is how we know that however we phrase the hypotheses in Dembski's work, the H1 is the one that allows us to infer Design, not H0, because what emerges from calculations where Design is inferred is a very low p value. But I think we agree on that now :) Yes?
you:
Yes, because it [specification] is part of the definition of the rejection region under the null.
The way I see it, it [the specification] is not part of the definition of the rejection region, but must fall within the rejection region.
Well, it's possible I'm misunderstanding Dembski here, but I don't think so. Let's go back to basics. Take the classic example of a deck of cards. 1: Any one sequence of 52 cards is has a tiny probability; however, if you lay out 52 cards you are bound to get one of them, so there's nothing odd about the one you get having a tiny probability. The sequence is complex (and all sequences are equally complex, i.e. have equal amounts of Shannon Information, which I could give you the formula for but can't type in html - it has factorials and logs in it though), but are not specified 2: However, if you specify a sequence in advance, get someone else to shuffle the pack, then that person lays out that exact sequence, that is quite extraordinary improbable under the null hypothesis that every sequence has equal probability So there is something very fishy about the process. We can reject the null. And we say that the pattern has "Specified Complexity". 3: Let's say you don't specify the sequence in advance, but you say there is a class of sequences that have something special about them. All the sequences that have the suits in sequential order, and the cards within each suit as A 2 3 4 5 6 7 8 9 10 J Q K, for example. There are 4! such sequences (i.e. 24). So getting one of the specials is slightly more probable than a single specified sequence. Therefore, if you see any one of them dealt, you have reason to be suspicious. And perhaps one might also include variations - the sequences in reverse, for example, or: all the aces, all the ones, all the twos, etcs. So, if we can find a way of describing a subset of all possible sequences that have some sort of special quality, then getting any one of them is a little more probable than getting a single specified example. And many of Dembski's papers involve some kind of definition of that subset - often in terms of Kolmogorov compressibility. That shrinks the rejection region a bit, because the larger the number of patterns that exhibit this "Specified Complexity" under the null, then the greater the probability of one of them coming up under the null of "nothing fishy going on". So the specification (the class of patterns that we would regard as specified) is very important, not because simply being specified is enough to allow us to reject the null, but because we need to know the expected frequency of members of that class under the null (remember this is frequentist statistics we are doing here), in order to figure out how improbable any member of that class is under that null. So that part of the process is part of computing the pdf under the null, which, as I said, is quite complicated because it has two dimensions - Shannon Information content and something like compressibility (interestingly these dimensions are not orthogonal, but they are not collinear either, which is why the CSI concept is interesting). For the pack of cards it is easy, because there is no variance along the SI axis (all sequences have equal SI) and the only variance is along the compressibility axis. However, for patterns in nature, both axes are relevant. But having defined our pdf under the null, we now have to set the alpha value, and, again, Dembski's definition of CSI actually includes that alpha. So the presence of CSI is not evidence that the null is rejected - it's what we declare the pattern to have IF the null is rejected. 
So to actually set about checking to see whether we can reject the null we have to unpack CSI and place its parameters in the right places in the test! So, thinking about this (over coffee, hope I don't regret this), yes, there is a sense in which "CSI" is our H1. But it's a rather strange H1 - it's a bit like saying our H1 for a series of coin tosses is "a pattern that is more improbable than my alpha under my null". And you still have to unpack all that before you do your test! Far better to say that H1 is the hypothesis that the coin is not fair, that you will set an alpha of p=.01, and that under your null heads and tails have equal probability. Then things are clear. I think Dembski is clear. ish. (The reason for the -ish, is that in the bits you've quoted recently, he regards Chance as the null - explicitly, as does Meyer, but elsewhere he also includes Necessity. This is a problem.)
The rejection region is too improbable, given all available non-intelligent resources.
No. The rejection region is defined as an improbable region. What is "improbable" is that a given pattern would fall within it, given all available non-intelligent resources. If a pattern did, it would be reasonable to reject "non-intelligent sources" as a likely explanation. In which case "non-intelligent sources" is your null and "intelligent sources" is your H1 :) But we getting there :)
The specification is that extra bit that’s required within the range of the too improbable and warrants the inference to design. I’m pretty sure that is what Dembski says.
Well, sorta, but sorta not. What I said is closer.
Well, in your quotes above, Dembski actually specifies the null: Chance. So in this case, H0 is “Chance” and H1 is “not Chance”. I was avoiding that one, because I’m not sure what we are supposed to do with Necessity, but I’ll leave that to you. Anyway, it doesn’t matter, because here Dembski seems to be saying that if we reject (“eliminate”) Chance we can infer Design. So he seems to have H0 as Chance and H2 as Design where Chance and Design are the only two possibilities and mutually exclusive.
What allows us to reject Chance and to infer design? Specification.
Well Specified Complexity, sure. That's the tail (well, corner-of-skirt, as there are two dimensions) of your pdf under the null. I agree with you that all these bits are important! I'm just trying to assign them the right roles in the hypothesis-testing procedure.
I was avoiding that one [H0 = Chance], because I’m not sure what we are supposed to do with Necessity,
You mentioned Chance as the null a few times I think. If you review I bet I did not object.
OK, fine. As long as you aren't worried about Necessity as an non-design alternative to Chance, that's fine. After all, I don't think Dembski uses the EF any more, does he? The EF is a sequential rejection of two nulls in turn. But one is fine with me :)
Now I ask you to consider carefully what you wrote: “in your quotes above, Dembski actually specifies the null: Chance.” So in this case, H0 is “Chance” and H1 is “not Chance”. Ding ding we have a winner!!!! Let us stop, pause, reflect. I assure you, you will find it hard to take back those words.
No, that's fine. It isn't my hypothesis after all! I'm fine with that. I'm not sure that kf is, but it's fine with me.
So he seems to have H0 as Chance and H2 as Design where Chance and Design are the only two possibilities and mutually exclusive.
No. The null is chance. The alternative is not chance. You don’t get to change the rules all of a sudden. Mindful of that bruise on my forehead, I'm going to be very careful here .... Mung: I'm not the one changing the rules Here are some choices of hypothesis pairs, with their corollaries: 1. Chance is the null and "not Chance" is H1. This is fine, but we can't therefore infer "design" from a rejection of the null unless we assume that all not-Chance causes must be Design causes. In which case, you could rewrite H1 as "Design". If you don't assume this, then fine, but then you can't infer Design. Could be something different, e.g. "Necessity". 2. Chance is the null, and "not Chance" is H1. Then, if "Chance is rejected, "Necessity" is the new null and "not Necessity" is H1. And "not Necessity", and, as the only alternative to Chance and Necessity is Design, you might as will write your H1 as "Design" in the first place. This is the EF. 3. Not Design is the null, and Design is H1. Now you lump Chance and Necessity in together as the null. as being the only two Not-Design possibilities. But they all boil down to the same thing, so pick the one you are happiest with.
That leaves only one more thing to address:
I was avoiding that one [H0 = Chance], because I’m not sure what we are supposed to do with Necessity,
How long have you been disputing/debating Dembski? At least 4 years, right? And you don’t know how he addresses necessity in the context of chance as the null? To me, this speaks volumes, for it is not as if Dembski has not addressed this very issue.
Well, not to my satisfaction! But that's a separate issue. I am not debating Dembski here (and never have), I'm debating you. If you are happy with Chance as H0, that's fine. And I'd be grateful if you would link to a source in which Dembski "addresses necessity in the context of chance as the null". I could have missed something :) Anyway, while we may not yet be on the same page, at least we now both seem to be holding the book the same way up.
Aaaaaaarrrrrrgggghhhhhh!!!!!!
I’m sorry. But I just got a mental picture of you banging your head on a table and I burst out laughing. It’s not my intent that you cause yourself physical, mental or emotional harm.
No problem :) I have a tough nut.Elizabeth Liddle
July 2, 2011 02:10 AM PDT
#130 Mung

First - I apologise for accidentally repeating my entire comment in #128 above.

No, I am not contradicting Lizzie. She is trying to rephrase Dembski's work in terms of classical hypothesis testing. I am saying that indeed it is possible to rephrase Dembski's work this way (or almost). However, classical hypothesis testing itself has enormous conceptual problems, which Dembski's method shares.

I absolutely agree that the null hypothesis (H0) for Dembski is "chance". H1 can be expressed as either:

* Not chance

or

* Design or necessity

The rejection region is not as clear as it might be, but it is something to do with low Kolmogorov complexity (according to his most recent paper on the subject).

But underlying this is the common problem with both hypothesis testing and the design inference. The underlying argument for both is:

Given H0, it is extremely improbable that outcome X would fall into the rejection region. X falls into the rejection region. Therefore H0 is extremely improbable. (This may be phrased as "therefore we are justified in rejecting H0".)

This has exactly the same logical form as:

Given that X is an American, it is extremely unlikely that X will be a member of Congress. X is a member of Congress. Therefore it is extremely unlikely that X is an American (or alternatively, "therefore we are justified in rejecting the hypothesis that X is an American").

In practice classical hypothesis testing often gets away with it because there is an implicit assumption that outcome X falling into the rejection region is more probable under H1 than under H0. But this is just likelihood comparison sneaking in through the back door - which ID cannot handle, because it requires examining the probability of the outcome given design.markf
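A minimal sketch of the Congress example, with rough illustrative figures (roughly 535 members of Congress, roughly 310 million Americans and 7 billion people in 2011, and the simplifying assumption that every member of Congress is American). It shows why a tiny P(outcome | H0) does not by itself make P(H0 | outcome) tiny.

# Rough, illustrative numbers only.
americans = 310_000_000            # approximate 2011 US population
world_population = 7_000_000_000   # approximate 2011 world population
members_of_congress = 535          # assume, for simplicity, all are American

p_congress_given_american = members_of_congress / americans
p_american = americans / world_population
p_congress = members_of_congress / world_population

# Bayes' rule: P(American | Congress) = P(Congress | American) * P(American) / P(Congress)
p_american_given_congress = p_congress_given_american * p_american / p_congress

print(f"P(Congress | American) = {p_congress_given_american:.1e}")   # about 1.7e-06
print(f"P(American | Congress) = {p_american_given_congress:.2f}")   # 1.00

Whether "improbable given H0" can be turned into "H0 is improbable" depends on how probable the same outcome is under the alternatives - which is the likelihood comparison markf says is sneaking in through the back door.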
July 1, 2011 10:54 PM PDT
Hi Lizzie, Your problem is that you're attempting to frame the null in terms of the alternative. It's the other way around. ;) True, I never took Stats 101. It follows that I never failed Stats 101 :). And I think Dembski went far beyond Stats 101. I have my "Statistics for Dummies" book, and some others as well:
Some people do like to set a cutoff probability before doing a hypothesis test; this is called an alpha level.
So if the rejection region is the alpha value, what's the p-value?
The most important element that all hypothesis tests have in common is the p-value. All p-values have the same interpretation, no matter what test is done. So anywhere you see a p-value, you will know that a small p-value means the researcher found a "statistically significant" result, which means the null hypothesis was rejected.
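A minimal sketch of the distinction in those two quotes, assuming a simple one-sided binomial coin-toss test with illustrative numbers: alpha is the cutoff chosen before looking at the data, the p-value is computed from the observed data, and the null is rejected when the p-value is at or below alpha.

from math import comb

def binomial_p_value(heads, tosses, p_null=0.5):
    """P(at least `heads` heads in `tosses` tosses under the fair-coin null)."""
    return sum(comb(tosses, k) * p_null**k * (1 - p_null)**(tosses - k)
               for k in range(heads, tosses + 1))

alpha = 0.05                                 # chosen before the experiment
p_value = binomial_p_value(heads=62, tosses=100)
print(f"p-value = {p_value:.4f}")            # about 0.0105
print("reject H0 (fair coin)" if p_value <= alpha else "retain H0")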
you:
Yes, because it [specification] is part of the definition of the rejection region under the null.
The way I see it, it [the specification] is not part of the definition of the rejection region, but must fall within the rejection region. The rejection region is too improbable, given all available non-intelligent resources. The specification is that extra bit that's required within the range of the too improbable and warrants the inference to design. I'm pretty sure that is what Dembski says.
Well, in your quotes above, Dembski actually specifies the null: Chance. So in this case, H0 is “Chance” and H1 is “not Chance”. I was avoiding that one, because I’m not sure what we are supposed to do with Necessity, but I’ll leave that to you. Anyway, it doesn’t matter, because here Dembski seems to be saying that if we reject (“eliminate”) Chance we can infer Design. So he seems to have H0 as Chance and H2 as Design where Chance and Design are the only two possibilities and mutually exclusive.
What allows us to reject Chance and to infer design? Specification.
I was avoiding that one [H0 = Chance], because I’m not sure what we are supposed to do with Necessity,
You mentioned Chance as the null a few times I think. If you review I bet I did not object. Now I ask you to consider carefully what you wrote: "in your quotes above, Dembski actually specifies the null: Chance." So in this case, H0 is “Chance” and H1 is “not Chance”. Ding ding we have a winner!!!! Let us stop, pause, reflect. I assure you, you will find it hard to take back those words.
So he seems to have H0 as Chance and H2 as Design where Chance and Design are the only two possibilities and mutually exclusive.
No. The null is chance. The alternative is not chance. You don't get to change the rules all of a sudden. That leaves only one more thing to address:
I was avoiding that one [H0 = Chance], because I’m not sure what we are supposed to do with Necessity,
How long have you been disputing/debating Dembski? At least 4 years, right? And you don't know how he addresses necessity in the context of chance as the null? To me, this speaks volumes, for it is not as if Dembski has not addressed this very issue.
Aaaaaaarrrrrrgggghhhhhh!!!!!!
I'm sorry. But I just got a mental picture of you banging your head on a table and I burst out laughing. It's not my intent that you cause yourself physical, mental or emotional harm.Mung
July 1, 2011 05:51 PM PDT
Mung:
Dembski:
Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance.
The specification underwrites the design inference.
Yes, because it is part of the definition of the rejection region under the null.
It is the specification that needs “minimally, to entitle us to eliminate chance.”
Yes, because it is part of the definition of the rejection region under the null.
It’s not “design” that eliminates the “null,” it is specification.
Yes, exactly. Because specification is part of the definition of the rejection region under the null, and is thus the criterion by which we reject the null. And if we reject the null, we can infer design.
Therefore the null is not “no design”.
Aaaaaaarrrrrrgggghhhhhh!!!!!!
At best, the “null” is no specification.
nope.
H0: We do not have a specification. H1: We do have a specification. How is H0 not the negation of H1? How are the two not mutually exclusive? IOW, how do they fail to meet your requirements for a null and alternate?
Oh, they'd meet the mutual exclusion criterion OK, it's just that now you are stuck with no definition of your rejection region because you blew it all on your hypotheses!
Given H1 we can reject H0, and the inference to design is then warranted. That’s Dembski.
Nope. Dembski took stats 101 :) Will sleep on this. Will try to come up with an explanation that will hit the spot with an engineer :) I've cracked tougher nuts. But this time I'll have coffee before I try :)Elizabeth Liddle
July 1, 2011 03:57 PM PDT
Mung:
You are mistaking your hypotheses for your inferences.
No, I’m not. It’s not called The Design Inference for nothing. The Inference to Design is what we’re allowed to make once we’ve tested the hypotheses.
Yes.
Dembski:
In a moment, we’ll consider a form of specified complexity that is independent of the replicational resources associated with S’s context of inquiry and thus, in effect, independent of S’s context of inquiry period (thereby strengthening the elimination of chance and the inference to design).
Dembski:
Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance.
Whether or not there is a specification is the hypothesis. The presence of a specification is what warrants the design inference.
Well, in your quotes above, Dembski actually specifies the null: Chance. So in this case, H0 is "Chance" and H1 is "not Chance". I was avoiding that one, because I'm not sure what we are supposed to do with Necessity, but I'll leave that to you. Anyway, it doesn't matter, because here Dembski seems to be saying that if we reject ("eliminate") Chance we can infer Design. So he seems to have H0 as Chance and H2 as Design where Chance and Design are the only two possibilities and mutually exclusive.

The point remains that your H0 and H1 have to be mutually exclusive; however, retention of H0 does not exclude H1, although rejection of H0 excludes H0 (that's why we say it is "rejected"). So your inferences do NOT have to be mutually exclusive (and aren't).
I still don’t think you understand the argument, but hey. Dembski:
Indeed, the mere possibility that we might have missed some chance hypothesis is hardly reason to think that such a hypothesis was operating. Nor is it reason to be skeptical of a design inference based on specified complexity. Appealing to the unknown to undercut what we do know is never sound epistemological practice. Sure, we may be wrong. But unknown chance hypotheses (and the unknown material mechanisms that supposedly induce them) have no epistemic force in showing that we are wrong.
Dembski:
If, in addition, our best probabilistic analysis of the biological systems in question tells us that they exhibit high specified complexity and therefore that unguided material processes could not have produced them with anything like a reasonable probability, would a design inference only now be warranted?
I don’t know how Dembski could make it any more plain, or how you could fail to read him correctly given how plainly it is stated. A design inference is what is warranted when: H1: some thing or event exhibits high specified complexity and therefore H0: that unguided material processes could not have produced them with anything like a reasonable probability. That’s Dembski’s argument in a nutshell. Thanks for all your help getting it out in the open and plain for all to see. :)
That's fine. I'm not (right now!) attempting to refute Dembski's argument. I'm attempting to point out Mung's confusion between an inference and a hypothesis. Oh, and an alpha value.

This: "some thing or event exhibits high specified complexity" is not a hypothesis (in this context). It's the test of a hypothesis. It's actually the definition of the rejection region, i.e. the alpha value.

This: "that unguided material processes could not have produced them with anything like a reasonable probability" is not a hypothesis. It's an inference made from the test of a hypothesis.

Listen: Complex Specified Information (CSI) is a pattern that falls in the rejection region under the null, and the rejection region itself (the alpha cutoff) is part of the definition of CSI. IF a pattern falls within the rejection region under the null, it is regarded as possessing CSI. So the hypothesis isn't: "this pattern has CSI". The hypothesis is either "This pattern was Designed" or "This pattern was not due to Chance". The test of that hypothesis is whether the pattern has CSI.

Otherwise the whole thing would be circular; it would be saying: oh, look, this pattern has CSI, therefore it falls in the rejection region, therefore we can infer Design. But as you can only conclude that it has CSI if it falls in the rejection region, you are back to square one!

As for your H0, it's even more circular! (Also, I think you made a typo - your turn for coffee I think.) You can't include the alpha cut-off probability ("reasonable probability") as part of your null hypothesis! It's completely incoherent. You were so close earlier!

The whole reason for this null hypothesis malarkey is that you have to have a probability distribution, the tail or tails of which form the rejection region for your null. So a null is useless if we cannot construct from it a probability distribution for the class of event we are trying to investigate.

So we can construct a probability distribution fairly easily for, say, percentages of heads in 100 coin tosses, under the null that the coin is fair. And if the observed percentage falls in the rejection region of that distribution, we can reject the null of a fair coin, and infer skulduggery (sketched below).

Dembski's null is very complex, and seems to have two dimensions, as I said above - complexity and specificity. Patterns that are both complex and specified are those that fall in a very small tail of the skirt, defined by the CSI formula. So we have our pdf under the null, and we also have our rejection region, defined by the CSI formula. And if a pattern falls in that region, we reject the null (either "Chance" or "no Design") and accept our H1 (Not Chance, or Design). If it does not, we retain the null, which means we are not entitled to infer Design, though it could still be responsible for the pattern.

But note: I am not disagreeing with Dembski. This is what he is saying. And it's fine. (Well, it is statistically parsable, and in principle much more powerful than casting Design as the null.) My only beef is with you :)Elizabeth Liddle
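A minimal sketch of the coin-toss illustration in the comment above: construct the distribution of heads in 100 tosses under the fair-coin null, take the most extreme outcomes on both sides as a rejection region at an illustrative alpha of 0.05, and see where an observed count falls.

from math import comb

N = 100
probs = [comb(N, k) * 0.5**N for k in range(N + 1)]   # P(k heads | fair coin)

alpha = 0.05
lo, hi, tail = 0, N, 0.0
# Grow the two tails inward until adding the next pair of outcomes would push
# the tail probability past alpha.
while tail + probs[lo] + probs[hi] <= alpha:
    tail += probs[lo] + probs[hi]
    lo += 1
    hi -= 1
rejection_region = set(range(0, lo)) | set(range(hi + 1, N + 1))

observed = 63    # illustrative observed number of heads
print(f"rejection region: 0..{lo - 1} and {hi + 1}..{N}")   # 0..39 and 61..100
print("reject the fair-coin null" if observed in rejection_region
      else "retain the fair-coin null")

If the observed count falls outside that region, the null is retained, which (as the comment says) does not establish the null; it just means there is no warrant to reject it.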
July 1, 2011 03:47 PM PDT
Of course I agree with everything Lizzie says – but I think there is a deeper point which is more relevant to ID.
Then why are you contradicting her? Restate your examples and argument as a null hypothesis and an alternative hypothesis and make sure the two are mutually exclusive. Perhaps you two should go have a chat over a beer :)Mung
July 1, 2011 02:00 PM PDT