Uncommon Descent Serving The Intelligent Design Community

# Yes, Lizzie, Chance is Very Often an Explanation


Over at The Skeptical Zone Elizabeth Liddle has weighed in on the “coins on the table” issue I raised in this post.

If you came across a table on which was set 500 coins (no tossing involved) and all 500 coins displayed the “heads” side of the coin, how on earth would you test “chance” as a hypothesis to explain this particular configuration of coins on a table?

Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.

Staggering. Gobsmacking. Astounding. Superlatives fail me.

Not only is Dr. Liddle’s statement false, it is the exact opposite of the truth. Indeed, pharmaceutical companies, to name just one example, have spent countless billions of dollars in clinical trials of drugs attempting to rule out the “chance explanation.”

Don’t take my word for it. Here is a paper called What is a P-value? by Ronald A. Thisted, PhD, a statistics professor in the Departments of Statistics and Health Studies at the University of Chicago. The abstract states:

Results favoring one treatment over another in a randomized clinical trial can be explained only if the favored treatment really is superior or the apparent advantage enjoyed by the treatment is due solely to the working of chance. Since chance produces very small advantages often but large differences rarely, the larger the effect seen in the trial the less plausible chance assignment alone can be as an explanation. If the chance explanation can be ruled out, then the differences seen in the study must be due to the effectiveness of the treatment being studied. The p-value measures consistency between the results actually obtained in the trial and the “pure chance” explanation for those results. A p-value of 0.002 favoring group A arises very infrequently when the only differences between groups A and B are due to chance. More precisely, chance alone would produce such a result only twice in every thousand studies. Consequently, we conclude that the advantage of A over B is (quite probably) real rather than spurious.

In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance. The whole point of the trial is to see if the company can rule out the chance explanation, i.e. to rule out the null hypothesis that the results were due to chance, i.e., the chance hypothesis. So, if “chance is not an explanation” what is the point of spending all those billions trying to rule it out?
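To make the clinical-trial logic concrete, here is a minimal sketch in Python of how a p-value quantifies the "pure chance" explanation for a treatment's apparent advantage. The trial numbers (70 of 100 recover on treatment, 50 of 100 on placebo) are made up for illustration and do not come from any real study:

```python
from math import erf, sqrt

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: both arms share a single recovery rate,
    so any apparent advantage is sampling fluctuation alone.
    Uses the standard pooled normal approximation."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = abs(x1 / n1 - x2 / n2) / se
    # P(|Z| >= z) under the standard normal "pure chance" null
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Hypothetical trial: 70/100 recover on treatment A, 50/100 on placebo B
p = two_proportion_p_value(70, 100, 50, 100)
print(round(p, 4))  # a small p-value makes "chance alone" implausible
```

The p-value here comes out near 0.004, in the same ballpark as the 0.002 example in Thisted's abstract: chance alone would produce an advantage this large only a few times in a thousand trials, so the chance explanation is rejected.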

Want more? Here’s a paper from Penn State on the Chi-square test. An excerpt:

Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel’s laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the “goodness of fit” between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

Obviously, asking the question, “were the deviations the result of chance, or were they due to other factors” makes no sense if as Liddle says, “chance is not an explanation.”
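The Mendel example from the Penn State excerpt can be worked directly. A minimal sketch in pure Python (the 8-of-20 observed and 10-of-20 expected figures come from the excerpt; everything else is illustrative):

```python
from math import erf, sqrt

def chi_square_1df(observed, expected):
    """Pearson chi-square statistic and its p-value for 1 degree of
    freedom. For 1 df, chi-square is the square of a standard normal,
    so the p-value is the two-sided normal tail at sqrt(stat)."""
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = 2 * (1 - 0.5 * (1 + erf(sqrt(stat) / sqrt(2))))
    return stat, p

# From the excerpt: expected 10 males / 10 females in 20 offspring,
# observed 8 males / 12 females
stat, p = chi_square_1df([8, 12], [10, 10])
print(round(stat, 2), round(p, 3))
```

The statistic is 0.8 and the p-value is roughly 0.37, so a deviation of 2 offspring is easily "the result of chance" and the null hypothesis stands, exactly the question the excerpt says the test is asking.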

I don’t know why Dr. Liddle would write something so obviously false. I am certain she knows better. “Darwinist Derangement Syndrome” or just sloppy drafting? I will let the readers decide.

Reciprocating Bill blah blah blah
You were called on your garbage. No need to re-analyze it again. Your post agreed with Mark, and Mark's point concedes that chance is an explanation. scordova
Some recent anti-ID debate tactics:

Jargon Block: An ID proponent uses a term or phrase like "macroevolution", "random" or "due to chance", and the anti-IDist claims that it isn't scientific or specified enough. After the IDist demonstrates that many scientists use the exact same phrase or term, the anti-IDist moves on to:

Jargon Bluff: The anti-ID advocate claims that even though the IDist has used the same term or phrase as many scientists, they aren't using it the same way those scientists do.

Education Block: The anti-ID advocate totally refuses to address the content of a debate, instead simply claiming or implying that the IDist's lack of specific education in the particular field in question precludes their argument from being worthy of addressing or rebuttal.

Education Bluff: The anti-ID advocate addresses the ID argument or attempts to rebut it, and claims or implies that the only reason the IDist disagrees or fails to see their point is a lack of the proper education.

The Quote-Mine Block: Generally invoked by the anti-ID advocate whenever an IDist employs a quote from a mainstream scientist that appears to support any point they are making.

The Quote-Mine Bluff: After making their accusation of quote-mining, the anti-ID advocate demands that the IDist prove that they were not quote-mining, as if it were the IDist's job to make a case for their presumed innocence.

William J Murray
My post at TSZ:
Liz, It doesn't appear as if you read/responded to my post above that starts:
It seems to me that when Mr. Arrington says:
Because you write:
Second, my case, consistently, has been that “chance” cannot be the null hypothesis, which is what Barry claimed. I notice that you do not mention the term “null hypothesis” here
... and, again, it seems to me that you are being uncharitable in your reading of what Mr. Arrington said in terms of the "null hypothesis", whereas a charitable interpretation from the assumption that Mr. Arrington was being informal would interpret that characterization as being much the same as what you and others have said. IOW, it seems to me that you have jumped onto the way Mr. Arrington phrased his comment about the null hypothesis because it doesn't clearly state it the way you would prefer, but as I said in the aforementioned post, it seems that if you were being charitable, you would interpret his post (using your height example):
“Our null hypothesis will be that the mean height of Scotsmen is the same as the mean height of Englishmen, and that any difference between the two groups will fall within expected chance variance.”
That seems to me to be a fair reading of Mr. Arrington's statement (although it wasn't about height), unless I simply assumed Mr. Arrington didn't know what he was talking about and wanted him to be making some kind of big conceptual error, like a failure to understand what a "null hypothesis" was, or a belief that "chance" was in itself some kind of causal agency. Let's look at this quote from the abstract of "How Was the Australian Flora Assembled Over the Last 65 Million Years? A Molecular Phylogenetic Perspective," Annual Review of Ecology, Evolution, and Systematics, Vol. 44: 303-324 (published November 2013):
The Australian biota is a sample of the wider region, with extinction of some taxa and radiation of others (due to chance and opportunity), but biotic and abiotic interactions have resulted in a unique flora and fauna.
Ehh ... chance and opportunity don't actually cause anything, now do they? Yet that's not how I would interpret what this guy is saying; it's not how anyone would charitably interpret what he said, unless they were specifically looking to jargon bluff or, perhaps more appropriately, jargon block him. It seems to me that when someone says "the null hypothesis is that the difference is due to chance," they are obviously using a shorthand means of saying that the null hypothesis is that there will be no difference between the sets of data, and any difference should be within expected chance variance (IOW, "due to" or "explained by" chance).
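For what it's worth, the Scotsmen/Englishmen version of the null hypothesis quoted earlier can be sketched numerically. The height samples below are invented purely for illustration; the point is only to show what "any difference falls within expected chance variance" looks like as a computation:

```python
from math import sqrt

def welch_t(a, b):
    """Welch t statistic for H0: the two group means are equal, with any
    observed difference falling within expected chance variance."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / sqrt(va / len(a) + vb / len(b))

# Hypothetical height samples (cm): Scotsmen vs Englishmen
scots   = [178, 172, 175, 181, 169, 174, 177, 173]
english = [176, 171, 174, 179, 170, 175, 172, 173]
t = welch_t(scots, english)
print(round(t, 2), abs(t) > 2.0)  # |t| well under ~2: within chance variance
```

Here |t| is far below the conventional critical value of roughly 2, so the difference is "due to chance" in exactly the shorthand sense described above: it sits comfortably inside the expected chance variance, and the null is not rejected.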
Note: Prior to this, Dr. Liddle had unequivocally stated that in science, "due to" means "caused by", which she was using to make the case that Mr. Arrington was arguing that chance was in itself some kind of causal agency. When I pressed her on this, she equivocated that it wouldn't be entirely unscientific or unacceptable to refer to the variances between data sets as being "due to" or "not due to" chance. Perhaps she realized I might have done some checking around to find this statement by her in 2006:
Nonetheless, I quite agree with you that the discrepancy between the exit polls and the vote counts in Ohio were not due to chance. They were due to something. The question is: what?
The rest of my post at TSZ:
Moving on to your point about the ID claim that "the null hypothesis is that X feature in biology is caused by chance" being a huge, fundamental conceptual block, I think that, once again, you are using a jargon bluff (or a jargon block) because you don't like the way they phrase it, but I can almost certainly assure you that no one in the ID community thinks of chance as some kind of causal agency in and of itself. That's absurd. The ID null hypothesis you quibble with has often been described as "all biological features are due to necessity and chance." An uncharitable, jargon-block reading would paint "chance" as a causal agency and as the direct null hypothesis. But is that really what ID proponents mean? Of course not. Are ID proponents even using a null hypothesis in the way you characterize as necessary? Is it necessary? I think that ID proponents are using the alternative method: comparing two different hypotheses to see which one better explains the data. The null, in this case, would be that phenomenon X occurs as a result of the relevant materials interacting in accordance with known natural laws and tendencies, without any deliberate or teleological intervention/influence. The alternative hypothesis is that phenomenon X cannot be plausibly achieved without teleological influence. The ID advocate is not really saying that the null is that X is "caused by physical law and chance" in the strict sense, because those are not causal agencies; they are characteristic descriptions of processes and/or outcomes. As is "design." The question is whether one can establish an acceptable "teleology metric," where at a certain value the design hypothesis is preferred because the data places X outside of acceptable variance for the non-teleological explanation set.
William J Murray
Sal:
It could be that Barry agrees, but the long winded response that was so carefully constructed looked like a substantive rebuttal, and it was a rebuttal to a claim that Barry didn’t even make in the first place.
Mine was a rebuttal of a specific claim: that “chance explanation” as used (loosely) by Thisted exemplifies an instance of “chance explanation” as discussed more generally on this and previous threads, i.e. chance as an explanation of phenomena themselves. It doesn’t. “Chance explanation” in the context of hypothesis testing, a context explicitly established by Barry’s citation of Thisted’s essay, refers to the fact that one’s experimental results inevitably reflect sampling error to some degree. It has nothing to do with a “chance explanation” of the phenomenon itself, on equal footing with other candidate independent variables. The distinction should be easy to see: significance testing to rule out “chance explanation” (in Thisted’s sense) and justify an inference from sample to population is no less necessary when the variables of interest within the population attain specific values by 100% deterministic means, with no stochastic factor at all. A randomized sample drawn from that population of 100% determined phenomena will nevertheless inevitably have a mean (and other summary statistics, such as variance and standard deviation) that differs “by chance” to some degree from that of the population from which it is drawn. Tests of significance (the use of a p-value, etc., the topic of Thisted’s essay) set a threshold such differences must attain before inferences regarding rejection of the null are even entertained. The need for such procedures to rule out Thisted's "chance explanation" (that our results reflect sampling error) says nothing whatsoever about a "chance explanation" of the population values themselves.
If anything, the null hypothesis is “not chance”.
The null hypothesis, in the context of hypothesis testing, is that there is no difference between the populations of interest with respect to the dependent variables of interest.
It was also a subtle ad hominem (Barry you’re just a lawyer, look at all this math you can’t comprehend).
Barry characterizes me as “invincibly stupid.” Sal complains of my “subtle ad hominem,” cleverly accomplished by not characterizing Barry in any way, mentioning the fact that he is a lawyer, or characterizing his knowledge of statistics. And in a post of 753 words, Sal characterizes mine of 321 words as “long winded.” Go figure. (The math isn't hard.) Reciprocating Bill
WJM, EA & Box: Well said. Jargon bluff is an excellent description of a common enough debate tactic used by many Darwinists. In fact it is one reason why over time I have become leery of using the term "randomness." That term was so poisoned and twisted into pretzels by objectors that many of us simply reverted to the broader term "chance" [we used to speak of "random variation"], which is now under attack.

I should note that in the years that EL commented at UD, I had to repeatedly correct distortions of the ID design inference filter process. She would often grudgingly acknowledge correction after laborious, typically circular exchanges taking days at a time, then soon thereafter revert to the same error again in another context; often she was simply unable or unwilling to correctly read a flowchart. As she is highly educated, across time I came to the evidence-driven conclusion that the latter was the case, and that it reflected a willfully continued misrepresentation presented in lieu of truth she knew or full well should have known, or could simply have quoted if she was interested in a fair representation of the views she objected to.

(NB: Those who trot out talking points on how EL was oh so innocent and censored by ID bullies because she presented effective objections typically simply do not know what they are talking about. They are parroting talking points that are themselves calculated and sustained toxic misrepresentations that in some cases can amount to outright defamation. Don't let me get going on the case of how I was defamed by invidious association with Nazis at TSZ, and how EL pretended that nothing of the sort was going on at her blog. And more, much more. TSZ is simply the more respectable front, hosted by an enabler, for the fever-swamp sites out there that, as of the latest outrage, have tried to paint targets on my uninvolved family by trying to disclose street addresses.)
One example of this pattern by EL that comes to mind, and which is close to the matter now at hand, is the claim that design was allegedly the default inference made by design thinkers. I would repeatedly point out how contingency is used twice in the filtering process, and how there are therefore TWO defaults: one in the low-contingency case, mechanical necessity; and one in the high-contingency case, chance. This was plainly a case of someone repeatedly corrected against her will being of the same opinion still.

What is now going on looks like a pattern of implying or suggesting that a chance-based hyp is no longer a chance-based hyp when it is dressed up in the technical specifics of being a null hyp in an inferential process. Rubbish! That's why you will see profs, textbooks and practitioners alike out there describing chance-based null hyps as chance-based null hyps, in more or less those words. And in Fisherian, elimination-based hyp testing, the chance-based null is rejected on seeing a pattern sufficiently far out in a skirt tail that it is not credible that, on the opportunities for observations typically in hand, you should be seeing something that far out. Under those 95% or 99% confidence circumstances, or the like, one rejects the chance-based null. Let us put this one to bed by citing Wiki on statistical hyp testing, speaking against known ideological interest:
A statistical hypothesis test is a method of statistical inference using data from a scientific study. In statistics, a result is called statistically significant if it has been predicted as unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level. The phrase "test of significance" was coined by statistician Ronald Fisher.[1] These tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance; this can help to decide whether results contain enough information to cast doubt on conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The critical region [notice, this is a zone of interest . . . ] of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected in favor of the alternative hypothesis. Statistical hypothesis testing is sometimes called confirmatory data analysis, in contrast to exploratory data analysis, which may not have pre-specified hypotheses.
Let us highlight again, citing Wiki's apt summary for emphasis:
In statistics, a result is called statistically significant if it has been predicted as unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level.
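The Wiki summary above can be illustrated with an exact version of such a significance test on coin tosses. A minimal sketch in Python; the 60-heads-in-100-tosses figures are hypothetical, chosen only to show a result that lands in the critical region at the 5% significance level:

```python
from math import comb

def binomial_tail(n, k):
    """P(at least k heads in n fair tosses): the exact probability the
    'chance alone' null assigns to a result this extreme or more so."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Fisher-style significance test: is 60 heads in 100 tosses
# "unlikely to have occurred by chance alone" at the 5% level?
p = binomial_tail(100, 60)
print(p < 0.05, round(p, 4))  # True: the chance-based null is rejected
```

The tail probability is about 0.028, below the pre-determined 0.05 threshold, so on Fisherian reasoning the chance-based null is rejected; 55 heads, by contrast, would give a tail probability well above 0.05 and the null would stand.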
Of course, all of this is probably also influenced by Darwinist attempts to discredit Dembski's 2005 online paper on how specification allows us to firm up the theoretical underpinnings of Fisherian elimination [vs. Bayesian reasoning and likelihood comparisons], and especially his use of the Caputo case. There was a long exchange here at UD on it, involving a statistics prof -- PE if I recall, of Swedish descent -- with whom I have occasionally had onward exchanges of messages. The sum of those exchanges is reflected in the appendix to my online always-linked note, here. And now, it seems, chance-based variation is no longer to count as chance-based variation in the proposed Darwinian mechanisms. That is why I have taken pains to highlight that there is a causal pattern tracing to chance, and that chance is expressed in two main ways:
(a) clashing of uncorrelated chains of events yielding outcomes that are highly contingent and credibly undirected, e.g. a dropped die tumbles and settles in a contingent manner because of its twelve edges and eight corners interacting with uncontrollable small variations in how it falls and impacts a surface;

(b) events at the level of, or bubbling up from, quantum processes that are understood to be directly random, such as quantum potential-barrier tunnelling, e.g. a mutation may be triggered by an alpha-particle impact in a living cell, the alpha emission being a classic example of quantum tunnelling.
It is a plain fact that the darwinist mechanism is one where chance variation [CV] is held to account by various means for new varieties that then face differential reproductive success in ecological niches [DRS], yielding incremental descent with modification [IDWM] held to account for the claimed branching tree pattern of evolution [BTE]:
CV + DRS --> IDWM --> BTE
1 --> The source of change in genetic info and regulatory (or epigenetic) info relevant to such variations is plainly the chance variations; it is the only thing that is adding.

2 --> In fact, what we have is that less favoured varieties [Darwin's subtitle for Origin used "favoured races"] are subtracted and replaced by the more favoured.

3 --> Accordingly, I think I should modify the expression in the interests of accuracy, to subtract sub-pops that undergo elimination by relative reproductive failure [EBRF]:
CV - EBRF --> IDWM --> BTE
4 --> This highlights how, it is held, incrementally CV is the source of novel bio-functional info that rewrites genes, gene regulation and gene expression in cell-based life forms, all the way from the last universal common ancestor to the world of life in the fossil record and around us today.

5 --> There are many problems with this picture.

6 --> For instance, the increment cannot exceed 500 bits of info, on the search-space-challenge reasons already described [and in fact there is evidence on forming new proteins from old by mutations that something like 7 co-ordinatedly functioning mutations is a reasonable upper limit . . . ].

7 --> Where also, major novel body-plan features, on examination of genome sizes and on plausible estimation alike, credibly require 10 - 100+ mn bits' worth of new bio-information. Body plans require new cell types, tissues and organisation, which has to be based on information and manifests in new proteins etc.

8 --> I also note how ever so many proteins exist as singletons or the like in terms of fold domains in protein sequence space, and how fold domains seem to come in islands of function generally, that cannot plausibly be traversed in credible steps of change in credible populations with credible mutation rates.

9 --> In short, the chance-based driver of the claimed evolutionary process simply does not credibly have the required capability.

10 --> By contrast, there is a well-known source of functionally specific complex organisation and/or associated information [FSCO/I] that comes in islands in vast config spaces. Namely, design.

11 --> On the evidence in front of us for some 60 years now on the FSCO/I-rich basis for life forms, and on the only known and plausible source of FSCO/I, we are entitled to infer that the world of life is chock full of strong signs of design.
12 --> Where also, it needs to be underscored: the only two empirically warranted causal factors behind highly contingent outcomes are chance-based and design-based.

13 --> Of these, once we pass the FSCO/I threshold, chance is reasonably eliminated as not being powerful enough.

14 --> Where also, attempts to blend chance with necessity, as we have seen, boil down to obscuring the driving force of the claimed chance process: chance variation. The vaunted "natural selection" part ends up being simply a description of how some varieties lose out; it subtracts info, it does not add it.

15 --> We need to focus on what ADDS info, or is at least held to add info.

16 --> And that leaves chance variation on the Darwinist table.

17 --> Which, for cause as shown, just is not powerful enough to account for what we have to account for. Not once we pass the FSCO/I threshold of 500 - 1,000 bits. (And, recall, new major body-plan features credibly require 10 - 100+ mn bits of incremental info.)

KF kairosfocus
Box @60: Excellent. Eric Anderson
Jargon bluff by Lizzie:
L: “And this is not a trivial nitpick. It goes, I think, to the error at the heart of the ID critique of evolutionary theory. Evolutionary theory is not the theory that what we observe is explained by “chance”. Chance explains nothing.”
L: So can we please jettison this canard that “Darwinists” propose chance either as an explanation for the complexity of life, or even as the explanation for an unfeasibly long string of tossed heads?
Three quotes from E. V. Koonin:
Undirected, random variation is the main process that provides the material for evolution. Darwin was the first to allow chance as a major factor into the history of life, and this was arguably one of his greatest insights. (p.14)
As emphasized earlier, Darwin recognized a crucial role of chance in evolution, (…). (10)
(…) the emergence of even highly complex systems by chance is not just possible, but inevitable (392)
Box
KF @57:
In effect it is the CV that writes the code and the NS is the editor. But if your writer’s ability is quite limited . . . there’s but little to edit, nuh?
Exactly. Well put. ----- WJM @58: I don't know about Lizzie's specific motives in this instance and wouldn't want to speculate, but what you describe is indeed quite common. Eric Anderson
KF, I don't think Dr. Liddle was careless at all. I think what is being employed here is a jargon bluff. An IDist makes a point using terms that are commonly accepted and used even in the scientific community. The anti-ID advocate attempts to characterize the phrase or terminology as non-scientific. When the IDist points out that the phrasing or term is used all the time by the scientific community, the bluffer moves on to stage 2, where they claim that the IDist is not using the jargon in the same way as the scientists that use it. They do/did the same thing with the terms "macro-evolution", "random" and many other terms and phrases. If the "that term isn't scientific" bluff doesn't work, they move on to the "you're not using the term correctly" bluff. William J Murray
Folks: Pardon an interjection.

First, chance or random processes are foundational in a lot of physics; that is why there is such a thing as statistical thermodynamics. For a simple instance, temperature is a measure of average random kinetic energy per degree of freedom. In recent days I have talked about Brownian motion, which depends on fluctuations in the numbers of molecules colliding with small enough particles. Indeed, Einstein used this to argue to the empirically confirmed existence of atoms, earning part of his Nobel thereby. So, chance and closely linked concepts are scientifically accepted explanatory constructs.

Next, the vaunted Darwinian evolutionary mechanism depends on chance to trigger variations, which are then fed into differential reproductive success. So chance is at work here also. Chance, as I already pointed out, being due to uncorrelated clashes of trains of events, and/or to quantum effects that seem to be directly random. So, for instance, a bit of alpha radiation may trigger a mutation, and so forth. (That we fear these mutations shows us that we know deep down the overwhelmingly likely outcome of such chance phenomena.)

So, there are a lot of phenomena where statistical behaviour and chance are important explanatory constructs. That's before we come to statistics and hypothesis testing. A null hyp with chance-based scatter around the null in a distribution is a COMMON chance-based explanation; rejection being taken up when we find ourselves in far-tail zones that we would be unlikely to observe by chance fluctuations around a mean, but which would be very easily explained by other factors. Similarly, in the 500-coin thought exercise, the point is that the expected range is close to 250 heads, and it is maximally implausible that one would see a 22-sigma fluctuation. Much more plausible is that this is not chance but a purposeful act, a design.
All that stuff on means of samples vs pop means, sample scatter vs pop scatter and so forth has very little to do with this basic and commonplace fact of life. Save, to serve as red-herring distractors led out to strawmen soaked in subtle ad hominems and waiting for a few rhetorical sparks. Then, in the blaze, smoke, confusion and polarisation, the key point is easily lost sight of. Chance is a broad concept, yes. So is matter, so is energy, so is time, so are many others. All of them find use and focussed application in many aspects of science. In statistical techniques, chance is the underlying concept in ever so much of what happens with statistical distributions. And in broader contexts where distributions are not precisely defined also. So, let us realise that someone spoke carelessly indeed, and in error, when she said:
Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.
Yes, the specific way chance is applied will refine the thoughts into something like: the null hyp is that the treatment T had no effect beyond the usual sugar-pill and trust-in-the-white-coat-and-stethoscope placebo, or whatever. But chance has not vanished from the explanation; it is integral to it. Ironic, isn't it, that it is design thinkers who are standing up for chance here! (As in, who usually speak about "chance variation . . . " and "natural selection . . . " -- oh, maybe that is part of why the second part, which actually only describes how some varieties presumed to be introduced by chance don't survive, gets all the emphasis.) The point is, as the 500-coin exercise shows, there are serious limits to what we can expect chance to do, which highlights a serious limitation on explanations of diversification of life forms that pivotally depend on it. In effect it is the CV that writes the code and the NS is the editor. But if your writer's ability is quite limited . . . there's but little to edit, nuh? KF kairosfocus
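The 22-sigma figure for the 500-coin case checks out with a few lines of arithmetic; a minimal sketch:

```python
from math import sqrt

# 500 fair coins: heads count is Binomial(n=500, p=0.5),
# so mean = n/2 and standard deviation = sqrt(n * 0.5 * 0.5)
n = 500
mean, sd = n * 0.5, sqrt(n * 0.25)

sigmas = (n - mean) / sd   # how many sd's out is "all 500 heads"?
p_all_heads = 0.5 ** n     # exact probability of that outcome under chance

print(round(sigmas, 1))            # roughly 22 sigma, as stated above
print(p_all_heads < 1e-150)        # True: about 3e-151
```

The standard deviation is sqrt(125), about 11.2 heads, so 500 heads sits about 22.4 standard deviations above the expected 250, with an exact chance probability of 2^-500, on the order of 10^-151.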
If I may address Reciprocating Bill's arguments. It was a creative application of A Darwinist's List of Strategies to Argue against ID.
MF
He is pointing out that a result may deviate from the expected value by chance as opposed to some underlying cause.
Mark is right.....blah blah blah
Mark is right, but the insinuation is that if Mark is right, Barry must be wrong. It could be that Barry agrees, but the long-winded response that was so carefully constructed looked like a substantive rebuttal, and it was a rebuttal to a claim that Barry didn't even make in the first place. It was an irrelevancy. It was also a subtle ad hominem (Barry, you're just a lawyer, look at all this math you can't comprehend). But I knew Barry was an accountant, top of his class. Being a poker player, he's more conversant in statistics than he's being given credit for. Heck, I agree with this statement by Mark:
He is pointing out that a result may deviate from the expected value by chance as opposed to some underlying cause.
which means chance can be an explanation. What has happened is that Barry's original question (which was really my original question) posed to Nick has been subtly re-written in a clever way. Let me state the essentials of the original question which I posed to Nick (and which Barry adapted).
If you came across a table on which was set 500 fair coins and all 500 coins displayed the “heads” side of the coin, would you reject “chance” as a hypothesis to explain this particular configuration of coins on a table?
The correct answer, practically speaking, is "yes". By arguing against things ID proponents have not said, they are insinuating the original question was something of the form:
If you came across a table on which was set 500 fair coins and all 500 coins displayed the “heads” side of the coin, and I claim the following null hypothesis: “chance explains a configuration of 50% heads,” can you prove or falsify my null hypothesis?
The answer is "no". The trick is to pull off re-writing the original question not by actually re-writing it, but by arguing against a question Barry and I never posed in the first place. Now here is the subtlety: if "chance is the null hypothesis," then Lizzie's claim actually holds true:
Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.
But you see, chance is not the null hypothesis in the coins question. If anything, it is somewhat the opposite. Let me approximately rephrase the question in terms of a null hypothesis. It doesn't quite have the force of the original question, but you'll see how it can be approximately framed:
If you came across a table on which was set 500 fair coins and all 500 coins displayed the “heads” side of the coin, and I claim the following null hypothesis: “The coin pattern is not the result of chance,” can you falsify my null hypothesis?
If anything, the null hypothesis is "not chance". The result is that confusion abounds. Claims Barry never made are falsely attributed to him. Irrelevancies are stressed by Reciprocating Bill to give the impression that a rebuttal is being made, when in fact it's just a restatement of facts. By going into long-winded technical details, he insinuates that a claim of Barry's has been refuted, when in fact Barry never made such a claim. Notice, Reciprocating Bill never actually identified what it was that Barry said that was wrong. Why? The whole post was pretending to argue against a position Barry never took, and unsuspecting readers come away with the impression that Barry was refuted. But he wasn't. If Barry had the patience, or if it were me, the proper counter-maneuver would have been to say:
Reciprocating Bill, I'm a little slow, can you explain for the readers if this falsifies something I said?
RB, your assertions in 15 are wrong in every particular. Darwinists’ willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze.
RB's argumentum ad nauseam had its effect, and superficially it could have looked to some onlookers like Barry might have capitulated, when in fact he was disgusted and didn't want to bother with it. I recognize that, but now that it is evident the ID side is prevailing in this recent exchange, I thought it would be instructive to revisit RB's comment. Reciprocating Bill could have pointed out that Mark essentially falsified Lizzie's claim:
a result may deviate from the expected value by chance as opposed to some underlying cause.
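For what it's worth, the practical force of the original coins question can be backed with a quick back-of-the-envelope calculation. Here is a sketch in Python; the only model assumed is the fair-coin ("fairly tossed") hypothesis named in the question itself:

```python
# Under the "fairly tossed" chance hypothesis, each of the 2^500
# equiprobable sequences of 500 coins has probability 2^-500, and
# the all-heads configuration is exactly one such sequence.
N = 500
p_all_heads = 0.5 ** N
print(f"P(all heads | fair toss) = 2^-{N} ~ {p_all_heads:.3e}")

# Even a generous two-sided "at least this extreme" tail (all heads
# or all tails) sits astronomically below any conventional
# significance level, which is why, practically speaking, one rejects
# the fairly-tossed hypothesis for this configuration.
print(f"two-sided tail ~ {2 * p_all_heads:.3e}")
```

Nothing in this sketch says what replaces the rejected hypothesis; it only quantifies why "chance, as fair tossing" gets rejected.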
"She can refuse to integrate her comments and will be off your hook at any time she wishes." No, UB, you're wrong. This time she has been caught saying something so utterly stupid there is no way for her to squirm off the hook. She's stuck making the outlandish claim that chance is never an explanation. Everyone knows that is not true. Why she allowed herself to get stuck so badly is a mystery. That she is in fact well and truly stuck is beyond peradventure. Barry Arrington
scordova, you seem a little drunk on your own salt. :) I wouldn't pop the cork on any champagne where Dr Liddle is concerned. She's twice as clever as you, and three times as committed. She can refuse to integrate her comments and will be off your hook at any time she wishes. She can do so because there is no one to hold her intellectually accountable where she is not willing to do it for herself. Right now in the US there is a national advertising campaign where a former minister has come up with a "secret money code" from God, which can be used to unlock the financial riches deserved by all those who call his toll-free number, and every day we have a minister of some flavor who squints his eyes on TV and tells us that he senses God healing the lump on someone's ankle or the pain in their neck. Perhaps if theists did a little more to hold their fringe accountable, then the average materialist ideologue would find it harder to bullshit themselves, and everyone within earshot. Upright BiPed
Mark Frank mentioned
Q You seem to be asking for what the deliberative democracy theorists would call authentic deliberation. There have been various proposals for the criteria for such deliberation. A good example is the Discourse Quality Index proposed by Steenberg . . .
Interesting---I'd never heard of "authentic deliberation." It sounds promising, but probably doable only between people of high integrity and commitment to finding truth (or at least a good approximation). Thank you for the reference. -Q Querius
scordova: Some good thoughts (if perhaps a bit cynically delivered, due to your battle-worn years of trench warfare) on the debating tactics and situation. Sadly, too often true. Eric Anderson
“Gee Lizzie, you’ve made a conving argument against the inappropriate claims of evolutionary biologists…”
typo
“Gee Lizzie, you’ve made a convincing argument against the inappropriate claims of evolutionary biologists…”
scordova
FWIW, these debates remind me of the courtroom drama of the Jodi Arias murder trial. It dragged on for 5 years. It was evident that even the defense team led by Kurt Nurmi didn't believe Jodi was innocent. Nurmi wanted the judge to excuse him from the case, but the judge said no. The only real drama in the case was seeing whether prosecutor Juan Martinez could secure a conviction of Jodi carrying the death penalty. This is how I view the ID debate. Both the prosecutor (ID) and the defense (of Darwin) are just going through the motions-- they know each other's playbook. The only difference is that the courtroom is a kangaroo court. ID has a great circumstantial case, but circumstantial cases can be tough to prosecute. The Darwin defenders will play a game of confusion and smokescreens and red herrings and misrepresentations. I've seen it so many times. The main challenge, as far as the internet wars go, is to see how quickly the ID side can put the opponent on the mat. In this case, it went a few rounds, until Barb (in another thread) found a vulnerability in Lizzie's argument, namely that Lizzie was opposing the claims of other evolutionists. And this led to something I posted here: https://uncommondesc.wpengine.com/intelligent-design/mark-frank-ok-im-with-you-fellas/#comment-484465 It takes some creativity to deal with red herrings, and sometimes the opponent in debate won't give you much to work with, but occasionally they'll slip. Now the question is whether Lizzie and friends will double down. The proper debate maneuver for the ID side, now that we've identified the vulnerability, is to say:
Question for Lizzie? Given you think anyone resorting to chance explanations is wrong to do so, do you think the evolutionary biologist Koonin is wrong to assert in The Logic of Chance
The overwhelming importance of chance in the emergence of life on Earth
If evolution worked by chance, it obviously couldn’t work at all. Richard Dawkins
etc.
I hold Lizzie in too high regard, for her hospitality toward me, to go to TSZ and post such a discussion. But if I were less friendly, that's how I would play the game. ID proponents need not rush to disagree with their opponents. It might be that your opponent has unwittingly put themselves in a position of disagreeing with their colleagues. You can thus win the exchange without having to win the argument. And now you can really milk it if you're mischievous :twisted:
Gee Lizzie, don't you think Koonin is off base for invoking chance as a mechanism given your wonderful exposition against chance being a mechanism in the first place.
For what it's worth... Dr Liddle has described her conception of chance as all outcomes being equiprobable, and has most assuredly used "chance" as an explanation of that outcome. Upright BiPed
according to the textbook, evolution by chance occurs Moran characterizing Futuyma's textbook http://sandwalk.blogspot.com/2.....hance.html
and then
If evolution worked by chance, it obviously couldn’t work at all. Richard Dawkins
Lizzie is welcome to explain why Dawkins rejects the chance hypothesis for evolution and why Moran and Futuyma accept the chance hypothesis since Lizzie argues:
Chance is not an explanation, and therefore cannot be rejected, or supported, as a hypothesis.
In light of what other evolutionists have said, this of course puts those supporting Lizzie's claim in a bad position. scordova
I went over to The Skeptical Zone to see if Liddle had amended her position in response to correction. Nope. She's still peddling her nostrums:
But the “random” or “chance” part is the sampling part – “chance” is not the hypothesis. This is not “mush” – what is “mush” is to vaguely, mushily, say that the results are due to “chance”.
Barry Arrington
SC: Pardon but I am a little uncomfortable with a maximising-uncertainty definition for chance phenomena [which points to flat randomness], as random variables can also show bias by central tendency or inclination to one end or another of a range of possibilities or more.
Thanks for your objection. You can pursue that line of criticism here: https://uncommondesc.wpengine.com/intelligent-design/the-paradox-of-almost-definite-knowledge-in-the-face-of-maximum-uncertainty-the-basis-of-id/ I welcome a succinct and clear alternative if you can suggest one in that discussion. I don't want to clog Barry's thread, so if you can comment there, I would be grateful. Sal scordova
Mark Frank #43, you display, as far as I know, impeccable manners. However you will have noticed that amongst your ilk you are an exception in this respect. Surely you are aware who the real bullies are in this time frame. Don't ask me to condemn a courageous man who stands up against them. Box
Barry you are getting confused. I never disagreed with either Lizzie or Thisted on the essentials because they are in agreement. All that has happened is that Thisted has used 'chance' in a somewhat slipshod way. Read Lizzie's piece on TSZ for an explanation. Mark Frank
Box - I am still waiting to hear if you agree that #26 is an example of browbeating. (Barry has unilaterally declared that browbeating does not apply when he considers the victim not to be in good faith. I assume you can see through that ruse). Mark Frank
Lizzie: (…) “chance” is not an explanatory hypothesis
Lizzie: Sure, we speak of retaining the null as accepting that the data could have been the result of “chance”.
Uh, … the result of chance? How can something be the result of chance when ‘chance is not an explanation’?
But chance is not the null.
Strawman?
Wiki: In statistical inference of observed data of a scientific experiment, the null hypothesis refers to a general or default position: that there is no relationship between two measured phenomena, or that a potential medical treatment has no effect.
Box
Mark Frank, now you are back to agreeing with Lizzie. Which is it? Is she wrong and the statistics professor right? Is she right and the statistics professor wrong? Surely you are not suggesting they are both right when their statements are irreconcilable. Are you? BTW, #26 is not browbeating. Browbeating can only be done to people who are trying to argue in good faith. That excludes RB. Barry Arrington
"Coming soon. Wait for it." Can't wait... humbled
It is obvious that the Darwinist camp has been invaded and is being led by mentally disturbed individuals, or worse. There are some of those in the other camp too, sorry. This is a war between religions. This war will not be won with interminable "debates". It will be won with the advent of a world-changing technology or scientific discovery that annihilates the current paradigm and knocks everybody's socks off, scientists and laymen alike. Coming soon. Wait for it. Mapou
The chance discussion/debate going on between UD and TSZ, is it over the meaning of the word "chance"? My understanding of chance is as per the definitions found in various dictionaries and encyclopedias, that being: "a possibility of something happening" and/or "the occurrence of events in the absence of any obvious intention or cause." Has the definition changed? I'm still trying to get my head around Krauss telling me that nothing, as in NO THING, is in fact actually something. Confusing... humbled
#34 TSErik I don't think you will find me making many comments about civility and suchlike on either side. I generally avoid it. But Querius raised it as an issue here. As it happens I have several times criticised ID opponents for being uncivil on TSZ. It seems unnecessary to do it here. There are plenty of people to leap on anything that looks uncivil. Mark Frank
#35 Box You have picked out one sentence of Lizzie's which is not her best (I suspect a typo). The passage as a whole is authoritative and well-written. Do you agree that comment 26 is an example of browbeating? Mark Frank
MF #29: I have just seen that Lizzie has addressed this OP on TSZ in a more complete and rigorous fashion
Lizzie: But nobody IS denying ID as an explanation for the configuration of coins. I have rejected the hypothesis that they were fairly tossed. That is not the same as inferring that they were laid by an ID. I do not “agree that ID is the best explanation” although, given the nature of coins and tables, it probably is, just as it would be if they’d been tossed (most likely tosser is an ID).
Very very complete and rigorous indeed. She has stopped making any sense whatsoever. This is scary …. Box
Comment #26 above is a prime example of sniping and browbeating. “I can’t believe someone with your intellect/education/experience would say something stupid as . . .”
It's strange Mark. I've looked back and taken note, and I cannot find you chiding, or calling for civility from the likes of Matzke, or others arguing for your side of the aisle. If I were to venture to TSZ or PT would I see you ardently defending ID proponents from incivility? You aren't a hypocrite are you Mark? You also wouldn't be trying to shift focus from a nonsensical position to a perceived moral high-ground to invalidate your opponents, would you? TSErik
MF: here we go again on probability hyps. While Wm AD did speak on this in connexion with a theoretical value, it is blatant that where we have 500+ bits of informational complexity and a solar-system scope, using the 10^57 atoms as observers observing every 10^-14 s for 10^17 s will only be able to sample the equivalent of 1 straw to a 1,000-light-year-thick cubical haystack of the config space. In the case of the observed cosmos as a whole, 1,000 bits will suffice to swamp search capacity to a much worse degree. So, sampling resources, or rather the cosmic-scale lack thereof, dominate any blind search on blind chance plus mechanical necessity. As a consequence we have no good reason to expect the allowed blind search of any character, limited by atomic resources, to find specific, rare clusters of configurations. This has been pointed out any number of times and has been willfully ignored. That speaks volumes, utter volumes. There is just one empirically warranted source of FSCO/I, and it is design by intelligence. The search challenge above easily tells why. But then, we are dealing here with an ilk that will not acknowledge self-evident truths. KF kairosfocus
#23 Box
Not that I agree – I accept the necessity for self-evident truths and I don’t believe it was helpful that you went on about coins in packages – but can you name one other ID tactic? Because I have never witnessed a discussion where the tables were actually turned. Never have I seen that ID-proponents were forced to adopt the methods eloquently described by Querius in post #9.
Comment #26 above is a prime example of sniping and browbeating. “I can’t believe someone with your intellect/education/experience would say something stupid as . . .”  Mark Frank
#28 Barry
But there was an implicit probability model in the case Lizzie was discussing (the 500 heads scenario).
Barry, when discussing the 500 coins I repeatedly asked if you meant a particular probability model (50% probability of each coin being heads or tails, independent of the other coins). I said that if this is what you mean by chance then I reject it - Lizzie, I am sure, would do the same. However, you refused to confirm that was what you meant. Is that all you meant by chance in relation to the 500 coins? If so, we can all agree and go on to something more useful. Mark Frank
Something Dr. Liddle actually said at TSZ:
But nobody IS denying ID as an explanation for the configuration of coins. I have rejected the hypothesis that they were fairly tossed. That is not the same as inferring that they were laid by an ID. I do not “agree that ID is the best explanation” although, given the nature of coins and tables, it probably is, just as it would be if they’d been tossed (most likely tosser is an ID).
William J Murray
I have just seen that Lizzie has addressed this OP on TSZ in a more complete and rigorous fashion than my comments. I would like to emphasise that she is someone who lives and breathes (and teaches) statistics professionally. Of course an argument from authority is not proof but it does merit trying to understand what the authority is saying. Mark Frank
Mark:
However, to use Chance in the abstract without an implicit or explicit probability model (in this case the null hypothesis) explains nothing
But there was an implicit probability model in the case Lizzie was discussing (the 500 heads scenario). So from your comment, I take it you agree Lizzie was wrong. OK. Barry Arrington
In a clinical trial the null hypothesis is that the apparent advantage of the treatment is due to chance. It's really not. The null is that the treatment has no effect. It's possible (and in fact quite likely) to reject that hypothesis at some p-value threshold or other and for it still to be most probable that the apparent effect is due to chance. (i.e. if a hypothesis is very unlikely, a "significant" p-value only pushes the probability a little towards the belief that it is true). The argument about chance as an explanation seems like a complete waste of time to me. If you can carefully define what the chance hypotheses are (e.g. sampling from a known probability distribution) then I guess chance is an explanation, even if there is a mechanistic reason underlying the abstraction we make for that variation. wd400
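wd400's distinction between rejecting the "no effect" null and quantifying how probable chance is can be illustrated with a toy trial. This is a hypothetical sketch; the patient numbers are invented, not drawn from any study:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical trial: 100 recoveries, 70 of them in the treatment arm.
# Null (treatment has no effect): a recovery is equally likely to land
# in either arm, i.e. p = 0.5.
p_value = binom_tail(100, 70)
print(f"one-sided p ~ {p_value:.2e}")  # far below 0.05, so reject the null

# As wd400 notes, the small p-value rejects "no effect" at the chosen
# threshold; how probable the effect really is also depends on the
# prior plausibility of the treatment working at all.
```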
Barry:
RB, your assertions in 15 are wrong in every particular. Darwinists’ willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze.
RB:
I eagerly await your rebuttal of each of those particulars.
In the words of the man in black, “get used to disappointment.” Your assertions in 15 are so egregiously off base that they indicate one of two things: (1) someone who is invincibly stupid and incapable of understanding the issues; or (2) someone being intentionally dishonest and attempting to obscure the issue. Either way, it is pointless to engage with you. BTW, charity compels me to assume (1) is true. For the readers, I am not going to rise to RB’s bait. If anyone has a good faith question about the nonsense he spewed in 15, post it and I will answer it, or, better yet, go read the paper for yourself. Barry Arrington
#21 BA - I was attempting an ironic comment on your debating style, which mostly comprises assertions that you are right. As it happens I think Thisted's use of "chance" is pretty much the same as Lizzie's, but he was a bit sloppy in suggesting that chance is the explanation. If the p-value is high enough then the explanation may well be the null hypothesis (whatever that is), which incorporates a probability model. The variation in this model can be called chance (i.e. the bit we can't explain, as described in #7). As noted, chance in this sense may well include some intended or designed element which has not been detected. However, to use Chance in the abstract without an implicit or explicit probability model (in this case the null hypothesis) explains nothing. In fact it is pretty much meaningless. If you don't believe me, ask William Dembski. He recognises the need for a specific hypothesis which includes an element of chance when he defines CSI. Mark Frank
KF: You are quite right, and have fleshed out in your comment some of the things I had in mind by my obscure "Now I happen to think there are problems with this argument and that chance may indeed be real." Furthermore, as you point out, the whole point of many, perhaps most, statistical analyses is to reject the chance explanation. I don't know if Lizzie is referring to the word "chance" in a very particular usage in a very particular paper and disputing how it is used in that case. But as a general matter to say that chance isn't an explanation is just silly. Eric Anderson
Mark Frank,
MF #8: I personally get very frustrated when I raise a point and the response is not to address the point but to declare that the opponent’s position is self-evident or that I am being pedantic in trying to define something in detail (two favourite ID tactics).
Not that I agree - I accept the necessity for self-evident truths and I don't believe it was helpful that you went on about coins in packages - but can you name one other ID tactic? Because I have never witnessed a discussion where the tables were actually turned. Never have I seen that ID-proponents were forced to adopt the methods eloquently described by Querius in post #9. Box
BA:
RB, your assertions in 15 are wrong in every particular.
I eagerly await your rebuttal of each of those particulars. Reciprocating Bill
Mark, if that is not what you meant, then what was your point? You said you were defending Lizzie, so I assumed you were trying to make what she said not conflict with what the professor said. If you are now backing off and admitting Lizzie was gobsmackingly wrong, I'm OK with that too. Cheers. Barry Arrington
#19 BA "The professor of statistics really means the same thing as Lizzie." Barry says so. I guess it must be true. Interesting that "chance" in this sense includes design! Mark Frank
Lizzie: Chance is not an explanation. Professor of statistics: The whole point of statistical testing is to rule out the “chance explanation.” Mark Frank 1: The professor of statistics really means the same thing as Lizzie. Mark Frank 2: “There is a lot of stuff here about how difficult ID opponents are to deal with.” Irony. You know how I love it. Barry Arrington
SC: Pardon but I am a little uncomfortable with a maximising-uncertainty definition for chance phenomena [which points to flat randomness], as random variables can also show bias by central tendency or inclination to one end or another of a range of possibilities or more. That is why I think a more physical approach is better: start with a paradigm case such as fair dice, then use it to introduce the concept of highly contingent outcomes for similar initial conditions that have no credible intelligent direction. As you know, I spoke of clashing uncorrelated chains of events and also of the sort of hard-core, written-in randomness that we find in quantum phenomena and statistical mechanics etc. For these I think the model of a box of marbles with pistons at the ends that can give a hard push and set in train movements and collisions culminating in Maxwell-Boltzmann statistics is useful and points to thermodynamics. With Brownian motion as an observable and sufficiently close case, one that played a role in the award of a Nobel prize. That of course then raises the issue of when we see from results that intelligence is a likely cause, and that raises the issue that at some reasonable threshold of complexity, measured by scope of configuration space and available search resources, a blind process such as chance becomes maximally implausible. It is not hard to see -- save for those with a will problem -- that something that is functionally specific and complex beyond 500 bits worth of possible configs will not plausibly result from blind chance and/or mechanical necessity. As the 500-H coins in a row case will aptly illustrate, and as would a similar row of coins spelling out the ASCII code for the first 72 or so characters of this message. KF kairosfocus
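The claim above that a fair die "gives a good imitation of a flat random distribution" is easy to check by simulation. A minimal sketch; the roll count and seed are arbitrary choices, not from the comment:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

# A fair die should land each face close to 10,000 times in 60,000
# rolls, with chance fluctuations on the order of sqrt(npq) ~ 90.
for face in range(1, 7):
    print(face, counts[face])
```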
RB, your assertions in 15 are wrong in every particular. Darwinists' willingness, even eagerness, to twist, distort and obfuscate never ceases to amaze. Barry Arrington
Q @ 6: Your list reminds me of the last argument I had with my wife. :-) Barry Arrington
MF:
He is pointing out that a result may deviate from the expected value by chance as opposed to some underlying cause.
Mark is right. “Chance explanation” in the context of Thisted’s paper refers to the fact that even perfectly executed random sampling from a population will select samples with means (of whatever variable is of interest) that inevitably differ to some degree from the mean of the population from which the samples are drawn. Samples may also display means that differ from one another. Nothing is being hypothesized to “cause” either individual measured values or sample means to take on the values they do, apart from the probabilities inherent in random sampling. The “chance” of concern is inherent in the experimental sampling procedures, not the phenomenon being measured. Fortunately, the probability that random sampling error will result in a sample with a mean that differs from the population mean by a given value is exactly calculable given knowledge of the variability of the value of interest and the size of the sample. “Ruling out chance” refers to quantifying the confidence one has that the difference one observes between sample mean and population mean (or between the means of several samples) is not likely to have arisen due to sampling itself. The “p-value” is an arbitrary threshold vis-a-vis that confidence. Experimental variables that become the focus of hypothesis testing differ. What is hypothesized is that the sample and population mean (or multiple sample means) of the dependent variable of interest differ due to variations in an independent variable, ideally one manipulated by the experimenter. With appropriate experimental controls and large enough sample sizes, the study acquires power sufficient that causal relationships may be established against the background of differences due to sampling error - always within the limitations of that confidence. So we are detecting hypothesized causal relationships against a background of statistical noise due to limitations inherent in random sampling - that is, inherent in the experimental procedure.
This sense of “chance” is therefore NOT on the same footing, as an “explanation,” as the independent variables one is investigating. Reciprocating Bill
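The point above that sample means scatter around the population mean purely through the sampling procedure can be demonstrated numerically. A sketch with made-up numbers; the sigma/sqrt(n) figure is the standard sampling-error result alluded to in the comment:

```python
import random
import statistics

random.seed(1)
# A fixed "population" with mean ~100 and standard deviation ~15.
population = [random.gauss(100, 15) for _ in range(100_000)]
mu = statistics.mean(population)

# Draw many random samples of size n and record their means.
n = 25
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(2_000)]

# The scatter of sample means around mu is pure sampling error, and
# its size matches the theoretical sigma / sqrt(n) = 15 / 5 = 3.
spread = statistics.stdev(sample_means)
print(f"population mean ~ {mu:.1f}")
print(f"sd of sample means ~ {spread:.2f} (theory: {15 / n ** 0.5:.2f})")
```

No cause is hypothesized for any individual sample mean; the deviations arise from the sampling procedure alone, which is exactly the "chance" a p-value is calibrated against.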
EA: As I have had to point out yesterday, chance denotes credibly undirected contingency in a situation. Thus, when chance acts -- I will explain a bit more -- we will see, for quite similar initial conditions, a variation of outcomes across some range in accord with some distribution or other. A variation that is consistent with undirected contingency. The common way for this to happen at macro level is based on the butterfly effect and the uncorrelated collision of causal chains that are often deterministic in themselves. E.g. a die drops under 9.8 N/kg, hits a table and then tumbles and settles. Thanks to unavoidable irregularities and variations, plus twelve edges and eight corners, we see a fair die giving a good imitation of a flat random distribution with the values from 1 to 6. It is reasonable to summarise this sort of undirected contingency under the name, chance. In effect we are getting a random variable as our outcome that sufficiently mimics mathematical models of randomness to be good enough for government work. (And don't ask me about where dice or the equivalent are used in government work, on the principle of if you eat sausages don't visit a sausage factory. Let me just say that my Dad once taught me how to use a telephone directory as a poor man's random number table, as the line codes are generally uncorrelated with names; so even though names are not random and line codes are not random, the uncorrelated clash sufficiently often is. But it won't work if all the Smiths live in the same district and all the Browns in another.) The second area is one where randomness may be directly manifest: quantum-based phenomena, especially potential barrier tunnelling. Alpha particle emission is a classic case in point. A random rate effect, giving rise to a reliable and precise half-life for a sufficiently large sample. It is also reasonable in this case to speak of a chance process.
So, there is nothing wrong whatsoever in discussing chance causal factors in these sorts of contexts. Where of course in physics these factors came in once gases were studied through kinetic theory and statistical mechanics. It was soon realised that the best explanation of gas behaviour was random molecular motion, connected to temperature as an index of the average random kinetic energy per degree of freedom. (And that is getting too close to SC's overkill.) Let's just say that the phenomenon of Brownian motion was recognised as a manifestation of this motion, and from this the reality of atoms and molecules was firmly established by Einstein, in one of the papers that led to his Nobel Prize. (He did not win the prize because of Relativity!) So, when I see the sorts of dismissals we are seeing, it is clear that the objectors are refusing to acknowledge basic statistics -- in which chance is a well recognised concept -- and a lot of basic physics too that builds on the concept. But then the root point is in the tail of the first paragraph: we accept chance when the variation is in accord with what would happen with credibly undirected contingency, which we can often model, e.g. with the coins or with the Gaussian curve etc. That is, implicitly, we have the contrast that there are two explanations for contingency: chance and design. And we must needs be able to distinguish them credibly, i.e. the need for a design-inference explanatory filter is obvious once we squarely face the issue, what is chance? Hence the resistance we are seeing, to the point of absurdity. KF kairosfocus
Dr Liddle et al: I presume you are watching. I simply beg to remind you that for many years, there has been a common practice of hypothesis testing by rejecting the null in light of evidence, the null being a hypothesis that chance -- undirected contingency -- accounts for the results observed. This comes out in Fisherian inference testing, and is in the picture in ANOVA. Where, basically the idea is that if we are sufficiently in a far-skirt tail zone of interest for a proposed distribution, it is unlikely that that is by chance. 5% tails are commonly used, as are 1% tails. This, you MUST know. It is basic statistics. You may be able to dimly recall how, several times, I set up the mental exercise of setting up a chart with a bell distribution with stripes and then suggesting dropping darts from a height sufficient that the darts would fall more or less evenly. Obviously, the central bulge of the bell shape is going to be hit far more often than the far tails are. And, if our search by darts is sufficiently small in number, the sufficiently far tails will reliably not be hit. In the case of 500 coins -- let's refine: fair coins, in a row, on a table, the situation is that there are 3.27*10^150 possible configurations, i.e. 2 ^ 500. No search process on the gamut of our solar system since it began can sample more than the equivalent of 1 straw to a cubical haystack as thick as our barred spiral galaxy's central bulge. In such a situation it is utterly unreasonable to expect to hit 500 H or the like by chance. The dominant cluster of configs will be near 50-50, with no particular order of H and T. So, we are utterly unlikely to hit alternating H and T either. For reasons which are only too plain. So, you are speaking against the truth you know, in hopes of scoring points off those who would not know better. Revealing. And, sad. KF kairosfocus
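The arithmetic in the comment above is straightforward to verify. A sketch; the 10^57 atoms, one observation per 10^-14 s, and 10^17 s figures are the ones quoted in the comment itself:

```python
# Configuration space of 500 two-sided coins.
configs = 2 ** 500
print(f"2^500 ~ {float(configs):.3e}")   # ~3.27e150, as stated

# Upper bound on blind samples: 10^57 atomic observers, one
# observation every 10^-14 s, sustained for 10^17 s.
max_samples = 10 ** 57 * 10 ** 14 * 10 ** 17   # = 10^88
fraction = max_samples / float(configs)
print(f"fraction of the space sampled ~ {fraction:.1e}")
```

Whatever one makes of the inference being drawn from it, the ratio itself (about 10^-63 of the space) checks out.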
#11 CC I share your scepticism. The trouble is that most of this research takes place in the context of furthering democracy. To exclude one group from the debate is undemocratic. I would like to try a very structured environment such as MIT's deliberatorium (http://cci.mit.edu/klein/deliberatorium.html) where a main proposal is articulated and comments have to indicate how they relate to that proposal (e.g. reason for, reason against, request for clarification). I would also limit comments to 200 words so that participants were not tempted to waste words on personal comments or irrelevances. Mark Frank
Mark Frank, Querius: I am very skeptical that such an environment is possible unless there are criteria for the participants. If you have an open forum where anyone can sign up and participate, you will always have a problem maintaining the forum discussion. On second thought, even in such an environment you will have ego clashes, so it is not possible! coldcoffee
Q: You seem to be asking for what the deliberative democracy theorists would call authentic deliberation. There have been various proposals for the criteria for such deliberation. A good example is the Discourse Quality Index proposed by Steenbergen, which is based on six criteria:

I. Justification: assertions are backed up with justifications.
II. Common good: arguments are for the common good and not for the benefit of particular citizens.
III. Respect: discussion is conducted on the basis of respect for participants and their arguments.
IV. Constructive politics: discussion is constructive and attempts to find a mutually acceptable solution.
V. Participation: all citizens affected by the deliberation are involved (presence) and have equal ability to express their views (voice).
VI. Authenticity: participants do not attempt to deceive each other.

There have also been various attempts to create Internet environments which encourage such discourse. However, none of them have been outstandingly successful. Above all it needs a will from the participants to make it that way. Mark Frank
Mark Frank wrote
May I politely suggest that it is more constructive to address the argument than rant about the unreasonable nature of your opponents.
It might indeed be more constructive, but I usually find myself interrupted every few words, followed by many of the items that I listed. I'd propose a set of discussion rules that would ensure equal time for both parties, no interruptions, no long lectures, and a requirement that each point be fairly addressed and answered. The discussion is judged a loss for the first person to use an ad hominem attack or other unscrupulous tactic, and judges determine to what extent each party has answered the other. It's probably a hopeless cause, but I can dream . . . -Q Querius
There is a lot of stuff here about how difficult ID opponents are to deal with. In fact there is a lot of stuff about this throughout UD! Whenever two parties get into a debate about things they care about, each party always thinks the other party is difficult, irrational, etc. Querius's list in #6 is quite a good list, but it always applies to the other guy. I personally get very frustrated when I raise a point and the response is not to address the point but to declare that the opponent's position is self-evident, or that I am being pedantic in trying to define something in detail (two favourite ID tactics). May I politely suggest that it is more constructive to address the argument than to rant about the unreasonable nature of your opponents. Mark Frank
Barry: You have of course banned Lizzie from responding, so I will do my best. Thisted's paper is excellent, but perhaps he could have phrased it a bit better. He is pointing out that a result may deviate from the expected value by chance, as opposed to some underlying cause. Another way of phrasing this is to say there is no known explanation for the deviation. This use of "chance" does not preclude design; in fact, the deviation from the expected value might have been caused by some undetected, intelligent interference. Chance in this context just stands for "explanation not known." Mark Frank
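[The logic Thisted describes, and that Frank's comment turns on, can be illustrated with a quick simulation. This is a hypothetical sketch added for illustration, not part of the thread: under a pure-chance null where two trial groups have the same true success rate, small apparent advantages appear constantly, but large ones are rare.]

```python
import random

random.seed(1)

def null_difference(n=100):
    """Difference in success proportion between two groups of n patients
    when both groups share the SAME true success rate (the chance-only null)."""
    a = sum(random.random() < 0.5 for _ in range(n))
    b = sum(random.random() < 0.5 for _ in range(n))
    return (a - b) / n

# Simulate 10,000 studies in which chance is the ONLY factor at work.
diffs = [abs(null_difference()) for _ in range(10_000)]

# How often does chance alone produce an advantage of 20 points or more?
frac = sum(d >= 0.20 for d in diffs) / len(diffs)
print(f"fraction of null studies with a 20-point gap: {frac:.4f}")
```

[An observed 20-point advantage arises in well under 2% of these chance-only studies, which is exactly the reasoning behind a p-value: the larger the observed effect, the less plausible "chance alone" becomes as the explanation.]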