Uncommon Descent Serving The Intelligent Design Community

# Jerad’s DDS Causes Him to Succumb to “Miller’s Mendacity” and Other Errors


## Part 1: Jerad’s DDS (“Darwinist Derangement Syndrome”)

Sometimes one just has to stop, gape and stare at the things Darwinists say.

Consider Jerad’s response to Sal’s 500 coin flip post.  He says:  “If I got 500 heads in a row I’d be very surprised and suspicious. I might even get the coin checked. But it could happen.”  Later he says that if asked about 500 heads in a row he would respond:  “I would NOT say it was ‘inconsistent with fair coins.’”  Then this:  “All we are saying is that any particular sequence is equally unlikely and that 500 heads is just one of those particular sequences.”

No Jerad.  You are wrong.  Stunningly, glaringly, gobsmackingly wrong, and it beggars belief that someone would say these things.  The probability of getting 500 heads in a row is (1/2)^500.  This is a probability far, far beyond the universal probability bound.  Let me put it this way:  if every atom in the universe had been flipping a coin every second for the last 13.8 billion years, we would not expect to see this sequence even once.
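The arithmetic here is easy to check directly. As a back-of-the-envelope sketch (taking a round 10^80 atoms in the observable universe and roughly 4.4 × 10^17 seconds of cosmic history as assumed figures):

```python
from math import log10

# Probability of 500 heads in a row with a fair coin
log10_p = 500 * log10(0.5)          # about -150.5

# Generous upper bound on trials: ~10^80 atoms, each completing one
# 500-toss sequence per second for ~4.4e17 seconds (both round,
# assumed figures)
log10_trials = 80 + log10(4.4e17)   # about 97.6

# Expected number of all-heads sequences ever observed
log10_expected = log10_trials + log10_p
print(f"P(500 heads) ~ 10^{log10_p:.1f}")
print(f"expected occurrences ~ 10^{log10_expected:.1f}")  # about 10^-52.9
```

Even on these deliberately generous assumptions, the expected number of occurrences is some fifty orders of magnitude below one.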

But, insists Jerad, it could happen.  Jerad’s statement is true only in the trivial sense that flipping 500 heads in a row is not physically or logically impossible.  Nevertheless, the probability of it actually happening is so vanishingly small that it can be considered a practical impossibility.  If a person refuses to admit this, it means they are either invincibly stupid or piggishly obstinate or both.  Either way, it makes no sense to argue with them.  (Charity compels me to believe Jerad will reform his statements upon reflection.)

But, insists Jerad, the probability of the 500-heads-in-a-row sequence is exactly the same as the probability of any other sequence.  Again, Jerad’s statement is true only in the trivial sense that any 500-flip sequence of a fair coin has exactly the same probability as any other.  Sadly, however, when we engage in a non-trivial analysis of the sequence, we see that Jerad’s DDS has caused him to succumb to the Darwinist error I call “Miller’s Mendacity” (in homage to Johnson’s “Berra’s Blunder”).*  Miller’s Mendacity is named after Ken Miller, who once made the following statement in an interview:

One of the mathematical tricks employed by intelligent design involves taking the present day situation and calculating probabilities that the present would have appeared randomly from events in the past. And the best example I can give is to sit down with four friends, shuffle a deck of 52 cards, and deal them out and keep an exact record of the order in which the cards were dealt. We can then look back and say ‘my goodness, how improbable this is. We can play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ You know what; that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.

Miller’s analysis is either misleading or pointless, because no ID supporter has ever, as far as I know, argued “X is improbable; therefore X was designed.”  Consider the example advanced by Miller, a sequence of 52 cards dealt from a shuffled deck.  Miller’s point is that extremely improbable non-designed events occur all the time, and therefore it is wrong to say extremely improbable events must be designed.  Miller blatantly misrepresents ID theory, because no ID proponent says that mere improbability denotes design.

Let’s consider a more relevant example.  Suppose Jerad and I played 200 hands of heads-up poker and I was the dealer.  If I dealt myself a royal flush in spades on every hand, I am sure Jerad would not be satisfied if I pointed out the (again, trivially true) fact that the sequence “200 royal flushes in spades in a row” has exactly the same probability as any other 200-hand sequence.  Jerad would naturally conclude that I had been cheating, and that when I shuffled the deck I only appeared to randomize the cards.  In other words, he would make a perfectly reasonable design inference.
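The odds behind the poker example can be computed directly. For simplicity this models each hand as a single five-card deal from a freshly shuffled deck (an assumption; real heads-up poker deals more cards than this):

```python
from math import comb, log10

# One specific five-card hand (the royal flush in spades) out of
# C(52, 5) equally likely five-card hands
hands = comb(52, 5)                   # 2,598,960 possible hands
log10_run = 200 * log10(1 / hands)    # 200 such hands in a row

print(f"P(one specific hand) = 1/{hands:,}")
print(f"P(200 in a row) ~ 10^{log10_run:.0f}")  # about 10^-1283
```

A probability of roughly 10^-1283 dwarfs even the 10^-150 universal probability bound discussed above.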

What is the difference between Miller’s example and mine?  In Miller’s example the sequence of cards was only highly improbable. In my example the sequence of cards was not only highly improbable, but it also conformed to a specification.  ID proponents do not argue that mere improbability denotes design. They argue that design is the best explanation where there is a highly improbable event AND that event conforms to an independently designated specification.

Returning to Jerad’s 500 heads example, what are we to make of his statement that if that happened he “might” get the coin checked?  Blithering nonsense.  Of course he would not get the coin checked, because Jerad would already know to a moral certainty that the coin is not fair, and getting it “checked” would be a silly waste of time.  If Jerad denies that he would know to a moral certainty that the coin was not fair, that only means that he is invincibly stupid or piggishly obstinate or both.  Again, either way, it would make no sense to argue with him.  (And again, charity compels me to believe that upon reflection Jerad would not deny this.)

## Part 2: Why Would Jerad Say These Things?

Responding to Jerad’s probability analysis is child’s play.  He makes the same old tiresome Darwinist errors that we have had to correct countless times before and will doubtless have to correct again countless times in the future.

As the title of this post suggests, however, far more interesting to me is why Jerad – an obviously reasonably intelligent commenter – would say such things at all.  Sal calls it SSDD (Space Shuttle Denying Darwinist or Same Stuff, Different Darwinist).  I call it Darwinist Derangement Syndrome (“DDS”).  DDS is somewhat akin to Tourette syndrome in that sufferers appear to be compelled to make inexplicable statements (e.g., if I got 500 heads in a row I “might” get the coin checked or “It could happen.”).

DDS is a sad and somewhat pathetic condition that I hope one day to have included in the Diagnostic and Statistical Manual of Mental Disorders published by the American Psychiatric Association.  The manual is already larded up with diagnostic inflation; why not another?

What causes DDS?  Of course, it is difficult to be certain, but my best guess is that it results from an extreme commitment to materialist metaphysics.  What is the recommended treatment for DDS?  The only thing we can do is patiently point out the obvious over and over and over, with the small (but, one hopes, not altogether non-existent) chance that one day the patient will recover his senses.

*I took Ken Miller to task for this error in this post.

It always happens in probability discussions. It’s very annoying. *harrumph*
:-) Brits, what can you do? :-) (speaking from just outside of York) Jerad
I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances.
And this is why this whole conversation is a nonsense unless we factor in our actual state of knowledge (as you have just done). In an "ideal circumstance" we might *know*, with God's Eye (or Mathematician's Eye) knowledge, that the coin was fair, and was fairly tossed. In which case, no matter what the sequence, we would reject Design. But the whole point of making inferences is that we do NOT know, with God's Eye knowledge, that the coin was fair and fairly tossed. So we have to weigh up the relative probabilities of a fair coin, fairly tossed, or something else. And as 500 Heads is one of a tiny subset of Special sequences, and therefore extremely improbable, almost any other explanation is more likely than "fair coin, fairly tossed". It's really no more complicated than that. Which is why I suggested a Bayesian formalisation of the inference, where at least we make our state of knowledge explicit. If we do not, we end up in silly arguments where the only difference is the amount of knowledge assumed. Jerad isn't suffering from "DDS" any more than Barry is suffering from "IDS". But the whole conversation is suffering from people thinking other people are being stupid when they are simply making different but (sometimes) unspecified assumptions about what we know to start with. It always happens in probability discussions. It's very annoying. *harrumph* Elizabeth B Liddle
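Liddle's suggested Bayesian formalisation can be sketched in a few lines. The numbers here are purely illustrative assumptions: the alternative hypothesis is a double-headed coin (which yields 500 heads with probability 1), and the prior odds for "something other than a fair coin, fairly tossed" are set at a very skeptical 1 in 10^20:

```python
from math import log10

# Log10 likelihoods of observing 500 heads under each hypothesis
loglik_fair   = 500 * log10(0.5)   # fair coin, fairly tossed
loglik_biased = 0.0                # assumed alternative: double-headed coin, P = 1

# Assumed prior: 1 in 10^20 that anything other than
# "fair coin, fairly tossed" is going on
log_prior_odds = -20

log_posterior_odds = log_prior_odds + (loglik_biased - loglik_fair)
print(f"posterior log10-odds for 'something else': {log_posterior_odds:.1f}")
# about +130.5: the alternative wins by ~130 orders of magnitude
```

The point of the exercise is that the prior encodes the state of knowledge explicitly: even a prior overwhelmingly hostile to "something else" is swamped by the likelihood ratio.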
Jerad: All you have managed to do is to underscore my point. KF kairosfocus
Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself — that on a fair coin assumption or situation any one state is as probable as any one other state.
No, that was not a strawman distortion, that was the topic of the 22 sigma thread I responded to.
But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations.
What wider state are you talking about? I haven't responded to any thread which was about anything other than mathematics. Intentionally so.
What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance.
Whatever. You're talking about clusters or groups of outcomes again and I've already agreed they are more likely to happen.
In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected — in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan — on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice.
You can pick or define clusters or zones or partitions of outcomes that are in your interest. Sure. And you have to pick a 'measure' which, in this particular case was number of heads or tails. On other measures the outcome of all Hs would NOT be so far from the mean. If you've got a particular situation you want me to address then bring it up.
So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible.
It's like shouting at a storm. I've said, MANY TIMES, I would first try and find some explanation other than chance before I fell back on that for a highly unusual result.
And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin’s a priori materialism.
Can I get a translation please? Have you found an error in my mathematics? Doesn't look like it. If you have a situation you'd like me to address that I haven't already done multiple times then bring it up.
It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent — and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one — the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, HT patterns exhibiting ASCII code in English, etc. are cases in point.
I've already stated many times my response to this. You don't agree with me so you're trying to intimidate me into backing down or agreeing with you by posting long, rambling paragraphs which make comprehension difficult. I'll say it one more time: IF I flipped a coin 500 times and got all heads I'd try very, very, VERY hard to find an explanation for it even though I know that outcome is just as likely as any other. I might not ever really believe it was due to chance. BUT, if I couldn't find some explanation then I would write it off as a fluke, a result which is physically and mathematically consistent with the situation and no need to invoke some other agency which I looked for and couldn't find!! Same with getting a randomly generated line of Shakespeare. I am not trying to distort anything, I am not trying to speak to anything other than those simple, clear, fairly abstract and ideal circumstances. Jerad
PS: It is also highly significant in the context of the wider contentious debate that you have twisted the logic of the design inference into pretzels. When we come across an outcome that we cannot directly observe the source of, we need to infer a best empirically warranted explanation. The first default is mechanical necessity leading to natural regularities (which includes the latest attempted red herring, standing wave patterns revealed by dusting vibrating membranes with sand or the like). This is overturned by high contingency. If a given aspect is highly contingent -- and this includes cases which may reflect a peaked/biased distribution of outcomes, not just a flat one -- the default is chance if the outcome is reasonably observable on chance. Choice comes in in cases where we have highly unlikely outcomes on chance that come from highly specific, restricted zones of interest that plausibly reflect patterns that purposeful choice can account for. All heads, HT in alternation, HT patterns exhibiting ASCII code in English, etc. are cases in point. kairosfocus
Jerad: Let us observe your crucial strawmannising step:
Any particular single sequence is just as likely as any other single sequence in a truly ‘fair’ or random selection process. And, as I’ve said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?
Your error has been repeatedly pointed out, from several quarters. You have insisted on a strawman distortion that caricatures the situation by highlighting something that is true in itself -- that on a fair coin assumption or situation any one state is as probable as any one other state. But in the wider context, we are precisely not dealing with any one state in isolation, but partitioning of the space of possibilities in light of various significant considerations. What is to be explained is the partition we find ourselves in, on best explanation. And zones of interest that are overwhelmed by the statistical weight of the bulk of the possibilities are best explained on choice not chance. In short, you refuse to recognise that there are such things as significant partitions that can have a significance above and beyond merely being possible outcomes. Also, you are suppressing the underlying issue, that there are two known causes of highly contingent outcomes, chance and choice. Where, what is utterly unexpected -- in the relevant case on the gamut of atomic resources in our solar system for its plausible lifespan -- on chance, indeed is so maximally improbable that it is reliably unobservable on chance, is easily explained and feasibly observable on choice. So, in a context of inference to best causal explanation, it is morally certain that choice was involved not chance. That is, one would be foolishly naive or irresponsible on matters of moment, to insist on acting as though chance is a serious explanation where choice is at all possible. And, in the wider context that surfaces the underlying issue behind ever so many objections on the credible source of objects exhibiting configurations reflecting islands of function in seas of non-function: there is a major ideological a priori bias acting in origins science in our day that needs to be exposed for what it is and how it distorts reasoning on matters of origins. Namely Lewontin's a priori materialism. 
It is as simple as that. KF kairosfocus
Namely, you are looking at the bare logical possibility of a given single state of 500 coins as an outcome of chance and suggest that any given state is as improbable as any other on Bernoulli-Laplace indifference.
Yes, that is what I am addressing. And if I've done something incorrectly then please point it out.
But that is not what we are looking at in praxis.
That is all I was doing. Just discussing the mathematics.
What we have in fact is the issue of arriving at a special — simply describable or functionally specific, or whatever is relevant — state or cluster of states, vs other, dominant clusters of vastly larger statistical weight. I am sure you will recognise that in an indifference [fair coins, here] situation, when we have such an uneven partition of the space of possibilities, clusters of overwhelming statistical weight (which will be near 250 H:250 T, in no particular pattern) will utterly dominate the observable outcomes.
As I already said above in different terms. I don't know what you're arguing against. Obviously a fairly jumbled mix of 500 Hs and Ts is more likely than any single outcome including all Hs. So?
The state 500 H, that of 500 T, or a state that has in it a 72-or-so-character ASCII text in English are all examples of remarkable, specially describable, specific and rare outcomes, deeply isolated in the field of possibilities.
Any particular single sequence is just as likely as any other single sequence in a truly 'fair' or random selection process. And, as I've said already, groups or clusters of outcomes will always have a higher probability than any single outcome. What are you arguing against?
So, if you are in an outcome state that is maximally improbable on chance, in a special zone that a chance based search strategy is maximally unlikely to attain, that is highly remarkable. Especially in a situation where there is the possibility of accessing such by choice contingency as opposed to chance contingency.
Why don't you specify a null and an alternate hypothesis and a confidence interval you'd like to test? Or be more clear what you're getting at.
In short, you have been tilting at a strawman.
I've been addressing a very particular mathematical point. If you can find any fault with what I've actually said then please point it out.
So, while it is strictly logically possible that lucky noise has caused all of this, that is by no means the best, empirically warranted, reasonable explanation. Indeed, it is quite evident on analysis of relevant scientific investigations, that a great many things in science are explained by investigating the sort of causal factors that are empirically reliable as causes of a given effect, then once that has been established, one treats the effect as a sign of its credible, reliably established cause. Text of posts by unseen posters is a good simple case in point.
How come everyone misses the point that I've made MANY TIMES that I would be extremely careful to first root out any bias or influence in the system before I attributed an outcome to chance?
And, if you or I were to come across a tray of 500 coins with all heads uppermost, or alternating heads and tails, or ASCII code for a text in English, that would be on its face strong evidence of choice contingency, AKA design, as best and most reasonable — though not only possible — explanation. That is patent. So, why the fuss and bother not to infer the blatantly reasonable?
What are you arguing against? If design was detectable then I assume I would discover that BEFORE ascribing a highly unusual outcome to chance!! Design would be a bias in the system, making it not 'fair'. You are the first one to accuse your opponent of attacking a strawman but you seem to be doing so here. Nothing I've said has been overturned, I was addressing a pretty basic mathematical issue, I've been very clear that ascribing chance is my last fall back for a highly organised outcome AFTER first making very, very, very sure there was no other detectable influence. I don't get it. Should I just repeat myself over and over again? Jerad
Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities.
If you can find something wrong with my mathematics then please point it out. I don't see how you can argue with the fact that any given sequence of Hs and Ts, including all Hs or all Ts or HTHTHT . . . or HHTTHHTT . . . or whatever sequence you'd like to specify, are all equally likely to occur if the generating procedure is truly 'fair'. Obviously any class of outcomes is more likely to occur than any given single sequence. And obviously classes closer to the 'mean' (depending on what your measure is) are more likely to occur. But just because 'we' assign meaning or significance to certain outcomes or classes of outcomes doesn't change the mathematics. Jerad
Jerad: Are you familiar with the related results of statistical thermodynamics, which ground the 2nd law of thermodynamics? Namely, that there are some things that are so remote, so beyond observability on relative statistical weight of clusters of partitioned configs, that they are not spontaneously observable on the gamut of a lab or the solar system or the observed cosmos? You have just done the equivalent of suggesting that you believe in perpetuum mobiles as feasible entities. KF kairosfocus
Dr Liddle: The fundamental issue is that we are dealing with large config spaces and blind samples (for sake of argument). Once we can define narrow zones of interest [so, partition the space on something separately specifiable rather than listing out the configs we want in detail . . . ], and once we have rather limited resources -- with W = 2^500, 10^57 atoms in our solar system for 10^17 s is very limited -- we have a situation where, not on probability but on sampling theory, we have very little likelihood of capturing such zones of interest on any blind process within the reach of resources. We have no right to expect to see anything but the overwhelming bulk partition. In the case of 500 coins, the distribution is very sharply peaked indeed, centred on 250 H:250 T. 500 H is so far away from that that it is a natural special zone (and notice how simply it can be described, i.e. how easy the algorithm to construct this config is). In more relevant cases, we have clusters, which I have described as Z1, z2, . . . zn, where our sampling resources are again constrained. For the 500 bit solar system case, we are looking at sampling the equivalent of one straw-sized sample, blindly, from a haystack 1,000 LY across. Even if the stack were superposed on our galactic neighbourhood, with 1,000s of star systems, since stars are several LY apart and are as a rule much smaller than a LY across, we are in a needle-in-haystack challenge on steroids. And, notice, I am not here demanding that only one state be in a zone, or that there be just one zone. Nope; so long as there is reason to see that zones are isolated and search resources are vastly overwhelmed, we are in a realm where the point holds. This then extends to the genome, where a viable one starts at 100 - 500,000 base pairs, and multicellular body plans are looking at 10 - 100+ mn bases, dozens of times over on the scope of the solar system.
Where, also, we know that functional specificity and complexity joined together are going to sharply confine acceptable configs, as can be seen from just the requisites of text strings in English. Such gives analytical teeth to the inductive point that the only known, observed source of FSCO/I is design. KF kairosfocus
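Two of the quantities appealed to in the comment above can be computed directly: how sharply the 500-coin binomial distribution peaks around 250 heads, and how small a fraction of the 2^500 configuration space the stated search resources (~10^57 atoms sampling one state per second for ~10^17 seconds, both round figures taken from the comment) could ever examine:

```python
from math import comb, log10

n = 500

# Nearly all probability mass lies within ~50 heads of the 250:250 peak
mass = sum(comb(n, k) for k in range(200, 301)) / 2 ** n
print(f"P(200..300 heads) = {mass:.6f}")   # very close to 1

# Fraction of the 2^500 space reachable by ~10^57 atoms sampling one
# state per second for ~10^17 seconds (assumed round figures)
log10_fraction = (57 + 17) - n * log10(2)
print(f"sampled fraction ~ 10^{log10_fraction:.1f}")  # about 10^-76.5
```

So outcomes near a 250:250 split are practically guaranteed, while the hypothesized sampling effort covers only about one part in 10^76 of the space.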
Hmm. I would conclude “design” until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to allow chance stand as a “plausible explanation” even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion.
We'll just have to agree to disagree on that then I guess.
Let’s say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design. Which would you choose?
Without other data or information pointing to the existence of a designer present at the time with the necessary skills and opportunity then I'd say chance is a more parsimonious explanation as it posits no undefined or unproven cause. I would also point out that accepting design as a plausible explanation is already heading down the path of defining and limiting the skills and motivations of the designer. Something that I've been told over and over again ID does not do.
Which is the more rational or the more probable explanation?
Having no independent evidence of a designer then I'd go with chance. OR, just say we don't know. I do not see how you can think an undefined and unproven designer is a more rational explanation. That's just faith. I have nothing against faith but I don't think it should be promoted as scientific. Especially when, although admittedly highly improbable, chance is 'consistent with the laws of mathematics' and physics and known to happen. Jerad
As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I’d be suspicious and I would check for anything that had affected the outcome. But if I found nothing ‘wrong’ then I’d conclude it was a lucky fluke. There’s no faith involved.
Well, I guess I'm a bit more simple minded than Dr. Dembski. And I bet you would be too if you were playing a poker hand and ran into someone with that kind of "luck" opposing you.
I would also NOT conclude ‘design’ since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation.
Hmm. I would conclude "design" until an explanation surfaces. And I do not think that chance is a plausible explanation given the astronomical odds. This is where the faith comes in. You are willing to allow chance stand as a "plausible explanation" even though the odds are ridiculously low. This to me is not a scientific or a rational conclusion. Let's say you were unable to examine the details of the experiment because it took place 1 million years ago. All you have are the odds to go by. You can either accept chance as a plausible explanation or you can posit some type of design. Which would you choose? Which is the more rational or the more probable explanation? tjguy
Neil and Jerad, The law of large numbers is well-accepted in mathematics. Thus, I don't think Barry is misusing probability with respect to the coins. I wrote on the issue here: The Law of Large Numbers vs. KeithS scordova
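The law of large numbers point is easy to illustrate by simulation: the running proportion of heads converges toward 0.5 as flips accumulate, even though any particular long sequence remains equally (and astronomically) unlikely. A minimal sketch:

```python
import random

random.seed(1)  # fixed seed for reproducibility
heads = 0
flips = 0
for n in (100, 10_000, 1_000_000):
    # Continue flipping a simulated fair coin up to n total flips
    while flips < n:
        heads += random.random() < 0.5
        flips += 1
    print(f"{n:>9,} flips: proportion of heads = {heads / n:.4f}")
```

The proportions printed cluster ever more tightly around 0.5, which is what the law of large numbers guarantees for a fair coin.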
JWTruthInLove, Awesome find of Shallit's essay! scordova
(above cross-posted at TSZ, with some typos fixed). Elizabeth B Liddle
Chance can't do anything; chance cannot flip the coins . . . Andre
Which is more rational to believe? Which takes more faith to believe? 1. That 500 coins were tossed and they landed in exactly the order you predicted ahead of time by pure chance? Or 2. That there was monkey business involved? If you have the faith to believe that it happened by pure total chance, fine, we just don’t think that is rational given the odds.
As I said, if I got 500 Hs or, indeed, any prespecified sequence on the very first try I'd be suspicious and I would check for anything that had affected the outcome. But if I found nothing 'wrong' then I'd conclude it was a lucky fluke. There's no faith involved. I would also NOT conclude 'design' since, as stated by Dr Dembski, first you have to allow for any and all non-design explanations. And chance is such a plausible explanation. Jerad
BB: For examples of contradictory other hands, watch what happens when an evo mat advocate is pressed on the want of a root to the tree, and how the evidence of what chem and physics applies in warm little ponds does not point to a credible possibility of OOL. Very fast, they will pull the switcheroo that OOL strictly is not part of the theory of evo. (This has happened ever so many times here at UD, and I suspect Talk Origins will exemplify same, etc.) KF kairosfocus
p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified Information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
(And, Stephen Meyer presents much the same point in his Signature in the Cell, 2009, not exactly an unknown book.) Why then do so many statistically or mathematically trained objectors to design theory so often present the strawman argument that appears so many times yet again in this thread? First, it cannot be because of lack of capacity to access and understand the actual argument; we are dealing with those with training in relevant disciplines. Nor is it that the actual argument is hard to access, especially for those who have hung around at UD for years. Nor is such a consistent error explicable by blind chance; chance would make them get it right some of the time, by any reasonable finding, given their background. So, we are left with ideological blindness, multiplied by willful neglect of duties of care to do due diligence to get facts straight before making adverse comment, and possibly willful knowing distortion out of the notion that debates are a game in which all is fair if you can get away with it. Given that there has been corrective information presented over and over and over again, including by at least one Mathematics professor who appears above, the collective pattern is, sadly, plainly: seeking rhetorical advantage by willful distortion. Mendacity in one word. If we were dealing with seriousness about the facts, someone would have got it right and there would be at least a debate that, nope, we are making a BIG mistake. The alignment is too perfect. Yes, at the lower end, those looking for leadership and blindly following are just that, but at the top level there is a lot more responsibility than that. Sad, but not surprising. This fits a far wider, deeply disturbing pattern that involves outright slander and hateful, unjustified stereotyping and scapegoating. Where, enough is enough.>> ______________ Prediction: this too will be studiously ignored in the rush to make mendacious talking points.
(NR, KS, AF et al just prove me wrong by actually addressing this on the merits. Please.) KF kairosfocus
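As a numerical aside, the correspondence between the 1-in-10^150 probability bound and the 500-bit complexity bound quoted above can be checked directly, since a probability p carries -log2(p) bits of information (a quick sketch; the "500 bits" figure is a conventional rounding up):

```python
import math

# Dembski's "universal probability bound" of 1 in 10^150 and the
# "universal complexity bound" of 500 bits are two views of roughly
# the same quantity: a probability p corresponds to -log2(p) bits.
bits_for_bound = 150 * math.log2(10)   # bits equivalent to odds of 10^-150
print(round(bits_for_bound, 1))        # 498.3 -- conventionally rounded up to 500

# Conversely, 500 bits corresponds to odds of 1 in 2^500 ~ 3.3 * 10^150.
print(round(math.log10(2 ** 500), 1))  # 150.5
```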
And if you want to do 20 amino acids in a specified sequence, here is more fun! http://www.random.org/sequences/ Good luck with chance and randomness! You will quickly learn that the only workable solution is a very specific arrangement made by a mind! Knock yourselves out! Andre
Onlookers: Observe how studiously Darwinist objectors have ignored the issues pointed out step by step at 48 above. It is patent that mere facts and reason are too inconvenient to pay attention to in the haste to make favourite talking points. Which reminds me all too vividly of the exercise over the past month in which direct proof of the undeniability of a patent fact, that error exists, suddenly turned into rhetorical pretzels. We are dealing here with ideological agendas all too willing to resort to mendacity by continuing a misrepresentation, not reason, and certainly not reason guided by a sense of duty to accuracy and fairness. Be warned accordingly. KF kairosfocus
BB: In short, EVERY time Darwinists appeal to the tree of life icon -- starting with Darwin himself (the ONLY diagram in Origin as originally published) -- they imply a root. The utter absence of a plausible explanation for the root, highlighted by the sort of thing we see with the MU experiment in textbooks, is a smoking gun. Indeed, it is worse than that, as we are talking about the origin of digital, info-bearing coded systems and the machines that process them in co-ordination, for which the only credible, empirically warranted explanation is design. Then, design sits at the table from the root up, so design is available at every step of the tree of life, and it is the only thing that can, in light of empirical verification of capacity, explain the origin of major body plans dozens of times over, each needing 10-100 million or more bits of additional info. So too is the sort of rhetorical game above, one that ignores what was pointed out, step by step, at 48 above. KF kairosfocus
Assuming the coins are fair, go have fun; you can do up to 200 at a time.... See if you will ever get 200 heads! http://www.random.org/coins/ Andre
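Andre's random.org experiment can also be run offline. The sketch below uses Python's `random` module as a local stand-in for random.org's generator (a simulation, not the site's API), and shows that even 100,000 runs of 200 fair flips never produce an all-heads run:

```python
import random

def count_all_heads_runs(n=200, trials=100_000, seed=1):
    """Count how many of `trials` runs of n fair coin flips come up all heads."""
    rng = random.Random(seed)  # fixed seed so the result is reproducible
    all_heads = 0
    for _ in range(trials):
        # all() short-circuits at the first tails, so this is fast
        if all(rng.random() < 0.5 for _ in range(n)):
            all_heads += 1
    return all_heads

# The chance of all heads in one run is (1/2)**200 ~ 6e-61, so even
# 100,000 tries will, for all practical purposes, never produce one.
print(count_all_heads_runs())  # 0
```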
Correction to my comment above. The indices are wrong: the probability that the first flip matches F1, times the probability that the second flip matches F2, times the probability that the third flip matches F3, … times the probability that the Nth flip matches Fn. keiths
JWTruthInLove, Shallit's post is entitled "Confusion Everywhere", which is appropriate since he himself is confused. He writes:
The example is given of flipping a presumably fair coin 500 times and observing it come up heads each time. The ID advocates say this is clear evidence of "design", and those arguing against them (including the usually clear-headed Neil Rickert) say no, the sequence HH...H is, probabilistically speaking, just as likely as any other.
Which is correct if the coin is fair, not just "presumably fair". And that is what eigenstate specified in the quote that started this whole debate:
Maybe that’s just sloppily written, but if you have 500 flips of a fair coin that all come up heads, given your qualification (“fair coin”), that outcome is perfectly consistent with fair coins, and as an instance of the ensemble of outcomes that make up any statistical distribution you want to review. That is, physics is just as plausibly the driver for “all heads” as ANY OTHER SPECIFIC OUTCOME.
Eigenstate is correct. Take any specified sequence of coin flips {F1, F2, ... Fn} where each Fi is either H (heads) or T (tails). The probability of getting that precise sequence when flipping a fair coin is equal to: the probability that the first flip matches F0, times the probability that the second flip matches F1, times the probability that the third flip matches F2, ... times the probability that the Nth flip matches Fn. The coin is fair, meaning that the probability of a match is the same whether Fi is H or T: exactly 1/2. Therefore, the probability of matching any specific sequence of length n is exactly the same, regardless of its content: (1/2)^n. Now if you drop the stipulation that the probability distribution is known and fair, then the question becomes much more interesting. However, Sal is still wrong.
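keiths' multiplication argument is easy to verify mechanically. A minimal sketch, using exact rational arithmetic rather than floats so that (1/2)^500 does not underflow:

```python
from fractions import Fraction

def sequence_probability(seq):
    """Exact probability of one specific H/T sequence from a fair coin."""
    p = Fraction(1)
    for _flip in seq:
        p *= Fraction(1, 2)  # P(match) is 1/2 whether the target flip is H or T
    return p

# Every specific 500-flip sequence has the same probability, (1/2)^500,
# regardless of its content -- all-heads included.
all_heads = "H" * 500
mixed = "HT" * 250
assert sequence_probability(all_heads) == sequence_probability(mixed)
assert sequence_probability(all_heads) == Fraction(1, 2 ** 500)
```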
This is an old paradox... The solution is by my UW colleague Ming Li and his co-authors. The basic idea is that Kolmogorov complexity offers a solution to the paradox: it provides a universal probability distribution on strings that allows you to express your degree of surprise on encountering a string of symbols that is said to represent the flips of a fair coin.
Two problems with that statement: 1. We don't need a probability distribution, because we already have one. Eigenstate specified that the coins were fair, and we know what that distribution looks like. 2. Even setting #1 aside, Kolmogorov complexity cannot act as a proxy for (lack of) surprise. Consider my example above involving social security numbers. If I roll my SSN, I'm surprised because it is my SSN, not because of its Kolmogorov complexity.
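For illustration only: zlib compression can serve as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable). Under that proxy, the all-heads string is indeed "special", while a personally meaningful but incompressible string, like keiths' SSN example, would not be flagged, just as he argues:

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Compressed byte length: a rough proxy for Kolmogorov complexity."""
    return len(zlib.compress(s.encode()))

rng = random.Random(0)  # fixed seed for reproducibility
all_heads = "H" * 500
random_seq = "".join(rng.choice("HT") for _ in range(500))

# The highly regular all-heads string compresses to a handful of bytes;
# a typical random H/T string needs roughly 1 bit per flip plus overhead.
print(compressed_size(all_heads) < compressed_size(random_seq))  # True
```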
But the ID advocates are also wrong, because they jump from "reject the fair coin hypothesis" to "design".
Yes, as I pointed out earlier:
Given that we observe a sequence of 500 heads, which explanation is more likely to be true? a) the coins are fair, the flips were random, and we just happened to get 500 heads in a row; or b) other factors are biasing (and perhaps determining) the outcome. The obvious answer is (b). In the case of homochirality, Sal’s mistake is to leap from (b) directly to a conclusion of design, which is silly. In other words, he sees the space of possibilities as {homochiral by chance, homochiral by design}. He rules out ‘homochiral by chance’ as being too improbable and concludes ‘homochiral by design’. Such a leap would be justified only if he already knew that homochirality couldn’t be explained by any non-chance, non-design mechanism (such as Darwinian evolution). But that, of course, is precisely what he is trying to demonstrate. He has assumed his conclusion.
I suspect that Shallit will agree with all of this once he realizes that this entire debate has been about a case in which the coins are known to be fair, not just "presumably fair". keiths
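keiths' comparison of explanations (a) and (b) can be put numerically as a likelihood ratio. The 99%-heads bias below is purely a hypothetical stand-in for "other factors biasing the outcome"; any substantial bias gives the same qualitative result:

```python
from fractions import Fraction

def likelihood(heads, flips, p_heads):
    """Exact likelihood of observing `heads` heads in `flips` flips given P(H)."""
    return p_heads ** heads * (1 - p_heads) ** (flips - heads)

fair = Fraction(1, 2)
biased = Fraction(99, 100)  # hypothetical biased coin, not a claim about any real coin

# Likelihood ratio for 500 heads in 500 flips: biased vs. fair hypothesis.
ratio = likelihood(500, 500, biased) / likelihood(500, 500, fair)

# The data favour the biased-coin hypothesis by a factor of ~10^148,
# which is why option (b) is the obvious answer.
print(ratio > 10 ** 100)  # True
```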