Uncommon Descent Serving The Intelligent Design Community

# Low Probability is Only Half of Specified Complexity


In a prior post the order of a deck of cards was used as an example of specified complexity.  If a deck is shuffled and it results in all of the cards being ordered by rank and suit, one can infer design.  One commenter objected to this reasoning on the grounds that the specified order is no more improbable than any other order of cards (about 1 in 10^68).  In other words, the probability of every deck order is about 1 in 10^68, so why should we infer something special about this particular order simply because it has a low probability?

Well, last night at my friendly poker game I decided to test this theory.  We were playing five card poker with no draws after the deal.  On the first hand I dealt myself a royal flush in spades.  Eyebrows were raised, but no one objected.  On the second hand I dealt myself a royal flush in spades, and again on every hand all the way through the 13th.

When my friends objected I said, “Lookit, your intuition has led you astray.  You are inferring design — that is to say that I’m cheating — simply on the basis of the low probability of this sequence of events.  But don’t you understand that the odds of my receiving 13 royal flushes in spades in a row are exactly the same as my receiving any other 13 hands.”  In a rather didactic tone of voice I continued, “Let me explain.  In the game we are playing there are 2,598,960 possible hands.  The odds of receiving a royal flush in spades are therefore 1 in 2,598,960.  But don’t you see, the odds of receiving ANY particular hand are exactly the same, 1 in 2,598,960.  And the odds of a series of events is simply the product of the odds of each of the events.  Therefore the odds of receiving 13 royal flushes in spades in a row are about 1 in 2.5 × 10^83.  But, and here’s the clincher, the odds of receiving ANY particular series of 13 hands are exactly the same, 1 in 2.5 × 10^83.  So there, pay up and kwicher whinin’.”
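A minimal Python sketch (mine, not part of the original post) checks the monologue's arithmetic; note the 13-hand figure comes out near 4 × 10^-84:

```python
from math import comb

# Number of distinct 5-card poker hands from a 52-card deck
hands = comb(52, 5)
print(hands)  # 2598960

# Probability of being dealt one specific hand (e.g. the royal flush
# in spades) 13 times in a row
p_13 = (1 / hands) ** 13
print(f"{p_13:.2e}")  # about 4.05e-84, i.e. 1 in roughly 2.5 x 10^83
```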

Unfortunately for me, one of my friends actually understands the theory of specified complexity, and right about this time this buttinski speaks up and says, “Nice analysis, but you are forgetting one thing.  Low probability is only half of what you need for a design inference.  You have completely skipped an analysis of the other half — i.e. [don’t you just hate it when people use “i.e.” in spoken language] A SPECIFICATION.”

“Waddaya mean, Mr. Smarty Pants,” I replied.  “My logic is unassailable.”  “Not so fast,” he said.  “Let me explain.  There are two types of complex patterns: those that warrant a design inference (we call this a ‘specification’) and those that do not (which we call a ‘fabrication’).  The difference between a specification and a fabrication is the descriptive complexity of the underlying patterns [see Professor Sewell’s paper linked in his post below for a more detailed explanation of this].  A specification has a very simple description, in our case ’13 royal flushes in spades in a row.’  A fabrication has a very complex description.  For example, another 13-hand sequence could be described as ‘1 pair; 3 of a kind; no pair; no pair; 2 pair; straight; no pair; full house; no pair; 2 pair; 1 pair; 1 pair; flush.’  In summary, BarryA, our fellow players’ intuition has not led them astray.  Not only is the series of hands you dealt yourself massively improbable, it is also clearly a specification.  A design inference is not only warranted, it is compelled.  I infer you are a no good, four flushin’, egg sucking mule of a cheater.”  He then turned to one of the other players and said, “Get a rope.”  Then I woke up.
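The descriptive-complexity distinction can be illustrated with compression as a rough stand-in for description length (my own sketch; zlib is only a crude proxy for Kolmogorov-style complexity):

```python
import zlib

# A "specification": the whole 13-hand series has one short description
spec = "; ".join(["royal flush in spades"] * 13)

# A "fabrication": each hand needs its own description
fab = ("1 pair; 3 of a kind; no pair; no pair; 2 pair; straight; "
       "no pair; full house; no pair; 2 pair; 1 pair; 1 pair; flush")

# Compressed size approximates descriptive complexity
print(len(zlib.compress(spec.encode())))  # small: the repetition compresses away
print(len(zlib.compress(fab.encode())))   # larger: little structure to exploit
```

Even though the "specification" string is longer in raw characters, it compresses to fewer bytes, which is the sense in which its description is simpler.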

Mickey wrote:
How do you know it’s very complex? It’s designed. How do you know it’s designed? It’s very complex.
Design inference is not circular like that. Even if one inferred specification, the opening question, "How do you know it's very complex? It's designed.", is false. Example: Imagine a single dot on a blank sheet of paper. This is very simple. However, it was designed as art by a person. It almost certainly would not have been ascribed to intelligent design without knowledge of the intent behind its origin. JGuy
Hi Gpuccio -- (great comments BTW)
for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand.
Ok I'll bite. If on *exceedingly* rare occasions a random shuffling of numbers produces a long string of, say, threes, "3333333333," the "meaning" and "order" apparent here is just an illusion. Randomness knows no value ("3" is a numeric value) and each "3" is but an *independent*, fortuitous event, devoid of any connection to any other "3" (which is yet another independent fortuitous event). *Minds,* however, perceive "whole-istically" (they see the whole string in the "mind's eye"), subsequently recognize *connections* between valued events, and typically generate non-fortuitous, connected, value-laden systems and sequences: "archipelagos" (thanks kairos) of real order and real function in a great ocean of disorder/dysfunction. William Brookfield
MickeyBitsko is no longer with us. DaveScot
Mickey B: I think you are missing the point that most of our knowledge is probabilistic to some extent or other. That is, absolute proof beyond all dispute is a mirage, even in Mathematics, post-Gödel. What we deal with in scientific knowledge is revisable inference to best explanation, and the objection that something utterly improbable just may happen by accident is not the prudent way to bet in such an explanation, once it has crossed the explanatory filter's two-pronged test. [The probabilities we are dealing with are comparable to or lower than the probability that every oxygen molecule in your room could at random rush to one end, causing you to asphyxiate; something you don't usually worry about, I suspect, and BTW, on pretty much the same statistical mechanical grounds, as I discuss in Appendix A in the always linked. In a nutshell, the statistical weight of the mixed macrostate is so much higher than that of the clumped one that we would not expect the latter to happen even once at random in the history of the observed cosmos.] Note, too, that in EVERY case where we directly observe the causal story, CSI is produced by agency. To see what lies under it, read my always linked, Appendix A section 6. The objections I saw in 64 above strike me as getting into the territory of selective hyperskepticism, which is self-refuting. GEM of TKI kairosfocus
Did anyone bother to look at the link I posted above regarding built-in instructions with nano-particles, and their thoughts on how OOL possibly started that way? I related it to the cards due to the design of suits and numbers. But the selection process is external for cards, whereas in the link I posted they inserted instructions into the process. Essentially, these are teleological insertions. I believe what this shows is that DNA can no more align itself for meaningful expression through proteins than playing cards could by themselves. The blueprint is both imprinted and teleological. It takes an intelligent selection process, not a blind one, prior to any bet being made on the hand. I think they're proving the case for ID more so in these experiments. I thought it significant for ID. Specification is targeted outcomes for selection processes. Nothing is random in the nano-experiments, neither is anything random in card choices. For the cards to do what Mickey "I think" would like them to do, they would in turn have to have pre-built instructions just like the nanobots to align by suit and number. Anyone? Michaels7
DaveScot said:
If there are two decks, one perfectly ordered by suit and rank and one with no discernable order the one with no discernable order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernable order and very few that are perfectly ordered.
You have no way of knowing that the first deck is randomly ordered, or that second deck isn't, although that's certainly the way to bet. Therefore, in order to conclude design, you must rely on facts not in evidence. If a phenomenon is the result of random action, the fact that the odds against it are one in a gazillion doesn't mean that it can't happen in the first opportunity. In fact, randomness is defined by the idea that the phenomenon can occur on any opportunity, because in order for something to be random, each possible outcome must have an equal chance of occurring on each opportunity. All of these arguments from probability seem to hinge on the mistaken idea that if the odds against something happening are a gazillion to one, we're going to have to wait through a gazillion opportunities for it to happen. Mickey Bitsko
Ok kairosfocus, I'm not sure how it helps, but here you go: "Your computer will become self-aware after Windows restarts. Please disconnect it from the network." If you need a weirder one, let me know. Apollos
Really weird error message, folks . . . kairosfocus
Interested (and BarryA and Dave): First, Interested, thanks. (I should also again thank the former UD and regular ARN commenter Pixie for going through a long exchange with me on the subject at my own blog.) BarryA I think the problem here has been well addressed by GPuccio in other threads, when he pointed out that specification comes in different flavours, but of course when conjoined with informational complexity and contingency [i.e. in effect info storage capacity beyond 500 - 1,000 bits, the latter taking in effectively all practical cases of archipelagos of Functionality, not just the unique functional state in 10^150 states that the first does]. GP at no 51 in the Oct 27 Darwinist predictions thread:
Specification can be of at least 3 different kinds: 1) Pre-specification: we can recognize a pattern because we have seen it before. In this case, the pattern in itself is probably random, but its occurrence “after” a pre-specification is a sign of CSI (obviously provided that complexity is also present, but that stays true for each of the cases). 2) Compressibility: some mathematical patterns of information are highly compressible, which means that they can be expressed in a much shorter sequence than their original form. That is the case, for instance, for numbers like 3333333333, which can be written as “10 times 3”. Such compressible patterns are usually recognizable by a conscious mind, for reasons that are probably much deeper than I can understand. In this case, specification is in some way intrinsic to the specific pattern of information; we could say that it is inherent in its mathematical properties. 3) Finally there is perhaps the most important form of specification, at least for our purposes: specification by function. A few patterns of information are specified because they can “do” something very specific, in the right context. That’s the case of proteins, obviously, but also of computer programs, or in general of algorithms. In this case specification is not so much a characteristic of the mathematical properties of the sequence, but rather of what the sequence can realize in a particular physical context: for example, the sequence of an enzyme is meaningless in itself, but it becomes very powerful if it is used to guide the synthesis of the real protein, and if the real protein can exist in a context where it can fulfill its function. Function is very objective evidence of specification, because it does not depend on pre-conceptions of the observer (at least, not more than any other concept in human knowledge). So, this is the theoretic frame of CSI: complexity “plus” specification.
And, obviously, the absence of any known mechanical explanation of the specific specified pattern in terms of necessity (that is, we are observing apparently random phenomena). The summary is: a) If you have a very complex pattern (very unlikely), and b) If no explanation of that pattern is known in terms of necessity on the basis of physical laws (in other words, if that pattern is equally likely as all other possible patterns, in terms of physical laws, and is therefore potentially random), and c) If that pattern is recognizable as specified, in any of the ways I have previously described: then we are witnessing CSI, and the best empirical explanation for that is an intelligent agent.
By the way I think this is essentially what William Brookfield was getting at in #53 above. Apollos
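gpuccio's "10 times 3" example amounts to run-length encoding; a small sketch (my own hypothetical helper, not from the thread) shows why the repeated string has the shorter description:

```python
def run_length(s: str) -> str:
    """Encode a string as comma-separated 'countxchar' runs."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{j - i}x{s[i]}")
        i = j
    return ",".join(out)

print(run_length("3333333333"))  # "10x3" -- a much shorter description
print(run_length("3141592653"))  # every run has length 1, so the "description" is longer
```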
DaveScot said:
"Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings."
This is the heart of the matter as I understand it, relating to Mickey's objection about determining CSI based on a perfectly ordered deck. Am I correct in assuming it possible to develop a reasonable "signal to noise ratio" for a deck of playing cards? This should perfectly illuminate the "equal probability" obfuscation. Apollos
Tim If there are two decks, one perfectly ordered by suit and rank and one with no discernable order the one with no discernable order is the more likely to be generated by a random shuffle. The reason: there are gazillions of possible arrangements that display no discernable order and very few that are perfectly ordered. This is actually very analogous to coding genes. There are 52 different cards in a standard deck while there are 64 different codons (nucleic acid triplets). Genes are further complicated because they have no fixed length and may be thousands of codons long. There are gazillions of codon sequences that don't fold into potentially biologically active molecules and few that do consistently fold into a biologically active molecule. Complicating it even further is that biologically active proteins don't exist in a vacuum but must fold in such a way as to precisely fit (in at least five dimensions - 3 spatial dimensions plus hydrophobic and hydrophilic surfaces) the shapes of other proteins and other non-protein molecules they need to grasp and release. The folding process is so complex that being able to predict it is the Holy Grail of biochemistry. So, while any gene sequence is as likely as any other from a randomly generated string of codons the odds of getting a gene that codes for a biologically active protein from a randomly generated sequence are very remote because of the ratio of useful sequences to useless sequences. Any randomly generated gene is astronomically unlikely to be of any biological use just as any randomly ordered deck of cards is astronomically unlikely to exhibit any perfect orderings. DaveScot
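The ratio of "ordered" to "disordered" outcomes can be estimated empirically. A Monte Carlo sketch (the card encoding is my own assumption) samples random 5-card hands and counts how many show even a modest ordering such as a flush; the exact probability, counting straight flushes, is about 0.00198:

```python
import random
from itertools import product

random.seed(1)  # reproducible sampling

# Build a 52-card deck: ranks 2..14, four suits (encoding chosen for this sketch)
deck = list(product(range(2, 15), "SHDC"))

def is_flush(hand):
    """True when all five cards share one suit."""
    return len({suit for _, suit in hand}) == 1

trials = 200_000
flushes = sum(is_flush(random.sample(deck, 5)) for _ in range(trials))
print(flushes / trials)  # close to 4 * 1287 / 2598960, about 0.00198
```

For rarer "perfect orderings" like a specific royal flush, the useful-to-useless ratio is so small that direct sampling would essentially never hit one, which is DaveScot's point.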
Note that I should clarify "shuffling" to mean a random rearrangement of the cards, with "random" meaning that every possible order is equally likely with each shuffle. Mickey Bitsko
Tim, What matters (to me at least) is the question I asked. Given a target arrangement of 52 cards, and continual reshuffling, how long (in terms of reshuffles) should it take before the target order is repeated, given that the odds are about 1 in 10^68? Mickey Bitsko
I can see from gpuccio's thoughtful response that I had not made myself clear concerning the two decks. ("if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly" "The answer is simple: the probability is the same for both decks, that is a very low probability." gpuccio (51)) What I should have conveyed: The suited and ordered deck is face up (that is how we know it is suited and ordered), but the deck with no discernible order is by definition face down. If it were face up, then an order would be discerned. I thought this clearly to myself, but failed to type it in, sorry. This definition should further sew up the reason why the secret society cannot exist. Let's nail down the specification thing. Mickey wrote, "Shuffle a deck of cards thoroughly, then note the order". By "note the order" I think what you are doing is specifying a target. It really doesn't matter how you formed the sequence. What matters is that it was FIRST specified THEN sought. Probability stories like these will eventually get me hoisted by my own petard, but I am quite sure that the act of selection (even if generated by chance, i.e. shuffling) generates a target that has been specified. Tim
Hi BarryA "Brookfield, I think you are putting a needless layer of complexity on this. Yes, I put the cards in the context of a poker game to make the story interesting." Sorry if I am needlessly complexifying things. That was not my intent. I am looking for a way of explaining SC that is less prone to confusion and possible detractor obfuscation. The poker story was great and many can relate to it, but I am thinking maybe we could be describing such situations in terms of the probabilistic resources from randomly shuffled cards -- set #1 (hideously low) -- versus the probabilistic resources from the "superset" #2. (significantly higher) and a subsequent inference to the best explanation...with superset #2 (intelligent design) as the winner...? While Granville's notion of macroscopic improbability is quite good it doesn't seem to work for nanotechnologies that are both SC and microscopic. Or perhaps I am missing something? William Brookfield
gpuccio said,
Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way?
I would believe that randomness was possible, but highly unlikely. Now a question for you: Shuffle a deck of cards thoroughly, then note the order. Now keep shuffling and noting the order. How long do you believe it will be before the original order is encountered again? Mickey Bitsko
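Mickey's question has a standard answer: if every shuffle is uniform, the number of shuffles until the noted order recurs is geometrically distributed with success probability 1/52!, so the expected wait is 52! shuffles. A quick check (my sketch, not from the thread):

```python
from math import factorial

p = 1 / factorial(52)              # chance any single shuffle matches the noted order
expected_shuffles = factorial(52)  # mean of a geometric distribution is 1/p
print(f"{float(expected_shuffles):.3e}")  # about 8.066e+67 shuffles
```

Even shuffling a billion times per second, the expected wait dwarfs the age of the universe by dozens of orders of magnitude.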
Mickey: you stated: "if there are two decks of cards, one in no discernible order, and one ordered by rank and suit, which arrangement was more likely to occur randomly" The answer is simple: the probability is the same for both decks, that is, a very low probability. Now, just imagine that the exact sequence of the first deck is communicated to you in advance, and then the deck of cards comes out exactly as it was said to you: would you still believe that it happened in a random way? No (at least I hope, for your sake). That's an example of pre-specification. Or just suppose that the deck of cards comes out in perfect order. Would you still believe that it was correctly and randomly shuffled? No (at least, I hope, for your sake). That's an example of specification by compressibility. Or just suppose that the cards are binary, 0 and 1, and are more numerous (a few hundred). Suppose that the deck of cards, in the exact order, can be written as the binary code of a small software program, and that such a program works as an ordering algorithm. Would you still believe that the deck of cards was really random? No (at least, I hope, again for your sake). That's an example of specification by function. Genomes and proteins are all specified by function. They all exhibit CSI, of the highest grade. It is simple. Those who try to speculate on hypothetical contradictions of the concept of specification are completely missing the power, beauty and elegance of the concept itself. And its beautiful, perfect simplicity. gpuccio
For calculating the informational bits using 8-bit single-byte coded graphic characters, here is an example: "ME THINKS IT IS LIKE A WEASEL" is only 133 bits of information (when calculated as a whole sentence; the complexity of the individual items of the set is 16, 48, 16, 16, 32, 8, 48, plus 8 bits for each space). So aequeosalinocalcalinoceraceoaluminosocupreovitriolic would be 416 informational bits. The specification is that it is an English word with a specific function. That specific function does not have any intermediates that maintain the same function. Here we have a situation where indirect intermediates are well below 500 informational bits and thus there is nothing to select for that will help much in reaching the goal. Thus this canyon must be bridged in one giant leap of recombination of various factors, making it difficult for Darwinian mechanisms. Even though that is not 500 bits, I would still be surprised if that showed up in a program such as Zachriel's unless the fitness function was designed in a certain manner. For more on calculating such things refer to Dembski's work. Patrick
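The per-word figure follows the simple 8-bits-per-character convention, which a one-liner reproduces (my sketch; the 133-bit whole-sentence figure uses a different measure from Dembski's work and is not reproduced here):

```python
def info_bits(s: str) -> int:
    """Informational bits under the 8-bits-per-character convention."""
    return 8 * len(s)

# The 52-letter word from the comment
print(info_bits("aequeosalinocalcalinoceraceoaluminosocupreovitriolic"))  # 416
```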
Patrick (46)- What can you calculate the complexity of? I can't figure out how to calculate the complexity of anything. congregate
“Very complex” reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally.
Specification does not equate to not "knowing anything more than the thing is so complex that it’s difficult to understand how it might happen naturally." Patrick
No, it doesn't ignore the specification. It assumes the specification. "Very complex" reaches some undefined (or very vaguely defined) point where design is assumed, without knowing anything more than the thing is so complex that it's difficult to understand how it might happen naturally. Mickey Bitsko
My whole point here has been to point out that there are enough situations where design isn’t discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it’s very complex? It’s designed.
Err...no. Complexity can be calculated without knowing whether something is designed or not. In fact, with the explanatory filter the complexity is calculated without presuming design or no design. A non-designed object can also be very complex.
How do you know it’s designed? It’s very complex.
And, again, that ignores specification. Patrick
Even though the first deck being ordered by rank and suit is impressive (52! = 8.06581752×10^67) that still does not exceed Dembski's UPB of 10^150 or 500 informational bits. Now we could make a weak design inference (aka police investigation) but not an ID-based design inference if this was a one-time shuffle to win a jackpot. In that scenario we would presumably be able to discover the mechanism for potential cheating so we could use that design/designer detection method instead of ID. It's not as if ID methods are the only way to detect design. EDIT: For the jackpot scenario I'm presuming the prize winner would be required to shuffle an entire deck and the prize would be awarded if a contestant managed to get some sort of combination that is close to 1 in 10^8 (around the odds of Powerball). By turning up this result the contestant is essentially providing a result that is overkill for the terms of the prize. So although the guy might have got really, really lucky they will still investigate to see if it was rigged somehow. Saw this interesting article: http://creationevolutiondesign.blogspot.com/2006/10/origin-of-life-quotes-by-jbs-haldane_30.html
We can accept a certain amount of luck in our explanations, but not too much.... In our theory of how we came to exist, we are allowed to postulate a certain ration of luck. This ration has, as its upper limit, the number of eligible planets in the universe.... We [therefore] have at our disposal, if we want to use it, odds of 1 in 100 billion billion as an upper limit (or 1 in however many available planets we think there are) to spend in our theory of the origin of life. This is the maximum amount of luck we are allowed to postulate in our theory. Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion billion to one. [Dawkins, R., "The Blind Watchmaker," Norton: New York, 1987, pp. 139, 145-46]
I find that interesting considering Koonin's comments regarding the unguided Origin Of Life (OOL) scenarios; as a conservative estimate he calculated odds of about 1 in 10^1018 (a probability on the order of 10^-1018) that such a system could have arisen. Patrick
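Patrick's comparison against Dembski's universal probability bound can be made explicit. A sketch (mine, assuming log2 of the count of orderings as the bit measure) shows 52! falls well short of both 10^150 and the 500-bit threshold:

```python
from math import factorial, log10, log2

deck_orderings = factorial(52)
print(f"{log10(deck_orderings):.2f}")  # about 67.91 -- far below the 150 of the UPB
print(f"{log2(deck_orderings):.1f}")   # about 225.6 bits -- below the 500-bit threshold
```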
My whole point here has been to point out that there are enough situations where design isn't discernible without foreknowledge, which means (at least to me) that the concept of CSI involves inescapable circular reasoning. How do you know it's very complex? It's designed. How do you know it's designed? It's very complex. The very idea of "specification," it seems to me, requires foreknowledge that there is a source of such orders, as in the deck of cards ordered by rank and suit. If we had no foreknowledge of such things, no order could be differentiated from random orders. Mickey Bitsko