
Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument


Note to Readers:

Over the past few days there has been tremendous activity on this thread, with much discussion about the concept of probability.  I had intended to post this OP months ago, but yesterday found it still in my drafts folder, mostly, but not quite fully, complete.  In the interest of highlighting a couple of the issues hinted at in the recent thread, I decided to quickly dust off this post and publish it right away.  This is not intended to be a response to everything in the other thread.  In addition, I have dusted this off rather hastily (hopefully not too hastily), so please let me know if you find any errors in the math or otherwise, and I will be happy to correct them.

—–

Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument

In order to help explain the concept of probability, mathematicians often talk about the flip of a “fair coin.”  Intelligent design proponents, including William Dembski, have also used the coin flip example as a simplified way to help explain the concept of specified complexity.

For example, 500 flips of a fair coin can produce 2 to the 500th power possible sequences, so the odds of any particular sequence arising are approximately 1 in 3.3*10^150.  Based on this simple example, I have heard some intelligent design proponents, perhaps a little too simplistically, ask: “What would we infer if we saw 500 heads flipped in a row?”
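
As a quick sanity check on that arithmetic, here is a minimal Python sketch (the computation is from the OP; the code and names are mine):

```python
from math import log10

sequences = 2 ** 500                            # distinct outcomes of 500 fair-coin flips
print(f"2^500 ~ {sequences:.3e}")               # about 3.273e+150
print(f"log10(2^500) = {500 * log10(2):.2f}")   # about 150.51
```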

At this point in the conversation the opponent of intelligent design often counters with various distractions, but perhaps the favorite argument – certainly the one that at least at first blush appears to address the question with some level of rationality – is that every sequence is just as improbable as another.  And therefore, comes the always implied (and occasionally stated) conclusion, there is nothing special about 500 heads in a row.  Nothing to see here; move along, folks.  This same argument at times rears its head when discussing other sequences, such as nucleotides in DNA or amino acid sequences in proteins.

For simplicity’s sake, I will discuss two examples to highlight the issue: the coin toss example and the example of generating a string of English characters.

Initial Impressions

At first blush, the “every-sequence-is-just-as-improbable-as-the-next” (“ESII” hereafter) argument appears to make some sense.  After all, if we have a random character generator that generates a random lowercase letter from the 26 characters in the English alphabet, where each character is generated without reference to any prior characters, then in that sense, yes, any particular equal-length sequence is just as improbable as any other.

As a result, one might be tempted to conclude that there is nothing special about any particular string – all are equally likely.  Thus, if we see a string of 500 heads in a row, or HTHTHT . . . repeating, or the first dozen prime numbers in binary, or the beginning of Hamlet, then, according to the ESII argument, there is nothing unusual about it.  After all, any particular sequence is just as improbable as the next.

This is nonsense.

Everyone, including the person making the ESII argument, knows it is nonsense.

A Bridge Random Generator for Sale

Imagine you are in the market for a new random character generator.  I invite you to my computer lab and announce that I have developed a wonderful new random character generator that with perfect randomness selects one of 26 lowercase letters in the English alphabet and displays it, then moves on to the next position, with each character selection independent of the prior.  If I then ran my generator and it spit out 500 a’s in a row, everyone in the world would immediately and clearly and unequivocally recognize that something funny was going on.

But if the ESII argument is valid, no such recognition is possible.  After all, every sequence is just as improbable as the next, the argument goes.

Yet, contrary to that claim, we would know, with great certainty, that something was amiss.  Any rational person would immediately realize that either (i) there was a mistake in the random character generator, perhaps a bad line of code, or (ii) I had produced the 500 a’s in a row purposely.  In either case, you would certainly refuse to turn over your hard-earned cash and purchase my random character generator.
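
To make the thought experiment concrete, here is a sketch of what an honest generator might look like, along with the chance that it ever emits 500 identical letters (the code and names are illustrative assumptions, not anything from the OP):

```python
import random
import string
from math import log10

def random_string(n, rng=random.SystemRandom()):
    """Draw n lowercase letters, each independent and uniform over a-z."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(n))

print(random_string(20))  # e.g. 'ptkzqngwyalbrnmhseoc'

# 26 qualifying outcomes (one per letter) out of 26**500 equally likely
# sequences, so P(all 500 letters identical) = 26 / 26**500 = 26**-499.
print(f"P(500 identical letters) ~ 10^{-499 * log10(26):.0f}")  # about 10^-706
```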

Why does the ESII argument so fully and abruptly contradict our intuition?  Could our intuition about the random character generator be wrong?  Is it likely that the 500 a’s in a row was indeed produced through a legitimate random draw?  Where is the disconnect?

Sometimes intelligent design proponents, when faced with the ESII argument, are at a loss as to how to respond.  They know – everyone knows – that there is something not quite right about the ESII argument, but they can’t quite put a finger on it.  The ESII argument seems correct on its face, so why does it so strongly contradict our real-world experience about what we know to be the case?

My purpose today is to put a solid finger on the problems with the ESII argument.

In the paragraphs that follow, I will demonstrate that it is indeed our experience that is on solid ground, and that the ESII argument suffers from two significant, and fatal, logical flaws: (1) assuming the conclusion and (2) a category mistake.

Assuming the Conclusion

On this thread R0bb stated:

Randomly generate a string of 50 English characters. The following string is an improbable outcome (as is every other string of 50 English characters): aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

R0bb goes on to note that the probability of a particular string occurring is dependent on the process that produced it.  I agree on that point.

Yet there is a serious problem with the “everything-is-just-as-improbable” line of argumentation when we are talking about ascertaining the origin of something.

When R0bb claims his string of a’s is just as improbable as any other string of equal length, that is true only if we assume the string was generated by a random generator, which, if we examine his example, is exactly what he assumed.

However, when we are examining an artifact to determine its origin, the way in which it was generated is precisely the question at issue.  Saying that every string of equal length is just as improbable as any other, in the context of design detection, is to assume as a premise the very conclusion we are trying to reach.

We cannot say, when we see a string of characters (or any other artifact) that exhibits a specification or particular pattern, that “Well, every other outcome is just as improbable, so nothing special to see here.”  The improbability, as R0bb pointed out, is based on the process that produced it.  And the process that produced it is precisely the question at issue.

When we come across a string like: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa or some physical equivalent, like a crystal structure or a repeating pulse from a pulsar, we most definitely do not conclude it was produced by some random process that just happened to produce all a’s this time around, because, hey, every sequence is just as improbable as the next.

Flow of Analysis for Design Detection

Let’s dig in just a bit deeper and examine the proper flow of analysis in the context of design detection – in other words in the context of determining the origin of, or the process that produced, a particular sequence.

The proper flow of analysis is not:

  1. Assume that two sequences, specified sequence A and unspecified sequence B, arose from a random generator.
  2. Calculate the odds of sequence A arising.
  3. Calculate the odds of sequence B arising.
  4. Compare the odds and observe that the odds are equal.
  5. Conclude that every sequence is “just as likely” and, therefore, there is nothing special about a sequence that constitutes a specification.

Rather, the proper flow of analysis is:

  1. Observe the existence of specified sequence A.
  2. Calculate the odds of sequence A arising, assuming a random generator.
  3. Observe that a different cause, namely an intelligent agent, has the ability to produce sequence A with a probability of 1.
  4. Compare the odds and observe that there is a massive difference between the odds of the two causal explanations.
  5. Conclude, based on our uniform and repeated experience and by way of inference to the best explanation, that the more likely source of the sequence was an intelligent agent.
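
To make steps 2 through 4 concrete, here is a minimal sketch (assuming, with the OP, a uniform random generator as the chance hypothesis and an agent who can produce the sequence with probability near 1; the function and variable names are mine):

```python
from math import log10

def log10_p_under_chance(seq, alphabet=26):
    """log10 of the probability of this exact sequence under uniform,
    independent random draws (step 2 of the flow above)."""
    return -len(seq) * log10(alphabet)

observed = "a" * 500                           # step 1: the specified sequence
log_p_chance = log10_p_under_chance(observed)  # step 2: about -706
log_p_agent = 0.0                              # step 3: an agent produces it with P ~ 1

# Step 4: the gap between the two causal explanations, in orders of magnitude.
print(f"difference: about 10^{log_p_agent - log_p_chance:.0f}")  # about 10^706
```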

The problem with the first approach – the approach leading to the conclusion that every sequence is just as improbable as the next – is that it assumes the sequence under scrutiny was produced by a random generator.  Yet the origin of the sequence is precisely the issue in question.

This is the first problem with the ESII claim.  It commits a logical error in thinking that the flow of analysis is to assume a random generator and then compare sequences, when the question of whether a random generator produced the specified sequence in the first place is precisely the issue in question.  As a result, the ESII argument against design detection fails on logical grounds because it assumes as a premise the very conclusion it is attempting to reach.

The Category Mistake

Now let us examine a more nuanced, but equally important and substantive, problem with the ESII argument.  Consider the following two strings:

ababababababababababababababab

qngweyalbpelrngihseobkzpplmwny

When we consider these two strings in the context of design detection, we immediately notice a pattern in the first string, in this case a short-period repeating pattern ‘ab’.  That pattern is a specification.  In contrast, the second string exhibits no clear pattern and would not be flagged as a specification.
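
One crude way to make “flagging a specification” operational is compressibility: patterned strings have short descriptions, while random-looking strings do not.  This is an illustrative proxy of mine, in the spirit of algorithmic information theory, not Dembski’s formal measure:

```python
import random
import string
import zlib

def compression_ratio(s):
    """Compressed size over raw size: low values signal strong patterning."""
    raw = s.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

patterned = "ab" * 250
scrambled = "".join(random.choice(string.ascii_lowercase) for _ in range(500))

print(f"{compression_ratio(patterned):.2f}")   # ~0.03: the repeat compresses away
print(f"{compression_ratio(scrambled):.2f}")   # ~0.65: little structure to exploit
```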

At this point the ESII argument rears its head and asserts that both sequences are just as improbable.  We have already dispensed with that argument by showing that it assumes as its premise the very conclusion it is trying to reach.  Yet there is a second fundamental problem with the ESII argument.

Specifically, when we are looking at a new artifact to see whether it was designed, we need not be checking to see if it conforms to an exact, previously-designated, down-to-the-letter specification.  Although it is possible that in some particular instance we might want to home in on a very specific pre-defined sequence for some purpose (such as when checking a password), in most cases we are interested in a general assessment as to whether the artifact exhibits a specification.

If I design a new product, if I write a new book, if I paint a new painting – in any of these cases, someone could come along afterwards and recognize clear indicia of design.  And that is true even if they did not have in mind a precise, fully-detailed description of the specification up front.  It is true even if they are making what we might call a “post-specification.”

Indeed, if the outside observer did have such a fully-detailed specification up front, then it would have been they, not I, who designed the product, or wrote the book, or painted the painting.

Yet the absence of a pre-specification does not in the slightest deter their ability to correctly and accurately infer design.  As with the product or the book or the painting, every time we recognize design after the fact, which we do regularly every day, we are drawing an inference based on a post-specification.

The reason for this is that when we are looking at an artifact to determine whether it is designed, we are usually analyzing its general properties of specification and complexity rather than the very specific sequence in question.  Stated another way, it is the fact of a particular type of pattern that gives away the design, not necessarily the specific pattern itself.

Back to our example.  If instead of aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, our random character generator produced ababababababababababababababab, we would still be confident that something was fishy with the random character generator.  The same would be true with acacacacacacacacacacacacacacac and so on.  We could also alter the pattern to make it somewhat longer, perhaps abcdabcdabcdabcdabcdabcdabcdabcd or even abcdefghabcdefghabcdefghabcdefgh and so on.

Indeed, there are many periodic repetitive patterns that would raise our suspicions just as much and would cause us to conclude that the sequence was not in fact produced by a legitimate random draw.

How many repetitive sequences would raise our suspicions – how many would we flag as a “specification”?  Certainly dozens or even hundreds.  Likely many thousands.  A million?  Yes, perhaps.

“But, Anderson,” you complain, “doesn’t your admission that there are many such repetitive sequences mean that we have to increase the likelihood of a random process stumbling upon a sequence that might be considered a ‘specification’?”  Absolutely.  But let’s keep the numbers in context.

From Repetition to Communication

I’ll return to this in a moment, but first another example to drive the point home.  In addition to many repetitive sequences, don’t we also have to consider non-repetitive, but meaningful sequences?  Absolutely.

David Berlinski, in his classic essay, The Deniable Darwin, describes the situation this way:

Linguists in the 1950’s, most notably Noam Chomsky and George Miller, asked dramatically how many grammatical English sentences could be constructed with 100 letters. Approximately 10 to the 25th power, they answered. This is a very large number. But a sentence is one thing; a sequence, another. A sentence obeys the laws of English grammar; a sequence is lawless and comprises any concatenation of those 100 letters. If there are roughly 10^25 sentences at hand, the number of sequences 100 letters in length is, by way of contrast, 26 to the 100th power. This is an inconceivably greater number. The space of possibilities has blown up, the explosive process being one of combinatorial inflation.

Berlinski’s point is well taken, but let’s push him even further.  What about other languages?  Might we see a coherent sentence show up in Spanish or French?  If we optimistically include 1,000 languages in the mix, not just English, we start to move the needle slightly.  But, remarkably, still not very much.  Even generously ignoring the very real problem of additional characters in other languages, we end up with an estimate of something on the order of 10^28 language sequences – 10^28 patterns that we might reasonably consider as specifications.

In addition to Chomsky’s and Miller’s impressive estimate of coherent language sentences, let’s now go back to where we started and add in the repetitive patterns we mentioned above.  A million?  A billion?  Let’s be generous and add 100 billion repetitive patterns that we think might be flagged as a specification.  It hardly budges the calculation.  It is a rounding error.  We still have approximately 10^28 potential specifications.

10^28 is a most impressive number, to be sure.

But, as Berlinski notes, the odds against any specific sequence in a 100-character string are 1 in 26^100, or about 1 in 3.14 x 10^141.  Just to make the number simpler for discussion, let’s again be generous and round down to 1 x 10^141.  If we subtract out the broad range of potential specifications from this number, we are still left with an astronomically large number of sequences that would not be flagged as a specification.  How large?  Stated comparatively, given a 100-letter randomly-generated sequence, the odds of us getting a specification — not a particular pre-determined specification, any specification — are only 1 in 10^113.
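
Here is that arithmetic spelled out, with the OP’s generous roundings made explicit (a sketch; the counts are the estimates discussed above, not measured quantities):

```python
from math import log10

sentences = 1e25 * 1000   # Chomsky/Miller estimate, times 1,000 languages
repetitive = 1e11         # a generous 100 billion repetitive patterns
specifications = sentences + repetitive  # still ~1e28: the patterns are a rounding error

total = 1e141             # 26**100 ~ 3.14e141, rounded down generously per the OP

print(f"specifications ~ 10^{log10(specifications):.0f}")                        # 10^28
print(f"P(hitting any specification) ~ 10^{log10(specifications / total):.0f}")  # 10^-113
```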

What Are the Odds?

What this means in practice is that even if we take an expansive view of what can constitute a “specification,” the odds of a random process ever stumbling upon any one of these 10^28 specifications are still only approximately 1 in 10^113.  These are outrageously long odds, and they give us excellent confidence, based on what we know and our real-world experience, that if we see any of these specifications – not just a particular one, but any one of them out of the entire group of specifications – it likely did not come from a random draw.  And it doesn’t even make much difference if our estimate of specifications is off by a couple orders of magnitude.  The difference between the number of specifications and non-specifications is so great it would still be a drop in the proverbial bucket.

Now 1 in 10^113 is a long shot to be sure; it is difficult if not impossible to grasp such a number.  But intelligent design proponents are willing to go further and propose that a higher level of confidence should be required.  Dembski, for example, proposed 1 in 10^150 as a universal probability bound.  In the above example, Berlinski and I talked of a 100-character string.  But if we increase it to 130 characters then we start bumping up against the universal probability bound.  More characters would of course compound the odds.
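
A sketch of that back-of-the-envelope threshold, holding the specification count fixed at 10^28 (an assumption for illustration; in reality it would grow somewhat with string length):

```python
from math import log10

log10_specs = 28   # the OP's estimate of the number of specifications
bound = -150       # Dembski's universal probability bound, 1 in 10^150

# Find the smallest string length n at which the chance of hitting
# ANY specification drops below the universal probability bound.
n = 1
while log10_specs - n * log10(26) > bound:
    n += 1
print(n)  # 126: in the neighborhood of the OP's "about 130 characters"
```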

Furthermore, when we have, as we do with living systems, multiple such sequences that are required for a molecular machine or a biological process or a biological system – arranged as they are in their own additional specified configuration that would compound the odds – then such calculations quickly go off the charts.

We can quibble about the exact calculations.  We can add more languages and can dream up other repetitive patterns that might, perhaps, be flagged as specifications.  We can tweak the length of the sequence and argue about minutiae.  Yet the fundamental lesson remains: the class of nonsense sequences vastly outnumbers the class of meaningful and/or repetitive sequences.

To sum, when we see the following sequences:

(1)       ababababababababababababababab

(2)       tobeornottobethatisthequestion

(3)       qngweyalbpelrngihseobkzpplmwny

We need to understand that rather than comparing one improbable sequence with another equally improbable sequence, what we are really comparing is a recognizable pattern, in the form of either (1) a repetitive sequence or (2) a meaningful sequence, versus (3) what appears to be a nonsense, random draw.

Properly formulated, the probability of (1) or (2) versus (3) is definitely not equal.

Not even close.

Not even in the ballpark.

Thus, the failure to carefully identify what we are dealing with for purposes of design detection gives the ESII proponent the false impression that when choosing between a recognizable pattern and a random draw we are dealing with equivalent odds.  We are not.

—–

Conclusion

While examples of coin tosses and character strings may be oversimplifications in comparison to biological systems, such examples do give us an idea of the basic probabilistic hurdles faced by any random-based process.

The ESII argument, popular though it may be among some intelligent design opponents, is fatally flawed.  First, because it assumes as a premise (random generation) the very conclusion it seeks to reach.  Second, because it fails to properly categorize sequences, mistakenly treating a patterned/specified sequence as though it were on the same probabilistic footing as a random sequence, rather than properly looking at the relative sizes of the two categories of sequences.

Opponents of intelligent design may be able to muster rational arguments that question the strength of the design inference, but the “every-sequence-is-equally-improbable” argument is not one of them.

Comments
Eric: Thanks for the link. I'm on vacation this week; and, frankly, my brain is tired from trying to figure this stuff out. So, I might get back into things in a day or two.

PaV
June 5, 2017 at 03:50 PM PDT
wd400 @35:
Does anyone actually put this “ESII” argument forward?
I'm surprised you have to ask that. I have heard it and variants of it more times than I can count. Even those who don't state it explicitly in the same words I have used are driving at the same point on a regular basis, such as Kitcher in the recent thread. This is extremely common. I'm glad to hear, however, that you would never advance such an argument.
I have only seen variants of it mentioned when people make obviously wrong calculations like the “this protein is 100 a/a long, there are 20 amino acids so there is only a one in 1/20^100 chance of it arising”.
If you have heard variants of it made in that context, then you are underscoring my point for me. Thank you. The correct response to a claim about the odds of a particular functional protein is most definitely not the ESII argument, nor any variation of it. Yet, as you note, it regularly gets brought in as though it were some kind of rational response to the design inference. It isn't.
One of the problems with these kinds of calculations is they over-specify the target (how many amino acids could be replaced without altering the function, how many other proteins could perform this function..)
What you have noted here, however, is a rational response, or at least a rational consideration of factors. What is required for a given function? Are there other sequences of amino acids that would produce the same function without other side effects? These are excellent questions, questions that invite and require careful research. And, yes, they would change the calculation somewhat. But my understanding of the evidence to date is that the number of permissible changes to an amino acid sequence to retain protein function is quite small. It isn't going to impact the probability calculation in any meaningful way. So, yes, it is certainly correct to note that with some proteins there is a small subset of sequences that could perform the same function. But we mustn't fool ourselves into thinking that this observation impacts the design inference. Indeed, in the examples I have provided, I have acknowledged and taken into account the fact that many sequences would be flagged as functional or specified. The probability still cuts decisively against the Darwinian claim.
. . . and people often use card deals or coin tosses to demonstrate how easy it is to calculate a tiny probability for something after it has happened. All of which seems perfectly reasonable to me.
Sure. We can calculate all we want. We just need to understand what we are calculating. And in essentially every instance in which the ESII argument is put forward (including its variants, as you have noted), logical mistakes are made by the person putting it forward, as I have outlined.

Eric Anderson
June 5, 2017 at 03:48 PM PDT
jdk @24: Agreed. Well said.

Eric Anderson
June 5, 2017 at 03:28 PM PDT
Yes, I agree Phinehas, especially if the source of our judgment about specification is human understanding. If I were to deal 2000 cards out of a deck of 20,000 cards (perhaps 10 suits and the numbers 1 to 2000 in each), it would be very hard for human beings to recognize all but the most obvious patterns, and the percentage of such hands would be extremely small (vanishingly so) in comparison to all the hands. If you got all 2000 of one suit in a hand, there is absolutely no doubt that we would conclude that did not happen by chance. But I don't think we could definitively agree that some "level of complexity renders the specifications/significant sequences moot," because of, among other things, as I have mentioned, the non-black-and-white judgments that need to be made about what counts as significant, and to what degree.

jdk
June 5, 2017 at 03:03 PM PDT
jdk: Yes, I am certainly referring back to what we touched on briefly in the previous thread. My point (which I didn't really elucidate in the other thread) is that, as the complexity increases, the probability lowers and the murkiness matters less and less. Past a certain point of complexity, the specifications (or significant sequences), no matter how broadly you define them, are so low as a percentage that you can practically ignore them and just use the probability. If this is true, the next question is this: What level of complexity renders the specifications/significant sequences moot?

Phinehas
June 5, 2017 at 02:20 PM PDT
Does anyone actually put this "ESII" argument forward? I have only seen variants of it mentioned when people make obviously wrong calculations like the "this protein is 100 a/a long, there are 20 amino acids so there is only a one in 1/20^100 chance of it arising". One of the problems with these kinds of calculations is they over-specify the target (how many amino acids could be replaced without altering the function, how many other proteins could perform this function..), and people often use card deals or coin tosses to demonstrate how easy it is to calculate a tiny probability for something after it has happened. All of which seems perfectly reasonable to me.

wd400
June 5, 2017 at 02:15 PM PDT
Harry @30, you are right of course that the human brain is an even better example, but I think some people find it easier to believe that unintelligent forces alone could create brains than computers, just because they have no concept of the complexity of life, or of brains, while they can appreciate more readily the complexity of computers and iPhones (which of course should help them appreciate the complexity of brains). KF @31, what is a 404 code? You mean you are having trouble with my link? Seems to work for me. By the way, I recently (with some professional help) have redone the video "Why Evolution is Different," the subject of my last (Feb 22) post: https://uncommondescent.com/intelligent-design/video-why-evolution-is-different/ Not worth a new post, since the content hasn't changed much, but maybe worth another look for some viewers, a more "polished" version. I think it's my best presentation of the only two ID-related themes I ever talk about.

Granville Sewell
June 5, 2017 at 02:00 PM PDT
Excellent point EA. One must assume the random generation from a multiverse to overcome the sequence of events leading to our special occurrence:

Cosmic Constants
Gravitational force constant
Electromagnetic force constant
Strong nuclear force constant
Weak nuclear force constant
Cosmological constant

Initial Conditions and “Brute Facts”
Initial distribution of mass energy
Ratio of masses for protons and electrons
Velocity of light
Mass excess of neutron over proton
Principle of quantization
Pauli Exclusion Principle

Solar System Conditions
Near inner edge of circumstellar habitable zone
Low-eccentricity orbit outside spin-orbit and giant planet resonances
A few, large Jupiter-mass planetary neighbors in large circular orbits
Outside spiral arm of galaxy
Near co-rotation circle of galaxy, in circular orbit around galactic center
Within the galactic habitable zone
Human advancement occurs during the cosmic habitable age

“Local” Planetary Conditions
Steady plate tectonics with right kind of geological interior
Right amount of water in crust
Sun is the right type star for life
Large moon with right rotation period
Stable atmosphere
Proper concentration of sulfur
Ability to create fire
Right planetary mass
Unique solar eclipse
Cosmic shield
Information rich biosphere

Human Life
Origin of life hurdles
DNA has: Functional Information - Encoder - Error Correction - Decoder
DNA contains multi-layered information that reads both forward and backwards - DNA stores data more efficiently than anything we’ve created - and a majority of DNA contains metainformation
Complex pieces of molecular machinery in the cell
The information enigma
Morality
Consciousness
Free will
Justice
Perception
Human language
Recognizing art in nature

Heartlander
June 5, 2017 at 01:45 PM PDT
Phinehas writes,

My intuition is that as the number of characters in your string goes up beyond 100, though the number of specifications will also increase, the percentage of specifications as compared to the total number of possible sequences goes down, perhaps even approaching zero. ... Can it be demonstrated beyond simple intuition?

Hmmm. This is exactly the argument I made here and here. Later, Phinehas wrote, "Thank you for your posts @105 and @192. I think they get to the heart of the issue for me." I also pointed out in those two posts in the other thread that specification is not a simple black-and-white issue. If we deal 13 out of 52 cards, in order (the beginning situation we discussed), getting all spades in order would be highly significant. However, hands where every group of 3 cards was sequential (3 4 5 Q J 10 8 7 6 2 3 4 K) might strike someone as pretty significant if they noticed the pattern, and there would be many more such hands than all spades in order. Therefore, it would be hard to provide a mathematically rigorous analysis of significance, even if one tried to assign different "significance values" to different hands, because there would be so much subjective judgment. So I think the idea is important, and we can go beyond just "intuition," but I'm not sure how much could be demonstrated mathematically in any rigorous fashion.

jdk
June 5, 2017 at 12:57 PM PDT
Dr Sewell, I am getting a 404 code. KF

kairosfocus
June 5, 2017 at 12:57 PM PDT
Granville Sewell @11, I have been a fan of yours for years, Granville. I have a question for you. When you say,

Of course, one can still argue that the spectacular increase in order seen on Earth does not violate the second law because what has happened here is not really extremely improbable. And perhaps it only seems extremely improbable, but really is not, that, under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into nuclear power plants and spaceships and digital computers.

Why do you not use the most unlikely, functionally complex phenomenon known to us -- humanity itself -- as the example of extreme improbability? Nuclear power plants, spaceships and digital computers consist of crude technology in comparison to a human being.

harry
June 5, 2017 at 12:39 PM PDT
Hey Eric: My intuition is that as the number of characters in your string goes up beyond 100, though the number of specifications will also increase, the percentage of specifications as compared to the total number of possible sequences goes down, perhaps even approaching zero. Does this seem right to you? Can it be demonstrated beyond simple intuition?

Phinehas
June 5, 2017 at 12:24 PM PDT
Another way to convince an ESII advocate: We are playing poker, I am dealing. I deal myself a royal flush, you an 'ordinary' hand. I win and take your money. We play another hand, and I deal myself a royal flush again. Total of 10 hands, all dealt by me, always getting a royal flush. You accuse me of cheating!!!! Not so fast. ESII establishes that all sequences have equal probability. Thus there is nothing unusual about the 50 cards I have dealt myself. No more unlikely than any other 50-card sequence. Thanks for your money! Force the ESII advocate to tell you WHY he is convinced I cheated.

wsread
June 5, 2017 at 12:21 PM PDT
Hi Eric, Thank you for your efforts in this thoughtful post. Though your post implies this, I think emphasizing the role of the receiver in information theory is helpful. There always has to be someone or something receiving and interpreting the information for it to be considered such.

In Shannon's information model there is a transmitter and a receiver and they have, in advance, specified what the information will be: a set of messages. The receiver's job is to discern what message was sent by the transmitter from a (possibly corrupted) received sequence. The thing to note is that for something to be considered information and not gibberish there must be someone or something who finds what is received informative.

In a cell, the many machines that interpret the DNA code and other possible forms of information are the receivers. They must receive a very small subset of all the possible DNA sequences or the cell will die. This subset is their information and is special because it has meaning to them, even though the sequence when encoded in letters looks no different to us than any other fatal sequence.

I may be confused, but it seems the CSI model attempts to define information as something that exists as a static entity immutable to anyone who considers it. It assumes (I think) any person or entity receiving a piece of information would agree on both the nature and amount of information it has. This is of course untrue. The receiver must be invoked as the interpreter, and just as a valid DNA sequence is gibberish to us but not to a cell, the information content depends on who does the receiving and interpreting.

A favorite trick of those arguing the ESII is: 'I make valid text into a zip file. Now it looks like gibberish and is no different from any other random sequence of the same length, but we know there is really information in there.' But if I do not know how to unzip the file and get something I understand, it is not information to the receiver (me).

This comment is too long so I will stop. But when considering these discussions, try asking yourself: Who is the receiver in this situation? What role does the receiver serve? Is it possible to specify this as information without having someone or something interpreting the result?

GBDixon
June 5, 2017 at 12:14 PM PDT
You all might be interested in my tutorial on specified complexity which covers some of this ground.

johnnyb
June 5, 2017 at 11:56 AM PDT
mike1962,

Everyone would be surprised if it output the same 1000-bit string twice in the same universe even though “all strings are just as likely.” Nobody would claim it was a random generator except the insane or those with an agenda. 1000-bit strings should never be expected to randomly hit twice in the same universe. It’s logically possible, but there are better explanations. Everything we know and experience about reality tells us so.

Interesting point. According to the "birthday paradox" calculations, it would take about 2^500 trials before the chance of a repeated bit string reaches 50%, which is many more than I had anticipated.

daveS
June 5, 2017 at 11:32 AM PDT
FYI: In 22 I wrote "suspect" cheating, but in fact I would conclude cheating. Just a small clarification.

jdk
June 5, 2017 at 11:28 AM PDT
The bottom line: Nobody would be surprised at a supposed random generator producing any particular 1000-bit string. Everyone would be surprised if it output the same 1000-bit string twice in the same universe even though "all strings are just as likely." Nobody would claim it was a random generator except the insane or those with an agenda. 1000-bit strings should never be expected to randomly hit twice in the same universe. It's logically possible, but there are better explanations. Everything we know and experience about reality tells us so. Specification trumps the ESII argument.

mike1962
June 5, 2017 at 11:04 AM PDT
Hi Eric. Yes, I am discussing probability from a theoretical point of view where all equiprobable elements are chosen randomly. I am also aware of what happens when you take significance into the situation, which brings in a human element. In one of my long posts on the Darwinism thread I remember writing that if I was dealt 13 cards from what was purportedly a random deck and got 13 spades in order, I would suspect cheating, not chance, as the cause.

jdk
June 5, 2017 at 10:02 AM PDT
Your rudeness is duly noted, Barry, as well as the snarky tone of your post. FWIW, although I don't expect that you have paid attention, I've been writing on this issue on the Darwinism thread, and I have NOT been making the argument that you dismiss, nor have I been making the argument you pointed to as "Miller's Mendacity." In fact, I have been describing things in a way which supports your position. So perhaps you should think about not jumping to conclusions before you call someone foolish.

jdk
June 5, 2017 at 09:57 AM PDT
jdk @14: I infer from your comments that you did not follow the link in my comment @13. Perhaps you should before you comment further. It will help you look somewhat less foolish.

Barry Arrington
June 5, 2017 at 09:40 AM PDT
Dionisio: Thanks for the thoughts. I'm not sure we can't apply probabilities to procedures as a matter of principle, although current lack of understanding of the procedures would make it more difficult, to be sure. You and gpuccio are quite right that the additional procedural aspects are hugely significant. This is part of the problem with the co-option proposal for the bacterial flagellum, for example. You can't just throw more parts or more lines of code into the middle of a highly controlled and sophisticated manufacturing process and expect anything good to come of it.

Eric Anderson
June 5, 2017 at 09:33 AM PDT
jdk @8: Thanks for stopping by. I understand your desire to limit your part of the discussion and you are of course free to participate where you want. If I may, however, I think the points I am raising are indeed relevant to issues you have discussed and have said you are interested in. I am focusing on specific nuances, so let me explain a bit further.

Right off the bat, this comment excludes me, as I have made it clear that of course some improbable events have more significance than others.

Agreed that some sequences or events have more significance than others. Whether they are improbable is part of the question.

Every sequence is just as improbable as any other. This is a true fact, and I believe Eric agreed with me about this on the other thread.

Almost. This is one of the important nuances I am highlighting in this OP. In the context of naked probability without any assessment of specification, which is the context you have been discussing and saying you are interested in, it is true that any randomly-generated sequence will have the same probability as any other randomly-generated sequence (assuming similar parameters, length, and so on). On that point I agree with you. However, as soon as we get to a question of the origin of a sequence, the logic must shift dramatically. It is quite clear to me that there is a disconnect in the discussion between those who are arguing that every sequence is equally improbable and those who are arguing against it. Is the disconnect because people can't do math? Perhaps for some, but not for most. The disconnect is not the math, it is the logic -- specifically the underlying assumptions in the discussion. If we assume a random generation, then yes, each sequence is just as improbable as the next. This is worth noting, but is hardly of significant substance. On the other hand, if we are asking about the origin of the sequence, then we are dealing with competing causal explanations, in which case the probabilities are not equivalent. Not even close.

So I appreciate your desire to talk about the equal probabilities assuming random generation, assuming the same underlying causal explanation. But I can assure you that this assumption has not been made clear to many of your interlocutors. Thus the disconnect. In any event, even if you don't want to spend a lot of time on the issues I am highlighting, do you agree that the every-sequence-is-improbable argument is not a good argument against the design inference, for the reasons I have outlined?

Eric Anderson
June 5, 2017 at 09:29 AM PDT
Bob O'H @4:

Isn’t that also a problem for ID proponents, as they also make the same assumption? The CSI family of statistics seem to make this assumption and don’t, for example, include incremental selection.

Not really. Here I have just been analyzing a single string. In biological terms we might think of a particular nucleotide sequence or amino acid sequence. This is an incredibly low baseline that must be reached. If we are thinking of a more complete system, then we run into my additional point:

Furthermore, when we have, as we do with living systems, multiple such sequences that are required for a molecular machine or a biological process or a biological system – arranged as they are in their own additional specified configuration that would compound the odds – then such calculations quickly go off the charts.

I presume this latter is what you're referring to -- the idea that something like, say, the bacterial flagellum could be constructed bit by bit, rather than all at once? You are right that it is important to acknowledge the hypothetical possibility of an incremental, stepwise construction in the initial analysis. However, upon closer analysis what we find in most cases is that this is nothing but wild speculation. As an empirical matter, it is quite clear that numerous systems require a significant number of parts to work. Indeed, the way in which the parts of the bacterial flagellum have been identified has been primarily through knockout experiments, thus confirming the irreducible core of the flagellum.

Now one could argue, as Darwinists have been wont to do in the face of this empirical evidence, that maybe, hypothetically, perhaps, the bacterial flagellum could have been constructed by a long series of individual components that eventually came together to form the bacterial flagellum. There are a few significant problems with this idea. First, there is no evidence for it. Second, there is not even a reasonable theoretical basis for such a scenario, beyond vague assertions. Third, although such a scenario would indeed avoid an "all at once" construction, it instead requires a long series of mutations and changes, all of which just happen to be beneficial, all of which just happen to be of such selective benefit as to become fixed in the population, all of which just happen to occur at the right time and in the right order, all of which just happen to add up to a complex functional system.

Unfortunately for the Darwinian paradigm, such an approach simply avoids the all-at-once probabilistic hurdle by embracing its own set of fantastic probabilistic hurdles. It's out of the frying pan and into the fire. It isn't realistic or reasonable. Most of the stories, like Matzke's made-up hypothetical he managed to get published, can scarcely even be called science. Rather, they are just another in the long string of simplistic Darwinian just-so stories.

Eric Anderson
June 5, 2017 at 08:58 AM PDT
Are you talking about poker? Of course each hand is equally random. Or are you talking about something else?

jdk
June 5, 2017 at 07:36 AM PDT
Every sequence is equally probable, but not every sequence is equally random. https://en.wikipedia.org/wiki/Algorithmic_information_theory

EricMH
June 5, 2017 at 07:34 AM PDT
Nothing in poker is of extremely low probability. There are only 2.6 million possible hands, all equally probable: even royal straight flushes show up about once every 650,000 hands. Also, there is a well-defined hierarchy of significance, and the probabilities of all the situations are well known.

jdk
June 5, 2017 at 07:26 AM PDT
Why won't these "improbable things happen all the time" people play poker with me? https://uncommondescent.com/intelligent-design/low-probability-is-only-half-of-specified-complexity/

Barry Arrington
June 5, 2017 at 07:18 AM PDT
D, process, regulation, interactions across complex networks such as cellular metabolism and so forth are all connected to the FSCO/I issue. It is just that it is usually hard to see as a direct, right-there-in-the-microscope, visible feature. It is that visibility issue that makes D/RNA, proteins, enzymes and ribosomes so important in the discussion.

And BTW, fine tuning is closely connected; in the world of life, islands of function are about clusters of operating points deeply isolated in a field or sea of possible configs. And it does no real good to try to project hill-climbing within an island to the wider context, where the dominant challenge to evolutionary materialistic chance and necessity schemes is to blindly FIND a shoreline of function. Unfortunately, this difference between blind and insightful exploration of a space of possibilities seems very hard for objectors to hear.

Look man, I am glancing across at a piece of white pine that came here as shipping support material that I can see reconfigured as a shad bait made of wood with through wiring. There is not a chance in the world that that would happen without insight and design. And if you think "oh, life reproduces, so evolution answers," you have not seen the FSCO/I challenge to get TO reproduction yet, and need to read Paley Ch 2 on the self-replicating time-keeping watch, then apply that to the von Neumann kinematic self-replicator and to the OoL challenge at the root of the tree of life. Then, to onward origin of body plans with vastly differing architectures, and up to our own origin and the issue of where rational mind comes from that allows us to have a discussion. [BTW, I am reading Reppert and dipping into Pearcey right now, with Yockey waiting in the wings. Good things to come; just the prefatory material in Yockey has a lot to say that too many are too stubborn to hear, frankly.]

And at cosmological level, we are talking about the abstract architecture of a cosmos with many, mutually adapted, delicately balanced factors. And again, we see an astonishing degree of difficulty in even following the point accurately. These suggest to me that we deal with commitments at worldview level that warp the more technical discussion. Then, when, to break through, we put up and followed up -- cf here -- a striking concrete case such as Antikythera, and join that to Paley, it is studiously ignored or taken as an occasion for side-tracking tangents that do not look very fruitful. Then notice what didn't happen when we corrected the assertions that tried to discredit ID researchers -- the objectors showed that they do not see themselves as accountable before truth and fairness; that is sadly standard for agit-prop operatives, but there are too many enabling by going along. All of this tends to point to where our civilisation is going, and it is not pretty.

Sigh, back to the RW challenges of the day, even on Whitmonday. KF

kairosfocus
June 5, 2017 at 07:01 AM PDT
In my recent Physics Essays article: http://www.math.utep.edu/Faculty/sewell/articles/pe_sewell.html I wrote:

But the second law is always about probability, so what is still useful in more complicated scenarios is the fundamental principle behind all applications of the second law, which is that natural causes do not do macroscopically describable things which are extremely improbable from the microscopic point of view.

Footnote: Extremely improbable events must be macroscopically (simply) describable to be forbidden; if we include extremely improbable events which can only be described by an atom-by-atom accounting, there are so many of these that some are sure to happen. (If we flip a billion fair coins, any particular outcome we get can be said to be extremely improbable, but we are only astonished if something extremely improbable and simply describable happens, such as “the last million coins are tails.”) If we define an event to be simply describable when it can be described in m or fewer bits, there are at most 2^m simply describable events; then we can set the probability threshold for an event to be considered “extremely improbable” so low that we can be confident that no extremely improbable, simply describable events will ever occur.

Notice the similarity between this and Dembski’s argument that unintelligent forces do not do things that are “specified” (simply or macroscopically describable) and “complex” (extremely improbable).

Granville Sewell
June 5, 2017 at 07:00 AM PDT
