Uncommon Descent Serving The Intelligent Design Community

Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument


The past few days on this thread there has been tremendous activity and much discussion about the concept of probability.  I had intended to post this OP months ago, but yesterday I found it still in my drafts folder, mostly, but not quite fully, complete.  In the interest of highlighting a couple of the issues hinted at in the recent thread, I decided to quickly dust off this post and publish it right away.  This is not intended to be a response to everything in the other thread.  In addition, I have dusted this off rather hastily (hopefully not too hastily), so please let me know if you find any errors in the math or otherwise, and I will be happy to correct them.

—–


In order to help explain the concept of probability, mathematicians often talk about the flip of a “fair coin.”  Intelligent design proponents, including William Dembski, have also used the coin flip example as a simplified way to help explain the concept of specified complexity.

For example, the number of possible outcomes from flipping a fair coin 500 times is a simple 2 to the 500th power, so the odds of any particular sequence are approximately 1 in 3.3*10^150.  Based on this simple example, I have heard some intelligent design proponents, perhaps a little too simplistically, ask: “What would we infer if we saw 500 heads flipped in a row?”
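The arithmetic is easy to check.  A quick sketch in Python (the variable names are mine, for illustration only):

```python
from math import log10

# 500 independent flips of a fair coin give 2^500 equally likely sequences,
# so the odds of any one particular sequence are 1 in 2^500.
outcomes = 2 ** 500
print(f"2^500 ~ 10^{log10(outcomes):.1f}")  # prints 2^500 ~ 10^150.5, i.e. about 3.3 * 10^150
```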

At this point in the conversation the opponent of intelligent design often counters with various distractions, but perhaps the favorite argument – certainly the one that at least at first blush appears to address the question with some level of rationality – is that every sequence is just as improbable as another.  And therefore, comes the always implied (and occasionally stated) conclusion, there is nothing special about 500 heads in a row.  Nothing to see here; move along, folks.  This same argument at times rears its head when discussing other sequences, such as nucleotides in DNA or amino acid sequences in proteins.

For simplicity’s sake, I will discuss two examples to highlight the issue: the coin toss example and the example of generating a string of English characters.

Initial Impressions

At first blush, the “every-sequence-is-just-as-improbable-as-the-next” (“ESII” hereafter) argument appears to make some sense.  After all, if we have a random character generator that generates a random lowercase letter from the 26 characters in the English alphabet, where each character is generated without reference to any prior characters, then in that sense, yes, any particular equal-length sequence is just as improbable as any other.

As a result, one might be tempted to conclude that there is nothing special about any particular string – all are equally likely.  Thus, if we see a string of 500 heads in a row, or HTHTHT . . . repeating, or the first dozen prime numbers in binary, or the beginning of Hamlet, then, according to the ESII argument, there is nothing unusual about it.  After all, any particular sequence is just as improbable as the next.

This is nonsense.

Everyone, including the person making the ESII argument, knows it is nonsense.

A Bridge Random Generator for Sale

Imagine you are in the market for a new random character generator.  I invite you to my computer lab and announce that I have developed a wonderful new random character generator that with perfect randomness selects one of 26 lowercase letters in the English alphabet and displays it, then moves on to the next position, with each character selection independent of the prior.  If I then ran my generator and it spit out 500 a’s in a row, everyone in the world would immediately and clearly and unequivocally recognize that something funny was going on.
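For concreteness, here is a minimal sketch of what an honest generator of this kind looks like, and of just how improbable 500 a’s in a row would be under it (a Python illustration; the function name is my own):

```python
import random
import string
from math import log10

def fair_string(n, rng=random.SystemRandom()):
    """An honest generator: each lowercase letter drawn independently and uniformly."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(n))

print(fair_string(10))  # e.g. 'qngweyalbp' -- a different string almost every run

# Probability that an honest draw yields 500 a's in a row: (1/26)^500.
print(f"P(500 a's) = 10^{-500 * log10(26):.1f}")  # prints P(500 a's) = 10^-707.5
```

Seeing the all-a’s string from such a generator would, as argued below, point to a bug or to tampering rather than to a fair draw.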

But if the ESII argument is valid, no such recognition is possible.  After all, every sequence is just as improbable as the next, the argument goes.

Yet, contrary to that claim, we would know, with great certainty, that something was amiss.  Any rational person would immediately realize that either (i) there was a mistake in the random character generator, perhaps a bad line of code, or (ii) I had produced the 500 a’s in a row purposely.  In either case, you would certainly refuse to turn over your hard-earned cash and purchase my random character generator.

Why does the ESII argument so fully and abruptly contradict our intuition?  Could our intuition about the random character generator be wrong?  Is it likely that the 500 a’s in a row was indeed produced through a legitimate random draw?  Where is the disconnect?

Sometimes intelligent design proponents, when faced with the ESII argument, are at a loss as to how to respond.  They know – everyone knows – that there is something not quite right about the ESII argument, but they can’t quite put a finger on it.  The ESII argument seems correct on its face, so why does it so strongly contradict our real-world experience about what we know to be the case?

My purpose today is to put a solid finger on the problems with the ESII argument.

In the paragraphs that follow, I will demonstrate that it is indeed our experience that is on solid ground, and that the ESII argument suffers from two significant, and fatal, logical flaws: (1) assuming the conclusion and (2) a category mistake.

Assuming the Conclusion

Commenter R0bb offers the following challenge: “Randomly generate a string of 50 English characters. The following string is an improbable outcome (as is every other string of 50 English characters): aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa”

R0bb goes on to note that the probability of a particular string occurring is dependent on the process that produced it.  I agree on that point.

Yet there is a serious problem with the “everything-is-just-as-improbable” line of argumentation when we are talking about ascertaining the origin of something.

When R0bb claims his string of a’s is just as improbable as any other string of equal length, that is only true by assuming the string was generated by a random generator, which, if we examine his example, is exactly what he did.

However, when we are examining an artifact to determine its origin, the way in which it was generated is precisely the question at issue.  Saying that every string of equal length is just as improbable as any other, in the context of design detection, is to assume as a premise the very conclusion we are trying to reach.

We cannot say, when we see a string of characters (or any other artifact) that exhibits a specification or particular pattern, that “Well, every other outcome is just as improbable, so nothing special to see here.” The improbability, as R0bb pointed out, is based on the process that produced it. And the process that produced it is precisely the question at issue.

When we come across a string like: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa or some physical equivalent, like a crystal structure or a repeating pulse from a pulsar, we most definitely do not conclude it was produced by some random process that just happened to produce all a’s this time around, because, hey, every sequence is just as improbable as the next.

Flow of Analysis for Design Detection

Let’s dig in just a bit deeper and examine the proper flow of analysis in the context of design detection – in other words in the context of determining the origin of, or the process that produced, a particular sequence.

The proper flow of analysis is not:

1. Assume that two sequences, specified sequence A and unspecified sequence B, arose from a random generator.
2. Calculate the odds of sequence A arising.
3. Calculate the odds of sequence B arising.
4. Compare the odds and observe that the odds are equal.
5. Conclude that every sequence is “just as likely” and, therefore, there is nothing special about a sequence that constitutes a specification.

Rather, the proper flow of analysis is:

1. Observe the existence of specified sequence A.
2. Calculate the odds of sequence A arising, assuming a random generator.
3. Observe that a different cause, namely an intelligent agent, has the ability to produce sequence A with a probability of 1.
4. Compare the odds and observe that there is a massive difference between the odds of the two causal explanations.
5. Conclude, based on our uniform and repeated experience and by way of inference to the best explanation, that the more likely source of the sequence was an intelligent agent.

The problem with the first approach – the approach leading to the conclusion that every sequence is just as improbable as the next – is that it assumes the sequence under scrutiny was produced by a random generator.  Yet the origin of the sequence is precisely the issue in question.

This is the first problem with the ESII claim.  It commits a logical error in thinking that the flow of analysis is to assume a random generator and then compare sequences, when the question of whether a random generator produced the specified sequence in the first place is precisely the issue in question.  As a result, the ESII argument against design detection fails on logical grounds because it assumes as a premise the very conclusion it is attempting to reach.

The Category Mistake

Now let us examine a more nuanced, but equally important and substantive, problem with the ESII argument.  Consider the following two strings:

ababababababababababababababab

qngweyalbpelrngihseobkzpplmwny

When we consider these two strings in the context of design detection, we immediately notice a pattern in the first string, in this case a short-period repeating pattern ‘ab’.  That pattern is a specification.  In contrast, the second string exhibits no clear pattern and would not be flagged as a specification.

At this point the ESII argument rears its head and asserts that both sequences are just as improbable.  We have already dispensed with that argument by showing that it assumes as its premise the very conclusion it is trying to reach.  Yet there is a second fundamental problem with the ESII argument.

Specifically, when we are looking at a new artifact to see whether it was designed, we need not be checking to see if it conforms to an exact, previously-designated, down-to-the-letter specification.  Although it is possible that in some particular instance we might want to home in on a very specific pre-defined sequence for some purpose (such as when checking a password), in most cases we are interested in a general assessment as to whether the artifact exhibits a specification.

If I design a new product, if I write a new book, if I paint a new painting – in any of these cases, someone could come along afterwards and recognize clear indicia of design.  And that is true even if they did not have in mind a precise, fully-detailed description of the specification up front.  It is true even if they are making what we might call a “post specification.”

Indeed, if the outside observer did have such a fully-detailed specification up front, then it would have been they, not I, who had designed the product, or written the book, or painted the painting.

Yet the absence of a pre specification does not in the least diminish their ability to correctly and accurately infer design.  As with the product or the book or the painting, every time we recognize design after the fact, which we do regularly every day, we are drawing an inference based on a post specification.

The reason for this is that when we are looking at an artifact to determine whether it is designed, we are usually analyzing its general properties of specification and complexity rather than the very specific sequence in question.  Stated another way, it is the fact of a particular type of pattern that gives away the design, not necessarily the specific pattern itself.

Back to our example.  If instead of aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, our random character generator produced ababababababababababababababab, we would still be confident that something was fishy with the random character generator.  The same would be true with acacacacacacacacacacacacacacac and so on.  We could also alter the pattern to make it somewhat longer, perhaps abcdabcdabcdabcdabcdabcdabcdabcd or even abcdefghabcdefghabcdefghabcdefgh and so on.

Indeed, there are many periodic repetitive patterns that would raise our suspicions just as much and would cause us to conclude that the sequence was not in fact produced by a legitimate random draw.
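That suspicion can even be mechanized.  Here is a minimal sketch of a period detector (my own illustration, not anything from the OP):

```python
def shortest_period(s):
    """Length of the shortest block that, repeated (and truncated), reproduces s."""
    for p in range(1, len(s) + 1):
        if all(s[i] == s[i % p] for i in range(len(s))):
            return p

for s in ["a" * 30, "ab" * 15, "abcd" * 8, "qngweyalbpelrngihseobkzpplmwny"]:
    print(s[:12] + "...", "period", shortest_period(s))  # periods 1, 2, 4, 30
```

A simple screening rule might flag any string whose shortest period is far smaller than its length.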

How many repetitive sequences would raise our suspicions – how many would we flag as a “specification”?  Certainly dozens or even hundreds.  Likely many thousands.  A million?  Yes, perhaps.
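We can put a rough number on that.  Counting every string over the 26 letters that is built by repeating a block of at most five characters (repeat-and-truncate; a slight overcount, since some strings are counted at more than one period):

```python
# Number of strings generated by repeating a block of length 1 through 5.
short_period_patterns = sum(26 ** p for p in range(1, 6))
print(short_period_patterns)  # prints 12356630 -- about twelve million
```

That lands in the same “perhaps a million” ballpark, give or take an order of magnitude.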

“But, Anderson,” you complain, “doesn’t your admission that there are many such repetitive sequences mean that we have to increase the likelihood of a random process stumbling upon a sequence that might be considered a ‘specification’?”  Absolutely.  But let’s keep the numbers in context.

From Repetition to Communication

I’ll return to this in a moment, but first another example to drive the point home.  In addition to many repetitive sequences, don’t we also have to consider non-repetitive, but meaningful sequences?  Absolutely.

David Berlinski, in his classic essay, The Deniable Darwin, notes the situation thusly:

Linguists in the 1950’s, most notably Noam Chomsky and George Miller, asked dramatically how many grammatical English sentences could be constructed with 100 letters. Approximately 10 to the 25th power, they answered. This is a very large number. But a sentence is one thing; a sequence, another. A sentence obeys the laws of English grammar; a sequence is lawless and comprises any concatenation of those 100 letters. If there are roughly 10^25 sentences at hand, the number of sequences 100 letters in length is, by way of contrast, 26 to the 100th power. This is an inconceivably greater number. The space of possibilities has blown up, the explosive process being one of combinatorial inflation.

Berlinski’s point is well taken, but let’s push him even further.  What about other languages?  Might we see a coherent sentence show up in Spanish or French?  If we optimistically include 1,000 languages in the mix, not just English, we start to move the needle slightly.  But, remarkably, still not very much.  Even generously ignoring the very real problem of additional characters in other languages, we end up with an estimate of something on the order of 10^28 language sequences – 10^28 patterns that we might reasonably consider as specifications.

In addition to Chomsky’s and Miller’s impressive estimate of coherent language sentences, let’s now go back to where we started and add in the repetitive patterns we mentioned above.  A million?  A billion?  Let’s be generous and add 100 billion repetitive patterns that we think might be flagged as a specification.  It hardly budges the calculation.  It is a rounding error.  We still have approximately 10^28 potential specifications.

10^28 is a most impressive number, to be sure.

But, as Berlinski notes, the odds of hitting a specific 100-character string are 1 in 26^100, or about 1 in 3.14 x 10^141.  Just to make the number simpler for discussion, let’s again be generous and round it down to 1 x 10^141.  If we subtract out the broad range of potential specifications from this number, we are still left with an astronomically large number of sequences that would not be flagged as a specification.  How large?  Stated comparatively, given a 100-letter randomly-generated sequence, the odds of getting a specification (not a particular pre-determined specification, but any specification at all) are only about 1 in 10^113.
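The closing ratio is straightforward logarithm arithmetic.  A sketch under the stated assumptions (roughly 10^28 specifications, 100-letter strings):

```python
from math import log10

total_log = 100 * log10(26)  # log10 of 26^100, about 141.5
spec_log = 28                # log10 of the ~10^28 generously counted specifications
print(f"P(any specification) ~ 1 in 10^{total_log - spec_log:.0f}")  # ~ 1 in 10^113
```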

What Are the Odds?

What this means in practice is that even if we take an expansive view of what can constitute a “specification,” the odds of a random process ever stumbling upon any one of these 10^28 specifications are still only approximately 1 in 10^113.  These are outrageously long odds, and they give us excellent confidence, based on what we know from our real-world experience, that if we see any of these specifications (not just a particular one, but any one of them out of the entire group) it likely did not come from a random draw.  And it doesn’t even make much difference if our estimate of the number of specifications is off by a couple of orders of magnitude.  The gap between the number of specifications and non-specifications is so great that the correction would still be a drop in the proverbial bucket.

Now 1 in 10^113 is a long shot to be sure; it is difficult if not impossible to grasp such a number.  But intelligent design proponents are willing to go further and propose that a higher level of confidence should be required.  Dembski, for example, proposed 1 in 10^150 as a universal probability bound.  In the above example, Berlinski and I talked of a 100-character string.  But if we increase it to 130 characters then we start bumping up against the universal probability bound.  More characters would of course compound the odds.
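If we hold the specification count at roughly 10^28 (an assumption; longer strings would admit more specifications), the length at which those odds cross the 10^150 bound can be located directly:

```python
from math import log10

# Smallest string length n at which the odds of a fair draw hitting *any*
# of ~10^28 specifications fall past 1 in 10^150.
n = 1
while n * log10(26) - 28 < 150:
    n += 1
print(n)  # prints 126 -- in the neighborhood of the 130 characters mentioned above
```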

Furthermore, when we have, as we do with living systems, multiple such sequences that are required for a molecular machine or a biological process or a biological system – arranged as they are in their own additional specified configuration that would compound the odds – then such calculations quickly go off the charts.

We can quibble about the exact calculations.  We can add more languages and can dream up other repetitive patterns that might, perhaps, be flagged as specifications.  We can tweak the length of the sequence and argue about minutiae.  Yet the fundamental lesson remains: the class of nonsense sequences vastly outnumbers the class of meaningful and/or repetitive sequences.

To sum, when we see the following sequences:

(1)       ababababababababababababababab

(2)       tobeornottobethatisthequestion

(3)       qngweyalbpelrngihseobkzpplmwny

We need to understand that rather than comparing one improbable sequence with another equally improbable sequence, what we are really comparing is a recognizable pattern, in the form of either (1) a repetitive sequence or (2) a meaningful sequence, versus (3) what appears to be a nonsense, random draw.

Properly formulated thusly, the probability of (1) or (2) versus (3) is definitely not equal.

Not even close.

Not even in the ballpark.

Thus, the failure to carefully identify what we are dealing with for purposes of design detection gives the ESII proponent the false impression that when choosing between a recognizable pattern and a random draw we are dealing with equivalent odds.  We are not.

—–

Conclusion

While examples of coin tosses and character strings may be oversimplifications in comparison to biological systems, such examples do give us an idea of the basic probabilistic hurdles faced by any random-based process.

The ESII argument, popular though it may be among some intelligent design opponents, is fatally flawed.  First, because it assumes as a premise (random generation) the very conclusion it seeks to reach.  Second, because it fails to properly categorize sequences, mistakenly placing a random sequence on the same probabilistic footing as a patterned/specified sequence, rather than properly comparing the relative sizes of the two categories of sequences.

Opponents of intelligent design may be able to muster rational arguments that question the strength of the design inference, but the “every-sequence-is-equally-improbable” argument is not one of them.

--If they do not violate the laws of physics, by which process do they create knowledge, which other processes cannot?-- If something doesn't violate the laws of physics, that doesn't mean the laws of physics make it happen. tribune7
The only known source of meaningful information is an intelligent agent. Information is created by and grows by intelligence, not by some hypothetical constructor. UB has talked you patiently through this patently obvious issue. Constructor theory (which you managed to bring in, even when talking about probability) seems completely impotent in this regard.
First, if by "known" you mean "have experienced", you are appealing to inductivism, which is impossible. Second, if you do not know how intelligent agents create knowledge, or you're confused about the means by which they do, why would you expect to know if only intelligent agents can create it? The fact that you keep appealing to induction indicates you're confused about how knowledge grows. But, by all means, feel free to formulate a principle of induction that works in practice.
(And, by the way, the word you keep using “knowledge” requires an intelligent agent, not only for the source of the information but to be able to “know” the information.)
As I've pointed out elsewhere, the term "knowledge", as I'm using it here, represents a unification. Specifically, it refers to information that plays a causal role in being retained when embedded in a storage medium. That doesn't require a knowing subject.
Why would design require violating the laws of physics? It obviously doesn’t require or even imply any such thing.
So, then what's so special about designers? If they do not violate the laws of physics, by which process do they create knowledge, which other processes cannot? critical rationalist
--there is nothing special about 500 heads in a row-- Anyone who concludes that 500 heads in a row is chance is, literally, a fool. Really, as in literally. https://www.merriam-webster.com/dictionary/fool But take it a step further. Suppose you flip 500 heads in a row and the overhead light comes on, i.e. a specific event. Suppose you flip another 500 heads in a row and the light goes off. Suppose this pattern is consistent. Suppose you still insist that it is the result of chance. Yes, there is a literal word for one who insists this. tribune7
The necessary knowledge must be present there. What is the probability of that? IOW, what we need is not probability, but an explanation for how knowledge grows, as it’s possible. That’s why constructor theory is about what tasks are possible, which are impossible, and why.
The only known source of meaningful information is an intelligent agent. Information is created by and grows by intelligence, not by some hypothetical constructor. UB has talked you patiently through this patently obvious issue. Constructor theory (which you managed to bring in, even when talking about probability) seems completely impotent in this regard. You have consistently and repeatedly failed to understand the difference between a source of information and how it can be instantiated in a physical medium. (And, by the way, the word you keep using "knowledge" requires an intelligent agent, not only for the source of the information but to be able to "know" the information.)
If the probability of a designer designing a sequence is always 1, does that require them to somehow violate the laws of physics? I don’t see how this is a nonsense or irrelevant question.
Why would design require violating the laws of physics? It obviously doesn't require or even imply any such thing. Eric Anderson
@Eric
In the meantime, please stop posting this nonsense and derailing other threads, as you have now done multiple times.
The criticism in comment #93 was regarding the validity of probability, or the lack thereof, which was the topic of the OP. Nor did it use constructor theory to argue against probability. Rather, it went in the opposite direction. It’s not constructor theory specific, so it’s unclear what your objection to it is. This lecture expands on this in detail. It’s unclear what a theory about how people should behave if they want to win at games of chance has to do with physical reality. Do you have any criticism of it? Furthermore, I pointed out the problems with your claim that the probability of a designer creating specific sequences was 1. To reemphasize one in particular, one of those sequences could be used to cure cancer. However, if the probability of a mere “intelligent agent” creating it was 1, we should have a cure for cancer by now, right? Yet, we do not. So, merely being an “intelligent agent” simply isn’t sufficient. Nothing about this criticism is constructor theory specific. IOW, saying it’s “constructor theory nonsense” seems to be an attempt to avoid it. In addition, take the representation of binary data in a computer. Anything below 0.6V is considered a 0, while anything equal to or above 0.6V is a 1. This allows for error correction. A series of voltages randomly distributed from 0.0 to 0.5 would all appear as 00000000000000000 in that system. That’s how imperfect voltage regulation is corrected in transistor-based digital systems. (In mechanical computers, cogs snapped into place, rather than being continuously variable.) If a sequence only triggers something in biology when such a threshold is met, it too could appear as a sequence of the same value over and over again, despite not being uniform either. Yet, you would assume that sequence was somehow designed.
Most recently on the Antikythera thread, UB walked very patiently through the basic issues for you. Please take some time to think through the issues and make sure you can both understand and articulate them before dumping more comments about this constructor theory on threads.
So, where is UB’s head post that I can reference? Where are the references to papers that expand on his theory, which I’ve requested? For example, I’ve asked multiple times for him to indicate what theory of information he was referring to, but he has still yet to provide one. Then, when someone else posted a link to the Biosemiotics site, what did I find? A link to Shannon’s theory. Then, later, UB said that people fifty years ago weren’t concerned whether information was Shannon information. (People 300 years ago were not concerned whether motion was based on Einstein’s theory either, as he had not conceived it yet. That doesn’t mean GR is not relevant.) So, apparently, the site someone else referred me to has links to papers that are irrelevant to UB’s theory. I really don’t see how a single paragraph he has repeated, with virtually no attempt of his own at clarification, represents “walk[ing] very patiently through the basic issues for you”. As for comment #94, it’s really a simple question, which is again relevant to the OP. If the probability of a designer designing a sequence is always 1, does that require them to somehow violate the laws of physics? I don’t see how this is a nonsense or irrelevant question. critical rationalist
IOW, what we need is not probability, but an explanation for how knowledge grows, as it’s possible.
To clarify, in constructor theory either things are possible, in that they do not violate the laws of physics, or they are impossible in that they violate the laws of physics. And we already know the growth of knowledge is possible, as opposed to impossible. So, the question is, why is knowledge possible? That's an explanation for the growth of knowledge. Then again, perhaps that's not quite the position of ID proponents here. Does the growth of knowledge violate the laws of physics, despite being possible? critical rationalist
@Eric First, the mistake isn’t ESII. The mistake is assuming probability is relevant at all, as it refers to what will probably happen, not what will happen, what actually happened, etc. So, you’re correct in that a category error is in play, just not the one you’re referring to. Probability was invented in the 16th century by people who only wanted to win at games of chance. That was the birth of game theory. Saying that something was “equally likely” was a mathematical model in regard to a fair draw of cards or a fair die. So, this was geared towards games of chance, as to what we should or should not do if we wanted to win. As such, it should be surprising that probability is found in some kind of fundamental role in physics or in detecting design. Yes, it seems to work, but when you try to take it seriously, it fails. See this lecture for details. Saying something was probably designed doesn’t really help. Saying something in a cell should have a function isn’t testable (what function?) Furthermore, that’s not even accurate, as you would say it probably has a function. Even then, designers sometimes create things that are purely ornamental, while others are unintended and even undesired consequences. Second, you seem to have grasped why constructor theory is relevant when you put the probability of a designer creating sequence A as 1. For example, I’d ask what is the probability of a designer existing to design the biosphere at the necessary time and place? How do you calculate that? What is the probability of a designer, as all the designers we’ve observed only occur in conjunction with those types of sequences, which, apparently, need a designer, etc.? In addition, one possible sequence A would cure cancer. I’m an intelligent agent. What is the probability that I can create that sequence? Merely being intelligent and making choices is not enough, as I’ve illustrated elsewhere. The necessary knowledge must be present there. What is the probability of that?
IOW, what we need is not probability, but an explanation for how knowledge grows, as it’s possible. That’s why constructor theory is about what tasks are possible, which are impossible, and why. critical rationalist
wd400 @46:
There are protein families in which no amino acid at all is conserved across all species.
How many a.a.s long are the proteins, and what function do they perform? PaV
WD400, FYI, the CSI concept traces to Orgel, 1973 and was further used by Wicken, 1979. These were seen as significant by Thaxton et al c 1984 in the first ID technical work, long before Dembski et al. The attempt to suggest a dubious concept introduced by those IDiots (that is how too many objectors will read "IDists") fails. Instead of the genetic fallacy, why not examine the concept, especially given Orgel's FUNCTION-OF-THE-CELL context? KF PS: Orgel:
living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . . [HT, Mung, fr. p. 190 & 196:] These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.
[--> this is of course equivalent to the string of yes/no questions required to specify the relevant J S Wicken "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, -- here and -- here -- (with here on self-moved agents as designing causes).]
One can see intuitively that many instructions are needed to specify a complex structure. [--> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [--> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes [--> Orgel had high hopes for what Chem evo and body-plan evo could do by way of info generation beyond the FSCO/I threshold, 500 - 1,000 bits.] [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]
kairosfocus
EA, I suggest LK Nash's 1,000 coin example is far more instructive, given Mandl's direct translation into a physical system and the utility of binary text strings. KF kairosfocus
JDK, Have you done any statistical thermodynamics? If not, I suspect that gap is the root problem. If so, do you appreciate that the issue is not so much the particular detailed microstate but the CLUSTERS of sufficiently similar microstates, and the linked pattern that there tend to be overwhelmingly dominant clusters of microstates, due to their relative statistical weight? (Indeed, that is essentially how Dembski wrote in NFL, and his use of Omega as the set of possible states is a dead giveaway to one familiar with the usual symbols used in statistical thermodynamics, as in S = k log OMEGA, from Boltzmann.) So, for example with a tray of 1,000 fair coins, there is a well-known binomial distribution that peaks near 500:500 H/T with a span of fluctuations that may be readily seen to span about +/- 200 or so, where the coins are overwhelmingly not in any particular order like HT-HT-HT . . . etc. This is of course readily translated into a paramagnetic substance in a weak B-field [Cf. here, Mandl], i.e. L K Nash's introductory coins example is physically relevant. In this context, meaningfully functional strings -- this includes EVERY possible meaningful 500-bit pattern in say ASCII English text -- are vanishingly rare relative to the dominant pattern, i.e. isolated islands of simply describable function are "lost" in an overwhelming sea of meaningless patterns that are beyond the blind search capacity of the observed cosmos. We are looking at 1.07*10^301 possibilities, in a context where our observed cosmos' 10^80 atoms, at say 10^14 observations of 1,000-coin strings per sec, for 10^17 s, would max out at 10^111 observations, i.e. 1 in 10^190 of the space. A vanishingly small relative scope of blind search. Thus, we do not need to work out precise probability estimates; the degree of blind-search challenge is more than enough. Especially, when we can readily see -- think AutoCAD file -- that discussion on bit-strings is WLOG.
(Any 3-d pattern can be informationally reduced to a string of structured answers to Y/N q's in a description language, that lays out the relevant node-arc mesh.) Beyond a reasonable threshold of 500-1,000 bits, the likelihood of getting to an island of function in the config space by blind search is clearly indistinguishable from zero. So, if we come across a bit pattern of 1,000 bits [143 ASCII characters] in recognisable English text or the like, we are well justified in inferring that this came about by intelligently directed configuration, not blind search. Especially when, if the direct search space is of order 10^301, as a search is a sample, search for a golden search comes from the power set, 2^[10^301]. That is why after the fact recognisable patterns such as the specification on functioning as English text in ASCII code or the like, are strong signs of origin of the said pattern by design. There is nothing intrinsically difficult in this basic reasoning, the problem is its import: in the heart of the living cell is DNA, with coded algorithmically functional text far, far beyond the 1,000 bit threshold. KF kairosfocus
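As a footnote for readers who want to check the arithmetic in the comment above: the figures (2^1000, the 10^111 search bound, the 1-in-10^190 ratio) can be reproduced in a few lines. A minimal sketch in Python, using only the constants quoted in the comment (10^80 atoms, 10^14 observations per second, 10^17 seconds):

```python
from math import log10

# Size of the configuration space for 1,000 fair coins
space = 2 ** 1000
print(f"2^1000 ~ 10^{log10(space):.0f}")  # ~1.07*10^301

# Generous upper bound on blind-search capacity of the observed cosmos:
# 10^80 atoms, each making 10^14 observations of 1,000-coin strings
# per second, for 10^17 seconds.
observations = 10**80 * 10**14 * 10**17   # = 10^111
fraction = observations / space           # fraction of the space sampled
print(f"fraction sampled ~ 10^{log10(fraction):.0f}")  # ~10^-190
```

Whatever one makes of the inference being drawn, the raw numbers check out as stated.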
The statement, "the probability of that having happened was 1/8", presupposes the favorable outcome of the event. Because otherwise, how would you calculate the probability? The probability of an event A is defined as P(A) = (number of favorable outcomes) / (total number of possible outcomes). By saying "the probability WAS..." you are actually saying "the number of favorable outcomes WAS..." But if all that you did was coin tossing, then you obviously didn't specify what outcomes are favorable, so the probability formula is not applicable. forexhr
OK, so I didn't understand you correctly. I disagree with your conclusion that if I throw three coins and get HTH I can't say that the probability of that having happened was 1/8 because I didn't specify HTH before the throw (which is what I think you are saying, although I'm probably wrong), but I've discussed this at length with others in this thread and others, and I think I'm done. Thanks. jdk
jdk @70 Both of your interpretations are incorrect. To explain why, we first need to define three preconditions for calculating probability: 1) Event 2) Outcome of the event 3) Favorable outcome. Given these preconditions we also must know whether an event is real or imagined. If you throw a coin three times (real event) and get HTH (outcome of the event), then you cannot talk about probability, because this REAL event cannot be related back to some REAL environment that would have determined the favorable outcome of the event. But let's imagine you have a bet with a friend on flipping a coin three times. Your friend would give you $100 if the outcome is HTH. Now you throw a coin three times and you get HTH. In this scenario you can relate this REAL outcome (HTH) back to the REAL environment - the one where you made the bet - and then conclude that the probability of the outcome was 1/8. In your example, you started with the REAL event (coin throwing) and then you interpreted its outcome (HTH) as if it had been predicted in some REAL environment. But no such prediction existed. On the other hand, in the case of imagined events, where we would say: "Let's figure out the probability of getting HTH", the prediction of the outcome is already presupposed, so there is no need to establish a connection between the outcome and some prior environment. Regarding the claim that probability does not apply after the fact, it is also incorrect because, in the context of probability, "fact" is just another word for "outcome of the event". If we know what the favorable outcome is, then there is no problem figuring out the probability ("after the fact"). forexhr
john_a_designer: Thanks for stopping in and for the good thoughts. I'm not sure what you're saying in the first sentence, so maybe you can clarify. It doesn't matter for the design inference whether we are dealing with a pre-specification or a post-specification. In terms of origins in biology (or the cosmos), we are always dealing with a post-specification. Your other points are well taken. I like your point about the casino operators -- yet another example of design inference that no-one objects to. Presumably because it isn't philosophically troubling.
Are there sequences, unlike cards, which are actually intrinsically specified? I think there are.
In terms of having meaning, yes. This is the difference I've tried to highlight between repetitive/ordered sequences that are often (although not strictly always) meaningless, and non-repetitive meaningful sequences. ----- Finally, we have to be a little bit careful to make sure we distinguish the sequence from the medium. I think you are, but just for readers out there I want to make that clear. The medium has no inherent specification or meaning in and of itself. But the medium can be organized to represent or "contain" information through the sequencing. Eric Anderson
I’m late to this discussion but let me summarize, very succinctly, what I think is the main issue here. From an ID perspective probability is irrelevant unless there is specification, or more precisely pre-specification, involved. I won’t rehash the discussion about playing cards, coins or dice except to say that casino operators invest a lot of money in eye-in-the-sky cameras to catch cheaters. Do they have a right to accuse some of their patrons of cheating at cards when they appear to get too lucky? After all, any hand is just as probable as any other, right? Obviously one wins or loses at cards, for example, based on pre-specified criteria for what constitutes a winning hand. For example, the cards 10, J, Q, K, A, all of the same suit, will make you a winner in poker, unless we agree to change the rules -- which we could do. Technically there is no intrinsic reason to consider one set of five cards to be any more significant than any other. Yet for the game to be meaningful or fun you have to specify what set of cards signifies winning. However, we (humans) don’t necessarily need to be the ones specifying. In his novel Contact, for example, Carl Sagan explains how ET’s living hundreds of light years away might try to communicate with us by transmitting a signal that stands out from the background noise, which we would recognize as intelligent. On the other hand, is all specification the same? Are there sequences, unlike cards, which are actually intrinsically specified? I think there are. I think they occur with human designs and the “apparent design” we observe in the natural biological world. It is with this kind of intrinsic specification that arguments using probabilities have some real merit, especially when you are considering whether the cause of the “design” is some kind of intelligence or some kind of mindless natural process. john_a_designer
wd400 @69:
Why would you think that? The “islands” are formed by billions of years of divergent evolution. When lineages evolve apart from each other for billions of years they become quite distinct, I’m not sure why you think selection can’t drive that process.
No. We aren't looking for just those functional islands that are currently being occupied, by whatever hypothetical means you think they have been filled. We are talking about functions that can realistically exist in the given system, as an objective functional matter. As a result, I'm happy to assume that there are more possible islands of function than those currently occupied. But even with some very generous assumptions, the number is still minuscule compared to the search space. And, no, you don't get to claim that natural selection has the ability to do all this creative work and reach these islands of function, by assuming that these islands of function were "formed by billions of years of divergent evolution." It would be hard to think of a more blatant example of circular reasoning. The very question on the table is whether the Darwinian mechanism in fact has the ability to do what is claimed. Unfortunately, the math doesn't add up, and when we look at the actual observations and experiments that have been done, all we see the Darwinian mechanism doing is minor tweaks around the edges, largely insignificant and nowhere near the kind of creative power that is claimed.
I had forgotten about this, but I have to say I am amazed that you would link a thread that puts you in such a terrible light. If you read back over the threads you will see that you made the claim that the probability of a given sequence arising by naturalistic process, termed P(T|H) in CSI and essential for calculating this statistic, could be calculated by 20^(n.amino acids). This is so wrong that it’s kind of breathtaking (in fact I used it as an example of an obviously silly calculation above, forgetting that you had done this).
I'm happy to be corrected on any math mistakes I've made, but that doesn't seem to be the case. I'm not sure if you've understood the points made in the other threads. I discussed a chance-based formation in a side conversation with keiths and showed how a chance-based calculation was easy for a simple situation like that. What you seem to be arguing is that because a protein comes about through the Darwinian mechanism, then the odds of it arising through the Darwinian mechanism are much greater than under a chance-based scenario alone. Let's set aside the fact that your argument assumes the Darwinian mechanism produced the proteins in question, which is part of the very issue on the table. Let's even, just for a moment, set aside the question of whether the Darwinian mechanism provides anything other than a chance-based scenario anyway (given that selection is not directional). Given your apparently vastly greater understanding than mine of the naturalistic explanations out there, tell us: What do you think the odds are of the Darwinian mechanism stumbling across a functional biological system? Can you calculate it? Are the odds such that we should consider the Darwinian mechanism a realistic explanatory cause? And if you can't provide me a calculation of the odds, then I'm sorry to tell you, but you are right back squarely in the middle of the illogical and self-serving position I pointed out: namely, demanding that a calculation of the Darwinian mechanism be done by ID proponents when Darwinian proponents have never even offered one themselves.
IDists invented CSI, if they want to show that biology displays it then it is really up to them to do the calculations.
Do you think ID proponents came up with the concept of complex specified information? In any event, various calculations have been done, including an exceedingly simple example in this very OP. Others have, of course, done much more rigorous calculations than my simple example. If you disagree, let's hear it. What do you think the odds of Darwinian evolution stumbling upon something like the bacterial flagellum are? Never mind that, what about a single medium-length protein like the 100-amino-acid protein that you reminded us of?
Of course, Dembski and others do not claim that CSI is evidence that modern evolutionary biology is an improbable explanation for biological diversity (they simply assume it, and demonstrate how making this assumption can lead to probability argument for design).
I'm surprised that after this many years in the debate you are still incorrectly describing the design inference. Furthermore, why should intelligent design proponents have to take the evolutionary "explanation for biological diversity" seriously when no realistic materialistic explanation has ever been offered? Unfortunately, what we are dealing with when the rubber meets the road is that there is this persistent and pervasive and perverse attempt on the part of the proponents of Darwinian theory (started blatantly by Darwin in The Origin) to insulate their theory against criticism by asserting that the opponents have the burden of demonstrating (with mathematical precision, it is occasionally demanded) that the Darwinian story is false, when the Darwinists themselves have never provided any empirical demonstration (certainly not one with any math; that is scrupulously avoided) that the Darwinian theory should be taken seriously in the first place. Eric Anderson
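For concreteness, the 20^n sequence-space figure the two commenters are arguing over is at least easy to state numerically; whether it is the relevant quantity (as opposed to a probability under some evolutionary model) is exactly the point in dispute. A quick sketch in Python for the 100-amino-acid example mentioned above:

```python
from math import log10

n_residues = 100             # the medium-length protein from the example
space = 20 ** n_residues     # raw sequence space: 20 amino acids per position
print(f"20^{n_residues} ~ 10^{log10(space):.0f}")  # ~1.27*10^130
```

This computes only the size of the raw sequence space; it says nothing by itself about how much of that space is functional or how it might be searched.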
Eric, yes, that's the video. And you have a point; Le Conte's comments were made in 1888 though, when the belief that science had or would soon have all the answers was even more pervasive than today. BTW, when I click on the link to the UD post I left at #34, I do see the video embedded there, not sure why you don't see it. Granville Sewell
Granville, your link to the prior post works, but there is no link in the prior post to your video? In any event, I found your other video here: https://www.youtube.com/watch?v=VpEXXNxjWYE I don't know if this is the new one or not. ---- One quick thing that jumped out at me from the beginning of the video is the materialistic claim that we should hold out for naturalistic explanations because they have been supremely successful in all other fields. This is not actually true. Note my comment to Seversky on this very point here (middle portion of comment 67): https://uncommondesc.wpengine.com/fine-tuning/fine-tuning-and-the-claim-that-unlikely-things-happen-all-the-time/#comment-633321 Yes, purely naturalistic explanations have been very successful in some fields. But they have been spectacularly unsuccessful in the very fields we are dealing with in the present debate. So at the very least the materialist makes a category mistake by insisting a naturalistic explanation should be the default with respect to information-rich, functionally-integrated biological systems. I realize you are addressing other problems with the materialistic claim in your video, but I wanted to point this out as well. Eric Anderson
I'm on my smartphone, so I won't have much to say right now until I get in front of a regular computer. First, GBDixon makes a good observation in terms of information needing, if you will, a receiver. My idea of "matching" is the idea of there being both a sender and a receiver; otherwise, you might say, the information that we're dealing with, and its correlated probability, are suspended in what I termed in the last thread "dual space." Second, and this has reference to a post by wd400, the fact that a protein is functional, whether it comes from yeast or a human, is incidental. What is critical is that the cell machinery recognizes the protein and is able to perform its normal function. This creates a linkage between the translation properties of the cell machinery and the functional properties of the protein which is a product of this translation. Based on what I said in my last post in the last thread, the "real" probabilities of the cell machinery then match the "real" probabilities of the functional protein, and here, as I mentioned in the last post on the last thread, I have in mind protein complexes which need properly fitted binding sites in order to function. "Real" probabilities of the cell machinery, and its mutational characteristics, must match the protein structure in such a way that the protein acts in a functional way. This "matching" of probabilities, or more properly of improbabilities, turns into what I've termed "real" probabilities. And these "real" probabilities are not favorable to a Darwinian understanding of protein-protein complexes. More when I get in front of a computer. PaV
Hi forexhr. First, as I've said many times (although you may not have read any of my posts), I'm discussing pure probability and simple models such as coins and dice: I'm not interested in any arguments about how this applies to evolution or fine-tuning or anything like that. You write,
But given the event that already happened(for e.g. coin flips that created random sequence) you can figure out its probability only if you can relate this event back to an environment that determines the likelihood of this event before it happened. Without this pre-determination you can’t talk about probability, but only necessity.
I'm trying to figure out what this means. I throw a coin three times, and get HTH. I think I can "relate this event back to an environment that determines the likelihood of this event before it happened": I threw the coins in the air and they bounced around randomly, each having a known chance of coming up heads or tails with equal probability. Thus I conclude that the probability of the result is 1/8. If it is incorrect in your eyes to say this, what do you mean by "relate this event back to an environment that determines the likelihood of this event before it happened"? Also, your last line is, "Without this pre-determination you can’t talk about probability, but only necessity." This seems to say that once we have a result HTH, it is what exists and the whole notion that it might have been something else is null and void: probability does not apply after the fact, and everything that looks like it might have a probability before the fact is actually pre-determined to have a necessary result, as evidenced by the fact that once it exists, it has a 100% chance of being what it is. Do you mean anything like that? jdk
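As an aside for readers following the 1/8 dispute: the figure itself is easy to check by simulation. A minimal sketch in Python (any fixed three-flip sequence, HTH included, should turn up in about 1/8 = 12.5% of trials):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def flip_three():
    # Simulate three fair coin flips as a string like "HTH"
    return "".join(random.choice("HT") for _ in range(3))

trials = 100_000
hits = sum(flip_three() == "HTH" for _ in range(trials))
print(hits / trials)  # close to 1/8 = 0.125
```

Of course, the simulation settles only the mathematical question of the frequency, not the philosophical question of whether "probability" applies after the fact.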
I, and I hope third time's a charm, will quit discussing this. jdk
Jdk: Noooooo!!! It is very exact to say “the probability of getting 5 green cards is 1/57.”
No it's not "very exact" jdk. This has been explained again and again. It allows for a specification informed by the outcome a la Kitcher. At this point I refuse to explain it again.
And I notice you didn’t respond to my comments about fantasy. Is pure math a “fantasy” because it is done abstractly and theoretically?
Don't be ridiculous. The independent specification is fantasy. I do hope you did understand that. Right? One deals cards without an independent specification and then one starts fantasizing:
Origenes: “What if I had independently specified five green cards and dealt them, what would have been the probability?”
That refers to fantasy. Got it? I am not talking about mathematics being "fantasy" or something like that. Get real. Origenes
You write,
That is inexact use of words. This is more exact: The probability of an independent specification matching the 5 cards being dealt is 1/57.
Noooooo!!! It is very exact to say "the probability of getting 5 green cards is 1/57." "5 green cards" is the specification: it specifies what situation we are considering. We don't need some person making an independent specification that matches an actual deal in order for the statement I made to be accurate. The five cards are considered as "dealt" in theory, although nothing happens in actuality. The probability that 5 are green is calculated as 1/57. This is a theoretical statement. It can be tested empirically by "dealing" 57,000 hands (via a computer simulation), for instance, and ascertaining that about 1000 are all green. And I notice you didn't respond to my comments about fantasy. Is pure math a "fantasy" because it is done abstractly and theoretically? jdk
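The empirical test jdk describes is straightforward to sketch. Assuming Python, with the 22-card deck (11 green, 11 blue, deal 5) from the example:

```python
import random
from math import comb

# Exact probability: all C(11,5) all-green hands out of C(22,5) possible hands
p_exact = comb(11, 5) / comb(22, 5)
print(p_exact)  # = 1/57, about 0.0175

# Empirical check: "deal" 57,000 hands and count the all-green ones
random.seed(1)
deck = ["G"] * 11 + ["B"] * 11
trials = 57_000
all_green = sum(random.sample(deck, 5).count("G") == 5 for _ in range(trials))
print(all_green)  # roughly 1000, as predicted
```

Note that nothing in the code knows or cares whether "5 green" was specified before or after any particular deal; that interpretive question is what the rest of the thread is about.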
jdk @73
No cards need to be dealt. The deck does not need to exist. Given the scenario (22-card deck, 11 green, 11 blue, deal 5), the probability of 5 green cards is 1/57.
That is inexact use of words. This is more exact: The probability of an independent specification matching the 5 cards being dealt is 1/57.
jdk: This is a mathematical fact.
Sure. But let us clearly state what probability is being measured. Origenes
But you are thus relegating the whole mathematical field of probability to "fantasy"! No cards need to be dealt. The deck does not need to exist. Given the scenario (22-card deck, 11 green, 11 blue, deal 5), the probability of 5 green cards is 1/57. This is a mathematical fact. It is not fantasy. Getting 5 green is the specification, and 1/57 is the probability. How can you call pure math "fantasy"? jdk
Jdk: But why do you say “only” a theoretical/hypothetical event?
As opposed to reality. "What if I had independently specified five green cards and dealt them, what would have been the probability?", is a question about a situation that did not actually happen. It is fantasy. What actually did happen was that five cards were dealt without an independent specification. And without an independent specification there is nothing to measure. Origenes
But why do you say "only" a theoretical/hypothetical event? That is exactly what the mathematical study of probability is about. We discuss cards, coins, etc. to have concrete models to help us think and learn, but probability theory exists on its own, just as a perfect circle exists in geometry but not in the real world. jdk
jdk @66 @68 I can give you my interpretation:
Forexhr: Of course, in theory you can assume that favorable outcome had been specified before the event and then use the probability formula. But in this case you are not calculating the probability but what the probability would have been ....>>
>>... if that favorable outcome had been specified before the outcome/event. IOWs you are measuring a hypothetical pre-specification. But, as Forexhr points out, that's only a probability reflecting a theoretical/hypothetical event, because, in fact, no actual pre-specification exists. But this is, of course, just my interpretation. Hopefully Forexhr will point it out if I misunderstood his reasoning. Origenes
So, no, we don’t have to include the additional odds of the Darwinian mechanism (compounding odds that would only make things worse for the Darwinian story, by the way). Here we’re just looking at the islands of function and the probabilities of stumbling across them, an issue that selection can’t do anything about.
Why would you think that? The "islands" are formed by billions of years of divergent evolution. When lineages evolve apart from each other for billions of years they become quite distinct, I'm not sure why you think selection can't drive that process.
Origenes, did you read my post at 66? I am confused about what forexhr is saying: he says both "Yes", that "I can’t figure that probability out unless I specified five green cards and actually dealt them" and then "in theory you can assume that favorable outcome had been specified before the event and then use the probability formula." These seem like contradictory statements. And then he says about the theoretical probability, "But in this case you are not calculating the probability but what the probability would have been", which just doesn't make sense. How does the verb tense "might have been" apply to a theoretical probability? You said it was well-stated, so maybe you can explain. jdk
Forexhr @64
Forexhr: ... probability is the measure of the likelihood that an event will occur. Without specifying the favorable outcome there is nothing to measure. Of course, in theory you can assume that favorable outcome had been specified before the event and then use the probability formula. But in this case you are not calculating the probability but what the probability would have been.
Well said. One side note: the specification only needs to originate independently of the outcome. A valid specification can come into existence after the outcome just as long as it is not informed by the outcome. Origenes
to forexhr at 64: When I wrote,
Are you really saying that I can’t figure that probability out unless I specified five green cards and actually dealt them?,
you replied,
Yes, that’s what I am saying, because probability is the measure of the likelihood that an event will occur. Without specifying the favorable outcome there is nothing to measure. Of course, in theory you can assume that favorable outcome had been specified before the event and then use the probability formula. But in this case you are not calculating the probability but what the probability would have been.
I don't understand the verb tense here. Assume I have calculated the theoretical probability of five green cards even though no one will ever create the deck I hypothesized, nor actually deal any cards. (It's about 2%: 1 in 57 exactly.) But you are saying that is not really the probability, but what the probability "would have been". Would have been when? That is a conditional tense, but you have no consequent. Do you mean "you are not calculating the probability but what the probability would be" if I actually dealt the cards? That would be grammatically correct, but it might not be what you mean. (It would also be wrong, because I don't understand how one could possibly state that we couldn't talk about hypothetical or theoretical probability unless the events described were actualized.) I'd have a real hard time teaching my probability chapter if I couldn't just say, "Let's figure out the probability of getting dealt a full house" without dealing until I got a full house. Can you clarify? jdk
kf @57:
JDK & EA, the issue lieth not in individual microstates but in the clusters.
Agreed, at least in terms of our typical ability to identify a post-specification after the event.
Cards and our recognition of special patterns is a bit artificial and distractive. Although the Math is relevant, it tends to be side-tracked.
I agree that the examples of cards and coins can get sidetracked. My effort over several posts has been to clarify and help people correct and avoid the sidetracks. I do think there is value in looking at these "simple" cases because (i) they help us start to appreciate the math, (ii) they help us understand the odds in a tractable way, and (iii) we are more likely (I'm an optimist) to have an occasional Darwinist sit up and take notice when they see a simple case with real math, than when they can hide behind the vague Darwinian claim of "descent with modification" as though it provided some answer to the origin of the biological novelty.* Your other thoughts on the fundamental issues with design in the living world are of course spot on. ----- * Case in point: note my comments @62 in response to wd400's attempt above to require Darwinian skeptics to calculate the probabilities of the Darwinian mechanism -- an inherently impossible task due to its vague character, which is part of the reason the Darwinists have never calculated it themselves, being instead quite content with vague assertions and made up stories. Eric Anderson
jdk @ 60: "Are you really saying that I can’t figure that probability out unless I specified five green cards and actually dealt them?" Yes, that's what I am saying, because probability is the measure of the likelihood that an event will occur. Without specifying the favorable outcome there is nothing to measure. Of course, in theory you can assume that favorable outcome had been specified before the event and then use the probability formula. But in this case you are not calculating the probability but what the probability would have been. forexhr
forexhr @49: Thanks for the thoughts. I like your idea of looking at the particular environment to identify the potential outcomes that could be considered. I think jdk has also responded @50. I would just add that it seems to me what you are focusing on -- rightly so -- is specification. This is the elephant in the room that Kitcher missed in his attempted refutation of Behe. We can have a pre-event specification or a post-event specification. The pre-event specifications are the easy case and no-one has ever had a problem with that. However in biology we are of course typically dealing with a post-event specification. That is where Dembski has argued that we can tighten up our ability to post-specify, so that it becomes a rigorous and reliable inference. It does involve some nuances, however, which is why so many people (like Kitcher or R0bb in my OP) get off track. Eric Anderson
wd400 @46:
Well, this is not true. Human and yeast genes share only about 30% of amino acids, but in many cases the one can be replaced for the other. There are protein families in which no amino acid at all is conserved across all species.
These are interesting comparative observations. One might be forgiven for asking what experimental backup this has. Furthermore, even if we assume that a human gene performing a particular function could be completely and fully replaced with a yeast gene that would perform the same function without any adverse consequences to the human (something that I highly doubt has much empirical support), it still doesn't impact my point. Then we could triumphantly note that we have evidence of 2 amino acid sequences that can perform the same function (assuming we were also able to confirm that no splicing or editing was taking place during protein synthesis). I don't have a problem with the idea that there may be more than one sequence that can perform the same function. There may well be several amino acid combinations that could be formed into proteins with the same function. How many? 10? 100? 1000? It won't even budge the calculation. Yet at the same time it is well known, not just through the hazy lens of comparative genomics, but through actual experiments and health data, that in many cases even a single substitution can cause serious consequences. So, yes, I agree that the precise "plasticity" of proteins as to their underlying sequences is an open question. As I said, these are excellent questions that invite careful research. But there is very little rational reason to think that we can make substitutions willy-nilly on a large scale and keep the same function. Indeed it would be very naive to think so.
How can you test “the Darwinian claim” without actually including selection and descent with modification in your calculation? If the probability is determined by assuming all states are equiprobable then you have made the very mistake you claim is made in the ESII argument, haven’t you? (The last version of CSI that Dembski defended spells this out pretty clearly, although, of course, he never actually makes a calculation).
You are misunderstanding what I said. If islands of function are rare, as they most certainly are, the islands of function still have to be hit upon by chance. We don't get to invoke any "mechanism" for that. What I stated was that "the Darwinian claim is that this nearly invisible speck [the islands of function] was miraculously hit upon. Over and over and over." My statement actually gives Darwinism way more credit than it deserves and way more opportunity than would occur in reality. It ignores the problems of getting this rare island of function incorporated in the organism in a way that is heritable, of getting the function spread throughout the population, of keeping the function in place once obtained, of avoiding interfering reactions, and so on. I only hinted at some of the additional problems for the Darwinian story when I noted in the OP:
Furthermore, when we have, as we do with living systems, multiple such sequences that are required for a molecular machine or a biological process or a biological system – arranged as they are in their own additional specified configuration that would compound the odds – then such calculations quickly go off the charts.
Otherwise, my example assumes that all the other problems with the Darwinian story are happily resolved. So, no, we don't have to include the additional odds of the Darwinian mechanism (compounding odds that would only make things worse for the Darwinian story, by the way). Here we're just looking at the islands of function and the probabilities of stumbling across them, an issue that selection can't do anything about. ----- Finally, for readers out there, I would note the irony of wd400's repeated request that intelligent design proponents must calculate the probability of the Darwinian descent-with-modification mechanism, when the ones proposing the Darwinian mechanism have not only steadfastly refused to provide such a calculation but seem quite averse to even addressing the issue. See here for why this is the height of hubris and an attempt to overturn the rightful burden of proof: https://uncommondesc.wpengine.com/darwinism/must-csi-include-the-probabilities-of-all-natural-processes-known-and-unknown/ Eric Anderson
Jammer @ 58: Well said. Strong. Truth Will Set You Free
forexhr at 56: At 50 I wrote,
1. If I study, in theory, what would happen if I create a random sequence by flipping a coin three times, I can figure out what the possible random sequences are (HHH, HHT, etc) and I can figure out the probability of each of those: P(HHH) = 1/8, P(HHT) = 1/8, etc. Is there anything wrong about that sentence?
You replied,
@1. You cannot figure out the probability, but only what the probability would have been if the favorable outcome(HHT for e.g.) had been specified before the coins were flipped.
Your reply doesn't even seem to apply to what I wrote, as I talked about figuring out the probability in theory, without mentioning actually flipping any real coins at all. Your answer seems to imply that one couldn't figure out any probabilities in theory. For instance, suppose I dealt 5 cards from a deck of cards with 11 green and 11 blue cards. I can figure out the probability of getting all green cards even though this deck of cards will never exist and the 5 cards will never be dealt. Are you really saying that I can't figure that probability out unless I specified five green cards and actually dealt them? jdk
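jdk's hypothetical deck can be checked with a few lines of code. This is just an illustrative sketch of the in-theory calculation he describes; the deck and the deal exist only inside the program.

```python
from math import comb

# jdk's imaginary deck: 11 green and 11 blue cards, deal 5.
# The probability that all 5 are green is C(11,5) / C(22,5),
# computed entirely in theory -- no physical deal required.
p_all_green = comb(11, 5) / comb(22, 5)
print(p_all_green)  # ~0.0175, i.e. about 1 chance in 57
```

The point stands on its own: the probability is perfectly well defined before (or without) any cards ever being dealt.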
deleted [by jdk] jdk
I'm still awaiting the day an attorney pulls out a deck of cards in a courtroom to show the jury that improbable things happen all the time. "You see, your honor, while the DNA evidence may point to my client with 99.99994% confidence, improbable things happen all the time." *pulls out a deck of cards* Just how hard would he be laughed out of the courtroom? Yet, these 19th-century-minded, anti-intellectual savages want to use the same laughable argument to defend their atheistic miracles. Pathetic! Jammer
JDK & EA, the issue lieth not in individual microstates but in the clusters. Cards and our recognition of special patterns is a bit artificial and distractive. Although the Math is relevant, it tends to be side-tracked. That is one reason why I have emphasised looking at the world of life (esp. OoL) from a thermodynamic, statistical, blind search challenge perspective, and also the issue of the challenge of fine tuning for a cosmos well-fitted for cell-based life. Where it does not require precise or exacting probability estimates, to recognise an overwhelming blind search challenge; especially as search for golden search implies a selection from the set of subsets of the first-level config space, an exponentially harder challenge, given that a set of size n elements implies a power set of size 2^n. In that context, the issue is observable multi-component (material or abstract makes little difference) functional coherence giving us the context of functionally specific, complex organisation and/or associated information, FSCO/I. We can test the coherence by perturbing it enough, and will readily see the isolated islands of function in vast configuration spaces dominated by seas of non-function effect. Where, use of description languages (think Auto-CAD etc) shows discussion on binary strings is WLOG. This leads to blind search challenge that then rapidly overwhelms blind search resources of the sol system or the observed cosmos (the only actually scientifically observed cosmos). It is then utterly unsurprising that the only actually observed causal source for such FSCO/I is intelligently directed configuration, AKA design. With trillions of observed cases in point. All of this justifies inference to design on FSCO/I as reliable sign, where self-replication per von Neumann kinematic self-replicator is an example of the FSCO/I to be accounted for at OoL. 
Cf still live discussion: https://uncommondesc.wpengine.com/design-inference/fft-antikythera-paley-crick-axe-the-first-computer-claim-and-the-design-inference-on-sign/ KF kairosfocus
jdk @50 @1. You cannot figure out the probability, but only what the probability would have been if the favorable outcome (HHT for e.g.) had been specified before the coins were flipped. @2. The second sentence is correct since the above condition is satisfied. When evolutionists discuss probability they confuse conditional (@1.) with actual (@2.) probabilities. forexhr
Dr Sewell, yes. Odd. KF kairosfocus
What about a set of 52 distinct items? Would we expect to see fewer specifications? Intuitively it seems the answer is yes, as it might be harder to build “patterns”.
I think the answer to this is yes. Patterns are seen by us, as human beings. If the things being studied already have patterns easily identifiable by us, it seems to me that there will be more chances that subsets of those elements will also contain patterns that we identify than if we just had 52 distinct but otherwise non-patterned objects. For instance, in the latter case, there is no order to consider, and the only groupings we would see would be groups of the same object. jdk
jdk @45: Thanks.
if we start with a situation that already contains patterns that we recognize, such as 4 suits and 13 cards in order per suit . . . we are bound to find more significant events than if we have n elements . . . that are totally distinct, with no natural order or categorizing attributes.
This is an interesting idea. I was tentatively inclined to agree, but I think we need to think through the situation to make sure there is a substantive difference. Are you saying that we might find more specifications if we have a deck of cards than if, say, we had 52 tosses of a coin where we have only 2 elements (H or T) rather than colors, numbers in order, etc.? (Of course we also have significant hands in card play, but that is really an outside meaning imposed on the cards by the rules of a particular game, rules that would change from game to game. This would be analogous to Berlinski's example of non-repetitive meaningful English sentences, rather than repetitive type patterns.) What about a set of 52 distinct items? Would we expect to see fewer specifications? Intuitively it seems the answer is yes, as it might be harder to build "patterns". Yet is this actually a difference in substance, or just a natural result of the fact that we are dealing with one larger set (52 distinct items) than 4 smaller sets (4 x 13 distinct items)? Eric Anderson
GBDixon @27: Thanks for the comments. We have to be careful about Shannon. He was interested in communication more than meaningful information. Information can clearly be information even before it is transmitted or received by a recipient -- i.e., before it is part of a communication experience. I can't go into the details here, but see this prior post for a detailed discussion of the issue: https://uncommondesc.wpengine.com/informatics/id-basics-information-part-ii-when-does-information-arise/ Eric Anderson
Origenes, In another related post you asked what the probabilities were for two events: 1) Writing down the # 5 on a piece of paper, and then flipping a fair die that lands on 5 2) Flipping a fair die, and it lands on 5. I think I have my own answer to these two events. But I would like to hear your answer. What do you think the probabilities are for these two events? juwilker
1. If I study, in theory, what would happen if I create a random sequence by flipping a coin three times, I can figure out what the possible random sequences are (HHH, HHT, etc) and I can figure out the probability of each of those: P(HHH) = 1/8, P(HHT) = 1/8, etc. Is there anything wrong about that sentence? 2. Then, if I actually flip three coins and get HHT, I can say "the probability of that having happened is 1/8." Is there anything wrong about that sentence? jdk
@ Eric Anderson: "mistakenly assuming that a random sequence is on the same probabilistic footing as a patterned/specified sequence.." A random sequence has absolutely nothing to do with probability but with necessity. For e.g., when cards are being dealt it is necessary to get some distribution of cards. Probability, on the other hand, is the measure of the likelihood of being dealt specific cards that were specified before dealing. This is obvious if we look at the probability formula: the probability of an event A is defined as P(A) = number of favorable outcomes / total number of possible outcomes. In the case of a random sequence, the numerator (number of favorable outcomes) of the probability formula is missing and thus it is impossible to calculate the probability. But that doesn't mean that we can't take an event that already happened and then calculate the probability of it happening (a DNA sequence, for e.g.). The only condition is that the number of favorable outcomes is definable. For e.g., if we are looking at the cards dealt one after another and ask what was the probability of this arrangement, then this is a nonsensical question because we cannot relate this arrangement back to an environment before the cards were dealt, for example to something that someone said or wrote about favorable arrangements of cards. In other words, the number of favorable outcomes is not definable and it is impossible to calculate the probability. In the case of DNA sequences we have a completely different story. Although nobody defined the number of favorable outcomes before the formation of DNA sequences, this number is definable with regard to a particular environment. For e.g., if the environment is the intron-exon gene structure, then favorable outcomes are all DNA sequences that are capable of producing a functional RNA splicing machine.
If the environment is a specific nutrient, then favorable outcomes are all DNA sequences that are capable of producing a pathway for metabolising this nutrient. If the environment is the female reproductive system, then favorable outcomes are all DNA sequences that are capable of producing a male reproductive system. That is why we can calculate the probability of DNA sequences, but we cannot calculate the probability of random sequences. The phrase "the probability of a random sequence" is a mathematical oxymoron. forexhr
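forexhr's formula, P(A) = number of favorable outcomes / total number of possible outcomes, can be made concrete with a toy enumeration. The specification used below (exactly two heads in three flips) is my own illustrative choice, not anything from the comment itself.

```python
from itertools import product

# Enumerate all possible outcomes of three coin flips, then count
# the ones matching a pre-stated specification ("exactly two heads").
outcomes = list(product("HT", repeat=3))            # 8 possible sequences
favorable = [s for s in outcomes if s.count("H") == 2]
print(len(favorable), "/", len(outcomes))           # favorable / possible
print(len(favorable) / len(outcomes))               # 3/8 = 0.375
```

The numerator only exists because the specification was stated independently of any particular outcome, which is the distinction being drawn above.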
If one calculates the probability for a smooth landscape...
And how are you going to do that? wd400
wd400: How can you test “the Darwinian claim” without actually including selection and descent with modification in your calculation? If the probability is determined by assuming all states are equiprobable then you have made the very mistake you claim is made in the ESII argument, haven’t you?
Let's suppose that the right environment (a sufficiently smooth landscape) can assist the search for a certain amino acid sequence. IOWS due to the right sequence of environments, finding a certain amino acid sequence becomes less improbable. The follow-up question is: what is the probability that there is such a smooth landscape? IOWS what is the probability that an environment provides exactly the right stepping stones to facilitate the search for the amino acid sequence? If one calculates the probability for a smooth landscape and factors it in, then, in the best scenario, one breaks even wrt probabilities overall. 'Conservation of information' informs us that we cannot improve on a blind search — in this case the blind search for a particular amino acid sequence — unless information is being inputted by an intelligent agent.
Dembski: The reason it's called "conservation" of information is that the best we can do is break even, rendering the search no more difficult than before. In that case, information is actually conserved. Yet often, as in this example, we may actually do worse by trying to improve the probability of a successful search. Thus, we may introduce an alternative search that seems to improve on the original search but that, once the costs of obtaining this search are themselves factored in, in fact exacerbate the original search problem.
Origenes
These are excellent questions, questions that invite and require careful research. And, yes, they would change the calculation somewhat. But my understanding of the evidence to date is that the number of permissible changes to an amino acid sequence to retain protein function is quite small. It isn’t going to impact the probability calculation in any meaningful way.
Well, this is not true. Human and yeast genes share only about 30% of amino acids, but in many cases the one can be replaced for the other. There are protein families in which no amino acid at all is conserved across all species. It is true that the over-specification is only one problem with these sorts of arguments. You illustrate another.
So, yes, it is certainly correct to note that with some proteins there is a small subset of sequences that could perform the same function. But we mustn’t fool ourselves into thinking that this observation impacts the design inference. Indeed, in the examples I have provided, I have acknowledged and taken into account the fact that many sequences would be flagged as functional or specified. The probability still cuts decisively against the Darwinian claim.
How can you test "the Darwinian claim" without actually including selection and descent with modification in your calculation? If the probability is determined by assuming all states are equiprobable then you have made the very mistake you claim is made in the ESII argument, haven't you? (The last version of CSI that Dembski defended spells this out pretty clearly, although, of course, he never actually makes a calculation). wd400
re 42, Eric writes,
This brings up an interesting point. One of the things I’ve been thinking about is whether we could write a program to sift through sequences and flag things that appear as specifications. I’m not sure it would work very well with non-repetitive, meaningful sequences, because we would still be relying on significant human input to specify what sequences to look for. But it should work better with repetitive-type sequences and perhaps could at least give us a better idea of the number of repetitive-type sequences that would be flagged. Thoughts?
Sure, although, as you say, we (the programmers) would have to write the judgment rules. This is what I had in mind in my posts on the other thread where I suggested a "significance value." But there would be so many issues to make judgments about (such as whether events that were "almost significant," with just a couple of elements not matching the pattern, would count) that I'm not sure you could get a result that was very meaningful. But with that said, even if the rules were fairly judgmental, if you applied the same rules to sets of increasing size, I'm sure you would see the result that I, and Phinehas, mentioned: that the larger the set the greater the ratio of non-significant events to significant events. And I'd like to say again, because no one has acknowledged this as an important point, that if we start with a situation that already contains patterns that we recognize, such as 4 suits and 13 cards in order per suit, or even the numbers 1 through 6 on a die, we are bound to find more significant events than if we have n elements (52 or 6, in these cases) that are totally distinct, with no natural order or categorizing attributes. And P.P., I am using percentage in the sense of ratio: I know the actual numbers involved are very small decimals in many of the situations we are discussing, not everyday percentages. jdk
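A toy version of the program under discussion might look like the sketch below. The flagging rule (a sequence built from a short repeating block) is my own illustrative stand-in for the "judgment rules" jdk mentions; real rules would need far more human input.

```python
from itertools import product

def looks_specified(seq: str) -> bool:
    # Flag a sequence if it is built by repeating a block at least
    # twice, e.g. "HHHHHH" (block "H") or "HHTHHT" (block "HHT").
    for unit in range(1, len(seq) // 2 + 1):
        if len(seq) % unit == 0 and seq[:unit] * (len(seq) // unit) == seq:
            return True
    return False

# Count flagged sequences among all coin-flip strings of length 12.
flagged = sum(looks_specified("".join(s)) for s in product("HT", repeat=12))
print(flagged, "of", 2 ** 12)  # the flagged sequences are a tiny minority
```

Even with this crude rule, the trend jdk and Phinehas describe shows up immediately: the flagged sequences are a vanishing fraction of the whole space, and the fraction shrinks as the strings get longer.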
Heartlander: "Excellent point EA. One must assume the random generation from a multiverse to overcome the sequence of events leading to our special occurrence." Just to clarify a couple of things in this nice post. Large moon with right rotation period So the rotation period of the moon is not a surprising thing, because it is in tidal lock, which is a common condition in planetary mechanics. Tidal lock causes the rotational angular velocity to be matched identically with the orbital angular velocity, so it is no rare or surprising thing to have the appearance of only one view of the moon from earth. The tidal lock zones of specific bodies can be estimated from the masses, distances, and material properties of orbiting bodies and their orbited body. (Mercury, for its part, is in a 3:2 spin-orbit resonance, a related tidal effect.) What is remarkable and surprising is that without the moon the earth's tilt would be radically unstable, not because of the other planets but because the vectors representing the angular momenta of earth's rotation and its orbital revolution are not parallel, which is the cause of a tendency to instability. This instability is minimized by the moon's mass and orbit, not its rotation. For the record, I know next to nothing of planetary mechanics except what's in introductory college physics. Right amount of water in crust It's probably a miracle that Earth has any H2O, because science so far cannot come up with even a likely scenario for how such a mass of H2O appeared on a supposedly (according to scientific thinking) once red-hot planet. Much less prove how it happened. groovamos
But the birthday problem usually surprises people the other way: in a room of just 23 people you have about a 50% chance of two people having the same birthday. Most people would guess that considerably more people would be necessary. Probability can be slippery. Also, there are lots of places where you get big numbers really quickly. This is probably not considered true anymore (as our estimate of the number of galaxies has grown), but at one time it was said (based on the knowledge of the time) that the number of ways you can arrange the cards in a 52 card deck is about the same as the number of elementary particles in the universe. So it doesn't take a very complicated situation to produce very low probabilities. jdk
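The birthday-problem threshold is easy to verify directly. A minimal sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
def p_shared_birthday(n: int) -> float:
    # Probability that at least two of n people share a birthday:
    # 1 minus the probability that all n birthdays are distinct.
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(p_shared_birthday(23))  # ~0.507 -- past the 50% mark at 23 people
```

The crossover happens between 22 and 23 people, far sooner than most people's intuition suggests.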
Phinehas and jdk @36/37: Phinehas:
Past a certain point of complexity, the specifications (or significant sequences), no matter how broadly you define them, are so low as a percentage that you can practically ignore them and just use the probability.
This is one of the things that surprised me when I was doing the math for my little example. I generously assumed 100 billion repetitive-type sequences as specifications. It didn't even budge the calculation. When I said it was "a rounding error" it was not a figure of speech. Now the number of non-repetitive specified sequences is arguably much greater, so I don't think we can so easily dismiss them as insignificant. To be sure, the numbers are still staggering, so the conclusion is still the same. But I think it is fair to acknowledge that we are talking about 10^113 instead of 10^141 (in my example). On the other hand, you have expressed your comment as a "percentage", which in most ordinary endeavors we might only take out to several decimals. In serious calculation terms I think we still need to look at the numbers and do the scientific notation. But you are right, that in terms of getting a general sense as to the "likelihood" of something or the "percentage" chance, we can almost ignore the specification. The likelihood or percentage chance is effectively zero in both cases. Only if we take our percentage out to many, many decimal places, would the specification even be visible. So I guess I'm saying that we should be careful to take the full math into account to as many decimals as we can, but, yes, for purposes of drawing a practical real-world assessment, we can "practically ignore" (as you say) the specifications once we get out to a reasonably high level of complexity. ----- jdk:
Yes, I agree Phinehas, especially if the source of our judgment about specification is human understanding.
This brings up an interesting point. One of the things I've been thinking about is whether we could write a program to sift through sequences and flag things that appear as specifications. I'm not sure it would work very well with non-repetitive, meaningful sequences, because we would still be relying on significant human input to specify what sequences to look for. But it should work better with repetitive-type sequences and perhaps could at least give us a better idea of the number of repetitive-type sequences that would be flagged. Thoughts? Eric Anderson
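Eric's "rounding error" point earlier in this comment is easy to reproduce in log terms. The figures (a 2^500 space and a generously assumed 10^11 specified sequences) are the ones used in the discussion above.

```python
from math import log10

# Work in log10 to avoid astronomically large integers.
total_exp = 500 * log10(2)          # log10(2^500), about 150.5
specified_exp = 11                  # log10 of 100 billion specifications
# Chance of hitting any specified sequence in one random draw:
print(specified_exp - total_exp)    # about -139.5, i.e. ~1 in 10^139.5
```

Subtracting 10^11 sequences from a 2^500 space leaves the total unchanged to well over a hundred decimal places, which is the sense in which the specifications are "a rounding error."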
daveS @25:
. . . it would take about 2^500 trials before the chance of a repeated bit string reaches 50%, which is many more than I had anticipated.
Thanks. An interesting way to look at the problem. In other words, for a given situation, how many trials do we need before we even get to a 50-50 level of likelihood? Never mind claims of "it happened this way by chance" or even suggestions that "with enough time it is likely". Let's take a good hard look at the actual math and the astounding number of trials to even get to a "maybe/maybe-not" level of confidence. This is another good way of helping people get a feel for the astonishingly astronomical numbers we are talking about. Eric Anderson
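daveS's figure can be sanity-checked with a standard waiting-time calculation. The sketch below assumes the setup is "how many independent draws until a particular n-bit string has a 50% chance of having appeared"; whether that matches daveS's exact model @25 is my assumption, but the scaling is the point.

```python
from math import log

def trials_for_half_chance(n_bits: int) -> float:
    # Each draw matches a fixed n-bit target with probability 1/2^n.
    # Solve (1 - 1/2^n)^k = 0.5 for k; for large n this approaches
    # 2^n * ln 2, so use that closed form directly when n is big
    # (the expression below loses float precision past n ~ 50).
    return log(0.5) / log(1 - 1 / 2 ** n_bits)

print(trials_for_half_chance(10))   # ~710 draws for a 10-bit target
```

Scaling the same formula to 500 bits gives roughly 0.69 * 2^500 draws, consistent with daveS's "about 2^500 trials."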
Eric: Thanks for the link. I'm on vacation this week and, frankly, my brain is tired from trying to figure this stuff out. So I might get back into things in a day or two. PaV
wd400 @35:
Does anyone actually put this “ESII” argument forward?
I'm surprised you have to ask that. I have heard it and variants of it more times than I can count. Even those who don't state it explicitly in the words I have used are driving at the same point on a regular basis, such as Kitcher in the recent thread. This is extremely common. I'm glad to hear, however, that you would never advance such an argument.
I have only seen variants of it mentioned when people make obviously wrong calculations like the “this protein is 100 a/a long, there are 20 amino acids so there is only a one in 20^100 chance of it arising”.
If you have heard variants of it made in that context, then you are underscoring my point for me. Thank you. The correct response to a claim about the odds of a particular functional protein is most definitely not the ESII argument, nor any variation of it. Yet, as you note, it regularly gets brought in as though it were some kind of rational response to the design inference. It isn't.
One of the problems with these kinds of calculations is they over-specify the target (how many amino acids could be replaced without altering the function, how many other proteins could perform this function..)
What you have noted here, however, is a rational response, or at least a rational consideration of factors. What is required for a given function? Are there other sequences of amino acids that would produce the same function without other side effects? These are excellent questions, questions that invite and require careful research. And, yes, they would change the calculation somewhat. But my understanding of the evidence to date is that the number of permissible changes to an amino acid sequence to retain protein function is quite small. It isn't going to impact the probability calculation in any meaningful way. So, yes, it is certainly correct to note that with some proteins there is a small subset of sequences that could perform the same function. But we mustn't fool ourselves into thinking that this observation impacts the design inference. Indeed, in the examples I have provided, I have acknowledged and taken into account the fact that many sequences would be flagged as functional or specified. The probability still cuts decisively against the Darwinian claim.
. . . and people often use card deals or coin tosses to demonstrate how easy it is to calculate a tiny probability for something after it has happened. All of which seems perfectly reasonable to me.
Sure. We can calculate all we want. We just need to understand what we are calculating. And in essentially every instance in which the ESII argument is put forward (including its variants, as you have noted) logical mistakes are made by the person putting it forward, as I have outlined. Eric Anderson
jdk @24: Agreed. Well said. Eric Anderson
Yes, I agree Phinehas, especially if the source of our judgment about specification is human understanding. If I were to deal 2000 cards out of a deck of 20,000 cards (perhaps 10 suits and the numbers 1 to 2000 in each), it would be very hard for human beings to even recognize all but the most obvious patterns, and the percent of such hands would be extremely small (vanishingly so) in comparison to all the hands. If you got all 2000 of one suit in a hand, there is absolutely no doubt that we would conclude that it did not happen by chance. But I don't think we could definitively agree that some "level of complexity renders the specifications/significant sequences moot," because of, among other things, as I have mentioned, the non-black-and-white judgments that need to be made about what counts as significant, and to what degree. jdk
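jdk's 20,000-card hypothetical can be quantified with logarithms (the exact binomial coefficient has thousands of digits). A sketch, assuming all C(20000, 2000) hands are equally likely and counting the 10 hands that consist entirely of a single suit:

```python
from math import lgamma, log, log10

def log10_comb(n: int, k: int) -> float:
    # log10 of the binomial coefficient C(n, k), via log-gamma.
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

# 10 suits of 2000 cards each; deal 2000 cards. Exactly 10 of the
# C(20000, 2000) possible hands are all one suit.
p_exp = log10(10) - log10_comb(20000, 2000)
print(p_exp)   # a bit below -2820: odds of roughly 1 in 10^2821
```

At those odds, jdk's "absolutely no doubt" verdict on an all-one-suit hand is hard to argue with.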
jdk: Yes, I am certainly referring back to what we touched on briefly in the previous thread. My point (which I didn't really elucidate in the other thread) is that, as the complexity increases, the probability lowers and the murkiness matters less and less. Past a certain point of complexity, the specifications (or significant sequences), no matter how broadly you define them, are so low as a percentage that you can practically ignore them and just use the probability. If this is true, the next question is this: What level of complexity renders the specifications/significant sequences moot? Phinehas
Does anyone actually put this "ESII" argument forward? I have only seen variants of it mentioned when people make obviously wrong calculations like the "this protein is 100 a/a long, there are 20 amino acids so there is only a one in 20^100 chance of it arising". One of the problems with these kinds of calculations is they over-specify the target (how many amino acids could be replaced without altering the function, how many other proteins could perform this function..), and people often use card deals or coin tosses to demonstrate how easy it is to calculate a tiny probability for something after it has happened. All of which seems perfectly reasonable to me. wd400
Harry @30, you are right of course, that the human brain is an even better example, but I think some people find it easier to believe that unintelligent forces alone could create brains than computers just because they have no concept of the complexity of life, or of brains, while they can appreciate more readily the complexity of computers and iPhones (which of course should help them appreciate the complexity of brains). KF @31, what is a 404 code? You mean you are having trouble with my link? Seems to work for me. By the way, I recently (with some professional help) have redone the video "Why Evolution is Different," the subject of my last (Feb 22) post: https://uncommondesc.wpengine.com/intelligent-design/video-why-evolution-is-different/ Not worth a new post, since the content hasn't changed much, but maybe worth another look for some viewers, a more "polished" version. I think it's my best presentation of the only two ID-related themes I ever talk about. Granville Sewell
Excellent point EA. One must assume the random generation from a multiverse to overcome the sequence of events leading to our special occurrence:

Cosmic Constants
Gravitational force constant
Electromagnetic force constant
Strong nuclear force constant
Weak nuclear force constant
Cosmological constant

Initial Conditions and “Brute Facts”
Initial distribution of mass energy
Ratio of masses for protons and electrons
Velocity of light
Mass excess of neutron over proton
Principle of quantization
Pauli Exclusion Principle

Solar System Conditions
Near inner edge of circumstellar habitable zone
Low-eccentricity orbit outside spin-orbit and giant planet resonances
A few, large Jupiter-mass planetary neighbors in large circular orbits
Outside spiral arm of galaxy
Near co-rotation circle of galaxy, in circular orbit around galactic center
Within the galactic habitable zone
Human advancement occurs during the cosmic habitable age

“Local” Planetary Conditions
Steady plate tectonics with right kind of geological interior
Right amount of water in crust
Sun is the right type of star for life
Large moon with right rotation period
Stable atmosphere
Proper concentration of sulfur
Ability to create fire
Right planetary mass
Unique solar eclipse
Cosmic shield
Information-rich biosphere

Human Life
Origin of life hurdles
DNA has: functional information, an encoder, error correction, and a decoder
DNA contains multi-layered information that reads both forward and backwards; DNA stores data more efficiently than anything we've created; and a majority of DNA contains metainformation
Complex pieces of molecular machinery in the cell
The information enigma
Morality
Consciousness
Free will
Justice
Perception
Human language
Recognizing art in nature

Heartlander
Phinehas writes,
My intuition is that as the number of characters in your string goes up beyond 100, though the number of specifications will also increase, the percentage of specifications as compared to the total number of possible sequences goes down, perhaps even approaching zero. ... Can it be demonstrated beyond simple intuition?
Hmmm. This is exactly the argument I made here and here. Later, Phinehas wrote, "Thank you for your posts @105 and @192. I think they get to the heart of the issue for me." I also pointed out in those two posts in the other thread that specification is not a simple black-and-white issue. If we deal 13 out of 52 cards, in order (the beginning situation we discussed), getting all spades in order would be highly significant. However, hands where every group of 3 cards was sequential (3 4 5 Q J 10 8 7 6 2 3 4 K) might strike someone as pretty significant if they noticed the pattern, and there would be many more such hands than all spades in order. Therefore, it would be hard to provide a mathematically rigorous analysis of significance, even if one tried to assign different "significance values" to different hands, because there would be so much subjective judgment. So I think the idea is important, and we can go beyond just "intuition," but I'm not sure how much could be demonstrated mathematically in any rigorous fashion. jdk
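The baseline numbers for the 13-cards-in-order scenario are straightforward to compute. This sketch counts ordered deals, matching jdk's "in order" framing:

```python
from math import perm

# Number of ordered 13-card deals from a 52-card deck: 52*51*...*40.
ordered_deals = perm(52, 13)
print(ordered_deals)        # ~3.95e21 ordered hands
print(1 / ordered_deals)    # ~2.5e-22: chance of one exact ordered deal
```

Every fully specified ordered hand, all-spades-in-order included, has this same tiny probability; the difficulty jdk identifies is not in this number but in deciding, without too much subjectivity, which of the 3.95e21 hands count as "significant."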
Dr Sewell, I am getting a 404 code. KF kairosfocus
Granville Sewell @ 11, I have been a fan of yours for years, Granville. I have a question for you. When you say,
Of course, one can still argue that the spectacular increase in order seen on Earth does not violate the second law because what has happened here is not really extremely improbable. And perhaps it only seems extremely improbable, but really is not, that, under the right conditions, the influx of stellar energy into a planet could cause atoms to rearrange themselves into nuclear power plants and spaceships and digital computers.
Why do you not use the most unlikely, functionally complex phenomenon known to us -- humanity itself -- as the example of extreme improbability? Nuclear power plants, spaceships and digital computers consist of crude technology in comparison to a human being. harry
Hey Eric: My intuition is that as the number of characters in your string goes up beyond 100, though the number of specifications will also increase, the percentage of specifications as compared to the total number of possible sequences goes down, perhaps even approaching zero. Does this seem right to you? Can it be demonstrated beyond simple intuition? Phinehas
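Phinehas's intuition can be made concrete if we use description length as a rough stand-in for "specification" (my framing, not something established in the thread): strings describable in m or fewer bits number fewer than 2^(m+1), while n-bit strings number 2^n, so the fraction of strings that could carry a short description is bounded by 2^(m+1-n) and collapses toward zero as n grows.

```python
# Counting bound: fewer than 2**(m+1) binary descriptions use m or fewer
# bits, but there are 2**n strings of length n.  The fraction of strings
# that could possibly match a short description therefore shrinks
# geometrically once n exceeds m.
m = 100  # description-length budget, echoing the "beyond 100" in the comment
for n in (100, 200, 500, 1000):
    bound = 2.0 ** (m + 1 - n)  # upper bound on the specifiable fraction
    print(n, bound)
```

The n = 100 bound exceeds 1, which just means the bound is vacuous until the string length passes the description budget, matching the "beyond 100" qualifier in the question.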
Another way to convince an ESII advocate: We are playing poker, and I am dealing. I deal myself a royal flush, you an 'ordinary' hand. I win and take your money. We play another hand, and I deal myself a royal flush again. We play a total of 10 hands, all dealt by me, and I always get a royal flush. You accuse me of cheating!!!! Not so fast. ESII establishes that all sequences have equal probability. Thus there is nothing unusual about the 50 cards I have dealt myself -- no more unlikely than any other 50-card sequence. Thanks for your money! Force the ESII advocate to tell you WHY he is convinced I cheated. wsread
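wsread's scenario is easy to quantify. A royal flush is 4 of the C(52, 5) = 2,598,960 five-card hands, so ten in a row from fair deals has probability (4/2,598,960)^10, around 7.5 x 10^-59. A quick sketch (Python, offered as an illustration):

```python
from math import comb

total_hands = comb(52, 5)        # 2,598,960 distinct five-card hands
p_royal = 4 / total_hands        # one royal flush per suit
p_ten_straight = p_royal ** 10   # ten independent fair deals

print(total_hands)               # 2598960
print(f"{p_ten_straight:.1e}")   # roughly 7.5e-59
```

Each *fully specified* sequence of ten hands is equally improbable, but "ten royal flushes for the dealer" is a short, payoff-relevant description matched by almost none of them, which is exactly why everyone infers cheating.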
You all might be interested in my tutorial on specified complexity which covers some of this ground. johnnyb
mike1962,
Everyone would be surprised if it output the same 1000-bit string twice in the same universe even though "all strings are just as likely." Nobody would claim it was a random generator except the insane or those with an agenda. 1000-bit strings should never be expected to randomly hit twice in the same universe. It's logically possible, but there are better explanations. Everything we know and experience about reality tells us so.
Interesting point. According to the "birthday paradox" calculations, it would take about 2^500 trials before the chance of a repeated bit string reaches 50%, which is many more than I had anticipated. daveS
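daveS's figure checks out against the standard birthday approximation: a 50% collision chance among N equally likely values needs about sqrt(2 ln 2 * N) ≈ 1.18 * sqrt(N) draws, i.e. roughly 1.18 * 2^500 for N = 2^1000. A small sketch at a tractable scale (Python, my illustration):

```python
from math import exp, log, sqrt

def collision_prob(n, N):
    """Birthday approximation: P(at least one repeat) in n draws from N values."""
    return 1 - exp(-n * (n - 1) / (2 * N))

N = 2 ** 20                        # small stand-in for the 2**1000 bit strings
n50 = round(sqrt(2 * log(2) * N))  # ~1.18 * sqrt(N) draws for a 50% repeat

print(n50)                              # 1206
print(round(collision_prob(n50, N), 2)) # 0.5
```

Scaling the same formula up, a repeat among 1000-bit strings becomes likely only after about 1.18 * 2^500 draws, which is consistent with the "about 2^500" figure in the comment.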
FYI: In 22 I wrote "suspect" cheating, but in fact I would conclude cheating. Just a small clarification. jdk
The bottom line: Nobody would be surprised at a supposed random generator producing any particular 1000-bit string. Everyone would be surprised if it output the same 1000-bit string twice in the same universe even though "all strings are just as likely." Nobody would claim it was a random generator except the insane or those with an agenda. 1000-bit strings should never be expected to randomly hit twice in the same universe. It's logically possible, but there are better explanations. Everything we know and experience about reality tells us so. Specification trumps the ESII argument. mike1962
Hi Eric. Yes, I am discussing probability from a theoretical point of view where all equiprobable elements are chosen randomly. I am also aware of what happens when you take significance into the situation, which brings in a human element. In one of my long posts on the Darwinism thread I remember writing that if I were dealt 13 cards from what was purportedly a random deck and got 13 spades in order, I would suspect cheating, not chance, as the cause. jdk
Your rudeness is duly noted, Barry, as well as the snarky tone of your post. FWIW, although I don't expect that you have paid attention, I've been writing on this issue on the Darwinism thread, and I have NOT been making the argument that you dismiss, nor have I been making the argument you pointed to as "Miller's Mendacity." In fact, I have been describing things in a way that supports your position. So perhaps you should think about not jumping to conclusions before you call someone foolish. jdk
jdk @ 14: I infer from your comments that you did not follow the link in my comment @ 13. Perhaps you should before you comment further. It will help you look somewhat less foolish. Barry Arrington
Dionisio: Thanks for the thoughts. I'm not sure we can't apply probabilities to procedures as a matter of principle, although current lack of understanding of the procedures would make it more difficult, to be sure. You and gpuccio are quite right that the additional procedural aspects are hugely significant. This is part of the problem with the co-option proposal for the bacterial flagellum, for example. You can't just throw more parts or more lines of code into the middle of a highly controlled and sophisticated manufacturing process and expect anything good to come of it. Eric Anderson
jdk @8: Thanks for stopping by. I understand your desire to limit your part of the discussion, and you are of course free to participate where you want. If I may, however, I think the points I am raising are indeed relevant to issues you have discussed and have said you are interested in. I am focusing on specific nuances, so let me explain a bit further.
Right off the bat, this comment excludes me, as I have made it clear that of course some improbable events have more significance than others.
Agreed that some sequences or events have more significance than others. Whether they are improbable is part of the question.
Every sequence is just as improbable as any other. This is a true fact, and I believe Eric agreed with me about this on the other thread.
Almost. This is one of the important nuances I am highlighting in this OP. In the context of naked probability without any assessment of specification, which is the context you have been discussing and saying you are interested in, it is true that any randomly-generated sequence will have the same probability as any other randomly-generated sequence (assuming similar parameters, length, and so on). On that point I agree with you. However, as soon as we get to a question of the origin of a sequence, the logic must shift dramatically.

It is quite clear to me that there is a disconnect in the discussion between those who are arguing that every sequence is equally improbable and those who are arguing against it. Is the disconnect because people can't do math? Perhaps for some, but not for most. The disconnect is not the math; it is the logic -- specifically, the underlying assumptions in the discussion. If we assume random generation, then yes, each sequence is just as improbable as the next. This is worth noting, but is hardly of significant substance. On the other hand, if we are asking about the origin of the sequence, then we are dealing with competing causal explanations, in which case the probabilities are not equivalent. Not even close.

----- So I appreciate your desire to talk about the equal probabilities assuming random generation, assuming the same underlying causal explanation. But I can assure you that this assumption has not been made clear to many of your interlocutors. Thus the disconnect. In any event, even if you don't want to spend a lot of time on the issues I am highlighting, do you agree that the every-sequence-is-equally-improbable argument is not a good argument against the design inference, for the reasons I have outlined? Eric Anderson
Bob O'H @4:
Isn’t that also a problem for ID proponents, as they also make the same assumption? The CSI family of statistics seem to make this assumption and don’t, for example, include incremental selection.
Not really. Here I have just been analyzing a single string. In biological terms we might think of a particular nucleotide sequence or amino acid sequence. This is an incredibly low baseline that must be reached. If we are thinking of a more complete system, then we run into my additional point:
Furthermore, when we have, as we do with living systems, multiple such sequences that are required for a molecular machine or a biological process or a biological system – arranged as they are in their own additional specified configuration that would compound the odds – then such calculations quickly go off the charts.
I presume this latter is what you're referring to -- the idea that something like, say, the bacterial flagellum could be constructed bit by bit, rather than all at once? You are right that it is important to acknowledge the hypothetical possibility of an incremental, stepwise construction in the initial analysis. However, upon closer analysis what we find in most cases is that this is nothing but wild speculation. As an empirical matter, it is quite clear that numerous systems require a significant number of parts to work. Indeed, the way in which the parts of the bacterial flagellum have been identified has been primarily through knockout experiments, thus confirming the irreducible core of the flagellum. Now one could argue, as Darwinists have been wont to do in the face of this empirical evidence, that maybe, hypothetically, perhaps, the bacterial flagellum could have been constructed by a long series of individual components that eventually came together to form the bacterial flagellum. There are a few significant problems with this idea. First, there is no evidence for it. Second, there is not even a reasonable theoretical basis for such a scenario, beyond vague assertions. Third, although such a scenario would indeed avoid an "all at once" construction, it instead requires a long series of mutations and changes, all of which just happen to be beneficial, all of which just happen to be of such selective benefit to become fixed in the population, all of which just happen to occur at the right time and in the right order, all of which just happen to add up to a complex functional system. Unfortunately for the Darwinian paradigm, such an approach simply avoids the all-at-once probabilistic hurdle by embracing its own set of fantastic probabilistic hurdles. It's out of the frying pan and into the fire. It isn't realistic or reasonable. Most of the stories, like Matzke's made-up hypothetical he managed to get published, can scarcely even be called science. 
Rather, they are just another in the long string of simplistic Darwinian just-so stories. Eric Anderson
Are you talking about poker? Of course each hand is equally random. Or are you talking about something else? jdk
Every sequence is equally probable, but not every sequence is equally random. https://en.wikipedia.org/wiki/Algorithmic_information_theory EricMH
Nothing in poker is of extremely low probability. There are only about 2.6 million possible five-card hands, all equally probable; even royal straight flushes show up about once every 650,000 hands. Also, there is a well-defined hierarchy of significance, and the probabilities of all the situations are well known. jdk
Why won't these "improbable things happen all the time" people play poker with me? https://uncommondesc.wpengine.com/intelligent-design/low-probability-is-only-half-of-specified-complexity/ Barry Arrington
D, process, regulation, interactions across complex networks such as cellular metabolism and so forth are all connected to the FSCO/I issue. It is just usually hard to see them as a direct, right-there-in-the-microscope visible feature. It is that visibility issue that makes D/RNA, proteins, enzymes and ribosomes so important in the discussion. And BTW, fine tuning is closely connected; in the world of life, islands of function are about clusters of operating points deeply isolated in a field or sea of possible configs. And it does no real good to try to project hill-climbing within an island to the wider context, where the dominant challenge to evolutionary materialistic chance and necessity schemes is to blindly FIND a shoreline of function. Unfortunately, this difference between blind and insightful exploration of a space of possibilities seems very hard for objectors to hear. Look man, I am glancing across at a piece of white pine that came here as shipping support material that I can see reconfigured as a shad bait made of wood with through wiring. There is not a chance in the world that that would happen without insight and design. And if you think, oh, life reproduces so evolution answers, you have not seen the FSCO/I challenge to get TO reproduction yet, and need to read Paley Ch 2 on the self-replicating time-keeping watch, then apply that to the von Neumann kinematic self-replicator and to the OoL challenge at the root of the tree of life. Then, to onward origin of body plans with vastly differing architectures, and up to our own origin and the issue of where rational mind comes from that allows us to have a discussion. [BTW, I am reading Reppert and dipping into Pearcey right now, with Yockey waiting in the wings. Good things to come; just the prefatory material in Yockey has a lot to say that too many are too stubborn to hear, frankly.]
And at cosmological level, we are talking about the abstract architecture of a cosmos with many, mutually adapted, delicately balanced factors. And again, we see an astonishing degree of difficulty in even following the point accurately. These suggest to me that we deal with commitments at worldview level that warp the more technical discussion. Then, when, to break through, we put up and followed up -- cf here -- a striking concrete case such as Antikythera, and join that to Paley, it is studiously ignored or taken as an occasion for side-tracking tangents that do not look very fruitful. Then notice what didn't happen when we corrected the assertions that tried to discredit ID researchers -- the objectors showed that they do not see themselves as accountable before truth and fairness; that is sadly standard for agit-prop operatives, but there are too many enabling by going along. All of this tends to point to where our civilisation is going, and it is not pretty. Sigh, back to the RW challenges of the day, even on Whitmonday. KF kairosfocus
In my recent Physics Essays article: http://www.math.utep.edu/Faculty/sewell/articles/pe_sewell.html I wrote: But the second law is always about probability, so what is still useful in more complicated scenarios is the fundamental principle behind all applications of the second law, which is that natural causes do not do macroscopically describable things which are extremely improbable from the microscopic point of view. Footnote: Extremely improbable events must be macroscopically (simply) describable to be forbidden; if we include extremely improbable events which can only be described by an atom-by-atom accounting, there are so many of these that some are sure to happen. (If we flip a billion fair coins, any particular outcome we get can be said to be extremely improbable, but we are only astonished if something extremely improbable and simply describable happens, such as “the last million coins are tails.”) If we define an event to be simply describable when it can be described in m or fewer bits, there are at most 2^m simply describable events; then we can set the probability threshold for an event to be considered “extremely improbable” so low that we can be confident that no extremely improbable, simply describable events will ever occur. Notice the similarity between this and Dembski’s argument that unintelligent forces do not do things that are “specified” (simply or macroscopically describable) and “complex” (extremely improbable). Granville Sewell
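The counting argument in Sewell's footnote is essentially a union bound, and can be sketched numerically (Python; the particular m and probability threshold below are my illustrative choices, not Sewell's):

```python
# At most 2**m events are describable in m or fewer bits.  If each such
# event has probability at most p, the union bound says the chance that
# ANY simply describable event occurs is at most 2**m * p.
m = 100                  # generous description-length budget, in bits
p = 2.0 ** -500          # e.g. one specific outcome of 500 fair coin flips
bound = (2 ** m) * p     # = 2**-400, about 3.9e-121

print(bound < 1e-100)    # True: an event this improbable AND this simply
                         # describable should never be observed
```

The point of the construction is that the threshold p can be set low enough that even after multiplying by the number of short descriptions, the total probability stays negligible.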
kf, I am not interested in islands of functions, or any other arguments about how these issues apply to the physical universe, so I'd rather you not imply that I am by addressing your remarks to me. jdk
BO'H (and JDK): Islands of function. Kindly cf the discussion here in that light. KF PS: Please read Walker and Davies i/l/o the statistical mechanics, phase/ state/ configuration space view of systems and their dynamics:
In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [--> given "enough time and search resources"] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some ’start’ time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense. We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the ’initial’ state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).
[--> or, there may not be "enough" time and/or resources for the relevant exploration, i.e. we see the 500 - 1,000 bit complexity threshold at work vs 10^57 - 10^80 atoms with fast rxn rates at about 10^-13 to 10^-15 s leading to inability to explore more than a vanishingly small fraction on the gamut of Sol system or observed cosmos . . . the only actually, credibly observed cosmos]
Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [--> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine–tuning of the initial conditions. [ --> notice, the "loading"] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). ["The “Hard Problem” of Life," June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]
kairosfocus
I've been involved in the linked thread, and don't want to get involved in a different thread, I think, especially since this one involves more topics than I've been commenting on, but I want to make one comment about this:
is that every sequence is just as improbable as another. And therefore, comes the always implied (and occasionally stated) conclusion, there is nothing special about 500 heads in a row.
Right off the bat, this comment excludes me, as I have made it clear that of course some improbable events have more significance than others. Every sequence is just as improbable as any other. This is a true fact, and I believe Eric agreed with me about this on the other thread. That is different than the issue of some sequences being more significant than others. As I wrote about at length in the other thread, the issue is not that all sequences are equally improbable (because they are), but that the ratio of significant to non-significant sequences is small. jdk
Eric Anderson and KF, I believe that the real biology FSCO/I problem is beyond probability. Note how GP explained so well the probability issues associated with the huge jumps of complex functional specified information in proteins found in different species. He wrote more than one very technical OP and follow-up comments on this subject, in much detail. However, GP himself clarified that his articles --which were very insightful and provided strong arguments for ID-- did not cover the fundamental questions of what he called 'procedures', which are so far from being well understood. Once we step into GP's "procedures" (spatiotemporal controlling mechanisms), then probabilities don't seem to apply, at least not directly. Actually I was looking forward to reading an OP by GP on the 'procedure' topic. I miss his technical OPs and serious discussions. His politely dissenting interlocutors lacked arguments against most of GP's detailed comments. Assuming one could somehow get over the insurmountable probability issues in biology, we still have to face the reality of GP's "procedures", where probability concepts don't seem to apply, at least not straightforwardly. Note that most of the questions posed in the comments referencing biology research papers in the threads "Mystery at the heart of life" and "A third way of evolution?" are implicitly related to the 'procedure' problem. Does this make sense? Please correct my comment. Thanks. Dionisio
Bob O'H @4: Did you miss this? https://uncommondesc.wpengine.com/intelligent-design/claim-that-animals-are-1-2-billion-years-old-comes-under-fire/#comment-632803 See comments @23-25, 28-30. They were all addressed to you. Thank you. Dionisio
Bob O'H @4: Did you quit or abandon a discussion in another thread recently? Dionisio
The ESII argument, popular though it may be among some intelligent design opponents, is fatally flawed. First, because it assumes as a premise (random generation) the very conclusion it seeks to reach.
Isn't that also a problem for ID proponents, as they also make the same assumption? The CSI family of statistics seem to make this assumption and don't, for example, include incremental selection. Bob O'H
kf @2:
However, functionally specific organised sequences will be intermediate: aperiodic but compressible only to a limited extent, while fulfilling a coherent function depending on configurations coming from a cluster in what has been called an island of function.
Agreed. Typically intermediate, which is why we have to be careful about the idea that compressibility has close correspondence to complex specified information. I think some ID proponents have occasionally made statements about compressibility that might be easily misunderstood or perhaps even wrong. People also need to understand the scale of the configuration space. The islands of function are not only islands, they are minuscule. Nothing like Japan or the Philippines or even the small dot of Kauai we see on our classroom globe. Berlinski compares the island of function to a dime-sized circle on an entire planet-sized landscape -- something that would not even be visible on our largest museum-sized globes. Yet the Darwinian claim is that this nearly invisible speck was miraculously hit upon. Over and over and over.
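To put Berlinski's analogy in rough numbers (the dime's diameter and the Earth-sized surface area below are my assumed figures, not from the comment): a dime-sized target on a planet-sized landscape is about 5 x 10^-19 of the surface.

```python
from math import pi

dime_area = pi * (0.0179 / 2) ** 2  # US dime, ~17.9 mm diameter, in m^2
planet_area = 5.1e14                # Earth's surface area in m^2 (assumed figure)
fraction = dime_area / planet_area

print(f"{fraction:.1e}")            # ~4.9e-19 of the landscape
```

Even that tiny ratio is enormously generous compared to the sequence-space ratios discussed elsewhere in the thread, which run to hundreds of orders of magnitude smaller.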
It is interesting to see objectors saying in effect, but there were signs of people and artifacts around. Yes, so, how was each of these items spotted as an artifact?
Exactly. After all, why was the nearby rock on the seafloor recognized as not a designed artifact and left to lie rather than being retrieved? The presence of humans around and even the presence of other artifacts around is not what permitted the design inference for something like the Antikythera mechanism. It was the properties of the artifact itself.
But what of the magic of descent with modification? . . . the issue is the origin of the underlying function of self-replication as itself reflecting FSCO/I that needs to be explained.
Indeed. Also, the misguided assumption, commonly heard within evolutionary theory, that reproduction somehow adds meaningfully to the ability of the evolutionary process to do its work . . . Post for another time . . .
Indeed, I think we need to ponder that finding an entity exhibiting FSCO/I, given the strength of the sign, is good reason to infer first to design as key causal factor then to the second order inference that contrivance points to a capable contriver at the point of origin.
Agreed. This is the way it always works. In all historical situations (i.e., absent real-time observation of the creative process), it is the contrivance that gives us information about the existence of and ability of the contriver. Eric Anderson
EA, a good effort, one that should provoke thought. I would add first, that as 3-d entities can be described by using strings of answers to Y/N q's in a description language, discussion on strings is WLOG. For example, consider AutoCAD. Second, I think the pivotal issue is often, does one consider whether a potential designer is POSSIBLE at the time/place of origin. If in one's estimation, a designer is even more implausible than that a blind search process produced some phenomenon exhibiting FSCO/I then one will be inclined to reject a design inference almost regardless of evidence or argument. So, you are right, there is a suppressed begging of questions at work; often lurking in the implications of so-called methodological naturalism. Third, I point to a complication: mechanical, law-like necessity. Indeed, back to the 1970's and 80's, Orgel and Thaxton et al spoke in terms of what Trevors and Abel would come to term orderly, random and [specifically] functional sequence complexity. These can be discerned on degree of resistance to "simple" and "short" but sufficient description. A truly random sequence basically would have to be duplicated to describe it. An orderly sequence comprising a repeated block, can be dramatically compressed as repeat block X, n times. However, functionally specific organised sequences will be intermediate: aperiodic but compressible only to a limited extent, while fulfilling a coherent function depending on configurations coming from a cluster in what has been called an island of function. Text with a few typos or the like is still readable due to some redundancy, but this will soon be overwhelmed by noise as the proportion increases. The design filter in the per aspect form I typically use, is linked to this, and to the triple pattern of causes we commonly see: blind chance and/or mechanical necessity and/or intelligently directed configurations. 
It is modified by the importance of being sure when one infers design and so false negatives are accepted. Per aspect relates to the need for configuration-based functional coherence and sufficient complexity beyond a threshold to be jointly present in the same particular focal aspect of an entity or a phenomenon. For instance, the six pointed star type snowflake shows six-pointed order and complexity of riming on the spikes, but these apply to distinct aspects. Likewise, a pendulum exercise will show orderly behaviour with a scattering of "noise" or "error" if we observe it. As a result, there are two successive defaults before concluding design; more or less:
1. Select a focal aspect.
2. Does it exhibit orderly, simply describable, reliably repetitive behaviour under closely similar initial circumstances? If so: lawlike necessity. If not: chance and/or intelligently directed configuration.
3. Does it exhibit high complexity AND functional coherence based on specific configuration? If not: chance. If yes: intelligently directed configuration.
4. Move to the next aspect.
The point is, some aspects will exhibit necessity and/or chance, others may show that in addition, there is a factor best explained on design. And so, the issue is, an inference to best causal explanation on tested, reliable signs. We routinely recognise lawlike necessity from orderly patterns emerging from sufficiently similar initial circumstances, including crystallisation on unit cells. We routinely see chance processes and typical patterns such as TV "snow" or similar patterns of minerals in say granite, or good random number generators. We also notice cases of FSCO/I such as how a diver working 148 feet down spotted a rock with a gear sticking out, or more exactly the fossil of a gear. Then an Archaeologist -- apparently, a former Minister of Education in Greece BTW -- spotted the same FSCO/I rich pattern among the items retrieved from the Antikythera wreck site. It is interesting to see objectors saying in effect, but there were signs of people and artifacts around. Yes, so, how was each of these items spotted as an artifact? ANS: Each manifested FSCO/I, in the case of half buried marble statues, they showed ravages that wore away features, too. But what of the magic of descent with modification? ANS: As Paley pointed out in CH 2 of his Nat Theol -- fifty years prior to Darwin's publication, the issue is the origin of the underlying function of self-replication as itself reflecting FSCO/I that needs to be explained. That is, start in Darwin's pond or the like and address OoL. Then, we can go on to see the similar challenge to account for origin of information-rich body plans, starting at molecular levels. As in 10 - 100+ mn bases of new genetic info for body plans, and 100 - 1,000+ for OOL, where both are well beyond the 500 - 1,000 bit threshold that overwhelms sol system to observed cosmos scale atomic resources [10^57 - 80 atoms, 10^12 - 14 rxns per s for fast organic type reactions] in a 10^17 s window of time. 
So, our causal pattern intuitions on sign are well founded. Indeed, I think we need to ponder that finding an entity exhibiting FSCO/I, given the strength of the sign, is good reason to infer first to design as key causal factor then to the second order inference that contrivance points to a capable contriver at the point of origin. KF kairosfocus
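The 500-bit threshold claim above can be spelled out using the figures given in the comment itself (up to 10^80 atoms, ~10^14 fast reaction events per atom per second, a 10^17 s window):

```python
# Total chemical-scale events available in the observed cosmos, using the
# comment's own upper-end figures:
atoms = 10 ** 80      # atoms in the observed cosmos
rate = 10 ** 14       # fast reaction events per atom per second
seconds = 10 ** 17    # rough age-of-universe time window

max_events = atoms * rate * seconds  # 10**111 events in total
configs = 2 ** 500                   # configurations at the 500-bit threshold

# Even this generous event budget samples a vanishing fraction of the space:
print(max_events / configs < 1e-39)  # True (the fraction is ~3e-40)
```

So even granting every atom a fast "trial" for the whole window, the total trial count falls short of 2^500 by roughly forty orders of magnitude, which is the arithmetic behind the "vanishingly small fraction" language.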
Additional Note to Readers: The every-sequence-is-equally-improbable argument against design has significant problems. I addressed the key issue (specification) in my podcast here: http://www.discovery.org/multimedia/audio/2015/06/eric-anderson-probability-design/ There are a couple of critical follow-up points that are set forth in the above OP. ----- See the following brief comments from the recent threads for more background on the current discussion: https://uncommondesc.wpengine.com/intelligent-design/darwinism-why-its-failed-predictions-dont-matter/#comment-632906 https://uncommondesc.wpengine.com/intelligent-design/darwinism-why-its-failed-predictions-dont-matter/#comment-632950 Eric Anderson