Uncommon Descent Serving The Intelligent Design Community

# Back to Basics


The materialists have been doing a little end zone dance over at The Circularity of the Design Inference post. They seem to think that Winston Ewert has conceded that Dembski's CSI argument is circular. Their celebrations are misplaced. Ewert did nothing of the sort. He did NOT say that Dembski's CSI argument is circular. He said (admittedly in a rather confusing and inelegant way) that some people's interpretation of the CSI argument is circular.

Ewert is making a very simple point. To make a design inference based on mere probability alone is fallacious. I don’t know what all of the fuss is about. But just in case this is not clear by now, let’s go back to basics. The design inference requires two things: A huge ocean of probability and a very tiny island of specification. If you don’t have both, it does not work.

Perhaps a poker example will illuminate the issue. There are 2,598,960 five-card poker hands. Only one of those hands corresponds to the specification "royal flush in spades." The probability of a royal flush in spades on any given hand is therefore about 0.000000385. Now let us suppose the "search space" (i.e., the ocean of probability) is "four consecutive hands of poker." The probability of a series of independent events is the product of the probabilities of the individual events. The probability of receiving a royal flush in spades in four consecutive hands is therefore 0.000000385^4, or about 2.197 × 10^-26.
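The arithmetic above can be checked with a short Python sketch (standard library only; the hand counts and probabilities are exactly those in the paragraph):

```python
from math import comb

# Number of distinct 5-card poker hands: C(52, 5).
total_hands = comb(52, 5)               # 2,598,960

# Exactly one of those hands matches "royal flush in spades".
p_royal_spades = 1 / total_hands        # ≈ 3.85e-07

# Four consecutive independent deals: multiply the per-hand odds.
p_four_in_a_row = p_royal_spades ** 4   # ≈ 2.197e-26

print(total_hands, p_royal_spades, p_four_in_a_row)
```

Running it confirms the per-hand probability (about 3.85 × 10^-7) and the four-hand product (about 2.197 × 10^-26).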

Here's the interesting point. The probability of ANY given specific series of four poker hands is exactly the same, i.e., about 2.197 × 10^-26. So why would every one of us look askance at the series "four royal flushes in spades in a row" even though it has the exact same low probability as every other specific sequence of four hands?

The answer to this is, of course, the idea behind CSI. Low probability by itself does not establish CSI. The fact that in the enormous probabilistic ocean of four consecutive poker hands the deal landed on a tiny little island of specification ("four royal flushes in spades") is what causes us to suspect design (i.e., cheating).

Ewert writes:

The fact that an event or object is improbable is insufficient to establish that it formed by natural means. That’s why Dembski developed the notion of specified complexity, arguing that in order to reject chance events they must both be complex and specified.

Poker analogy: The fact that a series of four poker hands has a very low probability (i.e., 2.197 × 10^-26) is insufficient to establish that it was caused by pure chance. That's why we need a specification as well.

Ewert:

Hence, it's not the same thing to say that the evolution of the bacterial flagellum is improbable and that it didn't happen. If the bacterial flagellum were not specified, it would be perfectly possible to evolve it even though it is vastly improbable.

Poker analogy: It is not the same thing to say that a series of four hands of poker is improbable and therefore it did not happen by chance. If the four hands were not specified, it would be perfectly possible to deal them by pure chance even though any particular such sequence is vastly improbable.

Ewert:

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

Poker analogy: Dembski's work on specified complexity and design inference works to show us why that improbability (i.e., 2.197 × 10^-26) gives us reason to reject chance and accept design (i.e., cheating).

In conclusion it seems to me that after all the dust settles we will see that Ewert was merely saying that Miller’s Mendacity (see the UD Glossary) misconstrues the CSI argument. But we already knew that.

126 Phinehas November 18, 2014 at 11:39 am

After the first hand, the hand becomes the prior specification, doesn't it?

Yes.

In other words, wouldn't the probability of getting a Royal Flush in Spades four hands in a row be exactly the same as the probability of getting any five hands in a row?

That's essentially what I said in post #6:

The probability of getting the same hand in four consecutive deals, where the hand *is* specified in advance, is (3.8477e-007)^4 = 2.1918e-026

The probability of getting the same hand in four consecutive deals, where the hand is *not* specified in advance, is (3.8477e-007)^3 = 5.69641e-020

The probability of getting the same hand in five consecutive deals, where the hand is *not* specified in advance, is (3.8477e-007)^4 = 2.1918e-026

cantor
cantor:
The point being, something’s rotten in Denmark if the same hand is dealt 4 times in a row, regardless of whether or not said hand meets any prior specification.
After the first hand, the hand becomes the prior specification, doesn't it? In other words, wouldn't the probability of getting a Royal Flush in Spades four hands in a row be exactly the same as the probability of getting any five hands in a row? Phinehas
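The probabilities cantor cites can be verified in a few lines of Python (standard library only; the figures are the ones given in the comments above):

```python
from math import comb

p = 1 / comb(52, 5)   # probability of any one specific 5-card hand

# Hand specified in advance, then matched on four consecutive deals:
p_four_specified = p ** 4      # ≈ 2.1918e-26

# Hand NOT specified in advance: the first deal fixes the target,
# so "same hand four times" needs only three further matches:
p_four_unspecified = p ** 3    # ≈ 5.6964e-20

# "Same hand five times, not specified" needs four matches after
# the first deal: exactly Phinehas's equivalence.
p_five_unspecified = p ** 4
assert p_five_unspecified == p_four_specified
```

The final assertion is the point of the exchange: a pre-specified hand dealt four times has the same probability as any hand repeating five times when nothing was specified in advance.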
111 Alicia Renard November 17, 2014 at 10:24 am I have no idea what “blind watchmaker thesis” is. You might want to expand the circle of books and articles you read to include opposing viewpoints. cantor
'what are the blind watchmaker research programs?' I think they're pretty much in the dark, Cantor. Axel
Dr JDD, If you'd like to understand why KF's "islands of function" rhetoric is bogus, I'd recommend the book Arrival of the Fittest, discussed in this thread. It's very bad news for ID. You'll be hearing a lot about it, so you might as well read it. keith s
Dr JDD, I just had an emergency of zero notice come over my bows, I thank you for taking time to respond as above. Gotta run. KF kairosfocus
Alicia Renard says:
KF: Sparse blind search is not a good strategy to find isolated islands of function in that space, one dominated utterly by bit strings in no particular order or organisation and of near 50:50 distribution. Let’s continue with an example related to reality. How common is functionality in proteins? Of all possible proteins (as there is no upper limit to the length of a protein sequence this is infinite – the longest known sequence in living organisms being titin variants that approach 35,000 residues). Interestingly, there seems to be an observed increase in the length of proteins found in Archaea, bacteria and eukaryotes that, in itself suggests that new proteins do not arrive in living organisms by random assembly. Titin contains two sub-domains, one of 100 residues – the other of 80, each repeated over a 100 times which again suggests an evolutionary path of adding subunits. Of course you can move this back to an issue of abiogenesis and ask where did the first self-sustaining self-replicators come from but this does not appear to be your argument when talking about “islands of function” with regard to protein sequences. To repeat what others have said, the evolutionary process of small variations arising and being sifted by a differential reproduction process is what you need to model if you want to demonstrate that such a process is not viable or adequate to explain life’s diversification from life’s common ancestor.
And they say that ID uses circular arguments. This is circularity and assumptiveness dressed up in knowledge. Titin is certainly an interesting protein to discuss and think of, but notice how Alicia approaches it:
Titin contains two sub-domains, one of 100 residues – the other of 80, each repeated over a 100 times which again suggests an evolutionary path of adding subunits.
No, it does not. That is your interpretation based on an a priori assumption (one that you then use to validate that same assumption, which is a circular argument). This is a fallacy when discussing a protein, as you have failed at the most basic level to first understand the function of the protein. In this case, titin is essentially a structural type of protein that spans the cell and plays a significant role in contraction. Hence it is massively expressed in cardiac tissue and muscle tissue. Once you understand its function, the fact that it is made up of multiple repeating subunits makes perfect sense - optimisation for function. In fact, given its role it would be unusual, odd, or unexpected for a protein like titin to have non-repeating units. It does nothing of the sort to suggest that evolution has a "path" of adding small subunits - that is overtly assumptive and, as said, ignores the whole point of the protein (function - something many materialists have a hard time grasping). Whether the deception is intended or not, the point is this is why you fail at the basic level of "wisdom" in science - you have the question all wrong. There is a book that says a lot about the foolishness of men and their supposed wisdom when they cannot even acknowledge the existence of a Designer... Oh, by the way, seeing as Thornton et al have it all sorted out, I suppose you can explain to us all how a large, highly conserved protein essential for life such as the ATP synthase enzyme came around when it did not have a simpler template that we know of for evolution's magic to work on and step-wise produce such a vital protein? I would love to hear how blind, unguided processes led in a step-wise manner to such an essential protein. Dr JDD
Adapa, the question is do those self-replicators come as a royal flush or through an iterative process? Collin
Collin Do you think that the iterative feedback process evolved via an iterative feedback process or came about all at once? As soon as you had imperfect self replicators competing for resources you had the iterative feedback process of evolution. Everything since then was just refinements to the process. Adapa
KF @ 108, I don't quite follow your explanation, but I don't have a good grasp on how to determine specification so I think the communication problem is on my end. Thanks for the effort. Do you think Dembski would consider crystals "well-specified," as Orgel called them? Learned Hand
Adapa @85 "Interesting that there’s been no comment on the observation that the OP “dealt a royal straight flush” poker example is fundamentally wrong when used as an analogy for long term iterative feedback processes like evolution." Do you think that the iterative feedback process evolved via an iterative feedback process or came about all at once? And when it came about, was it so stable that it had zero chance of death per iteration? Let's say that at each iteration there is a chance that the process fails due to fatal mutation, that for an individual this chance is (1% times the number of days old), and that it iterates each day. Let's also say that the reason death is more likely as time goes on for the individual is that the likelihood of detrimental mutations increases at that rate. Finally, let's say that a mutation-detection-and-correction mechanism evolves. It evolves because it confers a benefit on the individual's survival and reproduction. Does this mechanism evolve all at once like a royal flush? If not, does it begin to confer benefits on the organism before it is fully developed? If so, how does it continue developing when its job is to prevent further mutations? Wouldn't it be more successful the more it failed (thus a paradox)? Seems like with some systems, including OOL, it really is royal flush all at once or a paradox. Collin
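The age-dependent failure model Collin sketches is easy to simulate. Here is a minimal Python sketch of his hypothetical (the 1%-per-day figure is the commenter's assumption, not biological data):

```python
# A minimal simulation of the hypothetical model above.  All numbers
# are the commenter's assumptions (failure chance = 1% times age in
# days, iterated daily), not biological data.

def survival_probability(days):
    """Chance an individual survives `days` daily iterations."""
    p = 1.0
    for age in range(1, days + 1):
        daily_failure = min(1.0, 0.01 * age)  # capped at certainty
        p *= (1.0 - daily_failure)
    return p

print(survival_probability(10))   # ≈ 0.565
print(survival_probability(100))  # 0.0: failure chance hits 100%
```

Under these assumptions no individual survives past day 100, which is the kind of cumulative pressure the comment is pointing at.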
Alicia @ 110. Did you not read the OP? If not, your question can be excused. Read the OP. If you have read the OP and still ask that question, I don't think I can help you any further. Barry Arrington
Barry #85: "So much for professors of population genetics being able to read for comprehension." Now we know where Joe gets his abusive and offensive nature from. Lap dogs always tend to do things that they think will please their master. But seriously, you have banned ID opponents for far less serious infractions than Joe displays several times per day. If you want UD to encourage civil and honest discussion, as you claim, you are very clearly demonstrating your hypocrisy by not dealing with Joe as he should be. If you are not going to do it because it is the right thing to do, then do it to reduce your personal legal liability. Blog moderators have been held legally accountable for libelous comments posted on their sites when the moderator has not acted on them. I am not threatening to take this action but we all know that it is only a matter of time before Joe makes libellous comments about another commenter who will take action. centrestream
Intelligent Design and the Origin of Biological Information: A Response to Dennis Venema http://www.discovery.org/a/17571 bornagain77
KF You should spend some time reading Biologos. Try this thread by Dennis Venema, for example, and I recommend reading comments by Roger Sawtelle. You could do worse than learn a little humility from him. Alicia Renard
KF writes: Empirical observational warrant for blind watchmaker thesis OOL is _____________ ? I have no idea what "blind watchmaker thesis" is. There is currently no empirically-based theory or hypothesis for the origin of life on Earth. How could there be when no evidence remains of early life? For creating sets of hundreds of proteins in such a way that they can be coordinated is ___________? The evolutionary explanation is, briefly, reiterated rounds of variation and selection on populations, one very powerful illustration being the ongoing Lenski experiment. All at once we can recognise is a non starter, for stepwise, the actual observational base is:_________________? Trying to squeeze the juice of meaning out of this lemon but failing. Why then did Orgel and Shapiro end up in mutual ruin____________? (By contrast the origin of FSCO/I by intelligently directed configuration is all around us with cumulatively trillions of cases in point, i.e. vera causa is in hand.) Ah, something familiar! FSCO/I and variants only come into play once you decide that all possibilities other than design are improbable. Why you just don't settle for the straightforward explanation that our Lord God created the Universe, with everything, including us, in it I really can't understand. And why we should expect to know God through models and mechanisms I don't know either. Alicia Renard
BA writes: Alicia @ 93. So much for professors of population genetics being able to read for comprehension. As I explained above, Winston said no such thing. Do you mean Dr Ewert did not write:
CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.
and
So Keith is right, arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn’t designed for the purpose of demonstrating the improbability of evolution. When used for its proper role, specified complexity is a valid, though limited argument.
What has Felsenstein misunderstood about what Ewert wrote? Alicia Renard
Joe #95
When I have posted anything resembling an insult that was unprovoked? Evidence please or admit that you are dishonest
…..And we know that you and your ilk do not value open discussion. CSI exists regardless of how it was formed. That keith s can’t get that fact demonstrates he is not into an open discussion. keith s wants to dominate discussions with his strawmen, lies and misrepresentations.
markf
LH: Specification is not told by the clock, though "after the fact" has been used for saying the equivalent of painting the target around where the arrow hit; e.g. throw your lottery then say oh the number we pulled, let's print it on a ticket and call it the winner. Which would be different from, pull a set once, then choose it as a target then later try to hit it again. (If a supposedly fair lottery with the old fashioned balls keeps hitting the same number that is a sign of cheating or defects. Guess why.) Just to start, proteins fold or fail, and attempts to play around with them often end in destabilisation, cf here. It is told by independence or reasonable detachability. Functionality of the good old 6500 C3 is either there or not, for instance, and it is readily observable. BTW, that's why so many fishermen swear by it though there is a warning on putting Carbon Tex washers in the drag on older units as the gears may strip if the drag is set too hard . . . another case of interactive function and limits to variability due to the island of function nature of the entity. OK when I have some more time. KF kairosfocus
Joe Umm well saying biological function = biological specification is done before we observe it. No. People were observing biological function for thousands of years before the concept of biological specification was ever thought up. It's all just post hoc rationalization which makes it meaningless. Adapa
A: Recent events should make it plain who makes those decisions around UD. It 'ent moi. KF kairosfocus
AR: I already pointed you here, please cf, it does address that exact question. In addition, recall the root and first challenge is at OOL, where hundreds of mutually interacting proteins, and many other molecular machines ahve to be put together in an integrated cell. Empirical observational warrant for blind watchmaker thesis OOL is _____________ ? For creating sets of hundreds of proteins in such a way that they can be coordinated is ___________? All at once we can recognise is a non starter, for stepwise, the actual observational base is:_________________? Why then did Orgel and Shapiro end up in mutual ruin____________? (By contrast the origin of FSCO/I by intelligently directed configuration is all around us with cumulatively trillions of cases in point, i.e. vera causa is in hand.) KF kairosfocus
Now how about you stepping up and supporting your accusation against me? Joe
Umm well saying biological function = biological specification is done before we observe it. Joe
Joe Again- biological specification refers to function. We do not know if there is functionality until we observe it. Orgel’s specified complexity wrt biology also refers to functionality. All done post hoc. Dembski's specification has to be made before the observation. UD's own ID Fundamentals page says so as I've already posted twice. Adapa
R0bb- Why can't it be that Dembski took Orgel's SC and just expanded on it? That is made it more specified? Joe
AGAIN- Natural selection is non-random only in that not every individual has the same probability of being eliminated. Ernst Mayr goes over this in "What Evolution Is". Contingency rules when it comes to natural selection. Joe
Learned Hand- Read "No Free Lunch"- Orgel is referenced there and in conjunction with "Dembski's" specified complexity. Joe
Alicia @ 93. So much for professors of population genetics being able to read for comprehension. As I explained above, Winston said no such thing. Barry Arrington
Again- biological specification refers to function. We do not know if there is functionality until we observe it. Orgel's specified complexity wrt biology also refers to functionality. The references in "No Free Lunch" make it clear that the concepts are the same- well maybe Dembski's is updated from Orgel's Joe
Alicia:
Following some links, I find that Joe Felsenstein, a professor of population genetics has something to say about the relevance of various arguments at Uncommon descent to reality.
We have proven that Joe Felsenstein doesn't even understand the arguments. So what he has to say is irrelevant. Joe
KF when will you quit tut tutting Joe and finally do something about his constant belligerent and unprovoked insults?
When I have posted anything resembling an insult that was unprovoked? Evidence please or admit that you are dishonest Joe
Kairosfocus,
We can all examine the matter and follow the discussion to see how we have a basic observable (that is a commonplace) functionally specific complex organisation and associated information. Orgel & Wicken discussed it in qualitative, observational terms. WmAD sought to develop a metric model, pivoting on a dual to the information content. So it is not just his word, we can work it out....
If Orgel is defining either "complex" or "specified" in a fundamentally different way than Dembski, then no, he didn't just quantify their concept. And to his credit, I don't think Dembski has ever claimed that--I think he was relatively clear that he is substantially transforming what Orgel (at least) was talking about.

But here at UD, it seems like the party line is that Orgel and Dembski are using the same concept of "specified complexity." BA claims they're "exactly" the same, although you've been somewhat more circumspect. But Dembski is talking about probability and a priori specification, whereas neither seems to be true of Orgel. Asking why we should think Dembski and Orgel are talking about the same concept of complexity has gotten me insulted, talked down to, and ignored. The question stands. It's not the most important question in the world, but at some point I think we can infer that there isn't any good reason to think the two concepts are the same.

My assumption is that at some point, BA assumed they were the same concept based on the same Orgel excerpts we've all read online. (There's nothing wrong with forming an opinion based on such excerpts--I do it, and so do you, gentle reader, especially when the source text isn't readily available.) And although he can't support the position, because he treats conversations like zero-sum competitions, he doesn't want to be seen as reconsidering either. My armchair psychology is worth what you paid for it, maybe less.

But it's not a competition. It's just a conversation. If there's some reason to think that Orgel and Dembski are both thinking probabilistically, I'd like to know what it is. Short of an actual explanation, I'm going to assume it's just become one of those shibboleths that can't be questioned without incurring UD's version of hospitality and charity. Learned Hand
Following some links, I find that Joe Felsenstein, a professor of population genetics has something to say about the relevance of various arguments at Uncommon descent to reality. Alicia Renard
kairosfocus:
Maybe you may wish to look at 49 above
Thanks, but 49 just underscores the difference. For example, it says,
This, leading to something that was not merely random, but was not merely repetitive order either.
But in Dembski's concept of specified complexity, it doesn't matter if something is merely repetitive. It can still be complex. Also, you quote Dembski:
Specified complexity, as I develop it, is a subtle notion that incorporates five main ingredients: (1) a probabilistic version of complexity applicable to events; (2) conditionally independent patterns; (3) probabilistic resources, which come in two forms, replicational and specificational; (4) a specificational version of complexity applicable to patterns; and (5) a universal probability bound.
So in order for Orgel's "specified complexity" to be "precisely the same concept" as Dembski's, as Barry claims, it would have to also incorporate those five ingredients. Can you provide any evidence that it does? R0bb
We do not have enough resources to address every error. Hence, you should never infer anything from silence. The plain point is that evolution as proposed is not a purely random process; thus attempts to refute the concept that do not address the non-random, iterative and successive inputs of variation tested and sorted by selection are wasting effort that might otherwise be employed in identifying problems with the theory of evolution. That first step on the path could then be followed by an attempt to generate some kind of design hypothesis. That second step could be followed by suggesting some ways of testing that design hypothesis. Alicia Renard
Joe #58: "LoL! keith s is an insult to humanity." Joe #59: "And another lie." Joe #60: " Obviously you are just gullible and will believe anything that you think supports unguided evolution." Joe #71: " And we have proven that our opponents lie." And even after Gordon admonishes Joe, Joe's non-apology is: "Apologies kairosfocus but sometimes the truth hurts." Obviously UD is not serious about fairly enforcing a requirement for civil conduct on this site. ID opponents get banned for doing nothing more than disagreeing with Barry or Gordon, yet Joe has berated, insulted and called people liars for years, and he is never banned. And please don't give us the crap about Joe being provoked. Many others are provoked and don't respond in this fashion. Barry has already given Joe his last warning. Based on Joe's recent behaviour, if Joe retains comment privileges, Barry is either a liar or a hypocrite. centrestream
Barry Arrington Adapa @ 85. We do not have enough resources to address every error. Hence, you should never infer anything from silence OK, that's a fair enough point from you but what about the rest of the UD posters? Adapa
kairosfocus Joe, language and tone. KF when will you quit tut tutting Joe and finally do something about his constant belligerent and unprovoked insults? Barry already has Joe on probation for the same reason. Either fish or cut bait. Adapa
Adapa @ 85. We do not have enough resources to address every error. Hence, you should never infer anything from silence. Barry Arrington
KF: Sparse blind search is not a good strategy to find isolated islands of function in that space, one dominated utterly by bit strings in no particular order or organisation and of near 50:50 distribution. Let's continue with an example related to reality. How common is functionality in proteins? Of all possible proteins (as there is no upper limit to the length of a protein sequence this is infinite - the longest known sequence in living organisms being titin variants that approach 35,000 residues). Interestingly, there seems to be an observed increase in the length of proteins found in Archaea, bacteria and eukaryotes that, in itself suggests that new proteins do not arrive in living organisms by random assembly. Titin contains two sub-domains, one of 100 residues - the other of 80, each repeated over a 100 times which again suggests an evolutionary path of adding subunits. Of course you can move this back to an issue of abiogenesis and ask where did the first self-sustaining self-replicators come from but this does not appear to be your argument when talking about "islands of function" with regard to protein sequences. To repeat what others have said, the evolutionary process of small variations arising and being sifted by a differential reproduction process is what you need to model if you want to demonstrate that such a process is not viable or adequate to explain life's diversification from life's common ancestor. Alicia Renard
Interesting that there's been no comment on the observation that the OP "dealt a royal straight flush" poker example is fundamentally wrong when used as an analogy for long term iterative feedback processes like evolution. Interesting but not surprising. Adapa
Joe ID uses Orgel’s as we see functionality and say there is a specification. That's a post hoc generated specification. It's perfectly fine for Orgel's use but it fails your ID requirement. Even your own UD source says the specification must be made beforehand.
The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex), it must also conform to an independently given, detachable pattern. Specification is like drawing a target on a wall and then shooting the arrow. Without the specification criterion, we’d be shooting the arrow and then drawing the target around it after the fact.
You're drawing the target around the arrow, exactly what Dembski says you can't do in his version. Adapa
4 Joe November 16, 2014 at 7:03 pm what are the blind watchmaker research programs?
Bullseye. I guess I should stop waiting for an answer from Daniel King. cantor
Robb: Start at OOL. Then, in Darwin's warm salty pond or the like, show us how the specified complexity of the living cell does not make it highly improbable to arrive at cell based life. Then, having shown us on paper, do it on the ground. While at it, explain why we do not routinely hear of spontaneous generation of novel life in soup cans on shelves, etc. KF kairosfocus
Robb: We can all examine the matter and follow the discussion to see how we have a basic observable (that is a commonplace) functionally specific complex organisation and associated information. Orgel & Wicken discussed it in qualitative, observational terms. WmAD sought to develop a metric model, pivoting on a dual to the information content. So it is not just his word, we can work it out. I find it convenient to work in info terms given the empirical access that gives. KF PS: Maybe you may wish to look at 49 above: https://uncommondesc.wpengine.com/intelligent-design/back-to-basics/#comment-529077 kairosfocus
F/N: On Islands of function in the AA sequence space, please see the excerpts from Axe and onward his paper: https://uncommondesc.wpengine.com/intelligent-design/id-foundations/axe-on-specific-barriers-to-macro-level-darwinian-evolution-due-to-protein-formation-and-linked-islands-of-specific-function/ KF kairosfocus
Barry:
For those interested in the the relation between Dembski’s work on CSI and Orgel’s statement, you can see Dembski’s paper that quotes Orgel here.
That quote is one of Dembski's many statements in which he draws some kind of connection between his and Orgel's usage of the term "specified complexity". But even if you interpret Dembski as saying that he and Orgel use the term to indicate "precisely the same concept", as you have claimed, it's not enough to simply ask us to take Dembski's word for it. What you need to show is that when Orgel says "complexity", he actually means "improbability under non-design hypotheses" as Dembski does. I see no evidence of that. R0bb
Barry:
You don’t seem to understand the comment to which you link. Read it again, and if you still don’t understand it I will explain it.
Okay, I've read it again. I would appreciate it if you would correct my misunderstanding. R0bb
Alicia I can agree to disagree, but it is impossible to be civil when lies are repeated..... How do others do it? How do they continually see past the lies and keep their patience? I would say hat tip to those with the patience of Job! Andre
AR, please go to a tackle shop and inspect an Abu 6500 C3 reel. Look at its exploded view diagram. Ask the sales clerk what would happen if you were to ignore the diagram in re-assembling the reel and jumble up the parts. "Islands of function," whether you wish to accept it or not, describes a well known phenomenon that is also manifest in text, programs and in the molecular machines of life. Selective hyperskepticism is leading you into obtuseness that frustrates seeing patent facts for what they are. KF kairosfocus
AIGuy. I'd like to personally invite you to the "An attempt at computing dFSCI for English language" thread. I promise you you won't be disappointed. peace fifthmonarchyman
Apologies kairosfocus but sometimes the truth hurts Joe
AR: The informational threshold can be approached in many ways but the simple conservative view taken is for the sol system, of 10^57 atoms for 10^17 s with fast ionic rxn rates as a measure of how fast atomic changes may happen. 10^87 acts and observations of 10^57 atoms in 10^17 s is reasonable, and in fact quite generous. But on assessing the ratio of possible observations of 500 coins observed flipped and read by each atom for that time, we find that the fraction of sampled to possible states for 500 bits is comparable to one straw taken at random from a cubical haystack about as thick as our galaxy (some 100's of LY thick). Sparse blind search is not a good strategy to find isolated islands of function in that space, one dominated utterly by bit strings in no particular order or organisation and of near 50:50 distribution. Push the limit to the observed cosmos and you are looking at about 10^111 observations . . . and to be even more generous take 1,000 bits. The straw to cube ratio in that case would swallow up our observed cosmos of about 90 bn LY across. So, for test purposes, it is reasonable to see that range, 500 - 1,000 bits worth of possibilities, as a limit for what blind search at fast atomic action rates could reasonably do. No, the limits are not arbitrary and self-serving. KF kairosfocus
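Taking KF's figures at face value, the comparison he describes is quick to check in Python: 10^87 possible observations against the 2^500 configurations of 500 bits.

```python
from math import log10

# KF's stated budget: ~10^57 atoms x ~10^17 s at fast interaction
# rates, giving roughly 10^87 possible observations.
observations = 10 ** 87

# Configuration space of 500 bits.
configs_500 = 2 ** 500        # ≈ 3.27e150 states

fraction_sampled = observations / configs_500
print(log10(configs_500))     # ≈ 150.5
print(fraction_sampled)       # ≈ 3e-64
```

On these figures the sampled fraction is about 3 × 10^-64 of the space, which is the "one straw from a galaxy-thick haystack" comparison in the comment; whether that framing is the right model of evolutionary search is exactly what the thread disputes.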
Alicia Renard, Andre meant exactly what he posted. When we prove people are lying then that is much more than a mere disagreement. And we have proven that our opponents lie. Joe
Joe, language and tone. KF kairosfocus
KF: "Taking neg logs does not increase our ALGEBRAIC knowledge, but may open our eyes to recognise what we deal with, info beyond a threshold." It may help visualise, and also manipulate, large numbers and large variances. It changes nothing about where you decide to claim a threshold. Alicia Renard
Andre: It is impossible to stay courteous and civil when people are deliberate in their lies……. How do others do it? Don't you mean to say "it is impossible to be civil to people I disagree with"? Don't you think it is uncivil to automatically brand anyone you disagree with as a "liar"? Don't you think that branding those you disagree with as "liars" excludes you from civil discourse, which is the likely reason you get ignored? (It may also be your inability to comprehend the processes you refer to as "PCD", but that's for others to confirm.) Alicia Renard
AR: I notice what is rapidly becoming a stock selectively hyperskeptical dismissive argument, which is a strawman tactic that should be withdrawn. Doubtless, you learned it from those you look to, so you are not blameworthy. However it needs correction for the record. Taking neg logs does not increase our ALGEBRAIC knowledge, but may open our eyes to recognise what we deal with, info beyond a threshold. And by doing an in effect dual, we are in a zone that accesses empirical metrics of information that allow us to ground ourselves in observational reality: studying info storage, evaluating vulnerability of functional configs to random perturbation, and observing redundancies, variable frequencies and the like. That is a very important and commonplace move in modelling, engineering, systems and physics etc. Last week, I pointed out that Laplace and Z transforms and graphs tied to such do that service for systems. (For years I lived more in complex frequency than time domains.) That is why the objection is beside the point and even strawmannish and selectively hyperskeptical. KF kairosfocus
Thank you Andre- well said. Joe
It is impossible to stay courteous and civil when people are deliberate in their lies....... How do others do it? Andre
centrestream:
Surely an agent that can manipulate nature to deal with new or difficult situations could just as easily manipulate nature to prevent new or difficult situations.
So accidents never happen? Really? Joe
Joe at 5: "Due to an agency that can utilize knowledge to manipulate nature for a purpose or to deal with new or difficult situations." Surely an agent that can manipulate nature to deal with new or difficult situations could just as easily manipulate nature to prevent new or difficult situations. Keiths #12: "But it’s also worth noting that in a follow-up thread, Barry scoffed when I told him..." Is this the same moderator who said that scoffing is a poor form of argumentation? centrestream
Notice that specification is defined as an a priori description of a system, not a post hoc one. Orgel’s “specification” is a post hoc description of the formation of polypeptides from DNA.
LoL! ID uses Orgel's as we see functionality and say there is a specification. Joe
RDFish- YOU don't get to erect a strawman and make IDists follow it. Joe
Just demonstrate the protein can arise via unguided processes and be done with it. Adapa:
No, it hasn't.
See Joe Thornton’s work on ancestral protein reconstruction.
Seen it and it has nothing to do with unguided evolution producing proteins. Obviously you are just gullible and will believe anything that you think supports unguided evolution. Joe
It’s obvious from reading about Orgel’s collaborations with Stanley Miller among others that Orgel’s “specified complexity” has nothing in common with Dembski’s usurping of the term.
And another lie. It's obvious that the NCSE is just a bunch of desperate fools. Joe
keith s:
I don’t mind them, and they are as ineffective as insults from Mung or Joe.
LoL! keith s is an insult to humanity. All I do is point that out. Joe
So how can you support the assertion about "islands of function"?
Observation. That is, we observe protein function isolation. How can someone test the claim that unguided evolution can produce functional proteins? Joe
"To make a design inference based on mere probability alone is fallacious." Agreed. But that is the only argument that you have. Even the IC argument is based on probability. That was the entire argument behind Behe's fallacious extrapolation from the rarity of chloroquine resistance. centrestream
keith s:
Your error is this: You fail to recognize that in order to establish that something exhibits 500 bits of CSI, you have to calculate P(T|H), the probability that it came about by “Darwinian and other material mechanisms”, as Dembski put it. P(T|H) is right there in Dembski’s equation.
Wrong. That equation has to do with SPECIFICATION, not calculating CSI. And it is still up to you to provide H. You can't, so you lose, as usual.
Again: You cannot establish that something exhibits 500 bits of CSI unless you consider the relevant ‘chance hypotheses’.
Again that is total nonsense and doesn't follow from anything Dembski has written. Joe
Kairosfocus: In short, islands of function based on highly specific complex organisation of interacting parts, in wider config spaces that are overwhelmingly non functional, are a fact. You claim a "fact", something that can only be established by producing novel, say, protein sequences and testing for some function. Nobody disputes there are theoretically limitless possibilities of linear amino-acid sequences. For example, one essential requirement for establishing enzymatic activity for a protein is solubility in water of appropriate pH and osmotic concentration. How many as-yet theoretical sequences are soluble? Nobody knows. Can we say anything about the possible properties of sequences undiscovered in nature? Not with any reliability. So how can you support the assertion about "islands of function"? Alicia Renard
Kairosfocus: And of course the WmAD metric boils down to a functionally specific info beyond a relevant threshold metric. That is why it can be log reduced and set in terms that are informational and amenable to information measuring approaches. But if I produce a graph of log x, I end up with a curved line that I can use as a converter between x and log x. The results are unique and reversible. Taking logs adds nothing to our knowledge of x. Please, can you explain what you think you are achieving by calculating a logarithm? Not rocket science. No, it's basic math! Alicia Renard
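The point under dispute here, what taking a negative log achieves, can be shown in a few lines of Python: the transform is indeed invertible and adds no algebraic information, as Alicia Renard says, but it re-expresses a probability as an information measure in bits, which is the "dual" KF appeals to. (A generic sketch; the 500-bit threshold value is illustrative, taken from this thread.)

```python
from math import log2

def bits(p: float) -> float:
    """Shannon self-information: the probability p re-expressed in bits."""
    return -log2(p)

p = 2.0**-500                 # probability of one specific 500-coin sequence
print(bits(p))                # 500.0 -- the same fact, stated in bits

# The mapping is reversible, so no algebraic knowledge is gained or lost:
assert 2.0**-bits(p) == p

# But the log form makes threshold comparisons direct:
threshold_bits = 500          # illustrative threshold from this thread
print(bits(p) >= threshold_bits)   # True
```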
The pettifogging and absolute silliness pouring from RDF and Learned Hand is dizzying. TSErik
KS, I think there is a reasonable goal of civil, on topic discussion. Why not focus on the merits of fact, reasoning and alternative frameworks of thought? Where too, two years ago you refused to take up the root and branch warrant for the evolutionary materialist tree of life. But in fact had you successfully done such, this site and the underpinnings of design theory would have collapsed two years ago if the blind watchmaker thesis in some form had actually been warranted on the merits. The offer to host the 6,000 word essay is still open. Where, the dog that will not bark may be telling us something. KF kairosfocus
RDF, you made a clever rhetorical quip but have not dealt with the issue, of your fallacy of the complex, loaded question. I repeat, design is a process, intelligently directed configuration. As just described, that process often leaves empirical markers, FSCO/I being most relevant. KF kairosfocus
Folks, I suggest a pause to address the core issue instead of reading in by polarisation that ends up erecting strawmen. Orgel and Wicken across the 70's recognised that a common engineering phenomenon was present in the living cell and onward in life. Namely, functionally specific complex organisation that achieves functionality by interaction of correct parts assembled and coupled together per a wiring diagram, as Wicken termed it. This, leading to something that was not merely random, but was not merely repetitive order either. Orgel contrasted crystals and random mixes of mineral crystals in granite or the like to the specified complexity he was seeing. Wicken spoke of wiring diagrams, and pointed out that such is informational. Let us notice:
ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.] WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
The source of my descriptive summary term, functionally specific complex organisation and/or associated information (FSCO/I) should be obvious. Save, to the excessively polarised and selectively hyperskeptical. If something is informational, according to a generalised wiring diagram, then it should be amenable to net listing and representation, similar to circuit diagrams, wiring diagrams, instrumentation and piping diagrams, exploded view assembly diagrams, process-flow integrated charts and the like. Stuff that AutoCAD etc routinely reduce to files, in the end on the principle of structured chains of y/n q's specifying configuration from the field of possibilities. That already yields an info content in bits that is a valid info metric. Beyond, studies of redundancies and statistical patterns may compress somewhat but the aperiodic (non-crystalline repetition) arrangement and coupling in wiring diagrams will resist high levels of compression. As Trevors and Abel pointed out long ago. What WmAD worked on was metrics of quantification that would allow us to distinguish based on observable characteristics and induction on a body of experience, cases where there was good reason to infer design as cause. Right after the passage BA cited in the linked paper, Dembski highlighted five key points:
Orgel and Davies used specified complexity loosely. [--> I suggest, qualitatively would have been better phrasing] I’ve formalized it as a statistical criterion for identifying the effects of intelligence. Specified complexity, as I develop it, is a subtle notion that incorporates five main ingredients: (1) a probabilistic version of complexity applicable to events; (2) conditionally independent patterns; (3) probabilistic resources, which come in two forms, replicational and specificational; (4) a specificational version of complexity applicable to patterns; and (5) a universal probability bound.
Boiling down in light of the 2005 metric model, specification is generalised on the original focal matter, functionality. Dembski notes that in the biological world, it is cased out as functionality . . . obviously, that based on interactive parts working in a wiring diagram pattern. That's a touchstone test that is also the main field in view for scientific issues. Complexity is best first viewed in light of that structured y/n q list that gives the state of the config observed or planned in the context of other possible clumped or scattered configs. That gets us into config spaces, with lists of parts, standard axes for parts, co-ord axes, xyz the most convenient with ox the polar axis and a reference origin. Parts then may be located at xyz co-ords and their local axes aligned relative to ox per yaw pitch roll. Couplings can be identified. And so forth. Tedious, but that is how there is a place for everything and everything in its place. You do not want to "let the smoke out." Compression techniques may reduce, but not to the extent that one says: construct unit crystal, replicate till materials are used up. Or actually, allow crystallisation forces to do so. Wiring diagrams are not at random, if function is to emerge. Nor are they generally merely orderly. Organisation that is functionally specific and complex, thence informational, is an apt description. From this one can assess the likelihood of hitting on such by the equivalent of putting 6500 C3 parts in a bag and shaking up. That can work for some fairly simple things. But as complexity rises, less and less likely. 500 - 1,000 bits is a threshold set off sol system or observable cosmos atomic resources. That is where probability enters, but that is also connected to information, which is measured on log of inverse probabilities as well as by direct y/n q counts and statistical studies. All of which can be connected.
And of course the WmAD metric boils down to a functionally specific info beyond a relevant threshold metric. That is why it can be log reduced and set in terms that are informational and amenable to information measuring approaches. Not rocket science. But, if you are determined to find objections, read to object, exert selective hyperskepticism, ignore the wider context of common engineering phenomena, and generally hold "IDiots" in hostile contempt and suspicion, you will predictably refuse to acknowledge the general reasonableness of such an approach. That is why I have now taken to holding up the exploded diagram of a 6500 C3 reel. Not a 747 jumbo jet. Not an instrument on its panel. Not a clock on its panel. Not even a watch. A very simple product made by a company that started out making watches and taxi meters. Look at the wiring diagram. Ask yourself whether any or nearly any clumped config of parts would work as a reel. Ask whether shaking parts up in a bag or the like would be likely to discover working configs. Or, scattering parts. In short, islands of function based on highly specific complex organisation of interacting parts, in wider config spaces that are overwhelmingly non functional, are a fact. Yes, tolerances and variations exist so there is no one point, and the island is a range of possibilities T in the wider -- much wider -- space W. Which is a multidimensional space where configs may have many close neighbourhood points etc etc etc. None of that changes the basic fact. And, as Paley put on the table in 1804 for watches, we can in principle have a self replicating reel with internalised blue print. Down that road lies the von Neumann Self Replicator, vNSR. That additionality would INCREASE the FSCO/I and would be further reason to infer design of the reel. Then, we can extend such phenomena to molecular scale and observe the cell. 
FSCO/I a-plenty, wiring networks everywhere, vNSR implemented, it uses digitally stored algorithmic codes to assemble proteins and more. That is why I keep pointing to OOL as the first context to be examined. It is foundational. It is easy to see that atomic resources of sol system and of observed cosmos are not enough to get a reasonable likelihood of blind watchmaker processes getting us to a cell. The FSCO/I strongly points to design. That is transformational, as we see here the collapse of the blind watchmaker designer mimic project. Beyond the OOL, we see a tree of life. Built in niche adaptability and robustness are not at issue; body plan origin is. FSCO/I again, where when mutation patterns, the need for 10 - 100+ mn bases of fresh DNA to address cell types, tissues, organs, regulatory expression from embryonic stages on, and integrated body plan systems all come together, we have no good observational-evidence-founded reason to exclude the obvious empirically warranted source of such. Intelligently directed configuration, aka design. KF kairosfocus
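KF's "structured chains of y/n questions" measure can be made concrete with a toy model. The Python sketch below is deliberately oversimplified (hypothetical part counts, and it counts only orderings of distinct parts, ignoring positions, orientations and couplings), but it shows how quickly log2 of the number of configurations grows with part count:

```python
from math import lgamma, log

def arrangement_bits(n_parts: int) -> float:
    """Yes/no questions (bits) needed to single out one ordering of
    n distinct parts: log2(n!), computed via lgamma to avoid overflow."""
    return lgamma(n_parts + 1) / log(2)

# Hypothetical part counts, for illustration only:
print(arrangement_bits(60))   # roughly 272 bits
print(arrangement_bits(97))   # roughly 505 bits -- past the 500-bit mark
```

Under this toy model, fewer than a hundred distinct parts already push the specification past the 500-bit threshold discussed in the thread.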
Keith S, let's please stop with the nonsense of Darwinian mechanisms. What are Darwinian mechanisms? Natural selection? Are you aware that Darwin based this on what he observed about artificial selection? He reckoned that if we can do it in a guided way then nature can also do so! It is false, and the collapse of the Galapagos finches' evolution proves this: these creatures adapt to their environment and then always revert back when the pressures no longer apply..... http://www.jstor.org/stable/10.1086/674899 Random mutations? There is no such thing as random mutations....... http://www.ncbi.nlm.nih.gov/pubmed/22522932 How about that? We see design even in your supposed Darwinian framework....... Risk management strategy. Chew on it, Keith S. Andre
Ugh, no, it was right the first time. I'd better go to bed, jetlag only makes you feel awake. Learned Hand
If he’s ever specifically said that he thinks crystals are complex, I don’t recall it.
Sorry, I meant aren't complex. Learned Hand
Barry,
Also, you are a guest on this website. You should make an effort at being polite to your host. Unprovoked insults are bad manners.
As all long-term (and even most short-term) readers know, you routinely insult your guests. Please spare us the hypocritical double standard. Just to be clear: I'm not complaining about your insults. I don't mind them, and they are as ineffective as insults from Mung or Joe. What I do object to is the double standard. If you're going to complain about insults, then ban yourself. If you're unwilling to ban yourself, then don't complain about others who behave better than you do. keith s
I am familiar with Dembski stating that crystals do NOT exhibit CSI. Understanding Intelligent Design: Everything You Need to Know in Plain Language, pp. 105-106 (Harvest House, 2008).
I'm not familiar with that work, and haven't read it. I have read No Free Lunch, but I'm not going to look for my copy right now so I'll point to Shallit and Elsberry instead, since they cite what I'm thinking of. They point out in more detail why Dembski considers crystals not to be designed. It's because there's a "physical necessity" cause, not because they aren't "complex." So I don't think this establishes that Dembski doesn't consider crystals "complex." (Having written that, I think that your citation comes from the paragraph Luskin excerpted, and which can helpfully be found in the top few results when googling "Dembski crystals complex". He seems to be saying the same thing there that Shallit and Elsberry were attributing to him.) If he's ever specifically said that he thinks crystals are complex, I don't recall it. I infer that from his position that highly-ordered things can be "complex," like the Kubrick Monolith or five thousand coins landing heads-up in a row or the Caputo sequence. That could be wrong; it's possible that he would say crystals aren't complex because nothing originating from such a physical necessity can be complex. Is that your take? Please do, if you have the time, share why you think Orgel and Dembski use the term "complexity" in exactly the same way. Is it because Orgel defines complexity somewhere in a way that comports with Dembski's use? Where? I've asked several times why you think this; you seem extremely reluctant to explain. Learned Hand
Barry:
keiths: “And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis” You don’t seem to understand the comment to which you link. Read it again, and if you still don’t understand it I will explain it.
Pay attention, Barry. That comment was from R0bb, not me. However, looking at the linked thread, I can see your error, and it is probably the same one that R0bb spotted. Your error is this: You fail to recognize that in order to establish that something exhibits 500 bits of CSI, you have to calculate P(T|H), the probability that it came about by "Darwinian and other material mechanisms", as Dembski put it. P(T|H) is right there in Dembski's equation. Are you familiar with the equation, and do you understand it? In order to calculate P(T|H), you have to consider all relevant 'Darwinian and material mechanisms' leading to the phenomenon in question. In other words, all relevant "chance hypotheses", to use Dembski's inapt terminology. Again: You cannot establish that something exhibits 500 bits of CSI unless you consider the relevant 'chance hypotheses'. Dembski's (and KF's, and gpuccio's) eternal problem is that they cannot calculate P(T|H) for biological phenomena. Since they can't do that, they can't demonstrate design. keith s
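For readers following this exchange, the equation keith s invokes is the one from Dembski's 2005 paper "Specification: The Pattern That Signifies Intelligence," as it is commonly quoted (readers should verify against the original):

```latex
\chi = -\log_2\!\left[\, 10^{120} \cdot \varphi_S(T) \cdot P(T \mid H) \,\right]
```

Here 10^120 is Seth Lloyd's estimate of the maximal number of elementary bit operations in the observable universe, φ_S(T) counts the patterns at least as simple as the target T (the specificational resources), and P(T|H) is the probability of T under the chance hypothesis H; Dembski infers design when χ > 1. The dispute in this thread is over what hypotheses H must range over, not over whether the P(T|H) term appears.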
I still don't understand how materialists can think or believe that examples like stability control mechanisms, feedback loops, networks, two-way information flow, and redundancy can ever be the result of chance or necessity. The mind truly boggles at how they ignore these problems for their worldview. Things don't appear to be designed, they are designed, and we recognise the engineering principles at play here. Andre
Here is a definition of "specified cmplexity" based on Dembski's work that was posted on UD:
Specified complexity consists of two important components, both of which are essential for making reliable design inferences. The first component is the criterion of complexity or improbability. In order for an event to meet the standards of Dembski’s theoretical notion of specified complexity, the probability of its happening must be lower than the Universal Probability Bound which Dembski sets at one chance in 10^150 possibilities. The second component in the notion of specified complexity is the criterion of specificity. The idea behind specificity is that not only must an event be unlikely (complex), it must also conform to an independently given, detachable pattern. Specification is like drawing a target on a wall and then shooting the arrow. Without the specification criterion, we’d be shooting the arrow and then drawing the target around it after the fact.
ID Foundations 4 Notice that specification is defined as an a priori description of a system, not a post hoc one. Orgel's "specification" is a post hoc description of the formation of polypeptides from DNA. The way Orgel and Dembski use the term is fundamentally different. Adapa
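For readers comparing the two components quoted above with the 500-bit figure used elsewhere in this thread: the 1-in-10^150 universal probability bound and the 500-bit threshold are essentially the same cutoff in different units, as a quick Python check shows (using only the numbers from the quote):

```python
from math import log2, log10

upb = 10.0**-150              # universal probability bound, per the quote

print(-log2(upb))             # about 498.3 bits -- just under 500
print(log10(2.0**500))        # about 150.5 -- 500 bits is ~1 in 10^150.5
```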
Learned Hand:
I don’t know. I only have the wiki entry on specified complexity to go on...
Yet even wikipedia got it right, and you didn't. http://en.wikipedia.org/wiki/Specified_complexity#Definition Mung
Adapa reminds me of keiths. If you don't have an argument, post a link to something else. Then pretend like you have an argument.
Leslie Orgel (1973) coined the now famous term “specified complexity” to distinguish between crystals, which are organized but not complex, and life, which is both organized and complex.
Which paper was that published in? Mung
Adapa @ 35. I made a substantive criticism of the NCSE paper. Do you care to take a stab at answering it? Barry Arrington
LH:
I’m not all that skeptical that Orgel and Dembski have the same basic definition of “specified.”
Good. It is pretty much the same concept. I am familiar with Dembski stating that crystals do NOT exhibit CSI. Understanding Intelligent Design: Everything You Need to Know in Plain Language, pp. 105-106 (Harvest House, 2008). I am curious why you think Dembski says crystals are complex. Barry Arrington
Learned Hand @ 15:
I am very doubtful that Arrington has read the Orgel paper, or can explain how they could possibly be describing the same concept of CSI given how different their concepts of complexity are.
And the basis of your doubt is? Learned Hand @ 23:
No, I haven’t read it. I’ve only read the same excerpts that are commonly cited, in which Orgel seems to define “complex” in the usual way (as in, comprised of varying and distinguished parts), as opposed to the Dembskian way (as in, extremely improbable).
You haven't read the source material, the book by Orgel, but you know enough about it to ascertain that Barry hasn't read it. How does that work? Mung
Barry Arrington Adapa, you cite NCSE as if it were a science source instead of a political propaganda organ. When you have no argument against the information provided just declare the source to be political propaganda and hand wave it away. Saves time in thinking. Adapa
Barry:
Mung, the fact that Learned Hand calls Orgel’s book a “paper” answers your question, no?
Not really, lol! The chutzpah! I know I'm ignorant, so you must be ignorant! I don't know I'm ignorant, so you must be ignorant! I think you're ignorant, so you must be ignorant! But thanks again for an OP that exposes the weaknesses in the ID argument. ID proponents must read the Orgel paper lest they be accused of having not read the Orgel paper! You haven't read the Orgel paper? Then you have no right to quote Orgel! QED. Mung
keiths: "And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis" You don't seem to understand the comment to which you link. Read it again, and if you still don't understand it I will explain it. Barry Arrington
Adapa, you cite NCSE as if it were a science source instead of a political propaganda organ. I am unimpressed. Do you seriously think "organized" is a synonym for "specified"? NCSE is in full spin mode here. Barry Arrington
BTW Learned, what do you think Orgel and Davies mean by the word “specified”?
I don't know. I only have the wiki entry on specified complexity to go on (and a few similarly curt excerpts). All I really can tell is that he defines crystals as "specified," but without knowing why exactly it's not a very helpful example. The excerpt I just looked up says that crystals are specified "because they consist of a very large number of identical molecules packed together in a uniform way." But I doubt that "specified" meant "large number of uniformly packed identical parts" to Orgel, since that wouldn't apply to life. I'm not all that skeptical that Orgel and Dembski have the same basic definition of "specified." I don't know if it's true or not, but I don't have any specific reason to think it's false. I do have a reason to think that it's false that he and Orgel are thinking of "complex" in the same way, given Dembski's highly non-standard definition of the term--especially given that he defines "complex" to include regular forms like crystals. Learned Hand
This is from the NCSE
The role of crystalline minerals in the origin of life was proposed by JD Bernal over forty years ago. Bernal, following Aharon Katchalsky, pointed out that the clay montmorillonite’s surface readily bound simple organic molecules (Bernal 1967). Most clays are plate- or lath-shaped micro-crystals made of silicon, oxygen, and aluminum, interspersed with other elements (commonly iron, calcium, or sodium) which can replace the major elements. These substituted metals alter the electric charge on the crystal’s surface, providing locations where organic molecules can attach. The structure of the clay crystal provides stability and organization essential for the origin of life (for example, Wang and Ferris 2005; Hanczyc and others 2003; Saladino and others 2002). Leslie Orgel (1973) coined the now famous term “specified complexity” to distinguish between crystals, which are organized but not complex, and life, which is both organized and complex. He was well aware then of the potential role of crystalline minerals in the origin of life. Twenty-five years later, Orgel demonstrated the thermodynamic favorability of polymer formation on grains of the mineral apatite, or hydroxylcalcium phosphate (see Ferris 2002 for a “reader-friendly” account). Why re-invent the crystal? It's obvious from reading about Orgel's collaborations with Stanley Miller among others that Orgel's "specified complexity" has nothing in common with Dembski's usurping of the term.
Mung, No, Arrington hasn't cited anything that Orgel wrote to support his claim, whether paper or book or eldritch papyrus, at least as far as I can remember. That's what I'm asking for, something to support his claim that Orgel and Dembski use the term "complex" in "exactly" the same way. Let's head off a miscommunication: I don't think that Dembski says that he and Orgel are using "complex" in exactly the same way. I think that Barry Arrington makes that claim. Learned Hand
Learned Hand, it wasn't a paper, it was a book. Accusing Barry of not having read the paper when you don't even know where the quote came from just makes you look like an ignorant fool. And basing an argument on the imagined contents of a paper that you haven't even read makes you look even more the ignorant fool. Mung
BTW Learned, what do you think Orgel and Davies mean by the word "specified"? Barry Arrington
Barry Arrington, the cited Dembski language merely says that Orgel used the phrase "specified complexity." Dembski has never, to my knowledge, claimed that Orgel used the term "complexity" in the same way that he himself does. I think you could read the implication into the cited language, but I also think he's careful not to say it outright. You make a much stronger claim: that Orgel "uses the terms complex and specified in exactly the sense Dembski uses the terms." Neither you nor Kairosfocus has provided any text that supports that very strong claim. Is it in the Orgel document? Can you please share with us where we can find that information? Or, if it's not in the Orgel monograph, why you believe that he uses the term "complex" in "exactly the sense Dembski" does? Learned Hand
Learned Hand, now it's your turn to try reading for comprehension. Read 18 again carefully, and maybe you'll be able to see the relationship Dembski believes Orgel's work bears to his own. Barry Arrington
Well gee "Learned Hand." I searched in vain in that link Barry supplied for a reference to "the Orgel paper" and came up empty. Maybe Barry is just bluffing. Maybe Dembski made it all up as a part of some grand Intelligent Design conspiracy. Maybe you are just confused and grasping at straws. But you've read "the Orgel paper," right? That's why you can be so confident that Barry hasn't read it? Mung
Mung, No, I haven't read it. I've only read the same excerpts that are commonly cited, in which Orgel seems to define "complex" in the usual way (as in, comprised of varying and distinguished parts), as opposed to the Dembskian way (as in, extremely improbable). I've also asked, repeatedly, why Arrington (and to a lesser extent, KF) claim that Orgel and Dembski are using the same concept, when the commonly-cited excerpts make it clear that they aren't. They don't appear to be able or willing to answer the question, as you can see from Arrington's characteristically snippy but unresponsive comment. I think it's something they assumed to be true once upon a time, and having assumed it, they do not seem to be willing to consider the fact that they might be wrong. Learned Hand
keith, try reading the excerpt I provided from Dembski's paper again. This time try reading for comprehension and you will answer your own question. After all, Dembski spells out the relation between his work and Orgel's prior work right there in the excerpt. Also, you are a guest on this website. You should make an effort at being polite to your host. Unprovoked insults are bad manners. Barry Arrington
Mung, the fact that Learned Hand calls Orgel's book a "paper" answers your question, no? Barry Arrington
Barry, You can't seriously believe that merely because Dembski quotes Orgel, it means that their concepts of specified complexity are the same. Can you? That would be foolish even by Arringtonian standards. keith s
Learned Hand:
I am very doubtful that Arrington has read the Orgel paper, or can explain how they could possibly be describing the same concept of CSI given how different their concepts of complexity are.
The Orgel paper? So you've read it? And you can provide a link to it so Barry can read it, if he hasn't already? I don't think Barry has anything to fear from someone who criticized him for not having read "the Orgel paper" when the critic hasn't read "the Orgel paper." Mung
For those interested in the relation between Dembski's work on CSI and Orgel's statement, you can see Dembski's paper that quotes Orgel here. Excerpt:
The term specified complexity is about thirty years old. To my knowledge origin-of-life researcher Leslie Orgel was the first to use it. In his 1973 book The Origins of Life he wrote: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity” (189). More recently, Paul Davies (1999, 112) identified specified complexity as the key to resolving the problem of life’s origin: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.” Neither Orgel nor Davies, however, provided a precise analytic account of specified complexity. I provide such an account in The Design Inference (1998b) and its sequel No Free Lunch (2002). In this section I want briefly to outline my work on specified complexity. Orgel and Davies used specified complexity loosely. I’ve formalized it as a statistical criterion for identifying the effects of intelligence.
Barry Arrington
Barry Arrington: "But Joe is not asking for your statement of faith. He is asking for actual evidence. There is none." (shrug) I can't make anyone visit the Thornton Lab site and read the many papers on reconstructing protein histories. Nor can I copy all of that work here. For those unwilling to educate themselves, the evidence will remain invisible. That's their loss. Adapa
Although you should be warned about how the moderator of this board has responded to someone else who pointed out that Orgel and Dembski are talking about two different concepts.
The moderator made the same claim to me not long ago. I believe I asked for support for the claim at the time; I know I asked kairosfocus several times. (Although, to be fair, kf is more careful than Arrington is; he only says that Orgel's and Dembski's concepts are connected, not that they are the same.) I am very doubtful that Arrington has read the Orgel paper, or can explain how they could possibly be describing the same concept of CSI given how different their concepts of complexity are. Learned Hand
as to Thornton's work:

Severe Limits to Darwinian Evolution - Michael Behe - Oct. 2009
Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on... Thanks to Thornton's impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed.
http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html

Wheel of Fortune: New Work by Thornton's Group Supports Time-Asymmetric Dollo's Law - Michael Behe - October 5, 2011
Excerpt: Darwinian selection will fit a protein to its current task as tightly as it can. In the process, it makes it extremely difficult to adapt to a new task or revert to an old task by random mutation plus selection.
http://www.evolutionnews.org/2011/10/wheel_of_fortune_new_work_by_t051621.html

From Thornton's Lab, More Strong Experimental Support for a Limit to Darwinian Evolution - Michael Behe - June 23, 2014
Excerpt: In prior comments on Thornton's work I proposed something I dubbed a "Time-Symmetric Dollo's Law" (TSDL). Briefly that means, because natural selection hones a protein to its present job (not to some putative future or past function), it will be very difficult to change a protein's current function to another one by random mutation plus natural selection. But there was an unexamined factor that might have complicated Thornton's work and called the TSDL into question. What if there were a great many potential neutral mutations that could have led to the second protein? The modern protein that occurs in land vertebrates has very particular neutral changes that allowed it to acquire its present function, but perhaps that was an historical accident. Perhaps any of a large number of evolutionary alterations could have done the same job, and the particular changes that occurred historically weren't all that special.
That's the question Thornton's group examined in their current paper. Using clever experimental techniques they tested thousands of possible alternative mutations. The bottom line is that none of them could take the place of the actual, historical, neutral mutations. The paper's conclusion is that, of the very large number of paths that random evolution could have taken, at best only extremely rare ones could lead to the functional modern protein.
http://www.evolutionnews.org/2014/06/more_strong_exp087061.html

podcast - Michael Behe: The Limit in the Evolution of Proteins (Thornton's 2014 paper)
http://intelligentdesign.podomatic.com/entry/2014-07-09T16_35_28-07_00

bornagain77
Which of the following best describes what the term “design” is intended to mean here?
Joe: Dodge, Fail. KF: Dodge, Fail. You complain when others dodge your questions - why can't you answer mine? Surely you can say which of these statements are entailed by the term "design" and which are not?

1) Not due to chance and/or necessity
2) Due to the actions of an entity or process that may or may not be conscious
3) Due to the actions of a conscious entity
4) Due to an entity that has some of the same mental abilities as humans but not necessarily all of them
5) Due to an entity that has all of the mental abilities of normal human beings, including the ability to learn and use natural language

Cheers, RDFish/AIGuy RDFish
This comment of R0bb's deserves a reposting here:
Winston:
You can certainly have a notion of specified complexity that is observable, like Orgel and Wicken did. But care must be taken not to conflate it with Dembski’s conception.
Thank you. Although you should be warned about how the moderator of this board has responded to someone else who pointed out that Orgel and Dembski are talking about two different concepts. Barry:
Mathgrrl, I will tell you what is ridiculous: Your attempt to convince people that Orgel and Dembski are talking about two different concepts, when that is plainly false. Like the Wizard of Oz you can tell people “don’t look behind that curtain” until you are blue in the face. But I’ve looked behind your curtain, and there is nothing there but a blustering old man. I will not retract an obviously true statement no matter how much you huff. You’ve been found out. Deal with it.
But it’s also worth noting that in a follow-up thread, Barry scoffed when I told him that Dembski’s examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and narrowband signals. And in a recent thread, he claimed that CSI can be assessed without a chance hypothesis. So the board moderator, who has been “studying the origins issue for 22 years”, doesn’t understand what Dembski means by CSI. Which means that if you want to clean up the CSI mess, you have an uphill battle ahead of you.
keith s
Joe: "Adapa- Just demonstrate the protein can arise via unguided processes and be done with it." Already been done. See Joe Thornton's work on ancestral protein reconstruction. Much of it is online and available from his lab's web site. Thornton Lab: molecular mechanisms of evolution Adapa
Adapa- Just demonstrate the protein can arise via unguided processes and be done with it. Oops, you can't account for life. Joe
The OP and the poker example only serve to highlight the category error ID makes when it tries to use the "this protein is too improbable to have formed naturally" argument. Science knows that proteins weren't formed "as is" in a one-time process, the way the royal flush in the example was dealt in a single hand. Rather, extant proteins are the result of an iterative process involving feedback that has been running for over 3.5 billion years. To make the poker analogy relevant you'd have to posit a game of draw poker where each player is allowed to discard, reshuffle the deck, and redraw until he is satisfied, up to 3 billion times. With those rules, "functionally specified" royal flushes would be commonplace. Sorry, but you can't calculate the probability of a result in a long-term iterative feedback process like evolution by taking a one-time snapshot of the current state. You just can't. You have to take into account the history and activity of the process. Adapa
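The iterative draw-poker variant the commenter describes can be sketched in a few lines of Python. This is only an illustration of the analogy as stated (keep any cards matching the target, reshuffle, and redraw the rest), not a model of evolution; the function and variable names are mine.

```python
import random

# Illustrative sketch of the comment's analogy: a draw-poker variant in
# which the player keeps any cards matching a target hand and redraws
# the rest from a reshuffled deck. Names here are for illustration only.

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]
TARGET = {"Ts", "Js", "Qs", "Ks", "As"}  # royal flush in spades

def draws_until_target(rng):
    """Count redraw rounds until the target hand is fully assembled."""
    held = set()
    rounds = 0
    while held != TARGET:
        rounds += 1
        pool = [c for c in DECK if c not in held]
        rng.shuffle(pool)
        hand = held | set(pool[:5 - len(held)])
        held = hand & TARGET  # "selection": keep only the matching cards
    return rounds

rng = random.Random(42)
trials = [draws_until_target(rng) for _ in range(1000)]
print("worst case:", max(trials), "mean:", sum(trials) / len(trials))
```

Even the slowest of a thousand simulated players assembles the specified hand in at most a few hundred redraw rounds, compared with odds of about 1 in 2.6 million per hand for a single blind deal, which is the contrast the comment is drawing between one-shot and cumulative processes.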
RDF, try, Design, VERB: intelligently directed configuration (as a causal process). Designer, NOUN: an entity that creates intelligently directed configurations. Two utterly different foci. KF kairosfocus
cantor (comment 6, November 16, 2014 at 7:12 pm): The probability of getting the same hand in four consecutive deals, where the hand is *not* specified in advance, is (3.8477e-007)^3 = 5.69641e-020

The point being, something's rotten in Denmark if the same hand is dealt 4 times in a row, regardless of whether or not said hand meets any prior specification. cantor
If there are 52 unique cards in a deck, then there are 52C5 = 2,598,960 possible 5-card hands. So the probability of getting any one pre-specified hand in a single deal is 1/2,598,960 = 3.8477e-007 The probability of getting the same hand in four consecutive deals, where the hand is *not* specified in advance, is (3.8477e-007)^3 = 5.69641e-020 The probability of getting the same hand in four consecutive deals, where the hand *is* specified in advance, is (3.8477e-007)^4 = 2.1918e-026 cantor
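The arithmetic above checks out; a short Python snippet (my own, for verification only) reproduces all three figures:

```python
from math import comb

# Number of distinct 5-card hands from a 52-card deck: 52C5.
hands = comb(52, 5)
p = 1 / hands  # probability of one pre-specified hand in a single deal

# "Same hand four times, not specified in advance": the first deal can be
# anything and fixes the hand; only the next three deals must match it,
# hence the exponent 3.
p_unspecified = p ** 3

# "Pre-specified hand four times": all four deals must match, exponent 4.
p_specified = p ** 4

print(hands)          # 2598960
print(p)              # ~3.8477e-07
print(p_unspecified)  # ~5.6964e-20
print(p_specified)    # ~2.1918e-26
```

The exponent-3 versus exponent-4 distinction is exactly the specification point of the OP: the extra factor of p is the price of naming the target before the cards are dealt.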
Which of the following best describes what the term “design” is intended to mean here?
Due to an agency that can utilize knowledge to manipulate nature for a purpose or to deal with new or difficult situations. Joe
Science is about observations and arguments. As for research programs, what are the blind watchmaker research programs? Joe
Hi Barry, Which of the following best describes what the term "design" is intended to mean here?

1) Not due to chance and/or necessity
2) Due to the actions of an entity or process that may or may not be conscious
3) Due to the actions of a conscious entity
4) Due to an entity that has some of the same mental abilities as humans but not necessarily all of them
5) Due to an entity that has all of the mental abilities of normal human beings, including the ability to learn and use natural language

Cheers, RDFish/AIGuy RDFish
In conclusion it seems to me that after all the dust settles we will see that Ewert was merely saying that Miller’s Mendacity (see the UD Glossary) misconstrues the CSI argument.
The elephant in the room waved to me and said "The problem is that it is NO MORE THAN an ARGUMENT. It's not a RESEARCH PROGRAM." Since the elephant can't type, I'm forwarding her message. Daniel King
Barry, excellent! You have unified all the major elements of the argument under a single thematic umbrella. As a result, critics will have far fewer opportunities to obfuscate, misrepresent the argument, or create irrelevant distractions. StephenB