Uncommon Descent Serving The Intelligent Design Community

Can we all agree on specified complexity?


Amid the fog of recent controversies, I can discern a hopeful sign: the key figures in the ongoing debate over specified complexity on Uncommon Descent are actually converging in their opinions. Allow me to explain why.

Winston Ewert’s helpful clarifications on CSI

In a recent post, ID proponent Winston Ewert agreed that Elizabeth Liddle had a valid point in her criticisms of the design inference, but then went on to say that she had misunderstood what the design inference was intended to do (emphases mine):

She has objected that specified complexity and the design inference do not give a method for calculating probabilities. She is correct, but the design inference was never intended to do that. It is not about how we calculate probabilities, but about the consequences of those probabilities. Liddle is complaining that the design inference isn’t something that it was never intended to be.

He also added:

…[T]he design inference is a conditional. It argues that we can infer design from the improbability of Darwinian mechanisms. It offers no argument that Darwinian mechanisms are in fact improbable. When proving a conditional, we are not concerned with whether or not the antecedent is true. We are interested in whether the consequent follows from the antecedent.

In another post, Winston Ewert summarized his thoughts on specified complexity:

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

Winston Ewert concluded that “the only way to establish that the bacterial flagellum exhibits CSI is to first show that it was improbable.”

To which I would respond: hear, hear! I completely agree.

What about Ewert’s claim that “CSI and Specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable”? He is correct, if by “CSI and Specified complexity,” he simply means the concepts denoted by those terms. If, however, we are talking about the computed probability of the bacterial flagellum emerging via unguided processes, then of course this number can be used to support a design inference: if the probability in question is low enough, then the inference to an Intelligent Designer becomes a rational one. Ewert obviously agrees with me on this point, for he writes that “Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.”

In a recent post, I wrote that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).” Immediately afterwards, I added that in order to calculate the specified complexity of an object, we first require “the probability of producing the object in question via ‘Darwinian and other material mechanisms.'” I then added that “we compute that probability.” The word “compute” makes it quite clear that without that probability, we will be unable to infer that a given object was in fact designed. I concluded: “To summarize: to establish that something has CSI, we need to show that it exhibits specificity, and that it has an astronomically low probability of having been produced by unguided evolution or any other unintelligent process” (italics added).

Imagine my surprise, then, when I discovered that some readers had been interpreting my claim that “we can decide whether an object has an astronomically low probability of having been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold)” as if I were arguing for a design inference on the basis of some pre-specified numerical value for CSI! Nothing could be further from the truth. To be quite clear: I maintain that the inference that biological organisms (or structures, such as proteins) were designed is a retrospective one. We are justified in making this inference only after we have computed, on the basis of the best information available to us, that the emergence of these organisms (or structures) via unguided processes – in which I include both random changes and the non-random winnowing effect of natural selection – falls below a certain critical threshold of 1 in 2^500 (or roughly, 1 in 10^150). There. I cannot be clearer than that.
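The threshold just mentioned can be restated in bits: an event falls under the universal probability bound when its probability is below 1 in 2^500, i.e. when it carries more than 500 bits of improbability. A minimal sketch of that conversion (the function name is mine, for illustration only, not taken from any ID software):

```python
from math import log2

# Universal probability bound discussed above: 1 in 2^500 (~1 in 10^150)
THRESHOLD_BITS = 500

def below_probability_bound(p: float) -> bool:
    """True if probability p falls below the 1-in-2^500 bound,
    i.e. the event carries more than 500 bits of improbability."""
    return -log2(p) > THRESHOLD_BITS

print(2.0 ** -500)                      # the bound itself: ~3.05e-151
print(below_probability_bound(1e-160))  # True: below the bound
print(below_probability_bound(1e-140))  # False: improbable, but not enough
```

Note that 2^500 ≈ 3.27 × 10^150, which is why the bound is quoted interchangeably as 1 in 2^500 or roughly 1 in 10^150.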

So I was heartened to read, in a recent post by Barry Arrington, that Keith S had endorsed a form of design inference, when he wrote:

To use the coin-flipping example, every sequence of 500 fair coin flips is astronomically improbable, because there are 2^500 possible sequences and all have equally low probability. But obviously we don’t exclaim “Design!” after every 500 coin flips. The missing ingredient is the specification of the target T.

Suppose I specify that T is a sequence of 250 consecutive heads followed by 250 consecutive tails. If I then sit down and proceed to flip that exact sequence, you can be virtually certain that something fishy is going on. In other words, you can reject the chance hypothesis H that the coin is fair and that I am flipping it fairly.

That certainly sounds like a design inference to me.
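Keith S’s point can be made concrete. Under the fair-coin hypothesis H, every particular 500-flip sequence has the same probability, 2^-500; what changes the verdict is that the observed sequence matches an independently specified target T. A sketch (illustrative only, not code from any party in the debate):

```python
from math import log2

# The independently specified target T: 250 heads then 250 tails
target = "H" * 250 + "T" * 250

def p_given_h(sequence: str) -> float:
    """Probability of this exact sequence under the fair-coin hypothesis H."""
    return 0.5 ** len(sequence)

observed = "H" * 250 + "T" * 250  # suppose this exact sequence is flipped

# Any 500-flip sequence is equally improbable: 500 bits' worth
print(-log2(p_given_h(observed)))  # 500.0

# The 'fishy' verdict needs BOTH low P(T|H) and a match to the specification
print(observed == target)          # True
```

A random 500-flip sequence would be exactly as improbable, but would almost certainly fail the `observed == target` check, which is why no design inference follows from improbability alone.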

In a follow-up comment on Barry Arrington’s post, Keith S went on to point out:

…[I]n that example, I am not calculating CSI and then using it to determine that something fishy is going on. Rather, I have to determine that something fishy is going on first (that is, that P(T|H) is extremely low under the chance hypothesis) in order to attribute CSI to it.

To which I would respond: you’re quite right, Keith S. That’s what I’ve been saying and what Winston Ewert has been saying. It seems we all agree. We do have to calculate the probability of a system emerging via random and/or non-random unguided processes, before we impute a high level of CSI to the system and conclude that it was designed.

CSI vs. irreducible complexity: what’s the difference?

In a subsequent comment, Keith S wrote:

I think it’s instructive to compare irreducible complexity to CSI in this respect.

To argue that something is designed because it exhibits CSI is circular, because you have to know that it is designed before you can attribute CSI to it.

To argue that something is designed because it is irreducibly complex is not circular, because you can determine that it is IC (according to Behe’s definition) without first determining that it is designed.

The problem with the argument from IC is not that it’s circular — it’s that IC is not a barrier to evolution.

For the record: the following article by Casey Luskin over at Evolution News and Views sets forth Professor Mike Behe’s views on exaptation, which are that while it cannot be absolutely ruled out, its occurrence is extremely improbable, even for modestly complex biological features. Professor Behe admits, however, that he cannot rigorously quantify his assertions, which are based on his professional experience as a biochemist. Fair enough.

The big difference between CSI and irreducible complexity, then, is not that the former is circular while the latter is not, but that CSI is quantifiable (for those systems where we can actually calculate the probability of their having emerged via unguided random and/or non-random processes) whereas irreducible complexity is not. That is what makes CSI so useful, when arguing for design.

Does Dr. Dembski contradict himself? I think not

Keith S claims to have uncovered a contradiction between the following statement by leading Intelligent Design advocate Dr. William Dembski:

Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity and to be exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.

and this statement of his:

It is CSI that Michael Behe has uncovered with his irreducibly complex biochemical machines. It is CSI that for cosmologists underlies the fine-tuning of the universe and that the various anthropic principles attempt to understand.

I don’t see any contradiction at all here. In the first quote, Dr. Dembski is cautiously pointing out that the inference that the bacterial flagellum was designed hinges on probability calculations, which we do not know for certain to be correct. In the second quote, he is expressing his belief, based on his reading of the evidence currently available, that these calculations are in fact correct, and that Nature does in fact exhibit design.

Dembski and the Law of Conservation of Information

Keith S professes to be deeply puzzled by Dr. Dembski’s Law of Conservation of Information (LCI), which he finds “murky.” He is especially mystified by the statement that neither chance nor law can increase information.

I’d like to explain LCI to Keith S in a single sentence. As I see it, its central insight is very simple: that when all factors are taken into consideration, the probability of an event’s occurrence does not change over the course of time, until it actually occurs. In other words, if the emergence of life in our universe was a fantastically improbable event at the time of the Big Bang, then it was also a fantastically improbable event 3.8 billion years ago, immediately prior to its emergence on Earth. And if it turns out that the emergence of life on Earth 3.8 billion years ago was a highly probable event, then we should say that the subsequent emergence of life in our universe was highly probable at the time of the Big Bang, too. Chance doesn’t change probabilities over the course of time; neither does law. Chance and law simply provide opportunities for the probabilities to be played out.

Someone might argue that we can think of events in human history which seemed highly improbable at time t, but which would have seemed much more probable at a later time t + 1. (Hitler’s rise to power in Germany would have seemed very unlikely in January 1923, but very likely in January 1933.) But this objection misses the point. Leaving aside the point that humans are free agents, a defender of LCI could reply that when all factors are taken into consideration, events that might seem improbable at an earlier time can in fact be demonstrated to have a high probability of occurring subsequently.

Making inferences based on what you currently know: what’s the problem with that?

Certain critics of Intelligent Design are apt to fault ID proponents for making design inferences based on what scientists currently know. But I see no problem with that, as long as ID proponents declare that they would be prepared to cheerfully revise their opinions, should new evidence come to light which overturns currently accepted beliefs.

I have long argued that Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, whose argument I summarized in my recent post, Barriers to macroevolution: what the proteins say, demonstrates beyond reasonable doubt that unguided mechanisms could not have given rise to the protein folds that we find in living creatures’ body proteins, in the space of just four billion years. I have also pointed out that Dr. Eugene Koonin’s peer-reviewed article, The Cosmological Model of Eternal Inflation and the Transition from Chance to Biological Evolution in the History of Life (Biology Direct 2 (2007): 15, doi:10.1186/1745-6150-2-15) makes a very strong case that the probability of a living thing capable of undergoing Darwinian evolution – or what Dr. Koonin refers to as a coupled translation-replication system – emerging in our observable universe during the course of its history is astronomically low: 1 in 10^1,018 is Dr. Koonin’s estimate, using a “toy model” that makes deliberately optimistic assumptions. Finally, I have argued that Dr. Robin Collins’ essay, The Teleological Argument, rules out the infinite multiverse hypothesis which Dr. Koonin proposes in order to explain the unlikely emergence of life in our universe: as Dr. Collins argues, a multiverse would need to be specially fine-tuned in order to produce even one universe like our own. If Dr. Axe’s and Dr. Koonin’s estimates are correct, and if we cannot fall back on the hypothesis of a multiverse in order to shorten the odds against life emerging, then the only rational inference that we can make, based on what we currently know, is that the first living thing was designed, and that the protein folds we find in living creatures were also designed.

Now, Keith S might object that these estimates could be wrong – and indeed, they could. For that matter, the currently accepted age of the universe (13.798 billion years) could be totally wrong too, but I don’t lose any sleep over that fact. In everyday life, we make decisions based on what we currently know. If Keith S wants to argue that one can reasonably doubt the inference that living things were designed, then he needs to explain why the estimates I’ve cited above could be mistaken – and by a very large margin, at that.

Recently, Keith S has mentioned a new book by Dr. Andreas Wagner, titled, The Arrival of the Fittest: Solving Evolution’s Greatest Puzzle. I haven’t read the book yet, but let me say this: if the book makes a scientifically plausible case, using quantitative estimates, that life in all its diversity could have emerged on Earth over the space of just 3.8 billion years, then I will cheerfully change my mind and admit I was wrong in maintaining that it had to have been designed. As John Maynard Keynes famously remarked, “When the facts change, I change my mind. What do you do, sir?”

For that matter, I try to keep an open mind about the recent discovery of soft tissue in dinosaur bones (see here and here). Personally, I think it’s a very odd finding, which is hard to square with the scientifically accepted view that these bones are millions of years old, but at the present time, I think the preponderance of geological and astronomical arguments in favor of an old Earth is so strong that this anomaly, taken alone, would be insufficient to overthrow my belief in an old cosmos. Still, I could be wrong. Science does not offer absolute certitude, and it has never claimed to.

Conclusion

To sum up: statements about the CSI of a system are retrospective, and should be made only after we have independently calculated the probability of a system emerging via unguided (random or non-random) processes, based on what we currently know. After these calculations have been performed, one may legitimately infer that the system was designed – even while admitting that should subsequent evidence come to light that would force a drastic revision of the probability calculations, one would have to revise one’s views on whether that system was designed.

Are we all on the same page now?

Comments
No Me Think, I am not bluffing. Do you think you are the only evo who has referred me to Wagner's work? Really?

Joe
November 18, 2014, 04:28 AM PDT
"If you knew how Science works" Tamara Knight, shouldn't Darwinism be a 'science' first before we can see if it works as a science? Darwinism is a Pseudo-Science: 1. No Rigid Mathematical Basis 2. No Demonstrated Empirical Basis 3. Random Mutation and Natural Selection Are Both Grossly Inadequate as ‘creative engines’ 4. Information is not reducible to a material basis https://docs.google.com/document/d/1oaPcK-KCppBztIJmXUBXTvZTZ5lHV4Qg_pnzmvVL2Qw/editbornagain77
November 18, 2014
November
11
Nov
18
18
2014
04:25 AM
4
04
25
AM
PDT
Joe @ 25,

Also I have read some of his papers before today

Joe, you’re bluffing and you know it! P.S.: This is silly and non-productive, so either you can stop, or I will.

Me_Think
November 18, 2014, 04:21 AM PDT
Tamara Knight:

Which would be all well and good Joe if you actually determined design was present.

We have. OTOH you still have nothing, not even a methodology.

You make “design is present” the default explanation.

That is incorrect and demonstrates ignorance on your part. How can design be the default if we actively consider other explanations FIRST?

If you knew how Science works then you would see the correct default is “we don’t know (yet)”

Yet you don't do that. You say unguided evolution did it and thar ain't no design! Your entire position = "we don't know"

Joe
November 18, 2014, 04:18 AM PDT
“we don’t know (yet)” does this mean Darwinism is not an established fact like gravity is?

bornagain77
November 18, 2014, 04:17 AM PDT
Me Think- You said to read the abstracts! Also I have read some of his papers before today. You're bluffing and you know it.

Joe
November 18, 2014, 04:15 AM PDT
Joe @ 22 Ha, ha! At 5:47 you posted @ 17. By 6:03 @ post 22, you read 10 papers? Allowing at least 6 minutes for reading and posting comments in this and other threads, you read all 10 papers in 10 minutes?

Me_Think
November 18, 2014, 04:14 AM PDT
In the real world we first determine design is present BEFORE we even attempt to answer those other questions.

Which would be all well and good Joe if you actually determined design was present. But you don't do you. You make "design is present" the default explanation. If you knew how Science works then you would see the correct default is "we don't know (yet)"

Tamara Knight
November 18, 2014, 04:10 AM PDT
Me Think, your bluff is duly noted. I looked at the first ten papers and they do not support anything you have said.

Joe
November 18, 2014, 04:03 AM PDT
But it only works because we know the nature, capabilities and limitations of the designer (humans).

LoL! We know their capabilities by what they left behind for us to discover. We sure as heck cannot test those people to see if they actually had the capability.

But we are repeatedly told this subject, the nature and mechanisms used by the designer, is a forbidden subject for ID.

That is incorrect. They are separate questions just as you separate the OoL from evolutionism. You have no idea what you are posting yet you feel compelled to post anyway. Strange. In the real world we first determine design is present BEFORE we even attempt to answer those other questions. It is very telling that you don't know how science works. Nice job.

Joe
November 18, 2014, 04:02 AM PDT
Joe @ 18 I can't, because you are wrong on all counts!

Me_Think
November 18, 2014, 04:00 AM PDT
Joe: "Design inferences are based on our KNOWLEDGE of cause and effect relationships, ie science. " True. That is how archaeology works. But it only works because we know the nature, capabilities and limitations of the designer (humans). But we are repeatedly told this subject, the nature and mechanisms used by the designer, is a forbidden subject for ID. How can you study cause and effect when we aren't allowed to examine the cause? Saying "intelligent agent" is not a cause unless you have details on how this "intelligent agent" effects change. Fail.centrestream
November 18, 2014
November
11
Nov
18
18
2014
03:58 AM
3
03
58
AM
PDT
Geez Me Think, if you think I am wrong then please correct me. Your bluff has been called.

Joe
November 18, 2014, 03:49 AM PDT
Joe @ 16, You can't judge that by the title of the papers. You at least got to read the Abstracts! Unfortunately I can only lead the horse to the water...

Me_Think
November 18, 2014, 03:47 AM PDT
Right and his speculation is based on that. And nothing in those references refers to blind watchmaker evolution. They don't even pertain to macroevolution.

Joe
November 18, 2014, 03:37 AM PDT
Joe @13 The list of journals where related research was published is here:

160. Payne, J.L., Wagner, A. (2014) The robustness and evolvability of transcription factor binding sites. Science 343, 875-877. [link]
159. Wagner, A. (2014) A genotype network reveals homoplastic cycles of convergent evolution in influenza A (H3N2) evolution. Proceedings of the Royal Society B: Biological Sciences 281, 20132763. [reprint request]
158. Szovenyi, P., Devos, N., Weston, D.J., Yang, X., Hock, Z., Shaw, J.A., Shimizu, K.K., McDaniel, S., Wagner, A. Efficient purging of deleterious mutations in plants with haploid selfing. Genome Biology and Evolution 6, 1238-1252. [reprint request]
157. Wagner, A., Rosen, W. (2014) Spaces of the possible: universal Darwinism and the wall between technological and biological innovation. Journal of the Royal Society Interface 11, 20131190. [reprint request]
156. Payne, J.L., Wagner, A. Latent phenotypes pervade gene regulatory circuits. BMC Systems Biology 8 (1), 64. [reprint request]
155. Dhar, R., Bergmiller, T., Wagner, A. (2014) Increased gene dosage plays a predominant role in the initial stages of evolution of duplicate TEM-1 beta lactamase genes. Evolution 68, 1775-1791. [reprint request]
154. Hayden, E., Bratulic, S., Konig, I., Ferrada, E., Wagner, A. (2014) The effects of stabilizing and directional selection on phenotypic and genotypic variation in a population of RNA enzymes. Journal of Molecular Evolution 78, 101-108. [reprint request]
153. Barve, A., Hosseini, S.-R., Martin, O.C., Wagner, A. Historical contingency and the gradual evolution of metabolic properties in central carbon and genome-scale metabolisms. BMC Systems Biology 2014, 8:48. [reprint request]
152. Wagner, A. (2014) Mutational robustness accelerates the origin of novel RNA phenotypes through phenotypic plasticity. Biophysical Journal 106, 955-965. [reprint request]
151. Sunnaker, M., Zamora-Sillero, E., Garcia de Lomana, A.L., Rudroff, F., Sauer, U., Stelling, J., Wagner, A. (2014) Topological augmentation to infer hidden processes in biological systems. Bioinformatics 30, 221-227. [reprint request]
150. Wagner, A., Andriasyan, V., Barve, A. (2014) The organization of metabolic genotype space facilitates adaptive evolution in nitrogen metabolism. Journal of Molecular Biochemistry 3: 2-13. [reprint request]
149. Payne, J.L., Moore, J.H., Wagner, A. (2014) Robustness, evolvability, and the logic of genetic regulation. Artificial Life 20, 111-126. [reprint request]

2013

148. Barve, A., Wagner, A. (2013) A latent capacity for evolutionary innovation through exaptation in metabolic systems. Nature 500, 203-206. [reprint request]
147. Sunnaker, M., Zamora-Sillero, E., Dechant, R., Ludwig, C., Busetto, A.G., Wagner, A., Stelling, J. (2013) Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism. Science Signaling 6, ra41. [reprint request]
146. Szovenyi, P., Ricca, M., Hock, Z., Shaw, J.A., Shimizu, K.K., Wagner, A. (2013) Selection is no more efficient in haploid than in diploid life stages of an angiosperm and a moss. Molecular Biology and Evolution 30: 1929-1939. [reprint request]
145. Payne, J.A., Wagner, A. (2013) Constraint and contingency in multifunctional gene regulatory circuits. PLoS Computational Biology 9 (6), e1003071. [reprint request]
144. Dhar, R., Saegesser, R., Weikert, C., Wagner, A. (2013) Yeast adapts to a changing stressful environment by evolving cross-protection and anticipatory gene regulation. Molecular Biology and Evolution 30, 573-588. [reprint request]
143. Sabath, N., Ferrada, E., Barve, A., Wagner, A. (2013) Growth temperature and genome size in bacteria are negatively correlated, suggesting genomic streamlining during thermal adaptation. Genome Biology and Evolution 5, 966-977. [reprint request]
142. Bilgin, T., Kurnaz, I.A., Wagner, A. (2013) Selection shapes the robustness of ligand-binding amino acids. Journal of Molecular Evolution 76, 343-349. [reprint request]
141. Bichsel, M., Barbour, A.D., Wagner, A. (2013) Estimating the fitness effect of insertion sequences. Journal of Mathematical Biology 66, 95-114. [reprint request]
140. Wagner, A. (2013) Genotype networks and evolutionary innovations in biological systems. In Handbook of Systems Biology. Eds: Walhout, A.J.M., Vidal, M., Dekker, J., Academic Press, London, p 251-264. [reprint request]
139. Wagner, A. (2013) Metabolic networks and their evolution. In Encyclopedia of Systems Biology; p 1256-1259; Dubitzky, W., Wolkenhauer, O., Yokota, H., Cho, K.-H. (eds) Springer, New York.

Me_Think
November 18, 2014, 03:33 AM PDT
PPS: And there are metaphors everywhere; that is part and parcel of language in the real world. And if you imagine this is absent from Mathematics, consider the use of Cartesian space and graph paper etc.

kairosfocus
November 18, 2014, 03:28 AM PDT
Nothing strange about it. Unguided evolution has proven to be useless- it can't even be modeled nor produce any testable hypotheses. And Wagner's ideas are not published in peer-review. If his ideas had the evidence he would publish.

Joe
November 18, 2014, 03:28 AM PDT
archaeologists and forensic scientists do not just throw in the towel and say a designer did it. Design inferences are based on our KNOWLEDGE of cause and effect relationships

Indeed, I've seen much speculation from Egyptologists about how the Pyramids might have been built. All manner of possible construction methods have been postulated based on our knowledge of what they knew about cause and effect relationships. A few cranks even claim they could not have had enough knowledge of the relevant cause and effect relationships, and visiting aliens must have helped them out.

Tamara Knight
November 18, 2014, 03:27 AM PDT
Joe @ 9 Quite a strange conclusion - on both counts! Have you read Wagner's book to warrant the second conclusion?

Me_Think
November 18, 2014, 03:25 AM PDT
PS: Search is of course a metaphor, much as selection is in natural selection. What is going on is that subsets of the config space of possible configs of atoms and molecules are being sampled from moment to moment in Darwin's pond or the like, and that can be analysed in terms of sampling from the set of configs W. Dynamic-stochastic processes can be used to assess that, blending chance and necessity, creating in effect random walks with drift. Think, air molecules moving about at random, within an air mass moving along as part of a wind. The tree of life metaphor is a grand narrative based on such a process imagined to incrementally access possibilities across the world of life seen as a vast continent of possibilities that are incrementally accessible. The problem is there is no good empirically anchored, observationally grounded reason (apart from impositions of a priori materialist schools of thought) to accept that such a continent is real, and that it is accessible from Darwin's pond or the like. The scattering of protein clusters in AA sequence space within wider organic chemistry is a capital example in point of why that continent is not credibly there but is instead a dust of islands.

kairosfocus
November 18, 2014, 03:25 AM PDT
Me Think- Thank you for admitting that unguided evolution is useless. BTW Wagner didn't do anything but speculate.

Joe
November 18, 2014, 03:22 AM PDT
Joe @ 4

Me think, Unguided evolution is not a search

Exactly! Would you please inform your ID colleagues who keep believing that evolution is hunting for patterns and specifications? Wagner’s ‘search’ is a random walk down the genotype network. He shows how a new phenotype can be discovered at hyperdimensions in a fraction of a step.

Me_Think
November 18, 2014, 03:14 AM PDT
VJT: Let me pause while I wait for people to wake up before hitting the phone. It seems to me that there are several issues here that need to be taken as balancing points: 1: All significant scientific findings and especially explanations are inherently provisional, subject to correction or replacement on future analysis or findings. 2: This holds for the design inference across chance and/or mechanical necessity vs design. 3: We need to come to the prior evidence-led recognition of a basic commonplace fact of engineering, which can be summed up:
a: Many systems are complex based on multiple interacting parts that b: are wired up on in effect a wiring diagram that imposes c: fairly tight constraints on configs that exhibit relevant function, vs a much larger number of other possible clumped or scattered configs of the same components. (That is, islands of function in much wider config spaces of possible but non-functional configs, are real. Contemplate the properly assembled Abu 6500 C3 reel vs a shaken up bag of its parts, if you doubt that the informational wiring diagram makes a difference.) d: The wiring diagram is highly informational, which can at first level be roughly quantified on the number of y/n q's that are to be answered to specify acceptable configs (up to tolerances etc). e: This can be descriptively titled functionally specific complex organisation and associated information, FSCO/I.
4: I am not satisfied that many objectors to the design inference on FSCO/I are appropriately responsive to the basic points just made, and it seems that there is a problem of selective hyperskepticism at work. 5: From the emergence of evidence on biochemistry and molecular biology esp. from the elucidation of DNA from 1953 on, it became clear that FSCO/I and its subset, digitally coded functionally specific complex information, are present in the living cell. Thus by the 1970's leading OOL researchers Orgel and Wicken went on record:
ORGEL, 1973:  . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
(Notice, very carefully, please: "Organization, then, is functional[ly specific] complexity and carries information." This is the root of the descriptive term, FSCO/I.) 6: That set of remarks must be understood in light of the context outlined above. The origin of biofunctional, complex, specific, interactive, information-rich organisation is at the root of the tree of life and its branching. 7: As a simple comparison, for strings of y/n q's to specify states, 500 - 1,000 H/T coins have 3.27*10^150 - 1.07*10^301 possibilities as configs. The former overwhelms the number of search operations possible for the 10^57 atoms of the sol system, each making 10^14 attempts per second for 10^17 s, roughly as one straw to a cubical haystack as thick as our galaxy. For the latter, the haystack to be compared to one straw would swallow up the observed cosmos of some 90 bn LY across. Blind, chance-led needle-in-haystack searches of such stacks will be maximally sparse and unlikely to be successful. It matters not whether you scatter a dust, carry out random walks, or combine the two, etc. Too much stack, too little search. 8: So, we have an observable phenomenon in life that, even without attempted explicit detailed quantification of improbability, is a formidable challenge to any mechanism that appeals to chance-led, non-foresighted processes as engines of innovation hoped to generate FSCO/I. 9: Where OOL is pivotal, because all there is by way of plausible chance and necessity hyps is the physics, chemistry and thermodynamics of Darwin's warm salty pond, or a volcano vent or a cold comet or a gas giant moon etc.
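[Editor's note: the back-of-envelope arithmetic in point 7 can be checked directly. The sketch below simply reproduces the commenter's own assumed figures (solar-system atom count, interaction rate, and timescale are the stated estimates, not measured values):]

```python
# Sketch of the sparse-search arithmetic in point 7 (commenter's figures assumed).
configs_500 = 2 ** 500           # states for 500 binary (H/T) choices, ~3.27e150
configs_1000 = 2 ** 1000         # states for 1,000 binary choices, ~1.07e301

# Generous upper bound on search operations available in the solar system:
atoms = 10 ** 57                 # assumed atom count of the solar system
ops_per_sec = 10 ** 14           # assumed fast chemical-interaction rate per atom
seconds = 10 ** 17               # assumed timescale in seconds
max_ops = atoms * ops_per_sec * seconds   # 10^88 total search operations

fraction_searchable = max_ops / configs_500
print(f"2^500  ≈ {configs_500:.2e}")
print(f"2^1000 ≈ {configs_1000:.2e}")
print(f"max ops ≈ {max_ops:.0e}; fraction of 2^500 space ≈ {fraction_searchable:.1e}")
```

On these assumptions the maximal search covers roughly 10^-63 of the 2^500 space, which is the "one straw to a galaxy-scale haystack" comparison in numerical form.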
And in particular, appeals to the magic of differential reproductive success in niches have first to account for the origin of code-using von Neumann self-replicators [vNSRs] added to metabolising automata based on homochiral proteins in gated, encapsulated cells, in reasonable environments, in a reasonable time and scope of resources, on the empirically demonstrated capacity of credible forces and materials of nature. 10: Such simply has never been done. 11: And that is the ROOT of the evolutionary materialist tree of life. 12: Where there is only one empirically plausible, routinely observed source of FSCO/I known to answer the needle-in-haystack sparse search challenge: intelligently directed configuration, aka design. (That is what configured this comment post, as text strings in English. It is what designed and built the PC etc. you are reading this on. It did the same for the Abu 6500 C3 reel, and more.) 13: Without explicit calculations beyond the ones generally indicated, we are already in a position to see that FSCO/I is an inductively strong and reliable sign of design as cause. (FSCO/I as understood does not require irreducible complexity, as redundancies may be involved, etc.; IC is a subset, but it is not the only one. And, by abstracting away from biofunction or even interactive functionality to define specification, we arrive at a superset, complex specified information, where the possibility of interactive function is enough.) 14: Just to indicate a bit, consider a naively simplistic cell of 100 proteins of average length 100 AAs, where instead of 4.32 bits per AA on the choice of 20 alternatives, let things be so loose -- this is implausibly loose -- that there is but one y/n required to specify on average, say whether a residue is hydrophilic or hydrophobic; any h-phil or h-phob AA would do in the same locus on the chain. Such proteins do the work of the cell. Already that is 10,000 bits, which is vastly beyond the FSCO/I threshold.
Recall, for each bit beyond 1,000 the config space cardinality W DOUBLES; 9,000 doublings here. 15: In short, design sits at the table for OOL, and therefore for everything beyond too, shifting the balance of reasonable plausibilities drastically. 16: Where, if we need a particular reason to justify that, we may wish to consider the challenge of the origin of thousands of protein clusters across AA space, constituting thousands of islands of function that are deeply isolated, as you pointed out in the OP. 17: In this context, I do not buy the concept being pushed by objectors, that one must calculate probabilities, and especially must do so directly while putting up a list of arbitrary chance-driven hypotheses that they themselves refrain from offering. 18: That is irresponsible burden-of-proof shifting. The normal, empirically warranted explanation of FSCO/I is design. Indeed, it is a common-sense sign of design, used routinely in all sorts of circumstances. The objectors wish to dismiss that without showing empirically backed warrant, which is selectively hyperskeptical. 19: Instead, I strongly suggest that once very sparse sampling is evidently on the table, and large config spaces confront FSCO/I, it is those who would put up an alternative who need to warrant on empirical evidence that they have mechanisms capable of creating FSCO/I without intelligently guided configuration. This is the vera causa test. 20: There is no good reason to believe this test has been met by the objectors, and the fairly obvious defects in the many suggested counter-examples speak loud and clear on how weak their case is. 21: Further, we have something else that is connected to probability, which is readily observed and/or estimated: information content. 22: Which, as outlined above, plainly points to config spaces well beyond reasonable sparse needle-in-haystack search.
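[Editor's note: point 14's toy calculation is easy to verify. The sketch below uses the comment's own loose assumptions (one bit per residue; 100 proteins of 100 AAs; a 1,000-bit threshold) and is illustrative only:]

```python
import math

# Sketch of the point-14 toy calculation (figures as given in the comment).
full_bits_per_aa = math.log2(20)      # ~4.32 bits to pick 1 of 20 amino acids
loose_bits_per_aa = 1                 # implausibly loose: one y/n per residue
proteins, length = 100, 100           # naively simplistic cell

total_bits = proteins * length * loose_bits_per_aa   # 10,000 bits
threshold = 1_000                                    # FSCO/I threshold used above
doublings = total_bits - threshold                   # doublings of W past threshold

print(f"log2(20) ≈ {full_bits_per_aa:.2f} bits per residue")
print(f"total = {total_bits} bits; {doublings} doublings beyond the threshold")
```

Even under the deliberately loose one-bit-per-residue assumption, the toy cell specifies 10,000 bits, i.e. 9,000 doublings of the configuration space beyond the 1,000-bit threshold.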
Where also, apart from y/n q's or the equivalent estimates (which are commonplace in information practice, including in Shannon's original paper), we may make stochastic studies that bring out redundancies etc. and give us informational estimates anchored in how the available set of possibilities has been used by whatever has emerged across time. 23: Where also FSCO/I -- a fact, not a speculation -- naturally comes in islands of function, for which we can observe empirical indications for biology in the cell based on the distribution of proteins in AA sequence space, constrained by the requisites of folding and functioning in biological cellular environments. 24: Where the above suffices to show that debate talking points and assertions of circularity are groundless. The improbability of finding islands of function deeply isolated in config spaces of very large size on blind search is not a matter of question-begging but of the nature of the configuration constraints imposed by function, vs. otherwise possible clumped or scattered configs. 25: Yes, one must be fairly careful in wording, but that is beside the point of the fundamental empirical issue at stake. Until I see signs of objectors taking the issue of configuration constraints to achieve organised interactive function seriously [and I find that conspicuously, consistently absent -- for years], I see no reason whatsoever to entertain objections based on assertions of circularity, given what has been outlined above, yet again. KF
kairosfocus
November 18, 2014 at 03:13 AM PDT
Exactly! Would you please inform your ID colleagues who keep believing that evolution is hunting for patterns and specifications? Wagner's 'search' is a random walk along the genotype network. He shows how new phenotypes can be discovered in hyperdimensional space in a fraction of a step.
Me_Think
November 18, 2014 at 03:12 AM PDT
Tamara Knight admits that her position has nothing, yet she thinks that is a problem for ID. Strange. She also doesn't understand how design inferences work. No, Tamara, archaeologists and forensic scientists do not just throw in the towel and say a designer did it. Design inferences are based on our KNOWLEDGE of cause-and-effect relationships, i.e., science. And to refute any given design inference, all one has to do is step up and demonstrate that necessity and chance can account for it.
Joe
November 18, 2014 at 03:10 AM PDT
Me_Think, unguided evolution is not a search.
Joe
November 18, 2014 at 03:07 AM PDT
I have long argued that Dr. Douglas Axe’s paper, The Case Against a Darwinian Origin of Protein Folds, whose argument I summarized in my recent post, Barriers to macroevolution: what the proteins say, demonstrates beyond reasonable doubt that unguided mechanisms could not have given rise to protein folds that we find in living creatures’ body proteins, in the space of just four billion years.
I don't know what keiths' argument will be, but here are some thoughts from Here: ...Even if one cuts the search space down to the size of a domain (average modern size ~100 amino acids), these numbers are astronomical, though Axe does not go into this correction in detail. But obviously, the origin of proteins at the dawn of life has never been hypothesized to involve the sudden appearance of 300 or even 100 amino-acid-long enzymes for oxidative phosphorylation. This is a straw man from top to bottom. Over in the actual scientific community, the origin of protein-coding capacity is commonly assumed to have had extremely modest beginnings, as an extension of the RNA world, when RNA had the primary replicative and catalytic ability. This modest catalytic ability might then have been abetted by tiny peptides, painfully assembled by a set of primitive RNA enzymes, and then extended to slightly longer protein chains, which eventually and competitively, through their vastly superior chemical abilities, relegated RNA to what is now its mostly informational role. Indeed, the protein-translating ribosome remains a thoroughly RNA machine, using strings of mRNA as the template code, tRNA-mounted amino acids as the building blocks, and a catalytic core of rRNA for polymerization. This sort of gives the game away right there, if one cares to look. The messiness of the genetic code, with some amino acids encoded by only one of the 64 codons and others encoded by six, indicates some late additions and jerry-rigging to the system. And since the code's establishment, more amino acids have come into use through chemical modifications, either before the amino acid is incorporated (selenocysteine) or afterwards (hypusine). Axe never recognizes such realistic accounts of the primitive origin of proteins, however.
He also assumes that successful proteins have to approach modern levels of efficiency, making any path from one folded form to another folded form (there are an estimated ~2,000 classified folds) impossible, none in between being likely to have a well-honed function. Axe cites experiments showing that proteins can switch readily between quasi-stable folds, an important precursor to these innovations, but he dismisses such cases as not competitive in the modern Darwinian landscape. When a novel function is at issue, however, how primitive is too primitive? All of phylogenetic analysis is based on the wide variation of sequences, to the point that functionally and structurally similar proteins may have no detectable similarity in their linear sequences. Axe notes that every organism harbors, in addition to critical genes that are highly conserved, a population of others with no detectable relationships. A bold hypothesis from his perspective would be that it is these proteins that are the most important. Another hypothesis is quite a bit more likely: these proteins are, in point of fact, the least important ones of the organism, prone to rapid mutation and divergence to the point of unrecognizability. These, in turn, might be exactly the kinds of proteins that generate new structures, folds, and functions, if they can outrun complete inactivation through mutation, yielding up the novel folds that the author seems so perplexed by. As for Wagner's book, the genotype network in hyperdimensions reduces the search space drastically, making the improbability argument look silly. It has been discussed in other threads.
Me_Think
November 18, 2014 at 02:43 AM PDT
...and should be made only after we have independently calculated the probability of a system emerging via unguided (random or non-random) processes, based on what we currently know
And therein lies the problem. We can experimentally determine the probability of "likely" events, but not the probability of "very unlikely" ones. So calculation is the only option, but what we most certainly cannot do is calculate the probability of an event unless we fully understand all its causes. Your argument boils down to a rephrasing of "We can't see how it works, therefore ID did it". Throwing in the towel on conventional scientific research would be the problem, not the solution.
Tamara Knight
November 18, 2014 at 02:12 AM PDT
I'm afraid the answer to the post's title will be 'no'.
Bob O'H
November 18, 2014 at 01:08 AM PDT