Uncommon Descent Serving The Intelligent Design Community

Order vs. Complexity: A follow-up post

Share
Facebook
Twitter
LinkedIn
Flipboard
Print
Email

NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term, “generated,” with the term. “described.” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_s(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_s(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the Complex quadratic polynomial zn+1 = zn2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z0 = 0 and applying the iteration repeatedly, the absolute value of zn remains bounded however large n gets.

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificty were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they comprise. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I now find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell l (Harper One, 2009, p. 106), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)

There are two problems with this definition. First, it mistakenly conflates physics with mathematics, when it declares that a complex sequence can be generated by “a general law or computer algorithm.” I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates: (a) the repetitiveness of a sequence, with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary
numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes at as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered is it is reasonably likely to occur, in our universe, and complex if its physical realization in our universe is a very unlikely event.

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while other say that some objects in Nature have fractal features, and still others, that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of a object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a)an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, specification needs not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.

P{T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet as suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled, The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled, Proteins and Genes, Singletons and Species, in which he argued (generously, in his view) that at most, 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo, that each species possessed hundreds of isolated proteins called “singletons” which had no close biochemical relatives, and that the likelihood of these proteins originating by unguided mechanisms in even one species was astronomically low, making proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled, Protein Space and Hoyle’s Fallacy – a response to vjtorley, cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled, De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5×106 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a mew function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is specially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remain unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
gpuccio:
Here I lose you. We certainly can observe a lot of apples which have mass (as you say: “apples often have mass”). So, we have no need to demonstrate that for each apple. But no basic protein domain has been shown to have selectable intermediaries, so I can’t see why we should suppose that some have them. I can’t see the analogy with apples!
Fair point, it was a lame example. My point is that positing selectable precursors seems at least no less credible than positing a completely unobserved entity. And at least we know where to look for the selectable precursors, and we know that Darwinian algorithms basically work. For example (I know UD proponents hate this demonstration, but it deserves a lot more credit than it's given), Lenski's AVIDA shows that even if you have functions that are all Irreducibly complex (require non-selectable precursors) they evolve, even when they require deleterious precursors. So we know that the principle works. My argument is not "therefore there must have been selectable precursors" but "therefore there is no reason to reject selectable precursors and infer design by default".
I am not thinking “that for some reason there were no selectable intermediaries for a given protein“. I am stating that no selectable intermediaries are known for any basic protein domain. It’s quite different, don’t you agree?
Yes, but it's what I meant. Sorry I was unclear. But gpuccio, this then becomes an argument-from-ignorance. The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That's why I'd say the onus (if you want to reject the "null" of selectable precursors) to demonstrate that such precursors are very unlikely. I'm not asking you to believe they existed. I'm simply saying that rejecting rejecting that hypothesis, is not warranted. If an astronomer detects a perturbation in the orbit of some planet that might, or might not, indicate an unknown object, we do not reject the hypothesis and infer Intelligent Perturbation just because we have not found that object. We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives. Again, in case the point is missed: I am not arguing against the hypothesis of design. Actually let me call that Design, because "design" could denote human design, which I certainly would not argue against! I am merely arguing against the validity of the arguments for Design that you are presenting.
Inferring (not deducing!) design from dFSCI is not circular, and is perfectly correct. No more on that point.
Actually I accept that the flaw here is not circularity. It is if you assume that by rejecting the random-draw null you have rejected the non-design null, but as you claim that the inference rather is by "simple inference by analogy", I agree it is not circular. On the other hand nor is it sound.
No, you seem not to understand the point. the point is very simple: all unrelated states have the same probability of being reached.
Yes, I know that is your point. I'm saying that is an unwarranted assumption. If that's your null, then under that null the probability distribution will indeed be not much different to random walk. But you can't then reject the hypothesis that all unrelated states do NOT have the same probability of being reached - that there are are viable evolutionary pathways. That would be assuming your conclusion, and yet again ignoring the eleP(T|H)ant!
If proteins which confer some advantage exist in the protein sequence space, there is absolutely no reason that their distribution in the space “favors” unrelated functional proteins instead of unrelated non functional proteins.
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I'd say that under the Darwinian hypothesis that is what you'd expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case - similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive. Anyway, feel free to call a halt if you find my posts are causing hair loss :) But it's good to talk, and thanks. Cheers LizzieElizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
11:26 AM
11
11
26
AM
PDT
Liddle, ... and how it relates to your analogy of "Darwin's idea" being analogous to a GA. (Not to mention you not being impressed that the fine-tuning argument.)CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
11:20 AM
11
11
20
AM
PDT
Liddle: Self-replicators that replicate with heritable variance in reproductive success in the current environment.
There is an important distinction between that sort of self-replicator and a self-replicator that does not have the properties of "heritable variance in reproductive success in the current environment." I'm wondering if you get the full impact of that distinction with regards to the fine tuning of the universe.CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
11:19 AM
11
11
19
AM
PDT
Upright Biped: I have restored your authoring rights at TSZ (got lost in the hack), so feel free to start an OP there if you would like.Elizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
10:25 AM
10
10
25
AM
PDT
CentralScrutinizer:
No, not just self-replicators. Self-replicators that produce better self-replicators.
As I usually put it, laboriously, but truncated on this occasion: Self-replicators that replicate with heritable variance in reproductive success in the current environment.Elizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
10:22 AM
10
10
22
AM
PDT
Elizabeth: I appreciate your goodwill to understand my points. I think at least some progress has been made. As you know, I have no intention to convince you, so just a few final (I hope :) ) considerations will do: OK. If your null assumes “no selectable intermediaries” then rejecting that null is not rejecting Darwinian processes. It’s rejecting a process that did not involve selection. OK. We do not need to separately postulate mass for every apple before attributing its fall to gravity. We cannot reject the null that this apple fell to earth because it had mass, simply because we have not been able to ascertain that did. What we can do is to say that apples often have mass, and when they do, they fall. Here I lose you. We certainly can observe a lot of apples which have mass (as you say: "apples often have mass"). So, we have no need to demonstrate that for each apple. But no basic protein domain has been shown to have selectable intermediaries, so I can't see why we should suppose that some have them. I can't see the analogy with apples! We can say: many things have selectable intermediaries, and when they do, they can evolve. I don't know to what you are referring here. Not proteins, I suppose. Unless therefore we have good reason to think that for some reason there were no selectable intermediaries for a given protein, we have no justification for rejecting that hypothesis, and accepting, by default, a hypothesis (design) that is equally without independent support. Except for the fact that no known basic protein domain has been shown to have selectable intermediaries. I am not thinking "that for some reason there were no selectable intermediaries for a given protein". I am stating that no selectable intermediaries are known for any basic protein domain. It's quite different, don't you agree? We cannot deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO. Inferring (not deducing!) design from dFSCI is not circular, and is perfectly correct. No more on that point. Inferring or deducing the existence of intermediaries from dFSCI is simply senseless. I really can't see how you can compare two concepts so different. This argument would have merit if you could also show that a random walk through protein space would not go via a great many proteins that confer some advantage at some time. No, you seem not to understand the point. the point is very simple: all unrelated states have the same probability of being reached. If proteins which confer some advantage exist in the protein sequence space, there is absolutely no reason that their distribution in the space "favors" unrelated functional proteins instead of unrelated non functional proteins. So, the existence of proteins which "confer some advantage", be then 1, 10, 1000 or many more, does not change the probability distribution. All unrelated states have the same probability to be reached, because all of them have the same probability of having some protein that "confers some advantage" in the walk. So, the probability of reaching some functional state remains extremely low. whatever the number of sequences that "confer some advantage". That should answer all other following observations. Have a good time! :)gpuccio
June 21, 2013
June
06
Jun
21
21
2013
09:37 AM
9
09
37
AM
PDT
...self-replicators that have to power to "climb Mount Improbable", if you will.CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
09:34 AM
9
09
34
AM
PDT
Liddle: However, it’s possible that foresight was required to set up a universe that would bring forth self-replicators!
No, not just self-replicators. Self-replicators that produce better self-replicators.CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
09:32 AM
9
09
32
AM
PDT
... to continue If any part of it resembles a GA, as you have implied, the whole of it necessarily does. Which means the entire universe is a goal-oriented GA that is "trying to find self-replicators that produce better self-replicators." This is your implication, whether you realize or not.CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
09:31 AM
9
09
31
AM
PDT
Liddle: I agree that the laws of the universe would have to be such that self-replicators would form. Once that’s done, then features that promote survival and better self-replicators will be preferentially selected (that’s Darwin’s idea).
Foul. For "Darwin's idea" to be analogous to a GA, as you have implied, the laws of the universe would have to have to exist such that self-replicators would form that promote survival and produce preferentially-selected "better" self-replicators. Self-replicators could be plausibly envisioned that did nothing but self-replicate without leading to "better" (what do you mean by "better"?) self-replicators. But what you assert is that the laws of nature are such that, not only they lead to self-replicators, but they led to self-replicators of such a nature that they produce over time "better" self-replicators. You say, "once that's done", as if the nature of the self-replicators are now divorced from the laws that led to them. Not so. You don't get to "something new" once self-replicators have come to exist. It's all one process. And if any part of it resembles a GA, as you have implied, the whole of it necessarily does.CentralScrutinizer
June 21, 2013
June
06
Jun
21
21
2013
09:27 AM
9
09
27
AM
PDT
I agree that the laws of the universe would have to be such that self-replicators would form. Once that’s done, then features that promote survival and better self-replicators will be preferentially selected (that’s Darwin’s idea).
No, darwin's idea was design without a designer. Ernst Mayr, one of the founders of the modern synthesis, goes over the fact that the variation has to be unguided, ie happenstance. Also what is "better" is all relative.
However, it’s possible that foresight was required to set up a universe that would bring forth self-replicators!
That would mean darwinian evolution wouldn't be the inference. It would only be part of the picture. The main thesis would be that those self-replicators were designed to evolve into living organisms and eventually into beings capable of scientific discovery.
This, I proposed would evolve via Darwinian mechanisms once a simple non-inert-symbolic-semiotic-whatsits self-replicating population had got going.
That is incorrect as you do not understand what darwinian mechanisms entail.Joe
June 21, 2013
June
06
Jun
21
21
2013
09:20 AM
9
09
20
AM
PDT
EL, Instead of misunderstanding the second man on the bus, you should show him your proposition in #110 and ask what he thinks. Really. :) (by the way, the second man on the bus is Sterelny/Griffiths 1999)Upright BiPed
June 21, 2013
June
06
Jun
21
21
2013
09:14 AM
9
09
14
AM
PDT
Awful typo above in my post 106, gpuccio:
We deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO.
should of course read:
We cannot deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO.
oopsElizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
08:54 AM
8
08
54
AM
PDT
And good luck to you, Upright Biped. You may well be correct. Obviously it looks slightly different from here :) But that's the way it goes with communication, as pointed out so wisely by your man on the bus. Cheers LizzieElizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
08:48 AM
8
08
48
AM
PDT
Dr Liddle, Man on the bus says to the other man “I do not suggest that inexorable forces can give rise to the relationships required for information to exist”. The other man replies “Without the potential of miscommunication, information is not possible”. :) I think there is a conceptual problem you have yet to understand. Good luck to you.Upright BiPed
June 21, 2013
June
06
Jun
21
21
2013
08:38 AM
8
08
38
AM
PDT
CentralScrutinizer @106
Right. For the “blind watchmaker evolution” to be analogous to a GA, the laws of the universe would have to have been designed on purpose to favor the creation of functional features that have survival value. Of course, there goes the blind watchmaker right out the door in favor of the goal oriented designer.
I agree that the laws of the universe would have to be such that self-replicators would form. Once that's done, then features that promote survival and better self-replicators will be preferentially selected (that's Darwin's idea). The "blind watchmaker" refers to the idea that once you have self-replicators, adaptation will occur semi-automatically, without "foresight". However, it's possible that foresight was required to set up a universe that would bring forth self-replicators! Hence the "fine-tuning" argument. I don't think myself it has a great deal of force, but I think it's a better argument than argument-from-biology. After all, if the whole universe is designed, why would some bits look more designed than others?Elizabeth B Liddle
June 21, 2013
June
06
Jun
21
21
2013
08:15 AM
8
08
15
AM
PDT
gpuccio:
b) The starting hypothesis is that no selectable intermediaries exist, and them the whole walk happens as the consequence of random variation. In that case, each unrelated state has the same probability to be reached, indeed lower than the probability of any related state. So, the probability of a random walk reaching the new functional state is at most the rate between functional space and search space, where the functional space is the number of sequences of that length that exhibit the function, and the search space is the number of possible sequences of that length. That is the concept of dFSI. A good approximation of the dFSI of protein families can be reached by the Durston method. Taking an appropriate threshold of complexity for the biological system in out planet, that IMO can be 150 bits, the null hypothesis of a random origin of the new sequence can easily be rejected.
Yes. Apologies for asking you to repeat this. It wasn't that I had forgotten, just that I wanted to make sure that this was what you meant. OK. If your null assumes "no selectable intermediaries" then rejecting that null is not rejecting Darwinian processes. It's rejecting a process that did not involve selection. With which I entirely agree. I am absolutely sure that proteins did not arise without selectable intermediaries. But having rejected that null, you cannot then extrapolate to rejecting the null of "using selectable intermediaries" because that is not included in your null.
c) If and when naturally selectable intermediaries are found, the reasoning can be repeated, rajecting or accepting the null random hypothesis for each random walk, from A to B and from B to C. Where B is a naturally selectable intermediary between A and C. If no selectable intermediary is known, we still have to apply the null random hypothesis to the full walk from A to C.
No. You merely reject the null of "a known selectable intermediary". We do not need to separately postulate mass for every apple before attributing its fall to gravity. We cannot reject the null that this apple fell to earth because it had mass, simply because we have not been able to ascertain that did. What we can do is to say that apples often have mass, and when they do, they fall. That is an extreme example, but the logic is the same. We do not have to say: this protein evolved because it had selectable intermediaries. We can say: many things have selectable intermediaries, and when they do, they can evolve. Unless therefore we have good reason to think that for some reason there were no selectable intermediaries for a given protein, we have no justification for rejecting that hypothesis, and accepting, by default, a hypothesis (design) that is equally without independent support. However, the Darwinian hypothesis actually predicts that there were selectable intermediaries. This means that we can look for evidence of them. We deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO. But we can seek independent evidence, as we can for design. And if we then use some kind of Bayesian inference, we can evaluate the relative credibility of the two hypotheses. Interestingly, this is what Keynes seems to have been getting at!
As explained, I am rejecting a Fisherian null, and then examining the alternative explanations. My priors certainly differ from yours, but they have nothing to do with my Fisherian reasoning (thanks God!).
Yes, I know. And I am saying that in doing so you have only rejected the null of random walk. You have not rejected the null of selectable intermediaries. You do address the latter, but using a different inferential method. The null rejection doesn't do the job. As for the other inferential method, which you now flesh out (thank you!), I'll be brief, as I've got to go:
1) There is no logical reason at all that sequence intermediates between two unrelated sequences should give any reproductive advantage that leads from one sequence to the other. IOWs, variations to one sequence can rarely give a reproductive advantage, but there is no reason at all why they should lead the walk towards a new, unrelated sequence with a completely new function. That is wishful thinking at best, complete folly at worst.
This argument would have merit if you could also show that a random walk through protein space would not go via a great many proteins that confer some advantage at some time. A toy example: there are 1000 possible proteins. Of these, 10 have local function (catalyse something; are chemically active, whatever). And 20 confer some generic advantage (are good for stopping holes in membranes, for instance). If as you say there's no reason to suppose that the 10 locally functional proteins are sequentially related to the 20 generically advantageous functional proteins, the chances that one of the 20 will be on the pathway to one of the 10 is very small. So far so good. However, if 900 of the proteins had some kind of generically advantageous function, the chances that 10 of them will also be sequentially related to the 10 that are locally functional, becomes quite high. So we have a prediction: If the Darwinian hypothesis is correct, and there were selectable precursors to modern functional proteins, then most early simple proteins probably conferred some selective advantage. I don't know the answer. But that null has not been rejected, and I don't see any good reason to think it's less likely than a designer. Anyway, thanks for the conversation, and apologies for getting you to repeat what you've already said!Elizabeth B Liddle
June 21, 2013 at 08:11 AM PDT
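[A minimal sketch, in Python, of the toy example in Elizabeth Liddle's comment above. It assumes proteins are just labels 0..999 with no real sequence structure and treats a "walk" as a uniform random draw of intermediate proteins; the path length, trial count, and the name chance_path_has_selectable_step are illustrative placeholders, not anyone's actual model. It only shows how the fraction of generically advantageous proteins controls how likely a random path is to include a selectable step.]

import random

def chance_path_has_selectable_step(n_proteins=1000, n_advantageous=20,
                                     path_length=5, trials=100_000):
    # Estimate the probability that a random path through the toy protein
    # space passes through at least one "generically advantageous" protein.
    advantageous = set(range(n_advantageous))        # label the first few proteins as advantageous
    hits = 0
    for _ in range(trials):
        path = random.sample(range(n_proteins), path_length)   # a walk with no sequence structure
        if any(step in advantageous for step in path):
            hits += 1
    return hits / trials

# Scenario 1: only 20 of 1000 proteins confer a generic advantage.
print("20/1000 advantageous:", chance_path_has_selectable_step(n_advantageous=20))
# Scenario 2: 900 of 1000 proteins confer some generic advantage.
print("900/1000 advantageous:", chance_path_has_selectable_step(n_advantageous=900))

[With 20 advantageous proteins the estimate comes out near 0.1; with 900 it is essentially 1, which is the contrast the comment draws.]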
CLAVDIVS @109: I absolutely agree. My own position is that the reason the results of evolution look so much like the products of human design is that human designers operate very much like evolutionary processes! Hence the term "neural darwinism". It's probably the easiest kind of design process to evolve. Which is probably why the designer chose it....Elizabeth B Liddle
June 21, 2013 at 07:37 AM PDT
Upright Biped:
“Careless typing” was never the problem, and your wish to see the information system of self-replication resolved by Darwinian mechanisms was something you often repeated.
I have never, ever, suggested that you could produce a system of self-replicators from a system of non-self-replicators by Darwinian evolution. If you thought I suggested such a thing, either I mistyped, or you misread. Clearly it would be an absurd claim, because you have to have self-replicators before you can have Darwinian evolution. By definition. That's why Darwinian evolution can't account for OoL. As I must have said rather often. However, what you did ask me to do was not simply to devise a system whereby self-replicators emerged from non-self-replicators (necessarily by non-Darwinian means), but to have those self-replicators self-replicate by means of some coding protocol in which an inert symbolic/semiotic information transmission medium served to code for some evolutionarily advantageous phenotypic feature, as in the DNA-tRNA-amino acid system in living cells. This, I proposed, would evolve via Darwinian mechanisms once a simple non-inert-symbolic-semiotic-whatsits self-replicating population had got going. Hence my emendation above from "evolve" to "emerge, then evolve". Here was my plan:
1. non-self-replicating vMonomers
2. non-self-replicating vPolymers
3. self-replicating vPolymers
4. self-replicating vPolymers in self-replicating vVesicles
5. self-replicating vPolymers in self-replicating vVesicles with some kind of semiotic information transfer system.
Steps 1-3 must be non-Darwinian. Steps 3-5 can be Darwinian, because once you have self-replication, you can have Darwinian processes. I hope this finally clears up the misunderstanding, and clears me of any charge of inconsistency (apart from the odd typo). Perhaps you misunderstood because you assumed that there could be no self-replication without a semiotic whatsits. But if self-replicators by definition have semiotic whatsits, then I'd be happy to have another go, and if I show that virtual self-replicators can emerge from non-self-replicators, I will have fulfilled the challenge. If not, then first I have to get virtual self-replicators from virtual non-self-replicators by non-Darwinian means, then have the semiotic thing evolve subsequently.Elizabeth B Liddle
June 21, 2013 at 07:34 AM PDT
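[A rough sketch, in Python, of how the five-stage plan in the comment above might be wired up in a toy simulation. Everything here is an assumption for illustration: "molecules" are strings over a four-letter alphabet, a polymer counts as a self-replicator if it happens to contain an arbitrary motif, and survival in the Darwinian phase is deliberately crude. The only point it shows is the division of labour the comment insists on: stages 1-3 (random ligation of monomers into polymers until a self-replicator appears) involve no selection, while stages 3 onward can use replication, mutation, and differential survival.]

import random

ALPHABET = "ABCD"
REPLICATOR_MOTIF = "ABAB"   # arbitrary: any polymer containing this motif counts as a self-replicator

def random_ligation(soup):
    # Stages 1-2 (non-Darwinian): two molecules join at random.
    # No replication, no selection -- just chemistry-like concatenation.
    a, b = random.sample(range(len(soup)), 2)
    soup[a] = soup[a] + soup[b]
    soup[b] = random.choice(ALPHABET)   # a fresh monomer takes the consumed slot

def is_self_replicator(polymer):
    return REPLICATOR_MOTIF in polymer

def darwinian_round(replicators, mutation_rate=0.02, capacity=50):
    # Stages 3-5 (Darwinian): self-replicators copy with occasional mutation,
    # and the population is trimmed back to a fixed capacity. Copies that lose
    # the motif are no longer replicators and drop out.
    offspring = []
    for p in replicators:
        copy = "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                       for c in p)
        offspring.append(copy)
    pool = replicators + [c for c in offspring if is_self_replicator(c)]
    random.shuffle(pool)                 # arbitrary survival among equals in this sketch
    return pool[:capacity]

soup = [random.choice(ALPHABET) for _ in range(200)]     # stage 1: monomers only
while not any(is_self_replicator(p) for p in soup):      # stages 1-3: no selection yet
    random_ligation(soup)
replicators = [p for p in soup if is_self_replicator(p)]
for _ in range(20):                                      # stages 3+: Darwinian dynamics
    replicators = darwinian_round(replicators)
print(len(replicators), "replicators after 20 Darwinian rounds")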
gpuccio and elizabeth: I have always thought the real question is: Did humans, with their capacity for intelligent design, evolve step-wise by Darwinian mechanisms from simpler precursors that did not have the capacity for intelligent design? Assuming arguendo that humans did so evolve, then it is not surprising at all that features of biological life resemble human design, because ex hypothesi the human capacity for design arose from the evolutionary process acting on biological life -- in fact, we would expect to see parallels between human design and biology. So it seems to me both evolutionary theory and intelligent design theory would expect to find analogies between human design and features of biology. Accordingly, I have never really understood why the argument by analogy from human design to intelligent design of life on earth is thought by some to be a knock-down argument.CLAVDIVS
June 21, 2013 at 07:29 AM PDT
CentralScrutinizer- Elizabeth sez that darwinian evolution is not the blind watchmaker. As I have been saying, she doesn't understand darwinian evolution. That is why we need a thread about that before we can discuss anything else.Joe
June 21, 2013 at 07:09 AM PDT
Elizabeth: But what are they? What is your search space? Equiprobable random draw? Because that isn't the Darwinian null.

You may not remember, but I have discussed that in extreme detail with you here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ (all the last part of the thread). Must I say it again? I will, in brief:

a) My model is a random walk from some precursor sequence (a previous gene, or some non-coding DNA sequence) to an unrelated functional gene (a new basic protein domain).

b) The starting hypothesis is that no selectable intermediaries exist, and that the whole walk happens as the consequence of random variation. In that case, each unrelated state has the same probability of being reached, indeed lower than the probability of any related state. So, the probability of a random walk reaching the new functional state is at most the ratio between the functional space and the search space, where the functional space is the number of sequences of that length that exhibit the function, and the search space is the number of possible sequences of that length. That is the concept of dFSI. A good approximation of the dFSI of protein families can be reached by the Durston method. Taking an appropriate threshold of complexity for the biological system on our planet, which IMO can be 150 bits, the null hypothesis of a random origin of the new sequence can easily be rejected.

c) If and when naturally selectable intermediaries are found, the reasoning can be repeated, rejecting or accepting the null random hypothesis for each random walk, from A to B and from B to C, where B is a naturally selectable intermediary between A and C. If no selectable intermediary is known, we still have to apply the null random hypothesis to the full walk from A to C.

Elizabeth: Why do you think that "a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted into a living organism without leaving any trace of the process" is "more credible" than "unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring"?

a) The time is known: each emergence of a new basic protein domain is a time that can be approximately known by studying natural history. Our understanding of the times is constantly improving.

b) We know little of the means, but much can be known as our understanding of molecular biology and of natural history improves. Guided mutation or intelligent selection, for example, are different "means", and they will leave different tracks in the genome and proteome.

c) Traces of the process are abundant. The whole genome and proteome of living beings is a very strong trace. The more we know of it, the more we understand of the design process.

d) "Unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring" is completely non-credible, for two different orders of reasons:

1) There is no logical reason at all that sequence intermediates between two unrelated sequences should give any reproductive advantage that leads from one sequence to the other. IOWs, variations to one sequence can rarely give a reproductive advantage, but there is no reason at all why they should lead the walk towards a new, unrelated sequence with a completely new function. That is wishful thinking at best, complete folly at worst.

2) There is no empirical support for the idea.
Show those intermediaries, if they exist.

Elizabeth: But then we are not rejecting a Fisherian null. We are doing something more like Bayesian inference. And our priors will differ.

As explained, I am rejecting a Fisherian null, and then examining the alternative explanations. My priors certainly differ from yours, but they have nothing to do with my Fisherian reasoning (thank God!).gpuccio
June 21, 2013 at 06:59 AM PDT
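[A back-of-the-envelope sketch, in Python, of the kind of calculation gpuccio describes above: dFSI as the negative log2 of the ratio of functional space to search space, compared against his proposed 150-bit threshold, plus a simplified Durston-style estimate that sums, per aligned site, the drop from the maximum entropy to the entropy actually observed in a protein family. The numbers, the toy "alignment", and the uniform ground state are illustrative assumptions, not the actual Durston calculation.]

import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
THRESHOLD_BITS = 150   # the complexity threshold proposed in the comment above

def dfsi_from_ratio(functional_sequences, sequence_length):
    # dFSI = -log2(|functional space| / |search space|), search space = 20^length.
    search_space_bits = sequence_length * math.log2(len(AMINO_ACIDS))
    functional_bits = math.log2(functional_sequences)
    return search_space_bits - functional_bits

def durston_style_fits(alignment):
    # Simplified Durston-style estimate: at each aligned site, functional
    # constraint = (maximum entropy, log2 20) minus (observed entropy of the
    # residues found at that site across the family).
    ground = math.log2(len(AMINO_ACIDS))
    total = 0.0
    for site in zip(*alignment):               # columns of the alignment
        counts = Counter(site)
        n = len(site)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += ground - h
    return total

# Illustrative numbers only: a 100-residue protein with 10**40 functional sequences.
bits = dfsi_from_ratio(functional_sequences=10**40, sequence_length=100)
print(f"dFSI ~ {bits:.0f} bits; exceeds the 150-bit threshold: {bits > THRESHOLD_BITS}")

# A tiny made-up "alignment" of a 6-residue stretch from a hypothetical family.
toy_alignment = ["ACDEFG", "ACDEYG", "ACDQFG", "ACDEFG"]
print(f"Durston-style estimate: {durston_style_fits(toy_alignment):.1f} fits")

[The sketch only shows the arithmetic of rejecting the pure-random-walk null at 150 bits; whether that is the right null is exactly what the surrounding comments dispute.]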
Liddle: The Darwinian algorithm (again I am not arguing for “neo-” anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature.

Joe: GAs have nothing to do with darwinism. GAs have at least one goal. Darwinian evolution doesn’t have any. The darwinian algorithm is a contradiction of terms.
Right. For the "blind watchmaker evolution" to be analogous to a GA, the laws of the universe would have to have been designed on purpose to favor the creation of functional features that have survival value. Of course, there goes the blind watchmaker right out the door in favor of the goal oriented designer.CentralScrutinizer
June 21, 2013 at 06:54 AM PDT
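[A minimal genetic algorithm, in Python, to make concrete what both commenters above are pointing at: a GA has an explicitly coded goal, here the fitness function (closeness to a chosen target string). The target, mutation rate, and population size are arbitrary illustrations. Whether such a goal-directed fitness function is a fair analogue of natural selection is exactly the point in dispute, and the sketch takes no side on that.]

import random

TARGET = "FUNCTIONAL FEATURE"                   # the explicit goal the GA is written around
CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The GA's "goal", stated in code: count of positions matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return "".join(random.choice(CHARSET) if random.random() < rate else c
                   for c in candidate)

def run_ga(pop_size=100, generations=500):
    population = ["".join(random.choice(CHARSET) for _ in TARGET) for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            return gen, population[0]
        parents = population[: pop_size // 5]   # truncation selection toward the goal
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return generations, max(population, key=fitness)

gen, best = run_ga()
print(f"generation {gen}: {best!r}")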
Why do you think that “a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted into a living organism without leaving any trace of the process”
How do you know no traces were left? And guess how the Stonehenge investigation started? "Some unknown designer, at an unknown time, designed and fabricated Stonehenge for unknown reasons and by unknown processes." Once design is inferred, then we get to the next questions.Joe
June 21, 2013 at 06:52 AM PDT
Dr Liddle at 97. "Careless typing" was never the problem, and your wish to see the information system of self-replication resolved by Darwinian mechanisms was something you often repeated.
Dr Liddle: But there’s no reason (that I can see) to assume that such a system is IC, or, more to the point, “unevolvable”. ... Dr Liddle: I think, though I could be wrong, that his case is that a code can’t evolve because you need the code before you can have the evolution. This is what that old challenge was about. I still think it would be cool to simulate the evolution of code from non-code. ... Dr Liddle: There needs to be a mechanism by which that set, or an equivalent set, of tRNA molecules came to be templated by the DNA, and not some useless set in which one triplet could result in any one of a number of amino acids. So? Why shouldn’t evolutionary mechanisms result in such a set? ... etc, etc, etc
scrap the line of excuses, you were just wrong.Upright BiPed
June 21, 2013 at 06:51 AM PDT
And Elizabeth, ignoring me just makes you willfully ignorant.Joe
June 21, 2013 at 06:49 AM PDT
Elizabeth, Seeing that you do not understand darwinian evolution, it is safe to say that you have no idea what the darwinian null is. As I said you need to start by understanding darwinian evolution. Continuing your misrepresentations of it isn't going to do you any good.Joe
June 21, 2013 at 06:49 AM PDT
gpuccio:
Again, by the dFSCI metrics. No elephant here.
But what are they? What is your search space? Equiprobable random draw? Because that isn't the Darwinian null.
3) For biological objects, you may agree that at present the origin cannot be independently assessed, otherwise we would not be here discussing. As biological objects are the only other objects in the universe that exhibit dFSCI, it is simply natural to propose a design explanation for them. This is a very simple inference by analogy. In the absence of any other credible explanation, this remains the best explanation.
It may well be "simply natural" and I'm not saying it isn't, but the argument you are presenting here is not the rejection of a non-design Fisherian null. It is comparing two hypotheses, for neither of which you have much evidence independent of the explanandum. Why do you think that "a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted into a living organism without leaving any trace of the process" is "more credible" than "unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring"? I do understand your irritation with me, and I'm sure that from each of our points of view we think the other is failing to grasp a simple point! I am perfectly happy with reasoning that puts side by side two alternative explanations for a phenomenon, even though there is no (or little) independent evidence (independent of the explanandum) and evaluates their credibility. But then we are not rejecting a Fisherian null. We are doing something more like Bayesian inference. And our priors will differ.
I try to follow facts and explain them, not to interpret them according to dogmatic philosophical commitments.
Me too. But we need to put our priors on the table. And we cannot derive them from our posterior. As it were :)Elizabeth B Liddle
June 21, 2013 at 06:33 AM PDT
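[A small sketch, in Python, of the "put the priors on the table" point in the comment above: posterior odds are just prior odds times a likelihood ratio, so the same likelihoods yield very different conclusions under different priors, and the priors have to be supplied up front rather than read off the data whose explanation is in dispute. All the numbers are placeholders, not estimates of any real probability.]

def posterior_odds(prior_design, prior_darwinian, lik_design, lik_darwinian):
    # Posterior odds (design : Darwinian) = prior odds x likelihood ratio.
    prior_odds = prior_design / prior_darwinian
    likelihood_ratio = lik_design / lik_darwinian
    return prior_odds * likelihood_ratio

# Purely illustrative likelihoods for the observed data under each hypothesis.
lik_design, lik_darwinian = 1e-3, 1e-6
for prior_design in (0.5, 0.01):
    odds = posterior_odds(prior_design, 1 - prior_design, lik_design, lik_darwinian)
    print(f"prior P(design) = {prior_design}: posterior odds for design = {odds:.1f}")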
This PROVES Lizzie is clueless wrt darwinian evolution:
The Darwinian algorithm (again I am not arguing for “neo-” anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature.
GAs have nothing to do with darwinism. GAs have at least one goal. Darwinian evolution doesn't have any. The darwinian algorithm is a contradiction of terms.Joe
June 21, 2013 at 06:27 AM PDT
Guys, Elizabeth Liddle doesn't even understand Darwinian evolution. So there is no way she understands what is being debated. Until she understands what Darwinian evolution entails, all of you are just wasting your time.Joe
June 21, 2013 at 06:25 AM PDT