
Order vs. Complexity: A follow-up post


NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can [be] generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term “generated” with the term “described” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_s(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_s(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”
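Written out in symbols, the specificity just described is:

sigma = -log2 [ phi_s(T) x P(T|H) ],

with phi_s(T) and P(T|H) as defined above.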

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial z_(n+1) = z_n^2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
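To illustrate how short such an algorithm is, here is a minimal sketch in Python (the escape test |z| > 2 is the standard one; the cap of 100 iterations is an arbitrary cut-off):

def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set.

    Iterates z -> z*z + c starting from z = 0; if |z| ever exceeds 2,
    the orbit is known to diverge and c lies outside the set."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(-1), in_mandelbrot(1))   # True False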

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificity were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they comprise. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)
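To spell out the arithmetic: combining the two experimental estimates quoted above gives a probability of (1/10^11) x (1/10^10) = 1/10^21, which Dr. Kozulic then rounds, in the direction most favourable to chance, to 1 in 10^20.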

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell (Harper One, 2009, p. 106), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)
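To see how little it takes to generate and compress such a sequence, a few lines of Python suffice (the figure of 1,000 repetitions is an arbitrary choice):

import zlib

sequence = "ABC" * 1000                       # 3,000 characters produced by a tiny program
print(len(sequence))                          # 3000
print(len(zlib.compress(sequence.encode())))  # a few dozen bytes: the repetition makes it highly compressible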

There are three problems with this definition. First, it mistakenly conflates physics with mathematics, when it declares that a complex sequence defies expression by “a general law or computer algorithm.” I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates: (a) the repetitiveness of a sequence, with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?
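By way of illustration (a sketch only, with an arbitrary seed and length, and with a general-purpose compressor standing in for Shannon-style compressibility), here is a non-repeating sequence that is generated by a very short algorithm yet compresses poorly:

import random
import zlib

random.seed(42)                               # a short, fully deterministic generating program
digits = "".join(str(random.randint(0, 9)) for _ in range(3000))

print(len(zlib.compress(digits.encode())))    # still well over 1,000 bytes: the compressor finds
                                              # little to exploit, despite the brevity of the program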

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes it as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).
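To make the construction concrete, here is a short sketch in Python of the procedure Dembski describes (the number of blocks printed is an arbitrary choice):

def champernowne_binary(num_blocks):
    """Concatenate the binary strings 0, 1, 00, 01, 10, 11, 000, ... in
    ascending lexicographic order, width by width, as Dembski describes."""
    blocks = []
    width = 1
    while len(blocks) < num_blocks:
        for n in range(2 ** width):
            blocks.append(format(n, "0{}b".format(width)))
        width += 1
    return "".join(blocks[:num_blocks])

print(champernowne_binary(6))   # '0100011011' – the blocks 0, 1, 00, 01, 10, 11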

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered if it is reasonably likely to occur in our universe, and complex if its physical realization in our universe is a very unlikely event.

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)
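To put a number on that example: the probability of obtaining the specific sequence 123456123456 in twelve throws of a fair die is (1/6)^12, or about 1 in 2.2 billion – probabilistically complex, even though the pattern can be described in a short phrase.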

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while others say that some objects in Nature have fractal features, and still others, that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent Design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of an object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a) an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, specification needs not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.
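In the simplest case (a single uniform random draw from the space of sequences, which is of course an idealization), the probability gpuccio is referring to is just the ratio of functional sequences to all possible sequences. For a protein 150 amino acids long (an arbitrary illustrative length), the space contains 20^150, or roughly 10^195, sequences; even if as many as 10^150 of them performed the function, the chance of hitting one of them in a single draw would be only about 1 in 10^45.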

P(T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled Proteins and Genes, Singletons and Species, in which he argued (generously, in his view) that at most 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo; that each species possessed hundreds of isolated proteins called “singletons,” which had no close biochemical relatives; and that the likelihood of these proteins originating by unguided mechanisms in even one species was astronomically low. This makes proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)
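To unpack the arithmetic behind those last figures: if each of 200 unique proteins independently faces a probability barrier of 1 in 10^20, their joint probability of arising by blind search is (1/10^20)^200 = 1 in 10^4,000. Seven such proteins would require (1/10^20)^7 = 1 in 10^140, which still lies within the 10^149 events cited as the probabilistic resources of our universe, whereas eight would require 1 in 10^160, which does not.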

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled Protein Space and Hoyle’s Fallacy – a response to vjtorley, he cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5×10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is specially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remain unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
PPPS: Collins English Dict, default: >> 6. (Electronics & Computer Science / Computer Science (also)) Computing a. the preset selection of an option offered by a system, which will always be followed except when explicitly altered b. (as modifier) default setting>>kairosfocus
June 23, 2013 at 7:04 AM (PDT)
Elizabeth Liddle:
Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors).
That is a lie. So now I understand why Lizzie sez what she does- she just makes it all up and presents it as reality. I challenge Lizzie and all other evos to present the evidence that darwinian processes have been demonstrated to produce IC. If she fails to do so then it is obvious that she is just lying, again.Joe
June 23, 2013 at 7:00 AM (PDT)
PPS: Note, this explicitly accepts that cases of actual design that are too simple to pass the threshold will be assigned chance as default explanation. This is to be ever so sure when the decision design is made.kairosfocus
June 23, 2013 at 6:50 AM (PDT)
PS: Note, again the first default -- assumed so in absence of positive reason to go to an alternative -- is necessity, rejected on high contingency. There are two known sources of such, chance and choice. The second default is chance unless there is a positive reason such as FSCO/I, to decide in favour of choice. That is, an inference to design has to pass TWO positive reasons to reject a default. As has been explained over and over again, but ignored or distorted.kairosfocus
June 23, 2013 at 6:47 AM (PDT)
gpuccio:
Elizabeth: Let’s go to the final point, the most important: why the neo darwinian algorithm is not only unsupported by facts, but also usupported by logic. I will try to be simple and clear. My impression was that, in your initial discussion, you were only suggesting that selectable precursors could exist in the protein space, and that if they were many that would help the evolution of functional proteins. At this point you had not mentioned anything about protein structure and function, as you do in your following post. My answer to that was very simple. Even if many selectable precursors exist in the protein space, there is no reason to think that their distribution favors functional proteins versus non functional states. Therefore, the probability of getting to a functional protein remains the same, whatever the number of selectable intermediaries in the space. IOWs, even if selection acts, it will act as much to lead to non functional states as it does to lead to functional states, and as functional states are extremely rare, the probability of finding them remains extremely low. Is that clear?
I think so, but I may still be misunderstanding you. What you seem to me to be saying would be true if it were the case that there is no correlation between sequence similarity and functionality. So that if sequence ABCDE is functional, sequence ABCED is no more likely to be functional than sequence ZYXWP. However, if similar sequences are likely to confer similar fitness, then what you seem to be saying would not hold. And in fact, similar sequences tend to yield proteins with similar properties. No?
Now, in the following post, you add considerations about protein structure and function. They are not completely clear, but I will try to make my point just the same. Here is your argument:
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I’d say that under the Darwinian hypothesis that is what you’d expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case – similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive.
You seem to imply that, in some way, the relationship between structure and function can “help” the transition to a functional unrelated state. But the opposite is true. Let’s go in order. The scenario is, as usual, the emergence of a new basic protein domain. As I have already discussed with you in the past, we must decide what is our starting sequence. the most obvious possibilities are: a) An existing, unrelated protein coding gene b) An existing, unrelated pseudogene, no more functional c) An unrelated non coding sequence. Why do I insist on “unrelated”? Because othwerwise we are no more in the scenario of the emergence of a new basic protein domain. As I have explained many times, we have about 2000 superfamilies in the SCOP classification. Each of them is completely unrelated, at sequence level, to all the others, as can be easily verified. Each of them has different sequence, different folding, different functions. And they appear at different times of natural history, although almost half of them are already present in LUCA. So, the emergence of a new superfamily at some time is really the emergence of a new functional island. The new functional protein will be, by definition, unrelated at the sequence level to anything functional that already existed. It will have a new folding, and new functions. Is that clear?
Yes, I think so. Let's assume for simplicity that there is only one superfamily of proteins - that all functional proteins share some kind of sequence similarity, but are a tiny proportion of all possible protein sequences. And let's say that phylogenetic analysis shows that the LUCA - the protein at the base of the tree (if there was one - protein sequences might well be the result of HGT as well as LGT) is still quite substantial in length. In other words that the shortest possible extant ancestor of the superfamily is still vastly unlikely, if picked at random from a barrel of all possible sequences. We know that similar, but longer, sequences tend to be functional (or we wouldn't have a superfamily). We do not know, because none exist, whether similar, but shorter (and therefore less improbable in our barrel pick) will also tend to be functional. But is there any reason to think not? I think you essentially address this below:
Now, as usual I will debate NS using the following terminology. Please, humor me.
Of course :) You have done no less for me :)
1) Negative NS: the process by which some new variation that reduces reproductive fitness can be eliminated. 2) Positive NS: the process by which some new variation that confer a reproductive advantage can expand in the population, and therefore increase its probabilistic resources (number of reproduction per time in the subset with that variation). Let’s consider hypothesis a). Here, negative NS can only act against the possibility of getting to a new, unrelated sequence with a new function by RV. Indeed, then only effect of negative NS will be to keep the existing function, and eliminate all intermediaries where that function is lost or decreases. The final effect is that neutral mutations can change the sequence, but the function will remain the same, and so the folding. That is what is expressed in the big bang theory of protein evolution, and explains very well the sequence variety in a same superfamily, while the function remains approximately the same. In this scenario, it is even more impossible to reach a new functional island, because negative NS will keep the variation within the boundaries of the existing functional island. What about positive NS? In this scenario, it can only have a limited role, maybe to tweak the existing function, improve it, or change a little bit the substrate affinity. Some known cases of microevolution, like nylonase, could well be explained in this context. Let’s go now to cases b) and c). In both situations, the original sequence is not transcribed, or simply is not functional. Otherwise, we are still in case a). That certainly improves our condition. There is no more the limitation of negative NS. Now we can walk in all directions, without any worry about an existing function or folding that must be preserved. Well, that’s much better! But… in the end, now we are in the field of a pure random walk. All existing unrelated states are equiprobable. The probability of reaching a new functional island is now the same as in the purely random hypothesis. Your suggestion that some privileged walks may exist between isolated functional islands is simply illogical. Why should that be so? The functional islands are completely separated at sequence level, we know that. SCOP classification proves that. They are also separated at the folding level: they fold differently. They also have different functions. Why in the universe should privileged pathways exist between them? What are you? An extreme theistic evolutionist, convinced that God has designed, in the Big Bang, a very unlikely universe where in the protein space, for no apparent reason, there are privileged walk between unrelated functional islands, so that darwinian evolution may occur? How credible is this “God supports Darwin” game? You, like anyone who finds the neo darwinian algorithm logically credible, should really answer these very simple questions.
First of all, I entirely agree that for Darwinian evolution to occur, there must be a correlation between genotypic similarity and phenotypic similarity. If there is no correlation between genotype and phenotype, then even if there is "heritable variance in reproductive success", then any offspring even slightly different, genetically, from its parent will have no more probability resembling its parent phenotypically than any other possible variant. This is essentially the No Free Lunch argument, and the basis for Dembski's Search for a Search. However, for now, let us merely observe that organisms tend to resemble their parents both genotypically and phenotypically, and that similar genotypes produce similar phenotypes, both at the organism level, and at the gene level - similar sequences tend to produce similar phenotypic effects. If this is not true for proteins - in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. I would certainly agree that the functional connectedness of protein space is potentially interesting issue, and that if you can show that sequence distance (taking into account various mechanism for variation generation including both HGT and LGT) is too poorly correlated with fitness distance for a Darwinian account to be plausible, than, cool. This is, in my view, a much better approach to ID (or rather a much better approach to critiquing Darwinian accounts, because as well as "designer", as a possible inference from a falsification of Darwinian mechanisms, there is also "other factor as yet unknown" - indeed I would class a designer as one such), than trying to compute quantities like dFSCO (as I understand it), which merely tell us what needs to be explained, and are not, in my view, for reasons stated above, themselves evidence for a particular explanation.Elizabeth B Liddle
June 23, 2013 at 6:40 AM (PDT)
F/N: A default is a first resort that is switched away from (a null hyp if you will), not the result of a reasonable trichotomy where we account for A and B as not credibly responsible and back up C with abundant empirical warrant. KFkairosfocus
June 23, 2013 at 6:38 AM (PDT)
@Liddle:
I am calling the “default” what you are left with if you reject other available options.
That's also how programming languages with switch-case-statements specify the "default"-behaviour.JWTruthInLove
June 23, 2013 at 5:53 AM (PDT)
Elizabeth: Always briefly and in no order: At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors). ???? My tool for design inference is, as you well know, dFSCI. What darwinian process has ever been shown to produce dFSCI? I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. ???? What are you saying here? Are you rejecting the whole Popperian theory of science? It seems you are rather confused. The null is never falsified, it is only rejected because improbable. H0 is not falsified, ever. Instead, necessity explanations can be falsified. For example, if I assume that the cause of the effect I observe is X, but further experimentation shows that X does not produce that effect, my H1 is falsified. Popper's point is that if an explanation can never be falsified, for the same nature of the explanation, ot is not a scientific explanation. As I said in my #177, the ID theory is perfectly falsifiable, and is therefore perfectly scientific.gpuccio
June 23, 2013 at 5:28 AM (PDT)
Elizabeth:
Regarding the falsifiability of ID, even if I have not followed the whole discussion, I would say that while a generic hypothesis of design or of a designer is not a scientific issue, and cannot be falsified, ID theory is a scientific theory and can very well be falsified. ID theory states that a designer can be inferred because of specific properties of objects, such as dFSCI. The simple observation of objects exhibiting dFSCI which certainly came into existence without any design intervention would immediately falsify the theory.
I agree.Elizabeth B Liddle
June 23, 2013 at 5:01 AM (PDT)
Kairosfocus:
For, repeatedly and indeed again above, it has been pointed out to you that the design inference is TWICE OVER, not a default. The first being mechanical necessity, overcome through high contingency as opposed to natural regularity such as F = m*a. Secondly, highly contingent outcomes are held the result of chance showing itself in patterns such as statistical distributions, absent the FSCO/I pattern of complex, functional specificity, especially in something like strings of digital code that function as code, beyond 500 bits or the equivalent.
OK, let me make myself clear. Let's say I have a washing machine. If I want a warm wash, I press "warm". If I want a full rinse, I press "full rinse". If I reject both, the machine stays on its factory setting (the "default") which is "eco" (cold water, half rinse). In other words I am calling the "default" what you are left with if you reject other available options. That's the sense in which I mean the EF (and indeed chi) is the "default" - it's what you conclude if you reject, in the case of the EF, Law, and Chance, and, in the case of chi, the null hypothesis of no-design. In other words, it's what's left, once you've rejected other alternatives on offer (spin, full rinse, Law, necessity, the null). But if you don't like the word "default" that is fine. I will avoid it.Elizabeth B Liddle
June 23, 2013 at 4:59 AM (PDT)
gpuccio (getting there, slowly!):
Elizabeth: The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That’s why I’d say the onus (if you want to reject the “null” of selectable precursors) to demonstrate that such precursors are very unlikely. As explained, “selectable precursors” are not a “null”: they are an alternative hypothesis (H1b, not H0).
That's fine. Dembski treats them as a null ("Darwinian or other material mechanisms"). You are not. This is good.
I reject the neo darwinian hypothesis H1b because it is completely unsupported by facts. I have no onus at all. It is unsupported by facts. Period. Show me the facts, and I will change my mind.
At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors).
Moreover, if intermediaries exist, it must be possible to find them in the lab, and to argue about what advantage they could have given.
Possibly, possibly not. If archaic precursor proteins existed that provided reproductive advantage to their bearers in the archaic environment, neither may be available for examination today. That doesn't invalidate the hypothesis, any more than lack of any independent evidence of a designer invalidates the designer hypothesis.
If your point is that: a) Precursors could have existed, but we have no way to find them and: b) Even if we found them, there is no way to understand if they could have given an advantage in “those organisms” and in “that environment”, because we can know nothing of those organism and that environment, then you are typically proposing an hypothesis that can never be falsified. I suppose Popper would say that it is not a scientific hypothesis.
I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. Which is why falsification isn't really how science makes progress (null hypotheses are usually really boring, like "these two samples are drawn from the same population"; "this correlation is not zero".
We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives. The rejection of an alternative is always an individual choice. I would be happy if ID and neodarwinism could coexist as “unrejected alternatives” in the current scientific scenario. That’s not what is happening. Almost all scientists accept the unsupported theory, and fiercely reject the empirically supported one. Using all possible methods to discredit it, fight it, consider it as a non scientific religious conspiration, and so on. Not a good scenario at all, for human dignity.
I absolutely agree with you that there is no scientific reason to reject design, and to claim that the success of the Darwinian model is evidence against a designer is fallacious, in my view. It isn't. However, to say that an inference is invalid (as I consider all ID inferences I have met so far are) is not to say that the conclusion is untrue. Nor is to say that the ID argument against evolution is false is the same as saying that evolution is true. And I do disagree with you sharply that the Darwinian model is unsupported. The Darwinian model provides us with a theoretical mechanism by which information as to how to build an organism that can survive and reproduce in an environment full of threats and resources can be bootstrapped into a genome, and makes empirical predictions that have been repeatedly confirmed. In that sense it is a better theory than Newton's theory of gravity. We still do not have a mechanism for gravity - which means we have no explanation for space-time, and thus no mechanism for existence itself. If I wanted to make an ID argument, I'd say: never mind evolution, explain gravity, materialists!
I am merely arguing against the validity of the arguments for Design that you are presenting. The point is not missed at all. It’s my arguments for design that I defend, and nothing else. And I shall go on not using the capital letter, because I am arguing for some (non human) conscious intelligent being, not for God.
OK. And I see you are prepared to propose specifics (times and means of intervention, for instance). This is good - and potentially allows you to make specific testable predictions. For example, would you not agree that the intentions of the designer could be investigated, and hypotheses developed as to how the genome was physically manipulated?
Actually I accept that the flaw here is not circularity. Thank you for that. It is if you assume that by rejecting the random-draw null you have rejected the non-design null, which I have never done.
That's fine. I assumed that your dFSCI calculation, like Dembski's chi, was based on rejecting a Fisherian null. If it is just a fact that requires explanation (the proportion of functional proteins out of all possible sequences) that's fine.
but as you claim that the inference is rather a "simple inference by analogy", I agree it is not circular. I simply claim what I have always clearly stated for years, here and at TSZ.
OK.
On the other hand, nor is it sound. You are free to believe so, but I see no reason for such a statement. More in the next post.
May need to go to the supermarket, but will be back!
Elizabeth B Liddle
June 23, 2013 at 04:30 AM PDT
Elizabeth: Regarding the falsifiability of ID, even if I have not followed the whole discussion, I would say that while a generic hypothesis of design or of a designer is not a scientific issue, and cannot be falsified, ID theory is a scientific theory and can very well be falsified. ID theory states that a designer can be inferred because of specific properties of objects, such as dFSCI. The simple observation of objects exhibiting dFSCI which certainly came into existence without any design intervention would immediately falsify the theory.
gpuccio
June 23, 2013 at 04:12 AM PDT
Dr Liddle:
it’s an argument I’m making in good faith
I am sorry, but I must disagree, above and beyond the issue of the slanders you have hosted and denied that you have hosted, which sharply reduce your credibility to speak and be taken at face value. For, repeatedly and indeed again above, it has been pointed out to you that the design inference is TWICE OVER, not a default. The first default being mechanical necessity, overcome through high contingency as opposed to natural regularity such as F = m*a. Secondly, highly contingent outcomes are held to be the result of chance showing itself in patterns such as statistical distributions, absent the FSCO/I pattern of complex, functional specificity, especially in something like strings of digital code that function as code, beyond 500 bits or the equivalent. I know for a fact that I have repeatedly pointed this out to you, highlighting the per aspect design inference filter in explanation. That is a matter of FACT, not opinion. You may wish to disagree that there are three main patterns of causes that are empirically relevant, but that is not the same as showing that such a trichotomy is not commonly seen and widely understood. It is further not the case that, after the two defaults, one can reasonably and truthfully say that the inference to design is a default. Finally, if one wishes to suggest a fourth way of causation, per aspect, one needs to warrant it. And a combination of blind chance and mechanical necessity is taken into account under chance, as the necessity is not responsible for the aspect of high contingency, by definition. So, until someone warrants, on observation, a fourth causal pattern, we are well within epistemic rights to reason on the three longstanding, abundantly warranted cases. And to interpret the hope for a fourth pattern as an implicit acknowledgement of the force of the argument, multiplied by a wish not to follow it to its conclusion. Good day, madam. KF
kairosfocus
June 23, 2013 at 04:02 AM PDT
Elizabeth: Some quick comments, for the moment, in no special order:
It's not "propaganda", gpuccio, it's an argument I'm making in good faith.
I was not referring to any bad faith on your part. IMO, the "argument-from-ignorance" issue is darwinist propaganda, and has been for many years. You are certainly repeating it in "good faith", but the argument itself is darwinist propaganda. IMO.
This is a different argument and I agree that in this argument design is not treated as the default.
Thank you. I am not sure at all that you are right about Dembski, but as usual I will stick to my arguments, in the form they have had for years here. So, if you agree that in my arguments there is no default, it's fine with me. I have no time now, so I will take the more serious issues later...
gpuccio
June 23, 2013 at 04:01 AM PDT
Upright Biped:
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
a) Which ID arguments present theology as a defense of their claims? If the answer is none, then your comment was ill-conceived (if we can assume that reasoning which has no meaning to the issue being reasoned over can be classified as ill-conceived).
I didn't claim that anyone presented theology as a defense of their claims (although many have inferred a deity from the perceived evidence for design, and, indeed, frequently accuse "Darwinists" of being resistant to the Design inference because of its theological implications). And my comment was perfectly "well conceived". If a postulated designer is unspecified and may have unlimited powers (as, for example, many postulated deities do), there is no way to falsify such a generic designer hypothesis.
b) ID arguments are not based on a universe that 'just worked'. An argument based on a universe that 'just worked' would not require any evidence for its claim, and indeed, could not provide any evidence because there would be nothing to distinguish evidence from not-evidence.
Exactly. That was precisely my point: that the existence of a deity in no way depends on the Design Inference being valid; conversely, atheism is not justified by the invalidity of a Design Inference. In my view it is no more valid to argue evolution, therefore no god, than it is to argue no evolution, therefore god.
Biological ID is based on the tangible observation of living systems, and cosmological ID is based on a consilience of fine tuning (i.e. not the fact that the universe works, but the parameters in which it works). All of this is evidence that can be argued over, which is effectively the opposite of having an absence of evidence. So again, your comment is ill-conceived if it is to be used as rationale.
I think that you misunderstood the import of my comment.
c) Following on von Neumann, Pattee argued that an iterative symbol system is the fundamental requirement in a process capable of the open-ended evolution that characterizes living things. ID can be falsified by a demonstration that unguided (non-design) forces are capable of establishing an iterative symbol system (a semiotic state) in a local system.
Well, it wouldn't falsify ID. It would merely falsify the specific claim that a designer is a necessary proximal cause for the generation of an iterative symbol system. It wouldn't rule out a designer as a necessary distal cause. For instance, as the cause of a universe with heavy elements including carbon, without which an "iterative symbol system" might (or might not) be impossible.
d) If it is demonstrated that unguided forces can establish this semiotic state in a local system, then ID as a theory would be very effectively falsified, even if the truth of ID remained in question.
I agree that that specific conjecture would be falsified. This is why I think it is useful to have specific ID conjectures.
On the other hand, the proposition of unguided forces as the origin of semiosis is not falsifiable under any circumstances.
I agree. A conjecture has to be specific to be falsifiable - it must make specific predictions. "Unguided forces" is far too vague to be falsifiable.
If mankind should someday create life from non-life, that achievement will not move a hair on the argument's head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don't yet know how these things happen without agency involvement.
Yes indeed. And so agency involvement cannot be rejected.
They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it's based on the speculation of an unknown mechanism, and therefore is subject only to the researcher's imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
Non-specific conjectures ("A designer with unspecified powers"; "unspecified unguided forces") cannot be falsified. We falsify a conjecture when we make a specific prediction based on that conjecture, and that prediction is not confirmed by observation. However, note that all falsifications in science are probabilistic, and even when a prediction is confirmed (with a good p value), there are always potential alternative explanations, including unfalsifiable ones (omphalism, for instance).
So Dr Liddle, just as when you were pondering the evolution of replication machinery, you are once again wrong to the left and wrong to the right. Half of your comment is meaningless and the other half is demonstrably false.
I disagree, for the reasons given above.
Elizabeth B Liddle
June 23, 2013 at 03:54 AM PDT
gpuccio, continuing:
But gpuccio, this then becomes an argument-from-ignorance.
Absolutely not! This is simply neo darwinist propaganda. The scenario is very simple. Design is a credible explanation for dFSCI, because of specific positive empirical observations of the relationship between dFSCI and design. That is a very positive argument for the design inference, and ignorance has nothing to do with it.
It's not "propaganda", gpuccio, it's an argument I'm making in good faith. Design and selectable precursors are both potential explanations for dFSCI; to say reject selectable precursors and infer design on the basis that there are no known selectable precursors is, literally, an "argument from ignorance": "we don't know of any, therefore they don't exist". And it is selective a well - we don't know of any designers around at the time of protein domain emergence, but we can't assume they don't exist, because that, too, would be an "argument from ignorance". (btw, in this context "ignorance" doesn't mean "not knowing what you should know" - it just means "not knowing" - not sure if this translates across the languages!)
Then, there is the attempt of neo darwinists to explain dFSCI in biology by an algorithm based on RV + NS. Without discussing the details (more on NS in a moment), let’s say that such an explanation has no credibility unless selectable intermediaries to all basic protein domains exist. Our empirical data offer at present no support to such an existence, and it is not even suggested by pure logical reasonings.
I agree that the selectable precursors/intermediaries is an alternative explanation. I'm saying that rejecting its credibility because there is no "empirical data" is no more (or less) justified than rejecting "a designer" because there is no empirical data to suggest the existence of a designer. As for logical reasoning - I disagree, but maybe we will get to that later.
Therefore, the situation is as follows:
a) We reject H0 (pure random origin)
b) We have a credible explanation, based on positive empirical observations (design): let's call it H1a
c) We have a suggested alternative explanation, unsupported by any empirical observation (the neodarwinian hypothesis): let's call it H1b
Now, it is obvious to me that H1a is far better than H1b. So, I accept it as the best explanation. You may disagree, but the fact that I reject H1b as a credible explanation because it is unsupported by known facts is in no way an "argument from ignorance": it is simply sound scientific reasoning.
No, gpuccio, I don't think it is "sound scientific reasoning". Let me attempt again to say why, with reference to your nicely laid out chain of reasoning above:
a) We reject H0 (pure random origin)
Of course. Nobody suggests this as a hypothesis anyway, and we can clearly reject it. This leaves:
b) We have a credible explanation, based on positive empirical observations (design): let’s call it H1a
The only reason to grant this explanation credibility is by analogy with human design. I do not reject the argument, but nor do I think it merits the credibility you accord it. And the reasoning, though the conclusion may be correct, is fallacious: A has properties X, Y, and Z but not P or Q; B has properties P and Z, but not X or Y; A is caused by D; therefore B is probably caused by D.
c) We have a suggested alternative explanation, unsupported by any empirical observation (the neodarwinian hypothesis): let’s call it H1b
It is supported by as much empirical observation as the design hypothesis, if not more. We observe that things evolve via selectable precursors, both theoretically and by direct observation. My point is not that we must reject H1a and consider H1b supported. It is that there is no principled reason for rejecting either. In order to choose the better, we'd have to make differential predictive hypotheses about new data: what would we expect if H1a were true that we would not expect if H1b were true, and vice versa? Then go out and look for it. Again, at the risk of being repetitive, let me say: I am NOT arguing that the world in general, or biology specifically, was not designed. I'm not even arguing that we could not test certain design hypotheses. I'm simply arguing that the design inference methodology (Fisherian null hypothesis testing) suggested by Dembski doesn't work in any case where the probability distribution under the null is not calculable, and that the argument by analogy doesn't work either. The conclusion may be true, but the arguments are unsound.
Elizabeth B Liddle
June 23, 2013 at 03:04 AM PDT
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
a) Which ID arguments present theology as a defense of their claims? If the answer is none, then your comment was ill-conceived (if we can assume that reasoning which has no meaning to the issue being reasoned over can be classified as ill-conceived).
b) ID arguments are not based on a universe that 'just worked'. An argument based on a universe that 'just worked' would not require any evidence for its claim, and indeed, could not provide any evidence because there would be nothing to distinguish evidence from not-evidence. Biological ID is based on the tangible observation of living systems, and cosmological ID is based on a consilience of fine tuning (i.e. not the fact that the universe works, but the parameters in which it works). All of this is evidence that can be argued over, which is effectively the opposite of having an absence of evidence. So again, your comment is ill-conceived if it is to be used as rationale.
c) Following on von Neumann, Pattee argued that an iterative symbol system is the fundamental requirement in a process capable of the open-ended evolution that characterizes living things. ID can be falsified by a demonstration that unguided (non-design) forces are capable of establishing an iterative symbol system (a semiotic state) in a local system.
d) If it is demonstrated that unguided forces can establish this semiotic state in a local system, then ID as a theory would be very effectively falsified, even if the truth of ID remained in question. On the other hand, the proposition of unguided forces as the origin of semiosis is not falsifiable under any circumstances. If mankind should someday create life from non-life, that achievement will not move a hair on the argument's head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don't yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it's based on the speculation of an unknown mechanism, and therefore is subject only to the researcher's imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
So Dr Liddle, just as when you were pondering the evolution of replication machinery, you are once again wrong to the left and wrong to the right. Half of your comment is meaningless and the other half is demonstrably false.
Upright BiPed
June 23, 2013 at 02:56 AM PDT
Phinehas:
That seems like a pretty legitimate inference to me. What am I missing? Can you give me an example to work with?
Yes: For all cases of gastric ulcer for which a cause is known, the cause is a bacterium. Therefore, for all cases of gastric ulcer for which no cause is known, the cause must be a bacterium. This could be true - after all, it was a long time before helicobacter was discovered, and it may well be that gastric ulcers for which no trace of bacteria is found are nonetheless caused by a bacterium that we haven't got a good test for yet. But it is not a sound inference. There may be a quite different cause of gastric ulcer in patients for whom the cause is not helicobacter.
Elizabeth B Liddle
June 23, 2013 at 02:13 AM PDT
Hey Liz, nice to see you here! I don't understand this:
If for all cases of X for which a cause is known the cause is Y, you cannot infer that for all cases of X for which no cause is known the cause is also Y.
That seems like a pretty legitimate inference to me. What am I missing? Can you give me an example to work with?
Phinehas
June 22, 2013 at 07:53 PM PDT
As for non-biological self-replicators, there isn't anything to suggest they exist. It usually takes two, one for a template and one for a catalyst, and there is nothing to suggest that, even given those two and plenty of resources, anything else will ever evolve. The point is that there isn't any evidence to support Lizzie's position.
Joe
June 22, 2013 at 07:17 PM PDT
You think Design is reasonable, and that material mechanisms are not. I think material mechanisms are reasonable, and Design is not (or rather interventionist design).
Excuse me but what you think is irrelevant. What has been demonstrated is that darwinian processes are not sufficient for producing protein machinery.
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
And yet we have said how to falsify ID. Also, scientists have said they have falsified it (they were wrong, but you can't have it both ways). But anyway, can you give us a testable hypothesis for the materialist position? That way we will know what you will accept. The reason is that many people have offered up testable design hypotheses. So all we need is hypotheses from your side, so we know what you will accept and we can compare - you know, to see who is really doing science and who is bluffing. So can you produce or not?
Joe
June 22, 2013 at 07:14 PM PDT
And what does the test have to do with biology? I don’t see any practical implications of your arbitrary algorithm. To quote Eric Anderson
Sorry, I disagree completely. The point is to convey/demonstrate the effectiveness of self-replication and heritable variation and not demonstrate it.
computerist
June 22, 2013 at 05:27 PM PDT
Eric:
I was referring to the machines themselves.
OK. In that case, could you rephrase your question?
I’d say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes.
Here you are just lumping the alleged evolutionary processes into the same category as human design. Nonsense. Talk about assuming the conclusion. That is precisely the question at issue: do evolutionary processes have the ability to produce output like human design? And they have never been shown to do so.
I wasn't assuming a conclusion there - I was stating it. I agree it is the question at issue. The provisional conclusion I myself have reached is the one I gave.
No. ID points out that such quality is only known to come from purposeful intelligent activity. And no-one has ever shown that purely natural processes are up to the task. That is what the whole debate is about.
Well, Dembski specifically excluded intention and purpose from his definition of intelligence. But I agree that purposeful intelligent activity is what ID implies (and Dembski implies is implied!)
Why in the world would you have to know the answer beforehand? That certainly doesn’t follow logically. As long as the calculation is based on reasonable estimates and includes information that we do know, it can allow us to draw a reasonable inference based on the current state of knowledge. We certainly know enough about biology at this point to start making some calculations and drawing some reasonable conclusions. No-one has ever claimed that the exact probabilities are known with precision. And they need not be.
Because that is how Dembski defines chi - in terms of Fisherian null hypothesis testing. If the null can be rejected, at an alpha of 1/10^150, then design is inferred. If we cannot compute the null, then the calculation is meaningless. If the P(T|H) is just a guess, based on your view that the probability of the Target, given Darwinian processes or other material mechanisms, is extremely low, then, clearly, the calculation will output "Design". However, if your view is that the probability of the Target, given Darwinian processes or other material mechanisms, is quite high, then the calculation will not output "Design". In other words, what the chi calculation outputs is entirely dependent on your estimate for how likely non-Design mechanisms are to have resulted in the Target. So it doesn't tell you anything that you didn't already think. You think Design is reasonable, and that material mechanisms are not. I think material mechanisms are reasonable, and Design is not (or rather interventionist design). One of us may be right and the other wrong, but all the chi calculation will do is tell us what we already think. You could call it an instance of "conservation of information" :)
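A minimal sketch of the dependence described here, assuming the form Dembski gives in his Specification paper, chi = -log2(10^120 * phi_s(T) * P(T|H)), with design inferred when chi exceeds 1. The value of phi_s and the two rival estimates of P(T|H) below are invented purely to show that the verdict tracks whichever estimate is fed in.

# A sketch of how the chi verdict tracks the assumed P(T|H).
# Formula assumed (from Dembski's 2005 "Specification" paper):
#     chi = -log2( 10^120 * phi_s(T) * P(T|H) ),  design inferred when chi > 1.
# The numbers below are hypothetical, chosen only for illustration.
from math import log2

def chi(phi_s, p_t_given_h, resources=10**120):
    return -log2(resources * phi_s * p_t_given_h)

phi_s = 10**20   # hypothetical count of patterns at least as simply describable

p_low = 10**-250   # an ID proponent's estimate: target astronomically improbable
p_high = 10**-30   # a critic's estimate: selectable precursors make it likely enough

for label, p in [("low P(T|H) estimate", p_low), ("high P(T|H) estimate", p_high)]:
    value = chi(phi_s, p)
    verdict = "design inferred" if value > 1 else "design not inferred"
    print(f"{label}: chi = {value:.1f} -> {verdict}")

With the low estimate the sketch outputs "design inferred", with the high estimate it does not, which is the point being made: the calculation returns whatever the chosen P(T|H) already implies.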
Well, you’re back to your long-standing concern about being able to “precisely define the probability distribution.” Yet you freely admit that such a calculation is not needed in other instances (archaeology, forensics, etc.). So you are imposing an a priori different demand for what counts as evidence or what tool can be applied to infer design when it comes to living systems.
No, because that is not the method applied in other instances, or rather, it is only used when you can define the probability distribution under a relevant null. For instance, I'd be perfectly happy to use chi (actually something much simpler with a much more lenient alpha) to test the hypothesis that a casino had rigged the odds. That's because we know precisely what the probability distribution is under the null of no rigging. But I wouldn't use it for archaeology, or most forensics, or SETI. I'd use something else. My point is really very simple, and not even an attack on ID as a hypothesis: chi, as a concept, is an invalid way to test the ID hypothesis. So is inferring ID from the fact that biological organisms share some important properties with human artefacts. But there are plenty of other ways of setting about finding evidence for an interventionist designer. (Note the qualification "interventionist" - not all designers will leave evidence).
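The casino case can be made concrete precisely because the null distribution is fully known: under the hypothesis of a fair die, the number of sixes in n rolls follows a Binomial(n, 1/6) distribution. The counts below are invented, and the sketch assumes SciPy is available.

# A sketch of the casino example: the null (a fair die) has a known
# probability distribution, so it can be tested directly. Counts are invented.
from scipy.stats import binom

rolls = 600
sixes_observed = 140          # a fair die would give about 100 on average

# P(X >= 140) under Binomial(600, 1/6): the one-sided p-value for "too many sixes".
p_value = binom.sf(sixes_observed - 1, rolls, 1/6)
print(f"p-value under the fair-die null: {p_value:.2e}")
if p_value < 0.001:
    print("Reject the null of a fair die at alpha = 0.001")

The contrast being drawn in the comment is that no comparably well-defined null distribution is available for the biological case.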
Now, we have a couple of possibilities: It could be that your position is simply based on a refusal to consider design in living systems. Some might be forgiven for thinking this is the case.
Well, I don't find the idea of interventionist design a very attractive one, it's true. But that wouldn't stop me considering it, if I found the arguments or evidence persuasive or valid. On the other hand, I should point out that for half a century I was perfectly happy to assume that the world had been brought into being by an omniscient, omnipotent and benevolent deity who designed it so beautifully that its workings could be discerned by science.
Or, perhaps, it could be that you are aware of some other calculation or some other “tool,” as you say, that will allow us to determine whether a particular living system was designed.
I can certainly suggest other approaches, but they are unlikely to be definitive. I don't think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked. But it might be possible to set up specific testable hypotheses for an interventionist designer. Failure to demonstrate an interventionist designer would not, however, allow us to conclude either that there is no interventionist designer or that there is no designer at all.
Please let us know what your proposed calculation or proposed tool is.
I'd actually be happy to do so. I'd start with some specific design hypothesis, such as Front Loading. But I should reiterate that I do not, myself, think that an interventionist designer is a terribly promising hypothesis.
Elizabeth B Liddle
June 22, 2013 at 05:05 PM PDT
gpuccio: thanks for these responses. I'm going to have to take them in bite-size chunks, as I have visitors at the moment.
Another point I want to stress is that design is never inferred “by default”. I don’t even understand what you mean by such a strange wording.
Dembski sets up his null such that if it is rejected, he infers design. That is what I meant by "default".
Design is inferred because we observe dFSCI, and because we know that design is the only observed cause of dFSCI in all testable cases.
This is a different argument and I agree that in this argument design is not treated as the default. I suggest it is nonetheless fallacious, for reasons I gave above. If for all cases of X for which a cause is known the cause is Y, you cannot infer that for all cases of X for which no cause is known the cause is also Y. You would not do it for a disease, and there is no justification for doing it for anything else!
That makes a design inference perfectly reasonable. There is no default here, only sound reasoning.
I agree you are not defaulting to design (as Dembski does, in both the EF and CSI) but it is not sound reasoning.
Design is a very good explanation for any observed dFSCI. The fact that no other credible explanation is available makes design the best available explanation, not certainly “a default inference”.
There is a perfectly good alternative, namely, that the proteins evolved via selectable precursors. You may find an interventionist designer more credible than selectable precursors, but there is no intrinsic reason to choose one rather than the other, given that we have no independent evidence (on the table) for either a designer or selectable precursors. Or, if there is, I am not seeing the reasoning :) More in a bit. Cheers, Lizzie
Elizabeth B Liddle
June 22, 2013 at 04:31 PM PDT
@AF & kf: First nazis (AF), then 1984 (kf). The weird desire of darwinists and trinitarians to use tyranny inspired polemic to their own benefit never ceases to amuse this onlooker.JWTruthInLove
June 22, 2013
June
06
Jun
22
22
2013
04:28 PM
4
04
28
PM
PDT
@computerist:
probably should be the same as initial program
Why? And why do you need to execute some random code-string? Obviously, it depends on the compiler (or the language) whether it accepts any combination of a binary-string or not. And what does the test have to do with biology? I don't see any practical implications of your arbitrary algorithm. To quote Eric Anderson
It is very nice that people can write a computer program, with very little relevance to actual biology, and see it do something. That is a good exercise in programming experience, and it is fun to see your program do something.
JWTruthInLove
June 22, 2013 at 04:09 PM PDT
AF: You have simply compounded your errors and those of your colleagues, reminding me of 1984's lesson on the twisting of language as a step to tyranny. The long train of radical abuses and usurpations is ever more evident, and where it will end if unchecked is all too plain. Thank you for the inadvertent warning. Let me give you Webster's 1828 on liberty, with particular emphasis on the distinction from license:
liberty
LIB'ERTY, n. [L. libertas, from liber, free.]
1. Freedom from restraint, in a general sense, and applicable to the body, or to the will or mind. The body is at liberty, when not confined; the will or mind is at liberty, when not checked or controlled. A man enjoys liberty, when no physical force operates to restrain his actions or volitions.
2. Natural liberty, consists in the power of acting as one thinks fit, without any restraint or control, except from the laws of nature. It is a state of exemption from the control of others, and from positive laws and the institutions of social life. This liberty is abridged by the establishment of government.
3. Civil liberty, is the liberty of men in a state of society, or natural liberty, so far only abridged and restrained, as is necessary and expedient for the safety and interest of the society, state or nation. A restraint of natural liberty, not necessary or expedient for the public, is tyranny or oppression. civil liberty is an exemption from the arbitrary will of others, which exemption is secured by established laws, which restrain every man from injuring or controlling another. Hence the restraints of law are essential to civil liberty.
Now, understand the chaos that radicals are about to plunge our civilisation into by playing with the fire of destabilising the family and forcing men to stand on conscience-backed principle at any cost. A lesson Antiochus Epiphanes should have heeded before he decided to paganise the Judaeans by dint of state power. KF
kairosfocus
June 22, 2013 at 03:53 PM PDT
@160, probably should be the same as initial program. If Python is used (interpreted), then exec/execfile/eval come to mind.
computerist
June 22, 2013 at 03:07 PM PDT
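A minimal sketch of the exec/eval approach mentioned above, assuming Python 3 (execfile existed only in Python 2): random character strings are offered to the interpreter, which accepts only the fraction that compile and run without error. The character set, string length, and trial count are arbitrary choices for illustration.

# A sketch of executing generated code strings with exec, as suggested above.
# Most random strings are rejected by the interpreter at the compile stage.
import random
import string

random.seed(0)
CHARS = string.ascii_lowercase + string.digits + " =+*()\n"

def try_execute(code_string):
    """Return True if the string both compiles and executes without error."""
    try:
        compiled = compile(code_string, "<generated>", "exec")
        exec(compiled, {})          # run in an empty namespace
        return True
    except Exception:
        return False

trials = 10_000
length = 12
accepted = sum(
    try_execute("".join(random.choice(CHARS) for _ in range(length)))
    for _ in range(trials)
)
print(f"{accepted} of {trials} random {length}-character strings were executable")

This also speaks to the question raised in the next comment: how permissive the interpreter or compiler is determines which strings count as "accepted" at all.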
@computerist: It was not an objection. My point is stated in the question "What does the interpreter or compiler which executes the file look like?"
JWTruthInLove
June 22, 2013 at 02:56 PM PDT
… anyone can write an interpreter or compiler that accepts any given language without errors.
So? I'm not getting your point here of how it's an objection of any sort. Perhaps you can explain a bit more.
computerist
June 22, 2013 at 02:51 PM PDT