
Order vs. Complexity: A follow-up post


NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term “generated” with the term “described” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_s(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_s(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”
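For readers who prefer symbols, the specificity just described can be written as follows (my own transcription of the verbal definition above, nothing more):

```latex
\sigma = -\log_2\!\left[\varphi_S(T)\cdot P(T\mid H)\right]
```

where \varphi_S(T) counts the patterns whose semiotic description is at least as simple as that of T, and P(T|H) is the probability of T under the relevant chance hypothesis.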

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial z_{n+1} = z_n^2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
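To underline just how little machinery the construction requires, here is a minimal sketch (my own illustration, not part of the Wikipedia article) of a membership test based on the iteration quoted above:

```python
# A minimal membership test for the Mandelbrot set: c belongs to the set if the
# orbit of 0 under z -> z*z + c stays bounded (here: for max_iter steps).
def in_mandelbrot(c: complex, max_iter: int = 100, bound: float = 2.0) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:   # the orbit has escaped, so c is outside the set
            return False
    return True              # the orbit stayed bounded for every step we checked

print(in_mandelbrot(-1))  # True: 0, -1, 0, -1, ... stays bounded
print(in_mandelbrot(1))   # False: 0, 1, 2, 5, 26, ... diverges
```

A handful of lines suffices to generate the whole pattern, which is precisely the sense in which fractals are mathematically easy to generate.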

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificity were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they comprise. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)
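Spelling out the combination of the two figures quoted above (a restatement, using only the numbers in the passage):

```latex
\frac{1}{10^{11}} \times \frac{1}{10^{10}} \;=\; \frac{1}{10^{21}} \;\approx\; \frac{1}{10^{20}}\quad\text{(rounded, as Kozulic does, to the more generous figure)}
```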

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I now find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell (Harper One, 2009, p. 106), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)
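Before turning to the problems with this definition, here is a small sketch of my own (not taken from Dr. Meyer’s book) of the point about compressibility: a one-line rule generates a long repeating sequence, and a general-purpose compressor collapses it to almost nothing, whereas a patternless sequence of the same length barely compresses at all.

```python
import os
import zlib

# An ordered sequence: one short rule ("repeat ABC") yields 3,000 characters,
# and that regularity makes the result extremely compressible.
ordered = b"ABC" * 1000
print(len(ordered), len(zlib.compress(ordered)))          # 3000 -> a few dozen bytes

# For contrast, 3,000 bytes with no short generating rule barely compress.
patternless = os.urandom(3000)
print(len(patternless), len(zlib.compress(patternless)))  # 3000 -> roughly 3000
```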

There are three problems with this definition. First, it mistakenly conflates physics with mathematics, when it characterizes a complex sequence as one that defies expression by “a general law or computer algorithm.” I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates: (a) the repetitiveness of a sequence, with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?
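Here is a concrete case of that difficulty (my own toy example, not drawn from any of the authors quoted): the output of a seeded pseudo-random generator is produced by a very short program, yet it neither repeats nor yields to a general-purpose compressor.

```python
import random
import zlib

# A seeded pseudo-random generator is a *short* program, yet its output neither
# repeats nor compresses. Which label, "order" or "complexity", should it get?
random.seed(42)
seq = bytes(random.randrange(256) for _ in range(3000))

print(len(zlib.compress(seq)))  # roughly 3000 bytes: incompressible in practice,
                                # even though the few lines above regenerate it exactly
```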

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes it as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).
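To show just how short the generating algorithm is, here is one possible rendering of the construction Dembski describes (the code and function name are mine, offered purely for illustration):

```python
from itertools import count, islice

def champernowne_binary():
    """Yield the binary Champernowne sequence digit by digit: all one-digit
    binary numerals (0, 1), then all two-digit numerals (00, 01, 10, 11),
    then all three-digit numerals, and so on indefinitely."""
    for width in count(1):
        for n in range(2 ** width):
            yield from format(n, "0{}b".format(width))

print("".join(islice(champernowne_binary(), 34)))
# -> 0100011011000001010011100101110111
```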

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered if it is reasonably likely to occur in our universe, and complex if its physical realization in our universe is a very unlikely event.

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while others say that some objects in Nature have fractal features, and still others, that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.
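By way of illustration (my own toy example, not Professor Shallit’s), here is the “description = program plus input” idea in miniature, with a short Python function standing in for the Turing machine T:

```python
import inspect

# In the Kolmogorov setting, a description of x is a machine T plus an input i
# such that T(i) = x, and its size is |T| + |i|. A Python function plays T here.
def T(i: int) -> str:
    return "ABC" * i

i = 1000
x = T(i)  # the 3,000-character object being "described"

description_size = len(inspect.getsource(T)) + len(str(i))
print(len(x), description_size)  # a 3000-character object, described in ~50 characters
```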

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent Design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of an object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a) an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, specification need not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.
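For illustration only, the core idea in that comment (how small is the functional subset relative to the search space?) can be cast in a few lines of code. Every number below is an assumption chosen for the example, not a measured value.

```python
import math

sequence_length = 150
search_space = 20 ** sequence_length   # all amino-acid sequences of that length
functional_sequences = 10 ** 30        # assumed size of the subset giving the function

# Functional complexity in bits: how improbable it is to land in the subset by chance.
functional_bits = -math.log2(functional_sequences / search_space)
print(round(functional_bits))          # about 549 bits for these assumed figures

threshold_bits = 500                   # a commonly cited universal threshold
print(functional_bits > threshold_bits)  # True: an "utterly unlikely" subset
```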

P(T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled, The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled, Proteins and Genes, Singletons and Species, in which he argued (generously, in his view) that at most, 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo, that each species possessed hundreds of isolated proteins called “singletons” which had no close biochemical relatives, and that the likelihood of these proteins originating by unguided mechanisms in even one species was astronomically low, making proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)
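As a quick check on the arithmetic in the excerpts above (a restatement using only the figures quoted):

```latex
5.1\times 10^{14}\ \mathrm{m^2} = 5.1\times 10^{20}\ \mathrm{mm^2};\qquad
\frac{10^{20}\ \text{squares}}{10^{3}\ \text{squares/s}} = 10^{17}\ \mathrm{s} \approx 3.2\times 10^{9}\ \text{years}
```

```latex
\left(\tfrac{1}{10^{20}}\right)^{200} = \tfrac{1}{10^{4000}};\qquad
\left(10^{20}\right)^{k} \le 10^{149} \;\Rightarrow\; k \le 7
```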

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled, Protein Space and Hoyle’s Fallacy – a response to vjtorley, he cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled, De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5×10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is specially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remains unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
You further know that I made a valid historical parallel to how the ordinary German people who went along quietly with what went on under the 3rd Reich were made to do tours of shame to learn what they had been in denial about in order to begin the de-nazification process. I showed a famous photo of a critical moment in that at Buchenwald.
Invalid! Apart from the insertion of this word as justification for your behaviour, you know what you did. Far from being a person who I would have thought, given their cultural background, would wish to oppose all forms of oppression, you, on the contrary, want to oppress others. Your attitude to gay people I find particularly offensive.

Alan Fox, June 23, 2013 at 04:14 PM PDT
:)

Upright BiPed, June 23, 2013 at 03:37 PM PDT
Great to talk to you gpuccio. I'm fairly occupied for the next week or so (big family gathering next weekend), but let's stay in touch :) Cheers, Lizzie

Elizabeth B Liddle, June 23, 2013 at 02:48 PM PDT
So… ID arguments do not posit theology, the validity of design arguments do not rest upon the existence of a deity,
Indeed.
you disagree with Dover,
That does not follow, and I do not have a legal opinion. I am not a US citizen, and far from forbidding religion in education, it is mandated in the UK, so that would not even arise as an issue. The quality of the science would.
specific design hypotheses cannot be rejected as non-falsifiable,
Well, I wouldn't reject them even if they were not falsifiable, but I certainly agree that specific falsifiable design hypotheses are possible to formulate.
and you were able to characterize the null.
For the challenge you set? I characterised a null. I'm not sure we agreed on it.
So (using your own notion of “unguided forces”) that leaves only the observation you have yet to speak of, which we can just let lay if you wish: If mankind should someday create life from non-life, that achievement will not move a hair on the argument’s head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don’t yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it’s based on the speculation of an unknown mechanism, and therefore is subject only to the researcher’s imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
Sorry, I'm not sure what you are asking me here. Could you clarify? And which argument's "head" are you talking about exactly? I don't really understand what you are getting at here.

Elizabeth B Liddle, June 23, 2013 at 02:46 PM PDT
Elizabeth: I agree with you to leave it there "for a bit", for the sake of our personal balance, and also because my ego is so satisfied of having obtained a "bravo" from you! :) I would just add, regarding this observation: "I don't know, gpuccio. But you seemed to be suggesting that there was no evidence for evolutionary relationships within superfamilies, and it seemed to me there was quite a lot, that indeed the concept is based on the inferred phylogenies." that I never intended that. I chose the example of superfamilies exactly because we can be rather sure that they are completely disconnected, while within a superfamily, and even more within a family, homologies and similarities of folding and function are much more evident.

gpuccio, June 23, 2013 at 02:39 PM PDT
Dr Liddle, at 207: So... ID arguments do not posit theology, the validity of design arguments do not rest upon the existence of a deity, you disagree with Dover, specific design hypotheses cannot be rejected as non-falsifiable, and you were able to characterize the null. So (using your own notion of "unguided forces") that leaves only the observation you have yet to speak of, which we can just let lay if you wish: If mankind should someday create life from non-life, that achievement will not move a hair on the argument's head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don't yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it's based on the speculation of an unknown mechanism, and therefore is subject only to the researcher's imagination. It simply cannot be tested, which is the very definition of non-falsifiable.

Upright BiPed, June 23, 2013 at 01:12 PM PDT
I will not go on with the epistemological issue. I think you make a great confusion, mix the null hypothesis with the alternative hypotheses, and reason in Bayesian terms for a Fisherian setting. But frankly, I cannot spend further time on that, I don't believe you will ever be convinced.
Fair enough, but bear in mind the possibility that the person confused may not be me :) I'm by no means an enthusiast of Fisherian null hypothesis testing, but I think it's as near as we get to Popperian falsification, which is why I think Dembski favored it. Neyman-Pearson is similar, but you don't falsify as with Fisher. I'd say that the way science is actually done today is not by falsification but by comparing model fits: if model A is a better fit to the data than model B, we prefer model A. But no models are perfect fits - all models are at best incomplete.
You ask for a definition of dFSCI. I have given it I don’t know how many times.
Thanks, gpuccio. But recall I do not read every post (indeed I have only just been unbanned here) and I have an aging brain. A link would have been fine, and if you'd like me to post this for reference at TSZ I'd be delighted to do so.
However: dFSCI is a subset of CSI characterized by the following:
a) It applies only to objects where a digital sequence can be read in some way (digital information).
b) The specification is exclusively functional. A conscious observer can objectively define any function he likes for the sequence observed in the object, and offer a way to measure the function itself, and a threshold for the function, so that the function can be expressed as a binary variable (absent / present) for any possible digital sequence in an object.
c) We compute the functional complexity linked to the function so defined: it expresses the minimal number of bits necessary to provide the function, and is computed as the ratio of the functional space (number of sequences that provide the function) to the search space (total number of possible sequences). The computation is usually made by fixing the sequence length and using some approximations, like the Durston method for protein families. Repetitive sequences, or sequences that can be generated by known algorithms, will be considered as having the dFSI of the generating algorithm in that system (that is, we consider the Kolmogorov complexity of the observed sequence).
d) We fix an arbitrary threshold to transform the computation of dFSI in bits into a binary value (dFSCI: absent / present). The threshold must take into account the system we are observing, the time span allowed for the emergence of the object and the probabilistic resources available in that system in that time span (the number of states that can be tested). For the biological system in our planet, I have suggested a threshold of 150 bits. 500 bits should be a sufficient threshold for any system.
e) Our null (H0) is that the sequence originated as a random outcome in the system. The presence of dFSCI allows us to reject that null.
f) Then we take into consideration, as alternative hypotheses, ID (the only well known generator of dFSCI). However, any non design alternative hypothesis that is explicitly formulated can be taken into consideration, before choosing design as the best explanation. Any hypothesis that can explicitly explain the outcome on the basis of what already is available in the system is welcome, and will be accepted or refuted according to its explanatory merits (not to a probability). If the alternative hypothesis includes random steps, those steps can be evaluated again by the dFSCI tool.
This, in brief, and with some clarifications about the aspects you stressed in the last posts.

Thank you! For what it's worth, I think that's by far the best of the various chi-derivatives I've read! It allows us to reject a perfectly well-characterised null, even if it's a null nobody actually proposes ;) And then moves to a different method for comparing alternative explanations for the explanandum, dFSCI itself. Bravo! And perhaps we'd better leave it there for a bit, as I have some Penguins to attend to, not to mention Real Life :) Thanks a lot. Lizzie

Elizabeth B Liddle, June 23, 2013 at 01:03 PM PDT
gpuccio
Elizabeth: So, what is your hypothesis? That there were a group of connected ancestors that gave rise to 2000 disconnected superfamilies? And how were these “ancestors” connected? At the sequence level? Did they fold similarly, Did they have similar functions?
I don't know, gpuccio. But you seemed to be suggesting that there was no evidence for evolutionary relationships within superfamilies, and it seemed to me there was quite a lot, that indeed the concept is based on the inferred phylogenies. But I don't see why separate roots for separate superfamilies is particularly unlikely.
So, how can you explain that a group of proteins connected at sequence level, and with similar folding, gave rise to 2000 separated superfamilies, that bear no sequence connection one with the other, that fold in completely different ways, that have different functions?
The superfamilies may well be separately rooted (emerge from different parts of the genome), but I am not yet persuaded that they couldn't have roots that go back to simpler yet selectable precursors.
Ancestors that have never been observed, neither as "remnants" in the proteome nor in the lab?
Why would they still be extant? If current theory is correct, LUCA lived several hundred million years after abiogenesis.
Is this an explanation, in your mind?
I just don't see a reason to favour an interventionist designer as an alternative to it. Just because we can't see back beyond a certain point, doesn't mean we need to discount all evolution before that point. It would be, in my view, like assuming that because we can't see beyond the bend in the river, that the river source is just beyond that bend. But do please link me your definition of dFSCI. I'd like to see it. Cheers, Lizzie

Elizabeth B Liddle, June 23, 2013 at 12:47 PM PDT
Elizabeth: Oops! Accidental incomplete posting... I go on: Is this an explanation, in your mind? And you say you have no "darwinian faith"! I will not go on with the epistemological issue. I think you make a great confusion, mix the null hypothesis with the alternative hypotheses, and reason in Bayesian terms for a Fisherian setting. But frankly, I cannot spend further time on that, I don't believe you will ever be convinced. You ask for a definition of dFSCI. I have given it I don't know how many times. However: dFSCI is a subset of CSI characterized by the following:
a) It applies only to objects where a digital sequence can be read in some way (digital information).
b) The specification is exclusively functional. A conscious observer can objectively define any function he likes for the sequence observed in the object, and offer a way to measure the function itself, and a threshold for the function, so that the function can be expressed as a binary variable (absent / present) for any possible digital sequence in an object.
c) We compute the functional complexity linked to the function so defined: it expresses the minimal number of bits necessary to provide the function, and is computed as the ratio of the functional space (number of sequences that provide the function) to the search space (total number of possible sequences). The computation is usually made by fixing the sequence length and using some approximations, like the Durston method for protein families. Repetitive sequences, or sequences that can be generated by known algorithms, will be considered as having the dFSI of the generating algorithm in that system (that is, we consider the Kolmogorov complexity of the observed sequence).
d) We fix an arbitrary threshold to transform the computation of dFSI in bits into a binary value (dFSCI: absent / present). The threshold must take into account the system we are observing, the time span allowed for the emergence of the object and the probabilistic resources available in that system in that time span (the number of states that can be tested). For the biological system in our planet, I have suggested a threshold of 150 bits. 500 bits should be a sufficient threshold for any system.
e) Our null (H0) is that the sequence originated as a random outcome in the system. The presence of dFSCI allows us to reject that null.
f) Then we take into consideration, as alternative hypotheses, ID (the only well known generator of dFSCI). However, any non design alternative hypothesis that is explicitly formulated can be taken into consideration, before choosing design as the best explanation. Any hypothesis that can explicitly explain the outcome on the basis of what already is available in the system is welcome, and will be accepted or refuted according to its explanatory merits (not to a probability). If the alternative hypothesis includes random steps, those steps can be evaluated again by the dFSCI tool.
This, in brief, and with some clarifications about the aspects you stressed in the last posts.

gpuccio, June 23, 2013 at 12:24 PM PDT
KF:
Dr Liddle: Kindly stop playing games, you have seen how the probability you have decided to turn into a talking point is actually part of an information measure and so a laid measure of info in the system will automatically take it into account. If you do not understand the way info is measured then please do a tutorial. KF
I am not "playing games" KF. It's a simple question. I know how information is measured. In fact I know several ways. But to compute chi you need a value for P(T|H) where, according to Dembski, H is the "relevant chance hypothesis taking into account Darwinian and other material mechanisms". All I want to know is how you compute the probability of T given that hypothesis. I know how to compute it where H is "random draw" or "random walk". What I want to know is how you compute it for other non-design hypotheses. How is it "automatically taken into account"?

Elizabeth B Liddle, June 23, 2013 at 12:07 PM PDT
Elizabeth: So, what is your hypothesis? That there were a group of connected ancestors that gave rise to 2000 disconnected superfamilies? And how were these "ancestors" connected? At the sequence level? Did they fold similarly? Did they have similar functions? So, how can you explain that a group of proteins connected at sequence level, and with similar folding, gave rise to 2000 separated superfamilies, that bear no sequence connection one with the other, that fold in completely different ways, that have different functions? Ancestors that have never been observed, neither as "remnants" in the proteome nor in the lab? Is this an explanation, in your mind? A

gpuccio, June 23, 2013 at 12:01 PM PDT
Upright Biped:
Then it should be easy to answer the question: “Which ID arguments present theology as a defense of their claims?”
None that I am aware of. Which is why I didn't claim that any did.
Then the validity of the design inference is in no way dependent upon the existence of a deity, is it? Take a poll at TSZ and test out that nugget on your followers. See who else disagrees with Dover.
Short answer: no.
It wouldn’t eliminate the possibility of undetectable design in nature, but ID as a biological theory based on the ability to detect design, would be dead.
Well, that hypothesis would be dead, yes. There could be others. But that's the point of hypothesis testing: deriving testable hypotheses from a larger explanatory framework. I have never said, and don't think it is true, that specific design hypotheses are not falsifiable. All hypotheses have to be specific to be falsifiable - they have to be capable of being cast as a null.
This is the same conjecture you just agreed is specific enough to be falsified. The only difference between the two is that the proposed cause of “design” has replaced the proposed cause of “unguided forces”. If we merely insert the criteria for “unguided forces” which you intended to simulate in your demonstration (and we assume you knew what you meant by those criteria) then your concerns over the term would obviously be satisfied. Now, how would you falsify the claim?
The issue is that you have to be able to characterise your null. In my proposed demonstration, the null hypothesis (what I was aiming to falsify) was that in the absence of a designer, or intentional guiding force, self-replicators would not emerge from a population of non-self-replicators.
If your simulation had succeeded, you intended to a) ignore that result given the fact that you created the simulation, or b) state that your demonstration was valid because you properly modeled the environment, etc?
No, all I would have rejected is the null hypothesis that self-replicators cannot emerge from non-self-replicators unguided. I would not have rejected the hypothesis that the conditions necessary for self-replicators to emerge spontaneously (unguided) from non-self-replicators must themselves be designed.

Elizabeth B Liddle, June 23, 2013 at 11:56 AM PDT
Af: Kindly stop misleading people further and excusing the inexcusable. You and your ilk full well know -- though I doubt you will admit -- that there has been a major problem of enabling behaviour for abuse of design thinkers in the academy and wider education systems amounting to a witch hunt. You further know that I made a valid historical parallel to how the ordinary German people who went along quietly with what went on under the 3rd Reich were made to do tours of shame to learn what they had been in denial about in order to begin the de-nazification process. I showed a famous photo of a critical moment in that at Buchenwald. Your ilk at TSZ seized on that to falsely accuse me of involvement in an alleged right wing conspiracy to impose a theocracy and in that context OM [itself a vicious slander, one too often sponsored by eminent advocates of evolutionary materialism in recent years . . . which reveals much about their want of basic decency] -- without correction from your side and with EL trying to deny what happened . . . enabling behaviour -- tried to use invidious association to insinuate that I and the Nazis object to homosexual behaviour and the like. That is a serious bit of smearing as it is quite simple to see that -- never mind the mind bending games now on all over our civilisation -- a great many people have serious, quite principled questions and objections to such homosexualisation of our law and culture. Now, at no point did you distance yourself from the behaviour of your circle, or seek to correct it. So to now suggest by the clever turn of phrase about opening cans of worms that I invited such smears is a LIE, indeed a further slander. And yes AF, I am calling things by their right, short blunt names at this point. I hope you have the decency to feel remorse. No, I pointed out a valid historical warning, only to have the foul-minded seek to smear me. At this stage I am pointing such out, not to expect a return to decent behaviour on your ilk's part (your ilk's behaviour over a prolonged period makes it plain that such will not happen until there is a decisive breaking that ends in a tour of shame and awakening of remorse -- a good thing; I recall here the apology tour Russia sent out to Jamaica in 1990, post cold war), but to make it clear that you have crossed the threshold into inexcusable incivility and that we should reckon with this as we reflect on the significance of the controversies over design. In short I am pointing out the problem of the sort of nihilistic faction tactics that Plato warned about as a consequence of the rise of evolutionary materialism in The Laws, BK X, 2350 years ago:
Ath. . . . [[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [[i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. [[In short, evolutionary materialism premised on chance plus necessity acting without intelligent guidance on primordial matter is hardly a new or a primarily "scientific" view! Notice also, the trichotomy of causal factors: (a) chance/accident, (b) mechanical necessity of nature, (c) art or intelligent design and direction.] . . . . [[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny], and not in legal subjection to them.
I suggest that you need to soberly reflect on what you have involved yourself with as an enabler in light of such warnings from history on what happens when people begin to act out the implications of worldviews that -- having no foundational IS that can ground OUGHT -- imply that might and manipulation make 'right,' and that honour is a mere matter of power. For, the process has already begun. Or have you forgotten what slander, marginalisation, disrespect, and scapegoating all too often lead to? Good day, sir. GEM of TKI PS: JWT, I hope that helps set a bit of context. I am not going around picking a quarrel, I am warning about the cliff the slippery slope is pointing to.kairosfocus
June 23, 2013 at 11:35 AM PDT
Dr Liddle: Kindly stop playing games; you have seen how the probability you have decided to turn into a talking point is actually part of an information measure, and so a valid measure of info in the system will automatically take it into account. If you do not understand the way info is measured then please do a tutorial. KFkairosfocus
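As a side note for readers following this exchange: the conversion being alluded to is the standard one from a probability to self-information, I = -log2(p). A minimal sketch in Python, where the probability used is an illustrative assumption of the example rather than a figure from the thread:

```python
import math

def self_information_bits(p):
    """Standard self-information: a probability p maps to -log2(p) bits."""
    return -math.log2(p)

# Illustrative assumption: an outcome with probability 1 in 2^500
p = 2.0 ** -500
print(self_information_bits(p))  # 500.0 bits -- the probability is built into the bit count
```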
June 23, 2013 at 11:09 AM PDT
Dr Liddle at #174
I didn’t claim that anyone presented theology as a defense of their claims
Then it should be easy to answer the question: “Which ID arguments present theology as a defense of their claims?”
That was precisely my point: that the existence of a deity in no way depends on the Design Inference being valid
Then the validity of the design inference is in no way dependent upon the existence of a deity, is it? Take a poll at TSZ and test out that nugget on your followers. See who else disagrees with Dover.
In my view it is no more valid to argue evolution, therefore no god, than it is to argue no evolution, therefore god.
Therefore if a prominent biologist at a major university should look out on the human landscape and fondly propose that "evolution is the greatest engine of atheism", you'd think he and all others with his mindset had drawn an invalid conclusion from evolution. In the real world, where people publish books, write articles, and own blogs, you must be shocked at the invalid arguments.
Well, it wouldn’t falsify ID.
It wouldn’t eliminate the possibility of undetectable design in nature, but ID, as a biological theory based on the ability to detect design, would be dead.
I agree that that specific conjecture would be falsified. This is why I think it is useful to have specific ID conjectures.
So the hypothesis that design is required to originate the semiosis that evolution requires to exist is a falsifiable proposition and scientifically valid.
I agree. A conjecture has to be specific to be falsifiable
This is the same conjecture you just agreed is specific enough to be falsified. The only difference between the two is that the proposed cause of “design” has replaced the proposed cause of “unguided forces”. If we merely insert the criteria for “unguided forces” which you intended to simulate in your demonstration (and we assume you knew what you meant by those criteria) then your concerns over the term would obviously be satisfied. Now, how would you falsify the claim?
Yes indeed. And so agency involvement cannot be rejected.
If your simulation had succeeded, did you intend to a) ignore that result given the fact that you created the simulation, or b) state that your demonstration was valid because you properly modeled the environment, etc.?
Non-specific conjectures (“A designer with unspecified powers”; “unspecified unguided forces”) cannot be falsified.
This is a great comment. On the one hand you are bringing up theological issues, having never provided a list of any ID claims that rest on theological reasoning. You even quite plainly state that the validity of the design inference is independent of the existence of a deity. And then on the other hand, your comment addresses absolutely nothing whatsoever in the paragraph you were addressing. Here it is again, the relevant text you ignored: If mankind should someday create life from non-life, that achievement will not move a hair on the argument’s head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don’t yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it’s based on the speculation of an unknown mechanism, and therefore is subject only to the researcher’s imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
I disagree, for the reasons given above.
Your “reasons given above” contained: a) a complete no-show on your list of ID claims that rely on theological backing, and b) a total lack of response to the proposition that “unguided forces” as a cause of life (i.e. those forces which you proposed, without argument, to model in your simulation) is a completely non-falsifiable thesis. Your “reasons given above” contained little new substance and a complete void in response to the key questions. There seems little reason to continue, as I am pressed for time otherwise.Upright BiPed
June 23, 2013 at 10:39 AM PDT
I have already told you Lizzie. We don't have to because no one can even produce a chance hypothesis.Joe
June 23, 2013 at 10:39 AM PDT
All I'm asking, KF, is how you compute P(T|H)? I'm happy to stipulate that T is one of a very small number of Targets out of a very large number of combinations.Elizabeth B Liddle
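For concreteness, here is a minimal sketch of what such a computation would look like under the simplest possible chance hypothesis, a uniform one. The sizes of T and W below are made-up figures, and a non-uniform H (e.g. one incorporating Darwinian mechanisms) is exactly what this sketch does not model:

```python
from fractions import Fraction

# Illustrative assumptions (not figures from the thread):
W = 2 ** 500       # size of the configuration space
T = 10 ** 6        # number of configurations counted as hitting the target

# Under a uniform chance hypothesis H, P(T|H) is simply |T| / |W|.
p_T_given_H = Fraction(T, W)
print(float(p_T_given_H))   # roughly 3e-145

# Any non-uniform H (e.g. one including selection) would require modelling how
# probability mass is distributed over W, which this sketch does not attempt.
```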
June 23, 2013 at 10:35 AM PDT
Dr Liddle: Pardon, still not good enough. Choice leading to design of functionally specific complex organised objects, systems and processes is a well-known, frequently observed phenomenon with billions of cases in point. The same for chance and law. One could quibble a bit that chance is a bit of a catch-all for things where outcomes are not purposefully connected to configurations, but this too is well observed, starting with a falling, tumbling die or molecular velocities or the like, and of course sky noise, Zener noise, Johnson noise, flicker noise, Weibull-distributed populations, Gaussian-distributed populations, Poisson-distributed populations, Boltzmann-distributed populations and more, much more. The point is that causation is a known phenomenon in which certain factors influence, enable/disable and in some cases determine outcomes. In some cases causes follow regular patterns where, with similar initial conditions, outcomes will be predictably similar, e.g. F = m*a, etc. In others there is high contingency of outcomes without obvious material difference in initial conditions. A dropped object falls; if it is a die it tumbles and gives a particular random distribution of outcomes. If fair, it will be more or less flat random. If loaded -- a case of design -- it will not. Contingency by chance or by choice. And we have disaggregated aspects of one and the same phenomenon to see causal patterns connected to them. Now, you need to address FSCO/I and its known, reliably observed cause and the analysis that shows why available atomic resources would only give a minuscule sample of the space of possible configs, W, so that the only reasonable outcome on chance is from the bulk -- which reliably will be non-functional because of the specificity constraints to achieve function. So, when we see patterns coming from narrow functionally specific zones T in such spaces W, the only reasonable cause is choice. Which we do routinely see, e.g. this post manifests just that pattern, in a context of linguistically functional code. Object code in a system that processes info will also be linguistic, but is specific to machine function. The notion of molecular noise in some pond or the like writing code is red flag ridiculous. It is only entertained because there is a strong prejudice against the alternative. And of course to the exact extent that molecular patterns are determined by forces of necessity, to that same extent they cannot bear information. KFkairosfocus
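To make the sampling claim above concrete, a back-of-the-envelope sketch follows; the resource figures are common ballpark assumptions, not numbers taken from this thread:

```python
import math

# Illustrative assumptions (ballpark figures, not from the thread):
atoms = 10 ** 80        # atoms in the observable universe
rate = 10 ** 14         # fast chemical-scale events per atom per second
seconds = 10 ** 17      # rough age of the universe in seconds

samples = atoms * rate * seconds   # crude upper bound on configurations ever tried
W = 2 ** 500                       # assumed size of the configuration space

print(f"samples ~ 10^{math.log10(samples):.0f}")
print(f"fraction of W sampled ~ {samples / W:.1e}")
```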
June 23, 2013 at 10:16 AM PDT
gpuccio:
Elizabeth: If this is not true for proteins – in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. But protein space is disconnected. Don’t you think that 2000 superfamilies, and growing, in the SCOP classification, completely separated at sequence level, and folding level, and function level, is not a disconnected space?
Well, I have to take your word for it that the families are thus disconnected, but the separation of the superfamilies is not the relevant separation. The fact that they exist is evidence that within families the space is highly connected, by which I mean a high correlation between sequence similarity and possession of function. And the fact that phylogenetic trees can be inferred is evidence for at least a substantial vertical signal, although clearly there is HGT signal as well. But the argument is not that one family can evolve into another, any more than evolutionists argue that cats can evolve into dogs (and that evolution is infirmed by the observation that they cannot). It's connectedness between the root of those families and postulated non-extant precursor sequences that you need to demonstrate is not there. In other words, I am not saying you can get from superfamily A to superfamily B; I'm saying that it may be possible to get from archaic protein A to the root of superfamily A. And given that you can get from the root of superfamily A to the twigs of superfamily A forward in time, why should the root not extend further back in time?
Let’s suppose that some time, in LUCA or pre-LUCA , the living being survives and duplicates with only one protein superfamily (pure myth, but let’s suppose it can be true, for the sake of discussion). Even finding that single superfamily would already appear an extraordinary piece of luck. But let’s assume that pre-LUCA was very lucky. But then? What happens? Two possible scenarios, both equally impossible: a) In some way, the original functional superfamily, through RV, originates the other 1999 in the course of evolution, either from working genes or from non functional duplicates, realizing the stunning total of 1999 random walks that all reach a separated functional island, against all probabilities. b) In some way, the other 1999 superfamilies are found again by sheer luck, like the first one, from something like non coding sequences or similar. Against all probabilities. A good explanation indeed, is the neodarwinian model. My compliments to all those who are so enthusiast of the thing! Elizabeth, I have great respect for you. I find your concepts of epistemology a little bit mixed up, but after all that is not a crime. And you have at least one excuse in your darwinian faith: you obviously understand very little of molecular biology (definitely not a crime!). Others could not say the same in their own defense.
First of all, I do not have "darwinian faith". Second, I will defend my scientific epistemology. Thirdly, while I am no molecular biologist, I'm not entirely a naive reader of the literature, so I'm puzzled by your scenario. How do you know the walk to the root of a superfamily was random i.e. that there were no functional ancestral members of that family that are no longer extant?Elizabeth B Liddle
June 23, 2013 at 9:20 AM PDT
Oops ludid = luridAlan Fox
June 23, 2013 at 9:09 AM PDT
164 JWTruthInLove, June 22, 2013 at 5:28 pm
@AF & kf: First nazis (AF), then 1984 (kf). The weird desire of darwinists and trinitarians to use tyranny inspired polemic to their own benefit never ceases to amuse this onlooker.
You seem to be confused on who said what. Can't really blame you if you have been reading KF ;) I haven't made any reference to Nazis. KF opened a can of worms by posting some daft OP about marching people around death camps, complete with ludid pictures, as if it had something to do with ID or the price of bread.Alan Fox
June 23, 2013 at 9:08 AM PDT
So I take it we will not be presented with any evidence for darwinian processes producing irreducible complexity. And here I was thinking I missed some very important scientific discovery. Back to "Darwin's Doubt"...Joe
June 23, 2013 at 8:55 AM PDT
OK, one more time: design is not the default. One has to actively consider and eliminate non-agency explanations first. Not only do we have to eliminate chance and necessity, for example, there also has to be some correlation to mind, i.e. some specification. So, actively considering alternatives alone is enough to disqualify design as the default. Add to that the fact that it is also not enough to eliminate chance and necessity, and design is nowhere near being a default inference.Joe
June 23, 2013 at 8:48 AM PDT
KF:
I trust we can now lay the default talking point to rest.
Indeed. I will no longer use that term. Please mentally replace it in any previous context with something like "that which we conclude when alternate hypotheses have been rejected".Elizabeth B Liddle
June 23, 2013 at 8:27 AM PDT
oops, messed up quote tags. I think you can probably figure out who said what! I wrote the paragraph starting "Well, strictly, no", and all subsequent paragraphs at the same indentation level.Elizabeth B Liddle
June 23, 2013 at 8:23 AM PDT
We are getting a little out of synch, here, gpuccio, but let me try to address this:
Elizabeth: Always briefly and in no order: At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors). ???? My tool for design inference is, as you well know, dFSCI. What darwinian process has ever been shown to produce dFSCI?
I think I can answer this, but before I do, can you link to a specific definition of your dFSCI? I am familiar with a number of comparable metrics, but would prefer to be clear about yours.
I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. ???? What are you saying here? Are you rejecting the whole Popperian theory of science?
No, I'm not.
It seems you are rather confused. The null is never falsified, it is only rejected because improbable. H0 is not falsified, ever.
Well, you'd better have that argument with Sir Ronald Fisher! But certainly, falsification is always probabilistic in empirical science, not absolute, as in math. That's why we don't "prove" things in science, merely demonstrate that our models are supported, or infirmed.
Instead, necessity explanations can be falsified. For example, if I assume that the cause of the effect I observe is X, but further experimentation shows that X does not produce that effect, my H1 is falsified.
Well, strictly, no. All you have done is retained the null that X does not have an effect. It may have had an effect, but one so small that you lacked the statistical power to demonstrate it. This is the big problem with trying to show, for instance, that a drug has no adverse effects, hence the movement towards effect-size hypotheses, which can be rejected by a study of known statistical power. So what would be better would be to cast this as an effect-size hypothesis: X has an effect on Y that is at least Z. Then, if you get an effect size whose confidence limits don't overlap with Z you can consider your data improbable under your hypothesis. Alternatively, if your hypothesis is that ONLY X produces Y (rather than the simple hypothesis that X produces Y) then you could falsify that hypothesis without specifying an effect size, but you'd still be setting it up as the null: In the absence of X there can be no Y. Under that null hypothesis, the probability of observing Y in the absence of X would be (near) zero. If you observed Y in the absence of X, you could reject your null (i.e. consider it falsified).
Popper's point is that if an explanation can never be falsified, by the very nature of the explanation, it is not a scientific explanation. As I said in my #177, the ID theory is perfectly falsifiable, and is therefore perfectly scientific.
I do not, and have not, disputed your claim that the hypothesis that only a designer can produce dFCSO is falsifiable. It is, because that hypothesis can be cast as a null: In the absence of a designer, we will not observe the generation of dFCSO. If we can falsify that null (show the generation of dFCSO in the absence of a designer) we can infer that the null is false. Essentially, Fisherian hypothesis testing outputs the probability that you would observe what you observe under some hypothesis. If that probability is low, and you observe it anyway, you can reject the hypothesis. If your hypothesis can be rejected (you can show that what you observe would be unlikely were it true), it can be considered scientific in Popperian terms. The problem with Dembski's chi is that it is an attempt to falsify "non-design", which is far too vague to be cast as a falsifiable null. "Non-design" is not a falsifiable hypothesis. Nor is "design".
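A minimal sketch of the Fisherian rejection logic described above, using a binomial model; the numbers and the choice of model are illustrative assumptions, not anything taken from this discussion:

```python
from math import comb

def p_value_at_least(k, n, p0):
    """Probability of seeing k or more 'successes' in n trials if the true rate is p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Null hypothesis: in the absence of X, the effect Y occurs at most 1% of the time.
# Suppose we run 50 trials without X and observe Y nine times.
p_val = p_value_at_least(9, 50, 0.01)
print(p_val)   # ~2e-9: the data are wildly improbable under the null, so we reject it
```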
Elizabeth B Liddle
June 23, 2013 at 8:20 AM PDT
Similarly, I must challenge the objectors to identify a definitive fourth causal option, in the sequence lo vs hi contingency, then for hi, chance vs choice. Where, in point of fact, chance is a very broad default indeed, i.e. if we cannot assign something to necessity as it does not show natural regularities for this aspect, we assume chance unless we have positive reason to infer design. Objectors need to show that any alternative to design for high contingency in an aspect of a phenomenon or process cannot be categorised as chance -- the default -- but as something else, and on what empirically justified basis.kairosfocus
June 23, 2013 at 7:12 AM PDT
I trust we can now lay the default talking point to rest.kairosfocus
June 23, 2013 at 7:08 AM PDT
P^4S: Webopedia: >>default A value or setting that a device or program automatically selects if you do not specify a substitute. For example, word processors have default margins and default page lengths that you can override or reset. The default drive is the disk drive the computer accesses unless you specify a different disk drive. Likewise, the default directory is the directory the operating system searches unless you specify a different directory. The default can also be an action that a device or program will take. For example, some word processors generate backup files by default. >>kairosfocus
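In code terms, the quoted definition amounts to a value that is used when the caller does not specify a substitute. A trivial illustrative sketch (the function and its parameters are made up for this example):

```python
# Default values in the sense of the definition quoted above: used
# whenever the caller does not specify a substitute.
def save_document(text, margin_cm=2.5, backup=True):
    """margin_cm and backup are defaults; callers may override them."""
    return {"text": text, "margin_cm": margin_cm, "backup": backup}

print(save_document("hello"))                   # uses the defaults
print(save_document("hello", margin_cm=1.0))    # overrides one default
```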
June 23, 2013 at 7:06 AM PDT
Elizabeth: If this is not true for proteins – in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. But protein space is disconnected. Don't you think that 2000 superfamilies, and growing, in the SCOP classification, completely separated at sequence level, and folding level, and function level, is not a disconnected space? Let's suppose that some time, in LUCA or pre-LUCA :) , the living being survives and duplicates with only one protein superfamily (pure myth, but let's suppose it can be true, for the sake of discussion). Even finding that single superfamily would already appear an extraordinary piece of luck. But let's assume that pre-LUCA was very lucky. But then? What happens? Two possible scenarios, both equally impossible: a) In some way, the original functional superfamily, through RV, originates the other 1999 in the course of evolution, either from working genes or from non functional duplicates, realizing the stunning total of 1999 random walks that all reach a separated functional island, against all probabilities. b) In some way, the other 1999 superfamilies are found again by sheer luck, like the first one, from something like non coding sequences or similar. Against all probabilities. A good explanation indeed, is the neodarwinian model. My compliments to all those who are so enthusiast of the thing! Elizabeth, I have great respect for you. I find your concepts of epistemology a little bit mixed up, but after all that is not a crime. And you have at least one excuse in your darwinian faith: you obviously understand very little of molecular biology (definitely not a crime!). Others could not say the same in their own defense.gpuccio
June 23, 2013 at 7:05 AM PDT