
Order vs. Complexity: A follow-up post


NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term “generated” with the term “described” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_s(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_s(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”
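For reference, here is the specificity formula just described, set out in symbols (this is a transcription of Dembski’s prose definition above, not an independent source):

```latex
\sigma \;=\; -\log_2\!\big[\,\varphi_S(T)\cdot P(T \mid H)\,\big]
```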

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial z_{n+1} = z_n^2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
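To see just how short the generating algorithm is, here is a minimal Python sketch of the membership test described above. The escape radius of 2 is a provable bound, but the iteration cap of 100 is an arbitrary assumption, so points that survive it are only probably in the set:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Test whether c appears to belong to the Mandelbrot set: iterate
    z -> z**2 + c from z = 0 and watch for the orbit escaping |z| > 2."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escape proves c lies outside the set
            return False
    return True          # survived the iteration budget; probably inside

print(in_mandelbrot(0))    # True: the orbit of 0 stays at 0
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... diverges
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
```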

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificity were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they compose. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)
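Restating Kozulic’s arithmetic in symbols (using only the figures quoted above; the upward rounding is the paper’s own choice):

```latex
P(\text{functional in vivo}) \;\le\; \frac{1}{10^{11}} \times \frac{1}{10^{10}}
\;=\; 10^{-21}, \quad \text{rounded up to } 10^{-20}.
```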

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell (HarperOne, 2009), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)

There are three problems with this definition. First, it mistakenly conflates physics with mathematics, when it declares that a complex sequence cannot be generated by “a general law or computer algorithm.” I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates (a) the repetitiveness of a sequence with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?
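The compressibility side of this distinction can be made concrete with a general-purpose compressor standing in for compressibility in the Shannon/Kolmogorov sense. This is only a rough stand-in (Kolmogorov complexity is uncomputable, and zlib merely gives an upper bound), but the contrast is striking:

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Compressed size as a fraction of the original (lower = more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

# A repetitive sequence, generated by one short command, compresses drastically:
print(compressed_fraction(b"ABC" * 10_000))    # a tiny fraction of the original

# Random bytes are essentially incompressible by any general-purpose method:
print(compressed_fraction(os.urandom(30_000))) # close to 1.0
```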

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes it as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).
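Dembski’s construction is short enough to write out in full. Here is a minimal Python sketch that generates the sequence exactly as he describes it, which underlines the point: a non-repeating sequence can issue from a tiny algorithm:

```python
from itertools import count, islice

def champernowne_bits():
    """Yield the binary Champernowne-style sequence Dembski describes:
    all 1-digit binary numbers (0, 1), then all 2-digit ones
    (00, 01, 10, 11), then all 3-digit ones, and so on."""
    for width in count(1):
        for n in range(2 ** width):
            yield from format(n, f"0{width}b")

print("".join(islice(champernowne_bits(), 20)))  # 01000110110000010100
```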

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered if it is reasonably likely to occur in our universe, and complex if its physical realization in our universe is a very unlikely event.
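Schematically, the proposal can be summarized as follows (the threshold for “reasonably likely” is deliberately left unspecified here, as it is in the prose):

```latex
\text{ordered}(T) \iff P\big(T \mid \text{law-governed physical processes}\big)\ \text{is not very small},
\qquad
\text{complex}(T) \iff P\big(T \mid \text{law-governed physical processes}\big) \ll 1.
```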

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while others say that some objects in Nature have fractal features, and still others, that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.
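In symbols, the standard definition Professor Shallit is invoking can be written as follows, where the minimum ranges over Turing machines T and inputs i (a restatement of his comment, not a quotation):

```latex
K(x) \;=\; \min_{T,\,i}\big\{\, |T| + |i| \;:\; T(i) = x \,\big\}
```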

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent Design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of an object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a) an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, the specification need not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.

P(T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled, The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled, Proteins and Genes, Singletons and Species, in which he argued (generously, in his view) that at most 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo, and that each species possesses hundreds of isolated proteins called “singletons” which have no close biochemical relatives. The likelihood of these proteins originating by unguided mechanisms in even one species is astronomically low, making proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)
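As a quick sanity check on the Earth-surface illustration quoted above, the arithmetic can be verified in a few lines of Python (every constant below is taken from the quoted passage itself):

```python
earth_surface_m2 = 5.1e14              # Earth's surface area, per the quoted passage
squares_1mm2 = earth_surface_m2 * 1e6  # each m^2 holds 10^6 squares of 1 mm^2
seconds = 1e20 / 1_000                 # searching ~10^20 squares at 1,000 per second
years = seconds / (60 * 60 * 24 * 365.25)
print(f"{squares_1mm2:.1e} squares")           # ~5.1e+20, on the order of 10^20
print(f"{seconds:.0e} s = {years:.1e} years")  # 1e+17 s = 3.2e+09 years
```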

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled, Protein Space and Hoyle’s Fallacy – a response to vjtorley, he cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled, De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5×10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top-down protein engineering based on our hard-won knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is especially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretations of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked-out genes, even if minimal, remains unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
Elizabeth:

a) I use neo darwinism in the sense of the modern synthesis, a molecular hypothesis that is not “classical darwinism”. If you prefer simply darwinism, do as you like. I will go on using “neo darwinism”, which IMO is more precise.

b) “You need to be able to compute a probability distribution for your data under the null of ‘random noise’. If you reject that null, you have only rejected whatever you modeled as your null. If you didn’t include Darwinian processes and natural mechanisms (as Dembski says you must) then you can’t reject those processes and mechanisms.” As said many times, I reject the null random hypothesis by the dFSCI metrics.

c) “So how are you computing your null of ‘random noise’? That is what I am calling the eleP(T|H)ant in the room.” Again, by the dFSCI metrics. No elephant here.

d) “Why is design any more ‘supported by facts’ than Darwinian processes (by which I mean heritable variance in reproductive success – not sure what you are including as ‘neo Darwinian algorithm’ – there’s nothing ‘neo’ about the Darwinian algorithm I am proposing)?” I think I have said it hundreds of times. I will say it once more. Because design is the only known cause of dFSCI. All objects exhibiting dFSCI, of which the origin can be independently assessed, are designed objects.

e) “We have an explanandum (functional proteins that constitute a small proportion of theoretically possible proteins), for which we have two possible explanations: 1. A designer assembled those sequences, having selected them because of their potential as functional proteins. We have no evidence for this. 2. Precursors of those proteins conferred some reproductive advantage to their bearers. We have only a small amount of evidence for this. Why should we consider one of these a better explanation than the other?” See my previous answer.

f) “Well, no. You are assuming your conclusion. You are saying: this thing has dFSCI; only designed things have dFSCI; therefore this thing, like all other things with dFSCI was designed; therefore only designed things have dFSCI.” Some time ago I spent weeks here defending dFSCI against false accusations of circularity from your blog, very similar to the one you repeat here. I will not go again into that in detail. I invite you to read the old threads. In a few words, I can restate here why dFSCI is not circular: 1) We have only two kinds of objects in the universe that exhibit dFSCI: human artifacts and biological objects. 2) In the case of human artifacts, it can be easily verified that all objects that exhibit dFSCI, and whose origin can be independently known, are human designed objects. These are facts. IOWs, we can safely use dFSCI to correctly infer design for any object whose origin can be independently assessed, with 100% specificity. These are observed facts. 3) For biological objects, you may agree that at present the origin cannot be independently assessed, otherwise we would not be here discussing. As biological objects are the only other objects in the universe that exhibit dFSCI, it is simply natural to propose a design explanation for them. This is a very simple inference by analogy. In the absence of any other credible explanation, this remains the best explanation. There is no circularity in that, as even some of your friends at TSZ admitted in the end. If you are not convinced, please remain of your mind, but don’t come again about that with the same old wrong arguments, because I have no more time to spend on that. Let’s say we agree to disagree.

g) “That certainly does not rule out a designer. Indeed, to be perfectly honest, I’d be somewhat disappointed in a Designer (capitalisation intentional) that had to continually intervene to tweak a flagellum here, or a protein there. It seems to me that an omniscient, omnipotent deity would be capable of designing a universe that Just Worked. The deity herself would be undetectable from within that universe, at least by scientific reasoning, but no less present or causal. Indeed, the ability of scientific reasoning to account for her creation without recourse to postulated intermittent tweaking might itself be adduced as evidence for her omnipotence and omniscience. But we get the God there is, not the God we ask for.” You can have the philosophical position you like. I try to follow facts and explain them, not to interpret them according to dogmatic philosophical commitments. I am not a skeptic, after all :)

gpuccio
June 21, 2013 at 06:15 AM PDT
Thanks, Upright Biped. Yes, I certainly recognise that self-replicators cannot evolve themselves into being. I have never not-recognised that (indeed I have made the point explicitly many times), and I apologise if my careless typing gave you an erroneous impression.

Elizabeth B Liddle
June 21, 2013 at 05:28 AM PDT
Dr Liddle,
You are of course entitled to your view of our interactions, as I am to mine.
Indeed.
As you point out, it is all available for anyone to form their own view, should they care.
Yes, and if they look as late as a year ago, they'll find you wondering if I would simply concede my argument if you could evolve information machinery, or suggesting that half a code (whatever that is) should do the trick, or wondering why I conceive of this as a design issue and not an evolutionary one.
We had better leave it there rather than derail this thread further...
Agreed.
...unless you’d like to discuss it at TSZ.
Perhaps that will be necessary at some point, although your recognition that self-replicators did not evolve themselves into being is probably the best that can be expected. As it stands for now, I'm out.

Upright BiPed
June 21, 2013 at 05:25 AM PDT
gpuccio:
No. I think you are seriously wrong here. The null hypothesis in a Fisherian hypothesis testing is that the effect we are observing has no special cause, and is only explained by random noise. That null hypothesis can be safely rejected by dFSCI.
I disagree that I am wrong :) You need to be able to compute a probability distribution for your data under the null of "random noise". If you reject that null, you have only rejected whatever you modeled as your null. If you didn't include Darwinian processes and natural mechanisms (as Dembski says you must) then you can't reject those processes and mechanisms. So how are you computing your null of "random noise"? That is what I am calling the eleP(T|H)ant in the room.
So, here we are in the following scenario. A random explanation of biological information can safely be rejected by the dFSCI metrics. Design is a viable and credible explanation for what we observe. The only non design explanation that can be offered, at present, is the neo darwinian algorithm. This second explanation is completely unsupported by facts, so it cannot compete with the design explanation.
Why is design any more "supported by facts" than Darwinian processes (by which I mean heritable variance in reproductive success - not sure what you are including as "neo Darwinian algorithm" - there's nothing "neo" about the Darwinian algorithm I am proposing)? We have an explanandum (functional proteins that constitute a small proportion of theoretically possible proteins), for which we have two possible explanations: 1. A designer assembled those sequences, having selected them because of their potential as functional proteins. We have no evidence for this. 2. Precursors of those proteins conferred some reproductive advantage to their bearers. We have only a small amount of evidence for this. Why should we consider one of these a better explanation than the other?
I really don’t understand you. The handiwork is the explanandum. The designer is the explanation. So, the handiwork is evidence of a designer. It’s simple, isn’t it?
No. It's circular. Let's say I find a coin on the ground. I think: "it must have dropped out of someone's pocket". I can't then turn round and say: "The evidence the coin dropped out of someone's pocket is that I found a coin on the ground". This is because there are other possible explanations. Perhaps someone was tossing a coin for who goes in to bat first, and couldn't see where it landed. The fact of the coin on the ground gives no more support to one of these hypotheses than the other. However, if I have independent (of the coin-on-the-ground) evidence for the first, for instance, I find a hole in my pocket, and all my money gone, I can consider the first explanation more likely. Or if I see a bunch of cricketers tossing coins in a wild manner, I can consider the second fairly well supported. A designer, in the absence of independent evidence, is no better supported than a Darwinian explanation for which there is no independent evidence.
Again, I am losing you. dFSCI in proteins is the explanandum. A designer is an explanation. Neo darwinian algorithm is another possible explanation.
Yes.
But: a) A designer explains observed facts, because the form and properties of the facts we observe are exactly the form and properties of designed things (dFSCI).
Well, no. You are assuming your conclusion. You are saying: this thing has dFSCI; only designed things have dFSCI; therefore this thing, like all other things with dFSCI was designed; therefore only designed things have dFSCI.
b) Neo darwinian algorithm does not explain anything, because it has never been shown capable to produce that kind of results.
The Darwinian algorithm (again I am not arguing for "neo-" anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature. But even if you reject this, it doesn't get you out of the circularity problems with your (a) :) As you repeat here:
Wrong. As I said, there is a specific, strong, credible and positive reason why we offer design as an explanation of dFSCI in the biological world: because design is the only empirical explanation for dFSCI in the whole known universe. Only designed things exhibit dFSCI, as far as we know.
Consider: In some cases of disease X we find evidence of bacterial activity. In other cases of disease X we find no evidence of bacterial activity. This is disease X. Therefore it was caused by bacterial activity. I would argue that this reasoning is fallacious (although the conclusion could be correct). And I suggest your argument has the same form: In some cases of dFSCO we find evidence of designers designing it. In other cases of dFSCO we find no evidence of designers designing it. This is a case of dFSCO. Therefore it was caused by designers designing it. Again, the conclusion could be true, but the argument is fallacious.
Well, I can agree that if God materialized tomorrow, in Times Square and in front of hundreds of witnesses, a book written in golden letters with all the details of the project of biological beings in the course of our planet’s existence, and a final declaration: “It was Me!”, that would certainly be some evidence for design. :) In the meantime, and in the absence of such explicit miracles, we have to be content with scientific reasoning. And scientific reasoning brings us to the design inference as best explanation of what we observe, according to a lot of positive evidence that, in a sense, is even better than the golden book.
Only if the scientific reasoning is sound. My position is that it is not. That certainly does not rule out a designer. Indeed, to be perfectly honest, I'd be somewhat disappointed in a Designer (capitalisation intentional) that had to continually intervene to tweak a flagellum here, or a protein there. It seems to me that an omniscient, omnipotent deity would be capable of designing a universe that Just Worked. The deity herself would be undetectable from within that universe, at least by scientific reasoning, but no less present or causal. Indeed, the ability of scientific reasoning to account for her creation without recourse to postulated intermittent tweaking might itself be adduced as evidence for her omnipotence and omniscience. But we get the God there is, not the God we ask for :)

Elizabeth B Liddle
June 21, 2013 at 05:04 AM PDT
franklin: My phrase was: "Let's take the simplest example: an enzyme that accelerates a reaction." I am not saying that this is the simplest protein function. I am saying that it is IMO the simplest example of protein function for my discussion. Why don't you offer arguments, instead of fastidious criticism?

"Why would I deny that enzymes exist and perform biochemical function? But we are discussing OOL and what emergent self-replicators might require. Certainly, the ability to maintain osmolality, pH balance, and metal homeostasis are vital roles to play in any living organism. That these functions aren't sexy enough for you does not mean that they can be ignored." I was not discussing OOL at all. I was discussing the emergence of new basic protein domains, as I usually do. I discuss what I consider sexy. You are free to do the same. And, possibly, offer arguments.

"If the function of serum albumins is not complex enough for you perhaps you could explain how well you, as a living organism, would be able to get along without these proteins? Would you still be alive?" You are really something! I have not discussed the functional complexity of serum albumins exactly because their functional complexity is probably lower. Globulins and enzymes are certainly better examples of functional complexity. So, I debate them. I have never stated that there are not simpler objects in living beings. I discuss those that are certainly complex, because I am discussing dFSCI, and dFSCI needs a very serious threshold of complexity to be a good tool for design detection. Again, you are free to offer counterarguments, if you have any, using your own sexy examples.

"Special kinds of binding? What does that even mean?" For example, from Wikipedia: "ATP-binding cassette transporters (ABC-transporter) are members of a protein superfamily that is one of the largest and most ancient families with representatives in all extant phyla from prokaryotes to humans.[1][2] ABC transporters are transmembrane proteins that utilize the energy of adenosine triphosphate (ATP) hydrolysis to carry out certain biological processes including translocation of various substrates across membranes and non-transport-related processes such as translation of RNA and DNA repair.[3][4] They transport a wide variety of substrates across extra- and intracellular membranes, including metabolic products, lipids and sterols, and drugs. Proteins are classified as ABC transporters based on the sequence and organization of their ATP-binding cassette (ABC) domain(s). ABC transporters are involved in tumor resistance, cystic fibrosis and a range of other inherited human diseases along with both bacterial (prokaryotic) and eukaryotic (including human) development of resistance to multiple drugs."

"Is the binding of protons not a special enough function for a protein to be considered in an OOL scenario? Is that function simpler than an enzyme, therefore making it a much simpler example of a protein function?" As said, I am not discussing specifically an OOL scenario. Moreover, a function is complex according to the number of bits in the sequence that are necessary to implement the function. You can offer any possible function, and do your calculations. Some functions are simpler, but most protein functions are very, very complex. I offer my examples and make my calculations. You can do the same.

gpuccio
June 21, 2013 at 04:35 AM PDT
Elizabeth: I apologize! I got the formatting wrong. Here is the correct version:
What I am pointing out is that the concept of CSI (and your own version) intrinsically involves precisely that; eliminating the non-design alternative. And it can’t be done.
No. I think you are seriously wrong here. The null hypothesis in a Fisherian hypothesis testing is that the effect we are observing has no special cause, and is only explained by random noise. That null hypothesis can be safely rejected by dFSCI. Once the null (random) hypothesis is rejected, any credible hypothesis that explains the observed pattern can compete for "best explanation". So, here we are in the following scenario. A random explanation of biological information can safely be rejected by the dFSCI metrics. Design is a viable and credible explanation for what we observe. The only non design explanation that can be offered, at present, is the neo darwinian algorithm. This second explanation is completely unsupported by facts, so it cannot compete with the design explanation. Any other non design explanation is welcome to the competition, provided it is supported by facts. So, we are perfectly safe in a Fisherian context. Like any other biological and medical theory, once the chance hypothesis is rejected, we must choose the best non chance explanation. I definitely choose design, and so should, IMO, all unbiased thinkers.

"Well, I meant, independent of the handiwork! I'm not saying the postulated handiwork isn't evidence. I'm saying it cannot serve as both your explanandum and your explanation." I really don't understand you. The handiwork is the explanandum. The designer is the explanation. So, the handiwork is evidence of a designer. It's simple, isn't it?

"I am happy to stipulate, for the sake of argument, that the number of functional proteins is a tiny fraction of the number of possible proteins. That fact needs to be explained. But it cannot simultaneously be evidence for the proffered explanation over some other proffered explanation." Again, I am losing you. dFSCI in proteins is the explanandum. A designer is an explanation. The neo darwinian algorithm is another possible explanation. But: a) A designer explains observed facts, because the form and properties of the facts we observe are exactly the form and properties of designed things (dFSCI). b) The neo darwinian algorithm does not explain anything, because it has never been shown capable of producing that kind of result.

"[W]e cannot appeal to the explanandum itself (the small proportion of modern functional proteins out of all possible proteins) to differentiate between the two hypotheses proffered to explain it. That is why I suggested that providing independent (of the explanandum) evidence would be a potentially better approach." Wrong. As I said, there is a specific, strong, credible and positive reason why we offer design as an explanation of dFSCI in the biological world: because design is the only empirical explanation for dFSCI in the whole known universe. Only designed things exhibit dFSCI, as far as we know. On the contrary, there is absolutely no clue that the neo darwinian algorithm can produce dFSCI. So, your epistemological position is not correct.

"Lack of such evidence wouldn't rule out design (indeed design, unspecified, is unrule-out-able), but positive evidence could rule it in." Well, I can agree that if God materialized tomorrow, in Times Square and in front of hundreds of witnesses, a book written in golden letters with all the details of the project of biological beings in the course of our planet's existence, and a final declaration: "It was Me!", that would certainly be some evidence for design. In the meantime, and in the absence of such explicit miracles, we have to be content with scientific reasoning. And scientific reasoning brings us to the design inference as best explanation of what we observe, according to a lot of positive evidence that, in a sense, is even better than the golden book.

gpuccio
June 21, 2013 at 04:17 AM PDT
Elizabeth:

"What I am pointing out is that the concept of CSI (and your own version) intrinsically involves precisely that; eliminating the non-design alternative. And it can't be done."

No. I think you are seriously wrong here. The null hypothesis in Fisherian hypothesis testing is that the effect we are observing has no special cause and is explained only by random noise. That null hypothesis can be safely rejected by dFSCI. Once the null (random) hypothesis is rejected, any credible hypothesis that explains the observed pattern can compete for "best explanation". So here we are in the following scenario. A random explanation of biological information can safely be rejected by the dFSCI metric. Design is a viable and credible explanation for what we observe. The only non-design explanation that can be offered, at present, is the neo-darwinian algorithm. This second explanation is completely unsupported by facts, so it cannot compete with the design explanation. Any other non-design explanation is welcome to the competition, provided it is supported by facts. So we are perfectly safe in a Fisherian context. Like any other biological and medical theory, once the chance hypothesis is rejected, we must choose the best non-chance explanation. I definitely choose design, and so, IMO, should all unbiased thinkers.

"Well, I meant, independent of the handiwork! I'm not saying the postulated handiwork isn't evidence. I'm saying it cannot serve as both your explanandum and your explanation."

I really don't understand you. The handiwork is the explanandum. The designer is the explanation. So the handiwork is evidence of a designer. It's simple, isn't it?

"I am happy to stipulate, for the sake of argument, that the number of functional proteins is a tiny fraction of the number of possible proteins. That fact needs to be explained. But it cannot simultaneously be evidence for the proffered explanation over some other proffered explanation."

Again, I am losing you. dFSCI in proteins is the explanandum. A designer is an explanation. The neo-darwinian algorithm is another possible explanation. But:

a) A designer explains the observed facts, because the form and properties of the facts we observe are exactly the form and properties of designed things (dFSCI).

b) The neo-darwinian algorithm does not explain anything, because it has never been shown capable of producing that kind of result.

"we cannot appeal to the explanandum itself (the small proportion of modern functional proteins out of all possible proteins) to differentiate between the two hypotheses proffered to explain it. That is why I suggested that providing independent (of the explanandum) evidence would be a potentially better approach."

Wrong. As I said, there is a specific, strong, credible and positive reason why we offer design as an explanation of dFSCI in the biological world: design is the only empirical explanation for dFSCI in the whole known universe. Only designed things exhibit dFSCI, as far as we know. On the contrary, there is absolutely no clue that the neo-darwinian algorithm can produce dFSCI. So your epistemological position is not correct.

"Lack of such evidence wouldn't rule out design (indeed design, unspecified, is unrule-out-able), but positive evidence could rule it in."

Well, I can agree that if God materialized tomorrow, in Times Square and in front of hundreds of witnesses, a book written in golden letters with all the details of the project of biological beings in the course of our planet's existence, and a final declaration, "It was Me!", that would certainly be some evidence for design. :) In the meantime, and in the absence of such explicit miracles, we have to be content with scientific reasoning. And scientific reasoning brings us to the design inference as the best explanation of what we observe, according to a lot of positive evidence that, in a sense, is even better than the golden book.

gpuccio, June 21, 2013 at 04:13 AM PDT
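To make the arithmetic of the exchange above concrete, here is a minimal sketch in Python of a dFSCI-style rejection of the chance null. All numbers are invented for illustration, and this is not gpuccio's published procedure; it only shows the general shape of the calculation: express the functional fraction of the search space in bits and compare it with the probabilistic resources of the system.

```python
import math

def dfsci_bits(functional_sequences: float, search_space: float) -> float:
    """Functional information in bits: -log2 of the functional fraction."""
    return -math.log2(functional_sequences / search_space)

def rejects_chance_null(bits: float, trials: float) -> bool:
    """Reject the pure-chance null when the functional bits exceed the
    probabilistic resources (log2 of the number of trials available)."""
    return bits > math.log2(trials)

# Illustrative numbers only: a 150-aa protein (20^150 sequences), a guessed
# 10^20 functional variants, and 10^40 total trials in the system's history.
bits = dfsci_bits(1e20, 20.0 ** 150)
print(f"{bits:.0f} functional bits")           # ~582 bits
print(rejects_chance_null(bits, trials=1e40))  # True: 582 >> log2(1e40) ~ 133
```

The disagreement in the thread is not over this arithmetic but over its inputs: gpuccio holds that the functional fraction can be estimated and the null rejected; Liddle holds that no such number is available for the relevant non-design hypotheses.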
gpuccio: "It's IMO the simplest available example to make a simple and clear discussion on my point. Am I free to choose my examples?"

You are certainly free to choose your own examples, but you should realize that they may not be the simplest examples of protein function, contrary to your claim.

gpuccio: "Or do you deny that enzymes exist, and do what I say?"

Why would I deny that enzymes exist and perform biochemical functions? But we are discussing OOL and what emergent self-replicators might require. Certainly, the ability to maintain osmolality, pH balance, and metal homeostasis are vital roles in any living organism. That these functions aren't sexy enough for you does not mean that they can be ignored.

gpuccio: "Simple binding to some biochemical compound is not in itself an interesting enough functional specification. Indeed, in many cases that kind of function so defined is not complex at all."

If the function of serum albumins is not complex enough for you, perhaps you could explain how well you, as a living organism, would get along without these proteins. Would you still be alive?

gpuccio: "Special kinds of binding, especially if related to specific conformational changes and biochemical actions, are much more complex. But the example would become complex too."

Special kinds of binding? What does that even mean? Is the binding of protons not a special enough function for a protein to be considered in an OOL scenario? Is that function simpler than an enzyme's, thereby making it a much simpler example of a protein function?

franklin, June 21, 2013 at 01:04 AM PDT
gpuccio:
Your statement is the equivalent of: “I will never accept your credible explanation unless you logically eliminate my empirically unsupported explanation”.
No, it isn't, gpuccio, and the fact that you think it is, is at the base of the problem, IMO (and not unique to you!). I am perfectly happy, in principle, to accept your credible explanation. I do not require that you eliminate the non-design alternative before I do so. What I am pointing out is that the concept of CSI (and your own version) intrinsically involves precisely that: eliminating the non-design alternative. And it can't be done. That doesn't mean we must conclude that non-design-did-it after all. It just means that the inference of Design via Fisherian null hypothesis testing, where the null is the omnibus null of "non-design", isn't valid. A Bayesian approach might work better.
The handiwork of a designer is independent evidence of a designer (no need to use the capital letter here). Your rejection of that evidence is "dependent" on your worldview. But the evidence of dFSCI in biological objects is independent of any worldview: it is just empirical reasoning.

Well, I meant, independent of the handiwork! I'm not saying the postulated handiwork isn't evidence. I'm saying it cannot serve as both your explanandum and your explanation. I am happy to stipulate, for the sake of argument, that the number of functional proteins is a tiny fraction of the number of possible proteins. That fact needs to be explained. But it cannot simultaneously be evidence for the proffered explanation over some other proffered explanation. If you say, "I hypothesise that a designer designed and fabricated the DNA sequences required," and I say, "I hypothesise that there were a series of precursor sequences that offered some slight reproductive advantage to their bearers in the environment in which they lived," we cannot appeal to the explanandum itself (the small proportion of modern functional proteins out of all possible proteins) to differentiate between the two hypotheses proffered to explain it. That is why I suggested that providing independent (of the explanandum) evidence would be a potentially better approach. Lack of such evidence wouldn't rule out design (indeed design, unspecified, is unrule-out-able), but positive evidence could rule it in.

Elizabeth B Liddle, June 21, 2013 at 01:02 AM PDT
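Since Liddle's suggestion that "a Bayesian approach might work better" is left undeveloped in this thread, here is a minimal sketch of what Bayesian model comparison would look like, with made-up likelihoods purely for illustration:

```python
def posterior_odds(prior_odds: float, p_data_design: float,
                   p_data_nondesign: float) -> float:
    """Bayesian comparison of two hypotheses: posterior odds equal
    prior odds times the likelihood ratio (the Bayes factor)."""
    return prior_odds * (p_data_design / p_data_nondesign)

# Illustrative inputs only; both likelihoods would have to be computed
# under explicit, fully specified hypotheses.
print(posterior_odds(prior_odds=1.0,
                     p_data_design=1e-6,
                     p_data_nondesign=1e-12))  # 1e6 in favour of design
```

Note that the change of framework does not by itself remove the sticking point debated above: a Bayes factor needs a likelihood under each hypothesis, so the disputed quantity P(T|H) reappears as the likelihood of the data under the non-design hypothesis.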
Upright Biped @82 You are of course entitled to your view of our interactions, as I am to mine. As you point out, it is all available for anyone to form their own view, should they care. We had better leave it there rather than derail this thread further, unless you'd like to discuss it at TSZ.

Elizabeth B Liddle, June 21, 2013 at 12:36 AM PDT
Oh, and thanks for the welcome, Eric :)

Elizabeth B Liddle, June 21, 2013 at 12:32 AM PDT
Eric:
Is it your position that it is impossible to determine whether an event is unlikely to have occurred by purely natural processes unless we are able to fully define and calculate the probability of it occurring by such processes?
I'm not sure what you mean by "purely natural processes", but that is not what I am saying. I do think it is perfectly possible to determine that an event was due to a designer without defining and calculating the probability of it occurring by some non-design means. That is what archaeologists and forensic scientists do, for instance. What I am saying is much narrower than that, and concerns Dembski's concept of "CSI" or "chi", for which he gives a mathematical formula based on the principle of Fisherian null hypothesis testing. That formula contains the parameter P(T|H), which is the probability of observing the Target under the null hypothesis, which he defines as "the relevant chance hypothesis, including Darwinian and other material mechanisms". I am saying that that is not calculable, that treating "non-design" as an omnibus null doesn't work, and that therefore the concept of chi doesn't work as a method of detecting design.

Elizabeth B Liddle, June 21, 2013 at 12:31 AM PDT
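The formula Liddle refers to is easy to state in code; the dispute is over whether its key input exists. A sketch, assuming (purely for illustration) that a chance hypothesis H has been pinned down well enough to supply P(T|H); the form used here is Dembski's published one, chi = -log2(10^120 x phi_s(T) x P(T|H)), with design signalled when chi > 1:

```python
import math

def chi(phi_s: float, p_t_given_h: float, resources: float = 1e120) -> float:
    """Dembski-style specified complexity. Liddle's objection is not to this
    arithmetic but to p_t_given_h: under an omnibus "non-design" null it has
    no computable value."""
    return -math.log2(resources * phi_s * p_t_given_h)

# Invented inputs: 10^5 patterns as simply describable as T, and a
# stipulated (not derivable) chance probability of 10^-150.
print(f"{chi(phi_s=1e5, p_t_given_h=1e-150):.1f}")  # ~83.0, i.e. > 1
```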
franklin: "That certainly is not the simplest available example. Why not apply your reasoning to function along the lines of 'maintains osmolality' or 'binds oxygen' or 'binds a metal ion'?"

It's IMO the simplest available example to make a simple and clear discussion on my point. Am I free to choose my examples? Or do you deny that enzymes exist, and do what I say? Simple binding to some biochemical compound is not in itself an interesting enough functional specification. Indeed, in many cases that kind of function so defined is not complex at all. Any chemical compound can bind to something else. Special kinds of binding, especially if related to specific conformational changes and biochemical actions, are much more complex. But the example would become complex too.

gpuccio, June 21, 2013 at 12:06 AM PDT
Elizabeth: Very briefly:

"If we are to rule out a Darwinian explanation for modern proteins, we need to demonstrate that there is no possible series of precursors to modern 'locally' functional proteins that did not confer a reproductive advantage to their ancient bearer organisms. I know we have been here before, gpuccio, but it's important! The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism – if that's the hypothesis you want to falsify, that's the level at which you need to demonstrate lack of prior utility of precursor for the organism."

The simple fact is, we have no need to "rule out" something that does not exist, except as a mere logical possibility. That is not science. We need not "demonstrate that there is no possible series of precursors to modern 'locally' functional proteins that did not confer a reproductive advantage to their ancient bearer organisms". It's you (or anyone who supports the neo-darwinian "explanation") who must show that there is any empirical support for such a bizarre idea! At present, the only support for that "explanation" is a dogmatic, ideological rejection of the credible alternative, intelligent design. That is an ideological worldview (reductive materialism), and rejecting the only credible explanation in favour of a "non-explanation", completely unsupported by both facts and logic, is not science.

Again, science is not done by "demonstrating" that the explanation someone dislikes could not logically exist. It is done by supporting a possible explanation with facts, and then comparing it with other possible explanations. The game is simple here: all known facts support the ID explanation. None of them supports the imaginary scenario where, as you say, there exists a "series of precursors to modern 'locally' functional proteins that did confer a reproductive advantage to their ancient bearer organisms".

Your statement is the equivalent of: "I will never accept your credible explanation unless you logically eliminate my empirically unsupported explanation". The null hypothesis we reject here is the non-design hypothesis. Under that hypothesis, only random events and/or natural laws can be invoked as the explanation of what we observe. Well, those two causes cannot explain what we observe. Then you say: "No, I will not reject the null, because maybe if we could observe such and such, then a bizarre explanation could work". But that is not scientifically correct. First show that we have observed such and such (for example, the precursors, either in the proteome or in the lab). Then your reasoning will gain some credibility. Until then, all unbiased thinkers will correctly reject the null. This is empirical science.

"Most protein domain families are inferred to be extremely ancient, so it is at that ancient level – prior to multicellularity – that we need to be looking for potential reproductive advantage, at least for modern proteins in those domains."

I agree. That certainly makes things easier, because prokaryotes are widely available for experimentation and research.

"Now this is your field, not mine"

I agree :)

"But if you are to persuade me that there is no evolutionary pathway to a modern functional protein, I will need to be assured that its precursors were necessarily useless in all organisms that they inhabited in any environment – in other words 'unselectable'."

I only want to "persuade" you, or anyone else, that there is no known evolutionary pathway to a modern functional protein, and that relegates the neo-darwinian hypothesis to the field of myth, not science. That's much simpler :) Again, I need not falsify your claims if your claims are empirically unsupported. That is falsification enough in itself. I could simply argue that dark energy was the cause of a necessary pathway to modern proteins, but unless and until I give some empirical support to that statement, you need not "falsify" it.

"I do not claim that they were not – but my point is that to reject the null of non-design, you'd have to show that they were. Or, alternatively, provide independent evidence of a Designer (independent of the handiwork of the postulated Designer),"

The handiwork of a designer is independent evidence of a designer (no need to use the capital letter here). Your rejection of that evidence is "dependent" on your worldview. But the evidence of dFSCI in biological objects is independent of any worldview: it is just empirical reasoning.

"Nonetheless positive ID hypotheses are in principle possible ('Frontloading' for instance, probably makes different predictions to Darwinian evolution)."

My hypothesis of designed variation certainly makes different predictions: for example, the lack of selectable intermediaries, and the possible existence of rather "sudden" jumps in the emergence of information. Both predictions are supported by a lot of known facts. Frontloading makes different predictions still, but I am not aware of much empirical support for that hypothesis. That's why I usually don't like it.

gpuccio, June 21, 2013 at 12:02 AM PDT
Elizabeth: Welcome back. I'm glad to see that your posting privileges have been restored.
P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.
Let's cut to the chase. You've brought this up many times, so it seems like a central theme for you. Is it your position that it is impossible to determine whether an event is unlikely to have occurred by purely natural processes unless we are able to fully define and calculate the probability of it occurring by such processes? Thanks,

Eric Anderson, June 20, 2013 at 06:18 PM PDT
Elizabeth:
If we are to rule out a Darwinian explanation for modern proteins,...
First you have to understand what a "Darwinian explanation" entails. Until you do that all you are doing is equivocating.
The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism –
Nope, only that it wasn't fatal. Natural selection is eliminative. What doesn't work or is fatal is what gets filtered out. As for independent evidence for the designer: again, the evidence for a designer wrt biology is independent of the evidence for a designer in physics and cosmology. But anyway, until you understand what a "Darwinian explanation" entails, you will never understand what is being debated. And that means your opinion on the subject is tainted by that ignorance. Just sayin'...

Joe, June 20, 2013 at 06:05 PM PDT
Dr Liddle: "Well, not really, Upright Biped."

What is clear to me, Elizabeth, is that when you and I began this conversation two years ago, you had no disciplined conception of what information was - at all. If you did, it was certainly not made evident by your positions. The argument presented to you has focused your understanding of what information is, and how it must operate in order to produce material effects. You clearly now know that Darwinian evolution did not evolve the material requirements for biological information to exist, but it's saddening that you do not possess the capacity to admit it. What you've done here - this ridiculous denial of what is made obvious by your own words - is nothing new. I have been pulling up your words and pointing out the inconsistencies almost from the very start, and yet it's always the same damn thing: you absolutely never integrate. I would never expect someone like Byers or even Sandstrom to really have the capacity to be wrong about something and learn from a competitor, but for some odd reason I thought you might. I was wrong about that, and I have been wrong about it from the start.

Upright BiPed, June 20, 2013 at 05:38 PM PDT
gpuccio: Let’s take the simplest example: an enzyme that accelerates a reaction.
That certainly is not the simplest available example. Why not apply your reasoning to function along the lines of 'maintains osmolality' or 'binds oxygen' or 'binds a metal ion'?

franklin, June 20, 2013 at 04:54 PM PDT
Thanks for this response, and it's good that we agree on more than you anticipated :) I am not persuaded, however, by your distinction between "local" and organismic function.

If we are to rule out a Darwinian explanation for modern proteins, we need to demonstrate that there is no possible series of precursors to modern "locally" functional proteins that did not confer a reproductive advantage to their ancient bearer organisms. I know we have been here before, gpuccio, but it's important! The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism - if that's the hypothesis you want to falsify, that's the level at which you need to demonstrate lack of prior utility of precursor for the organism. Most protein domain families are inferred to be extremely ancient, so it is at that ancient level - prior to multicellularity - that we need to be looking for potential reproductive advantage, at least for modern proteins in those domains. Now this is your field, not mine, so I will stop there.

But if you are to persuade me that there is no evolutionary pathway to a modern functional protein, I will need to be assured that its precursors were necessarily useless in all organisms that they inhabited in any environment - in other words "unselectable". I do not claim that they were not - but my point is that to reject the null of non-design, you'd have to show that they were. Or, alternatively, provide independent evidence of a Designer (independent of the handiwork of the postulated Designer), but I know that is an unpopular approach amongst ID proponents! Nonetheless positive ID hypotheses are in principle possible ("Frontloading", for instance, probably makes different predictions to Darwinian evolution).

Elizabeth B Liddle, June 20, 2013 at 04:40 PM PDT
VJ: You are always a wonderful example of a sincere search for truth. Your deep attempts at clarifying difficult concepts are certainly precious. At the cost of being repetitive, I will try again to give my contribution to the problem of specification and complexity.

We must not forget our real purpose: our real purpose is a tool for design detection in the empirical world, nothing else. A tool is good if it works. So I say again that functional specification is completely valid for most discussions about biological objects. Biological objects are special because they are functional, not because they are repetitive, or compressible, and so on. So the only pertinent question, in the case of a protein, is: is it functional? And then: how many bits of specific information are necessary to have that function? That is, in a nutshell, the concept of dFSCI.

Specification is not important in itself. A lot of objects can be specified in some way, and yet they are not designed. What is unique to design is a specification that cannot be attained without a very high number of bits of specific information. Those kinds of functions are never obtained in a "natural", non-designed context. That's why dFSCI is exactly the tool we need. It works.

Let's go to the problem of "order". I would simply say that any statement must be relative to a context. I have always emphasized that any attempt at design detection must be relative to a specific physical system, with a defined temporal frame and definite probabilistic resources. To infer design for an object in that system, we just need to ascertain:

a) That the digital functional complexity (the ratio of sequences that exhibit the function to the search space of sequences) for that function is high enough, considering the probabilistic resources of the system in the temporal frame.

b) That no known algorithm physically available in that system can generate the observed functional sequence by necessity, as an alternative to mere random probability.

If a) and b) are true, we can safely infer design as the best explanation. Neo-darwinists usually reject b), saying that we can never be sure that some day an algorithm will not be found that can generate the functional sequence. That argument is silly and utterly unscientific. Science works with the explanations we have, not with the mere theoretical possibility that some day one may be found. That is religious expectation, not science. That's why, in my b), I always stress the words "known algorithm".

Let's go to the case of sequences from coin tossing. Let's say that we have a sequence of 100 heads. What does it mean? I don't know what Dembski would say, but for me 100 heads is not a good sequence to infer design in a system, unless we can verify many conditions. 100 heads is a highly compressible sequence. Its Kolmogorov complexity is very low. What does that mean? It means that you can easily get that sequence from some very simple physical algorithm: for example, the easiest way is that you are tossing a coin that always gives heads, because of its physical properties. But other possibilities should be excluded, like some special magnetic field, and so on. None of these hypotheses necessarily entails design.

But again, if our coin gives us the first 500 bits of pi in binary form, the situation is completely different. The result is not highly compressible. It is in a sense compressible, because some algorithm can calculate the digits of pi. But:

a) The algorithm would in any case be rather complex (it would express the Kolmogorov complexity of any result corresponding to pi, however long).

b) If our system is simply a man who tosses a coin, I can't see how any pi-computing algorithm could be incorporated in such a system.

So the only way to explain a sequence of coin tosses that expresses the first 500 bits of pi is design: the man who tosses the coin already knows the sequence to be attained, and in some way he controls the outcome of each toss.

So I really believe that if we stay empirical, define our systems and time correctly, compute our probabilistic resources, and use a good tool for design detection like dFSCI, our design inferences will be really good and scientifically valid.

gpuccio, June 20, 2013 at 04:35 PM PDT
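gpuccio's contrast between a run of heads and the binary digits of pi can be illustrated numerically. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough upper-bound proxy. The sketch below (length-matched to 500 bits for all three cases) compresses an all-heads sequence, the first 500 bits of pi, and a fair-coin sequence:

```python
import random
import zlib

def arctan_inv(x: int, one: int) -> int:
    """Integer arctan(1/x), scaled by `one`, via the alternating Taylor series."""
    total, term, n, sign = 0, one // x, 1, 1
    while term:
        total += sign * (term // n)
        term //= x * x
        n, sign = n + 2, -sign
    return total

def pi_bits(n: int) -> str:
    """First n binary digits of pi, via Machin's formula, with 10 guard bits."""
    one = 1 << (n + 10)
    pi_scaled = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return bin(pi_scaled >> 10)[2:][:n]

def compressed_size(bits: str) -> int:
    return len(zlib.compress(bits.encode(), 9))

random.seed(0)
sequences = {
    "all heads": "1" * 500,
    "pi bits": pi_bits(500),
    "fair coin": "".join(random.choice("01") for _ in range(500)),
}
for name, seq in sequences.items():
    print(f"{name:>9}: {compressed_size(seq)} bytes")
```

The all-heads sequence collapses to a handful of bytes, while pi's bits compress about as poorly as noise even though a short program generates them; that gap between compressor-measured compressibility and algorithmic generability is exactly the distinction gpuccio's point a) trades on.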
Elizabeth: Thank you for the clear answers. I see that you agree on more points than I expected. I can only be happy about that. Your main "difficulty" seems to be the following:

"Not easily, because I don't know how you could compute how many of the theoretically possible proteins could perform some advantageous (i.e. promote reproduction) function in some organism in some environment at some time. Which is what you'd have to do if you wanted to compute, say, the probability of the protein evolving. So I've never really understood the reasoning there."

But, again, here you seem not to understand the peculiar problem of proteins. The fact is, most proteins have definite biochemical functions, what I called, in an earlier discussion with you, the "local" function. That is exactly the function that you find immediately defined in protein databases, when it is well understood. Now, for a moment, forget the higher level of organization, and how in the end the function will or will not give a reproductive advantage. The point is, if a protein is not an efficient molecular machine that does something that would not otherwise happen, it is generally useless. Of course, a protein could be just a messenger or a signal, but usually most basic important proteins are biological catalysts, and very efficient ones. Indeed, even signal cascades are always realized by very efficient biochemical reactions.

Let's take the simplest example: an enzyme that accelerates a reaction. The simple fact is: that reaction, which is necessary for the biological environment (for metabolism, or reproduction, you name it), would never happen spontaneously, or it would happen at a ridiculously low speed. But we find in the cell a specific protein, very complex and efficient, which folds in such a way that it can, for example, bind very efficiently to the two components that must react, and makes them react with each other against all "natural" biochemical laws. IOWs, it is a machine that performs a "local" function extremely well. Nothing like that exists in nature outside living beings.

Now, the local function, in itself, could have no special meaning. Obviously, it can be related to a reproductive advantage, or to any kind of biological advantage, only if correctly integrated in a complex system that needs just that function. That is, in a few words, the concept of irreducible complexity. But the point I am trying to make here is the following: if the protein is not able to accelerate that reaction enormously, it is of no utility. So, when you reason that "proteins could perform some advantageous (i.e. promote reproduction) function in some organism in some environment at some time", you are reasoning abstractly, and forgetting that each protein must be able to perform its "wonderful" local function to be able to help in any possible way. Otherwise, it is only a sequence of amino acids: a useless burden for the cell.

It is exactly that point that invalidates any neo-darwinian explanation: those "local" functions are extremely complex, and separated in the space of sequences, as I have many times demonstrated by simple data taken from the proteome, in particular from the SCOP classification. Basic protein domains are usually longer than 100 AAs, sometimes much longer. Durston has found very high functional complexity in most protein families he has examined. Those basic domains are the essential foundation of all biochemical functions in the cell. They are many, they are complex, they are separated. No neo-darwinian explanation has ever been found for even one of them. These are facts.

In front of these facts, the design explanation has huge credibility. It should be the main hypothesis in biological science today. Each new development in our understanding of biological complexity, at all levels, adds strength and credibility to the design explanation, and in no way helps the neo-darwinian theory, which is really reduced to a dogmatic myth. Science must go on, and it must go on according to what is credible and explains observed facts. That is certainly not true of the neo-darwinian paradigm. The absolute abnormality is that such an unsupported paradigm is still accepted by most scientists as "truth". That can only be explained by a cognitive bias so huge that the only correct way to express it is "dogma".

You say:

"If we could simply calculate for all possible proteins the proportion that are 'functional', all would be well. If the proportion was small enough, you could claim they were 'Irreducibly Complex' in that a vast number of unselectable steps would be required to get them from a short peptide (or even a long peptide) to a useful protein."

But that is exactly what all the facts point to! And you must consider that what we should look at is not "any functional protein" in the absolute, but rather "any protein with a new, original local function that, by itself, can give a reproductive advantage in a certain pre-existing cell type". Neo-darwinists like to dream of small variations that give reproductive advantages. Sometimes (very rarely) that happens; it is well known, it is supported by facts, and it is called "microevolution". But those adaptations are only "tweakings" of what already exists. In no way are they "steps" towards new, original sequences with new, original complex local functions. That has never been shown to happen, for the very simple reason that it does not happen. It is a dream, a myth, completely unsupported by facts.

"In other words, you can't separate the 'function' of a protein from the job it does in keeping the organism alive and fecund, which will vary depending on the environment the organism is in, and, for multicellular organisms, the tissues in which it is expressed, and under what conditions."

I surely can separate the two things. It is very simple. If there is no local function, there can be no higher level of organization, no "job" at all. IOWs, if an enzyme does not work, it does not work. No environment can use it, because it does not work. That's why you have to explain the emergence of basic local functions, which means the emergence of new basic protein domains. Once a protein works, that is, once it does the wonderful, miraculous biochemical job that it does, then it can be integrated in different contexts, for different higher functions, with different levels of expression in different tissues, and so on. But the beginning of everything is always there: the basic biochemical function, the wonderful, complex biochemical machine that makes something happen that would never happen otherwise.

gpuccio, June 20, 2013 at 04:03 PM PDT
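For readers unfamiliar with the Durston measurement gpuccio cites: the published quantity (functional sequence complexity, measured in "fits") is, roughly, the information gained in moving from a uniform null over the 20 amino acids to the residue frequencies actually observed in an alignment of functional sequences. Below is a toy sketch of that style of calculation, with a three-sequence alignment invented purely for illustration and none of the corrections used in the real method:

```python
import math
from collections import Counter

def fits(alignment: list[str]) -> float:
    """Rough Durston-style functional bits: for each alignment column, take
    log2(20) (uniform null over amino acids) minus the observed column
    entropy, and sum over columns."""
    total = 0.0
    for column in zip(*alignment):
        n = len(column)
        h = -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
        total += math.log2(20) - h
    return total

# Invented toy alignment of three "functional" sequences.
print(f"{fits(['MKTAY', 'MKSAY', 'MKTAF']):.1f} fits")  # ~19.8
```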
My objection to ID is the positive claim that “intelligence was required” to account for proteins, or whatever.
Well, seeing that you cannot explain how proteins arose, your "objection" amounts to whining.

Joe, June 20, 2013 at 03:44 PM PDT
Well, not really, Upright Biped. But I guess we leave the judgement as an exercise for the reader. But sure, I should have said "emerge", not "evolve" (as I did correctly later in that thread). So I accept responsibility for the misunderstanding. I hope it is now clear.

Elizabeth B Liddle, June 20, 2013 at 03:43 PM PDT
Translation:
You misunderstood my claim
Clearly, what evolution requires - didn't evolve
I've been certain of that all along
If you thought I was claiming otherwise, that's absurd of you
I'm mystified by why you thought I claimed that
Oh, I see, I was arguing for it after all
Obviously I meant something different than my words
Honestly

cue the applause :)

Upright BiPed, June 20, 2013 at 03:41 PM PDT
A small point, but an important point, Dr Torley (I shall digest the rest of your appendix later, if that's not too horrible a mixed metaphor!) - you write:
In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?
I'd like to make something really clear: I personally do not make ANY claim "that 'no intelligence was required' to account for the origin of proteins", and my position is that any such claim would be hard to defend scientifically. My objection to ID is the positive claim that "intelligence was required" to account for proteins, or whatever. In other words, ID is (in my view unjustifiably) attempting to reject a pantechnicon null of non-design; in contrast, mainstream science does not, and cannot, reject a pantechnicon null of design. This is because neither hypothesis is capable of serving as the null: to reject a null, you have to be able to estimate the probability distribution of your data under that null, and under neither null can that probability distribution be computed.

Elizabeth B Liddle, June 20, 2013 at 03:39 PM PDT
Hi everyone, I've updated this post with an Appendix, as I've revised my views on some key points. Comment is welcome.

vjtorley, June 20, 2013 at 03:14 PM PDT
Ah, further down that same page, I get it right:
That's why I presented a specific proposal. I've thought it out a little more thoroughly, so here it is: I propose to devise a virtual world populated by virtual monomers (which I will refer to as vMonomers). Each of these monomers will have a set of "chemical" properties, i.e. they will have certain affinities and phobias (if that's the right word) and some of those affinities will be shallowly contingent. This, if you like, is the "Necessity" part of the virtual world – a set of simple rules that govern what happens when my vMonomers come into contact with each other. It will be entirely deterministic. In contrast, the way my vMonomers move around their virtual world will be entirely stochastic (virtual Brownian motion, if you like) so that the probability of any one of them moving in any given direction is completely flat – all directions are equiprobable. So we have Necessity, and we have Chance. And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself. However, those copies will not be perfect, and so I also foresee that once my self-reproducing structures have emerged they will evolve; in other words, the most prevalent structure type in each generation will tend to change. As I say, I don't know that I can do this (although I believe it can be done!) If I succeeded, would you agree that information (meaningful information, i.e. the information required to duplicate a structure) had been created by Chance and Necessity?
So really, there should have been no confusion, UBP. Indeed, lower down that thread you yourself quote my later words:
And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself.
But I fully concede that my word "evolve" in the earlier post was an error. (Apologies for this derail - UBP, we can take this to TSZ if you like.)

Elizabeth B Liddle, June 20, 2013 at 03:11 PM PDT
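Liddle's vMonomer proposal is concrete enough to sketch. Below is a deliberately minimal toy (the names and rules are invented here; this is not her actual model): particles take unbiased random-walk steps (the Chance part) and bond deterministically on contact according to a fixed affinity rule (the Necessity part). It illustrates only the architecture she describes, not the emergence of self-replicators:

```python
import random

random.seed(1)
SIZE, STEPS, N = 20, 200, 40

# Two vMonomer types with one deterministic rule: A bonds to B on contact.
particles = [{"type": random.choice("AB"),
              "pos": (random.randrange(SIZE), random.randrange(SIZE)),
              "bonded": False} for _ in range(N)]

def step() -> None:
    # Chance: each unbonded particle moves one cell in a uniformly
    # random direction on a toroidal grid.
    for p in particles:
        if not p["bonded"]:
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            p["pos"] = ((p["pos"][0] + dx) % SIZE, (p["pos"][1] + dy) % SIZE)
    # Necessity: wherever an unbonded A meets an unbonded B, they bond.
    cells = {}
    for p in particles:
        cells.setdefault(p["pos"], []).append(p)
    for group in cells.values():
        free = [q for q in group if not q["bonded"]]
        if {"A", "B"} <= {q["type"] for q in free}:
            for q in free:
                q["bonded"] = True

for _ in range(STEPS):
    step()
print(sum(p["bonded"] for p in particles), "of", N, "monomers bonded")
```

Whether such a world can yield heritable, self-copying structures is precisely the open question she concedes at the end of the quoted passage.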
Ah, I see you quoted me as saying:
And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve…
I agree that was ambiguous - I should have written "emerge" - although as soon as you have a population of self-replicators (as long as there is heritable variance in reproductive success) it will indeed evolve, which was the point I next made. Here is the whole of what I wrote, amended (I googled it - a link from you would have been helpful):
I'm going to start off with a "toy" chemistry – a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, ambiphilic, etc.) in a fluid medium where motion is essentially Brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I'm not sure. And what I propose to do is that, starting with a random distribution of these units, a self-replicating population of more complex units will emerge and then evolve, in which each unit (or "organism" if you like, or "critter") has, encoded within it, the "recipe" for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don't even specify a fitness function that isn't intrinsic to the "chemistry", that depends entirely on random motion ("Chance" if you like) and "necessity" (the toy chemistry) to create an "organism" with a "genome" that encodes information for making the next generation. Information "about" the next generation that is "sent" to the processes involved in replication. If I succeeded, would you accept that I had met the challenge, or do you foresee a problem? (I have to say, I'm not sure I can do it!)

Note that last sentence!

Elizabeth B Liddle, June 20, 2013 at 01:48 PM PDT
Upright Biped:
You were claiming that Darwinian processes could certainly account for any “new information” introduced into the genome.
Yes indeed. And I would still stand by that claim.
This is the challenge you undertook. In other words, Dr Liddle, you were specifically trying to create a simulation to show that Darwinian processes could originate a self-replicating system. That was what you believed it could do, and that was what you intended to show.
Um, no. If you thought that was what I was claiming, you misunderstood. Perhaps you could track down the place where you think I said that. But clearly, as Darwinian processes require a self-replicating system to function, you cannot generate a self-replicating system ab initio by Darwinian processes! If that's what you thought I was claiming, then I concede completely that it isn't possible. It would be absurd. But I'm mystified as to why you think I might have made such a claim. (And, btw, although it's nice to be back, it would be good if you could at least do me the courtesy of considering alternate possibilities to dishonesty when trying to account for my words. My being in error is one; your own misunderstanding of my position is another.)

Elizabeth B Liddle, June 20, 2013 at 01:33 PM PDT
PDT
EL: “by setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge”.
EL: Darwinian processes can’t explain the origin of self-replicators
EL: I see no inconsistency between the statements
You are so terribly predictable, Dr Liddle. The very moment I hung up the phone, I knew you'd simply claim there was "nothing inconsistent". Clever apologist that you are, I knew you'd play on the great possibility that the average reader here wouldn't know the history of the conversation. But the truth of the matter is that the simulation you were trying to run was being done to defend a very specific statement that I had challenged you on. You were on UD taking the existence of the information system in the genome for granted. This is something materialists do quite frequently. You were claiming that Darwinian processes could certainly account for any "new information" introduced into the genome. Perhaps you'll remember the conversation; you were proclaiming the powers of the Darwinian process, and said:
I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of.
To which I had responded:
Neo-Darwinism doesn't have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists is to ignore the 600lbs assumption in the room.
Obviously, that 600lbs assumption is the origin of the information system inside a self-replicator (i.e. "into existence in the first place"). And just as obviously, I was challenging you on the power of Darwinian processes to create such a system from scratch (i.e. "Neo-Darwinism doesn't have a mechanism"). This is the challenge you undertook. In other words, Dr Liddle, you were specifically trying to create a simulation to show that Darwinian processes could originate a self-replicating system. That was what you believed it could do, and that was what you intended to show. - - - - - - - - - - - - - - - - - - - - Your history here has taught me that your next move will be to simply repeat the denial of any inconsistency. This sort of thing has become expected. And as already stated, this will only send me back into the vault for more of your clarifying comments on the subject. Such as...
I’m going to start off with a “toy” chemistry – a virtual environment populated with units... And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve... That way we will have a Darwinian process where I don’t even specify a fitness function...
So, as I suggested earlier (and given that hell will freeze over before you admit the obvious), perhaps you would prefer to just drop it. It seems to be far easier on you to just conclude that we've "misunderstood each other". :|

Upright BiPed, June 20, 2013 at 01:20 PM PDT