
Order vs. Complexity: A follow-up post


NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term “generated” with the term “described” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_s(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_s(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”
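Putting the definition just quoted into a single formula (my own transcription of Dembski’s prose, using his notation):

    sigma = –log_2 [ phi_s(T) x P(T|H) ]

where phi_s(T) is the semiotic-description count defined above and P(T|H) is the probability of the pattern T under the relevant chance hypothesis.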

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial z_(n+1) = z_n^2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
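To illustrate just how short the generating procedure is, here is a minimal sketch in Java (my own illustration, not drawn from Wikipedia or Dembski) of the boundedness test described in the quoted passage:

    public class MandelbrotCheck {

        // Returns true if c = cRe + cIm*i appears to remain bounded under the
        // iteration z_(n+1) = z_n^2 + c, using the standard escape test |z| > 2.
        static boolean inMandelbrotSet(double cRe, double cIm, int maxIterations) {
            double zRe = 0.0, zIm = 0.0;                      // start from z_0 = 0
            for (int n = 0; n < maxIterations; n++) {
                double nextRe = zRe * zRe - zIm * zIm + cRe;  // real part of z^2 + c
                double nextIm = 2.0 * zRe * zIm + cIm;        // imaginary part of z^2 + c
                zRe = nextRe;
                zIm = nextIm;
                if (zRe * zRe + zIm * zIm > 4.0) {
                    return false;                             // orbit escapes; c is outside the set
                }
            }
            return true;                                      // bounded so far; treated as inside
        }

        public static void main(String[] args) {
            System.out.println(inMandelbrotSet(-1.0, 0.0, 1000)); // true: the orbit of c = -1 cycles 0, -1, 0, -1, ...
            System.out.println(inMandelbrotSet(1.0, 0.0, 1000));  // false: the orbit of c = 1 grows without bound
        }
    }

The entire generating rule fits in a dozen lines; the visual richness of the set comes from iterating that rule, not from a long description.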

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificity were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they comprise. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)
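To make the arithmetic behind the quoted figure explicit (my own restatement, not Kozulic’s wording): combining the two experimental estimates gives (1/10^11) x (1/10^10) = 1/10^21, which Kozulic then rounds, in the direction favourable to the chance hypothesis, to 1 in 10^20.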

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell (HarperOne, 2009, p. 106), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)
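To make Meyer’s point concrete, here is a minimal sketch in Java (my own illustration, not Meyer’s) showing that a few lines of code suffice to generate an arbitrarily long repeating sequence of the kind he describes:

    public class RepeatingSequence {
        public static void main(String[] args) {
            // A trivially short "algorithm or set of commands" generates a long,
            // highly ordered sequence; this is why such sequences are so compressible.
            StringBuilder sequence = new StringBuilder();
            for (int i = 0; i < 1000; i++) {
                sequence.append("ABC");
            }
            System.out.println(sequence.length()); // 3,000 characters from a three-line loop
        }
    }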

There are three problems with this definition. First, it mistakenly conflates physics with mathematics, when it defines a complex sequence as one that defies expression by “a general law or computer algorithm.” I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates: (a) the repetitiveness of a sequence, with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes it as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).
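As an illustration, here is a short sketch in Java (my own, following the construction Dembski describes in the passage just cited) that generates the opening segment of the binary Champernowne sequence; it shows how a non-repeating sequence can nonetheless flow from a very short algorithm:

    public class ChampernowneSequence {
        public static void main(String[] args) {
            StringBuilder sequence = new StringBuilder();
            // Write out all binary strings of length 1, then length 2, and so on,
            // in ascending lexicographic order, as Dembski describes.
            for (int length = 1; length <= 4; length++) {
                for (int value = 0; value < (1 << length); value++) {
                    String bits = Integer.toBinaryString(value);
                    while (bits.length() < length) {
                        bits = "0" + bits;   // pad with leading zeros to the current length
                    }
                    sequence.append(bits);
                }
            }
            System.out.println(sequence); // prints 0100011011000001010011100101110111...
        }
    }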

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered if it is reasonably likely to occur in our universe, and complex if its physical realization in our universe is a very unlikely event.

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)
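(For a concrete figure, which is my own arithmetic rather than Dembski’s: the probability of any particular pre-specified sequence of twelve rolls of a fair die, such as 123456123456, is (1/6)^12, or roughly 1 in 2.2 billion, so the fact that the pattern repeats does nothing to make its occurrence probable.)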

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while others say that some objects in Nature have fractal features, and still others that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent Design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of an object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.
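Since Kolmogorov complexity is not computable in general, a general-purpose compressor is often used in practice as a rough, upper-bound proxy. The sketch below (my own illustration, not something proposed by Dembski, Meyer or Shallit) shows the idea: a repetitive sequence shrinks dramatically under deflate compression, while a pseudo-random sequence of the same length barely shrinks at all.

    import java.util.Random;
    import java.util.zip.Deflater;

    public class CompressibilityDemo {

        // The compressed size is an upper bound (up to a constant) on Kolmogorov
        // complexity: it can demonstrate that a string is compressible, but it can
        // never prove that no shorter description exists.
        static int compressedSize(byte[] data) {
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.setInput(data);
            deflater.finish();
            byte[] buffer = new byte[data.length + 64];
            int size = deflater.deflate(buffer);
            deflater.end();
            return size;
        }

        public static void main(String[] args) {
            byte[] ordered = "ABC".repeat(1000).getBytes();   // repetitive, 3,000 bytes
            byte[] random = new byte[3000];
            new Random(42).nextBytes(random);                 // pseudo-random, 3,000 bytes
            System.out.println(compressedSize(ordered));      // small: the repetition compresses away
            System.out.println(compressedSize(random));       // close to 3,000: little compression possible
        }
    }

As Mark Frank notes above, the converse is the hard direction: failure to compress never proves that no short description exists.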

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a) an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, specification needs not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.

P(T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled, The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled, Proteins and Genes, Singletons and Species, in which he argued (generously, in his view) that at most, 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo, that each species possessed hundreds of isolated proteins called “singletons” which had no close biochemical relatives, and that the likelihood of these proteins originating by unguided mechanisms in even one species was astronomically low, making proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)
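As a quick check of the arithmetic in the passages quoted above (my own working, not Kozulic’s): the Earth’s surface of 5.1 x 10^14 m^2 corresponds to roughly 5 x 10^20 one-square-millimetre tiles, the order of magnitude Kozulic uses; examining 10^20 tiles at 1,000 per second takes 10^17 seconds, which at about 3.15 x 10^7 seconds per year comes to roughly 3.2 billion years; and 200 independent events each of probability 1 in 10^20 have a joint probability of (10^-20)^200 = 10^-4,000.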

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled, Protein Space and Hoyle’s Fallacy – a response to vjtorley, he cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled, De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5 x 10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is especially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remain unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
@computerist: I think that's the main question here: what does the interpreter or compiler which executes the file look like? ... Since ...

This is mostly up to the programmer depending on the language being used.

... anyone can write an interpreter or compiler that accepts any given language without errors.

— JWTruthInLove, June 22, 2013 at 1:54 PM PDT
@JWTruthInLove: Simple, it will run the file, if it can. If it's an interpreter and the file runs with no errors, it returns true. If it's a compiler and the file compiles and executes without errors, it returns true. This is mostly up to the programmer depending on the language being used. This doesn't assume the type of function; it merely assumes that it functions.

— computerist, June 22, 2013 at 1:19 PM PDT
@computerist: What does "gaFile.execute(temp)" do?

— JWTruthInLove, June 22, 2013 at 1:04 PM PDT
We can test whether a Darwinian process is capable of generating FCSI, and of sustaining and building on top of the existing FCSI given self-replication. I propose to run this type of simulation (based roughly on the pseudo-code below), but more knowledge (which I don't have, since I'm not a biologist) is required for more "realistic" results.

    GAFile gaFile = new GAFile(someArbitrarySizeOfTextFile);
    Fcsi fcsi = new Fcsi();
    fcsi.setFcsiBound(theSmallestKnownFunctionalSubsystemExpressedInBits);

    boolean running = true;
    int count = 0;
    int generations = someBigNumber;

    while (running) {
        File temp = gaFile.randomize();
        boolean runnable = gaFile.execute(temp);
        if (runnable) {
            gaFile.setFile(temp); // new one "survives", save for next iteration
            if (fcsi.hasFcsi(gaFile.getFile())) {
                fcsi.incrementFoundCount();
            } else {
                fcsi.incrementNotFoundCount();
            }
        }
        if (count > generations) {
            running = false;
        }
        count++;
    }
    fcsi.printStatistics();

This may be a ridiculous proposition considering it only took me a few minutes to write (and it is merely a simulation), but unless this type of test is performed, I don't see how any credence can be given to the Darwinian mechanism that "self-replication with heritable variation is all that is required and therefore evolution is inevitable" as per comment 137.

— computerist, June 22, 2013 at 12:37 PM PDT
F/N @ KF How this declaration abuses the word "freedom" is a perfect illustration of why we need to oppose such bigotry.

— Alan Fox, June 22, 2013 at 11:54 AM PDT
Dr Liddle, pardon but I must highlight a question, are these declarants and signatories all to be invidiously compared to Nazis and Nazism, too? And is that to be seen as “a nuh nutten”? KF
I certainly wonder what would happen to human rights such as the right to free expression and the free exchange of ideas if you and your ilk were to gain any sort of political power base. Thank goodness you make yourself appear so ludicrous that most people can't take you seriously.

— Alan Fox, June 22, 2013 at 11:50 AM PDT
Kairosfocus:
Perhaps it has not dawned on you that before you can try to have a discussion with me on merits of points, you need to resolve the problem of hosting and denying that you have harboured slander against me.
I'm sorry, kairosfocus, but I do not understand the problem you want me to resolve. Please feel free to come over to TSZ and make your point, or, alternatively, as I am now able to post here, perhaps start a thread here? Or feel free to email me.

— Elizabeth B Liddle, June 22, 2013 at 10:19 AM PDT
Elizabeth @143:
Assuming you mean the output of machines, as opposed to machines as the putative artefact, yes.
I was referring to the machines themselves. Your “output of machines” is potentially also the product of design, but is typically so poorly defined (notwithstanding your self-congratulations on being so careful with your definitions) as to not be helpful. Let’s focus on the easy cases first: the actual machines we see before us.
I’d say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes.
Here you are just lumping the alleged evolutionary processes into the same category as human design. Nonsense. Talk about assuming the conclusion. That is precisely the question at issue: do evolutionary processes have the ability to produce output like human design? And they have never been shown to do so.
I think that quality is what the ID project has tended to assume must come from an intentional designing agent.
No. ID points out that such quality is only known to come from purposeful intelligent activity. And no-one has ever shown that purely natural processes are up to the task. That is what the whole debate is about.
My point is that key to the chi calculation is the parameter P(T|H), the Probability of the [specified] Target, given the null Hypothesis, which is “the relevant Chance Hypothesis taking into account Darwinian and other Material mechanisms”. That’s what I’m saying is not only non-calculable, but that you’d have to know the answer to your question before you calculate it the answer.
Why in the world would you have to know the answer beforehand? That certainly doesn’t follow logically. As long as the calculation is based on reasonable estimates and includes information that we do know, it can allow us to draw a reasonable inference based on the current state of knowledge. We certainly know enough about biology at this point to start making some calculations and drawing some reasonable conclusions. No-one has ever claimed that the exact probabilities are known with precision. And they need not be.
In other words, I suggest, null hypothesis testing in this form is a completely inappropriate and useless way of inferring Design. Not wrong, just useless. GIGO. . . . I’d say that null hypothesis testing simply won’t give you the answer to the question you are asking. I’m saying the wrong tool for the job. It can’t do it, unless you can precisely define the probability distribution under your null. So it will work to reject the null that a coin is fair. It won’t work to reject the null that a black monolith was not-designed.
Well, you’re back to your long-standing concern about being able to “precisely define the probability distribution.” Yet you freely admit that such a calculation is not needed in other instances (archaeology, forensics, etc.). So you are imposing an a priori different demand for what counts as evidence or what tool can be applied to infer design when it comes to living systems. Now, we have a couple of possibilities: It could be that your position is simply based on a refusal to consider design in living systems. Some might be forgiven for thinking this is the case. Or, perhaps, it could be that you are aware of some other calculation or some other “tool,” as you say, that will allow us to determine whether a particular living system was designed. Please let us know what your proposed calculation or proposed tool is.Eric Anderson
June 22, 2013 at 9:52 AM PDT
Dr Liddle, pardon but I must highlight a question, are these declarants and signatories all to be invidiously compared to Nazis and Nazism, too? And is that to be seen as "a nuh nutten"? KF

— kairosfocus, June 22, 2013 at 6:05 AM PDT
F/N: EL in 128 -- after repeated cycles of correction over the course of at least a year, on what the design inference explanatory filter is and does:
“there is no reason to reject selectable precursors and infer design by default”.
Agsain, after all of these months and more? At this point, I must chalk this up to a deliberately misleading strawman. If EL actually believes this, it is because she has repeatedly refused to accept what has been repeatedly, explicitly pointed out to her concerning the EF and is a simple fact easily ascertainable from the flowchart presented here in the very first post in January 2011 for the ID foundations series here at UD. Namely, as the two decision diamonds show, that there are TWO DEFAULTS, and design is not one of them. (This is also implied in the analysis above where the expression Chi_500 = I*S - 500, bits beyond the solar system threshold, is deduced.) I have explicitly, point by point explained this to Dr Liddle before, so I will not try again, I will simply highlight that the first default is that mechanical necessity suffices to causally explain a phenomenon, similar to how a dropped heavy object falls under 9.8 N/kg initial acceleration near earth's surface. This is defeated by observing high contingency. That is, when under evidently similar initial circumstances, we have materially diverse outcomes, e.g. the dropped object is a die and it tumbles and rests with different faces uppermost. High contingency has two empirically warranted explanations: chance circumstances and/or design, where both can be involved but under certain circumstances we can draw out the distinct effects. Chance is of course the second default. It produces stochastically distributed outcomes, reflective of underlying processes that may trace to quantum statistical distributions or the like, or to the sort of circumstances that happen with a die. That is, we have uncorrelated deterministic chains of cause, with some noise injected, and the result is amplified through sensitive dependence on initial and intervening conditions such as the surface roughness and the eight corners and twelve edges of a die. Similarly, my father's generation of statisticians had a trick to use phone directories as random number tables as the line codes [numbers] though deterministically assigned, are typically highly uncorrelated with names listed in alphabetical order. What defeats this default is what GP is highlighting, complex, functional specificity, especially in coded information such as we see in this post and in something like DNA. This is because, code is readily recognisable, is functionally specific and is therefore confined to narrow zones T in much larger config spaces W. This has been oulined above, and if you want to look at a widely accessible discussion, cf Signature in the Cell. There are three known relevant sources of cause: 1: necessity acting on initial circumstances through dynamics, and leading to natural regularities such as F = m*a, 2: chance contingency, leading to stochastically distributed outcomes, 3: choice contingency, aka design, leading to in some cases FSCO/I especially dFSCI. The only empirically warranted source for FSCO/I is design, and there are literally billions to trillions now of accessible cases in point. There is no fourth causal pattern that is empirically warranted. That is we have either regularites or high contingency,a nd contingency has two distinct sources with diverse empirical signatures in cases of inter3est. This is not reasoning on question-begging default it is inference to best empirically warrantted current explanation, on billions of tests that show the reliability of the inference. 
If objectors genuinely disagree, then they should put up clear cases of blind chance and/or -- notice, the combination is in this -- mechanical necessity producing FSCO/I, especially dFSCI. Let us just say that there is a long list of attempts that invariably turn out to instead be intelligence. All of this has been shown, right there before Dr Liddle and co, over and over again. So, when I see the sort of regurgitated, recirculated talking point above, I have to conclude -- with more of sorrow than of anger -- in light of the incident of denied slander already cited that this is willful continued misrepresentation. Good day GEM of TKI

— kairosfocus, June 22, 2013 at 5:10 AM PDT
Elizabeth: Let's go to the final point, the most important: why the neo darwinian algorithm is not only unsupported by facts, but also usupported by logic. I will try to be simple and clear. My impression was that, in your initial discussion, you were only suggesting that selectable precursors could exist in the protein space, and that if they were many that would help the evolution of functional proteins. At this point you had not mentioned anything about protein structure and function, as you do in your following post. My answer to that was very simple. Even if many selectable precursors exist in the protein space, there is no reason to think that their distribution favors functional proteins versus non functional states. Therefore, the probability of getting to a functional protein remains the same, whatever the number of selectable intermediaries in the space. IOWs, even if selection acts, it will act as much to lead to non functional states as it does to lead to functional states, and as functional states are extremely rare, the probability of finding them remains extremely low. Is that clear? Now, in the following post, you add considerations about protein structure and function. They are not completely clear, but I will try to make my point just the same. Here is your argument:
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I’d say that under the Darwinian hypothesis that is what you’d expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case – similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive.
You seem to imply that, in some way, the relationship between structure and function can "help" the transition to a functional unrelated state. But the opposite is true. Let's go in order. The scenario is, as usual, the emergence of a new basic protein domain. As I have already discussed with you in the past, we must decide what is our starting sequence. the most obvious possibilities are: a) An existing, unrelated protein coding gene b) An existing, unrelated pseudogene, no more functional c) An unrelated non coding sequence. Why do I insist on "unrelated"? Because othwerwise we are no more in the scenario of the emergence of a new basic protein domain. As I have explained many times, we have about 2000 superfamilies in the SCOP classification. Each of them is completely unrelated, at sequence level, to all the others, as can be easily verified. Each of them has different sequence, different folding, different functions. And they appear at different times of natural history, although almost half of them are already present in LUCA. So, the emergence of a new superfamily at some time is really the emergence of a new functional island. The new functional protein will be, by definition, unrelated at the sequence level to anything functional that already existed. It will have a new folding, and new functions. Is that clear? Now, as usual I will debate NS using the following terminology. Please, humor me. 1) Negative NS: the process by which some new variation that reduces reproductive fitness can be eliminated. 2) Positive NS: the process by which some new variation that confer a reproductive advantage can expand in the population, and therefore increase its probabilistic resources (number of reproduction per time in the subset with that variation). Let's consider hypothesis a). Here, negative NS can only act against the possibility of getting to a new, unrelated sequence with a new function by RV. Indeed, then only effect of negative NS will be to keep the existing function, and eliminate all intermediaries where that function is lost or decreases. The final effect is that neutral mutations can change the sequence, but the function will remain the same, and so the folding. That is what is expressed in the big bang theory of protein evolution, and explains very well the sequence variety in a same superfamily, while the function remains approximately the same. In this scenario, it is even more impossible to reach a new functional island, because negative NS will keep the variation within the boundaries of the existing functional island. What about positive NS? In this scenario, it can only have a limited role, maybe to tweak the existing function, improve it, or change a little bit the substrate affinity. Some known cases of microevolution, like nylonase, could well be explained in this context. Let's go now to cases b) and c). In both situations, the original sequence is not transcribed, or simply is not functional. Otherwise, we are still in case a). That certainly improves our condition. There is no more the limitation of negative NS. Now we can walk in all directions, without any worry about an existing function or folding that must be preserved. Well, that's much better! But... in the end, now we are in the field of a pure random walk. All existing unrelated states are equiprobable. The probability of reaching a new functional island is now the same as in the purely random hypothesis. 
Your suggestion that some privileged walks may exist between isolated functional islands is simply illogical. Why should that be so? The functional islands are completely separated at sequence level, we know that. SCOP classification proves that. They are also separated at the folding level: they fold differently. They also have different functions. Why in the universe should privileged pathways exist between them? What are you? An extreme theistic evolutionist, convinced that God has designed, in the Big Bang, a very unlikely universe where in the protein space, for no apparent reason, there are privileged walk between unrelated functional islands, so that darwinian evolution may occur? How credible is this "God supports Darwin" game? You, like anyone who finds the neo darwinian algorithm logically credible, should really answer these very simple questions.gpuccio
June 22, 2013 at 5:07 AM PDT
Elizabeth: The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That’s why I’d say the onus (if you want to reject the “null” of selectable precursors) to demonstrate that such precursors are very unlikely. As explained, "selectable precursors" are not a "null": they are an alternative hypothesis (H1b, not H0). I reject the neo darwinian hypothesis H1b because it is completely unsupported by facts. I have no onus at all. It is unsupported by facts. Period. Show me the facts, and I will change my mind. Moreover, if intermediaries exist, it must be possible to find them in the lab, and to argue about what advantage they could have given. If your point is that: a) Precursors could have existed, but we have no way to find them and: b) Even if we found them, there is no way to understand if they could have given an advantage in "those organisms" and in "that environment", because we can know nothing of those organism and that environment, then you are typically proposing an hypothesis that can never be falsified. I suppose Popper would say that it is not a scientific hypothesis. We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives. The rejection of an alternative is always an individual choice. I would be happy if ID and neodarwinism could coexist as "unrejected alternatives" in the current scientific scenario. That's not what is happening. Almost all scientists accept the unsupported theory, and fiercely reject the empirically supported one. Using all possible methods to discredit it, fight it, consider it as a non scientific religious conspiration, and so on. Not a good scenario at all, for human dignity. I am merely arguing against the validity of the arguments for Design that you are presenting. The point is not missed at all. It's my arguments for design that I defend, and nothing else. And I shall go on not using the capital letter, because I am arguing for some (non human) conscious intelligent being, not for God. Actually I accept that the flaw here is not circularity. Thank you for that. It is if you assume that by rejecting the random-draw null you have rejected the non-design null, which I have never done. but as you claim that the inference rather is by “simple inference by analogy”, I agree it is not circular. I simply claim what I have always clearly stated for years, here and at TSZ. On the other hand nor is it sound. You are free to believe so, but I see no reason for such a statement. More in next post.gpuccio
June 22, 2013 at 04:16 AM PDT
Elizabeth: For me it's a pleasure to go on discussing with you, provided it does not become too exacting on my time :) To find some balance, I will as a rule answer only the new aspects in your posts, and take for granted what we have already clearly discussed, with the due differences between our points. So, some comments on your last post (#128):

Fair point, it was a lame example. My point is that positing selectable precursors seems at least no less credible than positing a completely unobserved entity. And at least we know where to look for the selectable precursors, and we know that Darwinian algorithms basically work. For example (I know UD proponents hate this demonstration, but it deserves a lot more credit than it's given), Lenski's AVIDA shows that even if you have functions that are all Irreducibly complex (require non-selectable precursors) they evolve, even when they require deleterious precursors. So we know that the principle works. My argument is not "therefore there must have been selectable precursors" but "therefore there is no reason to reject selectable precursors and infer design by default".

I will not comment on GAs. I have already done that, and in my past discussions I have clearly shown how even the GA you proposed in your blog is in no way an acceptable model of NS, and has no relevance to our discussion. As far as I remember, nobody at TSZ could refute my arguments about your GA. I invite you to read those past threads, if you want.

Another point I want to stress is that design is never inferred "by default". I don't even understand what you mean by such strange wording. Design is inferred because we observe dFSCI, and because we know that design is the only observed cause of dFSCI in all testable cases. That makes a design inference perfectly reasonable. There is no default here, only sound reasoning. Design is a very good explanation for any observed dFSCI. The fact that no other credible explanation is available makes design the best available explanation, certainly not "a default inference".

But gpuccio, this then becomes an argument-from-ignorance.

Absolutely not! This is simply neo-darwinist propaganda. The scenario is very simple. Design is a credible explanation for dFSCI, because of specific positive empirical observations of the relationship between dFSCI and design. That is a very positive argument for the design inference, and ignorance has nothing to do with it. Then there is the attempt of neo-darwinists to explain dFSCI in biology by an algorithm based on RV + NS. Without discussing the details (more on NS in a moment), let's say that such an explanation has no credibility unless selectable intermediaries to all basic protein domains exist. Our empirical data offer at present no support for such an existence, and it is not even suggested by pure logical reasoning. Therefore, the situation is as follows:

a) We reject H0 (pure random origin)
b) We have a credible explanation, based on positive empirical observations (design): let's call it H1a
c) We have a suggested alternative explanation, unsupported by any empirical observation (the neo-darwinian hypothesis): let's call it H1b

Now, it is obvious to me that H1a is far better than H1b. So, I accept it as the best explanation. You may disagree, but the fact that I reject H1b as a credible explanation because it is unsupported by known facts is in no way an "argument from ignorance": it is simply sound scientific reasoning. More in next post.gpuccio
June 22, 2013 at 03:54 AM PDT
PS: Just to highlight the key transformation:
Chi = – log2[10^120 · phiS(T) · P(T|H)]

where log(p*q*r) = log(p) + log(q) + log(r), 10^120 ~ 2^398, and log(1/p) = – log(p), so:

Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phiS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, we have transformed the probability into an information metric, which is far more tractable and in part can be directly observed in the informational macromolecules of life, which can easily enough be rendered into bits. Next, we get the thresholds and transform further into SPECIFIC, functional information by use of a dummy variable keyed to observation of functional specificity:
chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] [--> the atomic interactions threshold] and (b) as we can define and introduce a dummy variable for specificity, S [--> This injects specificity per observations . . . ], where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0 [--> notice this, those who want to play talking point games on the value of S], i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest.
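For readers who want to see the reduced metric in executable form, here is a minimal sketch (Python; the function name chi_500 and the example figures are illustrative assumptions, not Dembski's or KF's own code), presuming one already has an estimate of P(T|H) and has made the S judgment call:

import math

def chi_500(p_T_given_H, S):
    # Hypothetical helper, for illustration only.
    # p_T_given_H: probability of the specified target T under the chance hypothesis H.
    # S: specificity dummy variable (1 if the configuration is judged specific
    #    to an independently describable zone T, else 0).
    Ip = -math.log2(p_T_given_H)      # information in bits, Ip = -log2(p)
    return Ip * S - 500               # bits beyond the 500-bit threshold

# Example: a specific 100-character ASCII string (about 700 bits), versus the
# same observation judged non-specific (S = 0).
print(chi_500(2.0 ** -700, 1))        # 200.0  -> beyond the threshold
print(chi_500(2.0 ** -700, 0))        # -500.0 -> locked at -500 when S = 0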
I trust this makes the matter clear enough for those who want to understand. KFkairosfocus
June 22, 2013 at 02:41 AM PDT
Dr Liddle: Perhaps it has not dawned on you that before you can try to have a discussion with me on merits of points, you need to resolve the problem of hosting and denying that you have harboured slander against me. Until you resolve this matter positively, I am forced to assume that you are an agenda-driven ideologue who has no respect for truth, accuracy or fairness, but will push any and all persuasive talking points to gain an advantage. Regardless of actual merit. Thus, until the matter is resolved, you have no credibility. Period.

Beyond that, onlookers and other participants, the above clip I presented suffices to show that the solution to the probability challenge that EL seems to want to front is that it is in fact part of an information measure, but not drawn out. The solution to this -- as I showed above by clipping a longstanding log reduction -- is to simply move the equation one step forward by extracting the - log2(p) to yield that information metric. (I must assume that Dr Liddle is able to check up that this is standard fare for measuring information. Even if she is not, I assure you that Connor, Taub and Schilling and a lot of others all the way back to Shannon et al are there to help the serious inquirer who actually wants to discuss matters on merits.) Informational measures, FYI, automatically take into account the issue of chance-based hypotheses of all kinds, on getting to observed information that is functional.

As was further shown, the informational metric Dembski proposed turns out to be a threshold metric of info beyond a credible limit for sufficient specific complexity to be not credibly chance and necessity by ANY mechanism. Remember, I reduced the matter to atoms changing state every chemical reaction time, which so long as it is blind, by chance and/or mechanical necessity, will fill the bill. There is absolutely nothing special about a cluster of organic chemicals in a living cell that would make them suddenly not behave in accordance with what atoms do under chance factors and mechanical necessity. Where also, if you want to fuss and bother about the alleged special case of life -- I thought vitalism was supposed to be dead -- we can easily see that the whole Darwinian mechanism for alleged design of complexity in life forms is:
CHANCE VARIATION (CV) + DIFFERENTIAL REPRODUCTIVE SUCCESS (DRS) --> DESCENT WITH MODIFICATION (DWM)
This can be analysed on an informational view. DRS, what is commonly called "Natural Selection" (which misleadingly suggests design powers), is actually a subtracter of varieties, through extinction of the inferior varieties. That is, it is NOT a source of added information, by direct implication.

The only remaining possible source of added information -- and notice we are here setting aside the much bigger question of getting to a self-replicating life form, which itself is enough to put design at the table and thus shifts onward discussion decisively -- is chance variations, triggered by anything from a radiation-damaged water molecule reacting with any neighbouring molecule, on up. (And that is the primary mechanism we studied in radiation physics class, as water is the commonest molecule in the body. Suffice to say that the context for this was radiation sickness and cancer. Not exactly promising as the source for adding functionally specific complex info.)

So, we have high contingency bearing abundant FSCO/I to explain, and the alternatives sitting at the table are chance variations such as by radiation etc, and design by someone able to do the equivalent of a molecular nanotech lab some generations beyond Venter et al, and maybe to use targeted viruses or the equivalent as means of injection. With the sort of exceeding complexity and specificity involved in the associated digitally coded, nanotech-implemented info, the reasonable man would bet on design. And let us zoom in on the info-beyond-a-threshold calc for a moment, to see how the thresholds are set:
Chi = – log2[10^120 · phiS(T) · P(T|H)] –> "Chi" here is the Greek letter chi, and "phi" the Greek letter phi

xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say — as just one of many examples of a standard result — Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1)
xxi: So, since 10^120 ~ 2^398 [--> we just took out the number of observations that can happen in the observed cosmos through binary events], we may “boil down” the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phiS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, chi is a metric of bits from a zone of interest, beyond a threshold of “sufficient complexity to not plausibly be the result of chance,” (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] [--> the atomic interactions threshold] and (b) as we can define and introduce a dummy variable for specificity, S [--> This injects specificity per observations . . . ], where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a “complex enough” threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0 [--> notice this, those who want to play talking point games on the value of S], i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. S goes to 1 when we have objective grounds — to be explained case by case — to assign that value. That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.
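A quick numerical check of the constant used in that reduction (illustrative Python only; the phiS(T) and P(T|H) values below are placeholders I have chosen for the sketch, not measured quantities):

import math

# 10^120 corresponds to roughly 398.6 bits: log2(10^120) = 120 * log2(10)
print(120 * math.log2(10))                       # ~398.63

# The log reduction applied both ways gives the same chi value:
phiS_T = 10.0 ** 20                              # placeholder descriptive-complexity count
P = 2.0 ** -600                                  # placeholder P(T|H)
chi_direct = -math.log2(10.0 ** 120 * phiS_T * P)
chi_reduced = -math.log2(P) - (120 * math.log2(10) + math.log2(phiS_T))
print(chi_direct, chi_reduced)                   # both come out at about 134.9 bits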
That should be clear enough, save to those locked into ideological blindness, for whom (after literally years of patient correction ignored, strawmannised and twisted into occasions for slander) I have now lost all further patience. If ideologues play drumbeat talking point games beyond here (especially if they harbour, enable or carry out slanders), I simply note them down as having no respect for accuracy, logic, evidence, truth and fairness. And if they cannot accept that information is measured by taking a negative log probability, so that a probability is best understood in the guise of the relevant info metric, then that is a sign that we are just looking at handy drumbeat talking points, not serious discussion by people able to deal with the matter on its merits. On the charitable view. (I do not wish to elaborate the alternative view.) Good day. KFkairosfocus
June 22, 2013 at 02:24 AM PDT
Eric:
Oh, stop it. We all know what this means in the context of the design debate
Eric, you asked me to state my position, which I did. I like to be precise, and I don't like to be misleading, so I prefer to choose my own terms. "Natural" sometimes means "not artificial" or "not designed", and sometimes it means "not-supernatural". So I avoided the term completely when I tried to convey my position. And I seem to have successfully clarified my position, so it seems to have paid off :)
And in the case of, say, machines, I presume you would also acknowledge that this applies generally and there is not some special exclusionary category reserved for machines made of biomolecules just because they happen to be organic molecules rather than inorganic molecules.
Assuming you mean the output of machines, as opposed to machines as the putative artefact, yes. Actually, either way, yes. Eric, you will have to forgive my pedantic insistence on definitions - I do think that a huge amount of the heat in the ID debate results from people talking past each other, and using the same terms to mean different things! So at the risk of irritating you further, I will err on the pedantic side. I'd say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes. I think that quality is what the ID project has tended to assume must come from an intentional designing agent. I disagree. I don't think the pattern is the mark of intention, I think the pattern is the mark of iterative decision-trees. I think that Intention may also be discernable in its products, but I think what we'd be looking for would be different.
This is a strange comment. The calculations put forth by design proponents typically give every reasonable conceivable edge to natural processes, often including the entire particle resources of the known universe, the fastest possible reaction rates known, etc. It doesn’t matter whether it is specifically calculable to the nth degree. Every opportunity is provided to natural processes and they still are pathetically impotent under any rational scenario to even begin to construct the kinds of systems we see in life.
You have missed my point (not surprisingly, as I didn't explicitly make it in this thread, although I have been making it over at TSZ). I am not concerned with the Universal Probability Bound. I'd be happier with a much less stringent alpha - even physicists, after all, only look for 5 sigma, and in my field we publish at 2. Nor am I concerned about Specification - I think it's problematic, but I'm happy with it in principle. My point is that key to the chi calculation is the parameter P(T|H), the Probability of the [specified] Target, given the null Hypothesis, which is "the relevant Chance Hypothesis taking into account Darwinian and other Material mechanisms". That's what I'm saying is not only non-calculable, but such that you'd have to know the answer to your question before you could calculate it. So no, the formula doesn't give "every conceivable edge" to Darwinian evolution. It simply outputs an answer for whatever prior you might have as to the probability of Darwinian evolution. If that is low, you'll conclude Design; if it's high, you won't. In other words, I suggest, null hypothesis testing in this form is a completely inappropriate and useless way of inferring Design. Not wrong, just useless. GIGO.
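To make that point concrete, a minimal sketch (Python; the phiS(T) value and both P(T|H) figures are arbitrary placeholders chosen for the illustration): the same observed pattern yields opposite verdicts depending solely on the number assumed for P(T|H).

import math

def chi(p_T_given_H, phiS_T=1e20):
    # Dembski's 2005 expression as quoted in this thread; the contested input
    # is p_T_given_H, the probability of the target under "Darwinian and other
    # material mechanisms". The phiS_T default here is an arbitrary placeholder.
    return -math.log2(1e120 * phiS_T * p_T_given_H)

print(chi(1e-200))   # about +199 bits: design would be inferred
print(chi(1e-60))    # about -266 bits: design would not be inferred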
Finally, it is strange that you would say treating “non-design” as a null hypothesis doesn’t work. Would you prefer that we treat design as the null hypothesis?
Absolutely not. That wouldn't work either. I'd say that null hypothesis testing simply won't give you the answer to the question you are asking. I'm saying it's the wrong tool for the job. It can't do it, unless you can precisely define the probability distribution under your null. So it will work to reject the null that a coin is fair. It won't work to reject the null that a black monolith was not-designed.Elizabeth B Liddle
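The coin case in that last remark is the easy one precisely because the null's distribution is fully specified; a minimal sketch (Python, with arbitrary illustrative numbers):

import math

def p_value_at_least(k, n):
    # One-sided p-value: probability of k or more heads in n tosses of a fair coin.
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

# A fully specified null can be tested and rejected from data:
print(p_value_at_least(80, 100))   # roughly 5.6e-10, far below any conventional alpha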
June 22, 2013 at 01:37 AM PDT
Elizabeth @87:
I’m not sure what you mean by “purely natural processes”
Oh, stop it. We all know what this means in the context of the design debate. It means, without more, the regular workings of the known laws of the universe – gravity, electromagnetism, the strong/weak nuclear forces, and their sub-forces (chemistry, biochemistry, etc.). Or, to really simplify things for purposes of the design discussion, you can just think of it as processes that are not guided, directed, influenced or controlled by an intelligent being, i.e., not influenced by a designer.
I do think it is perfectly possible to determine that an event was due to a designer without defining and calculating the probability of it occurring by some non-design means. That is what archaeologists and forensic scientists do, for instance.
Good. I think that is a critical point. And in the case of, say, machines, I presume you would also acknowledge that this applies generally and there is not some special exclusionary category reserved for machines made of biomolecules just because they happen to be organic molecules rather than inorganic molecules.
What I am saying is much narrower than that, and concerns Dembski's concept of "CSI" or "chi", for which he gives a mathematical formula based on the principle of Fisherian null hypothesis testing. That formula contains the parameter p(T|H), which is the probability of observing the Target under the null hypothesis, which he defines as "the relevant chance hypothesis, including Darwinian and other material mechanisms". I am saying that that is not calculable, and that treating "non-design" as an omnibus null doesn't work, and that therefore the concept of chi doesn't work as a method of detecting design.
This is a strange comment. The calculations put forth by design proponents typically give every reasonable conceivable edge to natural processes, often including the entire particle resources of the known universe, the fastest possible reaction rates known, etc. It doesn’t matter whether it is specifically calculable to the nth degree. Every opportunity is provided to natural processes and they still are pathetically impotent under any rational scenario to even begin to construct the kinds of systems we see in life. Furthermore, for decades now, the more we learn the more stringent the calculations become, not less. There is absolutely no rational way anyone can look at the calculations and conclude that a reasonable inference cannot be drawn. To say we can’t do a complete, entirely accurate calculation – and therefore, can’t draw any conclusion – is to hide behind a fig leaf and to demand a level of omniscience of design proponents that is never demanded from any other field. Finally, it is strange that you would say treating “non-design” as a null hypothesis doesn’t work. Would you prefer that we treat design as the null hypothesis? That is probably what we should do, given that virtually everyone acknowledges living systems look designed. Certainly a good argument can be made for considering living systems to be designed unless someone can affirmatively demonstrate that the system could reasonably have come about through purely natural processes.Eric Anderson
June 21, 2013 at 10:32 PM PDT
I'm satisfied. Yawn.CentralScrutinizer
June 21, 2013 at 10:09 PM PDT
And Elizabeth doesn't understand the implications of:
can you explain how you compute P(T|H) where, H, to quote Dembski 2005, is “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms”?
LoL! Such a hypothesis doesn't exist. That is what I have been telling you. It is up to you guys to tell us what your position's hypotheses are.Joe
June 21, 2013 at 07:09 PM PDT
Slightly. As CentralScrutinizer rightly points out, self-replication is not itself sufficient – it has to be self-replication with heritable variance in reproductive success.
And that variance has to be happenstance in order for the process to be darwinian.Joe
June 21, 2013 at 07:07 PM PDT
kairosfocus: can you explain how you compute P(T|H) where, H, to quote Dembski 2005, is "the relevant chance hypothesis that takes into account Darwinian and other material mechanisms"?Elizabeth B Liddle
June 21, 2013 at 04:32 PM PDT
computerist
So as long as you have self-replication, evolution is inevitable. Whether subsequent mutations are “good” or “bad” or whatever, evolution continues. Whatever the outcome, evolution continues as long as self-replication prevails. Case closed. This is what I understand to be the core underlying position of Dr. Liddle. Am I wrong?
Slightly. As CentralScrutinizer rightly points out, self-replication is not itself sufficient - it has to be self-replication with heritable variance in reproductive success. If that is present, evolution is not inevitable but highly likely for the simple and logical reason that if you have self-replicators replicating with heritable variance in reproductive success in the current environment, the more successful variants will tend to become most prevalent. So what is near-inevitable, under those conditions, is that populations will adapt to their current environment. If that environment changes, they may or may not be able to readapt fast enough not to go extinct.Elizabeth B Liddle
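A toy sketch of the logic just stated (Python; the fitness numbers and population size are arbitrary choices for the sketch, not a model of any real organism): with heritable variance in reproductive success, the more successful variant tends to become the most prevalent.

import random

fitness = {"A": 1.00, "B": 1.05}        # arbitrary reproductive-success weights
pop = ["A"] * 900 + ["B"] * 100         # variant B starts as a 10% minority

for generation in range(300):
    weights = [fitness[x] for x in pop]
    # offspring inherit their parent's type (heritable variance);
    # parents are drawn in proportion to reproductive success
    pop = random.choices(pop, weights=weights, k=len(pop))

print(pop.count("B") / len(pop))        # typically close to 1.0 after 300 generations

If the environment (here, simply the fitness table) changes, the direction of the shift changes with it, which is the adaptation point made in the comment.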
June 21, 2013 at 04:30 PM PDT
F/N: Dr Liddle, here is the summary, which shows how EVERY CHEMICAL TIME EVENT OF EVERY ONE OF THE 10^57 ATOMS OF OUR SOLAR SYSTEM IS TAKEN INTO ACCOUNT IN THE 500 bit FSCO/I LIMIT, and onwards, every Planck time event of the 10^80 atoms of our observed cosmos in the 1,000 bit limit. That is, every atom, every 10^-14 s, is deemed an observer in the first case, and every atom, every 10^-45 s, in the second. Where the issue, in the first instance, is to search a space so large that even such a generous allowance of probabilistic resources amounts to picking a straw-sized sample blindly from a cubical haystack 1,000 light years on the side -- as thick as our galaxy -- superposed on it. The second case would see the whole observed cosmos swallowed up in the haystack. Where we can easily see that the firm result of sampling theory is that ANY process tracing to blind chance and mechanical necessity could be expected to pick up only the typical, vast bulk of such a stack.

And similarly, it is quite evident, save to those who are committed not to see this, that the requisites of functionally specific complex organisation and associated information -- as are manifest in English ASCII text, in computer ASCII codes and in the genomic DNA codes alike -- will manifestly sharply constrict the subset T of the possible arrangements W that will be relevantly functional. This easily explains why Bill Gates does not hire monkeys to code his software by random typing, why random document generation exercises have failed to produce any functional text of relevant length (72+ ASCII characters), and why no cases of chance-and-necessity-driven evolution of relevantly complex biological function have ever actually been observed. Similarly, it is obvious why the ONLY empirically observed source of FSCO/I has been design. Thus, we are epistemically entitled to infer that the best causal explanation of FSCO/I is design, and that it is a highly reliable sign of design as cause. Period.

Here is the excerpt, which has been repeatedly drawn to your attention and has been repeatedly ignored or distorted into strawman tactic pretzels:
________________
>> xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities] beyond a reasonable threshold of complexity" metric:

Chi = – log2[10^120 · phiS(T) · P(T|H)] --> "Chi" is the Greek letter chi, and "phi" the Greek letter phi

xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:

Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1].
Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1)

xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phiS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold

NB: If S = 0, this locks us at Chi = - 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.

A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable.) An obvious example of such a zone T is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)
So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)

xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases: Using Durston's Fits values -- functionally specific bits -- from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

xxiii: And, this raises the controversial question that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism. >>
_______________
I trust this should suffice for record. Good day, madam. GEM of TKIkairosfocus
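For readers who want to check the arithmetic of that Durston application, a few lines of Python (illustrative only; the Fits values are simply the ones listed in the excerpt, and S = 1 is taken as the excerpt does):

# Simplified Chi_500 metric as quoted above: Chi = Fits - 500, with S = 1.
durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}
for name, fits in durston_fits.items():
    print(name, fits - 500)   # RecA 332, SecY 188, Corona S2 785 bits beyond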
June 21, 2013 at 03:15 PM PDT
Dr Liddle: I thought it wise to first give you a chance to show a better angle than I have seen recently. Unfortunately, the tone and tactics above tell me otherwise; for instance, here is the last straw that leads me to break silence:
[EL, 95:] If you didn't include Darwinian processes and natural mechanisms (as Dembski says you must) then you can't reject those processes and mechanisms. So how are you computing your null of "random noise"? That is what I am calling the eleP(T|H)ant in the room.
Sorry; after this, from the previously linked thread:
[EL, at TSZ:] >>Kairosfocus, this is outrageous. Nobody here, to my knowledge, has suggested that you are a Nazi, and I certainly have not.>> [In denial of responsibility for hosting the following outrage without correction . . . ] [OM at TSZ:] >>I would like to note however that both the Nazis and KF think that homosexuals are immoral and/or deviants. So draw your own conclusion as to who’ll be marching who round what camp if they get their way.>> [in company of AF and RTH, who did not correct this outrage] --> I need to add details from my for-the-record, that:
Sometimes, it is needful to drive home a point, even when it is on an unpleasant matter and deals with uncivil conduct . . . . I think we can take it as a given that when one is characterised in the formula “both the Nazis and X think that . . . ” one is being compared to Nazis. In a way obviously intended to taint one with the justly deserved odium that attaches to Nazism. In short, the utterly offensive — and demonstrably unwarranted — suggestion is being made that one is a Nazi. That, sirs, is slander . . . . My having a principled objection to the agenda to homosexualise marriage in our day, and my wider concern that on significant evidence homosexual behaviour is disordered, damaging to the participant and potentially hazardous to society at large — which BTW is not even a part of the debates over design theory — is compared to Nazis. The insinuation is blatant, save to the willfully blind: implicit accusation of hatred rather than principled concern along with those of a great many people including some of the most distinguished across the ages and down to today (BTW, cf. here for some thoughts and concerns that are too often ignored or suppressed today). That is, principled concern is reduced to a loathsome caricature by invidious comparison with Nazis, in order to taint without good reason. And, to create a toxic, polarised atmosphere filled with the smoke of burning, slander-soaked strawmen, so that no reasonable and serious discussion of a serious concern can happen. As though, only Nazis and this hateful bigot now under scrutiny by being pushed into the same boat as Nazis could possibly have such a view. Sorry, TSZ management, this enabling of Alinskyite toxic rhetoric is not good enough, not by a long shot.
. . . I can no longer afford to take a lenient view of your attempted clever distractions and dismissals. I will remark briefly on the above. You full well know, or SHOULD know, Dr Liddle, that the 2005 Dembski expression was drawn out, simplified and applied to biological systems here at UD some years ago now. To try to pretend otherwise -- as you do in the excerpt I have just made -- is, at this stage, a willfully continued misrepresentation of easily accessed facts. Facts I will link now and intend to excerpt from a summary at IOSE in a moment. That is, Dr Liddle, we see here, with all due respects, a pattern of disregard for duties of care to accuracy, much less truth and fairness. Simply not good enough, and I for one am finished with leniency on such. Good day, madam. GEM of TKIkairosfocus
June 21, 2013 at 02:53 PM PDT
Well, Lizzie, you should have had an infinite supply of popcorn. :razz: Or better yet, seeing that infinities exist in the mind, you could just imagine the popcorn too. :roll: However none of that changes the fact that GAs and EAs are examples of Intelligent Design Evolution, ie evolution by design. And always will be.Joe
June 21, 2013 at 12:30 PM PDT
So as long as you have self-replication, evolution is inevitable. Whether subsequent mutations are "good" or "bad" or whatever, evolution continues. Whatever the outcome, evolution continues as long as self-replication prevails. Case closed. This is what I understand to be the core underlying position of Dr. Liddle. Am I wrong?computerist
June 21, 2013 at 12:27 PM PDT
I'm out of popcorn. I ate it all, watching some thread about infinite sets :)Elizabeth B Liddle
June 21, 2013 at 11:54 AM PDT
Not sure what you mean here, but GA’s are certainly “analogous” to “Darwin’s idea”! That’s why they are called “GAs” or “evolutionary algorithms”.
LoL! GAs are NOT related to darwinian evolution because unlike darwinian evolution both GAs and EAs have at least one goal. They are designed to solve specific problems. As I said, Lizzie does NOT understand what darwinian evolution entails. This is going to be entertaining. On one hand we have Lizzie, with absolutely no clue as to what darwinian evolution entails nor what is being debated. And on the other hand we have IDists who do not seem interested in correcting any of that. So all we have are people talking past each other because there isn't any common understanding. Break out the popcorn!Joe
June 21, 2013 at 11:45 AM PDT
oops, the above was to CentralScrutinizer, as is what is below:
and how it relates to your analogy of “Darwin’s idea” being analogous to a GA
Not sure what you mean here, but GA's are certainly "analogous" to "Darwin's idea"! That's why they are called "GAs" or "evolutionary algorithms". Not all GAs use exactly the same principle, but many do.Elizabeth B Liddle
June 21, 2013 at 11:35 AM PDT
There is an important distinction between that sort of self-replicator and a self-replicator that does not have the properties of “heritable variance in reproductive success in the current environment.”
Yes indeed. Which is why I usually include the full monty. You caught a rare occasion when I took a short-cut.
I’m wondering if you get the full impact of that distinction with regards to the fine tuning of the universe.
Probably not. I certainly accept that for life to begin (i.e. in my view for Darwinian-capable self-replicators to emerge from not-such) you need heavy atoms, including carbon. I don't know, once you've got those atoms, what else in the early universe would make a difference to whether Darwinian-capable self-replicators or mere common-or-garden self-replicators would emerge, because, of course, we don't yet know how they emerged (if they did :)) But I guess it might turn out that something is absolutely critical and makes the difference.Elizabeth B Liddle
June 21, 2013 at 11:32 AM PDT