
Order vs. Complexity: A follow-up post


NOTE: This post has been updated with an Appendix – VJT.

My post yesterday, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), seems to have generated a lively discussion, judging from the comments received to date. Over at The Skeptical Zone, Mark Frank has also written a thoughtful response titled, VJ Torley on Order versus Complexity. In today’s post, I’d like to clear up a few misconceptions that are still floating around.

1. In his opening paragraph, Mark Frank writes:

To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.

Mark Frank appears to be confusing the term “generated” with the term “described” here. What I wrote in my post yesterday is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm. Professor William Dembski, in his paper, Specification: The Pattern That Signifies Intelligence, defines specificity in terms of the shortest verbal description of a pattern. On page 16, Dembski defines the function phi_S(T) for a pattern T as “the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T” (emphasis mine) before going on to define the specificity sigma as minus the log (to base 2) of the product of phi_S(T) and P(T|H), where P(T|H) is the probability of the pattern T being formed according to “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms” (p. 17). In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Intelligent Design advocates William Dembski and Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.”
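For reference, the specificity formula just described can be written out in display form (this is simply a transcription of Dembski’s verbal definition, in my notation):

$$ \sigma \;=\; -\log_2\!\big[\,\varphi_S(T)\cdot P(T|H)\,\big] $$

where phi_S(T) counts the patterns whose semiotic description, for the agent S, is at least as simple as that of T, and P(T|H) is the probability of T arising under the relevant chance hypothesis H.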

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

NOTE: I have substantially revised my response to Mark Frank, in the Appendix below.

2. Dr. Elizabeth Liddle, in a comment on Mark Frank’s post, writes that “by Dembski’s definition a chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.” The second part of her comment is correct; the first part is incorrect. Precisely because a Chladni pattern is “highly probable given a relevant chance (i.e. non-design) hypothesis,” it is not complex. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), William Dembski and Jonathan Wells define complexity as “The degree of difficulty to solve a problem or achieve a result,” before going on to add: “The most common forms of complexity are probabilistic (as in the probability of obtaining some outcome) or computational (as in the memory or computing time required for an algorithm to solve a problem)” (pp. 310-311). If a Chladni pattern is easy to generate as a result of laws, then it exhibits order rather than complexity.

3. In another comment, Dr. Liddle writes: “V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.” I would beg to differ. Here’s what Wikipedia has to say in its article on fractals (I’ve omitted references):

Fractals are typically self-similar patterns, where self-similar means they are “the same from near as from far”. Fractals may be exactly the same at every scale, or, as illustrated in Figure 1, they may be nearly the same at different scales. The definition of fractal goes beyond self-similarity per se to exclude trivial self-similarity and include the idea of a detailed pattern repeating itself.

The caption accompanying the figure referred to above reads as follows: “The Mandelbrot set illustrates self-similarity. As you zoom in on the image at finer and finer scales, the same pattern re-appears so that it is virtually impossible to know at which level you are looking.”

That sounds pretty repetitive to me. More to the point, fractals are mathematically easy to generate. Here’s what Wikipedia says about the Mandelbrot set, for instance:

More precisely, the Mandelbrot set is the set of values of c in the complex plane for which the orbit of 0 under iteration of the complex quadratic polynomial z_{n+1} = z_n^2 + c remains bounded. That is, a complex number c is part of the Mandelbrot set if, when starting with z_0 = 0 and applying the iteration repeatedly, the absolute value of z_n remains bounded however large n gets.
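To make the point concrete, here is a minimal Python sketch of the membership test just described. The function name and iteration cap are my own choices; the escape radius of 2 is the standard bound, since any orbit that leaves that disc is guaranteed to diverge.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to lie in the Mandelbrot set.

    Iterates z -> z**2 + c starting from z = 0. If |z| ever exceeds 2,
    the orbit must diverge, so c is outside the set; if it stays
    bounded for max_iter steps, we treat c as (provisionally) inside.
    """
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0 + 0j))  # True: the orbit stays at 0 forever
print(in_mandelbrot(1 + 0j))  # False: the orbit 0, 1, 2, 5, ... diverges
```

The brevity of this routine is precisely the point: the entire, infinitely detailed set is generated by a few lines of iteration.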

NOTE: I have revised some of my comments on Mandelbrot sets and fractals. See the Appendix below.

4. In another comment on the same post, Professor Joe Felsenstein objects that Dembski’s definition of specified complexity has a paradoxical consequence: “It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?] if it is easy to describe,” which means that “a hummingbird, on that view, has not nearly as much specification as a perfect steel sphere,” even though the hummingbird “can do all sorts of amazing things, including reproduce, which the steel sphere never will.” He then suggests defining specification on a scale of fitness.

In my post yesterday, I pointed out that the term “specified complexity” is fairly non-controversial when applied to life: as chemist Leslie Orgel remarked in 1973, “living organisms are distinguished by their specified complexity.” Orgel added that crystals are well-specified, but simple rather than complex. If specificity were defined in terms of fitness, as Professor Felsenstein suggests, then we could no longer say that a non-reproducing crystal was specified.

However, Professor Felsenstein’s example of the steel sphere is an interesting one, because it illustrates that the probability of a sphere’s originating by natural processes may indeed be extremely low, especially if it is also made of an exotic material. (In this respect, it is rather like the lunar monolith in the movie 2001.) Felsenstein’s point is that a living organism would be a worthier creation of an intelligent agent than such a sphere, as it has a much richer repertoire of capabilities.

Closely related to this point is the fact that living things exhibit a nested hierarchy of organization, as well as dedicated functionality: intrinsically adapted parts whose entire repertoire of functionality is “dedicated” to supporting the functionality of the whole unit which they compose. Indeed, it is precisely this kind of organization and dedicated functionality which allows living things to reproduce in the first place.

At the bottom level, the full biochemical specifications required for putting together a living thing such as a hummingbird are very long indeed. It is only when we get to higher organizational levels that we can apply holistic language and shorten our description, by characterizing the hummingbird in terms of its bodily functions rather than its parts, and by describing those functions in terms of how they benefit the whole organism.

I would therefore agree that an entity exhibiting this combination of traits (bottom-level exhaustive detail and higher-level holistic functionality, which makes the entity easy to characterize in a few words) is a much more typical product of intelligent agency than a steel sphere, notwithstanding the latter’s descriptive simplicity.

In short: specified complexity gets us to Intelligent Design, but some designs are a lot more intelligent than others. Whoever made hummingbirds must have been a lot smarter than we are; we have enough difficulties putting together a single protein.

5. In a comment on my post, Alan Fox objects that “We simply don’t know how rare novel functional proteins are.” Here I should refer him to the remarks made by Dr. Branko Kozulic in his 2011 paper, Proteins and Genes, Singletons and Species. I shall quote a brief extract:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order”….

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20…

It is important to recognize that the one in 10^20 represents the upper limit, and as such this figure is in agreement with all previous lower probability estimates. Moreover, there are two components that contribute to this figure: first, there is a component related to the particular activity of a protein – for example enzymatic activity that can be assayed in vitro or in vivo – and second, there is a component related to proper functioning of that protein in the cellular context: in a biochemical pathway, cycle or complex. (pp. 7-8)
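The arithmetic behind Kozulic’s combined figure is simple multiplication of the two independent frequencies he cites (notation mine):

$$ \frac{1}{10^{11}}\times\frac{1}{10^{10}} \;=\; \frac{1}{10^{21}}, $$

which he then rounds, in the generous direction, to one in 10^20.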

In short: the specificity of proteins is not in doubt, and their usefulness for Intelligent Design arguments is therefore obvious.

I sincerely hope that the foregoing remarks will remove some common misunderstandings and stimulate further discussion.

APPENDIX

Let me begin with a confession: I had a nagging doubt when I put up this post a couple of days ago. What bothered me was that (a) some of the definitions of key terms were a little sloppily worded; and (b) some of these definitions seemed to conflate mathematics with physics.

Maybe I should pay more attention to my feelings.

A comment by Professor Jeffrey Shallit over at The Skeptical Zone also convinced me that I needed to re-think my response to Mark Frank on the proper definition of specificity. Professor Shallit’s remarks on Kolmogorov complexity also made me realize that I needed to be a lot more careful about defining the term “generate,” which may denote either a causal process governed by physical laws, or the execution of an algorithm by performing a sequence of mathematical operations.

What I wrote in my original post, Order is not the same thing as complexity: A response to Harry McCall (17 June 2013), is that a pattern exhibits order if it can be generated by “a short algorithm or set of commands,” and complexity if it can’t be compressed into a shorter pattern by a general law or computer algorithm.

I’d now like to explain why I find those definitions unsatisfactory, and what I would propose in their stead.

Problems with the definition of order

I’d like to start by going back to the original sources. In Signature in the Cell (Harper One, 2009), Dr. Stephen Meyer writes:

Complex sequences exhibit an irregular, nonrepeating arrangement that defies expression by a general law or computer algorithm (an algorithm is a set of expressions for accomplishing a specific task or mathematical operation). The opposite of a highly complex sequence is a highly ordered sequence like ABCABCABCABC, in which the characters or constituents repeat over and over due to some underlying rule, algorithm or general law. (p. 106)

[H]igh probability repeating sequences like ABCABCABCABCABCABC have very little information (either carrying capacity or content)… Such sequences aren’t complex either. Why? A short algorithm or set of commands could easily generate a long sequence of repeating ABC’s, making the sequence compressible. (p. 107)
(Emphases mine – VJT.)

There are three problems with this definition. First, it mistakenly conflates physics with mathematics, by treating generation by “a general law” and generation by a “computer algorithm” as if they were interchangeable. I presume that by “general law,” Dr. Meyer means to refer to some law of Nature, since on page 107, he lists certain kinds of organic molecules as examples of complexity. The problem here is that a sequence may be easy to generate by a computer algorithm, but difficult to generate by the laws of physics (or vice versa). In that case, it may be complex according to physical criteria but not according to mathematical criteria (or the reverse), generating a contradiction.

Second, the definition conflates (a) the repetitiveness of a sequence with (b) the ability of a short algorithm to generate that sequence, and (c) the Shannon compressibility of that sequence. The problem here is that there are non-repetitive sequences which can be generated by a short algorithm. Some of these non-repeating sequences are also Shannon-incompressible. Do these sequences exhibit order or complexity?

Third, the definition conflicts with what Professor Dembski has written on the subject of order and complexity. In The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells provide three definitions for order, the first of which reads as follows:

(1) Simple or repetitive patterns, as in crystals, that are the result of laws and cannot reasonably be used to draw a design inference. (p. 317; italics mine – VJT).

The reader will notice that the definition refers only to law-governed physical processes.

Dembski’s 2005 paper, Specification: The Pattern that Signifies Intelligence, also refers to the Champernowne sequence as exhibiting a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)” (pp. 15-16). According to Dembski, the Champernowne sequence can be “constructed simply by writing binary numbers in ascending lexicographic order, starting with the one-digit binary numbers (i.e., 0 and 1), proceeding to the two-digit binary numbers (i.e., 00, 01, 10, and 11),” and so on indefinitely, which means that it can be generated by a short algorithm. At the same time, Dembski describes it as having “event-complexity (i.e., difficulty of reproducing the corresponding event by chance).” In other words, it is not an example of what he would define as order. And yet, because it can be generated by “a short algorithm,” it would arguably qualify as an example of order under Dr. Meyer’s criteria (see above).
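A few lines of Python suffice to generate the sequence exactly as Dembski describes it (the function name is mine); its very brevity illustrates his point about pattern simplicity:

```python
from itertools import islice, product

def champernowne_binary():
    """Yield the binary Champernowne sequence digit by digit:
    all binary strings in ascending order by length, and in
    lexicographic order within each length: 0, 1, 00, 01, 10, 11, ...
    """
    length = 1
    while True:
        for bits in product('01', repeat=length):
            yield from bits
        length += 1

# First 20 digits: blocks 0 | 1 | 00 | 01 | 10 | 11 | 000 | 001 | 010 | 0...
print(''.join(islice(champernowne_binary(), 20)))  # 01000110110000010100
```

A long prefix of the sequence is thus algorithmically trivial to produce, yet the chance of matching any particular n-digit prefix by flipping a fair coin is 2^-n, which is Dembski’s “event-complexity.”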

Problems with the definition of specificity

Dr. Meyer’s definition of specificity is also at odds with Dembski’s. On page 96 of Signature in the Cell, Dr. Meyer defines specificity in exclusively functional terms:

By specificity, biologists mean that a molecule has some features that have to be what they are, within fine tolerances, for the molecule to perform an important function within the cell.

Likewise, on page 107, Meyer speaks of a sequence of digits as “specifically arranged to form a function.”

By contrast, in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Professor William Dembski and Dr. Jonathan Wells define specification as “low DESCRIPTIVE complexity” (p. 320), and on page 311 they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern.” Although Dembski certainly regards functional specificity as one form of specificity, since he elsewhere refers to the bacterial flagellum – a “bidirectional rotary motor-driven propeller” – as exhibiting specificity, he does not regard it as the only kind of specificity.

In short: I believe there is a need for greater rigor and consistency when defining these key terms. Let me add that I don’t wish to criticize any of the authors I’ve mentioned above; I’ve been guilty of terminological imprecision at times, myself.

My suggestions for more rigorous definitions of the terms “order” and “specification”

So here are my suggestions. In Specification: The Pattern that Signifies Intelligence, Professor Dembski defines a specification in terms of a “combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance),” and in The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), Dembski and Wells define complex specified information as being equivalent to specified complexity (p. 311), which they define as follows:

An event or object exhibits specified complexity provided that (1) the pattern to which it conforms is a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). (2008, p. 320)

What I’d like to propose is that the term order should be used in opposition to high probabilistic complexity. In other words, a pattern is ordered if and only if its emergence as a result of law-governed physical processes is not a highly improbable event. More succinctly: a pattern is ordered if it is reasonably likely to occur in our universe, and complex if its physical realization in our universe is a very unlikely event.

Thus I was correct when I wrote above:

If a Chladni pattern is easy to generate as a result of laws, then it exhibits order rather than complexity.

However, I was wrong to argue that a repeating pattern is necessarily a sign of order. In a salt crystal it certainly is; but in the sequence of rolls of a die, a repeating pattern (e.g. 123456123456…) is a very improbable pattern, and hence it would be probabilistically complex. (It is, of course, also a specification.)
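To put a number on this (assuming a fair die), any particular sequence of n rolls has probability (1/6)^n, so even two repetitions of the block 123456 are already extremely improbable:

$$ P(123456\,123456) \;=\; \left(\tfrac{1}{6}\right)^{12} \;\approx\; 4.6\times10^{-10}. $$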

Fractals, revisited

The same line of argument holds true for fractals: when assessing whether they exhibit order or (probabilistic) complexity, the question is not whether they repeat themselves or are easily generated by mathematical algorithms, but whether or not they can be generated by law-governed physical processes. I’ve seen conflicting claims on this score (see here and here and here): some say there are fractals in Nature, while others say that some objects in Nature have fractal features, and still others, that the patterns that produce fractals occur in Nature even if fractals themselves do not. I’ll leave that one to the experts to sort out.

The term specification should be used to refer to any pattern of low descriptive complexity, whether functional or not. (I say this because some non-functional patterns, such as the lunar monolith in 2001, and of course fractals, are clearly specified.)

Low Kolmogorov complexity is, I would argue, a special case of specification. Dembski and Wells agree: on page 311 of The Design of Life: Discovering Signs of Intelligence in Biological Systems (The Foundation for Thought and Ethics, Dallas, 2008), they explain that descriptive complexity “generalizes Kolmogorov complexity by measuring the size of the minimum description needed to characterize a pattern” (italics mine).

Kolmogorov complexity as a special case of descriptive complexity

Which brings me to Professor Shallit’s remarks in a post over at The Skeptical Zone, in response to my earlier (misguided) attempt to draw a distinction between the mathematical generation of a pattern and the verbal description of that pattern:

In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.

I accept Professor Shallit’s correction on this point. What I would insist, however, is that the term “descriptive complexity,” as used by the Intelligent Design movement, cannot be simply equated with Kolmogorov complexity. Rather, I would argue that low Kolmogorov complexity is a special case of low descriptive complexity. My reason for adopting this view is that the determination of an object’s Kolmogorov complexity requires a Turing machine (a hypothetical device that manipulates symbols on a strip of tape according to a table of rules), which is an inappropriate (not to mention inefficient) means of determining whether an object possesses functionality of a particular kind – e.g. is this object a cutting implement? What I’m suggesting, in other words, is that at least some functional terms in our language are epistemically basic, and that our recognition of whether an object possesses these functions is partly intuitive. Using a table of rules to determine whether or not an object possesses a function (say, cutting) is, in my opinion, likely to produce misleading results.
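In symbols (my notation, though standard in the field), the quantity Professor Shallit describes is

$$ K(x) \;=\; \min\{\, |T| + |i| \;:\; T \text{ run on input } i \text{ outputs } x \,\}, $$

which makes it explicit that a “description” in this setting just is a generating (machine, input) pair, hence the synonymy he points out. My claim above is only that verbal descriptive complexity is broader than this formal quantity.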

My response to Mark Frank, revisited

I’d now like to return to my response to Mark Frank above, in which I wrote:

The definition of order and complexity relates to whether or not a pattern can be generated mathematically by “a short algorithm or set of commands,” rather than whether or not it can be described in a few words. The definition of specificity, on the other hand, relates to whether or not a pattern can be characterized by a brief verbal description. There is nothing that prevents a pattern from being difficult to generate algorithmically, but easy to describe verbally. Hence it is quite possible for a pattern to be both complex and specified.

This, I would now say, is incorrect as it stands. The reason why it is quite possible for an object to be both complex and specified is that the term “complex” refers to the (very low) likelihood of its originating as a result of physical laws (not mathematical algorithms), whereas the term “specified” refers to whether it can be described briefly – whether it be according to some algorithm or in functional terms.

Implications for Intelligent Design

I have argued above that we can legitimately infer an Intelligent Designer for any system which is capable of being verbally described in just a few words, and whose likelihood of originating as a result of natural laws is sufficiently close to zero. This design inference is especially obvious in systems which exhibit biological functionality. Although we can make design inferences for non-biological systems (e.g. moon monoliths, if we found them), the most powerful inferences are undoubtedly drawn from the world of living things, with their rich functionality.

In an especially perspicuous post on this thread, G. Puccio argued for the same conclusion:

The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that that specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm….

In the end, I will say it again: the important point is not how you specify, but that your specification identifies:

a) an utterly unlikely subset as a pre-specification

or

b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, like in the case of pi.

In the second case, specification needs not be a pre-specification.

Functional specification is a perfect example of the second case.

Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small is the subset of sequences that provide the function as defined, in the search space.

That simple concept is the foundation for the definition of dFSCI, or any equivalent metrics.

It is simple, it is true, it works.

P(T|H) and elephants: Dr. Liddle objects

But how do we calculate probabilistic complexities? Dr. Elizabeth Liddle writes:

P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.

But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.

In a similar vein, Alan Fox comments:

We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.

In a recent post entitled, The Edge of Evolution, I cited a 2011 paper by Dr. Branko Kozulic, titled, Proteins and Genes, Singletons and Species. In it, he argued that, on a generous estimate, at most 1 in 10^21 randomly assembled polypeptides would be capable of functioning as a viable protein in vivo; that each species possesses hundreds of isolated proteins called “singletons,” which have no close biochemical relatives; and that the likelihood of these proteins originating by unguided mechanisms in even one species is astronomically low. That would make proteins at once highly complex (probabilistically speaking) and highly specified (by virtue of their function) – and hence as sure a sign as we could possibly expect of an Intelligent Designer at work in the natural world:

In general, there are two aspects of biological function of every protein, and both depend on correct 3D structure. Each protein specifically recognizes its cellular or extracellular counterpart: for example an enzyme its substrate, hormone its receptor, lectin sugar, repressor DNA, etc. In addition, proteins interact continuously or transiently with other proteins, forming an interactive network. This second aspect is no less important, as illustrated in many studies of protein-protein interactions [59, 60]. Exquisite structural requirements must often be fulfilled for proper functioning of a protein. For example, in enzymes spatial misplacement of catalytic residues by even a few tenths of an angstrom can mean the difference between full activity and none at all [54]. And in the words of Francis Crick, “To produce this miracle of molecular construction all the cell need do is to string together the amino acids (which make up the polypeptide chain) in the correct order” [61, italics in original]. (pp. 7-8)

Let us assess the highest probability for finding this correct order by random trials and call it, to stay in line with Crick’s term, a “macromolecular miracle”. The experimental data of Keefe and Szostak indicate – if one disregards the above described reservations – that one from a set of 10^11 randomly assembled polypeptides can be functional in vitro, whereas the data of Silverman et al. [57] show that of the 10^10 in vitro functional proteins just one may function properly in vivo. The combination of these two figures then defines a “macromolecular miracle” as a probability of one against 10^21. For simplicity, let us round this figure to one against 10^20. (p. 8)

To put the 10^20 figure in the context of observable objects, about 10^20 squares each measuring 1 mm^2 would cover the whole surface of planet Earth (5.1 x 10^14 m^2). Searching through such squares to find a single one with the correct number, at a rate of 1000 per second, would take 10^17 seconds, or 3.2 billion years. Yet, based on the above discussed experimental data, one in 10^20 is the highest probability that a blind search has for finding among random sequences an in vivo functional protein. (p. 9)

The frequency of functional proteins among random sequences is at most one in 10^20 (see above). The proteins of unrelated sequences are as different as the proteins of random sequences [22, 81, 82] – and singletons per definition are exactly such unrelated proteins. (p. 11)

A recent study, based on 573 sequenced bacterial genomes, has concluded that the entire pool of bacterial genes – the bacterial pan-genome – looks as though of infinite size, because every additional bacterial genome sequenced has added over 200 new singletons [111]. In agreement with this conclusion are the results of the Global Ocean Sampling project reported by Yooseph et al., who found a linear increase in the number of singletons with the number of new protein sequences, even when the number of the new sequences ran into millions [112]. The trend towards higher numbers of singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea. (p. 16)

Based on the data from 120 sequenced genomes, in 2004 Grant et al. reported on the presence of 112,000 singletons within 600,000 sequences [96]. This corresponds to 933 singletons per genome…
[E]ach species possesses hundreds, or even thousands, of unique genes – the genes that are not shared with any other species. (p. 17)

Experimental data reviewed here suggest that at most one functional protein can be found among 10^20 proteins of random sequences. Hence every discovery of a novel functional protein (singleton) represents a testimony for successful overcoming of the probability barrier of one against at least 10^20, the probability defined here as a “macromolecular miracle”. More than one million of such “macromolecular miracles” are present in the genomes of about two thousand species sequenced thus far. Assuming that this correlation will hold with the rest of about 10 million different species that live on Earth [157], the total number of “macromolecular miracles” in all genomes could reach 10 billion. These 10^10 unique proteins would still represent a tiny fraction of the 10^470 possible proteins of the median eukaryotic size. (p. 21)

If just 200 unique proteins are present in each species, the probability of their simultaneous appearance is one against at least 10^4,000. [The] Probabilistic resources of our universe are much, much smaller; they allow for a maximum of 10^149 events [158] and thus could account for a one-time simultaneous appearance of at most 7 unique proteins. The alternative, a sequential appearance of singletons, would require that the descendants of one family live through hundreds of “macromolecular miracles” to become a new species – again a scenario of exceedingly low probability. Therefore, now one can say that each species is a result of a Biological Big Bang; to reserve that term just for the first living organism [21] is not justified anymore. (p. 21)
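Two of the figures quoted above can be checked directly (the unit conversions are mine): the Earth’s surface of 5.1 x 10^14 m^2 does indeed comprise about 5.1 x 10^20 one-millimetre squares, and

$$ \frac{10^{20}}{10^{3}\ \text{s}^{-1}} \;=\; 10^{17}\ \text{s} \;\approx\; 3.2\times10^{9}\ \text{years}, \qquad \left(10^{-20}\right)^{200} \;=\; 10^{-4000}, $$

the latter being the source of Kozulic’s figure of one chance in 10^4,000 for the simultaneous appearance of 200 unique proteins.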

“But what if the search for a functional protein is not blind?” ask my critics. “What if there’s an underlying bias towards the emergence of functionality in Nature?” “Fine,” I would respond. “Let’s see your evidence.”

Alan Miller rose to the challenge. In a recent post entitled, Protein Space and Hoyle’s Fallacy – a response to vjtorley, he cited a paper by Michael A. Fisher, Kara L. McKinley, Luke H. Bradley, Sara R. Viola and Michael H. Hecht, titled, De Novo Designed Proteins from a Library of Artificial Sequences Function in Escherichia Coli and Enable Cell Growth (PLoS ONE 6(1): e15364. doi:10.1371/journal.pone.0015364, January 4, 2011), in support of his claim that proteins were a lot easier for Nature to build on the primordial Earth than Intelligent Design proponents imagine, and he accused them of resurrecting Hoyle’s fallacy.

In a very thoughtful comment over on my post CSI Revisited, G. Puccio responded to the key claims made in the paper, and to what he perceived as Alan Miller’s misuse of the paper (bolding below is mine):

First of all, I will just quote a few phrases from the paper, just to give the general scenario of the problems:

a) “We designed and constructed a collection of artificial genes encoding approximately 1.5×10^6 novel amino acid sequences. Because folding into a stable 3-dimensional structure is a prerequisite for most biological functions, we did not construct this collection of proteins from random sequences. Instead, we used the binary code strategy for protein design, shown previously to facilitate the production of large combinatorial libraries of folded proteins.”

b) “Cells relying on the de novo proteins grow significantly slower than those expressing the natural protein.”

c) “We also purified several of the de novo proteins. (To avoid contamination by the natural enzyme, purifications were from strains deleted for the natural gene.) We tested these purified proteins for the enzymatic activities deleted in the respective autotrophs, but were unable to detect activity that was reproducibly above the controls.”

And now, my comments:

a) This is the main fault of the paper, if it is interpreted (as Miller does) as evidence that functional proteins can evolve from random sequences. The very first step of the paper is intelligent design: indeed, top down protein engineering based on our hardly gained knowledge about the biochemical properties of proteins.

b) The second problem is that the paper is based on function rescue, not on the appearance of a new function. Experiments based on function rescue have serious methodological problems, if used as models of neo darwinian evolution. The problem here is specially big, because we know nothing of how the “evolved” proteins work to allow the minimal rescue of function in the complex system of E. Coli (see next point).

c) The third problem is that the few rescuing sequences have no detected biochemical activity in vitro. IOWs, we don’t know what they do, and how they act at biochemical level. IOWs, with no known “local function” for the sequences, we have no idea of the functional complexity of the “local function” that in some unknown way is linked to the functional rescue. The authors are well aware of that, and indeed spend a lot of time discussing some arguments and experiments to exclude some possible interpretation of indirect rescue, or at least those that they have conceived.

The fact remains that the hypothesis that the de novo sequences have the same functional activity as the knocked out genes, even if minimal, remains unproved, because no biochemical activity of that kind could be shown in vitro for them.

These are the main points that must be considered. In brief, the paper does not prove, in any way, what Miller thinks it proves.

And that was Miller’s best paper!

In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?

Comments
You further know that I made a valid historical parallel to how the ordinary German people who went along quietly with what went on under the 3rd Reich were made to do tours of shame to learn what hey had been in denial about in order to begin the de-nazification process. I showed a famous photo of a critical moment in that at Buchenwald
Invalid! Apart from the insertion of this word as justification for your behaviour, you know what you did. Far from being a person who I would have thought, given their cultural background, would wish to oppose all forms of oppression, you, on the contrary, want to oppress others. Your attitude to gay people I find particularly offensive. Alan Fox
:) Upright BiPed
Great to talk to you gpuccio. I'm fairly occupied for the next week or so (big family gathering next weekend), but let's stay in touch :) Cheers Lizzie Elizabeth B Liddle
So… ID arguments do not posit theology, the validity of design arguments do not rest upon the existence of a deity,
Indeed.
you disagree with Dover,
That does not follow, and I do not have a legal opinion. I am not a US citizen, and far from forbidding religion in education, it is mandated in the UK, so that would not even arise as an issue. The quality of the science would.
specific design hypotheses cannot be rejected as non-falsifiable,
Well, I wouldn't reject them even if they were not falsifiable, but I certainly agree that specific falsifiable design hypotheses are possible to formulate.
and you were able to characterize the null.
For the challenge you set? I characterised a null. I'm not sure we agreed on it.
So (using your own notion of “unguided forces”) that leaves only the observation you have yet to speak of, which we can just let lay if you wish: If mankind should someday create life from non-life, that achievement will not move a hair on the argument’s head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don’t yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it’s based on the speculation of an unknown mechanism, and therefore is subject only to the researcher’s imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
Sorry, I'm not sure what you are asking me here. Could you clarify? And which argument's "head" are you talking about exactly? I don't really understand what you are getting at here. Elizabeth B Liddle
Elizabeth: I agree with you to leave it there "for a bit", for the sake of our personal balance, and also because my ego is so satisfied of having obtained a "bravo" from you! :) I would just add, regarding this observation: I don't know, gpuccio. But you seemed to be suggesting that there was no evidence for evolutionary relationships within superfamilies, and it seemed to me there was quite a lot, that indeed the concept is based on the inferred phylogenies. that I never intended that. I choose the example of superfamilies exactly because we can be rather sure that they are completely disconnected, while within a superfamily, and even more within a family, homologies and similarities of folding and function are much more evident. gpuccio
Dr Liddle, at 207 So... ID arguments do not posit theology, the validity of design arguments do not rest upon the existence of a deity, you disagree with Dover, specific design hypotheses cannot be rejected as non-falsifiable, and you were able to characterize the null. So (using your own notion of "unguided forces") that leaves only the observation you have yet to speak of, which we can just let lay if you wish: If mankind should someday create life from non-life, that achievement will not move a hair on the argument’s head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don’t yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it’s based on the speculation of an unknown mechanism, and therefore is subject only to the researcher’s imagination. It simply cannot be tested, which is the very definition of non-falsifiable. Upright BiPed
I will not go on with the epistemological issue. I think you make a great confusion, mix the null hypothesis with the alternative hypotheses, and reason in Bayesian terms for a Fisherian setting. But frankly, I cannot spend further time on that, I don’t believe you will ever be convinced.
Fair enough, but bear in the mind the possibility that the person confused may not be me :) I'm by no means an enthusiast of Fisherian null hypothesis testing, but I think it's as near as we get to Popperian falsification, which is why I think Dembski favored it. Neyman-Pearson is similar, but you don't falsify as with Fisher. I'd say that the way science is actually done today is not by falsification but by comparing model fits: if model A is a better fit to the data than model B, we prefer model A. But no models are perfect fits - all models are at best incomplete.
You ask for a definition of dFSCI. I have given it I don’t know how many times.
Thanks, gpuccio. But recall I do not read every post (indeed I have only just been unbanned here) and I have an aging brain. A link would have been fine, and if you'd like me to post this for reference at TSZ I'd be delighted to do so.
However: dFSCI is a subset of CSI characterized by the following: a) It applies only to objects where a digital sequence can be read in some way (digital information) b) The specification is exclusively functional. A conscious observer can objectively define any function he likes for the sequence observed in the object, and offer a way to measure the function itself, and a threshold for the function, so that the function can be expressed as a binary variable (absent / present) for any possible digital sequence in an object. c) We compute the functional complexity linked to the function so defined: it expresses the minimal number of bits necessary to provide the function, and is computed as the ratio of the functional space (number of sequences that provide the function) to the search space (total number of possible sequences). The computation is usually made by fixing the sequence length and using some approximations, like the Durston method for protein families. Repetitive sequences, or sequences that can be generated by known algorithms will be considered as having the dFSI of the generating algorithm in that system (that is, we consider the Kolmogorov complexity of the observed sequence). d) We fix an arbitrary threshold to transform the computation of dFSI in bits into a binary value (dFSCI: absent / present). The threshold must take into account the system we are observing, the time span allowed for the emergence of the object and the probabilistic resources available in that system in that time span (the number of states that can be tested). For the biological system in our planet, I have suggested a threshold of 150 bits. 500 bits should be a sufficient threshold for any system. e) Our null (H0) is that the sequence originated as a random outcome in the system. The presence of dFSCI allows us to reject that null. f) Then we take into consideration, as alternative hypotheses, ID (the only well known generator of dFSCI). However, any non design alternative hypothesis that is explicitly formulated can be taken into consideration, before choosing design as the best explanation. Any hypothesis that can explicitly explain the outcome on the basis of what already is available in the system is welcome, and will be accepted or refuted according to its explanatory merits (not to a probability). If the alternative hypothesis includes random steps, those steps can be evaluated again by the dFSCI tool. This, in brief, and with some clarifications about the aspects you stressed in the last posts.
Thank you! For what it's worth, I think that's by far the best of the various chi-derivatives I've read! It allows us to reject a perfectly well-characterised null, even if it's a null nobody actually proposes ;) And then moves to a different method for comparing alternative explanations for the explanandum, dFCSO itself. Bravo! And perhaps we'd better leave it there for a bit, as I have some Penguins to attend to, not to mention Real Life :) Thanks a lot. Lizzie Elizabeth B Liddle
gpuccio
Elizabeth: So, what is your hypothesis? That there were a group of connected ancestors that gave rise to 2000 disconnected superfamilies? And how were these “ancestors” connected? At the sequence level? Did they fold similarly, Did they have similar functions?
I don't know, gpuccio. But you seemed to be suggesting that there was no evidence for evolutionary relationships within superfamilies, and it seemed to me there was quite a lot, that indeed the concept is based on the inferred phylogenies. But I don't see why separate roots for separate superfamilies is particularly unlikely.
So, how can you explain that a group of proteins connected at sequence level, and with similar folding, gave rise to 2000 separated superfamilies, that bear no sequence connection one with the other, that fold in completely different ways, that have different functions?
The superfamilies may well be separately rooted (emerge from different parts of the genome), but I am not yet persuaded that they couldn't have roots that go back to simpler yet selectable precursors.
Ancestors that have never been observed, neither as "remnants" in the proteome nor in the lab?
Why would they still be extant? If current theory is correct, LUCA lived several hundred million years after abiogenesis.
Is this an explanation, in your mind?
I just don't see a reason to favour of an inteventionist designer as an alternative to it. Just because we can't see back beyond a certain point, doesn't mean we need discount all evolution before that point. It would be, in my view, like assuming that because we can't see beyond the bend in the river, that the river source is just beyond that bend. But do please link me your definition of dFSCO. I'd like to see it. Cheers Lizzie Elizabeth B Liddle
Elizabeth: Oops! Accidental incomplete posting... I go on: Is this an explanation, in your mind? And you say you have no "darwinian faith"! I will not go on with the epistemological issue. I think you make a great confusion, mix the null hypothesis with the alternative hypotheses, and reason in Bayesian terms for a Fisherian setting. But frankly, I cannot spend further time on that, I don't believe you will ever be convinced. You ask for a definition of dFSCI. I have given it I don't know how many times. However: dFSCI is a subset of CSI characterized by the following: a) It applies only to objects where a digital sequence can be read in some way (digital information) b) The specification is exclusively functional. A conscious observer can objectively define any function he likes for the sequence observed in the object, and offer a way to measure the function itself, and a threshold for the function, so that the function can be expressed as a binary variable (absent / present) for any possible digital sequence in an object. c) We compute the functional complexity linked to the function so defined: it expresses the minimal number of bits necessary to provide the function, and is computed as the ratio of the functional space (number of sequences that provide the function) to the search space (total number of possible sequences). The computation is usually made by fixing the sequence length and using some approximations, like the Durston method for protein families. Repetitive sequences, or sequences that can be generated by known algorithms will be considered as having the dFSI of the generating algorithm in that system (that is, we consider the Kolmogorov complexity of the observed sequence). d) We fix an arbitrary threshold to transform the computation of dFSI in bits into a binary value (dFSCI: absent / present). The threshold must take into account the system we are observing, the time span allowed for the emergence of the object and the probabilistic resources available in that system in that time span (the number of states that can be tested). For the biological system in our planet, I have suggested a threshold of 150 bits. 500 bits should be a sufficient threshold for any system. e) Our null (H0) is that the sequence originated as a random outcome in the system. The presence of dFSCI allows us to reject that null. f) Then we take into consideration, as alternative hypotheses, ID (the only well known generator of dFSCI). However, any non design alternative hypothesis that is explicitly formulated can be taken into consideration, before choosing design as the best explanation. Any hypothesis that can explicitly explain the outcome on the basis of what already is available in the system is welcome, and will be accepted or refuted according to its explanatory merits (not to a probability). If the alternative hypothesis includes random steps, those steps can be evaluated again by the dFSCI tool. This, in brief, and with some clarifications about the aspects you stressed in the last posts. gpuccio
KF:
Dr Liddle: Kindly stop playing games, you have seen how the probability you have decided to turn into a talking point is actually part of an information measure and so a valid measure of info in the system will automatically take it into account. If you do not understand the way info is measured then please do a tutorial. KF
I am not "playing games" KF. It's a simple question. I know how information is measured. In fact I know several ways. But to compute chi you need a value for P(T|H) where, according to Dembski, H is the "relevant chance hypothesis taking into account Darwinian and other material mechanisms". All I want to know is how you compute the probability of T given that hypothesis. I know how to compute it where H is "random draw" or "random walk". What I want to know is how you compute it for other non-design hypotheses. How> is it "automatically taken into account"? Elizabeth B Liddle
Elizabeth: So, what is your hypothesis? That there were a group of connected ancestors that gave rise to 2000 disconnected superfamilies? And how were these "ancestors" connected? At the sequence level? Did they fold similarly, Did they have similar functions? So, how can you explain that a group of proteins connected at sequence level, and with similar folding, gave rise to 2000 separated superfamilies, that bear no sequence connection one with the other, that fold in completely different ways, that have different functions? Ancestors that have never been observed, neither as "remnants" in the proteome nor in the lab? Is this an explanation, in your mind? A gpuccio
Upright Biped:
Then it should be easy to answer the question: “Which ID arguments present theology as a defense of their claims?”
None that I am aware of. Which is why I didn't claim that any did.
Then the validity of the design inference is in no way dependent upon the existence of a deity, is it? Take a poll at TSZ and test out that nugget on your followers. See who else disagrees with Dover.
Short answer: no.
It wouldn’t eliminate the possibility of undetectable design in nature, but ID as a biological theory based on the ability to detect design, would be dead.
Well, that hypothesis would be dead, yes. There could be others. But that's the point of hypothesis testing: deriving testable hypotheses from larger explanatory framework. I have never said, and don't think is true, that specific design hypotheses are not falsifiable. All hypotheses have to be specific to be falsifiable - they have to be capable of being cast as a null.
This is the same conjecture you just agreed is specific enough to be falsified. The only difference between the two is that the proposed cause of “design” has replaced the proposed cause of “unguided forces”. If we merely insert the criteria for “unguided forces” which you intended to simulate in your demonstration (and we assume you knew what you meant by those criteria) then your concerns over the term would obviously be satisfied. Now, how would you falsify the claim?
The issue is that you have to be able to characterise your null. In my proposed demonstration, the null hypothesis (what I was aiming to falsify) was that in the absence of a designer, or intentional guiding force, self-replicators would not emerge from a population of non-self-replicators.
If your simulation had succeeded, you intended to a) ignore that result given the fact that you created the simulation, or b) state that your demonstration was valid because you properly modeled the environment, etc?
No, all I would have rejected is the null hypothesis that self-replicators cannot emerge from non-self-replicators unguided. I would not have rejected the hypothesis that the conditions necessary for self-replicators to emerge spontaneously (unguided) from non-self-replicators must themselves be designed. Elizabeth B Liddle
Af: Kindly stop misleading people further and excusing the inexcusable. You and your ilk full well know -- though I doubt you will admit -- that there has been a major problem of enabling behaviour for abuse of design thinkers in the academy and wider education systems amounting to a witch hunt. You further know that I made a valid historical parallel to how the ordinary German people who went along quietly with what went on under the 3rd Reich were made to do tours of shame to learn what they had been in denial about in order to begin the de-nazification process. I showed a famous photo of a critical moment in that at Buchenwald. Your ilk at TSZ seized on that to falsely accuse me of involvement in an alleged right wing conspiracy to impose a theocracy and in that context OM [itself a vicious slander, one too often sponsored by eminent advocates of evolutionary materialism in recent years . . . which reveals much about their want of basic decency] -- without correction from your side and with EL trying to deny what happened . . . enabling behaviour -- tried to use invidious association to insinuate that I and the Nazis object to homosexual behaviour and the like. That is a serious bit of smearing as it is quite simple to see that -- never mind the mind bending games now on all over our civilisation -- a great many people have serious, quite principled questions and objections to such homosexualisation of our law and culture. Now, at no point did you distance yourself from the behaviour of your circle, or seek to correct it. So to now suggest by the clever turn of phrase about opening cans of worms that I invited such smears is a LIE, indeed a further slander. And yes AF, I am calling things by their right, short blunt names at this point. I hope you have the decency to feel remorse. No, I pointed out a valid historical warning, only to have the foul-minded seek to smear me. At this stage I am pointing such out, not to expect a return to decent behaviour on your ilk's part (your ilk's behaviour over a prolonged period makes it plain that such will not happen until there is a decisive breaking that ends in a tour of shame and awakening of remorse -- a good thing; I recall here the apology tour Russia sent out to Jamaica in 1990, post cold war), but to make it clear that you have crossed the threshold into inexcusable incivility and that we should reckon with this as we reflect on the significance of the controversies over design. In short I am pointing out the problem of the sort of nihilistic faction tactics that Plato warned about as a consequence of the rise of evolutionary materialism in The Laws, BK X, 2350 years ago:
Ath. . . . [[The avant garde philosophers and poets, c. 360 BC] say that fire and water, and earth and air [[i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only. [[In short, evolutionary materialism premised on chance plus necessity acting without intelligent guidance on primordial matter is hardly a new or a primarily "scientific" view! Notice also, the trichotomy of causal factors: (a) chance/accident, (b) mechanical necessity of nature, (c) art or intelligent design and direction.] . . . . [[Thus, they hold that t]he Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT.] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny], and not in legal subjection to them.
I suggest that you need to soberly reflect on what you have involved yourself with as an enabler in light of such warnings from history on what happens when people begin to act out the implications of worldviews that -- having no foundational IS that can ground OUGHT -- imply that might and manipulation make 'right,' and that honour is a mere matter of power. For, the process has already begun. Or have you forgotten what slander, marginalisation, disrespect, and scapegoating all too often lead to? Good day, sir. GEM of TKI PS: JWT, I hope that helps set a bit of context. I am not going around picking a quarrel, I am warning about the cliff the slippery slope is pointing to. kairosfocus
Dr Liddle: Kindly stop playing games. You have seen how the probability you have decided to turn into a talking point is actually part of an information measure, and so a valid measure of info in the system will automatically take it into account. If you do not understand the way info is measured then please do a tutorial. KF kairosfocus
Dr Liddle at #174
I didn’t claim that anyone presented theology as a defense of their claims
Then it should be easy to answer the question: “Which ID arguments present theology as a defense of their claims?”
That was precisely my point: that the existence of a deity in no way depends on the Design Inference being valid
Then the validity of the design inference is in no way dependent upon the existence of a deity, is it? Take a poll at TSZ and test out that nugget on your followers. See who else disagrees with Dover.
In my view it is no more valid to argue evolution, therefore no god, than it is to argue no evolution, therefore god.
Therefore if a prominent biologist at a major university should look out on the human landscape and fondly propose that “evolution is the greatest engine of atheism”, you'd think he and all others with his mindset had drawn an invalid conclusion from evolution. In the real world, where people publish books, write articles, and own blogs, you must be shocked at the invalid arguments.
Well, it wouldn’t falsify ID.
It wouldn’t eliminate the possibility of undetectable design in nature, but ID, as a biological theory based on the ability to detect design, would be dead.
I agree that that specific conjecture would be falsified. This is why I think it is useful to have specific ID conjectures.
So the hypothesis that design is required to originate the semiosis that evolution requires to exist is a falsifiable proposition and scientifically valid.
I agree. A conjecture has to be specific to be falsifiable
This is the same conjecture you just agreed is specific enough to be falsified. The only difference between the two is that the proposed cause of “design” has replaced the proposed cause of “unguided forces”. If we merely insert the criteria for “unguided forces” which you intended to simulate in your demonstration (and we assume you knew what you meant by those criteria) then your concerns over the term would obviously be satisfied. Now, how would you falsify the claim?
Yes indeed. And so agency involvement cannot be rejected.
If your simulation had succeeded, would you have a) ignored that result, given the fact that you created the simulation, or b) stated that your demonstration was valid because you properly modeled the environment, etc.?
Non-specific conjectures (“A designer with unspecified powers”; “unspecified unguided forces”) cannot be falsified.
This is a great comment. On the one hand you are bringing up theological issues, having never provided a list of any ID claims that rest on theological reasoning. You even quite plainly state that the validity of the design inference is independent of the existence of a deity. And then on the other hand, your comment addresses absolutely nothing whatsoever in the paragraph you were addressing. Here it is again, the relevant text you ignored: If mankind should someday create life from non-life, that achievement will not move a hair on the argument’s head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don’t yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it’s based on the speculation of an unknown mechanism, and therefore is subject only to the researcher’s imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
I disagree, for the reasons given above.
Your “reasons given above” contained: a) a complete no-show on your list of ID claims that rely on theological backing, and b) a total lack of response to the proposition that “unguided forces” as a cause of life (i.e. those forces which you proposed, without argument, to model in your simulation) is a completely non-falsifiable thesis. Your “reasons given above” contained little new substance and a complete void in response to the key questions. There seems little reason to continue, as I am pressed for time otherwise. Upright BiPed
I have already told you Lizzie. We don't have to because no one can even produce a chance hypothesis. Joe
All I'm asking, KF, is how you compute P(T|H)? I'm happy to stipulate that T is one of a very small number of Targets out of a very large number of combinations. Elizabeth B Liddle
Dr Liddle; Pardon, still not good enough. Choice leading to design of functionally specific complex organised objects, systems and processes is a well known, frequently observed phenomenon with billions of cases in point. The same for chance and law. One could quibble a bit that chance is a bit of a catch-all for things where outcomes are not purposefully connected to configurations, but this too is well observed, starting with a falling, tumbling die or molecular velocities or the like, and of course sky noise, Zener noise, Johnson noise, flicker noise, Weibull distributed populations, Gaussian distributed populations, Poisson distributed populations, Boltzmann distributed populations and more, much more. The point is that cause is a known phenomenon: certain factors influence, enable/disable and in some cases determine outcomes. Causes in some cases follow regular patterns where, initial conditions being similar, outcomes will be predictably similar, e.g. F = m*a, etc. In others there is high contingency of outcomes without obvious material difference in initial conditions. A dropped object falls; if it is a die it tumbles and gives a particular random distribution of outcomes. If fair, it will be more or less flat random. If loaded -- a case of design -- it will not. Contingency by chance or by choice. And we have disaggregated aspects of one and the same phenomenon to see causal patterns connected to them. Now, you need to address FSCO/I and its known, reliably observed cause, and the analysis that shows why available atomic resources would only give a minuscule sample of the space of possible configs, W, so that the only reasonable outcome on chance is from the bulk -- which reliably will be non-functional because of the specificity constraints to achieve function. So, when we see patterns coming from narrow functionally specific zones T in such spaces W, the only reasonable cause is choice. Which we do routinely see, e.g. this post manifests just that pattern, in a context of linguistically functional code. Object code in a system that processes info will also be linguistic, but is specific to machine function. The notion of molecular noise in some pond or the like writing code is red flag ridiculous. It is only entertained because there is a strong prejudice against the alternative. And of course to the exact extent that molecular patterns are determined by forces of necessity, to that same extent they cannot bear information. KF kairosfocus
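To make the disputed quantity concrete: under the simplest possible chance hypothesis -- a uniform draw over the configuration space W, which is an assumption, and exactly the point at issue between the two commenters -- P(T|H) reduces to |T|/|W|, and the corresponding information measure is -log2 of that fraction. A minimal sketch, with invented numbers:

```python
from math import log2

# Illustrative only: assumes a UNIFORM chance hypothesis H, so that
# P(T|H) = |T| / |W|. Whether H may be modelled this way is precisely
# what is disputed in the exchange above.

def bits_for_target(target_count: int, config_count: int) -> float:
    """Information, in bits, of landing in a target zone T within a space W."""
    p_t_given_h = target_count / config_count
    return -log2(p_t_given_h)

# Example: a 500-bit configuration space (|W| = 2**500) containing a
# target zone of 2**100 functional configurations (both numbers made up).
print(bits_for_target(2**100, 2**500))  # 400.0 bits
```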
gpuccio:
Elizabeth: If this is not true for proteins – in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. But protein space is disconnected. Don’t you think that 2000 superfamilies (and growing) in the SCOP classification, completely separated at sequence level, folding level, and function level, constitute a disconnected space?
Well, I have to take your word for it that the families are thus disconnected, but the separation of the superfamilies is not the relevant separation. The fact that they exist is evidence that within families the space is highly connected, by which I mean a high correlation between sequence similarity and possession of function. And the fact that phylogenetic trees can be inferred is evidence for at least a substantial LGT signal, although clearly there is HGT signal as well. But the argument is not that one family can evolve into another, any more than evolutionists argue that cats can evolve into dogs (and that evolution is infirmed by the observation that they cannot). It's connectedness between the root of those families and postulated non-extant precursor sequences that you need to demonstrate is not there. In other words, I am not saying you can get from superfamily A to superfamily B; I'm saying that it may be possible to get from archaic protein A to the root of superfamily A. And given that you can get from the root of superfamily A to the twigs of superfamily A forward in time, why should the root not extend further back in time?
Let’s suppose that at some time, in LUCA or pre-LUCA, a living being survives and duplicates with only one protein superfamily (pure myth, but let’s suppose it can be true, for the sake of discussion). Even finding that single superfamily would already appear an extraordinary piece of luck. But let’s assume that pre-LUCA was very lucky. But then? What happens? Two possible scenarios, both equally impossible: a) In some way, the original functional superfamily, through RV, originates the other 1999 in the course of evolution, either from working genes or from non functional duplicates, realizing the stunning total of 1999 random walks that all reach a separated functional island, against all probabilities. b) In some way, the other 1999 superfamilies are found again by sheer luck, like the first one, from something like non coding sequences or similar. Against all probabilities. A good explanation indeed, this neodarwinian model. My compliments to all those who are so enthusiastic about it! Elizabeth, I have great respect for you. I find your concepts of epistemology a little bit mixed up, but after all that is not a crime. And you have at least one excuse in your darwinian faith: you obviously understand very little of molecular biology (definitely not a crime!). Others could not say the same in their own defense.
First of all, I do not have "darwinian faith". Second, I will defend my scientific epistemology. Thirdly, while I am no molecular biologist, I'm not entirely a naive reader of the literature, so I'm puzzled by your scenario. How do you know the walk to the root of a superfamily was random, i.e. that there were no functional ancestral members of that family that are no longer extant? Elizabeth B Liddle
@AF & kf: First nazis (AF), then 1984 (kf). The weird desire of darwinists and trinitarians to use tyranny-inspired polemic to their own benefit never ceases to amuse this onlooker. JWTruthInLove
You seem to be confused about who said what. Can't really blame you if you have been reading KF ;) I haven't made any reference to Nazis. KF opened a can of worms by posting some daft OP about marching people around death camps, complete with lurid pictures, as if it had something to do with ID or the price of bread. Alan Fox
So I take it we will not be presented with any evidence for darwinian processes producing irreducible complexity. And here I was thinking I missed some very important scientific discovery. Back to "Darwin's Doubt"... Joe
OK, one more time: design is not the default. One has to actively consider and eliminate non-agency explanations first. Not only do we have to eliminate chance and necessity, for example; there also has to be some correlation to mind, i.e. some specification. So, actively considering alternatives alone is enough to disqualify design as the default. Add to that the fact that eliminating chance and necessity is not by itself enough, and design is nowhere close to being a default inference. Joe
KF:
I trust we can now lay the default talking point to rest.
Indeed. I will no longer use that term. Please mentally replace it in any previous context with something like "that which we conclude when alternate hypotheses have been rejected". Elizabeth B Liddle
oops, messed up quote tags. I think you can probably figure out who said what! I wrote the paragraph starting "Well, strictly, no", and all subsequent paragraphs at the same indentation level. Elizabeth B Liddle
We are getting a little out of synch, here, gpuccio, but let me try to address this:
Elizabeth: Always briefly and in no order: At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors). ???? My tool for design inference is, as you well know, dFSCI. What darwinian process has ever been shown to produce dFSCI?
I think I can answer this, but before I do, can you link to a specific definition of your dFSCI? I am familiar with a number of comparable metrics, but would prefer to be clear about yours.
I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. ???? What are you saying here? Are you rejecting the whole Popperian theory of science?
No, I'm not.
It seems you are rather confused. The null is never falsified, it is only rejected because improbable. H0 is not falsified, ever.
Well, you'd better have that argument with Sir Ronald Fisher! But certainly, falsification is always probabilistic in empirical science, not absolute, as in math. That's why we don't "prove" things in science, merely demonstrate that our models are supported, or infirmed.
Instead, necessity explanations can be falsified. For example, if I assume that the cause of the effect I observe is X, but further experimentation shows that X does not produce that effect, my H1 is falsified.
Well, strictly, no. All you have done is retained the null that X does not have an effect. It may have had an effect, but one so small that you lacked the statistical power to demonstrate it. This is the big problem with trying to show, for instance, that a drug has no adverse effects, hence the movement towards effect-size hypotheses, which can be rejected by a study of known statistical power. So what would be better would be to cast this as an effect-size hypothesis: X has an effect on Y that is at least Z. Then, if you get an effect size whose confidence limits don't overlap with Z you can consider your data improbable under your hypothesis. Alternatively, if your hypothesis is that ONLY X produces Y (rather than the simple hypothesis that X produces Y) then you could falsify that hypothesis without specifying an effect size, but you'd still be setting it up as the null: In the absence of X there can be no Y. Under that null hypothesis, the probability of observing Y in the absence of X would be (near) zero. If you observed Y in the absence of X, you could reject your null (i.e. consider it falsified).
Popper’s point is that if an explanation can never be falsified, by the very nature of the explanation, it is not a scientific explanation. As I said in my #177, the ID theory is perfectly falsifiable, and is therefore perfectly scientific.
I do not, and have not, disputed your claim that the hypothesis that only a designer can produce dFSCI is falsifiable. It is, because that hypothesis can be cast as a null: in the absence of a designer, we will not observe the generation of dFSCI. If we can falsify that null (show the generation of dFSCI in the absence of a designer) we can infer that the null is false. Essentially, Fisherian hypothesis testing outputs the probability that you would observe what you observe under some hypothesis. If that probability is low, and you observe it anyway, you can reject the hypothesis. If your hypothesis can be rejected (you can show that what you observe would be unlikely were it true), it can be considered scientific in Popperian terms. The problem with Dembski's chi is that it is an attempt to falsify "non-design", which is far too vague to be cast as a falsifiable null. "Non design" is not a falsifiable hypothesis. Nor is "design".
Elizabeth B Liddle
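For readers unfamiliar with the two testing patterns Liddle invokes above -- rejecting a null when the observation is improbable under it, and casting an effect-size claim as a null to be rejected against a confidence interval -- a minimal sketch follows. All numbers are made up; it illustrates only the logic, not any biological quantity.

```python
from statistics import NormalDist

# Illustrative only: all numbers below are invented. (a) shows Fisherian
# rejection of a null; (b) shows the effect-size pattern described above.

# (a) Null: a coin is fair. Observation: 60 heads in 70 tosses.
# Compute how improbable the observation is under the null; if that
# probability (the p-value) is below alpha, reject the null.
n, k = 70, 60
mu, sigma = n * 0.5, (n * 0.25) ** 0.5            # binomial mean / sd under the null
p_value = 1 - NormalDist(mu, sigma).cdf(k - 0.5)  # normal approximation
if p_value < 0.05:
    print(f"p = {p_value:.2g}: reject the fair-coin null")
else:
    print(f"p = {p_value:.2g}: retain the null")

# (b) Effect-size null: "X has an effect on Y of at least Z".
# Reject it if the 95% CI for the observed effect lies wholly below Z.
effect, se, Z = 0.8, 0.3, 2.0                     # observed effect, std error, threshold
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se
if ci_high < Z:
    print(f"95% CI ({ci_low:.2f}, {ci_high:.2f}) excludes Z = {Z}: reject 'effect >= Z'")
```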
Similarly, I must challenge the objectors to identify a definitive fourth causal option, in the sequence low vs high contingency, then, for high contingency, chance vs choice. Where, in point of fact, chance is a very broad default indeed: if we cannot assign something to necessity, as it does not show natural regularities for this aspect, we assume chance unless we have positive reason to infer design. Objectors need to show that some alternative to design for high contingency in an aspect of a phenomenon or process must be categorised not as chance -- the default -- but as something else, and on what empirically justified basis. kairosfocus
I trust we can now lay the default talking point to rest. kairosfocus
P^4S: Webopedia: >>default A value or setting that a device or program automatically selects if you do not specify a substitute. For example, word processors have default margins and default page lengths that you can override or reset. The default drive is the disk drive the computer accesses unless you specify a different disk drive. Likewise, the default directory is the directory the operating system searches unless you specify a different directory. The default can also be an action that a device or program will take. For example, some word processors generate backup files by default. >> kairosfocus
Elizabeth: If this is not true for proteins – in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. But protein space is disconnected. Don't you think that 2000 superfamilies (and growing) in the SCOP classification, completely separated at sequence level, folding level, and function level, constitute a disconnected space? Let's suppose that at some time, in LUCA or pre-LUCA :), a living being survives and duplicates with only one protein superfamily (pure myth, but let's suppose it can be true, for the sake of discussion). Even finding that single superfamily would already appear an extraordinary piece of luck. But let's assume that pre-LUCA was very lucky. But then? What happens? Two possible scenarios, both equally impossible: a) In some way, the original functional superfamily, through RV, originates the other 1999 in the course of evolution, either from working genes or from non functional duplicates, realizing the stunning total of 1999 random walks that all reach a separated functional island, against all probabilities. b) In some way, the other 1999 superfamilies are found again by sheer luck, like the first one, from something like non coding sequences or similar. Against all probabilities. A good explanation indeed, this neodarwinian model. My compliments to all those who are so enthusiastic about it! Elizabeth, I have great respect for you. I find your concepts of epistemology a little bit mixed up, but after all that is not a crime. And you have at least one excuse in your darwinian faith: you obviously understand very little of molecular biology (definitely not a crime!). Others could not say the same in their own defense. gpuccio
PPPS: Collins English Dict, default: >> 6. (Electronics & Computer Science / Computer Science (also)) Computing a. the preset selection of an option offered by a system, which will always be followed except when explicitly altered b. (as modifier) default setting>> kairosfocus
Elizabeth Liddle:
Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors).
That is a lie. So now I understand why Lizzie sez what she does -- she just makes it all up and presents it as reality. I challenge Lizzie and all other evos to present the evidence that darwinian processes have been demonstrated to produce IC. If she fails to do so then it is obvious that she is just lying, again. Joe
PPS: Note, this explicitly accepts that cases of actual design that are too simple to pass the threshold will be assigned chance as the default explanation. This is so as to be ever so sure when the decision "design" is made. kairosfocus
PS: Note, again, the first default -- assumed so in absence of positive reason to go to an alternative -- is necessity, rejected on high contingency. There are two known sources of such, chance and choice. The second default is chance, unless there is a positive reason, such as FSCO/I, to decide in favour of choice. That is, an inference to design has to overcome TWO defaults, each on positive reasons. As has been explained over and over again, but ignored or distorted. kairosfocus
gpuccio:
Elizabeth: Let’s go to the final point, the most important: why the neo darwinian algorithm is not only unsupported by facts, but also unsupported by logic. I will try to be simple and clear. My impression was that, in your initial discussion, you were only suggesting that selectable precursors could exist in the protein space, and that if there were many of them, that would help the evolution of functional proteins. At this point you had not mentioned anything about protein structure and function, as you do in your following post. My answer to that was very simple. Even if many selectable precursors exist in the protein space, there is no reason to think that their distribution favors functional proteins versus non functional states. Therefore, the probability of getting to a functional protein remains the same, whatever the number of selectable intermediaries in the space. IOWs, even if selection acts, it will act as much to lead to non functional states as it does to lead to functional states, and as functional states are extremely rare, the probability of finding them remains extremely low. Is that clear?
I think so, but I may still be misunderstanding you. What you seem to me to be saying would be true if it were the case that there is no correlation between sequence similarity and functionality. So that if sequence ABCDE is functional, sequence ABCED is no more likely to be functional than sequence ZYXWP. However, if similar sequences are likely to confer similar fitness, then what you seem to be saying would not hold. And in fact, similar sequences tend to yield proteins with similar properties. No?
Now, in the following post, you add considerations about protein structure and function. They are not completely clear, but I will try to make my point just the same. Here is your argument:
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I’d say that under the Darwinian hypothesis that is what you’d expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case – similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive.
You seem to imply that, in some way, the relationship between structure and function can “help” the transition to a functional unrelated state. But the opposite is true. Let’s go in order. The scenario is, as usual, the emergence of a new basic protein domain. As I have already discussed with you in the past, we must decide what is our starting sequence. The most obvious possibilities are: a) An existing, unrelated protein coding gene b) An existing, unrelated pseudogene, no longer functional c) An unrelated non coding sequence. Why do I insist on “unrelated”? Because otherwise we are no longer in the scenario of the emergence of a new basic protein domain. As I have explained many times, we have about 2000 superfamilies in the SCOP classification. Each of them is completely unrelated, at sequence level, to all the others, as can be easily verified. Each of them has different sequence, different folding, different functions. And they appear at different times of natural history, although almost half of them are already present in LUCA. So, the emergence of a new superfamily at some time is really the emergence of a new functional island. The new functional protein will be, by definition, unrelated at the sequence level to anything functional that already existed. It will have a new folding, and new functions. Is that clear?
Yes, I think so. Let's assume for simplicity that there is only one superfamily of proteins - that all functional proteins share some kind of sequence similarity, but are a tiny proportion of all possible protein sequences. And let's say that phylogenetic analysis shows that the LUCA - the protein at the base of the tree (if there was one - protein sequences might well be the result of HGT as well as LGT) - is still quite substantial in length. In other words, that the shortest possible extant ancestor of the superfamily is still vastly unlikely, if picked at random from a barrel of all possible sequences. We know that similar, but longer, sequences tend to be functional (or we wouldn't have a superfamily). We do not know, because none exist, whether similar, but shorter, sequences (and therefore less improbable in our barrel pick) will also tend to be functional. But is there any reason to think not? I think you essentially address this below:
Now, as usual I will debate NS using the following terminology. Please, humor me.
Of course :) You have done no less for me :)
1) Negative NS: the process by which some new variation that reduces reproductive fitness can be eliminated. 2) Positive NS: the process by which some new variation that confers a reproductive advantage can expand in the population, and therefore increase its probabilistic resources (number of reproductions per unit time in the subset with that variation). Let’s consider hypothesis a). Here, negative NS can only act against the possibility of getting to a new, unrelated sequence with a new function by RV. Indeed, the only effect of negative NS will be to keep the existing function, and eliminate all intermediaries where that function is lost or decreases. The final effect is that neutral mutations can change the sequence, but the function will remain the same, and so the folding. That is what is expressed in the big bang theory of protein evolution, and explains very well the sequence variety within the same superfamily, while the function remains approximately the same. In this scenario, it is even more impossible to reach a new functional island, because negative NS will keep the variation within the boundaries of the existing functional island. What about positive NS? In this scenario, it can only have a limited role, maybe to tweak the existing function, improve it, or change a little bit the substrate affinity. Some known cases of microevolution, like nylonase, could well be explained in this context. Let’s go now to cases b) and c). In both situations, the original sequence is not transcribed, or simply is not functional. Otherwise, we are still in case a). That certainly improves our condition. There is no longer the limitation of negative NS. Now we can walk in all directions, without any worry about an existing function or folding that must be preserved. Well, that’s much better! But… in the end, now we are in the field of a pure random walk. All existing unrelated states are equiprobable. The probability of reaching a new functional island is now the same as in the purely random hypothesis. Your suggestion that some privileged walks may exist between isolated functional islands is simply illogical. Why should that be so? The functional islands are completely separated at sequence level, we know that. SCOP classification proves that. They are also separated at the folding level: they fold differently. They also have different functions. Why in the universe should privileged pathways exist between them? What are you? An extreme theistic evolutionist, convinced that God has designed, in the Big Bang, a very unlikely universe where in the protein space, for no apparent reason, there are privileged walks between unrelated functional islands, so that darwinian evolution may occur? How credible is this “God supports Darwin” game? You, like anyone who finds the neo darwinian algorithm logically credible, should really answer these very simple questions.
First of all, I entirely agree that for Darwinian evolution to occur, there must be a correlation between genotypic similarity and phenotypic similarity. If there is no correlation between genotype and phenotype, then even if there is "heritable variance in reproductive success", any offspring even slightly different, genetically, from its parent will have no more probability of resembling its parent phenotypically than any other possible variant. This is essentially the No Free Lunch argument, and the basis for Dembski's Search for a Search. However, for now, let us merely observe that organisms tend to resemble their parents both genotypically and phenotypically, and that similar genotypes produce similar phenotypes, both at the organism level and at the gene level - similar sequences tend to produce similar phenotypic effects. If this is not true for proteins - in other words if protein space is far more disconnected than, for example, the space of regulatory genes, then you may well be correct. I would certainly agree that the functional connectedness of protein space is a potentially interesting issue, and that if you can show that sequence distance (taking into account various mechanisms for variation generation, including both HGT and LGT) is too poorly correlated with fitness distance for a Darwinian account to be plausible, then, cool. This is, in my view, a much better approach to ID (or rather a much better approach to critiquing Darwinian accounts, because as well as "designer", as a possible inference from a falsification of Darwinian mechanisms, there is also "other factor as yet unknown" - indeed I would class a designer as one such), than trying to compute quantities like dFSCI (as I understand it), which merely tell us what needs to be explained, and are not, in my view, for reasons stated above, themselves evidence for a particular explanation. Elizabeth B Liddle
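The correlation Liddle describes -- between sequence distance and fitness difference -- can be illustrated with a toy computation. Both landscapes below are invented for the purpose (an additive one and a hash-based "rugged" one); nothing here is real protein data.

```python
import random

rng = random.Random(1)
L = 30  # toy "sequence" length

def smooth_fitness(s):
    # Correlated landscape: fitness is a simple additive function of sites.
    return sum(s) / len(s)

def rugged_fitness(s):
    # Uncorrelated landscape: fitness is an arbitrary function of the whole
    # sequence, so similar sequences need not have similar fitness.
    return (hash(tuple(s)) % 1000) / 1000

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def landscape_correlation(fitness, pairs=2000):
    """Correlation between mutational distance and fitness difference."""
    dists, diffs = [], []
    for _ in range(pairs):
        a = [rng.randint(0, 1) for _ in range(L)]
        b = a[:]
        for i in rng.sample(range(L), rng.randint(1, 5)):  # 1-5 point mutations
            b[i] = 1 - b[i]
        dists.append(sum(x != y for x, y in zip(a, b)))
        diffs.append(abs(fitness(a) - fitness(b)))
    return pearson(dists, diffs)

print("smooth landscape:", landscape_correlation(smooth_fitness))  # clearly positive
print("rugged landscape:", landscape_correlation(rugged_fitness))  # near zero
```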
F/N: A default is a first resort that is switched away from (a null hyp if you will), not the result of a reasonable trichotomy where we account for A and B as not credibly responsible and back up C with abundant empirical warrant. KF kairosfocus
@Liddle:
I am calling the “default” what you are left with if you reject other available options.
That's also how programming languages with switch-case-statements specify the "default"-behaviour. JWTruthInLove
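JWTruthInLove's point can be made concrete. In C-family languages the keyword is literally `default:`; in Python (3.10+) the same catch-all branch is spelled `case _:`. A trivial sketch, borrowing the washing-machine settings from Liddle's comment further down (the settings themselves are invented):

```python
def wash_setting(button: str) -> str:
    # `case _` plays the role of C's `default:` label -- what you get
    # when none of the explicit options is selected.
    match button:
        case "warm":
            return "warm wash"
        case "full rinse":
            return "full rinse"
        case _:
            return "eco (factory default)"

print(wash_setting("warm"))      # warm wash
print(wash_setting("anything"))  # eco (factory default)
```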
Elizabeth: Always briefly and in no order: At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors). ???? My tool for design inference is, as you well know, dFSCI. What darwinian process has ever been shown to produce dFSCI? I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. ???? What are you saying here? Are you rejecting the whole Popperian theory of science? It seems you are rather confused. The null is never falsified, it is only rejected because improbable. H0 is not falsified, ever. Instead, necessity explanations can be falsified. For example, if I assume that the cause of the effect I observe is X, but further experimentation shows that X does not produce that effect, my H1 is falsified. Popper's point is that if an explanation can never be falsified, by the very nature of the explanation, it is not a scientific explanation. As I said in my #177, the ID theory is perfectly falsifiable, and is therefore perfectly scientific. gpuccio
Elizabeth:
Regarding the falsifiability of ID, even if I have not followed the whole discussion, I would say that while a generic hypothesis of design or of a designer is not a scientific issue, and cannot be falsified, ID theory is a scientific theory and can very well be falsified. ID theory states that a designer can be inferred because of specific properties of objects, such as dFSCI. The simple observation of objects exhibiting dFSCI which certainly came into existence without any design intervention would immediately falsify the theory.
I agree. Elizabeth B Liddle
Kairosfocus:
For, repeatedly and indeed again above, it has been pointed out to you that the design inference is TWICE OVER, not a default. The first being mechanical necessity, overcome through high contingency as opposed to natural regularity such as F = m*a. Secondly, highly contingent outcomes are held the result of chance showing itself in patterns such as statistical distributions, absent the FSCO/I pattern of complex, functional specificity, especially in something like strings of digital code that function as code, beyond 500 bits or the equivalent.
OK, let me make myself clear. Let's say I have a washing machine. If I want a warm wash, I press "warm". If I want a full rinse, I press "full rinse". If I reject both, the machine stays on its factory setting (the "default"), which is "eco" (cold water, half rinse). In other words, I am calling the "default" what you are left with if you reject other available options. That's the sense in which I mean the EF (and indeed chi) is the "default" - it's what you conclude if you reject, in the case of the EF, Law and Chance, and, in the case of chi, the null hypothesis of no-design. In other words, it's what's left once you've rejected the other alternatives on offer (warm, full rinse, Law, Chance, the null). But if you don't like the word "default", that is fine. I will avoid it. Elizabeth B Liddle
gpuccio (getting there, slowly!):
Elizabeth: The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That’s why I’d say the onus is on you (if you want to reject the “null” of selectable precursors) to demonstrate that such precursors are very unlikely. As explained, “selectable precursors” are not a “null”: they are an alternative hypothesis (H1b, not H0).
That's fine. Dembski treats them as a null ("Darwinian or other material mechanisms"). You are not. This is good.
I reject the neo darwinian hypothesis H1b because it is completely unsupported by facts. I have no onus at all. It is unsupported by facts. Period. Show me the facts, and I will change my mind.
At the very least it is as supported by analogy as your designer is. Both darwinian processes and intentional designers have been demonstrated to produce Irreducibly Complex functions (i.e. functions in which there is no pathway of selectable precursors).
Moreover, if intermediaries exist, it must be possible to find them in the lab, and to argue about what advantage they could have given.
Possibly, possibly not. If archaic precursor proteins existed that provided reproductive advantage to their bearers in the archaic environment, neither the proteins nor the environment may be available for examination today. That doesn't invalidate the hypothesis, any more than lack of any independent evidence of a designer invalidates the designer hypothesis.
If your point is that: a) Precursors could have existed, but we have no way to find them and: b) Even if we found them, there is no way to understand if they could have given an advantage in “those organisms” and in “that environment”, because we can know nothing of those organism and that environment, then you are typically proposing an hypothesis that can never be falsified. I suppose Popper would say that it is not a scientific hypothesis.
I agree. And nor can the designer be falsified. There are many scientific explanations that cannot be falsified. Indeed, the only regular falsification we do in science is falsification of the null hypothesis. If a hypothesis cannot be set up as a null, it cannot be falsified. Which is why falsification isn't really how science makes progress (null hypotheses are usually really boring, like "these two samples are drawn from the same population", or "this correlation is zero").
We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives. The rejection of an alternative is always an individual choice. I would be happy if ID and neodarwinism could coexist as “unrejected alternatives” in the current scientific scenario. That’s not what is happening. Almost all scientists accept the unsupported theory, and fiercely reject the empirically supported one. Using all possible methods to discredit it, fight it, consider it as a non scientific religious conspiracy, and so on. Not a good scenario at all, for human dignity.
I absolutely agree with you that there is no scientific reason to reject design, and to claim that the success of the Darwinian model is evidence against a designer is fallacious, in my view. It isn't. However, to say that an inference is invalid (as I consider all ID inferences I have met so far to be) is not to say that the conclusion is untrue. Nor is saying that the ID argument against evolution is false the same as saying that evolution is true. And I do disagree with you sharply that the Darwinian model is unsupported. The Darwinian model provides us with a theoretical mechanism by which information as to how to build an organism that can survive and reproduce in an environment full of threats and resources can be bootstrapped into a genome, and it makes empirical predictions that have been repeatedly confirmed. In that sense it is a better theory than Newton's theory of gravity. We still do not have a mechanism for gravity - which means we have no explanation for space-time, and thus no mechanism for existence itself. If I wanted to make an ID argument, I'd say: never mind evolution, explain gravity, materialists!
I am merely arguing against the validity of the arguments for Design that you are presenting. The point is not missed at all. It’s my arguments for design that I defend, and nothing else. And I shall go on not using the capital letter, because I am arguing for some (non human) conscious intelligent being, not for God.
OK. And I see you are prepared to propose specifics (times and means of intervention, for instance). This is good - it potentially allows you to make specific testable predictions. For example, would you not agree that the intentions of the designer could be investigated, and hypotheses developed as to how the genome was physically manipulated?
Actually I accept that the flaw here is not circularity. Thank you for that. It is if you assume that by rejecting the random-draw null you have rejected the non-design null, which I have never done.
That's fine. I assumed that your dFSCI calculation, like Dembski's chi, was based on rejecting a Fisherian null. If it is just a fact that requires explanation (the proportion of functional proteins out of all possible sequences), that's fine.
but as you claim that the inference rather is by “simple inference by analogy”, I agree it is not circular. I simply claim what I have always clearly stated for years, here and at TSZ.
OK.
On the other hand nor is it sound. You are free to believe so, but I see no reason for such a statement. More in next post.
May need to go to the supermarket, but will be back! Elizabeth B Liddle
Elizabeth: Regarding the falsifiability of ID, even if I have not followed the whole discussion, I would say that while a generic hypothesis of design or of a designer is not a scientific issue, and cannot be falsified, ID theory is a scientific theory and can very well be falsified. ID theory states that a designer can be inferred because of specific properties of objects, such as dFSCI. The simple observation of objects exhibiting dFSCI which certainly came into existence without any design intervention would immediately falsify the theory. gpuccio
Dr Liddle:
it’s an argument I’m making in good faith
I am sorry, but I must disagree, above and beyond the issue of the slanders you have hosted and denied that you have hosted, which sharply reduce your credibility to speak and be taken at face value. For, repeatedly and indeed again above, it has been pointed out to you that the design inference is TWICE OVER, not a default. The first being mechanical necessity, overcome through high contingency as opposed to natural regularity such as F = m*a. Secondly, highly contingent outcomes are held the result of chance showing itself in patterns such as statistical distributions, absent the FSCO/I pattern of complex, functional specificity, especially in something like strings of digital code that function as code, beyond 500 bits or the equivalent. I know for a fact that I have repeatedly pointed this out to you, highlighting the per aspect design inference filter in explanation. That is a matter of FACT, not opinion. You may wish to disagree that there are three main patterns of causes that are empirically relevant, but that is not the same as showing that such a trichotomy is not commonly seen and widely understood. It is further not the case that, after the two defaults, one can reasonably and truthfully say that the inference to design is a default. Finally, if one wishes to suggest a fourth way of causation, per aspect, one needs to warrant it. And a combination of blind chance and mechanical necessity is taken into account under chance, as the necessity is not responsible for the aspect of high contingency, by definition. So, until someone warrants, on observation, a fourth causal pattern, we are well within our epistemic rights to reason on the three longstanding, abundantly warranted cases. And we are entitled to interpret the hope for a fourth pattern as an implicit acknowledgement of the force of the argument, multiplied by a wish not to follow it to its conclusion. Good day, madam KF kairosfocus
Elizabeth: Some quick comments, for the moment, in no special order: It’s not “propaganda”, gpuccio, it’s an argument I’m making in good faith. I was not referring to any bad faith on your part. IMO, the "argument-from-ignorance" issue is darwinist propaganda, and has been for many years. You are certainly repeating it in "good faith", but the argument itself is darwinist propaganda. IMO. This is a different argument and I agree that in this argument design is not treated as the default. Thank you. I am not sure at all that you are right about Dembski, but as usual I will stick to my arguments, in the form they have had for years here. So, if you agree that in my arguments there is no default, it's fine with me. I have no time now, so I will take the more serious issues later... gpuccio
Upright Biped:
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
a) Which ID arguments present theology as a defense of their claims? If the answer is none, then your comment was ill-conceived (if we can assume that reasoning which has no meaning to the issue being reasoned over can be classified as ill-conceived).
I didn't claim that anyone presented theology as a defense of their claims (although many have inferred a deity from the perceived evidence for design, and, indeed, frequently accuse "Darwinists" of being resistant to the Design inference because of its theological implications). And my comment was perfectly "well conceived". If a postulated designer is unspecified and may have unlimited powers (as, for example, many postulated deities do), there is no way to falsify such a generic designer hypothesis.
b) ID arguments are not based on a universe that ‘just worked’. An argument based on a universe that ‘just worked’ would not require any evidence for its claim, and indeed, could not provide any evidence because there would be nothing to distinguish evidence from not-evidence,
Exactly. That was precisely my point: that the existence of a deity in no way depends on the Design Inference being valid; conversely, atheism is not justified by the invalidity of a Design Inference. In my view it is no more valid to argue evolution, therefore no god, than it is to argue no evolution, therefore god.
Biological ID is based on the tangible observation of living systems, and cosmological ID is based on a consilience of fine tuning (i.e. not the fact that the universe works, but the parameters in which it works). All of this is evidence that can be argued over, which is effectively the opposite of having an absence of evidence. So again, your comment is ill-conceived if it is to be used as rationale.
I think that you misunderstood the import of my comment.
c) Following on von Neumann, Pattee argued that an iterative symbol system is the fundamental requirement in a process capable of the open-ended evolution that characterizes living things. ID can be falsified by a demonstration that unguided (non-design) forces are capable of establishing an iterative symbol system (a semiotic state) in a local system.
Well, it wouldn't falsify ID. It would merely falsify the specific claim that a designer is a necessary proximal cause for the generation of an iterative symbol system. It wouldn't rule out a designer as a necessary distal cause. For instance, as the cause of a universe with heavy elements including carbon, without which an "iterative symbol system" might (or might not) be impossible.
d) If it is demonstrated that unguided forces can establish this semiotic state in a local system, then ID as a theory would be very effectively falsified, even if the truth of ID remained in question.
I agree that that specific conjecture would be falsified. This is why I think it is useful to have specific ID conjectures.
On the other hand, the proposition of unguided forces as the origin of semiosis is not falsifiable under any circumstances.
I agree. A conjecture has to be specific to be falsifiable - make specific predictions. "Unguided forces" is far too vague to be falsifiable.
If mankind should someday create life from non-life, that achievement will not move a hair on the argument's head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don't yet know how these things happen without agency involvement.
Yes indeed. And so agency involvement cannot be rejected.
They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it's based on the speculation of an unknown mechanism, and therefore is subject only to the researcher's imagination. It simply cannot be tested, which is the very definition of non-falsifiable.
Non-specific conjectures ("A designer with unspecified powers"; "unspecified unguided forces") cannot be falsified. We falsify a conjecture when we make a specific prediction based on that conjecture, and that prediction is not confirmed by observation. However, note that all falsifications in science are probabilistic, and even when a prediction is confirmed (with a good p value), there are always potential alternative explanations, including unfalsifiable ones (omphalism, for instance).
So Dr Liddle, just as when you were pondering the evolution of replication machinery, you are once again wrong to the left and wrong to the right. Half of your comment is meaningless and the other half is demonstrably false.
I disagree, for the reasons given above. Elizabeth B Liddle
gpuccio, continuing:
But gpuccio, this then becomes an argument-from-ignorance.
Absolutely not! This is simply neo darwinist propaganda. The scenario is very simple. Design is a credible explanation for dFSCI, because of specific positive empirical observations of the relationship between dFSCI and design. That is a very positive argument for the design inference, and ignorance has nothing to do with it.
It's not "propaganda", gpuccio, it's an argument I'm making in good faith. Design and selectable precursors are both potential explanations for dFSCI; to reject selectable precursors and infer design on the basis that there are no known selectable precursors is, literally, an "argument from ignorance": "we don't know of any, therefore they don't exist". And it is selective as well - we don't know of any designers around at the time of protein domain emergence, but we can't assume they don't exist, because that, too, would be an "argument from ignorance". (btw, in this context "ignorance" doesn't mean "not knowing what you should know" - it just means "not knowing" - not sure if this translates across the languages!)
Then, there is the attempt of neo darwinists to explain dFSCI in biology by an algorithm based on RV + NS. Without discussing the details (more on NS in a moment), let’s say that such an explanation has no credibility unless selectable intermediaries to all basic protein domains exist. Our empirical data offer at present no support to such an existence, and it is not even suggested by pure logical reasoning.
I agree that the selectable precursors/intermediaries is an alternative explanation. I'm saying that rejecting its credibility because there is no "empirical data" is no more (or less) justified than rejecting "a designer" because there is no empirical data to suggest the existence of a designer. As for logical reasoning - I disagree, but maybe we will get to that later.
Therefore, the situation is as follows: a) We reject H0 (pure random origin) b) We have a credible explanation, based on positive empirical observations (design): let’s call it H1a c) We have a suggested alternative explanation, unsupported by any empirical observation (the neodarwinian hypothesis): let’s call it H1b Now, it is obvious to me that H1a is far better than H1b. So, I accept it as the best explanation. You may disagree, but the fact that I reject H1b as a credible explanation because it is unsupported by known facts is in no way an “argument from ignorance”: it is simply sound scientific reasoning.
No, gpuccio, I don't think it is "sound scientific reasoning". Let me attempt again to say why, with reference to your nicely laid out chain of reasoning above:
a) We reject H0 (pure random origin)
Of course. Nobody suggests this as a hypothesis anyway, and we can clearly reject it. This leaves:
b) We have a credible explanation, based on positive empirical observations (design): let’s call it H1a
The only reason to grant this explanation credibility is by analogy with human design. I do not reject the argument, but nor do I think it merits the credibility you accord it. And the reasoning, though the conclusion may be correct, is fallacious: A has properties X, Y and Z, but not P or Q. B has properties P and Z, but not X or Y. A is caused by D; therefore B is probably caused by D.
c) We have a suggested alternative explanation, unsupported by any empirical observation (the neodarwinian hypothesis): let’s call it H1b
It is supported by as much empirical observation as the design hypothesis, if not more. We observe that things evolve via selectable precursors, both theoretically and by direct observation. My point is not that we must reject H1a and consider H1b supported. It is that there is no principled reason for rejecting either. In order to choose the better, we'd have to make differential predictive hypotheses about new data: what would we expect if H1a were true that we would not expect if H1b were true, and vice versa? Then go out and look for it. Again, at the risk of being repetitive, let me say: I am NOT arguing that the world in general, or biology specifically, was not designed. I'm not even arguing that we could not test certain design hypotheses. I'm simply arguing that the design inference methodology (Fisherian null hypothesis testing) suggested by Dembski doesn't work in any case where the probability distribution under the null is not calculable, and that the argument by analogy doesn't work either. The conclusion may be true, but the arguments are unsound. Elizabeth B Liddle
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
a) Which ID arguments present theology as a defense of their claims? If the answer is none, then your comment was ill-conceived (if we can assume that reasoning which has no meaning to the issue being reasoned over can be classified as ill-conceived). b) ID arguments are not based on a universe that 'just worked'. An argument based on a universe that 'just worked' would not require any evidence for its claim, and indeed, could not provide any evidence because there would be nothing to distinguish evidence from not-evidence. Biological ID is based on the tangible observation of living systems, and cosmological ID is based on a consilience of fine tuning (i.e. not the fact that the universe works, but the parameters in which it works). All of this is evidence that can be argued over, which is effectively the opposite of having an absence of evidence. So again, your comment is ill-conceived if it is to be used as rationale. c) Following on von Neumann, Pattee argued that an iterative symbol system is the fundamental requirement in a process capable of the open-ended evolution that characterizes living things. ID can be falsified by a demonstration that unguided (non-design) forces are capable of establishing an iterative symbol system (a semiotic state) in a local system. d) If it is demonstrated that unguided forces can establish this semiotic state in a local system, then ID as a theory would be very effectively falsified, even if the truth of ID remained in question. On the other hand, the proposition of unguided forces as the origin of semiosis is not falsifiable under any circumstances. If mankind should someday create life from non-life, that achievement will not move a hair on the argument's head. If ID proponents point out the vast amount of agency involvement in the creation of that life, materialists would forever argue, as they do now, that we just don't yet know how these things happen without agency involvement. They will say, just as they do now, that any counter-argument can do no more than point to a gap in our knowledge. The counter-argument to unguided forces would be immediately labeled, as it is now, as an argument from ignorance. The proposition of unguided forces never has to undergo a test because it's based on the speculation of an unknown mechanism, and therefore is subject only to the researcher's imagination. It simply cannot be tested, which is the very definition of non-falsifiable. So Dr Liddle, just as when you were pondering the evolution of replication machinery, you are once again wrong to the left and wrong to the right. Half of your comment is meaningless and the other half is demonstrably false. Upright BiPed
Phinehas:
That seems like a pretty legitimate inference to me. What am I missing? Can you give me an example to work with?
Yes: for all cases of gastric ulcer for which a cause is known, the cause is a bacterium. Therefore, for all cases of gastric ulcer for which no cause is known, the cause must be a bacterium. This could be true - after all, it was a long time before the helicobacter was discovered, and it may well be that gastric ulcers for which no trace of bacteria is found are nonetheless caused by a bacterium that we haven't got a good test for yet. But it is not a sound inference. There may be a quite different cause of gastric ulcer in patients for whom the cause is not helicobacter. Elizabeth B Liddle
Hey Liz, nice to see you here! I don't understand this:
If, for all cases of X for which a cause is known, the cause is Y, you cannot infer that, for all cases of X for which no cause is known, the cause is also Y.
That seems like a pretty legitimate inference to me. What am I missing? Can you give me an example to work with? Phinehas
As for non-biological self-replicators, there isn't anything to suggest they exist- it usually takes two, one for a template and one for a catalyst- and nothing to suggest that, even given those two and plenty of resources, anything else will ever evolve. The point is that there isn't any evidence to support Lizzie's position. Joe
You think Design is reasonable, and that material mechanisms are not. I think material mechanisms are reasonable, and Design is not (or rather interventionist design).
Excuse me but what you think is irrelevant. What has been demonstrated is that darwinian processes are not sufficient for producing protein machinery.
I don’t think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked.
And yet we have said how to falsify ID. Also, scientists have said they have falsified it (they were wrong, but you can't have it both ways). But anyway, can you give us a testable hypothesis for the materialist position? That way we will know what you will accept. The reason is that many people have offered up testable design hypotheses. So all we need is hypotheses from your side, so we know what you will accept and we can compare- you know, to see who is really doing science and who is bluffing. So can you produce or not? Joe
And what does the test have to do with biology? I don’t see any practical implications of your arbitrary algorithm. To quote Eric Anderson
Sorry, I disagree completely. The point is to demonstrate the effectiveness of self-replication and heritable variation, not simply to assume it. computerist
Eric:
I was referring to the machines themselves.
OK. In that case, could you rephrase your question?
I’d say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes.
Here you are just lumping the alleged evolutionary processes into the same category as human design. Nonsense. Talk about assuming the conclusion. That is precisely the question at issue: do evolutionary processes have the ability to produce output like human design? And they have never been shown to do so.
I wasn't assuming a conclusion there - I was stating it. I agree it is the question at issue. The provisional conclusion I myself have reached is the one I gave.
No. ID points out that such quality is only known to come from purposeful intelligent activity. And no-one has ever shown that purely natural processes are up to the task. That is what the whole debate is about.
Well, Dembski specifically excluded intention and purpose from his definition of intelligence. But I agree that purposeful intelligent activity is what ID implies (and Dembski implies is implied!)
Why in the world would you have to know the answer beforehand? That certainly doesn’t follow logically. As long as the calculation is based on reasonable estimates and includes information that we do know, it can allow us to draw a reasonable inference based on the current state of knowledge. We certainly know enough about biology at this point to start making some calculations and drawing some reasonable conclusions. No-one has ever claimed that the exact probabilities are known with precision. And they need not be.
Because that is how Dembski defines chi - in terms of Fisherian null hypothesis testing. If the null can be rejected, at an alpha of 1/10^150, then design is inferred. If we cannot compute the null, then the calculation is meaningless. If the P(T|H) is just a guess, based on your view that the probability of the Target, given Darwinian processes or other material mechanisms, is extremely low, then, clearly, the calculation will output "Design". However, if your view is that the probability of the Target, given Darwinian processes or other material mechanisms, is quite high, then the calculation will not output "Design". In other words, what the chi calculation outputs is entirely dependent on your estimate of how likely non-Design mechanisms are to have resulted in the Target. So it doesn't tell you anything that you didn't already think. You think Design is reasonable, and that material mechanisms are not. I think material mechanisms are reasonable, and Design is not (or rather interventionist design). One of us may be right and the other wrong, but all the chi calculation will do is tell us what we already think. You could call it an instance of "conservation of information" :)
Well, you’re back to your long-standing concern about being able to “precisely define the probability distribution.” Yet you freely admit that such a calculation is not needed in other instances (archaeology, forensics, etc.). So you are imposing an a priori different demand for what counts as evidence or what tool can be applied to infer design when it comes to living systems.
No, because that is not the method applied in other instances, or rather, it is only used when you can define the probability distribution under a relevant null. For instance, I'd be perfectly happy to use chi (actually something much simpler with a much more lenient alpha) to test the hypothesis that a casino had rigged the odds. That's because we know precisely what the probability distribution is under the null of no rigging. But I wouldn't use it for archaeology, or most forensics, or SETI. I'd use something else. My point is really very simple, and not even an attack on ID as a hypothesis: chi, as a concept, is an invalid way to test the ID hypothesis. So is inferring ID from the fact that biological organisms share some important properties with human artefacts. But there are plenty of other ways of setting about finding evidence for an interventionist designer. (Note the qualification "interventionist" - not all designers will leave evidence).
Now, we have a couple of possibilities: It could be that your position is simply based on a refusal to consider design in living systems. Some might be forgiven for thinking this is the case.
Well, I don't find the idea of interventionist design a very attractive one, it's true. But that wouldn't stop me considering it, if I found the arguments or evidence persuasive or valid. On the other hand, I should point out that for half a century I was perfectly happy to assume that the world had been brought into being by an omniscient, omnipotent and benevolent deity who designed it so beautifully that its workings could be discerned by science.
Or, perhaps, it could be that you are aware of some other calculation or some other “tool,” as you say, that will allow us to determine whether a particular living system was designed.
I can certainly suggest other approaches, but they are unlikely to be definitive. I don't think design is falsifiable, because it is always possible (indeed it is highly defensible theology) to postulate a designer who conceived of a universe that Just Worked. But it might be possible to set up specific testable hypotheses for an interventionist designer. Failure to demonstrate an interventionist designer would not, however, allow us to conclude either that there is no interventionist designer or that there is no designer at all.
Please let us know what your proposed calculation or proposed tool is.
I'd actually be happy to do so. I'd start with some specific design hypothesis, such as Front Loading. But I should reiterate that I do not, myself, think that an interventionist designer is a terribly promising hypothesis. Elizabeth B Liddle
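A minimal numerical sketch may make Dr Liddle's point about chi concrete. The Java below (chosen to match the pseudo-code elsewhere in this thread) uses entirely hypothetical values for phiS(T) and P(T|H); it shows only that, with the resource term and phiS(T) fixed, the sign of chi is decided by whichever P(T|H) estimate one plugs in - the GIGO worry stated above.

// Sketch (hypothetical numbers): chi = -log2(10^120 * phiS(T) * P(T|H)).
// Working in log2 space to avoid underflow. Only the P(T|H) estimate
// differs between the two runs; it alone decides the verdict.
public class ChiSketch {
    static double chi(double log2PTgivenH) {
        double log2Resources = 120 * Math.log(10) / Math.log(2); // log2(10^120) ~ 398.6
        double log2PhiS = 20; // hypothetical: phiS(T) ~ 2^20
        return -(log2Resources + log2PhiS + log2PTgivenH);
    }
    public static void main(String[] args) {
        // Estimate P(T|H) = 2^-100 (selectable precursors deemed plausible):
        System.out.println(chi(-100));  // ~ -318.6 : no design inferred
        // Estimate P(T|H) = 2^-1000 (material mechanisms deemed hopeless):
        System.out.println(chi(-1000)); // ~ +581.4 : "design"
    }
}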
gpuccio: thanks for these responses. I'm going to have to take them in bite-size chunks, as I have visitors at the moment.
Another point I want to stress is that design is never inferred “by default”. I don’t even understand what you mean by such a strange wording.
Dembski sets up his null such that if it is rejected, he infers design. That is what I meant by "default".
Design is inferred because we observe dFSCI, and because we know that design is the only observed cause of dFSCI in all testable cases.
This is a different argument and I agree that in this argument design is not treated as the default. I suggest it is nonetheless fallacious, for reasons I gave above. If, for all cases of X for which a cause is known, the cause is Y, you cannot infer that, for all cases of X for which no cause is known, the cause is also Y. You would not do it for a disease, and there is no justification for doing it for anything else!
That makes a design inference perfectly reasonable. There is no default here, only sound reasoning.
I agree you are not defaulting to design (as Dembski does, in both the EF and CSI) but it is not sound reasoning.
Design is a very good explanation for any observed dFSCI. The fact that no other credible explanation is available makes design the best available explanation, certainly not "a default inference".
There is a perfectly good alternative, namely, that the proteins evolved by selectable precursors. You may find an interventionist designer more credible than selectable precursors, but there is no intrinsic reason to choose one rather than the other, given that we have no independent evidence (on the table) for either a designer or selectable precursors. Or, if there is, I am not seeing the reasoning :) More in a bit Cheers Lizzie Elizabeth B Liddle
@AF & kf: First Nazis (AF), then 1984 (kf). The weird desire of darwinists and trinitarians to use tyranny-inspired polemic to their own benefit never ceases to amuse this onlooker. JWTruthInLove
@computerist:
probably should be the same as the initial program
Why? And why do you need to execute some random code-string? Obviously, it depends on the compiler (or the language) whether it accepts any combination of a binary-string or not. And what does the test have to do with biology? I don't see any practical implications of your arbitrary algorithm. To quote Eric Anderson
It is very nice that people can write a computer program, with very little relevance to actual biology, and see it do something. That is a good exercise in programming experience, and it is fun to see your program do something.
JWTruthInLove
AF: You have simply compounded your errors and those of your colleagues, reminding me of 1984's lesson on the twisting of language as a step to tyranny. The long train of radical abuses and usurpations is ever more evident, and where it will end if unchecked is all too plain. Thank you for the inadvertent warning. Let me give you Websters 1828 on liberty with particular emphasis on the distinction from license:
liberty LIB'ERTY, n. [L. libertas, from liber, free.] 1. Freedom from restraint, in a general sense, and applicable to the body, or to the will or mind. The body is at liberty, when not confined; the will or mind is at liberty, when not checked or controlled. A man enjoys liberty, when no physical force operates to restrain his actions or volitions. 2. Natural liberty, consists in the power of acting as one thinks fit, without any restraint or control, except from the laws of nature. It is a state of exemption from the control of others, and from positive laws and the institutions of social life. This liberty is abridged by the establishment of government. 3. Civil liberty, is the liberty of men in a state of society, or natural liberty, so far only abridged and restrained, as is necessary and expedient for the safety and interest of the society, state or nation. A restraint of natural liberty, not necessary or expedient for the public, is tyranny or oppression. civil liberty is an exemption from the arbitrary will of others, which exemption is secured by established laws, which restrain every man from injuring or controlling another. Hence the restraints of law are essential to civil liberty.
Now, understand the chaos that radicals are about to plunge our civilisation into by playing with the fire of destabilising family and forcing men to stand on conscience-backed principle at any cost. A lesson Antiochus Epiphanes should have heeded before he decided to paganise the Judaeans by dint of state power. KF kairosfocus
@160, probably should be the same as the initial program. If Python is used (interpreted), then exec/execfile/eval come to mind. computerist
@computerist: It was not an objection. My point is stated in the question "What does the interpreter or compiler which executes the file look like?" JWTruthInLove
… anyone can write an interpreter or compiler that accepts any given language without errors.
So? I'm not getting your point here, or how it's an objection of any sort. Perhaps you can explain a bit more. computerist
@computerist I think that's the main question here: What does the interpreter or compiler which executes the file look like? ... Since ....
This is mostly up to the programmer depending on the language being used.
... anyone can write an interpreter or compiler that accepts any given language without errors. JWTruthInLove
@JWTruthInLove Simple, it will run the file, if it can. If it's an interpreter, and the file runs with no errors, it returns true. If it's a compiler, and the file compiles and executes without errors, it returns true. This is mostly up to the programmer, depending on the language being used. This doesn't assume the type of function; this merely assumes that it functions. computerist
@computerist: What does "gaFile.execute(temp)" do? JWTruthInLove
We can test whether a Darwinian process is capable of generating FCSI, and sustaining and building on top of the existing FCSI, given self-replication. I propose to run this type of simulation (based roughly on the pseudo-code below), but more knowledge (which I don't have, since I'm not a biologist) is required for more "realistic" results.

GAFile gaFile = new GAFile(someArbitrarySizeOfTextFile);
Fcsi fcsi = new Fcsi();
fcsi.setFcsiBound(theSmallestKnownFunctionalSubsystemExpressedInBits);

boolean running = true;
int count = 0;
int generations = someBigNumber;

while (running) {
    File temp = gaFile.randomize();          // mutate a copy of the current file
    boolean runnable = gaFile.execute(temp); // does it still "function"?
    if (runnable) {
        gaFile.setFile(temp); // new one "survives", save for next iteration
        if (fcsi.hasFcsi(gaFile.getFile())) {
            fcsi.incrementFoundCount();
        } else {
            fcsi.incrementNotFoundCount();
        }
    }
    if (count > generations) {
        running = false;
    }
    count++;
}
fcsi.printStatistics();

This may be a ridiculous proposition considering it only took me a few minutes to write (and it is merely a simulation), but unless this type of test is performed, I don't see how any credence can be given to the Darwinian mechanism that "self-replication with heritable variation is all that is required and therefore evolution is inevitable" as per comment 137. computerist
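As a minimal sketch of what the gaFile.execute(temp) step could mean in practice (an assumption of mine, not computerist's implementation): run the candidate file under an external interpreter and count "no errors" as a zero exit code, as described in the exchange above. It assumes a python binary on the PATH.

import java.io.File;
import java.io.IOException;

// Hypothetical stub for gaFile.execute(temp): run the candidate file
// under an external interpreter; "survives" iff it exits cleanly.
public class Executor {
    static boolean execute(File temp) {
        try {
            Process p = new ProcessBuilder("python", temp.getAbsolutePath())
                    .inheritIO()
                    .start();
            return p.waitFor() == 0; // exit code 0 = ran without errors
        } catch (IOException | InterruptedException e) {
            return false;
        }
    }
}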
F/N @ KF How this declaration abuses the word "freedom" is a perfect illustration of why we need to oppose such bigotry. Alan Fox
Dr Liddle, pardon but I must highlight a question, are these declarants and signatories all to be invidiously compared to Nazis and Nazism, too? And is that to be seen as “a nuh nutten”? KF
I certainly wonder what would happen to human rights such as the right to free expression and the free exchange of ideas if you and your ilk were to gain any sort of political power base. Thank goodness you make yourself appear so ludicrous that most people can't take you seriously. Alan Fox
Kairosfocus:
Perhaps it has not dawned on you that before you can try to have a discussion with me on merits of points, you need to resolve the problem of hosting and denying that you have harboured slander against me.
I'm sorry, kairosfocus, but I do not understand the problem you want me to resolve. Please feel free to come over to TSZ and make your point, or, alternatively, as I am now able to post here, perhaps start a thread here? Or feel free to email me. Elizabeth B Liddle
Elizabeth @143:
Assuming you mean the output of machines, as opposed to machines as the putative artefact, yes.
I was referring to the machines themselves. Your “output of machines” is potentially also the product of design, but is typically so poorly defined (notwithstanding your self-congratulations on being so careful with your definitions) as to not be helpful. Let’s focus on the easy cases first: the actual machines we see before us.
I’d say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes.
Here you are just lumping the alleged evolutionary processes into the same category as human design. Nonsense. Talk about assuming the conclusion. That is precisely the question at issue: do evolutionary processes have the ability to produce output like human design? And they have never been shown to do so.
I think that quality is what the ID project has tended to assume must come from an intentional designing agent.
No. ID points out that such quality is only known to come from purposeful intelligent activity. And no-one has ever shown that purely natural processes are up to the task. That is what the whole debate is about.
My point is that key to the chi calculation is the parameter P(T|H), the Probability of the [specified] Target, given the null Hypothesis, which is “the relevant Chance Hypothesis taking into account Darwinian and other Material mechanisms”. That’s what I’m saying is not only non-calculable, but such that you’d have to know the answer to your question before you could calculate it.
Why in the world would you have to know the answer beforehand? That certainly doesn’t follow logically. As long as the calculation is based on reasonable estimates and includes information that we do know, it can allow us to draw a reasonable inference based on the current state of knowledge. We certainly know enough about biology at this point to start making some calculations and drawing some reasonable conclusions. No-one has ever claimed that the exact probabilities are known with precision. And they need not be.
In other words, I suggest, null hypothesis testing in this form is a completely inappropriate and useless way of inferring Design. Not wrong, just useless. GIGO. . . . I’d say that null hypothesis testing simply won’t give you the answer to the question you are asking. I’m saying it’s the wrong tool for the job. It can’t do it, unless you can precisely define the probability distribution under your null. So it will work to reject the null that a coin is fair. It won’t work to reject the null that a black monolith was not-designed.
Well, you’re back to your long-standing concern about being able to “precisely define the probability distribution.” Yet you freely admit that such a calculation is not needed in other instances (archaeology, forensics, etc.). So you are imposing an a priori different demand for what counts as evidence or what tool can be applied to infer design when it comes to living systems. Now, we have a couple of possibilities: It could be that your position is simply based on a refusal to consider design in living systems. Some might be forgiven for thinking this is the case. Or, perhaps, it could be that you are aware of some other calculation or some other “tool,” as you say, that will allow us to determine whether a particular living system was designed. Please let us know what your proposed calculation or proposed tool is. Eric Anderson
Dr Liddle, pardon but I must highlight a question: are these declarants and signatories all to be invidiously compared to Nazis and Nazism, too? And is that to be seen as "a nuh nutten"? KF kairosfocus
F/N: EL in 128 -- after repeated cycles of correction over the course of at least a year, on what the design inference explanatory filter is and does:
“there is no reason to reject selectable precursors and infer design by default”.
Again, after all of these months and more? At this point, I must chalk this up to a deliberately misleading strawman. If EL actually believes this, it is because she has repeatedly refused to accept what has been repeatedly, explicitly pointed out to her concerning the EF, and what is a simple fact easily ascertainable from the flowchart presented here in the very first post in January 2011 for the ID foundations series here at UD. Namely, as the two decision diamonds show, there are TWO DEFAULTS, and design is not one of them. (This is also implied in the analysis above where the expression Chi_500 = I*S - 500, bits beyond the solar system threshold, is deduced.) I have explicitly, point by point, explained this to Dr Liddle before, so I will not try again. I will simply highlight that the first default is that mechanical necessity suffices to causally explain a phenomenon, similar to how a dropped heavy object falls under 9.8 N/kg initial acceleration near earth's surface. This is defeated by observing high contingency. That is, when under evidently similar initial circumstances we have materially diverse outcomes, e.g. the dropped object is a die and it tumbles and rests with different faces uppermost. High contingency has two empirically warranted explanations: chance circumstances and/or design, where both can be involved, but under certain circumstances we can draw out the distinct effects. Chance is of course the second default. It produces stochastically distributed outcomes, reflective of underlying processes that may trace to quantum statistical distributions or the like, or to the sort of circumstances that happen with a die. That is, we have uncorrelated deterministic chains of cause, with some noise injected, and the result is amplified through sensitive dependence on initial and intervening conditions such as the surface roughness and the eight corners and twelve edges of a die. Similarly, my father's generation of statisticians had a trick of using phone directories as random number tables, as the line codes [numbers], though deterministically assigned, are typically highly uncorrelated with names listed in alphabetical order. What defeats this default is what GP is highlighting: complex, functional specificity, especially in coded information such as we see in this post and in something like DNA. This is because code is readily recognisable, is functionally specific, and is therefore confined to narrow zones T in much larger config spaces W. This has been outlined above, and if you want to look at a widely accessible discussion, cf. Signature in the Cell. There are three known relevant sources of cause: 1: necessity acting on initial circumstances through dynamics, leading to natural regularities such as F = m*a; 2: chance contingency, leading to stochastically distributed outcomes; 3: choice contingency, aka design, leading in some cases to FSCO/I, especially dFSCI. The only empirically warranted source for FSCO/I is design, and there are literally billions to trillions now of accessible cases in point. There is no fourth causal pattern that is empirically warranted. That is, we have either regularities or high contingency, and contingency has two distinct sources with diverse empirical signatures in cases of interest. This is not reasoning by question-begging default; it is inference to the best empirically warranted current explanation, on billions of tests that show the reliability of the inference.
If objectors genuinely disagree, then they should put up clear cases of blind chance and/or -- notice, the combination is in this -- mechanical necessity producing FSCO/I, especially dFSCI. Let us just say that there is a long list of attempts that invariably turn out to instead be intelligence. All of this has been shown, right there before Dr Liddle and co, over and over again. So, when I see the sort of regurgitated, recirculated talking point above, I have to conclude -- with more of sorrow than of anger -- in light of the incident of denied slander already cited that this is willful continued misrepresentation. Good day GEM of TKI kairosfocus
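For readers keeping score, the two-defaults logic kairosfocus describes can be reduced to a short sketch (a paraphrase in code of the stated filter, not the original flowchart; the 500-bit figure is the solar-system threshold used in this thread):

// Sketch of the per-aspect explanatory filter as described above.
// Default 1: mechanical necessity. Default 2: chance.
// Design is inferred only when both defaults are defeated.
enum Cause { NECESSITY, CHANCE, DESIGN }

class ExplanatoryFilter {
    static Cause classify(boolean highContingency,
                          boolean functionallySpecific, // S = 1 or 0
                          double infoBits) {            // Ip
        if (!highContingency) return Cause.NECESSITY;   // first default holds
        double chi500 = (functionallySpecific ? infoBits : 0) - 500;
        return (chi500 > 0) ? Cause.DESIGN : Cause.CHANCE; // chance is second default
    }
}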
Elizabeth: Let's go to the final point, the most important: why the neo darwinian algorithm is not only unsupported by facts, but also unsupported by logic. I will try to be simple and clear. My impression was that, in your initial discussion, you were only suggesting that selectable precursors could exist in the protein space, and that if they were many, that would help the evolution of functional proteins. At this point you had not mentioned anything about protein structure and function, as you do in your following post. My answer to that was very simple. Even if many selectable precursors exist in the protein space, there is no reason to think that their distribution favors functional proteins versus non functional states. Therefore, the probability of getting to a functional protein remains the same, whatever the number of selectable intermediaries in the space. IOWs, even if selection acts, it will act as much to lead to non functional states as it does to lead to functional states, and as functional states are extremely rare, the probability of finding them remains extremely low. Is that clear? Now, in the following post, you add considerations about protein structure and function. They are not completely clear, but I will try to make my point just the same. Here is your argument:
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I’d say that under the Darwinian hypothesis that is what you’d expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case – similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive.
You seem to imply that, in some way, the relationship between structure and function can "help" the transition to a functional unrelated state. But the opposite is true. Let's go in order. The scenario is, as usual, the emergence of a new basic protein domain. As I have already discussed with you in the past, we must decide what is our starting sequence. The most obvious possibilities are: a) An existing, unrelated protein coding gene; b) An existing, unrelated pseudogene, no longer functional; c) An unrelated non coding sequence. Why do I insist on "unrelated"? Because otherwise we are no longer in the scenario of the emergence of a new basic protein domain. As I have explained many times, we have about 2000 superfamilies in the SCOP classification. Each of them is completely unrelated, at sequence level, to all the others, as can be easily verified. Each of them has a different sequence, different folding, different functions. And they appear at different times of natural history, although almost half of them are already present in LUCA. So, the emergence of a new superfamily at some time is really the emergence of a new functional island. The new functional protein will be, by definition, unrelated at the sequence level to anything functional that already existed. It will have a new folding, and new functions. Is that clear? Now, as usual, I will discuss NS using the following terminology. Please, humor me. 1) Negative NS: the process by which some new variation that reduces reproductive fitness can be eliminated. 2) Positive NS: the process by which some new variation that confers a reproductive advantage can expand in the population, and therefore increase its probabilistic resources (number of reproductions per unit time in the subset with that variation). Let's consider hypothesis a). Here, negative NS can only act against the possibility of getting to a new, unrelated sequence with a new function by RV. Indeed, the only effect of negative NS will be to keep the existing function, and eliminate all intermediaries where that function is lost or decreases. The final effect is that neutral mutations can change the sequence, but the function will remain the same, and so will the folding. That is what is expressed in the big bang theory of protein evolution, and explains very well the sequence variety in a same superfamily, while the function remains approximately the same. In this scenario, it is even harder to reach a new functional island, because negative NS will keep the variation within the boundaries of the existing functional island. What about positive NS? In this scenario, it can only have a limited role, maybe to tweak the existing function, improve it, or change the substrate affinity a little. Some known cases of microevolution, like nylonase, could well be explained in this context. Let's go now to cases b) and c). In both situations, the original sequence is not transcribed, or simply is not functional. Otherwise, we are still in case a). That certainly improves our condition. There is no longer the limitation of negative NS. Now we can walk in all directions, without any worry about an existing function or folding that must be preserved. Well, that's much better! But... in the end, now we are in the field of a pure random walk. All existing unrelated states are equiprobable. The probability of reaching a new functional island is now the same as in the purely random hypothesis.
Your suggestion that some privileged walks may exist between isolated functional islands is simply illogical. Why should that be so? The functional islands are completely separated at the sequence level; we know that. The SCOP classification proves that. They are also separated at the folding level: they fold differently. They also have different functions. Why in the universe should privileged pathways exist between them? What are you? An extreme theistic evolutionist, convinced that God has designed, in the Big Bang, a very unlikely universe where in the protein space, for no apparent reason, there are privileged walks between unrelated functional islands, so that darwinian evolution may occur? How credible is this "God supports Darwin" game? You, like anyone who finds the neo darwinian algorithm logically credible, should really answer these very simple questions. gpuccio
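gpuccio's closing claim - that absent any bias toward function the search collapses to blind sampling, so the hit probability is just |T|/|W| - can be illustrated with a toy Monte Carlo. This is my construction, not his, over a uniform toy space rather than a protein model:

import java.util.Random;

// Toy model: blind draws from a space W of 2^30 configurations; the
// "functional island" T is the 2^10 configurations whose low 20 bits
// are zero. The observed hit rate converges on |T|/|W| = 2^-20.
public class BlindSearch {
    public static void main(String[] args) {
        Random rng = new Random(1);
        long draws = 50_000_000L, hits = 0;
        for (long i = 0; i < draws; i++) {
            int config = rng.nextInt(1 << 30);
            if ((config & ((1 << 20) - 1)) == 0) hits++; // lands in T
        }
        System.out.printf("observed %.2e, expected |T|/|W| = %.2e%n",
                hits / (double) draws, Math.pow(2, -20));
    }
}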
Elizabeth:

The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That's why I'd say the onus is (if you want to reject the "null" of selectable precursors) to demonstrate that such precursors are very unlikely.

As explained, "selectable precursors" are not a "null": they are an alternative hypothesis (H1b, not H0). I reject the neo darwinian hypothesis H1b because it is completely unsupported by facts. I have no onus at all. It is unsupported by facts. Period. Show me the facts, and I will change my mind. Moreover, if intermediaries exist, it must be possible to find them in the lab, and to argue about what advantage they could have given. If your point is that: a) Precursors could have existed, but we have no way to find them; and b) Even if we found them, there is no way to understand if they could have given an advantage in "those organisms" and in "that environment", because we can know nothing of those organisms and that environment; then you are typically proposing a hypothesis that can never be falsified. I suppose Popper would say that it is not a scientific hypothesis.

We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives.

The rejection of an alternative is always an individual choice. I would be happy if ID and neodarwinism could coexist as "unrejected alternatives" in the current scientific scenario. That's not what is happening. Almost all scientists accept the unsupported theory, and fiercely reject the empirically supported one, using all possible methods to discredit it, fight it, and treat it as a non-scientific religious conspiracy, and so on. Not a good scenario at all, for human dignity.

I am merely arguing against the validity of the arguments for Design that you are presenting.

The point is not missed at all. It's my arguments for design that I defend, and nothing else. And I shall go on not using the capital letter, because I am arguing for some (non human) conscious intelligent being, not for God.

Actually I accept that the flaw here is not circularity.

Thank you for that.

It is if you assume that by rejecting the random-draw null you have rejected the non-design null,

which I have never done,

but as you claim that the inference rather is by "simple inference by analogy", I agree it is not circular.

I simply claim what I have always clearly stated for years, here and at TSZ.

On the other hand nor is it sound.

You are free to believe so, but I see no reason for such a statement. More in next post. gpuccio
Elizabeth: For me it's a pleasure to go on discussing with you, provided it does not become too exacting on my time :) To find some balance, I will as a rule answer only the new aspects in your posts, and take for granted what we have already clearly discussed, with the due differences between our points. So, some comments on your last post (#128):

Fair point, it was a lame example. My point is that positing selectable precursors seems at least no less credible than positing a completely unobserved entity. And at least we know where to look for the selectable precursors, and we know that Darwinian algorithms basically work. For example (I know UD proponents hate this demonstration, but it deserves a lot more credit than it's given), Lenski's AVIDA shows that even if you have functions that are all Irreducibly complex (require non-selectable precursors) they evolve, even when they require deleterious precursors. So we know that the principle works. My argument is not "therefore there must have been selectable precursors" but "therefore there is no reason to reject selectable precursors and infer design by default".

I will not comment on GAs. I have already done that, and in my past discussions I have clearly shown how even the GA you proposed on your blog is in no way an acceptable model of NS, and has no relevance to our discussion. As far as I remember, nobody at TSZ could refute my arguments about your GA. I invite you to read those past threads, if you want. Another point I want to stress is that design is never inferred "by default". I don't even understand what you mean by such a strange wording. Design is inferred because we observe dFSCI, and because we know that design is the only observed cause of dFSCI in all testable cases. That makes a design inference perfectly reasonable. There is no default here, only sound reasoning. Design is a very good explanation for any observed dFSCI. The fact that no other credible explanation is available makes design the best available explanation, certainly not "a default inference".

But gpuccio, this then becomes an argument-from-ignorance.

Absolutely not! This is simply neo darwinist propaganda. The scenario is very simple. Design is a credible explanation for dFSCI, because of specific positive empirical observations of the relationship between dFSCI and design. That is a very positive argument for the design inference, and ignorance has nothing to do with it. Then, there is the attempt of neo darwinists to explain dFSCI in biology by an algorithm based on RV + NS. Without discussing the details (more on NS in a moment), let's say that such an explanation has no credibility unless selectable intermediaries to all basic protein domains exist. Our empirical data offer at present no support for such an existence, and it is not even suggested by pure logical reasoning. Therefore, the situation is as follows: a) We reject H0 (pure random origin); b) We have a credible explanation, based on positive empirical observations (design): let's call it H1a; c) We have a suggested alternative explanation, unsupported by any empirical observation (the neodarwinian hypothesis): let's call it H1b. Now, it is obvious to me that H1a is far better than H1b. So, I accept it as the best explanation. You may disagree, but the fact that I reject H1b as a credible explanation because it is unsupported by known facts is in no way an "argument from ignorance": it is simply sound scientific reasoning. More in next post. gpuccio
PS: Just to highlight the key transformation:
X = – log2[10^120 · phiS(T) · P(T|H)]

where log(p*q*r) = log(p) + log(q) + log(r), 10^120 ~ 2^398 and log(1/p) = – log(p), so:

Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phiS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, we have transformed the probability into an information metric; which is far more tractable and in part can be directly observed in the informational macromolecules of life, which can easily enough be rendered into bits. Next, we get the thresholds and transform further into SPECIFIC, functional information by use of a dummy variable keyed to observation of functional specificity:
chi is a metric of bits from a zone of interest, beyond a threshold of “sufficient complexity to not plausibly be the result of chance,” (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] [--> the atomic interactions threshold] and (b) as we can define and introduce a dummy variable for specificity, S [--> This injects specificity per observations . . . ], where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a “complex enough” threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0 [--> notice this, those who want to play talking point games on the value of S], i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest.
I trust this makes the matter clear enough for those who want to understand. KF kairosfocus
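A quick numerical check of the reduction quoted above, using hypothetical values (P(T|H) = 2^-520, so Ip = 520 bits; phiS(T) = 2^80, so K2 = 80), confirms that the direct computation and the reduced form agree:

// Verify the log reduction with hypothetical numbers:
// p = 2^-520 (so Ip = 520 bits), D2 = phiS(T) = 2^80 (so K2 = 80).
public class LogReductionCheck {
    public static void main(String[] args) {
        double p = Math.pow(2, -520);
        double D2 = Math.pow(2, 80);
        double chiDirect = -(Math.log(Math.pow(2, 398) * D2 * p) / Math.log(2));
        double chiReduced = 520 - (398 + 80); // Ip - (398 + K2)
        System.out.println(chiDirect + " ~ " + chiReduced); // both 42 bits
    }
}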
Dr Liddle: Perhaps it has not dawned on you that before you can try to have a discussion with me on merits of points, you need to resolve the problem of hosting and denying that you have harboured slander against me. Until you resolve this matter positively, I am forced to assume that you are an agenda driven ideologue who has no respect for truth, accuracy or fairness, but will push any and all persuasive talking points to gain an advantage. Regardless of actual merit. Thus, until the matter is resolved, you have no credibility. Period. Beyond that, onlookers and other participants, the above clip I presented suffices to show that the solution to the probability challenge that EL seems to want to front is that it is in fact part of an information measure, but not drawn out. The solution to this -- as I showed above by clipping a longstanding log reduction -- is to simply move the equation one step forward by extracting the - log2 (p) to yield that information metric. (I must assume that Dr Liddle is able to check up that this is standard fare for measuring information. Even if she is not, I assure you that Connor, Taub and Schilling and a lot of others all the way back to Shannon et al are there to help the serious inquirer who actually wants to discuss matters on merits.) Informational measures, FYI, automatically take into account the issue of chance based hypotheses of all kinds, on getting to observed information that is functional. As was further shown, the informational metric Dembski proposed turns out to be a threshold metric of info beyond a credible limit for sufficient specific complexity to be not credibly chance and necessity by ANY mechanism. Remember, I reduced the matter to atoms changing state every chemical reaction time, which so long as it is blind, by chance and/or mechanical necessity, will fill the bill. There is absolutely nothing special about a cluster of organic chemicals in a living cell, that would make them suddenly not behave in accordance with what atoms do under chance factors and mechanical necessity. Where also, if you want to fuss and bother about the alleged special case of life -- I thought vitalism was supposed to be dead -- we can easily see that the whole Darwinian mechanism for alleged design of complexity in life forms is:
CHANCE VARIATION (CV) + DIFFERENTIAL REPRODUCTIVE SUCCESS (DRS) --> DESCENT WITH MODIFICATION (DWM)
This can be analysed on an informational view. DRS, what is commonly called "Natural Selection" (which misleadingly suggests design powers), is actually a subtracter of varieties, through extinction of the inferior varieties. That is, it is NOT a source of added information, by direct implication. The only remaining possible source of added information -- and notice we are here begging the much bigger question of getting to a self replicating life form, which itself is enough to put design at the table and thus shifts onward discussion decisively -- is chance variations, triggered by anything from a radiation-damaged water molecule reacting with any neighbouring molecule, on up. (And that is the primary mechanism we studied in radiation physics class, as water is the commonest molecule in the body. Suffice to say that the context for this was radiation sickness and cancer. Not exactly promising as the source for adding functionally specific complex info.) So, we have high contingency bearing abundant FSCO/I to explain, and the alternatives sitting at the table are chance variations such as by radiation etc, and design by someone able to do the equivalent of a molecular nanotech lab some generations beyond Venter et al, and maybe to use targeted viruses or the equivalent as means of injection. With the sort of exceeding complexity and specificity involved in the associated digitally coded nanotech implemented info, the reasonable man would bet on design. And let us zoom in on the info beyond a threshold calc for a moment, to see how the thresholds are set:
Chi = – log2[10^120 · phiS(T) · P(T|H)]  [--> "Chi" renders the Greek letter chi, and "phiS" the Greek letter phi]

xx: To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:

Ip = – log p, in bits if the base is 2.

That is where the now familiar unit, the bit, comes from. Where we may observe from, say — as just one of many examples of a standard result — Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1)
xxi: So, since 10^120 ~ 2^398 [--> we just took out the number of observations that can happen in the observed cosmos through binary events], we may “boil down” the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phiS(T)
Chi = Ip – (398 + K2), where now: log2(D2) = K2

That is, chi is a metric of bits from a zone of interest, beyond a threshold of “sufficient complexity to not plausibly be the result of chance,” (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] [--> the atomic interactions threshold] and (b) as we can define and introduce a dummy variable for specificity, S [--> This injects specificity per observations . . . ], where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a “complex enough” threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0 [--> notice this, those who want to play talking point games on the value of S], i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. S goes to 1 when we have objective grounds — to be explained case by case — to assign that value. That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list.
That should be clear enough, save to those locked into ideological blindness, for whom (after literally years of patient correction ignored, strawmannised and twisted into occasions for slander) I have now lost all further patience. If ideologues play drumbeat talking point games beyond here (especially if they harbour, enable or carry out slanders), I simply note them down as having no respect for accuracy, logic, evidence, truth and fairness. And if they cannot accept that information is measured by taking a negative log probability, so that a probability is best understood in the guise of the relevant info metric, then that is a sign that we are just looking at handy drumbeat talking points, not serious discussion by people able to deal with the matter on its merits. On the charitable view. (I do not wish to elaborate the alternative view.) Good day. KF kairosfocus
Eric:
Oh, stop it. We all know what this means in the context of the design debate
Eric, you asked me to state my position, which I did. I like to be precise, and I don't like to be misleading, so I prefer to choose my own terms. "Natural" sometimes means "not artificial" or "not designed", and sometimes it means "not-supernatural". So I avoided the term completely when I tried to convey my position. And I seem to have successfully clarified my position, so it seems to have paid off :)
And in the case of, say, machines, I presume you would also acknowledge that this applies generally and there is not some special exclusionary category reserved for machines made of biomolecules just because they happen to be organic molecules rather than inorganic molecules.
Assuming you mean the output of machines, as opposed to machines as the putative artefact, yes. Actually, either way, yes. Eric, you will have to forgive my pedantic insistence on definitions - I do think that a huge amount of the heat in the ID debate results from people talking past each other, and using the same terms to mean different things! So at the risk of irritating you further, I will err on the pedantic side. I'd say that there is a characteristic quality of things output by processes characterised by deep decision-trees. These include the output of human design, machine design, and evolutionary processes. I think that quality is what the ID project has tended to assume must come from an intentional designing agent. I disagree. I don't think the pattern is the mark of intention, I think the pattern is the mark of iterative decision-trees. I think that Intention may also be discernable in its products, but I think what we'd be looking for would be different.
This is a strange comment. The calculations put forth by design proponents typically give every reasonable conceivable edge to natural processes, often including the entire particle resources of the known universe, the fastest possible reaction rates known, etc. It doesn’t matter whether it is specifically calculable to the nth degree. Every opportunity is provided to natural processes and they still are pathetically impotent under any rational scenario to even begin to construct the kinds of systems we see in life.
You have missed my point (not surprisingly, as I didn't explicitly make it in this thread, although I have been making it over at TSZ). I am not concerned with the Universal Probability Bound. I'd be happier with a much less stringent alpha - even physicists, after all, only look for 5 sigma, and in my field we publish at 2. Nor am I concerned about Specification - I think it's problematic, but I'm happy with it in principle. My point is that key to the chi calculation is the parameter P(T|H), the Probability of the [specified] Target, given the null Hypothesis, which is "the relevant Chance Hypothesis taking into account Darwinian and other Material mechanisms". That's what I'm saying is not only non-calculable, but such that you'd have to know the answer to your question before you could calculate it. So no, the formula doesn't give "every conceivable edge" to Darwinian evolution. It simply outputs an answer for any prior you might have as to the probability of Darwinian evolution. If that is low, you'll conclude Design; if it's high, you won't. In other words, I suggest, null hypothesis testing in this form is a completely inappropriate and useless way of inferring Design. Not wrong, just useless. GIGO.
Finally, it is strange that you would say treating “non-design” as a null hypothesis doesn’t work. Would you prefer that we treat design as the null hypothesis?
Absolutely not. That wouldn't work either. I'd say that null hypothesis testing simply won't give you the answer to the question you are asking. I'm saying it's the wrong tool for the job. It can't do it, unless you can precisely define the probability distribution under your null. So it will work to reject the null that a coin is fair. It won't work to reject the null that a black monolith was not-designed. Elizabeth B Liddle
Elizabeth @87:
I’m not sure what you mean by “purely natural processes”
Oh, stop it. We all know what this means in the context of the design debate. It means, without more, the regular workings of the known laws of the universe – gravity, electromagnetism, the strong/weak nuclear forces, and their sub-forces (chemistry, biochemistry, etc.). Or, to really simplify things for purposes of the design discussion, you can just think of it as processes that are not guided, directed, influenced or controlled by an intelligent being, i.e., not influenced by a designer.
I do think it is perfectly possible to determine that an event was due to a designer without defining and calculing the probability of it occurring by some non-design means. That is what archaeologists and forensic scientists do, for instance.
Good. I think that is a critical point. And in the case of, say, machines, I presume you would also acknowledge that this applies generally and there is not some special exclusionary category reserved for machines made of biomolecules just because they happen to be organic molecules rather than inorganic molecules.
What I am saying is much narrower than that, and concerns Dembski’s concept of “CSI” or “chi” for which he gives a mathematical formula based on the principle of Fisherian null hypothesis testing. That formula contains the parameter p(T|H), which is the probability of observing the Target under the null hypothesis, which he defines as “the relevant chance hypothesis, including Darwinian and other material mechanisms”. I am saying that that is not calculable, and that treating “non-design” as an omnibus null doesn’t work, and that therefore the concept of chi doesn’t work as a method of detecting design.
This is a strange comment. The calculations put forth by design proponents typically give every reasonable conceivable edge to natural processes, often including the entire particle resources of the known universe, the fastest possible reaction rates known, etc. It doesn’t matter whether it is specifically calculable to the nth degree. Every opportunity is provided to natural processes and they still are pathetically impotent under any rational scenario to even begin to construct the kinds of systems we see in life. Furthermore, for decades now, the more we learn the more stringent the calculations become, not less. There is absolutely no rational way anyone can look at the calculations and conclude that a reasonable inference cannot be drawn. To say we can’t do a complete, entirely accurate calculation – and therefore, can’t draw any conclusion – is to hide behind a fig leaf and to demand a level of omniscience of design proponents that is never demanded from any other field. Finally, it is strange that you would say treating “non-design” as a null hypothesis doesn’t work. Would you prefer that we treat design as the null hypothesis? That is probably what we should do, given that virtually everyone acknowledges living systems look designed. Certainly a good argument can be made for considering living systems to be designed unless someone can affirmatively demonstrate that the system could reasonably have come about through purely natural processes. Eric Anderson
I'm satisfied. Yawn. CentralScrutinizer
And Elizabeth doesn't understand the implications of:
can you explain how you compute P(T|H) where, H, to quote Dembski 2005, is “the relevant chance hypothesis that takes into account Darwinian and other material mechanisms”?
LoL! Such a hypothesis doesn't exist. That is what I have been telling you. It is up to you guys to tell us what your position's hypotheses are. Joe
Slightly. As CentralScrutinizer rightly points out, self-replication is not itself sufficient – it has to be self-replication with heritable variance in reproductive success.
And that variance has to be happenstance in order for the process to be darwinian. Joe
kairosfocus: can you explain how you compute P(T|H) where, H, to quote Dembski 2005, is "the relevant chance hypothesis that takes into account Darwinian and other material mechanisms"? Elizabeth B Liddle
computerist
So as long as you have self-replication, evolution is inevitable. Whether subsequent mutations are “good” or “bad” or whatever, evolution continues. Whatever the outcome, evolution continues as long as self-replication prevails. Case closed. This is what I understand to be the core underlying position of Dr. Liddle. Am I wrong?
Slightly. As CentralScrutinizer rightly points out, self-replication is not itself sufficient - it has to be self-replication with heritable variance in reproductive success. If that is present, evolution is not inevitable but highly likely for the simple and logical reason that if you have self-replicators replicating with heritable variance in reproductive success in the current environment, the more successful variants will tend to become most prevalent. So what is near-inevitable, under those conditions, is that populations will adapt to their current environment. If that environment changes, they may or may not be able to readapt fast enough not to go extinct. Elizabeth B Liddle
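The logic of that last step can be put in a toy model (numbers mine, purely illustrative): two self-replicating variants with heritable differences in reproductive success, the rarer one carrying a 10% advantage, whose share of the population then rises generation by generation.

// Toy replicator dynamics: variant B starts rare but replicates 10%
// more successfully per generation; its population share rises toward
// fixation, which is the "adapt to the current environment" claim above.
public class Replicators {
    public static void main(String[] args) {
        double a = 1000.0, b = 10.0;
        for (int gen = 1; gen <= 100; gen++) {
            a *= 1.0;   // variant A: baseline reproductive success
            b *= 1.1;   // variant B: heritable 10% advantage
            if (gen % 25 == 0)
                System.out.printf("gen %3d: share of B = %.3f%n", gen, b / (a + b));
        }
    }
}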
F/N: Dr Liddle, here is the summary that shows how EVERY CHEMICAL TIME EVENT OF EVERY ONE OF THE 10^57 ATOMS OF OUR SOLAR SYSTEM IS TAKEN INTO ACCOUNT IN THE 500 bit FSCO/I LIMIT, and onwards, every Planck time event of the 10^80 atoms of our observed cosmos in the 1,000 bit limit. That is, every atom every 10^-14 s is deemed an observer in the first case, and every atom every 10^-45 s in the second. Where the issue in the first instance is to search a space such that even so generous a limit on probability stands as picking a straw-sized sample blindly from a cubical haystack 1,000 LY on the side -- as thick as our galaxy -- superposed on it. The second case makes the whole cosmos be swallowed up in the haystack. Where we can easily see that the firm result of sampling theory is that only the vast and typical bulk of such a stack -- straw, not needles -- would be picked up by ANY process tracing to blind chance and mechanical necessity. And similarly, it is quite evident, save to those who are committed not to see this, that the requisites of functionally specific complex organisation and associated information -- as are manifest in English ASCII text, in computer ASCII codes and in the genomic DNA codes alike -- will manifestly sharply constrict the subset T of the possible arrangements W that will be relevantly functional. This easily explains why Bill Gates does not hire monkeys to code his software by random typing, why random document generation exercises have failed to produce any functional text of relevant length (72+ ASCII characters), and why no case of chance-and-necessity-driven evolution of relevantly complex biological function has ever actually been observed. Similarly, it is obvious why the ONLY empirically observed source of FSCO/I has been design. Thus, we are epistemically entitled to infer that the best causal explanation of FSCO/I is design, and that it is a highly reliable sign of design as cause. Period. Here is the excerpt, which has been repeatedly drawn to your attention and has been repeatedly ignored or distorted into strawman tactic pretzels: ________________ >> xix: Later on (2005), Dembski provided a slightly more complex formula, that we can quote and simplify, showing that it boils down to a "bits from a zone of interest [[in a wider field of possibilities]] beyond a reasonable threshold of complexity" metric: χ = – log2[10^120 · φ_S(T) · P(T|H)]. --> χ is "chi" and φ is "phi" xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from, say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2: Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message m_k of probability p_k; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]].
Then we shall say, by way of definition of the term information, that the system has communicated an amount of information I_k given by I_k =(def) log_2 (1/p_k) (13.2-1) xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p): Chi = – log2(2^398 * D2 * p), in bits, where D2 = φ_S(T); so Chi = Ip – (398 + K2), where now log2(D2) = K2. That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)]] and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T: Chi = Ip*S – 500, in bits beyond a "complex enough" threshold. NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T]], Chi will -- unsurprisingly -- be positive. Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also a lottery is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable.) An obvious example of such a zone T is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.)
So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.) xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases: Using Durston’s Fits values -- functionally specific bits -- from his Table 1, to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold: RecA: 242 AA, 832 fits, Chi: 332 bits beyond; SecY: 342 AA, 688 fits, Chi: 188 bits beyond; Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond. xxiii: And, this raises the controversial possibility that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism. >> _______________ I trust this should suffice for record. Good day, madam. GEM of TKI kairosfocus
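The arithmetic of the quoted Chi_500 application can be checked directly; a minimal Python sketch, taking the Fits values quoted above from Durston's Table 1 as given and setting S = 1:

```python
def chi_500(fits: float, s: int = 1) -> float:
    """Simplified metric from the excerpt: Chi = Ip*S - 500 bits."""
    return fits * s - 500

for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: {chi_500(fits):+.0f} bits beyond the threshold")
# RecA: +332, SecY: +188, Corona S2: +785 -- the figures quoted above.
```

Note that the heavy lifting is done by the Fits estimates themselves; once those are accepted, the metric is just a subtraction against the 500-bit threshold.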
Dr Liddle: I thought it wise to first give you a chance to show a better angle than I have seen recently. Unfortunately, the tone and tactics above tell me otherwise; for instance, here is the last straw that leads me to break silence:
[EL, 95:] If you didn’t include Darwinian processes and natural mechanisms (as Dembski says you must) then you can’t reject those processes and mechanisms. So how are you computing your null of “random noise”? That is what I am calling the eleP(T|H)ant in the room.
Sorry, after this from the previous link:
[EL, at TSZ:] >>Kairosfocus, this is outrageous. Nobody here, to my knowledge, has suggested that you are a Nazi, and I certainly have not.>> [In denial of responsibility for hosting the following outrage without correction . . . ] [OM at TSZ:] >>I would like to note however that both the Nazis and KF think that homosexuals are immoral and/or deviants. So draw your own conclusion as to who’ll be marching who round what camp if they get their way.>> [in company of AF and RTH, who did not correct this outrage] --> I need to add details from my for record, that:
Sometimes, it is needful to drive home a point, even when it is on an unpleasant matter and deals with uncivil conduct . . . . I think we can take it as a given that when one is characterised in the formula “both the Nazis and X think that . . . ” one is being compared to Nazis. In a way obviously intended to taint one with the justly deserved odium that attaches to Nazism. In short, the utterly offensive — and demonstrably unwarranted — suggestion is being made that one is a Nazi. That, sirs, is slander . . . . My having a principled objection to the agenda to homosexualise marriage in our day, and my wider concern that on significant evidence homosexual behaviour is disordered, damaging to the participant and potentially hazardous to society at large — which BTW is not even a part of the debates over design theory — are compared to the views of the Nazis. The insinuation is blatant, save to the willfully blind: an implicit accusation of hatred rather than of principled concern, a concern shared with a great many people, including some of the most distinguished across the ages and down to today (BTW, cf. here for some thoughts and concerns that are too often ignored or suppressed today). That is, principled concern is reduced to a loathsome caricature by invidious comparison with Nazis, in order to taint without good reason. And, to create a toxic, polarised atmosphere filled with the smoke of burning, slander-soaked strawmen, so that no reasonable and serious discussion of a serious concern can happen. As though only Nazis, and this supposedly hateful bigot now being pushed into the same boat as the Nazis, could possibly have such a view. Sorry, TSZ management, this enabling of Alinskyite toxic rhetoric is not good enough, not by a long shot.
. . . I can no longer afford to take a lenient view of your attempted clever distractions and dismissals. I will remark briefly on the above. You know full well, or SHOULD know, Dr Liddle, that the 2005 Dembski expression was drawn out, simplified and applied to biological systems here at UD some years ago now. To try to pretend otherwise -- as you do in the excerpt I have just made -- is, at this stage, a willfully continued misrepresentation of easily accessed facts. Facts I will link now and intend to excerpt from a summary at IOSE in a moment. That is, Dr Liddle, we see here, with all due respect, a pattern of disregard for duties of care to accuracy, much less truth and fairness. Simply not good enough, and I for one am finished with leniency on such. Good day, madam. GEM of TKI kairosfocus
Well, Lizzie, you should have had an infinite supply of popcorn. :razz: Or better yet, seeing that infinities exist in the mind, you could just imagine the popcorn too. :roll: However none of that changes the fact that GAs and EAs are examples of Intelligent Design Evolution, ie evolution by design. And always will be. Joe
So as long as you have self-replication, evolution is inevitable. Whether subsequent mutations are "good" or "bad" or whatever, evolution continues. Whatever the outcome, evolution continues as long as self-replication prevails. Case closed. This is what I understand to be the core underlying position of Dr. Liddle. Am I wrong? computerist
I'm out of popcorn. I ate it all, watching some thread about infinite sets :) Elizabeth B Liddle
Not sure what you mean here, but GA’s are certainly “analogous” to “Darwin’s idea”! That’s why they are called “GAs” or “evolutionary algorithms”.
LoL! GAs are NOT related to darwinian evolution because unlike darwinian evolution both GAs and EAs have at least one goal. They are designed to solve specific problems. As I said, Lizzie does NOT understand what darwinian evolution entails. This is going to be entertaining. On one hand we have Lizzie, with absolutely no clue as to what darwinian evolution entails nor what is being debated. And on the other hand we have IDists who do not seem interested in correcting any of that. So all we have are people talking past each other because there isn't any common understanding. Break out the popcorn! Joe
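For readers unfamiliar with the term, a minimal genetic algorithm looks like the sketch below (the classic "weasel" toy phrase; population size, selection fraction and mutation rate are invented for illustration). Note the explicit fitness function scoring closeness to a fixed target string -- the "goal" at issue in this exchange:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # The "goal": count of characters matching the target phrase.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.02) -> str:
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

# Random initial population of 100 strings.
pop = ["".join(random.choices(CHARS, k=len(TARGET))) for _ in range(100)]
for gen in range(2000):
    pop.sort(key=fitness, reverse=True)
    if pop[0] == TARGET:
        break
    # Breed the next generation from the 20 fittest strings.
    pop = [mutate(random.choice(pop[:20])) for _ in range(100)]
print(gen, pop[0])
```

Whether the selection step here is a fair analogue of natural selection, or an instance of intelligently supplied foresight, is exactly what the two sides of this thread dispute.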
oops, the above was to CentralScrutinizer, as is what is below:
and how it relates to your analogy of “Darwin’s idea” being analogous to a GA
Not sure what you mean here, but GA's are certainly "analogous" to "Darwin's idea"! That's why they are called "GAs" or "evolutionary algorithms". Not all GAs use exactly the same principle, but many do. Elizabeth B Liddle
There is an important distinction between that sort of self-replicator and a self-replicator that does not have the properties of “heritable variance in reproductive success in the current environment.”
Yes indeed. Which is why I usually include the full monty. You caught a rare occasion when I took a short-cut.
I’m wondering if you get the full impact of that distinction with regards to the fine tuning of the universe.
Probably not. I certainly accept that for life to begin (i.e. in my view for Darwinian-capable self-replicators to emerge from not-such) you need heavy atoms, including carbon. I don't know, once you've got those atoms, what else in the early universe would make a difference to whether Darwinian-capable self-replicators or mere common-or-garden self-replicators would emerge, because, of course, we don't yet know how they emerged (if they did :)) But I guess it might turn out that something is absolutely critical and makes the difference. Elizabeth B Liddle
gpuccio:
Here I lose you. We certainly can observe a lot of apples which have mass (as you say: “apples often have mass”). So, we have no need to demonstrate that for each apple. But no basic protein domain has been shown to have selectable intermediaries, so I can’t see why we should suppose that some have them. I can’t see the analogy with apples!
Fair point, it was a lame example. My point is that positing selectable precursors seems at least no less credible than positing a completely unobserved entity. And at least we know where to look for the selectable precursors, and we know that Darwinian algorithms basically work. For example (I know UD proponents hate this demonstration, but it deserves a lot more credit than it's given), Lenski's AVIDA shows that even if you have functions that are all Irreducibly complex (require non-selectable precursors) they evolve, even when they require deleterious precursors. So we know that the principle works. My argument is not "therefore there must have been selectable precursors" but "therefore there is no reason to reject selectable precursors and infer design by default".
I am not thinking “that for some reason there were no selectable intermediaries for a given protein“. I am stating that no selectable intermediaries are known for any basic protein domain. It’s quite different, don’t you agree?
Yes, but it's what I meant. Sorry I was unclear. But gpuccio, this then becomes an argument-from-ignorance. The basic protein domains are extremely ancient. How would you test whether any precursor was selectable in those organisms in that environment? That's why I'd say the onus (if you want to reject the "null" of selectable precursors) is on you to demonstrate that such precursors are very unlikely. I'm not asking you to believe they existed. I'm simply saying that rejecting that hypothesis is not warranted. If an astronomer detects a perturbation in the orbit of some planet that might, or might not, indicate an unknown object, we do not reject the hypothesis and infer Intelligent Perturbation just because we have not found that object. We allow both Intelligent Perturbation and Unknown Object to remain as unrejected alternatives. Again, in case the point is missed: I am not arguing against the hypothesis of design. Actually let me call that Design, because "design" could denote human design, which I certainly would not argue against! I am merely arguing against the validity of the arguments for Design that you are presenting.
Inferring (not deducing!) design from dFSCI is not circular, and is perfectly correct. No more on that point.
Actually I accept that the flaw here is not circularity. It would be circular if you assumed that by rejecting the random-draw null you had rejected the non-design null; but as you claim that the inference is rather a "simple inference by analogy", I agree it is not circular. On the other hand, nor is it sound.
No, you seem not to understand the point. the point is very simple: all unrelated states have the same probability of being reached.
Yes, I know that is your point. I'm saying that is an unwarranted assumption. If that's your null, then under that null the probability distribution will indeed be not much different from a random walk. But you can't then reject the hypothesis that all unrelated states do NOT have the same probability of being reached - that there are viable evolutionary pathways. That would be assuming your conclusion, and yet again ignoring the eleP(T|H)ant!
If proteins which confer some advantage exist in the protein sequence space, there is absolutely no reason that their distribution in the space “favors” unrelated functional proteins instead of unrelated non functional proteins.
Are you saying that there is no reason to expect any correspondence between protein sequence and protein properties? If so, by what reasoning? I'd say that under the Darwinian hypothesis that is what you'd expect. Sequences for which a slight variation results in a similar phenotype will tend to be selected simply for that reason. Variants who produce offspring with similar fitness will leave more offspring than variants for whom the fitness of the offspring is more of a crapshoot. And in any case we know it is the case - similar genotypes tend to produce similar phenotypes. If sequences were as brittle as you suggest, few of us would be alive. Anyway, feel free to call a halt if you find my posts are causing hair loss :) But it's good to talk, and thanks. Cheers Lizzie Elizabeth B Liddle
Liddle, ... and how it relates to your analogy of "Darwin's idea" being analogous to a GA. (Not to mention you not being impressed by the fine-tuning argument.) CentralScrutinizer
Liddle: Self-replicators that replicate with heritable variance in reproductive success in the current environment.
There is an important distinction between that sort of self-replicator and a self-replicator that does not have the properties of "heritable variance in reproductive success in the current environment." I'm wondering if you get the full impact of that distinction with regards to the fine tuning of the universe. CentralScrutinizer
Upright Biped: I have restored your authoring rights at TSZ (got lost in the hack), so feel free to start an OP there if you would like. Elizabeth B Liddle
CentralScrutinizer:
No, not just self-replicators. Self-replicators that produce better self-replicators.
As I usually put it, laboriously, but truncated on this occasion: Self-replicators that replicate with heritable variance in reproductive success in the current environment. Elizabeth B Liddle
Elizabeth: I appreciate your goodwill in trying to understand my points. I think at least some progress has been made. As you know, I have no intention to convince you, so just a few final (I hope :) ) considerations will do: OK. If your null assumes “no selectable intermediaries” then rejecting that null is not rejecting Darwinian processes. It’s rejecting a process that did not involve selection. OK. We do not need to separately postulate mass for every apple before attributing its fall to gravity. We cannot reject the null that this apple fell to earth because it had mass, simply because we have not been able to ascertain that it did. What we can do is to say that apples often have mass, and when they do, they fall. Here I lose you. We certainly can observe a lot of apples which have mass (as you say: "apples often have mass"). So, we have no need to demonstrate that for each apple. But no basic protein domain has been shown to have selectable intermediaries, so I can't see why we should suppose that some have them. I can't see the analogy with apples! We can say: many things have selectable intermediaries, and when they do, they can evolve. I don't know to what you are referring here. Not proteins, I suppose. Unless therefore we have good reason to think that for some reason there were no selectable intermediaries for a given protein, we have no justification for rejecting that hypothesis, and accepting, by default, a hypothesis (design) that is equally without independent support. Except for the fact that no known basic protein domain has been shown to have selectable intermediaries. I am not thinking "that for some reason there were no selectable intermediaries for a given protein". I am stating that no selectable intermediaries are known for any basic protein domain. It's quite different, don't you agree? We cannot deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO. Inferring (not deducing!) design from dFSCI is not circular, and is perfectly correct. No more on that point. Inferring or deducing the existence of intermediaries from dFSCI is simply senseless. I really can't see how you can compare two concepts so different. This argument would have merit if you could also show that a random walk through protein space would not go via a great many proteins that confer some advantage at some time. No, you seem not to understand the point. The point is very simple: all unrelated states have the same probability of being reached. If proteins which confer some advantage exist in the protein sequence space, there is absolutely no reason that their distribution in the space "favors" unrelated functional proteins instead of unrelated non-functional proteins. So, the existence of proteins which "confer some advantage", be they 1, 10, 1000 or many more, does not change the probability distribution. All unrelated states have the same probability of being reached, because all of them have the same probability of having some protein that "confers some advantage" in the walk. So, the probability of reaching some functional state remains extremely low, whatever the number of sequences that "confer some advantage". That should answer all other following observations. Have a good time! :) gpuccio
...self-replicators that have the power to "climb Mount Improbable", if you will. CentralScrutinizer
Liddle: However, it’s possible that foresight was required to set up a universe that would bring forth self-replicators!
No, not just self-replicators. Self-replicators that produce better self-replicators. CentralScrutinizer
... to continue: If any part of it resembles a GA, as you have implied, the whole of it necessarily does. Which means the entire universe is a goal-oriented GA that is "trying to find self-replicators that produce better self-replicators." This is your implication, whether you realize it or not. CentralScrutinizer
Liddle: I agree that the laws of the universe would have to be such that self-replicators would form. Once that’s done, then features that promote survival and better self-replicators will be preferentially selected (that’s Darwin’s idea).
Foul. For "Darwin's idea" to be analogous to a GA, as you have implied, the laws of the universe would have to exist such that self-replicators would form that promote survival and produce preferentially-selected "better" self-replicators. Self-replicators could be plausibly envisioned that did nothing but self-replicate without leading to "better" (what do you mean by "better"?) self-replicators. But what you assert is that the laws of nature are such that, not only do they lead to self-replicators, but they led to self-replicators of such a nature that they produce over time "better" self-replicators. You say, "once that's done", as if the nature of the self-replicators is now divorced from the laws that led to them. Not so. You don't get to "something new" once self-replicators have come to exist. It's all one process. And if any part of it resembles a GA, as you have implied, the whole of it necessarily does. CentralScrutinizer
I agree that the laws of the universe would have to be such that self-replicators would form. Once that’s done, then features that promote survival and better self-replicators will be preferentially selected (that’s Darwin’s idea).
No, darwin's idea was design without a designer. Ernst Mayr, one of the founders of the modern synthesis, goes over the fact that the variation has to be unguided, ie happenstance. Also what is "better" is all relative.
However, it’s possible that foresight was required to set up a universe that would bring forth self-replicators!
That would mean darwinian evolution wouldn't be the inference. It would only be part of the picture. The main thesis would be that those self-replicators were designed to evolve into living organisms and eventually into beings capable of scientific discovery.
This, I proposed would evolve via Darwinian mechanisms once a simple non-inert-symbolic-semiotic-whatsits self-replicating population had got going.
That is incorrect as you do not understand what darwinian mechanisms entail. Joe
EL, Instead of misunderstanding the second man on the bus, you should show him your proposition in #110 and ask what he thinks. Really. :) (by the way, the second man on the bus is Sterelny/Griffiths 1999) Upright BiPed
Awful typo above in my post 106, gpuccio:
We deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO.
should of course read:
We cannot deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO.
oops Elizabeth B Liddle
And good luck to you, Upright Biped. You may well be correct. Obviously it looks slightly different from here :) But that's the way it goes with communication, as pointed out so wisely by your man on the bus. Cheers Lizzie Elizabeth B Liddle
Dr Liddle, Man on the bus says to the other man “I do not suggest that inexorable forces can give rise to the relationships required for information to exist”. The other man replies “Without the potential of miscommunication, information is not possible”. :) I think there is a conceptual problem you have yet to understand. Good luck to you. Upright BiPed
CentralScrutinizer @106
Right. For the “blind watchmaker evolution” to be analogous to a GA, the laws of the universe would have to have been designed on purpose to favor the creation of functional features that have survival value. Of course, there goes the blind watchmaker right out the door in favor of the goal oriented designer.
I agree that the laws of the universe would have to be such that self-replicators would form. Once that's done, then features that promote survival and better self-replicators will be preferentially selected (that's Darwin's idea). The "blind watchmaker" refers to the idea that once you have self-replicators, adaptation will occur semi-automatically, without "foresight". However, it's possible that foresight was required to set up a universe that would bring forth self-replicators! Hence the "fine-tuning" argument. I don't think myself it has a great deal of force, but I think it's a better argument than argument-from-biology. After all, if the whole universe is designed, why would some bits look more designed than others? Elizabeth B Liddle
gpuccio:
b) The starting hypothesis is that no selectable intermediaries exist, and then the whole walk happens as the consequence of random variation. In that case, each unrelated state has the same probability of being reached (indeed, lower than that of any related state). So, the probability of a random walk reaching the new functional state is at most the ratio between functional space and search space, where the functional space is the number of sequences of that length that exhibit the function, and the search space is the number of possible sequences of that length. That is the concept of dFSI. A good approximation of the dFSI of protein families can be reached by the Durston method. Taking an appropriate threshold of complexity for the biological system on our planet, which IMO can be 150 bits, the null hypothesis of a random origin of the new sequence can easily be rejected.
Yes. Apologies for asking you to repeat this. It wasn't that I had forgotten, just that I wanted to make sure that this was what you meant. OK. If your null assumes "no selectable intermediaries" then rejecting that null is not rejecting Darwinian processes. It's rejecting a process that did not involve selection. With which I entirely agree. I am absolutely sure that proteins did not arise without selectable intermediaries. But having rejected that null, you cannot then extrapolate to rejecting the null of "using selectable intermediaries" because that is not included in your null.
c) If and when naturally selectable intermediaries are found, the reasoning can be repeated, rejecting or accepting the null random hypothesis for each random walk, from A to B and from B to C, where B is a naturally selectable intermediary between A and C. If no selectable intermediary is known, we still have to apply the null random hypothesis to the full walk from A to C.
No. You merely reject the null of "a known selectable intermediary". We do not need to separately postulate mass for every apple before attributing its fall to gravity. We cannot reject the null that this apple fell to earth because it had mass, simply because we have not been able to ascertain that it did. What we can do is to say that apples often have mass, and when they do, they fall. That is an extreme example, but the logic is the same. We do not have to say: this protein evolved because it had selectable intermediaries. We can say: many things have selectable intermediaries, and when they do, they can evolve. Unless therefore we have good reason to think that for some reason there were no selectable intermediaries for a given protein, we have no justification for rejecting that hypothesis, and accepting, by default, a hypothesis (design) that is equally without independent support. However, the Darwinian hypothesis actually predicts that there were selectable intermediaries. This means that we can look for evidence of them. We deduce their existence because of dFSCO, because that would be as circular as deducing design from dFSCO. But we can seek independent evidence, as we can for design. And if we then use some kind of Bayesian inference, we can evaluate the relative credibility of the two hypotheses. Interestingly, this is what Keynes seems to have been getting at!
As explained, I am rejecting a Fisherian null, and then examining the alternative explanations. My priors certainly differ from yours, but they have nothing to do with my Fisherian reasoning (thank God!).
Yes, I know. And I am saying that in doing so you have only rejected the null of random walk. You have not rejected the null of selectable intermediaries. You do address the latter, but using a different inferential method. The null rejection doesn't do the job. As for the other inferential method, which you now flesh out (thank you!), I'll be brief, as I've got to go:
1) There is no logical reason at all that sequence intermediates between two unrelated sequences should give any reproductive advantage that leads from one sequence to the other. IOWs, variations to one sequence can rarely give a reproductive advantage, but there is no reason at all why they should lead the walk towards a new, unrelated sequence with a completely new function. That is wishful thinking at best, complete folly at worst.
This argument would have merit if you could also show that a random walk through protein space would not go via a great many proteins that confer some advantage at some time. A toy example: there are 1000 possible proteins. Of these, 10 have local function (catalyse something; are chemically active, whatever). And 20 confer some generic advantage (are good for stopping holes in membranes, for instance). If, as you say, there's no reason to suppose that the 10 locally functional proteins are sequentially related to the 20 generically advantageous functional proteins, the chances that one of the 20 will be on the pathway to one of the 10 are very small. So far so good. However, if 900 of the proteins had some kind of generically advantageous function, the chances that 10 of them will also be sequentially related to the 10 that are locally functional become quite high. So we have a prediction: If the Darwinian hypothesis is correct, and there were selectable precursors to modern functional proteins, then most early simple proteins probably conferred some selective advantage. I don't know the answer. But that null has not been rejected, and I don't see any good reason to think it's less likely than a designer. Anyway, thanks for the conversation, and apologies for getting you to repeat what you've already said! Elizabeth B Liddle
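That toy model is easy to run as a quick Monte Carlo sketch (Python; the 1000/20/900 figures are the ones from the comment above, while the assumption of 5 precursor steps per functional protein is invented for illustration):

```python
import random

def chance_of_advantageous_precursor(n_advantageous: int,
                                     trials: int = 10_000) -> float:
    """Fraction of trials in which a functional protein's (assumed 5)
    precursors include at least one generically advantageous protein,
    when the advantageous set is placed at random among 1000 proteins."""
    proteins = range(1000)
    hits = 0
    for _ in range(trials):
        advantageous = set(random.sample(proteins, n_advantageous))
        precursors = random.sample(proteins, 5)
        if advantageous.intersection(precursors):
            hits += 1
    return hits / trials

print(chance_of_advantageous_precursor(20))   # ~0.10: pathways are rare
print(chance_of_advantageous_precursor(900))  # ~1.00: pathways abound
```

The sketch only restates the prediction in the comment: everything turns on how densely advantageous sequences are scattered through the space, which is an empirical question.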
CLAVDIVS @109: I absolutely agree. My own position is that the reason the results of evolution look so much like the products of human design is that human designers operate very much like evolutionary processes! Hence the term "neural darwinism". It's probably the easiest kind of design process to evolve. Which is probably why the designer chose it.... Elizabeth B Liddle
Upright Biped:
“Careless typing” was never the problem, and your wish to see the information system of self-replication resolved by Darwinian mechanisms was something you often repeated.
I have never, ever, suggested that you could produce a system of self-replicators from a system of non-self-replicators by Darwinian evolution. If you thought I suggested such a thing, either I mistyped, or you misread. Clearly it would be an absurd claim, because you have to have self-replicators before you can have Darwinian evolution. By definition. That's why Darwinian evolution can't account for OoL. As I must have said rather often. However, what you did ask me to do was not simply to devise a system whereby self-replicators emerged from non-self-replicators (necessarily by non-Darwinian means), but to have those self-replicators self-replicate by means of some coding protocol in which an inert symbolic/semiotic information transmission medium served to code for some evolutionarily advantageous phenotypic feature, as in the DNA-tRNA-amino acid system in living cells. This, I proposed, would evolve via Darwinian mechanisms once a simple non-inert-symbolic-semiotic-whatsits self-replicating population had got going. Hence my emendation above from "evolve" to "emerge, then evolve". Here was my plan: 1. non-selfreplicating vMonomers 2. non-selfreplicating vPolymers 3. self-replicating vPolymers 4. self-replicating vPolymers in self-replicating vVesicles 5. self-replicating vPolymers in self-replicating vVesicles with some kind of semiotic information transfer system. 1-3 must be non-Darwinian. 3-5 can be Darwinian, because once you have self-replication, you can have Darwinian processes. I hope this finally clears up the misunderstanding, and clears me of any charge of inconsistency (apart from the odd typo). Perhaps you misunderstood because you assumed that there could be no self-replication without a semiotic whatsits. But if self-replicators by definition have semiotic whatsits, then I'd be happy to have another go, and if I show that virtual self-replicators can emerge from non-self-replicators, I will have fulfilled the challenge. If not, then first I have to get virtual self-replicators from virtual non-self-replicators by non-Darwinian means, then have the semiotic thing evolve subsequently. Elizabeth B Liddle
gpuccio and elizabeth: I have always thought the real question is: Did humans, with their capacity for intelligent design, evolve step-wise by Darwinian mechanisms from simpler precursors that did not have the capacity for intelligent design? Assuming arguendo that humans did so evolve, then it is not surprising at all that features of biological life resemble human design, because ex hypothesi the human capacity for design arose from the evolutionary process acting on biological life -- in fact, we would expect to see parallels between human design and biology. So it seems to me both evolutionary theory and intelligent design theory would expect to find analogies between human design and features of biology. Accordingly, I have never really understood why the argument by analogy from human design to intelligent design of life on earth is thought by some to be a knock-down argument. CLAVDIVS
CentralScrutinizer- Elizabeth sez that darwinian evolution is not the blind watchmaker. As I have been saying, she doesn't understand darwinian evolution. That is why we need a thread about that before we can discuss anything else. Joe
Elizabeth: But what are they? What is your search space? Equiprobable random draw? Because that isn’t the Darwinian null. You may not remember, but I have discussed that in extreme detail with you here: https://uncommondesc.wpengine.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ (all the last part of the thread). Must I say it again? I will, in brief: a) My model is a random walk from some precursor sequence (a previous gene, or some non-coding DNA sequence) to an unrelated functional gene/protein (a new basic protein domain). b) The starting hypothesis is that no selectable intermediaries exist, and then the whole walk happens as the consequence of random variation. In that case, each unrelated state has the same probability of being reached (indeed, lower than that of any related state). So, the probability of a random walk reaching the new functional state is at most the ratio between functional space and search space, where the functional space is the number of sequences of that length that exhibit the function, and the search space is the number of possible sequences of that length. That is the concept of dFSI. A good approximation of the dFSI of protein families can be reached by the Durston method. Taking an appropriate threshold of complexity for the biological system on our planet, which IMO can be 150 bits, the null hypothesis of a random origin of the new sequence can easily be rejected. c) If and when naturally selectable intermediaries are found, the reasoning can be repeated, rejecting or accepting the null random hypothesis for each random walk, from A to B and from B to C, where B is a naturally selectable intermediary between A and C. If no selectable intermediary is known, we still have to apply the null random hypothesis to the full walk from A to C. Why do you think that “a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted it into a living organism without leaving any trace of the process” is “more credible” than “unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring”? a) The time is known: each emergence of a new basic protein domain is a time that can be approximately known by studying natural history. Our understanding of the times is constantly improving. b) We know little of the means, but much can be known as our understanding of molecular biology and of natural history improves. Guided mutation or intelligent selection, for example, are different "means", and they will leave different tracks in the genome and proteome. c) Traces of the process are abundant. The whole genome and proteome of living beings is a very strong trace. The more we know of it, the more we understand of the design process. d) "Unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring" is completely non-credible, for two different orders of reasons: 1) There is no logical reason at all that sequence intermediates between two unrelated sequences should give any reproductive advantage that leads from one sequence to the other. IOWs, variations to one sequence can rarely give a reproductive advantage, but there is no reason at all why they should lead the walk towards a new, unrelated sequence with a completely new function. That is wishful thinking at best, complete folly at worst. 2) There is no empirical support for the idea.
Show those intermediaries, if they exist. But then we are not rejecting a Fisherian null. We are doing something more like Bayesian inference. And our priors will differ. As explained, I am rejecting a Fisherian null, and then examining the alternative explanations. My priors certainly differ from yours, but they have nothing to do with my Fisherian reasoning (thank God!). gpuccio
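For concreteness, a worked sketch of the dFSI ratio described above (Python; the target-space size and sequence length below are invented placeholders, not Durston estimates):

```python
import math

def dfsi_bits(functional_sequences: float, length: int) -> float:
    """-log2 of (target space / search space) for a protein of the
    given length, with 20 possible amino acids per position."""
    search_space = 20.0 ** length
    return -math.log2(functional_sequences / search_space)

# Assumed, illustrative numbers: 1e30 functional sequences of length 100.
bits = dfsi_bits(functional_sequences=1e30, length=100)
print(round(bits))   # ~333 bits
print(bits > 150)    # True: beyond the proposed 150-bit threshold
```

The calculation itself is uncontroversial; the dispute in this thread is over whether the uniform-probability random walk it presupposes is the right null in the first place.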
Liddle: The Darwinian algorithm (again I am not arguing for “neo-” anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature. Joe: GAs have nothing to do with darwinism. GAs have at least one goal. Darwinian evolution doesn’t have any. The darwinian algorithm is a contradiction of terms.
Right. For the "blind watchmaker evolution" to be analogous to a GA, the laws of the universe would have to have been designed on purpose to favor the creation of functional features that have survival value. Of course, there goes the blind watchmaker right out the door in favor of the goal oriented designer. CentralScrutinizer
Why do you think that “a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted it into a living organism without leaving any trace of the process”
How do you know no traces were left? And guess how the Stonehenge investigation started? "Some unknown designer, at an unknown time, designed and fabricated Stonehenge for unknown reasons and by unknown processes." Once design is inferred then we get to the next questions. Joe
Dr Liddle at 97. "Careless typing" was never the problem, and your wish to see the information system of self-replication resolved by Darwinian mechanisms was something you often repeated.
Dr Liddle: But there’s no reason (that I can see) to assume that such a system is IC, or, more to the point, “unevolvable”. ... Dr Liddle: I think, though I could be wrong, that his case is that a code can’t evolve because you need the code before you can have the evolution. This is what that old challenge was about. I still think it would be cool to simulate the evolution of code from non-code. ... Dr Liddle: There needs to be a mechanism by which that set, or an equivalent set, of tRNA molecules came to be templated by the DNA, and not some useless set in which one triplet could result in any one of a number of amino acids. So? Why shouldn’t evolutionary mechanisms result in such a set? ... etc, etc, etc
scrap the line of excuses, you were just wrong. Upright BiPed
And Elizabeth, ignoring me just makes you willfully ignorant. Joe
Elizabeth, Seeing that you do not understand darwinian evolution, it is safe to say that you have no idea what the darwinian null is. As I said you need to start by understanding darwinian evolution. Continuing your misrepresentations of it isn't going to do you any good. Joe
gpuccio:
Again, by the dFSCI metrics. No elephant here.
But what are they? What is your search space? Equiprobable random draw? Because that isn't the Darwinian null.
3) For biological objects, you may agree that at present the origin cannot be independently assessed, otherwise we would not be here discussing. As biological objects are the only other objects in the universe that exhibit dFSCI, it is simply natural to propose a design explanation for them. This is a very simple inference by analogy. In the absence of any other credible explanation, this remains the best explanation.
It may well be "simply natural" and I'm not saying it isn't, but the argument you are presenting here is not the rejection of a non-design Fisherian null. It is comparing two hypotheses, for neither of which you have much evidence independent of the explanandum. Why do you think that "a designer at an unknown time, by unknown means, designed and fabricated a DNA sequence likely to produce a functional protein and inserted it into a living organism without leaving any trace of the process" is "more credible" than "unknown precursor proteins provided slight unknown selective advantages and so organisms with those sequences left more offspring"? I do understand your irritation with me, and I'm sure that from each of our points of view we think the other is failing to grasp a simple point! I am perfectly happy with reasoning that puts side by side two alternative explanations for a phenomenon, even though there is no (or little) independent evidence (independent of the explanandum), and evaluates their credibility. But then we are not rejecting a Fisherian null. We are doing something more like Bayesian inference. And our priors will differ.
I try to follow facts and explain them, not to interpret them according to dogmatic philosophical commitments.
Me too. But we need to put our priors on the table. And we cannot derive them from our posterior. As it were :) Elizabeth B Liddle
This PROVES Lizzie is clueless wrt darwinian evolution:
The Darwinian algorithm (again I am not arguing for “neo-” anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature.
GAs have nothing to do with darwinism. GAs have at least one goal. Darwinian evolution doesn't have any. The darwinian algorithm is a contradiction in terms. Joe
Guys, Elizabeth Liddle doesn't even understand Darwinian evolution. So there is no way she understands what is being debated. Until she understands what Darwinian evolution entails, all of you are just wasting your time. Joe
Elizabeth: a) I use neo darwinism in the sense of the modern synthesis, a molecular hypothesis that is not "classical darwinism". If you prefer simply darwinism, do as you like. I will go on using "neo darwinism", which IMO is more precise. b) You need to be able to compute a probability distribution for your data under the null of “random noise”. If you reject that null, you have only rejected whatever you modeled as your null. If you didn’t include Darwinian processes and natural mechanisms (as Dembski says you must) then you can’t reject those processes and mechanisms. As said many times, I reject the null random hypothesis by the dFSCI metrics. c) So how are you computing your null of “random noise”? That is what I am calling the eleP(T|H)ant in the room. Again, by the dFSCI metrics. No elephant here. d) Why is design any more “supported by facts” than Darwinian processes (by which I mean heritable variance in reproductive success – not sure what you are including as “neo Darwinian algorithm” – there’s nothing “neo” about the Darwinian algorithm I am proposing)? I think I have said it hundreds of times. I will say it once more. Because design is the only known cause of dFSCI. All objects exhibiting dFSCI, of which the origin can be independently assessed, are designed objects. e)
We have an explanandum (functional proteins that constitute a small proportion of theoretically possible proteins), for which we have two possible explanations: 1. A designer assembled those sequences, having selected them because of their potential as functional proteins. We have no evidence for this. 2. Precursors of those proteins conferred some reproductive advantage to their bearers. We have only a small amount of evidence for this. Why should we consider one of these a better explanation than the other?
See my previous answer. f) Well, no. You are assuming your conclusion. You are saying: this thing has dFSCI; only designed things have dFSCI; therefore this thing, like all other things with dFSCI, was designed; therefore only designed things have dFSCI. Some time ago I spent weeks here defending dFSCI against false accusations of circularity from your blog, very similar to the one you repeat here. I will not go again into that in detail. I invite you to read the old threads. In a few words, I can restate here why dFSCI is not circular: 1) We have only two kinds of objects in the universe that exhibit dFSCI: human artifacts and biological objects. 2) In the case of human artifacts, it can be easily verified that all objects that exhibit dFSCI, and whose origin can be independently known, are human-designed objects. These are facts. IOWs, we can safely use dFSCI to correctly infer design for any object whose origin can be independently assessed, with 100% specificity. These are observed facts. 3) For biological objects, you may agree that at present the origin cannot be independently assessed, otherwise we would not be here discussing. As biological objects are the only other objects in the universe that exhibit dFSCI, it is simply natural to propose a design explanation for them. This is a very simple inference by analogy. In the absence of any other credible explanation, this remains the best explanation. There is no circularity in that, as even some of your friends at TSZ admitted in the end. If you are not convinced, please remain of your opinion, but don't come back to that with the same old wrong arguments, because I have no more time to spend on that. Let's say we agree to disagree. g)
That certainly does not rule out a designer. Indeed, to be perfectly honest, I’d be somewhat disappointed in a Designer (capitalisation intentional) that had to continually intervene to tweak a flagellum here, or a protein there. It seems to me that an omniscient, omnipotent deity would be capable of designing a universe that Just Worked. The deity herself would be undetectable from within that universe, at least by scientific reasoning, but no less present or causal. Indeed, the ability of scientific reasoning to account for her creation without recourse to postulated intermittent tweaking might itself be adduced as evidence for her omnipotence and omniscience. But we get the God there is, not the God we ask for
You can have the philosophical position you like. I try to follow facts and explain them, not to interpret them according to dogmatic philosophical commitments. I am not a skeptic, after all :) gpuccio
Thanks, Upright Biped. Yes, I certainly recognise that self-replicators cannot evolve themselves into being. I have never not-recognised that (indeed I have made the point explicitly many times), and I apologise if my careless typing gave you an erroneous impression. Elizabeth B Liddle
Dr Liddle,
You are of course entitled to your view of our interactions, as I am to mine.
Indeed.
As you point out, it is all available for anyone to form their own view, should they care.
Yes, and if they look as late as a year ago, they'll find you wondering if I would simply concede my argument if you could evolve information machinery, or suggesting that half a code (whatever that is) should do the trick, or wondering why I conceive of this as a design issue and not an evolutionary one.
We had better leave it there rather than derail this thread further...
Agreed.
...unless you’d like to discuss it at TSZ.
Perhaps that will be necessary at some point, although your recognition that self-replicators did not evolve themselves into being is probably the best that can be expected. As it stands for now, I'm out. Upright BiPed
gpuccio:
No. I think you are seriously wrong here. The null hypothesis in a Fisherian hypothesis testing is that the effect we are observing has no special cause, and is only explained by random noise. That null hypothesis can be safely rejected by dFSCI.
I disagree that I am wrong :) You need to be able to compute a probability distribution for your data under the null of "random noise". If you reject that null, you have only rejected whatever you modeled as your null. If you didn't include Darwinian processes and natural mechanisms (as Dembski says you must) then you can't reject those processes and mechanisms. So how are you computing your null of "random noise"? That is what I am calling the eleP(T|H)ant in the room.
So, here we are in the following scenario. A random explanation of biological information can safely be rejected by the dFSCI metrics. Design is a viable and credible explanation for what we observe. The only non-design explanation that can be offered, at present, is the neo darwinian algorithm. This second explanation is completely unsupported by facts, so it cannot compete with the design explanation.
Why is design any more "supported by facts" than Darwinian processes (by which I mean heritable variance in reproductive success - not sure what you are including as "neo Darwinian algorithm" - there's nothing "neo" about the Darwinian algorithm I am proposing)? We have an explanandum (functional proteins that constitute a small proportion of theoretically possible proteins), for which we have two possible explanations: 1. A designer assembled those sequences, having selected them because of their potential as functional proteins. We have no evidence for this. 2. Precursors of those proteins conferred some reproductive advantage to their bearers. We have only a small amount of evidence for this. Why should we consider one of these a better explanation than the other?
I really don’t understand you. The handiwork is the explanandum. The designer is the explanation. So, the handiwork is evidence of a designer. It’s simple, isn’t it?
No. It's circular. Let's say I find a coin on the ground. I think: "it must have dropped out of someone's pocket". I can't then turn round and say: "The evidence the coin dropped out of someone's pocket is that I found a coin on the ground". This is because there are other possible explanations. Perhaps someone was tossing a coin for who goes in to bat first, and couldn't see where it landed. The fact of the coin on the ground gives no more support to one of these hypotheses than the other. However, if I have independent (of the coin-on-the-ground) evidence for the first, for instance, I find a hole in my pocket, and all my money gone, I can consider the first explanation more likely. Or if I see a bunch of cricketers tossing coins in a wild manner, I can consider the second fairly well supported. A designer, in the absence of independent evidence, is no better supported than a Darwinian explanation for which there is no independent evidence.
Again, I am losing you. dFSCI in proteins is the explanandum. A designer is an explanation. Neo darwinian algorithm is another possible explanation.
Yes.
But: a) A designer explains observed facts, because the form and properties of the facts we observe are exactly the form and properties of designed things (dFSCI).
Well, no. You are assuming your conclusion. You are saying: this thing has dFSCI; only designed things have dFSCI; therefore this thing, like all other things with dFSCI, was designed; therefore only designed things have dFSCI.
b) Neo darwinian algorithm does not explain anything, because it has never been shown capable to produce that kind of results.
The Darwinian algorithm (again I am not arguing for "neo-" anything) has been shown to result in functional features. This is why people write GAs. A functional protein is a functional feature. But even if you reject this, it doesn't get you out of the circularity problems with your (a) :) As you repeat here:
Wrong. As I said, there is a specific, strong, credible and positive reason why we offer design as an explanation of dFSCI in the biological world: because design is the only empirical explanation for dFSCI in the whole known universe. Only designed things exhibit dFSCI, as far as we know.
Consider:

In some cases of disease X we find evidence of bacterial activity.
In other cases of disease X we find no evidence of bacterial activity.
This is disease X.
Therefore it was caused by bacterial activity.

I would argue that this reasoning is fallacious (although the conclusion could be correct). And I suggest your argument has the same form:

In some cases of dFSCO we find evidence of designers designing it.
In other cases of dFSCO we find no evidence of designers designing it.
This is a case of dFSCO.
Therefore it was caused by designers designing it.

Again, the conclusion could be true, but the argument is fallacious.
Well, I can agree that if God materialized tomorrow, in Times Square and in front of hundreds of witnesses, a book written in golden letters with all the details of the project of biological beings in the course of our planet’s existence, and a final declaration: “It was Me!”, that would certainly be some evidence for design. :) In the meantime, and in the absence of such explicit miracles, we have to be content with scientific reasoning. And scientific reasoning brings us to the design inference as best explanation of what we observe, according to a lot of positive evidence that, in a sense, is even better than the golden book.
Only if the scientific reasoning is sound. My position is that it is not. That certainly does not rule out a designer. Indeed, to be perfectly honest, I'd be somewhat disappointed in a Designer (capitalisation intentional) that had to continually intervene to tweak a flagellum here, or a protein there. It seems to me that an omniscient, omnipotent deity would be capable of designing a universe that Just Worked. The deity herself would be undetectable from within that universe, at least by scientific reasoning, but no less present or causal. Indeed, the ability of scientific reasoning to account for her creation without recourse to postulated intermittent tweaking might itself be adduced as evidence for her omnipotence and omniscience. But we get the God there is, not the God we ask for :) Elizabeth B Liddle
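[Editor's note: the statistical point Liddle presses throughout this exchange, that a null can only be rejected if its probability distribution is actually modeled, can be made concrete with a toy example. The sketch below uses made-up numbers and a coin-toss setup; it illustrates the general principle only, not anyone's actual calculation.]

```python
# Rejecting a "random noise" null only rejects the null you actually modeled.
from math import comb

def binomial_upper_p(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the tail probability under a given null."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose we observe 90 heads in 100 tosses and test the null "fair coin".
print(binomial_upper_p(100, 90))        # vanishingly small: reject the fair-coin null

# But that rejection says nothing about a *different* unmodeled hypothesis,
# e.g. a biased coin with p = 0.9, under which the same data are unsurprising:
print(binomial_upper_p(100, 90, 0.9))   # large: this hypothesis survives untested
```

Under this reading, each candidate process (including any "Darwinian and other material mechanisms") needs its own probability distribution before it can be rejected; that is the substance of the P(T|H) dispute.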
franklin:

My phrase was: "Let's take the simplest example: an enzyme that accelerates a reaction." I am not saying that this is the simplest protein function. I am saying that it is IMO the simplest example of protein function for my discussion. Why don't you offer arguments, instead of fastidious criticisms?

Why would I deny that enzymes exist and perform biochemical function? But we are discussing OOL and what emergent self-replicators might require. Certainly, the ability to maintain osmolality, pH balance, and metal homeostasis are vital roles in any living organism. That these functions aren't sexy enough for you does not mean that they can be ignored.

I was not discussing OOL at all. I was discussing the emergence of new basic protein domains, as I usually do. I discuss what I consider sexy. You are free to do the same. And, possibly, offer arguments.

If the function of serum albumins is not complex enough for you perhaps you could explain how well you, as a living organism, would be able to get along without these proteins? Would you still be alive?

You are really something! I have not discussed the functional complexity of serum albumins exactly because their functional complexity is probably lower. Globulins and enzymes are certainly better examples of functional complexity. So, I debate them. I have never stated that there are not simpler objects in living beings. I discuss those that are certainly complex, because I am discussing dFSCI, and dFSCI needs a very serious threshold of complexity to be a good tool for design detection. Again, you are free to offer counterarguments, if you have any, using your own sexy examples.

Special kinds of binding? what does that even mean?

For example, from Wikipedia: "ATP-binding cassette transporters (ABC-transporter) are members of a protein superfamily that is one of the largest and most ancient families with representatives in all extant phyla from prokaryotes to humans.[1][2] ABC transporters are transmembrane proteins that utilize the energy of adenosine triphosphate (ATP) hydrolysis to carry out certain biological processes including translocation of various substrates across membranes and non-transport-related processes such as translation of RNA and DNA repair.[3][4] They transport a wide variety of substrates across extra- and intracellular membranes, including metabolic products, lipids and sterols, and drugs. Proteins are classified as ABC transporters based on the sequence and organization of their ATP-binding cassette (ABC) domain(s). ABC transporters are involved in tumor resistance, cystic fibrosis and a range of other inherited human diseases along with both bacterial (prokaryotic) and eukaryotic (including human) development of resistance to multiple drugs."

Is the binding of protons not a special enough function for a protein to be considered in an OOL scenario? Is that function simpler than an enzyme, therefore making it a much simpler example of a protein function?

As said, I am not discussing specifically an OOL scenario. Moreover, a function is complex according to the number of bits in the sequence that are necessary to implement the function. You can offer any possible function, and do your calculations. Some functions are simpler, but most protein functions are very, very complex. I offer my examples and make my calculations. You can do the same. gpuccio
Elizabeth: I apologize! I got the formatting wrong. Here is the correct version:
What I am pointing out is that the concept of CSI (and your own version) intrinsically involves precisely that; eliminating the non-design alternative. And it can’t be done.
No. I think you are seriously wrong here. The null hypothesis in Fisherian hypothesis testing is that the effect we are observing has no special cause, and is only explained by random noise. That null hypothesis can be safely rejected by dFSCI. Once the null (random) hypothesis is rejected, any credible hypothesis that explains the observed pattern can compete for "best explanation". So, here we are in the following scenario. A random explanation of biological information can safely be rejected by the dFSCI metrics. Design is a viable and credible explanation for what we observe. The only non design explanation that can be offered, at present, is the neo darwinian algorithm. This second explanation is completely unsupported by facts, so it cannot compete with the design explanation. Any other non design explanation is welcome to the competition, provided it is supported by facts. So, we are perfectly safe in a Fisherian context. Like any other biological and medical theory, once the chance hypothesis is rejected, we must choose the best non chance explanation. I definitely choose design, and so should, IMO, all unbiased thinkers.

Well, I meant, independent of the handiwork! I'm not saying the postulated handiwork isn't evidence. I'm saying it cannot serve as both your explanandum and your explanation.

I really don't understand you. The handiwork is the explanandum. The designer is the explanation. So, the handiwork is evidence of a designer. It's simple, isn't it?

I am happy to stipulate, for the sake of argument, that the number of functional proteins is a tiny fraction of the number of possible proteins. That fact needs to be explained. But it cannot simultaneously be evidence for the proffered explanation over some other proffered explanation.

Again, I am losing you. dFSCI in proteins is the explanandum. A designer is an explanation. Neo darwinian algorithm is another possible explanation. But: a) A designer explains observed facts, because the form and properties of the facts we observe are exactly the form and properties of designed things (dFSCI). b) Neo darwinian algorithm does not explain anything, because it has never been shown capable of producing that kind of result.

we cannot appeal to the explanandum itself (the small proportion of modern functional proteins out of all possible proteins) to differentiate between the two hypotheses proffered to explain it. That is why I suggested that providing independent (of the explanandum) evidence would be a potentially better approach.

Wrong. As I said, there is a specific, strong, credible and positive reason why we offer design as an explanation of dFSCI in the biological world: because design is the only empirical explanation for dFSCI in the whole known universe. Only designed things exhibit dFSCI, as far as we know. On the contrary, there is absolutely no clue that the neo darwinian algorithm can produce dFSCI. So, your epistemological position is not correct.

Lack of such evidence wouldn't rule out design (indeed design, unspecified, is unrule-out-able), but positive evidence could rule it in.

Well, I can agree that if God materialized tomorrow, in Times Square and in front of hundreds of witnesses, a book written in golden letters with all the details of the project of biological beings in the course of our planet's existence, and a final declaration: "It was Me!", that would certainly be some evidence for design. In the meantime, and in the absence of such explicit miracles, we have to be content with scientific reasoning.
And scientific reasoning brings us to the design inference as best explanation of what we observe, according to a lot of positive evidence that, in a sense, is even better than the golden book. gpuccio
gpuccio: It's IMO the simplest available example to make a simple and clear discussion on my point. Am I free to choose my examples?

You are certainly free to choose your own examples, but you should realize that they may not be the simplest examples of protein function, contrary to your claims.

gpuccio: Or do you deny that enzymes exist, and do what I say?

Why would I deny that enzymes exist and perform biochemical function? But we are discussing OOL and what emergent self-replicators might require. Certainly, the ability to maintain osmolality, pH balance, and metal homeostasis are vital roles in any living organism. That these functions aren't sexy enough for you does not mean that they can be ignored.

gpuccio: Simple binding to some biochemical compound is not in itself an interesting enough functional specification. Indeed, in many cases that kind of function so defined is not complex at all.

If the function of serum albumins is not complex enough for you, perhaps you could explain how well you, as a living organism, would be able to get along without these proteins? Would you still be alive?

gpuccio: Special kinds of binding, especially if related to specific conformational changes and biochemical actions, are much more complex. But the example would become complex too

Special kinds of binding? What does that even mean? Is the binding of protons not a special enough function for a protein to be considered in an OOL scenario? Is that function simpler than an enzyme, therefore making it a much simpler example of a protein function? franklin
gpuccio:
Your statement is the equivalent of: “I will never accept your credible explanation unless you logically eliminate my empirically unsupported explanation”.
No, it isn't, gpuccio, and the fact that you think it is is at the base of the problem IMO (and not unique to you!). I am perfectly happy, in principle, to accept your credible explanation. I do not require that you eliminate the non-design alternative before I do so. What I am pointing out is that the concept of CSI (and your own version) intrinsically involves precisely that; eliminating the non-design alternative. And it can't be done. That doesn't mean we must conclude that non-design-did-it after all. It just means that the inference of Design via Fisherian null hypothesis testing where the null is the omnibus null of "non-design" isn't valid. A Bayesian approach might work better.
The handiwork of a designer is independent evidence of a designer (no need to use the capital letter here). Your rejection of that evidence is "dependent" on your worldview. But the evidence of dFSCI in biological objects is independent of any worldview: it is just empirical reasoning.
Well, I meant, independent of the handiwork! I'm not saying the postulated handiwork isn't evidence. I'm saying it cannot serve as both your explanandum and your explanation. I am happy to stipulate, for the sake of argument, that the number of functional proteins is a tiny fraction of the number of possible proteins. That fact needs to be explained. But it cannot simultaneously be evidence for the proffered explanation over some other proffered explanation. If you say: "I hypothesise that a designer designed and fabricated the DNA sequences required", and I say: "I hypothesise that there were a series of precursor sequences that offered some slight reproductive advantage to their bearers in the environment in which they lived", we cannot appeal to the explanandum itself (the small proportion of modern functional proteins out of all possible proteins) to differentiate between the two hypotheses proffered to explain it. That is why I suggested that providing independent (of the explanandum) evidence would be a potentially better approach. Lack of such evidence wouldn't rule out design (indeed design, unspecified, is unrule-out-able), but positive evidence could rule it in. Elizabeth B Liddle
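[Editor's note: the "Bayesian approach" Liddle mentions above would compare explicit hypotheses by their likelihoods rather than rejecting an omnibus null. A hedged sketch of that bookkeeping, with purely hypothetical numbers throughout:]

```python
# Sketch of Bayesian model comparison (all numbers are hypothetical placeholders).
# Instead of rejecting a "non-design" null, compute posterior odds for two
# explicit hypotheses, H1 (design) and H2 (Darwinian precursors), given data D.

prior_H1, prior_H2 = 0.5, 0.5     # neutral priors: an assumption, not a finding
lik_D_given_H1 = 1e-4             # P(D | H1): must come from an explicit model
lik_D_given_H2 = 1e-6             # P(D | H2): likewise model-dependent

bayes_factor = lik_D_given_H1 / lik_D_given_H2
posterior_odds = bayes_factor * (prior_H1 / prior_H2)
print(f"Bayes factor K = {bayes_factor:.0f}, posterior odds = {posterior_odds:.0f}")

# The point of the sketch: both likelihoods must be computable. The dispute in
# this thread is precisely over whether P(D | H2) can be estimated at all.
```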
Upright Biped @82 You are of course entitled to your view of our interactions, as I am to mine. As you point out, it is all available for anyone to form their own view, should they care. We had better leave it there rather than derail this thread further, unless you'd like to discuss it at TSZ. Elizabeth B Liddle
Oh, and thanks for the welcome, Eric :) Elizabeth B Liddle
Eric:
Is it your position that it is impossible to determine whether an event is unlikely to have occurred by purely natural processes unless we are able to fully define and calculate the probability of it occurring by such processes?
I'm not sure what you mean by "purely natural processes", but that is not what I am saying. I do think it is perfectly possible to determine that an event was due to a designer without defining and calculating the probability of it occurring by some non-design means. That is what archaeologists and forensic scientists do, for instance. What I am saying is much narrower than that, and concerns Dembski's concept of "CSI" or "chi" for which he gives a mathematical formula based on the principle of Fisherian null hypothesis testing. That formula contains the parameter P(T|H), which is the probability of observing the Target under the null hypothesis, which he defines as "the relevant chance hypothesis, including Darwinian and other material mechanisms". I am saying that that is not calculable, and that treating "non-design" as an omnibus null doesn't work, and that therefore the concept of chi doesn't work as a method of detecting design. Elizabeth B Liddle
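[Editor's note: for reference, Dembski's paper defines the specificity σ and the specified-complexity measure χ as below; this is a transcription of the published formulas, where χ folds in Dembski's 10^120 bound on replicational resources. The contested term in both is P(T|H).]

```latex
% Specificity and specified complexity from Dembski's "Specification" paper:
% \varphi_S(T) counts patterns at least as simply describable as T;
% P(T \mid H) is the probability of T under the relevant chance hypothesis H.
\sigma = -\log_2\!\left[\varphi_S(T)\cdot P(T\mid H)\right],
\qquad
\chi = -\log_2\!\left[10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\right]
```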
franklin:

That certainly is not the simplest available example. Why not apply your reasoning to function along the lines of 'maintains osmolality' or 'binds oxygen' or 'binds a metal ion'?

It's IMO the simplest available example to make a simple and clear discussion on my point. Am I free to choose my examples? Or do you deny that enzymes exist, and do what I say? Simple binding to some biochemical compound is not in itself an interesting enough functional specification. Indeed, in many cases that kind of function so defined is not complex at all. Any chemical compound can bind to something else. Special kinds of binding, especially if related to specific conformational changes and biochemical actions, are much more complex. But the example would become complex too. gpuccio
Elizabeth: Very briefly:

If we are to rule out a Darwinian explanation for modern proteins, we need to demonstrate that there is no possible series of precursors to modern "locally" functional proteins that did not confer a reproductive advantage to their ancient bearer organisms. I know we have been here before, gpuccio, but it's important! The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism – if that's the hypothesis you want to falsify, that's the level at which you need to demonstrate lack of prior utility of the precursor for the organism.

The simple fact is, we have no necessity to "rule out" something that does not exist, if not as a mere logical possibility. That is not science. We need not "demonstrate that there is no possible series of precursors to modern 'locally' functional proteins that did not confer a reproductive advantage to their ancient bearer organisms". It's you (or anyone who supports the neo darwinian "explanation") who must show that there is any empirical support for such a bizarre idea! At present, the only support for that "explanation" is a dogmatic, ideological rejection of the credible alternative, intelligent design. That is an ideological worldview (reductive materialism), and refusing the only credible explanation for a "non explanation", completely unsupported by both facts and logic, is not science. Again, science is not done by "demonstrating" that the explanation someone likes could not logically exist. It is done by supporting a possible explanation with facts, and then comparing it with other possible explanations. The game is simple here: all known facts support the ID explanation. None of them supports the imaginary scenario where, as you say, there exists a "series of precursors to modern 'locally' functional proteins that did confer a reproductive advantage to their ancient bearer organisms". Your statement is the equivalent of: "I will never accept your credible explanation unless you logically eliminate my empirically unsupported explanation". The null hypothesis we reject here is the non design hypothesis. Under that hypothesis, only random events and/or natural laws can be invoked as the explanation of what we observe. Well, those two causes cannot explain what we observe. Then you say: "No, I will not reject the null, because maybe if we could observe such and such then a bizarre explanation could work". But that is not scientifically correct. First show that we have observed such and such (for example, the precursors, either in the proteome or in the lab). Then your reasoning will gain some credibility. Until then, all unbiased thinkers will correctly reject the null. This is empirical science.

Most protein domain families are inferred to be extremely ancient, so it is at that ancient level – prior to multicellularity – that we need to be looking for potential reproductive advantage, at least for modern proteins in those domains.

I agree. That certainly makes things easier, because prokaryotes are vastly available for experimentation and research.

Now this is your field, not mine

I agree :)

But if you are to persuade me that there is no evolutionary pathway to a modern functional protein, I will need to be assured that its precursors were necessarily useless in all organisms that they inhabited in any environment – in other words "unselectable".

I only want to "persuade" you, or anyone else, that there is no known evolutionary pathway to a modern functional protein, and that relegates the neo darwinian hypothesis to the field of myth, not science. That's much simpler :) Again, I need not falsify your claims, if your claims are empirically unsupported. That is falsification enough in itself. I could simply argue that dark energy was the cause of a necessary pathway to modern proteins, but unless and until I give some empirical support to that statement, you need not "falsify" it.

I do not claim that they were not – but my point is that to reject the null of non-design, you'd have to show that they were. Or, alternatively, provide independent evidence of a Designer (independent of the handiwork of the postulated Designer),

The handiwork of a designer is independent evidence of a designer (no need to use the capital letter here). Your rejection of that evidence is "dependent" on your worldview. But the evidence of dFSCI in biological objects is independent of any worldview: it is just empirical reasoning.

Nonetheless positive ID hypotheses are in principle possible ("Frontloading" for instance, probably makes different predictions to Darwinian evolution).

My hypothesis of designed variation certainly makes different predictions: for example, the lack of selectable intermediaries, and the possible existence of rather "sudden" jumps in the emergence of information. Both predictions are supported by a lot of known facts. Frontloading makes different predictions still, but I am not aware of much empirical support for that hypothesis. That's why I usually don't like it. gpuccio
Elizabeth: Welcome back. I'm glad to see that your posting privileges have been restored.
P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.
Let's cut to the chase. You've brought this up many times, so it seems like a central theme for you. Is it your position that it is impossible to determine whether an event is unlikely to have occurred by purely natural processes unless we are able to fully define and calculate the probability of it occurring by such processes? Thanks, Eric Anderson
Elizabeth:
If we are to rule out a Darwinian explanation for modern proteins,...
First you have to understand what a "Darwinian explanation" entails. Until you do that all you are doing is equivocating.
The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism –
Nope, only that it wasn't fatal. Natural selection is eliminative. What doesn't work or is fatal is what gets filtered out. As for independent evidence for the designer- again, the evidence for a designer wrt biology is independent of the evidence for a designer in physics and cosmology. But anyway, until you understand what a "Darwinian explanation" entails, you will never understand what is being debated. And that means your opinion on the subject is tainted by that ignorance. Just sayin'... Joe
Dr Liddle: “Well, not really, Upright Biped.” What is clear to me Elizabeth is that when you and I began this conversation two years ago, you had no disciplined conception of what information was - at all. If you did, it was certainly not made evident by your positions. The argument presented to you has focused your understanding of what information is, and how it must operate in order to produce material effects. You clearly now know that Darwinian evolution did not evolve the material requirements for biological information to exist, but it’s saddening that you do not possess the capacity to admit it. What you’ve done here - this ridiculous denial of what is made obvious by your own words - is nothing new. I have been pulling up your words and pointing out the inconsistencies almost from the very start, and yet it’s always the same damn thing: You absolutely never integrate. I would never expect someone like Byers or even Sandstrom to really have the capacity to be wrong about something and learn from a competitor, but for some odd reason I thought you might. I was wrong about that, and I have been wrong about it from the start. Upright BiPed
gpuccio: Let’s take the simplest example: an enzyme that accelerates a reaction.
That certainly is not the simplest available example. Why not apply your reasoning to function along the lines of 'maintains osmolality' or 'binds oxygen' or 'binds a metal ion'? franklin
Thanks for this response, and it's good that we agree on more than you anticipated :) I am not persuaded, however, by your distinction between "local" and organismic function. If we are to rule out a Darwinian explanation for modern proteins, we need to demonstrate that there is no possible series of precursors to modern "locally" functional proteins that did not confer a reproductive advantage to their ancient bearer organisms. I know we have been here before, gpuccio, but it's important! The Darwinian hypothesis is that precursor proteins served some advantageous function for the organism - if that's the hypothesis you want to falsify, that's the level at which you need to demonstrate lack of prior utility of precursor for the organism. Most protein domain families are inferred to be extremely ancient, so it is at that ancient level - prior to multicellularity - that we need to be looking for potential reproductive advantage, at least for modern proteins in those domains. Now this is your field, not mine, so I will stop there. But if you are to persuade me that there is no evolutionary pathway to a modern functional protein, I will need to be assured that its precursors were necessarily useless in all organisms that they inhabited in any environment - in other words "unselectable". I do not claim that they were not - but my point is that to reject the null of non-design, you'd have to show that they were. Or, alternatively, provide independent evidence of a Designer (independent of the handiwork of the postulated Designer), but I know that is an unpopular approach amongst ID proponents! Nonetheless positive ID hypotheses are in principle possible ("Frontloading" for instance, probably makes different predictions to Darwinian evolution). Elizabeth B Liddle
VJ: You are always a wonderful example of a sincere search for truth. Your deep attempts at clarifying difficult concepts are certainly precious. At the cost of being repetitive, I will try again to give my contribution to the problem of specification and complexity.

We must not forget our real purpose: our real purpose is a tool for design detection in the empirical world, nothing else. A tool is good if it works. So I say again that functional specification is completely valid for most discussions about biological objects. Biological objects are special because they are functional, not because they are repetitive, or compressible, and so on. So, the only pertinent question, in front of a protein, is: is it functional? And then: how many bits of specific information are necessary to have that function? That is, in a nutshell, the concept of dFSCI.

Specification is not important in itself. A lot of objects can be specified in some way, and yet they are not designed. What is unique to design is a specification that cannot be attained without a very high number of bits of specific information. Those kinds of functions are never obtained in a "natural", non designed context. That's why dFSCI is exactly the tool we need. It works.

Let's go to the problem of "order". I would simply say that any statement must be relative to a context. I have always emphasized that any attempt at design detection must be relative to a specific physical system, with a defined temporal frame, and with definite probabilistic resources. To infer design for an object in that system, we just need to ascertain:

a) That the digital functional complexity (ratio of sequences that exhibit the function to the search space of sequences) for that function is high enough, considering the probabilistic resources of the system in the temporal frame.

b) That no known algorithm physically available in that system can generate the observed functional sequence by necessity, as an alternative to mere random probability.

If a) and b) are true, we can safely infer design as the best explanation. Neo darwinists usually reject b), saying that we can never be sure that some day we may find an algorithm that can generate the functional sequence. That argument is silly and utterly unscientific. Science works with the explanations we have, not with the mere theoretical possibility that some day one can be found. That is religious expectation, not science. That's why, in my b), I always stress the words "known algorithm".

Let's go to the case of sequences from coin tossing. Let's say that we have a sequence of 100 heads. What does it mean? I don't know what Dembski would say, but for me 100 heads is not a good sequence to infer design in a system, unless we can verify many conditions. 100 heads is a highly compressible sequence. Its Kolmogorov complexity is very low. What does it mean? It means that you can easily have that sequence by some very simple physical algorithm: for example, the easiest way is that you are tossing a coin that always gives a head, because of its physical properties. But other possibilities should be excluded, like some special magnetic field, and so on. None of these hypotheses necessarily entails design.

But again, if our coin gives us the first 500 bits of pi in binary form, the situation is completely different. The result is not highly compressible. It is in a sense compressible, because some algorithm can calculate the digits of pi.
But:

a) The algorithm would anyway be rather complex (it would express the Kolmogorov complexity of any result corresponding to pi, however long).

b) If our system is simply a man who tosses a coin, I can't see how any pi-computing algorithm could be incorporated in such a system.

So, the only way to explain a sequence of coin tossing that expresses the first 500 bits of pi is design: the man who tosses the coin already knows the sequence to be attained, and in some way he controls the outcome of each toss. So, I really believe that if we stay empirical, define our systems and time correctly, compute our probabilistic resources, and use a good tool for design detection like dFSCI, our design inferences will be really good and scientifically valid. gpuccio
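[Editor's note: gpuccio's compressibility contrast can be made concrete with a rough but standard proxy for Kolmogorov complexity, the length of a compressed encoding. An editorial sketch; note that zlib only captures statistical regularity, so it understates the algorithmic compressibility of pi.]

```python
# Compressed length as a crude proxy for Kolmogorov complexity.
import random
import zlib

def compressed_len(bits: str) -> int:
    """Length in bytes of the zlib-compressed string (includes some fixed overhead)."""
    return len(zlib.compress(bits.encode()))

all_heads = "1" * 100                                           # the "100 heads" case
random.seed(0)
coin_flips = "".join(random.choice("01") for _ in range(500))   # a typical random run

print(compressed_len(all_heads))   # tiny: one repeated symbol, i.e. "order"
print(compressed_len(coin_flips))  # several times larger: no short description found

# The binary digits of pi are the interesting middle case: a short *program*
# generates them (low algorithmic complexity), yet a statistical compressor
# like zlib would barely shrink them. Compression gives only an upper bound on
# Kolmogorov complexity, which is uncomputable in general.
```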
Elizabeth: Thank you for the clear answers. I see that you agree on more points than I expected. I can only be happy about that. Your main "difficulty" seems to be the following:

Not easily, because I don't know how you could compute how many of the theoretically possible proteins could perform some advantageous (i.e. promote reproduction) function in some organism in some environment at some time. Which is what you'd have to do if you wanted to compute, say, the probability of the protein evolving. So I've never really understood the reasoning there.

But, again, here you seem not to understand the peculiar problem of proteins. The fact is, most proteins have definite biochemical functions, what I called, in an earlier discussion with you, the "local" function. That is exactly the function that you find immediately defined in protein databases, when it is well understood. Now, for a moment, forget the higher level of organization, and how in the end the function will give or not give a reproductive advantage. The point is, if a protein is not an efficient molecular machine that does something that would not otherwise happen, it is generally useless. Of course, a protein could be just a messenger or a signal, but usually most basic important proteins are biological catalysts, and very efficient ones. Indeed, even signal cascades are always realized by very efficient biochemical reactions.

Let's take the simplest example: an enzyme that accelerates a reaction. The simple fact is: that reaction, which is necessary for the biological environment (for metabolism, or reproduction, you name it), would never happen spontaneously, or it would happen at a ridiculously low speed. But we find in the cell a specific protein, very complex and efficient, which folds in such a way that it can, for example, bind very efficiently to the two components that must react, and makes them react one with the other against all "natural" biochemical laws. IOWs, it is a machine that performs a "local" function extremely well. Nothing like that exists in nature, outside of living beings.

Now, the local function, in itself, could have no special meaning. Obviously, it can be related to a reproductive advantage, or to any kind of biological advantage, only if correctly integrated in a complex system that needs just that function. That is, in few words, the concept of irreducible complexity. But the point I am trying to make here is the following: if the protein is not able to accelerate enormously that reaction, it is of no utility. So, when you reason that "proteins could perform some advantageous (i.e. promote reproduction) function in some organism in some environment at some time", you are reasoning abstractly, and forgetting that each protein must be able to perform its "wonderful" local function, to be able to help in any possible way. Otherwise, it is only a sequence of amino acids: some useless burden for the cell.

It is exactly that point that invalidates any neo darwinian explanation: those "local" functions are extremely complex, and separated in the space of sequences, as I have many times demonstrated by simple data taken from the proteome, in particular from the SCOP classification. Basic protein domains are usually longer than 100 AAs, sometimes much longer. Durston has found very high functional complexity in most protein families he has examined. Those basic domains are the essential foundation of all biochemical functions in the cell. They are many, they are complex, they are separated.
No neo darwinian explanation has ever been found for even one of them. These are facts. In front of these facts, the design explanation has huge credibility. It should be the main hypothesis in biological science, today. Each new development in our understanding of biological complexity, at all levels, adds strength and credibility to the design explanation, and in no way helps the neo darwinian theory, which is really reduced to a dogmatic myth. Science must go on. It must go on according to what is credible, and explains observed facts. That is certainly not true of the neo darwinian paradigm. The absolute abnormality is that such an unsupported paradigm is still accepted by most scientists as "truth". That can only be explained by a cognitive bias so huge that the only correct way to express it is "dogma".

You say:

If we could simply calculate for all possible proteins the proportion that are "functional", all would be well. If the proportion was small enough, you could claim they were "Irreducibly Complex" in that a vast number of unselectable steps would be required to get them from a short peptide (or even a long peptide) to a useful protein.

But that is exactly what all the facts point to! And you must consider that what we should look at is not "any functional protein" in absolute, but rather "any protein with a new, original local function that, by itself, can give a reproductive advantage in a certain pre-existing cell type". Neo darwinists like to dream of small variations that give reproductive advantages. Sometimes (very rarely) that happens; it is well known, it is supported by facts, and it is called "microevolution". But those adaptations are only "tweakings" of what already exists. In no way are they "steps" towards new, original sequences, with new, original complex local functions. That has never been shown to happen, for the very simple reason that it does not happen. That is a dream, a myth, completely unsupported by facts.

In other words, you can't separate the "function" of a protein from the job it does in keeping the organism alive and fecund, which will vary depending on the environment the organism is in, and, for multicellular organisms, the tissues in which it is expressed, and under what conditions.

I surely can separate the two things. It is very simple. If there is no local function, there can be no higher level of organization, no "job" at all. IOWs, if an enzyme does not work, it does not work. No environment can use it, because it does not work. That's why you have to explain the emergence of basic local functions, that means the emergence of new basic protein domains. Once a protein works, that is, it does the wonderful, miraculous biochemical job that it does, then it can be integrated in different contexts, for different higher functions, with different levels of expression in different tissues, and so on. But the beginning of everything is always there: the basic biochemical function, the wonderful, complex biochemical machine that makes something happen that would never happen otherwise. gpuccio
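[Editor's note: the dFSCI figure, as gpuccio defines it in this thread (the ratio of functional sequences to the search space, expressed in bits), reduces to one line of arithmetic once both numbers are estimated. The sketch below uses hypothetical values throughout, since estimating the functional fraction is exactly the contested step; the 10^-77 fraction is of the order sometimes cited from Axe's work and the 500-bit cutoff is Dembski's universal bound, both used here purely as placeholders.]

```python
# dFSCI as defined in this thread: -log2(functional fraction), in bits, where
# the search space for a length-L protein is 20**L (20 amino acids per site).
# All numbers below are hypothetical placeholders, not measured values.
from math import log2

def dfsci_bits(n_functional: float, length: int) -> float:
    """Functional complexity in bits: -log2(n_functional / 20**length)."""
    return -log2(n_functional / 20.0**length)

# Hypothetical 150-AA domain with 10^118 functional sequences out of 20^150
# (~1.4e195) possibilities, i.e. a functional fraction of roughly 10^-77:
bits = dfsci_bits(1e118, 150)
print(f"{bits:.0f} functional bits")          # about 256 bits

threshold = 500   # an assumed complexity cutoff; gpuccio's own may differ
print("design inferred" if bits > threshold else "null not rejected")
```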
My objection to ID is the positive claim that “intelligence was required” to account for proteins, or whatever.
Well seeing that you cannot explain how proteins arose, your "objection" amounts to whining. Joe
Well, not really, Upright Biped. But I guess we leave the judgement as an exercise for the reader. But sure, I should have said "emerge", not "evolve" (as I did correctly later in that thread). So I accept responsibility for the misunderstanding. I hope it is now clear. Elizabeth B Liddle
Translation:
You misunderstood my claim
Clearly, what evolution requires - didn't evolve
I've been certain of that all along
If you thought I was claiming otherwise, then that's absurd of you.
I'm mystified by why you thought I claimed that
Oh I see, I was arguing for it after all
Obviously I meant something different than my words
Honestly
cue the applause :) Upright BiPed
A small point, but an important point, Dr Torley (I shall digest the rest of your appendix later, if that's not too horrible a mixed metaphor!) - you write:
In the meantime, can you forgive us in the Intelligent Design community for being just a little skeptical of claims that “no intelligence was required” to account for the origin of proteins, of the first living cell (which would have probably required hundreds of proteins), of complex organisms in the early Cambrian period, and even of the appearance of a new species, in view of what has been learned about the prevalence of singleton proteins and genes in living organisms?
I'd like to make something really clear: I personally do not make ANY claim "that 'no intelligence was required' to account for the origin of proteins", and my position is that any such claim would be hard to defend scientifically. My objection to ID is the positive claim that "intelligence was required" to account for proteins, or whatever. In other words, ID is (in my view unjustifiably) attempting to reject a pantechnicon null of non-design; in contrast, mainstream science does not, and cannot, reject a pantechnicon null of design. This is because neither hypothesis is capable of serving as the null, because to reject a null, you have to be able to estimate the probability distribution of your data under that null, and under neither null can that probability distribution be computed. Elizabeth B Liddle
Hi everyone, I've updated this post with an Appendix, as I've revised my views on some key points. Comment is welcome. vjtorley
Ah, further down that same page, I get it right:
That's why I presented a specific proposal. I've thought it out a little more thoroughly, so here it is: I propose to devise a virtual world populated by virtual monomers (which I will refer to as vMonomers). Each of these monomers will have a set of "chemical" properties, i.e. they will have certain affinities and phobias (if that's the right word) and some of those affinities will be shallowly contingent. This if you like is the "Necessity" part of the virtual world – a set of simple rules that govern what happens when my vMonomers come into contact with each other. It will be entirely deterministic. In contrast, the way my vMonomers move around their virtual world will be entirely stochastic (virtual Brownian motion, if you like) so that the probability of any one of them moving in any given direction is completely flat – all directions are equiprobable. So we have Necessity, and we have Chance. And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself. However, those copies will not be perfect, and so I also foresee that once my self-reproducing structures have emerged they will evolve, in other words the most prevalent structure type in each generation will tend to change. As I say, I don't know that I can do this (although I believe it can be done!) If I succeeded, would you agree that information (meaningful information, i.e. the information required to duplicate a structure) had been created by Chance and Necessity?
So really, there should have been no confusion, UBP. Indeed, lower down that thread you yourself quote my later words:
And what I hope to demonstrate is that in that virtual world, self-reproducing structures will emerge. If I succeed, then the very fact that I have self-reproducing structures, means, I think, that information (in your sense) has been created, because each structure embodies the information required to make a copy of itself.
But I fully concede, that my word "evolve" in the earlier post was an error. (apologies for this derail - UBP we can take this to TSZ if you like). Elizabeth B Liddle
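[Editor's note: the skeleton of the virtual world Liddle proposes (deterministic affinity rules plus stochastic motion) is straightforward to sketch; what the sketch deliberately omits is the hard part, the emergence of self-replicators. All parameters and rules below are invented placeholders, not her actual design.]

```python
# Bare-bones skeleton of the proposed "vMonomer" world: Chance (Brownian motion,
# all directions equiprobable) plus Necessity (a fixed, deterministic bonding rule).
import random

SIZE, N_MONOMERS, STEPS = 30, 60, 10_000
random.seed(1)

monomers = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
             "type": random.choice("AB"),   # two hypothetical vMonomer species
             "bonded": None}
            for _ in range(N_MONOMERS)]

def brownian_step(pos):
    """Move one lattice step in a uniformly random direction (toroidal world)."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return ((x + dx) % SIZE, (y + dy) % SIZE)

def adjacent(p, q):
    """Neighbours on the torus (Chebyshev distance <= 1)."""
    dx = min(abs(p[0] - q[0]), SIZE - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), SIZE - abs(p[1] - q[1]))
    return dx <= 1 and dy <= 1

for _ in range(STEPS):
    m = random.choice(monomers)
    if m["bonded"] is None:          # a design choice: bonded pairs stop diffusing
        m["pos"] = brownian_step(m["pos"])
    for other in monomers:           # deterministic "chemistry": A bonds to B
        if (other is not m and m["bonded"] is None and other["bonded"] is None
                and m["type"] != other["type"] and adjacent(m["pos"], other["pos"])):
            m["bonded"], other["bonded"] = other, m

print(sum(m["bonded"] is not None for m in monomers), "of", N_MONOMERS, "bonded")
```

Whether structures built this way can ever copy themselves is, of course, the entire open question the thread is arguing about.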
Ah, I see you quoted me as saying:
And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve…
I agree that was ambiguous - I should have written "emerge" - although as soon as you have a population of self-replicators (as long as there is heritable variance in reproductive success) it will indeed evolve, which was the point I next made. Here is the whole of what I wrote, amended (I googled it - a link from you would have been helpful)
I'm going to start off with a "toy" chemistry – a virtual environment populated with units (chemicals, atoms, ions, whatever) that have certain properties (affinities, phobias, ambiphilic, etc) in a fluid medium where motion is essentially brownian (all directions equiprobable) unless influenced by another unit. I may have to introduce an analog of convection, but at this stage I'm not sure. And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will emerge and then evolve, in which each unit (or "organism" if you like, or "critter") has, encoded within it, the "recipe" for its own offspring. That way we will have a Darwinian process (if I achieve it) where I don't even specify a fitness function that isn't intrinsic to the "chemistry", that depends entirely on random motion ("Chance" if you like) and "necessity" (the toy chemistry) to create an "organism" with a "genome" that encodes information for making the next generation. Information "about" the next generation that is "sent" to the processes involved in replication. If I succeeded, would you accept that I had met the challenge, or do you foresee a problem? (I have to say, I'm not sure I can do it!)
Note that last sentence! Elizabeth B Liddle
Upright Biped:
You were claiming that Darwinian processes could certainly account for any “new information” introduced into the genome.
Yes indeed. And I would still stand by that claim.
This is the challenge you undertook. In other words, Dr Liddle, you were specifically trying to create a simulation to show that Darwinian processes could originate a self-replicating system. That was what you believed it could do, and that was what you intended to show.
Um, no. If you thought that was what I was claiming, you misunderstood. Perhaps you could track down the place where you think I said that. But clearly, as Darwinian processes require a self-replicating system to function you cannot generate a self-replicating system ab initio by Darwinian processes! If that's what you thought I was claiming, then I concede completely that it isn't possible. It would be absurd. But I'm mystified as to why you think I might have made such a claim. (And, btw, although it's nice to be back, it would be good if you could at least do me the courtesy of considering alternate possibilities to dishonesty when trying to account for my words. My being in error is one; your own misunderstanding of my position is another.) Elizabeth B Liddle
EL: “by setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge”.
EL: Darwinian processes can’t explain the origin of self-replicators
EL: I see no inconsistency between the statements
You are so terribly predictable Dr Liddle. The very moment I hung up the phone, I knew you'd simply claim there was "nothing inconsistent". Being a clever apologist, I knew you'd play on the great possibility that the average reader here wouldn't know the history of the conversation. But the truth of the matter is that the simulation you were trying to run was being done to defend a very specific statement that I had challenged you on. You were on UD taking the existence of the information system in the genome for granted. This is something materialists do quite frequently. You were claiming that Darwinian processes could certainly account for any "new information" introduced into the genome. Perhaps you'll remember the conversation, you were proclaiming the powers of the Darwinian process, and said:
I simply do not accept the tenet that replication with modification + natural selection cannot introduce “new information” into the genome. It demonstrably can, IMO, on any definition of information I am aware of.
To which I had responded:
Neo-Darwinism doesn't have a mechanism to bring information into existence in the first place. To speak freely of what it can do with information once it exists, is to ignore the 600lbs assumption in the room.
Obviously, that 600lbs assumption is the origin of the information system inside a self-replicator (i.e. "into existence in the first place"). And just as obvious, I was challenging you on the power of Darwinian processes to create such a system from scratch (i.e. "Neo-Darwinism doesn't have a mechanism"). This is the challenge you undertook. In other words, Dr Liddle, you were specifically trying to create a simulation to show that Darwinian processes could originate a self-replicating system. That was what you believed it could do, and that was what you intended to show. - - - - - - - - - - - - - - - - - - - - Your history here has taught me that your next move will be to simply repeat the denial of any inconsistency. This sort of thing has become expected. And as already stated, this will only send me back into the vault for more of your clarifying comments on the subject. Such as...
I’m going to start off with a “toy” chemistry – a virtual environment populated with units... And what I propose to do is that starting with a random distribution of these units, a self-replicating population of more complex units will evolve... That way we will have a Darwinian process where I don’t even specify a fitness function...
So as I suggested earlier (and given that hell will freeze over before you admit the obvious) perhaps you would prefer to just drop it. It seems to be far easier on you just to conclude that we've "misunderstood each other". :| Upright BiPed
Thanks for letting Elizabeth post again. Now we know, without a doubt, that she does NOT understand what Darwinian evolution entails. That also explains all of her equivocating. Perhaps that needs a thread- what does darwinian evolution entail... Joe
niwrad @66
Here you find the hierarchy of the human body. All higher organisms have similar functional nesting: chemical level, cells, tissues, organs, and systems. Each of the 10^14 cells (of 300 different types) of the body has its place and function in such hierarchy. This is not simply "order". This is real "complexity", or, better said, "organization", which only intelligent design can create.
In what sense is that "functional" nesting, niwrad? I would agree that all the parts serve the organism, in the sense that they are all there to keep it alive and healthy and able to breed, but while there is a hierarchy of composition (the organism contains organs contain tissues contain cells contain organelles contain molecules contain atoms etc) there is no functional hierarchy below the level of the organism itself, and an organism can consist of a single cell. Indeed the majority of cells in our bodies consist of single celled organisms (bacteria) with whom we live in symbiosis: we feed them; they protect us. The neurons do not serve the brain, and the brain does not serve the neurons. I guess you could say there is a hierarchy of necessity - I'd rather lose a bit of my foot, or a kidney than a bit of my brain - but that hierarchy doesn't correspond to the one you linked to. So I still don't buy your premise :) I would still say that the striking feature of multicelled (and even single celled) organisms is the feed-forward and feed-back loops between all parts and levels - their non-hierarchical nature, in other words.
Anyway, Elizabeth, don't worry if you don't yet grasp it. After all, whoever of us IDers, before such architectural masterpieces, doesn't fall on his knees, with his eyes filled with tears, and doesn't say "Dominus, non sum dignum", knows nothing of intelligent design, despite all our fine discourses. For some said: "You see their eyes overflowing with tears by what they recognise of the Truth."
Well, I certainly find the universe awe-inspiring, its human beings in particular. So we can agree on that, perhaps :) Elizabeth B Liddle
Elizabeth B Liddle #63 Here you find the hierarchy of the human body. All higher organisms have similar functional nesting: chemical level, cells, tissues, organs, and systems. Each of the 10^14 cells (of 300 different types) of the body has its place and function in such hierarchy. This is not simply "order". This is real "complexity", or, better said, "organization", which only intelligent design can create. Anyway, Elizabeth, don't worry if you don't yet grasp it. After all, whoever of us IDers, before such architectural masterpieces, doesn't fall on his knees, with his eyes filled with tears, and doesn't say "Dominus, non sum dignum", knows nothing of intelligent design, despite all our fine discourses. For some said: "You see their eyes overflowing with tears by what they recognise of the Truth." niwrad
gpuccio:
Elizabeth: I have read your posts here, and sometimes I have difficulties following your reasoning (this is not news, I suppose).
I will try to be clearer :)
Well, if all your concern is about the famous Dembski paper, I will not go into details: I have expressed my ideas many times about that. So I will offer some simple questions to you: a) Do you accept that functional specification is valid and simple?
I accept that if we could ascertain what proportion of theoretically possible amino acid sequences would fold into proteins that performed some useful function in some tissue in some organism in some environment (i.e. promoted survival and reproduction) we would have an estimate of the probability of such a protein by some process that randomly drew amino acids from a hypothetical hat. Whether that is a useful concept, I don't know - Durston et al and Hazen et al both think so. They may be right.
b) Do you accept that, once a function is objectively defined, we can in principle measure the functional complexity for that function in a specific context? For example, the subset of functional sequences vs the total search space?
Well, as long as you can define the search space, sure.
c) Do you accept that such a metric expresses the improbability of getting that functional result in a random system, given some simple assumptions, like an approximately uniform distribution of the results by random variation, and a correct estimate of the probabilistic resources of the system?
Well, the fine print here, matters, but broadly, yes.
Do you accept that such a model is also equivalent to the case of a random walk from a starting state to a completely sequence-unrelated functional state?
If I'm understanding you correctly above, yes.
d) Do you accept that such a reasoning can be applied to basic protein domains?
Not easily, because I don't know how you could compute how many of the theoretically possible proteins could perform some advantageous (i.e. promote reproduction) function in some organism in some environment at some time. Which is what you'd have to do if you wanted to compute, say the probability of the protein evolving. So I've never really understood the reasoning there.
e) Do you accept that no explicit algorithm, either simple or complex, is known for any basic protein domain that can explain its appearance, given that the so-called darwinian algorithm cannot be explicitly applied to any of them, given that no functional and naturally selectable intermediates are known for any of them?
That seems extremely likely.
And so on, and so on. I suppose you will not accept any of that, but as you can see there are much simpler and more realistic scientific problems at stake than “P(T|H)”. Biology is an empirical domain, and biological information is an empirical scientific problem to which neo-Darwinism refuses to give any acceptable scientific explanation, in the name of bad reasoning and bad philosophy falsely offered as “scientific explanations”.
But you have immediately stumbled on P(T|H) there - how do you compute the probability of the observed protein under your null hypothesis of non-design? I absolutely agree that "Biology is an empirical domain, and biological information is an empirical scientific problem". That's the core of my issue with attempts to infer Design from an estimate of the likelihood of a protein under some non-design null. If we could simply calculate for all possible proteins the proportion that are "functional", all would be well. If the proportion was small enough, you could claim they were "Irreducibly Complex" in that a vast number of unselectable steps would be required to get them from a short peptide (or even a long peptide) to a useful protein. But we can't do that calculation, because whether something is functional depends on what it is doing for the organism, which is the unit of selection (the thing that reproduces with heritable variance in reproductive success). In other words, you can't separate the "function" of a protein from the job it does in keeping the organism alive and fecund, which will vary depending on the environment the organism is in, and, for multicellular organisms, the tissues in which it is expressed, and under what conditions. Elizabeth B Liddle
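For concreteness, the quantity gpuccio's questions (b) and (c) describe reduces to a one-line calculation once a uniform random-draw null is granted; all of the difficulty Elizabeth points to is hidden in the functional count. A minimal sketch in Python, in which the functional count is a placeholder rather than a measured value:

    import math

    def functional_complexity_bits(functional_count, search_space_size):
        # -log2 of the functional fraction, assuming a uniform
        # random-draw null over the whole sequence space.
        return -math.log2(functional_count / search_space_size)

    # Illustrative numbers only: a 100-residue protein over 20 amino acids,
    # with a hypothetical 10^20 sequences performing the function.
    search_space = 20 ** 100     # roughly 1.3 x 10^130 sequences
    functional = 10 ** 20        # assumed for illustration, not measured
    print(functional_complexity_bits(functional, search_space))   # ~365.7 bits

The arithmetic is uncontroversial; the dispute above is entirely over whether a uniform draw is the "relevant chance hypothesis", and over how the functional count could ever be estimated.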
UB
Doesn’t the term “no initial breeding-with-variance population” mean the same thing as no “self-replicators”?
Yes.
If it does, then you clearly intended to demonstrate the rise of Darwinian self-replicators from a purely stochanistic environment.
Yes. (Well, not sure what you mean by "stochanistic" but from an environment in which there are no self-replicators, just non-self-replicating virtual molecules with virtual chemical and physical properties in a fluid environment, with stochastic brownian motion, yes).
This is in direct opposition to the words you typed upthread “[Darwinian processes] can’t explain the origin of those self-replicators”.
No, they can't. There is nothing contradictory about this. Darwinian processes can't explain the origin of self-replicators.
For you to suddenly claim otherwise is, well... Frankly, since the constant inconsistencies in your positions force me to highlight them in response – which only offends the image you have of your participation – perhaps it’s best if we just drop it.
Well, it is certainly clear that we are unable to understand each other UB. So yes, probably dropping it is a good idea. But I see no inconsistency between the statements of mine you quote, and struggle to see what inconsistency you find. Elizabeth B Liddle
niwrad: Darwinian processes cannot and do not generate organisms. Organisms are immense hierarchies of countless nested biological functions. Blind evolution, by definition, is unable in principle to create the least functional hierarchy. In hierarchical systems, a sub-function exists only because the parent function needs it. The parent function in turn exists only because a grand-parent function requires it, and so on all the way up. At the top level is the concept of the system as a whole hierarchy, which the designer originally conceives in his mind (teleology). Thus a functional hierarchy is simply incapable of being generated from the bottom. It has to begin at the top, with a complete view of what the final system will be (foresight). Darwinian processes “work” only at the bottom level, and are entirely devoid of the teleology/foresight required to create something for future use in a hierarchy. The unique all-overarching and all-comprehensive viewpoint necessary to create a functional hierarchy is completely missing at the bottom level.

That’s an interesting argument, but I think I dispute your premise. I don’t think organisms are “hierarchical”. They are homeostatic, and consist of interactive parts with feed-forward and feedback loops.

niwrad: I really don’t understand how an intelligent Lady like you can believe the nonsense of Darwinism. But you still have time to change your mind and come over to the ID side; you would be welcome.

Well, that’s very kind, and I assure you that if I find the arguments persuasive, I will happily change sides :) Elizabeth B Liddle
So that Elizabeth doesn’t miss it – Ernst Mayr in “What Evolution Is”, page 281: On natural selection being a pressure or force:
What is meant, of course, is simply that a consistent lack of success of certain phenotypes and their elimination from the population result in the observed changes in a population
On the role of chance:
The first step in selection, the production of genetic variation, is almost exclusively a chance phenomenon except that the nature of the changes at a given locus is strongly constrained. Chance plays an important role even at the second step, the process of elimination of the less fit individuals. Chance may be particularly important in the haphazard survival during periods of mass extinction.
Natural selection – the bad gets eliminated, and whatever works gets at least a chance of passing down its genetic material. And the variation is all unguided, i.e. happenstance. Joe
Elizabeth: I have read your posts here, and sometimes I have difficulties following your reasoning (this is not news, I suppose). Well, if all your concern is about the famous Dembski paper, I will not go into details: I have expressed my ideas many times about that. So I will offer some simple questions to you: a) Do you accept that functional specification is valid and simple? b) Do you accept that, once a function is objectively defined, we can in principle measure the functional complexity for that function in a specific context? For example, the subset of functional sequences vs the total search space? c) Do you accept that such a metric expresses the improbability of getting that functional result in a random system, given some simple assumptions, like an approximately uniform distribution of the results by random variation, and a correct estimate of the probabilistic resources of the system? Do you accept that such a model is also equivalent to the case of a random walk from a starting state to a completely sequence-unrelated functional state? d) Do you accept that such reasoning can be applied to basic protein domains? e) Do you accept that no explicit algorithm, either simple or complex, is known for any basic protein domain that can explain its appearance, given that the so-called Darwinian algorithm cannot be explicitly applied to any of them, and given that no functional and naturally selectable intermediates are known for any of them? And so on, and so on. I suppose you will not accept any of that, but as you can see there are much simpler and more realistic scientific problems at stake than "P(T|H)". Biology is an empirical domain, and biological information is an empirical scientific problem to which neo-Darwinism refuses to give any acceptable scientific explanation, in the name of bad reasoning and bad philosophy falsely offered as "scientific explanations". gpuccio
Elizabeth Liddle:
In other words, Dembski’s CSI is useless.
In your hands CSI is definitely useless. However, to those who don't have an agenda, CSI is very useful. Joe
Elizabeth, Darwin's entire point was design without a designer. Throughout his book he made reference to chance wrt variation. All just chance/happenstance, and if it aids in survival, it may get passed down.
It’s not essential to his theory that the variance is directed, but nor is it essential that it is undirected.
If one reads his book it is clear that it does matter. He posits unguided variation – again, design without a designer.
ID proponents (and a very few Strong Atheists) take the view that evolutionary theory essentially rules out Design.
Perhaps that is because that was Darwin's whole point – a point that Dawkins, Coyne and many others have echoed throughout the years.
They may be correct that there is no Designer (I would be inclined to agree), but it is not correct to say that evolutionary theory rules it out. It merely does not require it.
All evidence to the contrary, of course. Look at Lenski's long-running experiment. No new proteins in over 50,000 generations. No new protein machinery. Nothing that we can extrapolate into what this alleged theory of evolution claims.
For the Darwinian algorithm to result in an adapted population, all that is required is heritable variance in reproductive success.
Again, that is incorrect. You obviously do not understand what Darwin said. And you obviously do not understand what ID says.
“Happenstance” variation works fine. But the core of the theory isn’t that the variation is Happenstance,...
Yes, it is. That is what Darwin said and that is what all of his followers say – all of his educated followers, anyway. Ernst Mayr refutes Lizzie:
The first step in selection, the production of genetic variation, is almost exclusively a chance phenomenon except that the nature of the changes at a given locus is strongly constrained. Chance plays an important role even at the second step, the process of elimination of the less fit individuals. Chance may be particularly important in the haphazard survival during periods of mass extinction.- Mayr, page 281 "What Evolution Is"
Oops, so sorry, Elizabeth. Perhaps you need to go back and learn what the alleged theory of evolution actually is.
So I think that “guided” variance is a perfectly decent hypothesis – if a mechanism for that guidance to be effected could be postulated!
And we have!
So even under standard Darwinian theory, you’d expect to see variance-generation mechanisms evolve in such a manner as to optimise adaptation.
Unfortunately you do not understand standard Darwinian theory. Joe
Oh well, sorry about that niwrad, it doesn't seem to go to the proper time... but anyway, the overview of the system hierarchy of the bacterial cell they were working on is picked up around the 41:00 minute mark. bornagain77
Hmm, let me try the time link again: The Systems Architecture of a Bacterial Cell Cycle with Lucy Shapiro – video http://www.youtube.com/watch?feature=player_detailpage&v=8qiwOxVf1T4#t=2428s bornagain77
niwrad, you may be interested in the following video, which I have time-linked to the summary, and which was posted the other day by a UD commenter. In the Q&A she comments on the great strides made when physicists were allowed to contribute to the experimentation techniques of biologists: The Systems Architecture of a Bacterial Cell Cycle with Lucy Shapiro - video http://www.youtube.com/watch?&v=8qiwOxVf1T4#t=2489s Related notes: ExPASy - Biochemical Pathways - interactive schematic http://web.expasy.org/cgi-bin/pathways/show_thumbnails.pl Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html bornagain77
Elizabeth B Liddle #51
"I don’t know how the first Darwinian self-replicators came about – perhaps they were designed. But once you have them, Darwinian processes can and do generate information."
Darwinian processes cannot and do not generate organisms. Organisms are immense hierarchies of countless nested biological functions. Blind evolution, by definition, is unable in principle to create the least functional hierarchy. In hierarchical systems, a sub-function exists only because the parent function needs it. The parent function in turn exists only because a grand-parent function requires it, and so on all the way up. At the top level is the concept of the system as a whole hierarchy, which the designer originally conceives in his mind (teleology). Thus a functional hierarchy is simply incapable of being generated from the bottom. It has to begin at the top, with a complete view of what the final system will be (foresight). Darwinian processes "work" only at the bottom level, and are entirely devoid of the teleology/foresight required to create something for future use in a hierarchy. The unique all-overarching and all-comprehensive viewpoint necessary to create a functional hierarchy is completely missing at the bottom level. I really don't understand how an intelligent Lady like you can believe the nonsense of Darwinism. But you still have time to change your mind and come over to the ID side; you would be welcome. :) niwrad
Dr Liddle, you say "there is nothing inconsistent in what I’ve said." Yet in defending your original claim, you set out to show the exact opposite was true:
"by setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge”.
Doesn't the term "no initial breeding-with-variance population" mean the same thing as no "self-replicators"? If it does, then you clearly intended to demonstrate the rise of Darwinian self-replicators from a purely stochanistic environment. This is in direct opposition to the words you typed upthread "[Darwinian processes] can't explain the origin of those self-replicators". For you to suddenly claim otherwise is, well... Frankly, since the constant inconsistencies in your positions force me to highlight them in response - which only offends the image you have of your participation - perhaps it's best if we just drop it. Upright BiPed
UB: you mentioned neither Dembski nor Meyer in making your original claim, nor did you abandon your claim once I told you flatly that the challenge to you had nothing to do with Dembski or Meyer. Indeed I did not mention Dembski or Meyer initially, having assumed that on Dembski's blog we were talking about Dembski's definition of Specified Information (which is also Meyer's). And indeed I did recant my claim when you explained you were talking about something else, but agreed that it would be an interesting challenge, not to show that Darwinian processes can generate information, but that non-Darwinian processes could generate Darwinian processes, including processes that used an inert intermediary information channel. I also said quite clearly that I didn't know that I could do this, but I'd have a shot. I then said, again quite clearly, having got a new job and all, that it was on an indefinite back burner, where it remains. I hope that has cleared things up for you. However, I will also withdraw my claim that Darwinian processes can generate CSI by Dembski's definition. As Dembski's definition requires one to compute the probability of the Target under "the relevant chance hypothesis including evolutionary or material mechanisms", clearly any specified pattern produced by an evolutionary mechanism is, by definition, not CSI, because it was highly probable under evolutionary mechanisms! In other words, Dembski's CSI is useless. Elizabeth B Liddle
Elizabeth, thank you for your kind invitation to post at TSZ. I posted my nominally critical essay on CSI v2.0 (aka 1.22) and actually sided with one of your colleagues, MathGrrl/Patrick, in some respects. Thanks again for visiting. I think some of the criticisms from TSZ are substantive, and there are a few rare instances where I will side with the critics at your blog. scordova
Upright Biped: This is at odds with you previous position concerning ID. Your previous position was “contra ID claims” that (via Darwinian processes) biological “information could be generated without design”. No it isn't. I don't know how the first Darwinian self-replicators came about - perhaps they were designed. But once you have them, Darwinian processes can and do generate information. At least that remains my view. You of course disagree. But there is nothing inconsistent in what I've said. Elizabeth B Liddle
Sal: The wording in Dembski’s writings may be criticized, but I certainly didn’t come away from reading Dembski’s writings and conclude he meant only algorithmically simple patterns. Well, that's the implication of the Specification paper, but I agree that he backs off it even within that paper, and has said several times that function is also a Specification. That's one reason I think CSI is a mess; but the biggest problem with it, IMO, isn't the Specification part, although that is problematic (I think Eric agrees that Dembski's attempt to formalise it mathematically is not successful), but the P(T|H) part. Which is the point of my linked TSZ post, which was a counter-response to Winston Ewert's response to my CSI challenge earlier. I do think someone from the Evolutionary Informatics Lab needs to address that straight on, or at least concede that CSI as formulated in Dembski's 2005 paper isn't calculable for any but simple "relevant chance hypotheses", e.g. random draw, or a harmonic oscillator. Elizabeth B Liddle
Dr Liddle in 2013: "Darwin's mechanism only accounts for evolution once you have a population of self-replicators ... It can't explain the origin of those self-replicators."
This is at odds with your previous position concerning ID. Your previous position was "contra ID claims" that (via Darwinian processes) biological "information could be generated without design".
Dr Liddle in 2011: Well, my position is that IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes
You were then challenged with the ID claim that Darwinian processes could not account for the material conditions that underlie (make possible) the recorded information that Darwinian processes require in order to exist. You firmly disagreed with that claim, and set out to demonstrate that you could use Darwinian processes to cause the rise of recorded information from nothing but a stochastic environment. You were going to do so by:
"setting up a model in which there is no initial breeding-with-variance population, but only a world of Deterministic and non-deterministic rules (Necessities and Chance) from which I hope my self-replicators will emerge".
You then abandoned that demonstration once it was understood what was actually required in order to be successful, and that you would be required to demonstrate actual success. This shows, despite your original claim (and the subsequent obfuscation about what you meant by it*), that IDists have indeed demonstrated "what they consider the signature of intentional design" is not also a signature of Darwinian processes. You now seem prepared to agree. Are you now prepared to recant your original claim? If you are not prepared to recant your claim, then you should be prepared to fulfill your demonstration. I'm sorry if that puts you on the spot, but your continued participation in this debate often takes the form of a "where the rubber meets the road" thing - so the abruptness of having to support your claims seems particularly appropriate where you are concerned. As for my part, I am prepared to wager my participation in ID on the outcome of your demonstration. Are you? - - - - - - - *you mentioned neither Dembski nor Meyer in making your original claim, nor did you abandon your claim once I told you flatly that the challenge to you had nothing to do with Dembski or Meyer. Upright BiPed
Well, it’s a question of definition, rather than accuracy! Dembski, in his 2005 Specification paper, defines specification as “easy description of pattern”, and, more formally, as algorithmic compressibility, citing Chaitin, Kolmogorov and Solomonoff.
Those were specialized cases; Dembski went on to describe messages that were decoded via a Caesar cipher, so there is no constraint that the message itself be algorithmically simple. Coded messages are analogous to ZIP files and JPEG files (in fact ZIP and JPEG files are coded messages) -- they are maximally algorithmically complex as far as we can tell. The wording in Dembski's writings may be criticized, but I certainly didn't come away from reading them and conclude he meant only algorithmically simple patterns. As for his 2005 paper, I've already pointed out I side with MathGrrl/Patrick on some of his points. I don't even try to calculate CSI in "v2.0" (aka 1.22) anymore... It may also be a reasonable criticism that Bill uses the word "complex" to mean improbable. He wrote, in an e-mail he gave me permission to publish at ARN, that he had thought of using the notion of "Specified Improbability". IMHO, that would have been a far more accurate notion. The word "complexity" carries too many connotations and confusion factors.... scordova
Joe: That’s wrong. The heritable variance has to be happenstance, i.e. unguided/undirected. Well, not according to Darwin. He didn't know how variance was generated, and at one point favored Lamarck's theory. It's not essential to his theory that the variance is directed, but nor is it essential that it is undirected. The variance certainly has to come from somewhere. And all evidence suggests that variants are drawn from a really very narrow distribution of fitnesses, with a peak close to that of the parent organism or sequence. tbh, I think misunderstandings like this (for which "Darwinists" must take their share of blame) are at the bottom of a lot of the arguments over ID. ID proponents (and a very few Strong Atheists) take the view that evolutionary theory essentially rules out Design. They may be correct that there is no Designer (I would be inclined to agree), but it is not correct to say that evolutionary theory rules it out. It merely does not require it. For the Darwinian algorithm to result in an adapted population, all that is required is heritable variance in reproductive success. You can artificially introduce variance (by genetic engineering, for instance, or by pre-filtering your variants in a GA, I guess) but it isn't necessary. "Happenstance" variation works fine. But the core of the theory isn't that the variation is Happenstance, but that heritable variance in reproductive success will result in an increasing prevalence of the more successful variants - by definition. So much so that some people dismiss it as "tautological". It isn't - it's just a near-syllogism (and only a "near" syllogism, because the adaptation isn't absolutely inevitable, because of drift). So I think that "guided" variance is a perfectly decent hypothesis - if a mechanism for that guidance to be effected could be postulated! (i.e. some force to move the nucleotides around). And indeed, although I wouldn't call it "guidance", there's no reason, under Darwin's theory, to restrict the unit of selection to the organism - it can happen at the level of the population, and thus variance-generation mechanisms likely to produce robustness to environmental change will tend to be selected at population level (or, rather, to keep Eric sweet: "population adaptation with heritable variance in adaptive success will result in an increased prevalence of populations with variance-generating mechanisms that tend to promote rapid adaptation", but that's a bit of a mouthful!) So even under standard Darwinian theory, you'd expect to see variance-generation mechanisms evolve in such a manner as to optimise adaptation. Elizabeth B Liddle
Sal: That is only true in specialized cases and is not universally true. Mark Perakh and others tended to use this claim, but it is inaccurate. Well, it's a question of definition, rather than accuracy! Dembski, in his 2005 Specification paper, defines specification as "easy description of pattern", and, more formally, as algorithmic compressibility, citing Chaitin, Kolmogorov and Solomonoff. So Perakh is being perfectly accurate if he is using Dembski's definition. But I would agree with you that it is not a good definition if you want to exclude patterns generated by mechanical processes. Presumably this is why Vincent has suggested a way of eliminating obviously compressible patterns that are obviously not designed, calling them "ordered". However, to be fair to Dembski, he deals with this by including in his formula for "chi" the parameter P(T|H), which means he can still use his Kolmogorov compressibility method, and highly "ordered" sequences (produced, for instance, by some kind of oscillator) will still produce a low chi value, because the probability P of the Target T given the "relevant chance hypothesis", H, will be high, as that "relevant chance hypothesis" will include oscillatory hypotheses. But here Vincent and Dembski meet the same eleP(T|H)ant in the Room :) P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution. But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein. (I will crosspost this response in the thread at TSZ, and warmly invite others to do the same :) Elizabeth B Liddle
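For readers keeping score, the 2005 formula under discussion is, as best I can reconstruct it, chi = -log2[10^120 * phi_s(T) * P(T|H)], where 10^120 stands in for the universe's replicational resources and values above 1 are taken to warrant a design inference. A toy calculation under the one null everyone can compute, the fair coin, with phi_s(T) stipulated rather than derived:

    import math

    def chi(phi_s, p_t_given_h):
        # chi = -log2(10^120 * phi_s(T) * P(T|H)); values above 1
        # are taken to warrant a design inference.
        return -(120 * math.log2(10) + math.log2(phi_s) + math.log2(p_t_given_h))

    # T = "all heads", H = 100 independent fair coin flips: P(T|H) = 2^-100.
    # phi_s = 100 is a stipulated placeholder for the number of patterns
    # at least as simply describable as T, not a computed value.
    print(chi(phi_s=100, p_t_given_h=2.0 ** -100))   # about -305: no inference

The eleP(T|H)ant is visible even here: the arithmetic is easy precisely because a fair-coin H hands us P(T|H); for a protein, nobody has said what distribution H assigns.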
Elizabeth:
A better way of describing the Darwinian process, in my view, is as above: self-replication with heritable variance in reproductive success.
That's wrong. The heritable variance has to be happenstance, i.e. unguided/undirected. ID is OK with "heritable variance in reproductive success" - ID and YEC say that most of the variation is guided/directed. See "Not By Chance" by Dr Lee Spetner.
It must work, and it does work, which is why most IDists (and virtually all YECs) accept at least microevolution.
Again your ignorance of the debate is astounding, Lizzie. Perhaps you should take time off from blogging and actually learn what ID is about. Joe
Elizabeth Liddle:
But Bright Line is still Bright: Darwin’s mechanism only accounts for evolution once you have a population of self-replicators that replicate with heritable variance in reproductive success. It can’t explain the origin of those self-replicators.
Lizzie, if the OoL was designed, then the inference would be that they evolved by design. It is only if life arose via blind and undirected chemical processes that we would infer Darwinism for the subsequent evolution. Why is it that you cannot grasp that? And BTW, there still isn't any evidence that Darwinian processes can take a simple self-replicator and change it into something else. As a matter of fact, there isn't any evidence that Darwinian processes can do anything but break and deteriorate. So you have some work to do. Joe
B: Can be generated by a short algorithm
That is only true in specialized cases and is not universally true. Mark Perakh and others tended to use this claim, but it is inaccurate. "All coins heads" is algorithmically simple, but a ZIP file is algorithmically complex (probably maximally so). Both are examples of intelligent design. Biology has both algorithmically simple designs (homochirality, duplications) and algorithmically complex ones (the developmental instructions from embryo to full-grown human). scordova
Yes, Alan ignores my comments because he is too much of a coward to respond to them. He knows that I expose his ignorance, and he likes his ignorance, so he ignores me and prattles on. Nice job, Alan... Joe
@ Axel 37 Unless someone else mentions something of possible interest in a comment of theirs, I usually scroll past and generally don't respond to comments by Joe, BA77 or Kairosfocus. Life is too short. Alan Fox
So: given a pattern with A and B, how do we compute C?
Via physics, if possible. In physics we can sometimes compute the probability of systems occupying certain states by counting. This is only possible in some circumstances, and most of the time it is quite intractable. An example of this is the computation of entropy. It was no accident that Shannon borrowed the notion of entropy from statistical mechanics and thermodynamics, because much of his theory was rooted in counting. A set of fair coins is probably the easiest case in which to compute states via simple counting. Amino acid configurations in a protein, or DNA configurations in a DNA strand, are relatively easy to count. Some weighting factors can give even more refined probabilities if one really wants to be rigorous, because there are some very slight biases, but those biases become somewhat moot when dealing with long strands. PS: what made Shannon famous was not merely his counting method that led to the notion of bits, but his ability to relate bits to bandwidth and signal-to-noise ratio -- that was sheer genius on his part. Shannon is probably more famous for the most trivial aspect of his theory (the notion of the bit), but the paper that coined the term "bit" was really more about his famous bandwidth theorem, which is known as Shannon's theorem of communication. scordova
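Sal's counting point is easy to put in code: for equiprobable, independent states the bookkeeping is just a logarithm, which is why coins and polymer strands are the textbook cases. A sketch that ignores the slight biases he mentions:

    import math

    def bits(states_per_site, length):
        # log2 of the number of equally weighted configurations,
        # assuming independent sites and no bias between states.
        return length * math.log2(states_per_site)

    print(bits(2, 500))    # 500 fair coins:       500.0 bits
    print(bits(4, 300))    # 300-base DNA strand:  600.0 bits
    print(bits(20, 150))   # 150-residue protein: ~648.3 bits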
And thanks for the welcome guys :) Elizabeth B Liddle
Eric: And what is the important distinction you draw between (i) getting chemicals to form more complex chemicals/systems and eventually life, and (ii) getting complex chemicals to form more complex chemicals/systems and eventually different life? (And please don’t say “natural selection,” as it isn’t a force of any kind, just a label.) I would draw a distinction between scenarios with and without assemblies of molecules that self-replicate with heritable variation in reproductive success. And yes, that is another way of saying "natural selection", but as you say, "natural selection" is not a force. It's an anthropomorphic, quasi-teleological metaphor made by analogy with artificial selection, aka selective breeding. A better way of describing the Darwinian process, in my view, is as above: self-replication with heritable variance in reproductive success. Where that is present, adaptive evolution is the near-inevitable result, not because of some mysterious "force" but as a logical necessity: clearly, if self-replicators reproduce with heritable variance in reproductive success, those variants that tend to be more successful in the current environment will become more prevalent. This, as Stephen Meyer points out in the Prologue to his new book, was "Darwin's great insight" (I'll check the exact quote later). It must work, and it does work, which is why most IDists (and virtually all YECs) accept at least microevolution. But obviously it can't explain how the necessary conditions for Darwinian evolution to occur arose in the first place, i.e. a population of self-replicators with heritable variance in reproductive success. Elizabeth B Liddle
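Elizabeth's "near-syllogism" can be watched happening in a few lines of code: nothing is modelled except heritable types with unequal reproductive success, yet the prevalence shift falls out as bookkeeping. A toy sketch with invented parameters:

    import random

    def generation(pop, fitness, cap=1000):
        # Each individual leaves 0-2 offspring of its own heritable type,
        # each offspring surviving with that type's fitness probability.
        offspring = []
        for ind in pop:
            n = sum(random.random() < fitness[ind] for _ in range(2))
            offspring.extend([ind] * n)
        random.shuffle(offspring)    # cull to capacity without order bias
        return offspring[:cap]

    pop = ["A"] * 900 + ["B"] * 100
    fitness = {"A": 0.50, "B": 0.55}   # invented: B replicates slightly better
    for _ in range(50):
        pop = generation(pop, fitness)
    print(pop.count("B") / len(pop))   # typically far above the initial 0.10

Set the two fitnesses equal and the same loop shows drift instead; no "force" appears anywhere in it.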
Joe, re your #32, silence (from M. Renard) came the stern reply... They inhabit a different planet; actually an epistemological multiverse, where anything can mean anything, so logic has no prerogative. Axel
Welcome back, Elizabeth :) gpuccio
Welcome back Dr. Liddle! scordova
Oh, and thanks to Barry for approving my account. Elizabeth B Liddle
Eric: Indeed, very many evolutionists aren’t making any distinction between life and non-life at all. As soon as we can get a self-replicating polymer going (which certainly precedes anything that would be called a living organism), then evolution takes over. So, no, you are not correct that there is some bright-line distinction between origin of life theory and evolutionary theory. If anything, it would be between simple non-replicating polymers and everything else. Darwin himself took such a position in his letters, notwithstanding his “first form or forms” comment thrown into the 5th edition of The Origin. I think this is a fair comment. But what remains true, and I assume is what Alan was getting at, is that Darwinian processes require a population of somethings self-replicating with heritable variance in reproductive success. So the "OoL" issue is really semantics. If you don't call something "life" until it is really quite complex, then the "origin of life" could well include Darwinian processes. However, if you call "life" anything that replicates with heritable variance in reproductive success, then clearly Darwinian processes can't account for life by this definition. But Bright Line is still Bright: Darwin's mechanism only accounts for evolution once you have a population of self-replicators that replicate with heritable variance in reproductive success. It can't explain the origin of those self-replicators. Elizabeth B Liddle
Alan Fox:
ID proponents claim to have found an insurmountable barrier to the process of evolution. It is up to them to justify it.
No, Alan. We have found that neither natural selection nor any other accumulation of genetic accidents is capable of anything, and unfortunately for you, no one from your side can demonstrate otherwise. Ya see, Alan, it is up to YOU to provide POSITIVE evidence for the claims of your position. And you have failed to do so. IOW we don't have to justify anything, because you cannot. As Hitchens said: "That which can be asserted without evidence can be dismissed without evidence." So eat it. Joe
Alan Fox:
You keep hearing that the theory of evolution does not address the origin of life because it is a fact.
The two are directly linked. How life originated tells us how it evolved. Joe
Alan Fox:
Evolutionary theory does not address the origin of life.
Alan, please don't be ignorant. How living organisms originated is directly linked to how they evolved. If living organisms were designed, then they evolved by design. It is only if living organisms arose via blind and undirected chemical processes that we would infer Darwinism. That you and your ilk refuse to understand that says a lot about your agenda. Joe
Do the abiogenesis skeptics have some valid points?
I have the greatest respect for and fond memories of Robert Shapiro. Some would categorize him as a sceptic of abiogenesis theories as then presented. I don't personally think we'll ever know for sure about the origin of life on Earth. Like Professor Shapiro, I think there may be clues awaiting discovery beyond Earth. But, for now, that's just speculation. Planetary Dreams Alan Fox
Yeah, we keep hearing that out of one side of the mouth.
You keep hearing that the theory of evolution does not address the origin of life because it is a fact. Alan Fox
How did your childhood and relationship in your father play into all this? Does this insanity run in your family?
? Alan Fox
Alan @18:
Evolutionary theory does not address the origin of life.
Yeah, we keep hearing that out of one side of the mouth. But then out of the other side of the mouth we hear all these stories about Miller-Urey, comets with amino acids, protein formation through natural processes, etc. And, despite your rhetoric in this particular comment #18, we find many evolutionists and papers and textbooks that either explicitly or implicitly include the origin of life as a part of the overall theory. I included it purposely in my comment, as it is foundational to the materialist creation myth. Further, many of the same issues are present both for initial life and subsequent development of new forms. To be sure, one of the rhetorical gambits that necessitates the distinction is the absurd idea propounded by so many evolutionists that once life gets going, then hey, anything goes, because . . . wait for it . . . we got reproduction! Indeed, very many evolutionists aren't making any distinction between life and non-life at all. As soon as we can get a self-replicating polymer going (which certainly precedes anything that would be called a living organism), then evolution takes over. So, no, you are not correct that there is some bright-line distinction between origin of life theory and evolutionary theory. If anything, it would be between simple non-replicating polymers and everything else. Darwin himself took such a position in his letters, notwithstanding his "first form or forms" comment thrown into the 5th edition of The Origin. But, hey, I'm happy to separate the two for purposes of discussion if you want. So, even though you don't think the subsequent development and diversity of life on Earth required any guidance or intelligent input, would you acknowledge that the origin of life did require such intelligent guidance? Do the abiogenesis skeptics have some valid points? After all, evolutionary theory has nothing to say about it in your view. And what is the important distinction you draw between (i) getting chemicals to form more complex chemicals/systems and eventually life, and (ii) getting complex chemicals to form more complex chemicals/systems and eventually different life? (And please don't say "natural selection," as it isn't a force of any kind, just a label.) Eric Anderson
correction: relationship TO your father bornagain77
Mr Fox you state:
Dialogue is good, Phil. Try it someday!
What's to dialogue? You made a claim. It was shown to be false. You failed to acknowledge you made a false claim? Do you really want to honestly 'dialogue' as to why you always make false claims and fail to acknowledge them? Okie Dokie, I'm no psychologist but let's give it a try. Please tell me Mr. Fox about your pathological need to deny the overwhelming evidence of design in nature? How did your childhood and relationship in your father play into all this? Does this insanity run in your family? bornagain77
Hi everyone (including those over at the Skeptical Zone): I've been very busy over the last day or so, so I haven't been able to comment. I should be back in about 16 hours. Sorry for the delay. vjtorley
Dialogue is good, Phil. Try it someday! Alan Fox
...origin and diversity of life...
Eric, please don't be sloppy in language. Evolutionary theory does not address the origin of life. Alan Fox
Mr. Fox, you quote: …philosophy is dead. - Stephen Hawking It is funny for you to believe that, since, number one, John Lennox shows Stephen Hawking to be up to his eyeballs in philosophy in his book The Grand Design: John Lennox on Stephen Hawking's "The Grand Design" - video http://www.youtube.com/watch?v=6eHfhbP1K_4&feature=player_detailpage#t=669s Moreover, number two, you yourself are inextricably wed to the materialistic philosophy even though you have not one iota of empirical evidence, especially considering advances in quantum mechanics, that materialism is true. i.e. People in glass houses and all that, Mr. Fox! bornagain77
Alan Fox:
ID proponents claim to have found an insurmountable barrier to the process of evolution. It is up to them to justify it.
Presumably we are all talking about unguided evolution. True, ID proponents claim to have found an insurmountable barrier to unguided evolution. And they have done a good job of laying that out in terms of getting people to look at coordinated complexity, information content, semiotic states of living systems. In contrast, evolutionary proponents have never provided any solid evidence that the origin and diversity of life has arisen through purely natural process, nor indeed any rational reason to think that it in fact could ever arise. It is the evolutionists who admit, like every rational and thinking person does, that life looks designed. The onus is therefore very much on the evolutionist who claims that such appearance is an illusion and that they have a designer substitute adequate to the task. It is the evolutionist making the, quite outrageous, claim in this regard. So, despite your a priori approach to the issue and your attempt to make evolution the default assumption unless evolutionary critics can prove a negative, the ball in fact is in the evolutionists' court to provide a rational mechanism for the materialist creation myth they propagate. Eric Anderson
...the apparent isolation of protein function...
Unwarranted assumption, William. You can't conclude this without actually looking at how things are. Reasoning without looking at reality is not productive.
...philosophy is dead.
Stephen Hawking. Alan Fox
@ VJTorley and gpuccio And just to convey Elizabeth Liddle's invitation to dip your toe in at TSZ. Lizzie has tried to comment here, and, notwithstanding KF's claims, her comment has not made it through. How bad could it be? Vincent, you have waded into Ed Feser's blog unscathed, haven't you? Alan Fox
The issue, obviously, remains open to new investigations.
Well, indeed, gpuccio. There is a lot of work building on Keefe and Szostak. We're scratching the surface. Maybe someone will develop a predictive tool where you can feed in a novel residue sequence and the 3D conformation will pop out as a prediction. Then we'll see! Alan Fox
If we don’t know how rare functional protein sequences are, why is Darwinism assumed to be true until otherwise proven false?
*chuckles* This is such a non-sequitur I'm not sure whether it's worth replying. It is (as far as I can tell) a central tenet of ID propaganda that evolutionary theory fails because of the probability of "finding" viable solutions in total search space. It's not an issue for biologists as they do not have to paint in the target after the search because there is no search. As far as biologists are concerned, they see what they see and look for explanations. ID proponents claim to have found an insurmountable barrier to the process of evolution. It is up to them to justify it. Alan Fox
The real key here is that specification is not something that is amenable to precise mathematical calculation. Rather, it is a logical, functional, semantic concept. That in our use of language we can use a single word or a small number of simple words to describe something does not mean the something itself is simple. Eric Anderson
VJ Torley, Good work. These are topics that need to be discussed. scordova
gpuccio, IMO, that way of reasoning generates great difficulties,
Agreed, but perhaps for different reasons. I posted on this matter which Eric referenced: https://uncommondesc.wpengine.com/intelligent-design/siding-with-mathgrrl-on-a-point-and-offering-an-alternative-to-csi-v2-0/ scordova
Good discussion. We've discussed this specification definitional issue a couple of times recently: https://uncommondesc.wpengine.com/intelligent-design/siding-with-mathgrrl-on-a-point-and-offering-an-alternative-to-csi-v2-0/ https://uncommondesc.wpengine.com/intelligent-design/csi-revisited/ Eric Anderson
Scientific theories never take into account what scientists don't know they don't know; science can only make the best theories it can based on what is known, which is why there is widespread agreement even outside of the ID community that the apparent isolation of protein function in a vast space of apparent non-function needs to be addressed in some way. William J Murray
Alan
Vincent, you miss my point about rarity of function in proteins. We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don’t know, you can’t calculate rarity.
...And in the meantime, Darwinism wins by default? If we don't know how rare functional protein sequences are, why is Darwinism assumed to be true until proven false? The burden of proof is on NDT to show that these natural processes can produce functional protein sequences. It's never been done, yet NDT is never questioned. It's the only scientific discipline I can think of that behaves like this. Coincidentally, it also happens to be the only one that is the bedrock of most of its scientists' worldviews. uoflcard
gpuccio: we could, conceivably, know how rare a protein is in the space of possible arrangements of amino acids, as Durston, and Hazen also, propose. The problem that VJtorley raises, and I do myself, is: how do we compute the space of not "possible" but "probable" arrangements, given material hypotheses? This is what Dembski mandates in his 2005 paper, and Winston Ewert re-iterates here, i.e. that the "relevant chance hypothesis", the null, isn't necessarily random draw from all possible arrangements, but must take into account "Darwinian and other material mechanisms". If it turns out that the probable space under the "relevant chance hypothesis" is extremely small, then, by VJtorley's argument, that pattern becomes "ordered", not "complex". That's why I keep asking how you propose to compute the null under the "relevant chance hypothesis". Elizabeth B Liddle
Alan Fox: As many times discussed, we have many indications of the rarity of function in the protein space. The issue, obviously, remains open to new investigations. However, the Durston method gives a very good idea of how rare a function is, and how great its functional complexity is. gpuccio
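As I understand the Durston method gpuccio cites, it reads functional complexity off an alignment of sequences already known to perform the function: at each site, the drop from the ground-state uncertainty (log2 20 for amino acids) to the observed Shannon uncertainty, summed over sites, gives the measure in Fits. A rough sketch on an invented, far-too-small alignment:

    import math
    from collections import Counter

    def fits(alignment):
        # Summed per-site reduction in Shannon uncertainty, relative to
        # a uniform ground state over the 20 amino acids.
        ground = math.log2(20)
        total = 0.0
        for column in zip(*alignment):
            n = len(column)
            h = -sum((c / n) * math.log2(c / n) for c in Counter(column).values())
            total += ground - h
        return total

    aln = ["MKVL", "MKIL", "MRVL"]   # toy alignment, for illustration only
    print(round(fits(aln), 2))       # ~15.45 Fits across these four sites

A real estimate needs a large family of aligned functional sequences; the sketch only shows where the number comes from.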
And, cross-posted from TSZ, my question to you both:
So a pattern has CSI if it: A: Has a large amount of Shannon Information B: Can be generated by a short algorithm C: Has a low probability of being generated by a material process If it has A and B but not C, it is “ordered”, but not necessarily designed. If it has A and C but not B, it is unordered, and not necessarily designed. If it doesn’t have A it doesn’t have C, whether or not it has B If it has all three, it is Designed. So: given a pattern with A and B, how do we compute C?
Elizabeth B Liddle
I believe Barry offered to let me post. It seems I have to re-register, so let's see if this works. gpuccio and VJtorley: you are extremely welcome to post directly at TSZ, where there are several current threads relevant to this discussion. Elizabeth B Liddle
Vincent, you miss my point about rarity of function in proteins. We have, as yet, no way to predict functionality in unknown proteins. Without knowing what you don't know, you can't calculate rarity. Alan Fox
Graham2: Why don't you say what you think, here? gpuccio
VJ: Very good work. I agree with your comments about Dembski's definitions of specification, even if, as everybody knows, I am not really satisfied with his particular definition, in the famous paper, referring to semiotic description. IMO, that way of reasoning generates great difficulties, and it is no accident that our "enemies" stick to that concept for their attacks, having nothing better. The simple truth is, IMO, that any kind of specification, however defined, will do, provided that we can show that the specification defines a specific subset of the search space that is too small to be found by a random search, and which cannot reasonably be found by some natural algorithm. Now, here the discussion becomes more subtle, and many detailed clarifications should be made. They have all been made, in the course of my long discussions here and at TSZ. Mark Frank, for example, tried to use an argument about random sequences pointing to specific items in a database. His argument was obviously senseless, although he certainly intended it in perfect good faith. You cannot take some result of a random search, and then a posteriori look for a meaning for it that is in no way a general and repeatable meaning. A non-general specification works only as an "a priori" specification. To be clearer: I cannot take 3 random numbers, let's say 34, 5476, 12347, and then give the specification "My function is a sequence of numbers such that it points to items number 34, 5476 and 12347 of this database", and then affirm that I got that sequence from a random search. It should be obvious even to a child that, if you get that result "again" from a random search, then you really have got a very unlikely result given your pre-specification. But the simple fact that you got that result "before" your specification, and then simply built your contingent specification on that particular result, is in no way unlikely. It is not a valid functional specification, and means nothing. Instead, if you get the decimal sequence of the first 100 digits of pi from a random number generator, that would be something! In that case, it is of no relevance whether you gave the specification before or after the event. The sequence of pi is a very special sequence with an objective functional meaning (it describes the ratio of circumference to diameter) that does not depend on any arbitrary contingency. So, this is the kind of argument with which we have to deal, even from intelligent and honest people like Mark Frank. In the end, I will say it again: the important point is not how you specify, but that your specification identifies either a) an utterly unlikely subset as a pre-specification, or b) an utterly unlikely subset which is objectively defined without any arbitrary contingency, as in the case of pi. In the second case, the specification need not be a pre-specification. Functional specification is a perfect example of the second case. Provided that the function can be objectively defined and measured, the only important point is how complex it is: IOWs, how small the subset of sequences that provide the function as defined is, within the search space. That simple concept is the foundation for the definition of dFSCI, or any equivalent metric. It is simple, it is true, it works. gpuccio
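gpuccio's distinction between post-hoc and independent specification is easy to make concrete; the three target numbers below are his own, and the draw range is an invented stand-in for his database:

    import random

    # A specification written AFTER looking at the outcome always matches,
    # so the match carries no improbability. Only a specification fixed
    # independently of the outcome makes a match surprising.
    draw = [random.randrange(100000) for _ in range(3)]

    post_hoc_target = list(draw)            # built from the result itself
    print(draw == post_hoc_target)          # always True: no information

    independent_target = [34, 5476, 12347]  # fixed in advance
    print(draw == independent_target)       # True with probability (1/100000)**3

The pi case escapes the dilemma because "the first 100 digits of pi" is an objective, contingency-free description, so it does not matter whether it is written down before or after the draw.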
Why not remove the middleman and engage in the discussion over at TSZ? Or unblock a few people so they can post here. Graham2