Uncommon Descent: Serving The Intelligent Design Community

Lobbing a grenade into the Tetrapod Evolution picture

A year ago, Nature published an educational booklet entitled 15 Evolutionary Gems (as a resource for the Darwin Bicentennial). Gem number 2 is Tiktaalik, a well-preserved fish that has been widely acclaimed as documenting the transition from fish to tetrapod. Tiktaalik was an elpistostegalian fish: a large, shallow-water-dwelling carnivore with tetrapod affinities yet possessing fins. Unfortunately, until Tiktaalik, most elpistostegid remains were poorly preserved fragments.

“In 2006, Edward Daeschler and his colleagues described spectacularly well preserved fossils of an elpistostegid known as Tiktaalik that allow us to build up a good picture of an aquatic predator with distinct similarities to tetrapods – from its flexible neck, to its very limb-like fin structure. The discovery and painstaking analysis of Tiktaalik illuminates the stage before tetrapods evolved, and shows how the fossil record throws up surprises, albeit ones that are entirely compatible with evolutionary thinking.”

Just when everyone thought that a consensus had emerged, a new fossil find is reported – throwing everything into the melting pot (again!). Trackways of an unknown tetrapod have been recovered from rocks dated 10 million years earlier than Tiktaalik. The authors say that the trackways occur in rocks that: “can be securely assigned to the lower-middle Eifelian, corresponding to an age of approximately 395 million years”. At a stroke, this rules out not only Tiktaalik as a tetrapod ancestor, but also all known representatives of the elpistostegids. The arrival of tetrapods is now considered to be 20 million years earlier than previously thought and these tetrapods must now be regarded as coexisting with the elpistostegids. Once again, the fossil record has thrown up a big surprise, but this one is not “entirely compatible with evolutionary thinking”. It is a find that was not predicted and it does not fit at all into the emerging consensus.

“Now, however, Niedzwiedzki et al. lob a grenade into that picture. They report the stunning discovery of tetrapod trackways with distinct digit imprints from Zachełmie, Poland, that are unambiguously dated to the lowermost Eifelian (397 Myr ago). This site (an old quarry) has yielded a dozen trackways made by several individuals that ranged from about 0.5 to 2.5 metres in total length, and numerous isolated footprints found on fragments of scree. The tracks predate the oldest tetrapod skeletal remains by 18 Myr and, more surprisingly, the earliest elpistostegalian fishes by about 10 Myr.” (Janvier & Clement, 2010)

The Nature Editor’s summary explained: “The finds suggests that the elpistostegids that we know were late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates.” Henry Gee, one of the Nature editors, wrote in a blog:

“What does it all mean?
It means that the neatly gift-wrapped correlation between stratigraphy and phylogeny, in which elpistostegids represent a transitional form in the swift evolution of tetrapods in the mid-Frasnian, is a cruel illusion. If – as the Polish footprints show – tetrapods already existed in the Eifelian, then an enormous evolutionary void has opened beneath our feet.”

For more, go here:
Lobbing a grenade into the Tetrapod Evolution picture
http://www.arn.org/blogs/index.php/literature/2010/01/09/lobbing_a_grenade_into_the_tetrapod_evol

Additional note: The Henry Gee quote is interesting for the words “elpistostegids represent a transitional form”. In some circles, transitional forms are ‘out’ because Darwinism presupposes gradualism and every form is no more and no less transitional than any other form. Gee reminds us that in the editorial office of Nature, it is still legitimate to refer to old-fashioned transitional forms!

Comments
Aleta, check out vjtorley's posts starting at 310 and especially 315. Your concerns regarding CSI being a pure chance hypothesis are specifically addressed. Also, here is a link to a PDF of a Dembski paper addressing them.
tribune7
January 22, 2010 at 06:34 AM PDT
Well done, vjtorley.tribune7
January 22, 2010 at 06:26 AM PDT
Cabal, "The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection." Does not involve the supernatural in the least. Why would you hold intelligence to be synonymous with supernatural? "Other evidence challenges the adequacy of natural or material causes to explain both the origin and diversity of life." I'd say the "other evidence" clause refers to evidence other than that presented by ID.
tribune7
January 22, 2010 at 06:07 AM PDT
As to Aleta's scenario in comment 169: what a load of tripe. How did the side with a 1 get sticky, Aleta? So yes, if there is agency involvement for the initial conditions then all bets are off, and chance/necessity in this case would be due to the agency involvement. Jerry and VJ, you guys are wasting your time. That is fine if you don't care, but trying to educate people who just don't care to learn is a fruitless enterprise.
Joseph
January 22, 2010 at 05:52 AM PDT
In order to be a candidate for natural selection a system must have minimal function: the ability to accomplish a task in physically realistic circumstances. – M. Behe, page 45 of “Darwin’s Black Box”
He goes on to say:
Irreducibly complex systems are nasty roadblocks for Darwinian evolution; the need for minimal function greatly exacerbates the dilemma. – page 46
Joseph
January 22, 2010 at 05:35 AM PDT
Aleta:
Yes, the “pure chance hypothesis” of just computing the odds of all the components happening at once rather than taking possible step-wise creation into account (configuration rather than history) is the “tornado in the junkyard” argument.
Yet ID does not make that argument. Regardless of what you say or think, the design inference considers both chance and necessity acting together. Now it seems that you refuse to accept that fact. And that is why discussing this with you is a waste of time. Also, if someone could demonstrate there is a possible step-wise creation via blind and undirected processes, then the design inference would be in trouble. Mutation and selection – step-wise? Well, selection only "works" when function is present. And without selection, all you have is some just-so story.
Joseph
January 22, 2010 at 05:34 AM PDT
When I wrote that it is a mistake for ID advocates "to hitch their wagon to the faulty “tornado in a junkyard” argument,” Jerry wrote, "ID makes no such argument." Yes, the "pure chance hypothesis" of just computing the odds of all the components happening at once rather than taking possible step-wise creation into account (configuration rather than history) is the "tornado in the junkyard" argument. That's precisely what I have been discussing. But you have agreed that in theory step-wise creation will yield a different probability than pure chance, and I'm satisfied with that at this time.Aleta
January 22, 2010 at 04:42 AM PDT
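The point Aleta raises in the comment above can be made concrete with a toy model. The sketch below is a hypothetical illustration only: the target, alphabet size, and retention rule are assumptions of mine, not anything either commenter specified. It compares the probability of hitting an n-component target in a single trial with the cost of a stepwise search in which each correct component, once found, is retained.

```python
import random

# Toy model (illustrative assumptions only): a target of n positions, each drawn
# uniformly from an alphabet of k symbols.

def one_shot_probability(n, k):
    """Probability that a single random draw matches all n positions at once."""
    return (1.0 / k) ** n

def stepwise_trials(n, k, rng=random.Random(0)):
    """Number of random draws needed when each correct position, once hit, is kept
    (cumulative selection). Each position is a separate geometric search."""
    trials = 0
    for _ in range(n):
        while True:
            trials += 1
            if rng.randrange(k) == 0:   # '0' stands for the one acceptable symbol
                break
    return trials

n, k = 20, 4
print(one_shot_probability(n, k))   # ~9.1e-13: all-at-once is astronomically unlikely
print(stepwise_trials(n, k))        # typically on the order of n*k = 80 draws
```

The model only shows that the two calculations differ when a retention mechanism exists; whether anything like that mechanism was available before selectable function existed is exactly what the thread disputes.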
DEFINITION FIVE (Durston et al., 2007; Abel, 2009): [Note: In the quotes below, X denotes “X with the subscript f,” and SUM denotes “sigma” – VJT.] Abel, 2009, cites the definition in Durston et al., 2007:
1) Prescriptive Information (PI) ... PI refers not just to intuitive or semantic information, but specifically to linear digital instructions using a symbol system (e.g., 0’s and 1’s, letter selections from an alphabet, A, G, T, or C from a phase space of four nucleotides). PI can also consist of purposefully programmed configurable switch settings that provide cybernetic controls. 2) Bona fide Formal Organization ... By “formal” we mean function-oriented, computationally halting, integrated-circuit producing, algorithmically optimized, and choice-contingent at true decision nodes (not just combinatorial bifurcation points). Note that statistical order and pattern have no more to do with function and formal utility than does maximum complexity (randomness). Neither order nor complexity can program, compute, optimize algorithms, or organize. A law of physics also contains very little information because the data it compresses is so highly ordered. The best way to view a parsimonious physical law is as a compression algorithm for reams of data. This is an aspect of valuing Occam’s razor so highly in science. Phenomena should be explained with as few assumptions as possible. The more parsimonious a statement that reduces all of the data, the better [188, 189]. A sequence can contain much order with frequently recurring patterns, yet manifest no utility. Neither order nor recurring pattern is synonymous with meaning or function. Prescriptive Information (PI) cannot be reduced to human epistemology. To attempt to define information solely in terms of human observation and knowledge is grossly inadequate. Such anthropocentrism blinds us to the reality of life’s objective genetic programming, regulatory mechanisms, and biosemiosis using symbol systems .... Well, what about a combination of order and complexity? Doesn’t that explain how prescriptive information comes into being? Three subsets of linear complexity have been defined in an abiogenesis environment. These subsets are very helpful in understanding potential sources of Functional Sequence Complexity (FSC) as opposed to mere Random Sequence Complexity (RSC) and Ordered Sequence Complexity (OSC). FSC requires a third dimension not only to detect, but to produce formal utility. Neither chance nor necessity (nor any combination of the two) has ever been observed to produce non trivial FSC. Durston and Chiu at the University of Guelph developed a method of measuring what they call functional uncertainty (H). They extended Shannon uncertainty to measure a joint variable (X, F) , where X represents the variability of data, and F its functionality. This explicitly incorporated the empirical knowledge of embedded function into the measure of sequence complexity: H(X(t)) = -SUM[P(X(t))* log P(X(t))] (2) where X denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). The state variable t, representing time or a sequence of ordered events, can be fixed, discrete, or continuous. Discrete changes may be represented as discrete time states. Mathematically, the above measure is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states.
My comments: Much the same as for Definition Four, except that the authors’ discussion of functionality is much more technically advanced in this 2009 paper. FSC is simply a biological version of FCSI. The authors provide a very detailed account of bio-functionality, but do not attempt to provide a definition of the term “specified.” (See Definitions One and Two above for this.) That’s all for the time being. It’s taken many hours to put all this together. I’ll be back later.
vjtorley
January 22, 2010 at 04:16 AM PDT
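For readers who want to see how a Durston-style measure behaves on data, here is a minimal sketch. It is a simplification of the published method, not a reproduction of it: it assumes an aligned set of sequences that all perform the function, uses log2(alphabet size) as the ground-state uncertainty, and sums the per-site reduction in Shannon uncertainty. The toy alignment is made up for illustration.

```python
import math
from collections import Counter

def site_uncertainty(column):
    """Shannon uncertainty H (in bits) of one alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def functional_bits(alignment, alphabet_size=20):
    """Simplified Durston-style figure: ground-state uncertainty log2(alphabet_size)
    minus the observed functional-state uncertainty, summed over sites."""
    ground = math.log2(alphabet_size)
    length = len(alignment[0])
    return sum(ground - site_uncertainty([seq[i] for seq in alignment])
               for i in range(length))

# Hypothetical five-residue peptide fragments sharing a function (illustration only).
alignment = ["MKTAY", "MKSAY", "MKTAY", "MRTAY"]
print(round(functional_bits(alignment), 1))   # about 20 bits for this toy alignment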
DEFINITION FOUR (Abel and Trevors, 2005.) Quotes from 2005 paper by Abel and Trevors:
"Sequence complexity falls into three qualitative categories 1. Random Sequence Complexity (RSC), 2. Ordered Sequence Complexity (OSC), and 3. Functional Sequence Complexity (FSC). Random Sequence Complexity (RSC) A linear string of stochastically linked units, the sequencing of which is dynamically inert, statistically unweighted, and is unchosen by agents; a random sequence of independent and equiprobable unit occurrence. Ordered Sequence Complexity (OSC) A linear string of linked units, the sequencing of which is patterned either by the natural regularities described by physical laws (necessity) or by statistically weighted means (e.g., unequal availability of units), but which is not patterned by deliberate choice contingency (agency). Ordered Sequence Complexity is exampled by a dotted line and by polymers such as polysaccharides. OSC in nature is so ruled by redundant cause-and-effect "necessity" that it affords the least complexity of the three types of sequences. The mantra-like matrix of OSC has little capacity to retain information. Functional Sequence Complexity (FSC) A linear, digital, cybernetic string of symbols representing syntactic, semantic and pragmatic prescription; each successive sign in the string is a representation of a decision-node configurable switch-setting – a specific selection for function. FSC is a succession of algorithmic selections leading to function. Selection, specification, or signification of certain "choices" in FSC sequences results only from non-random selection. These selections at successive decision nodes cannot be forced by deterministic cause-and-effect necessity. If they were, nearly all decision-node selections would be the same. They would be highly ordered (OSC). And the selections cannot be random (RSC). No sophisticated program has ever been observed to be written by successive coin flips where heads is "1" and tails is "0."… Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC)…. In summary, Sequence complexity can be 1) random (RSC), 2) ordered (OSC), or functional (FSC). OSC is on the opposite end of the bi-directional vectorial spectrum of complexity from RSC. FSC is usually paradoxically closer to the random end of the complexity scale than the ordered end. FSC is the product of non-random selection. FSC results from the equivalent of a succession of integrated algorithmic decision node "switch settings." FSC alone instructs sophisticated metabolic function. Self-ordering processes preclude both complexity and sophisticated functions. Self-ordering phenomena are observed daily in accord with chaos theory. But under no known circumstances can self-ordering phenomena like hurricanes, sand piles, crystallization, or fractals produce algorithmic organization. Algorithmic "self-organization" has never been observed despite numerous publications that have misused the term. 
Bona fide organization always arises from choice contingency, not chance contingency or necessity.
My comments: Neither RSC nor OSC constitutes an example of specified information. Only FSC does that. FSC is simply a biological version of FCSI. The authors provide a detailed account of bio-functionality, but do not attempt to provide a definition of the term “specified.” (See Definitions One and Two above for this.)vjtorley
January 22, 2010 at 04:11 AM PDT
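One way to see the point the quoted passage makes about Shannon measures is to compute per-symbol uncertainty for three kinds of strings. The sketch below is my own illustration, not the authors' procedure: the "FSC-like" string is an arbitrary protein-looking sequence, and the calculation only shows that Shannon uncertainty separates order from randomness while saying nothing about function.

```python
import math
import random
from collections import Counter

def bits_per_symbol(s):
    """Per-symbol Shannon uncertainty of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(1)
osc_like = "A" * 60                                                             # ordered repetition
rsc_like = "".join(random.choice("ACDEFGHIKLMNPQRSTVWY") for _ in range(60))    # random draw
fsc_like = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVK"       # arbitrary protein-like text

for name, s in [("OSC-like", osc_like), ("RSC-like", rsc_like), ("FSC-like", fsc_like)]:
    print(name, round(bits_per_symbol(s), 2))
# The ordered string scores near 0 bits/symbol; the random and protein-like strings both
# score high, which is why the authors argue a function-aware measure is needed.
```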
DEFINITION THREE (Hazen et al., 2007 and Kalinsky, 2008) What Hazen and Kalinsky call functional information is one kind of functional complex specified information (FCSI). [Note: In the quotes below, E denotes “E with the subscript x,” and log denotes “log to base 2” – VJT.] Hazen’s definition: Quote from 2007 paper by Hazen et al.:
Complex emergent systems of many interacting components, including complex biological systems, have the potential to perform quantifiable functions. Accordingly, we define “functional information,” I(E), as a measure of system complexity. For a given system and function, x (e.g., a folded RNA sequence that binds to GTP), and degree of function, E (e.g., the RNA–GTP binding energy), I(E) = -log[F(E)], where F(E) is the fraction of all possible configurations of the system that possess a degree of function >= E. Functional information, which we illustrate with letter sequences, artificial life, and biopolymers, thus represents the probability that an arbitrary configuration of a system will achieve a specific function to a specified degree.
Kalinsky, 2008: [Note: In the quotes below, E denotes “E with the subscript x,” I denotes “I with the subscript nat,” and log denotes “log to base 2” – VJT.]
Measuring functional information: A method to measure functional information has recently been published by Hazen et al. whereby functional information is defined as I(E) = -log[M(E) / N], where E is the degree of function x, M(E) is the number of different configurations that achieves or exceeds the specified degree of function x, >= E, and N is the total number of possible configurations…. (Kalinsky cites Hazen in footnote 2.) [T]he highest level of functional information that natural processes could reasonably be expected to produce for a given function would be the case where only one functional configuration would reasonably be found in R trials, or I = -log[1-(1-0.5)^(1/R)]. (3)
Kalinsky also includes a method of inferring intelligent design for life on Earth:
Given that there is no known upper limit for the amount of functional information a mind can produce, for any effect requiring or producing functional information, intelligent design is the more likely explanation if I(E) > I. (4) The greater the difference between I(E) and I, the more likely it is that intelligent design was required. It will be assumed, for simplicity, that the probability that mindless natural processes can achieve I is 1 and decreases probabilistically for I(E) > I. The probability that intelligent design can achieve I(E) will be assumed to be 1 for any finite amount of functional information. This is a reasonable assumption, given our observations of what intelligence can do and the apparent absence of any upper limit. It is usually assumed that the origin and diversification of life is not a blind search. Actual mutations, insertions, deletions, and genetic drift may be chance events, but natural selection essentially guides the search and, hence, the search is not blind… Of course, this raises the question, does natural selection, itself, require intelligent design? The fatal mistake made by many who appeal to natural selection is the assumption that natural selection, itself, does not require intelligent design. Although natural selection is credited with somehow discovering the right combination of nucleotides to code for, say, proteins like SecY or RecA, there is a great deal of vagueness about how it actually is supposed to do this, and not just for two proteins, but for thousands. Natural selection requires a fitness function. If a given protein is a product of natural selection operating within a fitness landscape, then sufficient functional information required to find that protein in an evolutionary search must be encoded within the fitness function… To summarize: if natural selection or a fitness function are credited with producing a given amount of functional information, then if that functional information exceeds I, by the method proposed in this article, ID is required to properly configure the fitness function. Regardless of whether one prefers a genetic approach or a metabolic approach, we do know that at some point, proteins must be produced, or at least the information coding for stable, folded proteins must be achieved. We can, therefore, take all origin of life scenarios and put them into a 'black box,' which performs an evolutionary search and outputs the stable folded proteins that are permitted by physics. It is not necessary to know what the processes within this black box do; all we need to know is the output. The output can be evaluated two ways, one way is to assume that the black box is performing a blind search which, of course, requires no intelligent design, and the other way is to assume that some sort of fitness function is operating within the black box which may or may not require intelligent design, depending upon how much functional information is required for the output. To estimate I for a prebiotic, origin of life search, we must estimate the number of trials available for a blind search. We will then be in a position to estimate I and compare it with the functional information required to produce a minimal genome to see if a fitness function would be necessary that would require intelligent design. Since we do not know what processes could perform the search, let us be extremely generous. Taylor et al. have estimated that the mass of the earth would equal about 10^47 proteins, of 100 amino acids each.
If we suppose that the entire set of 10^47 proteins reorganized once per year over a 500 million year interval (about the estimated time period for pre-biotic evolution), then that search permits about 10^55 options to be tried. Using Eqn. (3), I = approx. 185 bits of functional information. Of course, this scenario is much more generous than any scenario under consideration, but at least we will not be underestimating I. If I(E) requires more than I, then we can assume that either a fitness function requiring intelligent design must be included in the black box, or intelligent design is operating in some other fashion to properly encode the functional information.
My comments: It should be clear from the foregoing that when Hazen (and by extension, Kalinsky) write about functional information, they mean information that is complex, specified and functional – i.e. FCSI. Although they don’t define “specified” as such, it is of course true that FCSI is a subset of CSI. It should also be clear that the authors’ definition of functional information pertains to a specific function x. It is also interesting that Kalinsky doesn’t see natural selection as something opposed to intelligent design, but as something which may require intelligent design, if it is “rigged” enough. Kalinsky’s design detection approach is different to Dembski’s, insofar as it makes use of estimates regarding the abundance of amino acids on the early Earth. (It also omits the possibility of life arising in space.) It would be interesting to compare the two methods and see if they come up with similar results for which patterns are products of intelligent design.vjtorley
January 22, 2010 at 04:08 AM PDT
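The two formulas quoted above can be checked numerically. The sketch below transcribes them directly, using a numerically stable rewriting of Kalinsky's equation (3), since 1 - 0.5^(1/R) underflows in floating point for R around 10^55. The trial count reproduces the generous prebiotic scenario described in the quote; the inputs for the Hazen example are made-up placeholders.

```python
import math

def functional_information(m_functional, n_total):
    """Hazen et al.: I(E_x) = -log2( M(E_x) / N )."""
    return -math.log2(m_functional / n_total)

def i_nat(trials):
    """Kalinsky's eqn (3), -log2[1 - (1 - 0.5)**(1/R)], rewritten with expm1/log
    so it does not underflow when R is astronomically large."""
    return -math.log2(-math.expm1(math.log(0.5) / trials))

# Quoted scenario: 10^47 proteins reorganizing once per year for 5*10^8 years.
R = 1e47 * 5e8
print(round(i_nat(R), 1))       # ~185.6 bits, matching the "approx. 185 bits" in the text

# Hazen-style measure with placeholder numbers (illustration only):
print(round(functional_information(m_functional=1e20, n_total=1e65), 1))   # 149.5 bits
```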
DEFINITION TWO (continued) Dembski (2005) contains a very detailed discussion of what “easily describable” means with regard to specified complexity, as well as a context-independent procedure for ruling out chance. I’ve taken out the heavy math and the card examples, and tried to keep the focus on biology, as you requested: [Note: In the quotes below, PHI denotes “Phi with the subscript s,” and log denotes “log to base 2” – VJT.]
Specificity [W]hat makes [a] pattern … a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It’s this combination of pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes [a] pattern … a specification. This intuitive characterization of specification needs now to be formalized. We begin with an agent S trying to determine whether an event E that has occurred did so by chance according to some chance hypothesis H …S notices that E exhibits a pattern T…As a pattern, T is typically describable within some communication system. In that case, we may also refer to T, described in this communication system, as a description. S, to identify a pattern T exhibited by an event E, formulates a description of that pattern. To formulate such a description, S employs a communication system, that is, a system of signs. S is therefore not merely an agent but a semiotic agent. Each S [i.e. each agent – VJT] can therefore rank order these patterns in an ascending order of descriptive complexity, the simpler coming before the more complex, and those of identical complexity being ordered arbitrarily. Given such a rank ordering, it is then convenient to define PHI as follows: PHI(T) = the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T. Thus, if PHI(T) = n, there are at most n patterns whose descriptive complexity for S does not exceed that of T. [Now] imagine a dictionary of 100,000 (= 10^5) basic concepts. There are then 10^5 1-level concepts, 10^10 2-level concepts, 10^15 3-level concepts, and so on. If “bidirectional,” “rotary,” “motor-driven,” and “propeller” are basic concepts, then the molecular machine known as the bacterial flagellum can be characterized as a 4-level concept of the form “bidirectional rotary motor-driven propeller.” Now, there are approximately N = 10^20 concepts of level 4 or less, which therefore constitute the specificational resources relevant to characterizing the bacterial flagellum. Next, define p = P(T|H) as the probability for the chance [i.e. undirected – VJT] formation for the bacterial flagellum. T, here, is conceived not as a pattern but as the evolutionary event/pathway that brings about that pattern (i.e., the bacterial flagellar structure). Moreover, H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms. We may therefore think of the specificational resources as allowing as many as N = 10^20 possible targets for the chance formation of the bacterial flagellum, where the probability of hitting each target is not more than p. Factoring in these N specificational resources then amounts to checking whether the probability of hitting any of these targets by chance is small, which in turn amounts to showing that the product Np is small. The negative logarithm to the base 2 of this last number, –logN*p, we now define as the specificity of the pattern in question. Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom PHI measures specificational resources, the specificity sigma is given as follows: sigma = –log[ PHI(T) * P(T|H) ]. Note that T in PHI(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern). What is the meaning of this number, the specificity, sigma? 
To unpack sigma, consider first that the product PHI(T) * P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That’s what PHI(T) * P(T|H) computes… Note that putting the logarithm to the base 2 in front of the product (PHI(T) * P(T|H)) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger specificity. Specified Complexity Let us now return to our point of departure, namely, an agent S trying to show that an event E that has occurred is not properly attributed to a chance hypothesis H. Suppose that E conforms to the pattern T and that T has high specificity, that is, – log [ PHI(T) * P(T|H) ] is large or, correspondingly, PHI(T) * P(T|H) is positive and close to zero. Is this enough to show that E did not happen by chance? No. What specificity tells us is that a single archer with a single arrow is less likely than not (i.e., with probability less than 1/2) to hit the totality of targets whose probability is less than or equal to P(T|H) and whose corresponding patterns have descriptive complexity less than or equal to that of T. But what if there are multiple archers shooting multiple arrows? …It depends on how many archers and how many arrows are available. More formally, if a pattern T is going to be adequate for eliminating the chance occurrence of E, it is not enough just to factor in the probability of T and the specificational resources associated with T. In addition, we need to factor in what I call the replicational resources associated with T, that is, all the opportunities to bring about an event of T’s descriptive complexity and improbability by multiple agents witnessing multiple events. If you will, the specificity PHI(T) * P(T|H) (sans negative logarithm) needs to be supplemented by factors M and N where M is the number of semiotic agents (cf. archers) that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen (cf. arrows). Just because a single archer shooting a single arrow may be unlikely to hit one of several tiny targets, once the number of archers M and the number of arrows N are factored in, it may nonetheless be quite likely that some archer shooting some arrow will hit one of those targets. As it turns out, the probability of some archer shooting some arrow hitting some target is bounded above by M * N * PHI(T) * P(T|H). 
If, therefore, this number is small (certainly less than 1/2 and preferably close to zero), it follows that it is less likely than not for an event E that conforms to the pattern T to have happened according to the chance hypothesis H… Moreover, we define the logarithm to the base 2 of M *N * PHI(T) * P(T|H) as the context-dependent specified complexity of T given H, the context being S’s context of inquiry: CHI-tilde = –log[ M* N * (PHI(T)) *P(T|H)]. Note that the tilde above the Greek letter chi indicates CHI-tilde’s dependence on the replicational resources within S’s context of inquiry. As defined, CHI-tilde is context sensitive, tied to the background knowledge of a semiotic agent S and to the context of inquiry within which S operates. Even so, it is possible to define specified complexity so that it is not context sensitive in this way. Theoretical computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history. This number sets an upper limit on the number of agents that can be embodied in the universe and the number of events that, in principle, they can observe. Accordingly, for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M*N will be bounded above by 10^120. We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as CHI = –log[ 10^120 * (PHI(T)) * P(T|H)]. [T]here is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article The Chance of the Gaps . It follows that if (10^120 * PHIS(T) * P(T|H)) < 1/2 or, equivalently, that if CHI = –log[ 10^120 * PHI(T) * P(T|H)] > 1, then it is less likely than not on the scale of the whole [observable] universe, with all replicational and specificational resources factored in, that E should have occurred according to the chance hypothesis H. Consequently, we should think that E occurred by some process other than one characterized by H. Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that CHI = –log[ 10^120 * PHI(T) * P(T|H)] > 1, we therefore define specifications as any patterns T that satisfy this inequality. In other words, specifications are those patterns whose specified complexity is strictly greater than 1. As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T. 
Moreover, given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately PHI(T) = 10^20 (for definiteness, let’s say S here is me; any native English speaker with some knowledge of biology and the flagellum would do). It follows that –log[ 10^120 * PHI(T) * P(T|H)] > 1 if and only if P(T|H) < 0.5 * 10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure (for definiteness, let’s say the flagellar structure in E. coli). Is P(T|H) in fact less than 0.5 * 10^-140, thus making T a specification? The precise calculation of P(T|H) has yet to be done.
My comments: Now can you all see where the Boolean comes in? Either the pattern in question will satisfy the inequality or it won’t. Only if it satisfies the inequality can we infer intelligent design. It may also have occurred to some of you that the simplicity of description is dependent on the language in which it is formulated. Dembski is aware of this, and defines PHI(T) in terms of the simplest description in any of the languages used by the agent S. Dembski is also offering scientists a challenge: all you have to do is show me an undirected mechanism whereby the bacterial flagellum can arise with a probability of greater than 10^-140, and I’ll stop claiming that it was designed.vjtorley
January 22, 2010 at 03:20 AM PDT
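Since the definition above is essentially arithmetic once PHI(T) and P(T|H) are supplied, the quoted formula can be transcribed directly. The sketch below uses the flagellum figures given in the quote (PHI(T) of about 10^20, the 10^120 replicational bound); the P(T|H) values are placeholders, because, as the quote itself notes, that probability has not actually been calculated.

```python
import math

def specified_complexity(phi, p_t_given_h, replicational_bound=1e120):
    """Dembski (2005), as quoted above: chi = -log2( 10^120 * PHI(T) * P(T|H) )."""
    return -math.log2(replicational_bound * phi * p_t_given_h)

phi = 1e20   # specificational resources estimated for the flagellum pattern in the quote

# chi > 1 requires 10^120 * 10^20 * P(T|H) < 1/2, i.e. P(T|H) < 0.5 * 10^-140:
print(0.5 / (1e120 * phi))                        # 5e-141, the bound stated in the quote

# Placeholder probabilities on either side of that bound (hypothetical values only):
print(specified_complexity(phi, 1e-150) > 1)      # True  -> pattern would count as a specification
print(specified_complexity(phi, 1e-130) > 1)      # False -> chance is not eliminated on this criterion
```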
Let's take the alphabet too: (God gave Moses clay tablets, even twice. Now when we have the Internet, why can’t he have a web page?) Be that as it may, WRT invention of the alphabet: The ancients wrote on clay tablets too. To begin with, the sound A was represented by a pictogram of the head of an ox, Apis. Looking at the A, you may see the two horns at the bottom with the snout at the top. That actually is the ox head rotated 90 degrees. The cause for that is that the orientation of the tablet when being written at a later time was changed from a vertical to a horizontal position. I suppose it all means that the ancients realized that the sound ‘A’, the initial vowel of Apis, could be represented by the image of Apis. I learned this from John Allegro’s “The Sacred Mushroom and the Cross” that I read forty years ago. A fascinating book, although I don’t think I agree with all of Allegro’s inferences. But it gives a fascinating insight in how a scientist works. To recreate, to read and speak the Mesopotamian language! Inventing from scratch things we don't have a clue about, don't have a name for, that is not easy!Cabal
January 22, 2010 at 03:18 AM PDT
Upright biped,
"Cabal, where did we get the idea for a wheel?"
Just looking at a dung beetle in action?
Cabal
January 22, 2010 at 02:33 AM PDT
DEFINITION TWO (Dembski, 2008; Dembski, 2005) The following extracts are taken from Dembski, W. A. and Wells, J. "The Design of Life." 2008. Foundation for Thought and Ethics, Dallas. The definitions are similar to those in Dembski’s 2005 paper, “Specification” (see below).
Information (p. 314): Literally the act of giving form or shape to something. Because giving form to a thing rules out other forms it might take, information theory characterizes information as the reduction of possibilities of uncertainty. In classical information theory, the amount of information in a string of characters is inversely related to the probability of the occurrence of that string. Hence, the more improbable the string, the more uncertainty is reduced by identifying it and therefore the more information it conveys. Information defined in this way provides only a mathematical measure of improbability or complexity. It does not establish whether a string of characters conveys meaning, performs a function, or is otherwise significant.
Descriptive complexity (p. 311): A measure of the difficulty needed to describe a pattern.
Probabilistic complexity (p. 318): A measure of the difficulty for chance-based processes to reproduce an event.
Specified complexity (p. 320): An event or object exhibits specified complexity provided that (1) the pattern to which it conforms identifies a highly improbable event (i.e. has high PROBABILISTIC COMPLEXITY) and (2) the pattern itself is easily described (i.e. has low DESCRIPTIVE COMPLEXITY). Specified complexity is a type of INFORMATION.
Complex specified information (p. 311): Information that is both complex and specified. Synonymous with SPECIFIED COMPLEXITY.
Functional information (p. 313): (1) Information in the base sequence of a species’ DNA that codes for structures capable of performing biological functions; much of this functional information exhibits specified complexity. (2) More generally, patterns embodied in material structures that enable them to perform functions.
(To be continued)vjtorley
January 22, 2010 at 02:25 AM PDT
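The glossary's "classical information theory" entry above is the standard surprisal measure, which a short calculation makes concrete. The example string length and alphabet below are my own placeholders, not anything from the book.

```python
import math

def information_bits(probability):
    """Classical measure described above: information = -log2(probability of the string)."""
    return -math.log2(probability)

# Hypothetical example: a particular 10-letter string drawn uniformly from a 26-letter alphabet.
p = (1 / 26) ** 10
print(round(information_bits(p), 1))   # ~47.0 bits of improbability ("complexity");
                                       # the number says nothing about whether the string
                                       # means or does anything, which is the glossary's caveat
```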
DEFINITION ONE (Meyer, 2009) Meyer’s definition of CSI appears on pages 106-107 of Meyer's book, and again on pages 352-353.
Pages 106-107: Complex sequences exhibit an irregular, non-repeating arrangement that defies expression by a general law or computer algorithm... Information theorists say that repetitive sequences are compressible, whereas complex sequences are not. To be compressible means a sequence can be expressed in a shorter form or generated by a shorter number of characters... Information scientists typically equate "complexity" with "improbability," whereas they regard repetitive or redundant sequences as highly probable... In our parable, ... Smith's sequence [the ten digits comprising Jones's telephone number] was specifically arranged to perform a function, whereas Jones's [a random sequence of ten digits] was not. For this reason, Smith's sequence exhibits what has been called specified complexity, while Jones's exhibits mere complexity. The term specified complexity is, therefore, a synonym for specified information or information content. Page 352: Dembski notes that we invariably attribute events, systems, or sequences that have the joint properties of "complexity" (or small probability) and "specification" to intelligent causes - to design - not to chance or physical-chemical necessity. Complex events or sequences of events are extremely improbable and exhibit an irregular arrangement that defies description by a simple rule, law or algorithm. A specification is a match or correspondence between an observed event and a pattern or set of functional requirements that we know independently of the event in question. Events or objects are "specified" if they exhibit a pattern that matches another pattern that we know independently. Pages 352-353 (referring to students at a lecture who inferred intelligent design - i.e. a set-up - when they saw a "randomly" selected student open a combination lock on the first try): When John (my plant) turned the dial in three ways to pop the lock open, the other students realized that the event matched a set of independent functional requirements - the requirements for opening the lock that were set when its tumblers were configured... My students perceived an improbable event that matched an independent pattern and met a set of independent functional requirements. Thus for two reasons, the event manifested a specification as defined above. Pages 359-360: Since specifications come in two closely related forms, we detect design in two closely related ways. First, we can detect design when we recognize that a complex pattern of events matches or conforms to a pattern that we know from something else we have witnessed... Second, we can detect design when we recognize that a complex pattern of events has a functional significance because of some operational knowledge that we possess about, for example, the functional requirements or conventions of a system. If I observe someone opening a combination lock on the first try, I correctly infer an intelligence cause rather than a chance event. Why? I know that the odds of guessing the combination are extremely low, relative to the probabilistic resources, the single trial available. Pages 364-365, 367: Do the sequence of bases in DNA match a pattern that we know independently from some other realm of experience? If so, where does that pattern reside? ... While certainly we do not see any pattern in DNA molecule that we recognize from having seen such a pattern elsewhere, we - or at least molecular biologists - do recognize a functional significance in the sequences of bases in DNA based upon something else we know. 
As discussed in chapter 4, since Francis Crick articulated the sequence hypothesis in 1957, molecular biologists have recognized that the sequence of bases in DNA produce a functionally significant outcome - the synthesis of proteins. Yet as noted above, events that produce such outcomes are specified, provided they actualize or exemplify independent functional requirements (or "hit" independent functional targets). Because the base sequences in the coding region of DNA do exemplify such independent functional requirements (and produce outcomes that hit independent functional targets in combinatorial space), they are specified in the sense required by Dembski's theory... The nucleotide base sequences in the coding regions of DNA are highly specific relative to the independent requirements of protein function, protein synthesis, and cellular life. To maintain viability, the cell must regulate its metabolism, pass materials back and forth across its membranes, destroy waste materials, and do many other specific tasks. Each of these functional requirements, in turn, necessitates specific molecular constituents, machines, or systems (usually made of proteins) to accomplish these tasks. As discussed in chapters 4 and 5, building these proteins with their specific three-dimensional shapes depends upon the existence of specific arrangements of nucleotide bases in the DNA molecule. For this reason, any nucleotide base sequence that directs the production of proteins hits a functional target within an abstract space of possibilities.... The chemical properties of DNA allow a vast ensemble of possible arrangements of nucleotide bases. Yet within that set of combinatorial possibilities relatively few will - given the way the molecular machinery of the gene-expression system works - actually produce functional proteins. This smaller set of functional sequences, therefore, delimits a domain (or target or pattern) within a larger set of possibilities. Moreover, this smaller domain constitutes an independent pattern or target, since it distinguishes functional from non-functional sequences, and the functionality of nucleotide base sequences depends on the independent requirements of protein function. Therefore, any actual nucleotide sequence that falls within this domain or matches one of the possible functional sequences corresponding to it "hits a functional target" and exhibits a specification. Accordingly, the nucleotide sequences in the coding regions of DNA are not only complex, but also specified. Therefore, according to Dembski, the specific arrangements of bases in DNA point to prior intelligent activity...
My comments: I think Meyer’s definition of “specified” marks an advance on Dembski’s definition (below), which defines “specified” as being easily describable. However, the difference between the two definitions is relatively insignificant: as I argued above, if we know a pattern independently, it will either be because we have seen such a pattern before, or because it satisfies a functional requirement that we can understand from investigating it. Because we can readily make sense of a specified pattern, it follows that a specified pattern will be easily describable in our language.vjtorley
January 22, 2010 at 01:38 AM PDT
Mustela and Heinrich and others who may be interested: You’ve been asking for a clear definition of FCSI, and you’ve commented that the various definitions given in the ID literature employ different terminology, leading to confusion. You’ve also commented that some of the articles make too much use of examples drawn from card games. So I’ve decided to bring all the definitions together on this thread, in a simplified form, making use of extracts from the ID literature – the “meat” of the argument, as it were. After that, I’ll give some concrete examples of how FCSI can be computed in a biological context. Finally, I’ll argue that ID proponents have a very strong argument to show that life could not have originated by undirected processes and was therefore designed by an intelligent agent. Where should you start reading? If you want to read something online that attempts to quantify CSI, I suggest you consult the following articles: (1) Abel, D. “The Capabilities of Chaos and Complexity,” in International Journal of Molecular Sciences, 2009, 10, pp. 247-291, at http://mdpi.com/1422-0067/10/1/247/pdf . (2) Durston K., Chiu D., Abel D. and Trevors J., “Measuring the functional sequence complexity of proteins,” in Theoretical Biology and Medical Modelling, 2007, 4:47 at http://www.tbiomed.com/content/pdf/1742-4682-4-47.pdf . (3) “Intelligent Design: Required by Biological Life?” by Kalinsky, K. D. at http://www.newscholars.com/papers/ID%20Web%20Article.pdf . (4) Hazen, R.M., Griffen, P.L., Carothers, J.M. & Szostak, J.W. 2007. "Functional information and the emergence of biocomplexity," in PNAS 104, 8574-8581, at http://www.pnas.org/content/104/suppl.1/8574.full . (5) Abel, D. and Trevors, J. "Three subsets of sequence complexity and their relevance to biopolymeric information," in Theoretical Biology and Medical Modelling, 2005, 2:29, doi:10.1186/1742-4682-2-29 at http://www.tbiomed.com/content/2/1/29 . (6) Dembski, W. A. "Specification: The Pattern that Signifies Intelligence." August 15 2005. Version 1.22. Available at http://www.designinference.com/documents/2005.06.Specification.pdf . In addition, I’d recommend that you read the following books: (7) Dembski, W. A. and Wells, J. "The Design of Life." 2008. Foundation for Thought and Ethics, Dallas. (8) Meyer, S. C. “Signature in the Cell.” 2009. Harper One, New York. Summary of my conclusions (a) There are five overlapping definitions floating around in the ID literature, but they are mutually compatible, and the differences between them are relatively unimportant. Meyer’s (2009) definition of specified complexity is the clearest. Meyer also defines functional complex specified information (FCSI). (b) Functional complex specified information (FCSI) is a subset of complex specified information (CSI). More precisely, FCSI is just CSI that meets a set of independent functional requirements. Some CSI is non-functional; it simply matches a pre-specified pattern. (c) Complex specified information (CSI) is information that is both complex (i.e. highly improbable) and specified. An event is "specified" if it exhibits a pattern that matches another pattern that we know independently – either because we have seen such a pattern before, or because it satisfies a functional requirement that we can readily understand from investigating it. Because we can readily make sense of a specified pattern, it follows that a specified pattern will be easily describable in our language. 
(d) Information is just a mathematical measure of improbability or complexity. (e) Looking at these articles, I have so far identified two methods for identifying patterns that require intelligent design: Dembski’s probability bound and Kalinsky’s approach. (There may be others; I’m still reading through Dembski’s and Meyer’s books.) Both of these methods make use of estimates regarding what undirected processes can do. Interestingly, Kalinsky’s method explicitly considers the possibility that natural selection may be rigged in favor of producing life and complex organisms, by an intelligent process. Kalinsky expressly says that we should not assume natural selection is mindless. Now let’s have a look at the various definitions of FCSI in the ID literature.vjtorley
January 22, 2010 at 01:35 AM PDT
tribune7, What is true, this:
ID is science. It can be falsified. It does not involve the supernatural in the least.
or that: The theory of intelligent design (ID) holds that certain features of the universe and of living things are best explained by an intelligent cause rather than an undirected process such as natural selection. Other evidence challenges the adequacy of natural or material causes to explain both the origin and diversity of life.Cabal
January 22, 2010 at 01:34 AM PDT
Mr Jerry, It is hard to predict what my reaction to that situation might be, but I find the combination of "inevitable" and "incredibly fine tuned" to be an unlikely one.Nakashima
January 21, 2010 at 09:27 PM PDT
"“Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye.” I’m not a biologist and I can’t answer that question." I am not a biologist either but can understand the arguments made by biologists. None have ever made an argument to support the evolution of complex novel characteristics. Doesn't that tell you something? "In fact, that topic has not been one I’ve addressed, or had any interest in addressing." It should be because that is the essence of the debate. Essentially you have no idea whether one of ID's core beliefs is based on good evidence or not. Someone should not criticize people when they do not understand why they hold their positions. It is like saying I do not care what is true, it is better done another way. That is a hard position to sell anyone. It is an extremely arrogant one to suggest that deception is better than truth.jerry
January 21, 2010 at 09:12 PM PDT
"As I have made clear, it is not design per se that I am arguing against. From the very beginning I have stated that what I am claiming is that the “pure chance hypothesis” that is used as an argument for ID is a flawed argument. I have in fact pointed out that I think it is detrimental to the larger goal, for those who have this goal, of advocating for design to hitch their wagon to the faulty “tornado in a junkyard” argument." ID makes no such argument.jerry
January 21, 2010 at 09:03 PM PDT
Jerry writes, "Since you are in the question asking mode which is the modus operandi of the anti ID people here, maybe you should start answering question. Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye. And if you cannot, then would you admit that ID has a point by questioning naturalistic evolution for such capabilities." I have had a very limited goal in this discussion, even though you have painted me with a large, and stereotyped, brush. Answers to your questions: Question #1: "Why don’t you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye." I'm not a biologist and I can't answer that question. In fact, that topic has not been one I've addressed, or had any interest in addressing. Question #2: "And if you cannot, then would you admit that ID has a point by questioning naturalistic evolution for such capabilities." As I have made clear, it is not design per se that I am arguing against. From the very beginning I have stated that what I am claiming is that the "pure chance hypothesis" that is used as an argument for ID is a flawed argument. I have in fact pointed out that I think it is detrimental to the larger goal, for those who have this goal, of advocating for design to hitch their wagon to the faulty "tornado in a junkyard" argument.Aleta
January 21, 2010 at 08:21 PM PDT
Jerry writes, "I answered this. Yes I would agree with that ..." Good, and thanks. I am glad to have this cleared up.Aleta
January 21, 2010 at 08:11 PM PDT
"Notice that I am not asking you at all to accept the truth of the premise as a fact about the real world. I’m asking you merely to think about an abstract model irrespective of any connection to the real world: IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once." I answered this. Yes I would agree with that but indicated what that implies and what one has to show in order to make the cumulative argument. It is much easier to see the reasonableness of a cumulative argument when we know there are lots of different life forms possible in world. It is quite a different thing when we do not know if any step along the way is feasible and when there are no other possibilities. That is what I tried to explain to you and to others who make this specious argument for the origin of life, that it happened in steps and that it changes the probabilites. No one knows if there is an alternative to ATP synthase or the ribosome. And if there are no alternatives then the step wise argument falls apart on that particular problem but not on every other problem in the universe. So the step wise argument that Mustela Nivalis said was the answer to the highly unlikely probability of something like ATP synthase By the way there are varieties of ATP synthase and the ribosome and a lot of other proteins but generally they are very conserved over many different species and kingdoms. Since you are in the question asking mode which is the modus operandi of the anti ID people here, maybe you should start answering question. Why don't you provide evidence for the naturalistic evolution of complex novel capabilities, say something like the eye. And if you cannot, then would you admit that ID has a point by questioning naturalistic evolution for such capabilities. Remember ID does not say it never happened naturalistically only that there is no evidence that it did.jerry
January 21, 2010 at 08:04 PM PDT
"I would be overjoyed if we learned so much that we could say confidently that life was inevitable on this planet." Would you be overjoyed to find out that the process which led inevitably to life was obviously part of the basic design of the universe and that extremely small perturbations from that would have destroyed that inevitability both here and everywhere? I am not implying that is true, only that it could be a finding some day. The current fine tuning argument is about the receptibility of this universe to life as we know it. I was implying something incredibly more fine tuned than that. By the way such a finding if true would upset some theistic evolutionists who insist that God left no trail. But that would be a trail.jerry
January 21, 2010 at 07:46 PM PDT
I take it your long response was not actually a response to my question even though you quoted me at the start. Notice the bolded and capitalized parts: So, if you don’t think about how the real world works, and just think about logical and mathematical models we make, can you agree that IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once? Notice that I am not asking you at all to accept the truth of the premise as a fact about the real world. I'm asking you merely to think about an abstract model irrespective of any connection to the real world: IF the model includes selection of some sort after each step, THEN the probability of the final state being reached through a set of steps is not the same as if they happened all at once. Does this statement seem true or false to you? ThanksAleta
January 21, 2010, 07:43 PM
"So, if you don’t think about how the real world works, and just think about logical and mathematical models we make, can you agree that if the model includes selection of some sort after each step, the probability of the final state being reached through a set of steps is not the same as if they happened all at once?" You are new here and are maybe not picking up the differences between discussions. The term FSCI or FCSI refers to structures that either arise through natural processes or through intelligent intervention. ID will say that it is very unlikely for these structures to arise through naturalistic processes (maybe a few did but that is all.) The use of FSCI is mainly not about evolution per se but more about the origin of life and the origin of certain proteins and RNA polymers in a cell. What is their origin. When it is used for evolution, it is more about the origin of novel information needed to control new complex capabilities in organisms. As such when applied to the presence of certain genes, the concept of selection is less appropriate in the origin of proteins for the first cell and may in fact be not appropriate at all. So what I have been discussing is very real world and in certain cases one has to apply science and logic and know when to distinguish one argument for another. There are two places where the idea of gradualism seems appropriate and has been used to justify naturalistic processes. One is in evolution and we can get back to that in a couple paragraphs but the other is in the origin of these incredibly complicated molecules that were necessary at the get go for life to function. So to say they happened in steps is a completely different process and discussion from saying that evolution happened in steps. I am not confusing the two but it seems many here are confused, which is why I pointed out the term selection was not appropriate to use when discussing FSCI. And that the probabilities of the so called steps essentially added up to the probability of the whole. To dispute this, one would have to show two things, that the sub parts were indeed functional and remember we are talking at the very beginning probably before life existed and the individual steps were somehow favored by non biological processes. And secondly that there were other viable paths to transit to get to something resembling life. Otherwise this so called fantastic process just happened to find the one viable path to what Dawkins calls the Greatest Show on Earth at every step. The probability of that is the same as if it poofed into existence all at once. So by steps or in one fell swoop, the probabilities are the same. The person who questions the probability assessment has to show first that there is function all the way down part by part and secondly that there are zillions of alternatives so at each step along the way there was no big deal finding a viable next step. If there was only one viable next step or only a few then the probabilities are the same or essentially the same. Now for gradualism in evolution and FSCI. The amount of information difference between the cell of a mammal and a prokaryote is immense. How did all this new information arise? Again the individual coding regions can be assessed for FSCI and this concept could be used to compare the two but it probably isn't necessary to go that far. It is overkill. There is no evidence that much of these differences arose naturally. They just appeared as in the Cambrian Explosion. 
There is zero evidence that gradualist processes over time led to the complexity seen in the microbe to man progression. Gradualist changes do happen but they essential reshuffle what is already there. They may make for some interesting changes but not the building of complex novel capabilities. For that one has to go elsewhere. I am currently reading Dawkins' new book and it is very interesting reading but so far is completely compatible with ID. So here is a challenge to any anti ID person here, point out one thing in Dawkins' book that undermines ID. So far I have not found any. I might change my mind as I move on. We mention every now and then the discussions of an evolutionary biologist who comes here named Allen MacNeill. By the way Allen has declared Darwin or Darwinian processes dead even though he believes in naturalistic evolution. He has identified about 50+ ways that genomes can be modified. Many just add or delete base pairs to a genome or change a SNP. The biggest addition is when the whole genome gets duplicated. The most frequently mentioned change by people challenging ID is gene duplication like it is something magic and it certainly does happen. Many of the various way that a genome is modified will usually not cause the organism to perform any differently in nature so it does not affect its survivability immediately. But these extra pieces of DNA can mutate away because they are extra and don't affect surviving. And then the theory says they will eventually mutate into something new and viable and will then affect survival in a positive way and possibly very dramatically so a major phenotype change has taken place. Hence while a gradual process it it the antithesis of Darwinian gradualism which says the gradual changes are in the coding regions. So here is a way that FSCI could arise naturally and probably did a few times. The already in place transcription/translation system will then produce a new protein to this mutated segment of DNA and it will affect the phenotype positively. Over time enough of these changes will cause all we have seen in biology. Or so the theory goes. The only problem with this scenario is that there is no proof it ever happened except for an occasional or two change. There is no indication that even if it could produce the 2000 genes necessary for vision, that it could somehow coordinate this massive amount of information necessary for these complicated processes to take place. Evolutionary biology would go a long way to legitimize itself by showing how these paths arose and to show the process working even today to increase information in genomes. But we have radio silence and we have asked for any evidence of this here for years or anything published on this. No one steps up including Richard Dawkins in his book. So if Dawkins cannot do it, what are we to think. I just want to say that the previous comment is part of ID's strategy of avoiding answers to difficult questions and banning anyone who does so.jerry
January 21, 2010, 07:32 PM
Mr Jerry,

"But people like Nakashima would be dismayed."

Please do me the courtesy of not pretending to read my mind, and I will continue to extend the same courtesy to you. I would be overjoyed if we learned so much that we could say confidently that life was inevitable on this planet.

Nakashima
January 21, 2010, 05:12 PM
I read all of 209 and 222 again and haven't a clue what you are talking about. I don't see where I changed course at all or changed definitions.

"The thing about jokes is they're, um, you know, supposed to be funny."

Well, apparently you fell for the absurdity even after I pointed it out a couple of times. You continue to pursue the nonsense I said it was. I find the anti-ID position absurd, so I was, to use the phrase of a contemporary commentator, using an absurdity to illustrate absurdity. Why bother pursuing it when I said it was absurd to do so? I said it might be done for a prokaryote but it would take a lot of time, and I asked why anyone would want to do it, because all it takes is to estimate it for one gene and the whole naturalistic argument falls apart. Why pile on and waste time by trying to calculate it for a whole organism, even a simple one?

You have a way to calculate it for an individual gene, so have at it and do it for every gene in a prokaryote. I believe a few have been completely determined. But what one would get is an incredibly high number that defies the logic of every naturalistic approach. If you want to calculate it for more than one gene, just multiply the probabilities with each other. I believe that in information theory they take the logs and then just add them to make it easier. Now I am well aware that two genes may not be independent of each other, so multiplying the two probabilities may not be entirely accurate, but it gets to the essence of the problem. It is possible to reduce these numbers somewhat by considering redundant segments or the interchangeability of some amino acids. Theoretically this could be done for all the genes in a genome, if someone has a large number of monkeys working for them, but why bother? All that is needed is just one to make the point.

I was using 4 raised to the length because of the four bases and, in order to make it simpler, switched over to amino acids, which gives 20 raised to the length. It's been a while since I practiced my mathematics, but they are close enough for this type of evolutionary work.

jerry
January 21, 2010, 04:31 PM
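The arithmetic described in the comment above (multiplying per-gene probabilities, or equivalently taking logs and adding them) can be illustrated with a minimal Python sketch. This is not from the original thread; the "20 raised to the length" model of equally likely, independent amino acids and the gene lengths used here are purely illustrative simplifications, not real estimates.

```python
import math

# Naive model: each amino acid position is one of 20 possibilities, all equally
# likely and independent. This mirrors the "20 raised to the length" figure in
# the comment above; it is an illustrative simplification, not a real estimate.
def naive_gene_probability_and_bits(length_aa):
    """Probability of one exact sequence of length_aa, and the same value in bits."""
    p = (1 / 20) ** length_aa           # underflows to 0.0 for long sequences
    bits = length_aa * math.log2(20)    # equivalent to -log2(p), but stays finite
    return p, bits

gene_lengths = [300, 450, 120]          # hypothetical gene lengths (amino acids)
results = [naive_gene_probability_and_bits(n) for n in gene_lengths]

# Multiplying the per-gene probabilities is the same as adding their bit values.
combined_p = math.prod(p for p, _ in results)   # 0.0 here, because of underflow
combined_bits = sum(b for _, b in results)      # ~3760 bits, still representable

print(f"combined probability: {combined_p}")
print(f"combined bits: {combined_bits:.1f}")
```

The floating-point underflow in `combined_p` is exactly why, as the comment notes, logs are added rather than probabilities multiplied when working with numbers this small.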
A short recap (all post numbers subject to change, it seems, due to moderated posts being inserted later).

Jerry at 282: "A series of events have the same probability as all the events happening at once."

Me at 286: "This is not true if there is selection involved at each step. My dice example at 169 shows clearly that if each step has a law which selects for some state over another, then the probability of the final state being reached through a set of steps is most definitely not the same as if they happened all at once." I don't want to repost this again, so I ask you to go look at it again if you don't remember it.

Jerry at 292: "What selection? Where could there have been selection in the first life form or before it?"

Jerry, you have broadened the question way beyond the scope of my statement. I would like to explain a bit, and then re-state my point for your consideration. When we use math and logic to describe the world, there are two components to our work:

1. We build a theoretical model and analyze it logically to see how it works and what logical conclusions we can reach from it, and
2. We test the model against the real world to see if the model is accurate.

I am talking about the first step - I'm talking about some mathematical considerations. Let us leave aside for the moment whether the model can be applied to such a difficult problem as the origin of life. Let's just talk theory. My dice example at 169 shows clearly, I think, that there is a mathematical difference between an event which happens all at once and a series of events in which some type of selection takes place at each step.

So, if you don't think about how the real world works, and just think about logical and mathematical models we make, can you agree that if the model includes selection of some sort after each step, the probability of the final state being reached through a set of steps is not the same as if they happened all at once?

Aleta
January 21, 2010, 04:11 PM
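Aleta's distinction between a single all-at-once event and a stepwise process with selection can be made concrete with a small simulation. The sketch below is not from the original thread; the "freeze any die showing a six" rule is simply a stand-in for the selection law in her dice example at 169, and the dice count and trial count are illustrative choices.

```python
import random

N_DICE = 10       # number of dice that must all show a six
TRIALS = 10_000   # Monte Carlo repetitions for the stepwise model

def rounds_with_selection(n_dice, rng):
    """Each round, re-roll only the dice not yet showing a six (the sixes are
    'selected' and kept). Return how many rounds until all dice show a six."""
    remaining = n_dice
    rounds = 0
    while remaining > 0:
        rounds += 1
        remaining -= sum(1 for _ in range(remaining) if rng.randint(1, 6) == 6)
    return rounds

rng = random.Random(0)
avg_rounds = sum(rounds_with_selection(N_DICE, rng) for _ in range(TRIALS)) / TRIALS

# All-at-once model: every die must show a six in one simultaneous roll.
p_single_shot = (1 / 6) ** N_DICE       # about 1.7e-8 per attempt
expected_attempts = 6 ** N_DICE         # about 60 million attempts on average

print(f"with per-step selection: ~{avg_rounds:.1f} rounds on average")
print(f"all at once: probability {p_single_shot:.2e}, ~{expected_attempts:,} attempts expected")
```

With ten dice, the stepwise model typically finishes in a few dozen rounds, while the single-shot model needs tens of millions of attempts on average. That numerical gap is the purely mathematical point being asked about, separate from the question of whether any selection was available before the first life form.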
jerry at 289: "'But when I used that definition and example,' What did I change and what example did I change?"

Well, let's see. At 209 and before, you claim that FSCI is the number of bits required to describe the genome (actually you used 4 to the power of the length of the genome, but what's a binary order of magnitude between friends?). At 222 you said "You could not calculate the total FSCI for those genomes except for the prokaryotes since I believe every bit of DNA is coding." But then, at 259, you said "The FSCI applies to the specific coding areas, not the whole genome." To clear up the confusion, please provide a clear, mathematically rigorous definition of FSCI.

"If you are referring to the whole genome, that was a joke and explained why no one in their right mind would ever do it or even be interested in it from a FSCI point of view."

The thing about jokes is they're, um, you know, supposed to be funny. At the least, they should be distinguishable in some way from apparently serious claims that are made here.

"I did not change anything."

On the contrary, I provided references to where you did.

"Let me know what I said, so I can retract if I was wrong or explain it better to you. I would appreciate that, thank you."

A clear, mathematically rigorous definition of FSCI, preferably with a worked example for a real biological artifact, would be great.

Mustela Nivalis
January 21, 2010, 02:51 PM
