Uncommon Descent Serving The Intelligent Design Community

Evolution driven by laws? Not random mutations?


So claims a recent book, Arrival of the Fittest, by Andreas Wagner, professor of evolutionary biology at the University of Zurich in Switzerland (also associated with the Santa Fe Institute). He lectures worldwide and is a fellow of the American Association for the Advancement of Science.

From the book announcement:

Can random mutations over a mere 3.8 billion years solely be responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?

In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory. Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take.

From a review (which is careful to note that it is not a religious argument):

The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, proteins, or metabolic reactions, is so large that the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) is too low for them to have occurred within the time span the universe has been around.

He then shows some of the fundamental hidden principles that can actually make innovations possible for natural selection to then select and preserve those innovations.
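For a rough sense of the scale the review is gesturing at (an illustrative back-of-the-envelope count, not a figure from the book): a single modest protein of 100 amino acids, with 20 choices per position, already has a sequence space of

    20^100 ≈ 1.3 × 10^130

possible sequences, while even a billion trials per second for the ~4.3 × 10^17 seconds since the Big Bang yields only ~4 × 10^26 trials. Whether or not one accepts the review's conclusion, this is the kind of disparity the "search space is too large" argument rests on.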

Like interacting parallel worlds, this would be momentous news if true. But someone is going to have to read the book and assess the strength of the laws advanced.

One thing is for sure: if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever. But we knew that already, of course, from when Nature published an article on the growing dissent within the ranks about Darwinism.

In origin-of-life research, there has long been a law vs. chance controversy. For example: does nature just “naturally” produce life? Or: maybe if we throw enough models at the origin of life, some of them will stick?

Note: You may have to apprise your old schoolmarm that Darwin’s theory* is “natural selection acting on random mutations,” not “evolution” in general. It is the only theory that claims sheer randomness can lead to creativity, in conflict with information theory. See also: Being as Communion.

* (or neo-Darwinism, or whatever you call what the Darwin-in-the-schools lobby is promoting or Evolution Sunday is celebrating)

Follow UD News at Twitter!

Comments
MT: The issue is to recognise, reliably, cases of design on a relevant sign. There are trillions of cases [web etc. pages are over the trillion mark; add screws, nuts, bolts and other things] of FSCO/I of directly known cause, all design, and no credible cases of FSCO/I of directly known cause not by intelligently directed configuration. This is a mark of inductive reliability, one backed up by the analysis of sparse search of a config space with isolated islands of function. When we turn to the root of the tree of life, OOL, we see a case of multiple FSCO/I involving gated, encapsulated, integrated metabolic processes in an astonishing network, with numerically controlled protein synthesis in an automaton using codes and involving a von Neumann self-replicator facility. All of this is supposed to have been assembled by sparse search on observed cosmos resources in a Darwin's pond or the like, and that claim needs to be backed by actual observational evidence. The same sort of challenge proceeds beyond that point. The only vera causa backed means of getting the required FSCO/I is design, and indeed that is why Shapiro's metabolism-first and Orgel's genes-first approaches came to mutual ruin. And if (per the OP) the laws of physics and chemistry programmed life -- on what observations? -- that would point to an astonishing fine tuning of the cosmos above and beyond what is already on the table. When it comes to crop circles as a suggested case, find us one showing FSCO/I that reasonably came about by blind chance and mechanical necessity . . . ironically, this is a case where a design inference is routinely made on FSCO/I. Here, my emphasis -- and hi Joe, welcome back -- is that in science, explanations should be empirically controlled, and the causal adequacy of suggested explanations needs to be shown on the ground, not assumed for argument. KF

kairosfocus
November 10, 2014 at 05:58 AM PDT

I don't see any natural crop circles there. Also, I wouldn't use dFSCI to determine design of something that isn't readily amenable to encoding into bits. There are other design-detection tools that are better in some situations.

Joe
November 10, 2014 at 05:50 AM PDT

Joe @ 670: You can check crop circle images from various sources. One great source is: Crop Circle Images. I am curious how a dFSCI calculation would be able to detect which of them is natural and which is man-made.

Me_Think
November 10, 2014 at 05:40 AM PDT

Me_Think: Please produce an example of a "natural crop circle". Thank you.

Joe
November 10, 2014 at 05:21 AM PDT

DNA_Jock: what is it? What is "H"? It is up to you and yours to provide it, and yours is the position which relies on necessity and chance.

Joe
November 10, 2014 at 05:20 AM PDT

Hi Joe, Glad to see you're back. [Checks punctuation. Grins] Actually, H is ALL chance hypotheses; Winston Ewert makes the hilarious mistake of considering them sequentially, rather than as a whole. If you, or anyone else, wants to calculate p(T|H), you are the ones who need to delineate H (and T) in sufficient detail to do the calculations. Still waiting...

DNA_Jock
November 10, 2014 at 05:18 AM PDT

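For onlookers trying to follow the p(T|H) dispute: one way (offered here only as a reading, not necessarily what either party intends) to formalize "H is ALL chance hypotheses, considered as a whole" is the law of total probability over candidate chance hypotheses H_1, H_2, ..., each weighted by a prior:

    P(T|H) = Σ_i P(H_i) · P(T|H_i)

On this reading, evaluating the H_i "sequentially" means checking each P(T|H_i) separately, while evaluating them "as a whole" means forming the weighted sum, which requires the priors P(H_i) as well.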
kairosfocus and gpuccio: Thank you very much for the elaborate explanation. However, I am not convinced that dFSCI and related methods can help identify design. E.g., let's take two similar crop circles - one natural and one man-made. Both would exhibit the same pattern, and both would thus have the same dFSCI. How would you know which was man-made based on dFSCI calculations?

Me_Think
November 10, 2014 at 05:15 AM PDT

OK, DNA_Jock: please provide the relevant chance hypotheses. I say that you can't. "H" should be given a big fat 0, yet we give it the benefit of the doubt by granting it something greater than 0. Then our critics cry foul, when in fact it is our critics who cannot provide H, and it is our critics who need to do so.

Joe
November 10, 2014 at 05:07 AM PDT

KF @ 663: I agree with you that uniformitarianism is appropriate. Your inference that I do not apply it is incorrect, and appears to stem from a failure to understand the effects of selection over time. Simply put, extant proteins are optimized. Any sane person would expect the degree of substitution allowed to differ from the DoSA in a non-optimized protein, under uniformitarian principles. As I said to gpuccio in #544:
Bottom-up studies like Keefe's are the only way (that I have heard of) to explore the frequency of “the shores of the islands of function” in protein space. Studies like McLaughlin's explore the degree to which functional protein space is interconnected via single steps near an optimum. Durston asks “how broad is the peak?”, a question of secondary relevance at best. Axe doesn't explore anything; the paper is based on a glaring fallacy. See my attempt to explain this, inter alia, to Mung. WordPress is mangling my attempts to provide you with a linkout; please enter “http://theskepticalzone.com/wp/?p=1472&cpage=7#comment-19065” in your browser. Dr. Axe is represented by “Dr. A” — I’m a subtle guy. (Off-topic: [snip].) There is not any inconsistency between, to use your terms, the forward data and the reverse data: Keefe’s forward data are compatible with McLaughlin’s and Durston’s reverse data. You may be mis-understanding Durston’s data. Axe himself is mis-understanding his own data. [Emphasis in original. I am referring to Gauger and Axe's "The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway"]
Any comments about FSCO/I are moot until you find a way to calculate p(T|H).

DNA_Jock
November 10, 2014 at 04:49 AM PDT

GP, I should note that a 3-d nodes-arcs mesh is analogue, but can be reduced to a structured string of y/n questions that specify components, orientations, couplings and more, all of course targeting function. But we should note there are many ways to clump parts, and only a relative few will function correctly. The discussion on coded strings, with the understanding that codes have contexts, is WLOG. KF

kairosfocus
November 10, 2014 at 02:10 AM PDT

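A minimal sketch of the reduction kairosfocus describes above (a toy in Python, with a made-up four-node mesh; every yes/no question about the configuration contributes one bit):

    # Toy example: serialize the wiring of a small nodes-and-arcs mesh
    # into a string of yes/no answers (one bit per possible coupling).
    # The four-node mesh below is invented purely for illustration.
    nodes = ["A", "B", "C", "D"]
    edges = {("A", "B"), ("B", "C"), ("C", "D")}

    bits = ""
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            # One y/n question per pair: "is u coupled to v?"
            bits += "1" if (u, v) in edges or (v, u) in edges else "0"

    print(bits)  # '100101': six answers specify this wiring

A real mesh would also need bits for component types, orientations and interface details, but the principle is the same: a finite structure reduces to a finite structured string.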
DJ: I am disappointed. It seems to me that the uniformity principle espoused by Newton, and extended to origins studies by Lyell and Darwin alike, emphasised that that which we may not directly inspect ought to be explained on forces and factors we observe acting here and now that are capable of sufficiently similar effect. Failing to provide a true cause with adequate capability known to be able to produce the effect, while imposing ideologically loaded redefinitions of science and its methods to lock out causes that are adequate, seems to me an abuse of the name of science and science education. In this case, all that is needed is to show that, with reasonable likelihood and observability, FSCO/I can be and is observed to be caused by blind chance and mechanical necessity. Am I therefore to interpret your remarks attacking imaginary YECs not present to defend themselves as a backhanded admission that you do not have an adequate blind watchmaker cause in hand? That would be a very interesting implication, as FSCO/I is routinely and reliably produced by intelligently directed contingency, the very context in which design thinkers infer inductively that it is a strong and reliable sign of design, one backed up by what Axe speaks of as the sparse search challenge. KF

PS: The abusive appellation "IDiots" is a stock in trade of too many anti-design trolls on various attack sites. I point it out as an example of unfortunately typical schoolyard-level, contempt-laced namecalling. This you know, or should know.

kairosfocus
November 10, 2014 at 12:41 AM PDT

Reality, sorry, but while I am guilty of the crime of creating a summary string, the FSCO/I concept is not my creation nor that of design thinkers. It is a commonplace in engineering, though it may not be put in those words, and as you can see it traces to work by Orgel and Wicken, major OOL researchers, in the 1970s. As for your rather familiar personalities and excuses for dodging providing empirical warrant for the Darwinist tree of life from the root up: all I have done for two years is take the Smithsonian presentation of the tree, point to root and branches, and say, here is an open invitation to do a feature-length article that I will personally host here at UD, one that will empirically ground the claimed causal forces and process held to account for the world of life by evolutionary materialists and those who go along with them to one extent or another, without loading up on a priori assumptions and impositions that beg big questions. Perhaps you would be so kind as to provide an answer that is more satisfactory than what Wiki provides, or Theobald -- fact, fact, FACT -- provides, or the composite based on discussion comments by EL and Jerad after a year of trying, a year ago? Remember, if you can provide a coherent narrative that actually makes the case, it blows up design theory as applied to the world of life, answering also the intelligent evolution view espoused by the co-founder of the theory, Wallace. The invitation stands open, and there is no need for personalities if an answer on the merits can do the job. You may include any number of links and multimedia inclusions to YouTube etc., as well as images, but need to provide a coherent, feature-article-length account. If you or your fellows cannot, but resort to personalities, that reeks of the familiar tactic of shooting at the messenger. KF

kairosfocus
November 10, 2014 at 12:28 AM PDT

Me_Think at #641: "Is FSCO/I = dFSCI = FSC = CSI ?" I will try to help.

CSI is the general concept: Complex Specified Information. Even if there are different approaches to the formal definition of specification, CSI means the information needed for a particular specification. I have adopted a very simple definition of specification: Specification = any explicit rule which generates a binary partition in a search space. Given a specification, the complexity linked to it is -log2 of the ratio between the target space (all the objects which satisfy the specification) and the search space.

Functional Specification is a subset of all possible specifications, where the rule which specifies is a function, defined explicitly, which can be assessed by an explicit procedure in any object of the search space as present or absent, so that a binary partition is generated. In Functional Specification, we can accept any possible function definition for any object, and also different definitions for the same object. In any case, the complexity is computed for each specific definition.

FSC means Functionally Specified Complexity: it is the complexity linked to a functional specification. The complexity is computed in the same way as for any generic specification, as said above.

dFSI means digital Functionally Specified Information. It is a term that I have introduced to name a further subset of functional information, where we consider only digital sequences in objects. This is useful because the mathematical treatment is easier, and it is appropriate to our purposes, because biological information is mainly digital. Information and Complexity, in the ID context, mean the same thing (for all practical purposes, -log2 of the probability of the target space in a random search).

dFSCI is the binary form of dFSI: it is derived from the numeric value by using an appropriate threshold, which must be chosen according to the system and time span we are considering. Any object of the search space which exhibits dFSI above the chosen threshold is said to exhibit dFSCI and is a candidate for the design inference, if there is no reasonable evidence that it can be explained algorithmically in the system.

Finally, FSCO/I is the term used by KF. To be correct, I will leave the explanation to him, but I think it is essentially the same thing as my FSCI, without the digital restriction. As KF has explained many times, the digital restriction is without loss of generality, because any analog information can be converted to a digital form. My choice to discuss only digital information is purely methodological: it is simpler to do that.

Fit is the term used in the Durston paper to mean the bits linked to the functional restraint in proteins. It is the same as saying "bits of CSI".

I hope that can help. As you can see, the concept is rather the same, and the units are the same. There are different subsets of the same concept, that's all.

gpuccio
November 10, 2014 at 12:24 AM PDT

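To make gpuccio's -log2 definition above concrete, here is a minimal numerical sketch (the sequence length and target-space size are hypothetical numbers, chosen only to show the arithmetic):

    # Toy illustration of functional complexity as
    # -log2( |target space| / |search space| ). All numbers are made up.
    from math import log2

    L = 150                  # hypothetical digital sequence length (residues)
    search_space = 20 ** L   # all amino-acid sequences of that length
    target_space = 10 ** 40  # assume 10^40 of them satisfy the function

    dFSI_bits = log2(search_space) - log2(target_space)
    print(f"{dFSI_bits:.1f} bits")  # ~515 bits for these toy numbers

Under gpuccio's scheme, one would then compare this value to a context-appropriate threshold to decide whether the object exhibits dFSCI.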
F/N: Pardon some not cleaned up chunking of clips from Sect A, with Durston et al 2007 (I hate how WP usually murders unusual symbols): ______________

>> 11 --> Durston, Chiu, Abel and Trevors [--> 2007] provide a third metric, the Functional H-metric [--> Shannon's H is avg info per symbol] in functional bits or fits, a functional bit extension of Shannon's H-metric of average information per symbol, here. The way the Durston et al metric works by extending Shannon's H-metric of the average info per symbol to study null, ground and functional states of a protein's AA linear sequence -- illustrating and providing a metric for the difference between order, randomness and functional sequences discussed by Abel and Trevors -- can be seen from an excerpt of the just linked paper. Pardon length and highlights, for clarity in an instructional context:

Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). RSC corresponds to stochastic ensembles with minimal physicochemical bias and little or no tendency toward functional free-energy binding. OSC is usually patterned either by the natural regularities described by physical laws or by statistically weighted means. For example, a physico-chemical self-ordering tendency creates redundant patterns such as highly-patterned polysaccharides and the polyadenosines adsorbed onto montmorillonite [4]. Repeating motifs, with or without biofunction, result in observed OSC in nucleic acid sequences. The redundancy in OSC can, in principle, be compressed by an algorithm shorter than the sequence itself. As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]. FSC includes the dimension of functionality [2,3]. Szostak [6] argued that neither Shannon's original measure of uncertainty [7] nor the measure of algorithmic complexity [8] are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that 'different molecular structures may be functionally equivalent'. For this reason, Szostak suggested that a new measure of information – functional information – is required [6] . . . .

Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation:

H(Xf(t)) = -∑ P(Xf(t)) log P(Xf(t)) . . . (1)

where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database . . . .

In our approach, we leave the specific defined meaning of functionality as an input to the application, in reference to the whole sequence family. It may represent a particular domain, or the whole protein structure, or any specified function with respect to the cell. Mathematically, it is defined precisely as an outcome of a discrete-valued variable, denoted as F={f}. The set of outcomes can be thought of as specified biological states. They are presumed non-overlapping, but can be extended to be fuzzy elements . . . Biological function is mostly, though not entirely determined by the organism's genetic instructions [24-26]. The function could theoretically arise stochastically through mutational changes coupled with selection pressure, or through human experimenter involvement [13-15] . . . .

The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27]. An example of a highly constrained ground state resulting in a highly ordered sequence occurs when the phosphorimidazolide of adenosine is added daily to a decameric primer bound to montmorillonite clay, producing a perfectly ordered, 50-mer sequence of polyadenosine [3]. In this case, the ground state permits only one single possible sequence . . . .

The null state, a possible outcome of F denoted as ø, is defined here as a special case of the ground state of highest uncertainty when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence where functional constraints have been loosened such that any of the 20 amino acids will suffice at any of the 300 sites. From Eqn. (1) the functional uncertainty of the null state is represented as

H(Xø(ti)) = -∑ P(Xø(ti)) log P(Xø(ti)) . . . (3)

where (Xø(ti)) is the conditional variable for all possible equiprobable sequences. Consider that the number of all possible sequences is denoted by W. Letting the length of each sequence be denoted by N and the number of possible options at each site in the sequence be denoted by m, W = m^N. For example, for a protein of length N = 257 and assuming that the number of possible options at each site is m = 20, W = 20^257. Since, for the null state, we are requiring that there are no constraints and all possible sequences are equally probable, P(Xø(ti)) = 1/W and

H(Xø(ti)) = -∑ (1/W) log (1/W) = log W . . . (4)

The change in functional uncertainty from the null state is, therefore,

ΔH(Xø(ti), Xf(tj)) = log (W) - H(Xf(ti)) . . . (5)

. . . . The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or

ζ = ΔH(Xg(ti), Xf(tj)) . . . (6)

The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . .

To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC, correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule is then the total sum of the measured FSC for each site in the aligned sequences. Considering that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: log2(20) = 4.32]. We use the formula log (20) - H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.

11 --> Thus, we here see an elaboration, in the peer reviewed literature, of the concepts of Functionally Specific, Complex Information [FSCI] (and related, broader specified complexity) that were first introduced by Orgel and Wicken in the 1970's. This metric gives us a way to compare the fraction of residue space that is used by identified islands of function, and so validates the islands-of-function-in-a-wider-configuration-space concept. So, we can profitably go on to address the issue of how plausible it is for a stochastic search mechanism to find such islands of function on essentially random walks and trial and error without foresight of location or functional possibilities. We already know that intelligent agents routinely create entities on islands of function based on foresight, purpose, imagination, skill, knowledge and design.

12 --> Such entities typically exhibit FSCI, as Wicken describes: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common-place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

13 --> The Wicken wiring diagram is actually a very useful general concept. Strings of elements -- e.g. S-T-R-I-N-G -- are of course a linear form of the nodes, arcs and interfaces pattern that is common in complex structures. Indeed, even the specification of controlled points and a "wire mesh" that joins them, then faceted over in digital three-dimensional image modelling and drawing, is an application of this principle. Likewise, the flow network or the flowchart or blocks-and-arrows diagrams common in instrumentation, control, chemical engineering and software design are another application. So is the classic exploded view used to guide assembly of complex machinery. All such can be reduced to combinations of strings that specify nodes, interfaces and interconnecting relationships. From this set of strings, we can get a quantitative estimate of the functionally specific complex information embedded in such a network, and can thus estimate the impact of random changes to the components on functionality. This allows clear identification and even estimating the scope of Islands of Function in wider configuration spaces, through a Monte Carlo type sampling of the impacts of random variation on known functional configurations. (NB: If we add in a hill climbing subroutine, this is now a case of a genetic algorithm. Of course the scope of resources available limits the scope of such a search, and so we know that such an approach cannot credibly initially find such islands of function from arbitrary initial points once the space is large enough. 1,000 bits of space is about 1.07 * 10^301 possibilities, and that is ten times the square of the number of Planck-time states for the 10^80 or so atoms in our observed cosmos. That is why genetic type algorithms can model micro-evolution but not body-plan origination level macro-evolution, which credibly requires of the order of 100,000+ bits for first life and 10,000,000+ for the origin of the dozens of main body plans. So far, also, the range of novel functional information "found" by such algorithms navigating fitness landscapes within islands of function -- intelligently specified, BTW -- seems (from the case of ev) to have peaked at less than 300 bits. HT, PAV.)

14 --> Indeed, the use of the observed variability of AA sequences in biological systems by Durston et al is precisely an example of this: an entity that is naturally based on strings that then fold to the actual functional protein shapes.
>> _____________ Hope this further helps. Fireman duties start early today, so later. KF

kairosfocus
November 10, 2014 at 12:13 AM PDT

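A compact sketch of the fits calculation described in the excerpt above (my own paraphrase of the "log(20) - H(Xf)" site formula, run on an invented three-sequence toy alignment; the actual paper uses large curated alignments):

    # Sketch of the Durston et al. Fits measure: per aligned site,
    # fits = log2(20) - H(site), summed over all sites, where H(site)
    # is the Shannon entropy of the residues seen at that site.
    # The three 5-residue "sequences" below are toy data.
    from collections import Counter
    from math import log2

    alignment = ["ACDEF",
                 "ACDEW",
                 "ACQEF"]

    def fits(aligned_seqs):
        n = len(aligned_seqs)
        total = 0.0
        for site in zip(*aligned_seqs):   # walk the alignment column by column
            counts = Counter(site)
            h = -sum((c / n) * log2(c / n) for c in counts.values())
            total += log2(20) - h         # null-state info minus observed entropy
        return total

    print(f"{fits(alignment):.2f} fits")  # ~19.77 for this toy alignment

Fully conserved columns contribute the maximum 4.32 fits/site; variable columns contribute less, matching the paper's description.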
Me_Think at #644: "gpuccio explained that dFSCI doesn't detect design, only confirms if a design is real design or apparent design." I don't understand what you mean. dFSCI is essential to distinguish between true design and apparent design; therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection; you are only recognising the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, mere recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection. Surely you can understand such a simple concept, can't you?

gpuccio
November 10, 2014 at 12:01 AM PDT

PPS: To understand things connected to information concepts, please read Section A of my always-linked (through my name on every comment I have made at UD), which has a boiled-down 101 (based ultimately on what I used to teach T/comms students . . . and yes, that is my version of Shannon's diagram, though IIRC I found a similar thing in Duncan's A-Level Physics; my context is that this feeds into the layer-cake style T/comms protocol models that are now common). The first appendix, on thermodynamics matters, will also help.

PPPS: GP is a busy physician; I am mostly doing policy analysis stuff in support of a recent change of govt here, with a fair amount of fireman duty on tap to go with it. Rejoice in insomnia power for the stuff you see here at UD.

kairosfocus
November 9, 2014 at 11:57 PM PDT

PS: Closed-comment FYIs are based on a need to provide reference and supplementary info, to headline things easy to overlook, to give graphics and/or vids, and on the situation of a problem with what has to be called internet vandalism or trollish behaviour. They are normally linked to live threads of discussion.

kairosfocus
November 9, 2014 at 11:47 PM PDT

F/N: I note:
KF has closed comment threads about dFSCI, and gpuccio explained that dFSCI doesn't detect design, only confirms if a design is real design or apparent design. Joe refers to the FSC paper, which discusses fits and not bits, so it would be helpful if someone can give the relationship between all those ID units to detect/confirm design.
1 --> Functional specificity based on organisation and coupling of interacting parts to yield relevant function (e.g. info storage/ transmission/ control of NC process, operation of a multi-part entity based on a 3-d nodes & arcs wiring diagram, etc) is a relevant, observable form of specification. It is also a commonplace, ranging from text in English to source and object programs to fishing reels, pens, gear trains, machine tools, pc mother boards, oil refineries, ribosomes and wider protein synthesis, the D/RNA molecules, proteins and esp. enzymes.

2 --> Due to the interactive coupling based on particular configurations there is a lock-down to relatively few of the many possible clumped or scattered possibilities for the same parts/components etc. Thus, islands of function.

3 --> Digitally coded functionally specific complex information [dFSCI] is a subset of functionally specific complex organisation and associated information [FSCO/I], using a string data structure to store coded info, cf punched paper tape, magnetic tape, memory registers in a PC and R/DNA. In turn, FSCO/I, or even just FSCI, is a subset of specified complexity or complex specified information, emphasising the biologically and technologically most relevant form of specification: function depending on properly arranged and coupled interacting parts. Both CSI and FSCO/I trace to the concepts put on the table across the 1970's by Orgel and Wicken. Let me cite them:
ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity [--> AmHD: Consisting of interconnected or interwoven parts; composite. the living organisms context points to biofunctions dependent on specific arrangements of interconnected, interacting parts]. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189. [--> later attempts to quantify and generate metric models build on this] ] WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]
4 --> Observation and recognition of FSCO/I and its significance do not depend on the particulars of any one metric model, but we routinely measure it based on the length of the string of y/n questions required to specify a given state in the field of possibilities, cf. the details-mode display of a folder on a PC, though note bytes are 8-bit clusters and k's are based on 2^10 = 1024.

5 --> Your understanding of GP is flawed; dFSCI is an example of a tested, empirically reliable sign pointing strongly to design.

6 --> Functional sequence complexity -- FSC, as discussed by Durston, Abel, Trevors etc. -- has to do with FSCO/I. Fits means functional binary digits, i.e. functionally specific bits. It is given, e.g., in the Durston et al 2007 paper.

7 --> In all these cases, once we are beyond a reasonable threshold of complexity, functionally specific, organised complexity is maximally implausible to have come about by sparse blind chance and mechanical necessity -- the Blind Watchmaker thesis -- search of an abstract space of possible configurations. This last is essentially a cut-down version of the phase space used in mathematics, physics, chemistry, thermodynamics and engineering; we are not looking at momentum. This is the context of the metaphor of searching for islands of function in vast seas of possible configs dominated by non-function. (Think about parts for the fishing reel, and BTW, the conventional/baitcasting reel IIRC was originally developed by watchmakers.)

8 --> This is as opposed to incremental climbing of fitness slopes, typically viewed as hills of reasonable smoothness.

9 --> "Detect design" is a tricky term. It hints at a universal decoder algorithm, but computation theory points out that that is not a reasonable expectation. Instead we are looking at an explanatory filter that contemplates aspects of an entity, phenomenon or process and asks: can we, on observable after-the-fact evidence, infer causal factors crucial to the outcomes we see?

10 --> Mechanical necessity rooted in forces and materials of nature gives rise to natural regularities, such as ripe guavas and breadfruit or mangoes or even apples dropping at 9.8 N/kg from a tree. Likewise, attenuating for distance, the same flux accounts for the centripetal force on the Moon keeping it in orbit around Earth. And of course it took genius to make that connexion, and so doing launched the Newtonian synthesis.

11 --> If an aspect of a phenomenon or process shows high contingency of outcomes under similar initial conditions, the likely suspect is chance-based stochastic contingency, e.g. a die falls, tumbles and settles to a reading.

12 --> But of course, if the die persistently shows unusual outcomes, we suspect loading . . . or at least that's what would happen in Las Vegas. The reason is of course that functional specificity of the pattern of outcomes beyond a certain point is increasingly implausible under stochastic contingency but very plausible under intelligently directed contingency and/or configuration and/or contrivance. (If you study a bit on loaded dice you will never be inclined again to bet anything significant on dice! Subtle are the ways of trickery. And yes, loading of dice is a good place to begin understanding design.)

I trust this helps. KF

kairosfocus
November 9, 2014 at 11:44 PM PDT

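Read as a procedure, points 9-12 above amount to a decision cascade. A toy paraphrase (my schematic only, not an official algorithm; the 500-bit threshold is one commonly quoted design-literature figure, assumed here just to make the sketch concrete):

    # Toy paraphrase of the explanatory filter in points 9-12 above.
    from math import log2

    THRESHOLD_BITS = 500  # assumed threshold, per common ID usage

    def explanatory_filter(high_contingency, functionally_specific, bits):
        if not high_contingency:
            return "necessity (law-like regularity)"   # point 10
        if functionally_specific and bits > THRESHOLD_BITS:
            return "design (intelligently directed)"   # point 12
        return "chance (stochastic contingency)"       # point 11

    # A fair die roll: highly contingent but only ~2.6 bits -> chance.
    print(explanatory_filter(True, False, log2(6)))

Whether the filter's design verdict is warranted is, of course, exactly what the thread is disputing; the sketch only shows the structure of the inference.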
Did I shut down this discussion yet? :) With serious interlocutors this discussion would get much deeper and more interesting, but with mockers it goes nowhere. But still, the onlookers/lurkers may benefit from seeing what's going on: a bunch of interlocutors who avoid answering questions, complain about simple yes/no questions, calling them 'diversionary' (post 648), or accuse others of wanting to change the subject (post 651). Pathetic.

Dionisio
November 9, 2014 at 10:13 PM PDT

#650 Me_Think #647 follow-up
Why – so everyone (including ID proponents) can understand how those terms are related. How – we can calculate one and derive the other terms from a single calculation. To whom – pretty much everyone (including you) who needs to understand the terms and make sense of the calculations.
Why do you need to understand the terms and make sense of the calculations? BTW, your assumption is wrong - you included me, but I don't need to understand those terms or make sense of those calculations.

Dionisio
November 9, 2014 at 09:43 PM PDT

#651 keith s
Dionisio is particularly eager to change the subject,...
Why do you say so? What do you base your argument on? How do you know my intention?

Dionisio
November 9, 2014 at 09:23 PM PDT

#650 Me_Think #647 follow-up
Why – so everyone (including ID proponents) can understand how those terms are related. How – we can calculate one and derive the other terms from a single calculation. To whom – pretty much everyone (including you) who needs to understand the terms and make sense of the calculations.
Apparently KF and gpuccio have explained those terms extensively on more than one occasion. They have provided links to those explanations. Why do they have to write it all over again? Please keep in mind that those two gentlemen have daily responsibilities at their work. They seem to use a substantial part of their limited spare time to write in this blog. Their dedication to writing here is very commendable and highly appreciated by some of us here. Just read what they wrote already. If you have specific questions, ask them. But make sure your questions sound serious. Otherwise KF and gpuccio have every right to ignore your questioning, although for the sake of the onlookers/lurkers they answer anyway most times. Generally speaking, from my observations here, both KF and gpuccio seem much more patient than me when it comes to answering questions. Most times they go directly to explaining things in detail, with much pedagogy. I admire their enormous patience. I definitely lack such virtue (well, I don't possess any virtue, as far as I know). Perhaps that's one of the reasons DNA_Jock prefers to discuss anything with gpuccio and KF rather than with me. He's right. I would think twice before engaging in a discussion with such a nasty guy as me, who responds to questions with more questions. :)

Dionisio
November 9, 2014 at 09:17 PM PDT

Dionisio is particularly eager to change the subject, because CSI, FSCO/I, and dFSCI are taking an absolute drubbing in this thread. What an embarrassment for ID.

keith s
November 9, 2014 at 08:57 PM PDT

Dionisio @ 647: Why - so everyone (including ID proponents) can understand how those terms are related. How - we can calculate one and derive the other terms from a single calculation. To whom - pretty much everyone (including you) who needs to understand the terms and make sense of the calculations.

Me_Think
November 9, 2014 at 08:41 PM PDT

#648 Reality
Dionisio, your diversionary questions have nothing to do with what I pointed out.
Who said they have anything to do with that? Do you only respond to questions that have to do with what you have pointed out? The first question in that post was just a simple yes/no question. No one has asked you to answer other questions. Read the post carefully. Go ahead, try again. I'm sure you can do much better than this. :)

Dionisio
November 9, 2014 at 08:39 PM PDT

Dionisio, your diversionary questions have nothing to do with what I pointed out.

Reality
November 9, 2014 at 08:20 PM PDT

#644 Me_Think
KF has closed comment threads about dFSCI, and gpuccio explained that dFSCI doesn't detect design, only confirms if a design is real design or apparent design. Joe refers to the FSC paper, which discusses fits and not bits, so it would be helpful if someone can give the relationship between all those ID units to detect/confirm design.
Why would that be helpful? How could it be helpful? Helpful to whom?

Dionisio
November 9, 2014 at 08:16 PM PDT

Me_Think: Please, can you answer the first question in post #645? Thank you.

Dionisio
November 9, 2014 at 08:12 PM PDT

#643 Reality Have you ever dealt with questions like these?
What makes myosin VIII become available right when it's required for cytokinesis? Same question for actin. What genes are they associated with? What signals trigger those genes to express those proteins for the cytokinesis? BTW, what do the transcription and translation processes for those two proteins look like? Are they straightforward, or convoluted through some splicing and stuff like that? Are there chaperones involved in the post-translational 3D folding? Where is it delivered to? How does that delivery occur? How does the myosin pull the microtubule along an actin filament? How many of each of those proteins should get produced for that particular process? Any known problems in the cases of deficit or excess?
Dionisio
November 9, 2014 at 08:09 PM PDT

Dionisio @ 642
Hasn’t your question been addressed by KF and gpuccio before?
KF has closed comment threads about dFSCI, and gpuccio explained that dFSCI doesn't detect design, only confirms if a design is real design or apparent design. Joe refers to the FSC paper, which discusses fits and not bits, so it would be helpful if someone can give the relationship between all those ID units to detect/confirm design.

Me_Think
November 9, 2014 at 07:54 PM PDT

