Uncommon Descent Serving The Intelligent Design Community

Functionally Specific, Complex Organisation and Associated Information (FSCO/I) is real and relevant

Over the past few months, I noticed objectors to design theory dismissing or studiously ignoring a simple — much simpler than a clock — macroscopic example of Functionally Specific, Complex Organisation and/or associated Information (FSCO/I) and its empirically observed source, the ABU-Garcia Ambassadeur 6500 C3 fishing reel:

[Figure: exploded view of the ABU Garcia Ambassadeur 6500 C3 fishing reel]

Yes, FSCO/I is real, and has a known cause.

{Added, Feb 6} It seems a few other clearly paradigmatic cases will help rivet the point, such as the organisation of a petroleum refinery:

[Figure: petroleum refinery block diagram illustrating FSCO/I in a process-flow system]

. . . or the wireframe view of a rifle ‘scope (which itself has many carefully arranged components):

[Figure: wireframe view of a rifle 'scope]

. . . or a calculator circuit:

[Figure: calculator circuit]

. . . or the wireframe for a gear tooth (showing how complex and exactingly precise a gear is):

[Figure: wireframe of a spiral gear tooth]

And if you doubt its relevance to the world of cell-based life, I draw your attention to the code-based, ribosome-using protein synthesis process that is a commonplace of life forms:

[Figure: protein synthesis (HT: Wikimedia)]

Video:

[vimeo 31830891]

U/D Mar 11: let's add, as a parallel to the oil refinery, an outline of the cellular metabolism network as a case of integrated complex chemical systems instantiated using molecular nanotech that leaves the refinery in the dust for elegance and sophistication . . . noting how protein synthesis as outlined above is just the tiny corner at top left below, showing DNA, mRNA and protein assembly using tRNA in the little ribosome dots:

[Figure: the cellular metabolism network]

Now, the peculiar thing is that this demonstration of the reality and relevance of FSCO/I was routinely, studiously ignored by objectors; there were condescending or even apparently annoyed dismissals of my repeated reference to a fishing reel as a demonstrative example.

But, in a current thread Andre has brought the issue back into focus, as we can note from an exchange of comments:

Andre, #3: I have to ask our materialist friends…..

We have recently discovered a 3rd rotary motor [ –> after the Flagellum and the ATP Synthase Enzyme] that is used by cells for propulsion.

http://www.cell.com/current-bi…..%2901506-1

Please give me an honest answer: how on earth can you even believe or hang on to the hope that this system not only designed itself but built itself? This view is not in accordance with what we observe in the universe. I want to believe you that it can build and design itself but please show me how! I'm an engineer and I can promise you in my whole working life I have NEVER seen such a system come into existence on its own. If you have proof of this please share it with me so that I can also start believing in what you do!

Andre, 22: I see no attempt by anyone to answer my question…

How do molecular machines design and build themselves?

Anyone?

KF, 23: providing you mean the heavily endothermic, information-rich molecules and key-lock fitting components in the nanotech machines required for the living cell, they don't, and especially, not in our observation. Nor do codes (languages) and algorithms (step by step procedures) assemble themselves out of molecular noise in warm salty ponds etc. In general, the notion that functionally specific complex organisation and associated information comes about by blind chance and mechanical necessity is without empirical warrant. But, institutionalised commitment to Lewontinian a priori evolutionary materialism has created the fixed notion in a great many minds that this "must" have happened and that to stop and question this is to abandon "Science." So much the worse for the vera causa principle, that in explaining a remote unobserved past of origins, there must be a requirement that we first observe the actual causes seen to produce such effects and use them in explanation. If that were done, the debates and contentions would be over, as there is but one empirically grounded cause of FSCO/I: intelligently directed configuration, aka design.

Andre, 24: On the money.

Piotr is an expert on linguistics, I wonder if he can tell us how the system of speech transmission, encoding and decoding could have evolved in a stepwise fashion.

Here is a simple example…..

http://4.bp.blogspot.com/_1VPL…..+Model.gif

[I insert:]

[Figure: transactional model of communication]
[And, elaborate a bit, on technical requisites:]

[Figure: a communication system]

I really want to know how, or am I just being unreasonable again?

We need to go back to the fishing reel, with its story:

[youtube bpzh3faJkXk]

The closest we got to a reasonable response on the design-indicating implications of FSCO/I in fishing reels as a positive demonstration (with implications for other cases), is this, from AR:

It requires no effort at all to accept that the Abu Ambassadeur reel was designed and built by Swedes. My father had several examples. He worked for a rival company and was tasked with reverse-engineering the design with a view to developing a similar product. His company gave up on it. And I would be the first to suggest there are limits to our knowledge. We cannot see beyond the past light-cone of the Earth.

I think a better word that would lead to less confusion would be “purposeful” rather than “intelligent”. It better describes people, tool-using primates, beavers, bees and termites. The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

Now, it should be readily apparent . . . let’s expand in step by step points of thought [u/d Feb 8] . . . that:

a –> intelligence is inherently purposeful, and

b –> that the fishing reel is an example of how the purposeful, intelligent creativity involved in intelligently directed configuration — aka, design — which

c –> leads to the productive working together of multiple, correct parts properly arranged to achieve function through their effective interaction,

d –> leaves behind it certain empirically evident and in principle quantifiable signs. In particular,

e –> the specific arrangement of particular parts or facets in the sort of nodes-arcs pattern in the exploded view diagram above is chock full of quantifiable, function-constrained information. That is,

f –> we may identify a structured framework and list of yes/no questions required to bring us to the cluster of effective configurations in the abstract space of possible configurations of relevant parts.

g –> This involves specifying the parts, specifying their orientation, their location relative to other parts, coupling, and possibly an assembly process. Where,

h –> such a string of structured questions and answers is a specification in a description language, and yields a value of functionally specific information in binary digits, bits.

If this sounds strange, reflect on how AutoCAD and similar drawing programs represent designs.
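To make the chain-of-yes/no-questions idea concrete, here is a minimal sketch in Python (mine; the catalogue sizes are made-up assumptions for illustration and do not describe the actual reel) of how a structured sequence of discrete choices in a description language yields a bit count:

```python
import math

# Toy "description language" for an assembly, per points f-h above: each part
# is specified by a handful of discrete choices, and a choice among n options
# costs log2(n) bits. All catalogue sizes below are illustrative assumptions.
PART_TYPES   = 200   # distinct parts available in the parts bin
ORIENTATIONS = 24    # discrete orientations allowed per part
LOCATIONS    = 500   # discrete mounting positions
COUPLINGS    = 16    # ways a part may couple to its neighbours

def bits_per_part():
    """Bits to specify one part: type, orientation, location, coupling."""
    return sum(math.log2(n) for n in (PART_TYPES, ORIENTATIONS, LOCATIONS, COUPLINGS))

def description_bits(num_parts):
    """Length of the functionally specific description, in bits."""
    return num_parts * bits_per_part()

for n in (10, 20, 100):
    print(f"{n:3d} parts -> {description_bits(n):7.1f} bits")
# 10 parts -> ~252 bits; 20 parts -> ~504 bits, already past the 500-bit
# threshold introduced below; 100 parts -> ~2519 bits.
```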

This is directly linked to a well known index of complexity, from Kolmogorov and Chaitin. As Wikipedia aptly summarises:

In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computational resources needed to specify the object . . . . the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.

A useful way to picture this is to recognise from the above, that the three dimensional complexity and functionally specific organisation of something like the 6500 C3 reel, may be reduced to a descriptive string. In the worst case (a random string), we can give some header contextual information and reproduce the string. In other cases, we may be able to spot a pattern and do much better than that, e.g. with an orderly string like abab . . . n times we can compress to a very short message that describes the order involved. In intermediate cases, in all codes we practically observe there is some redundancy that yields a degree of compressibility.
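As a rough, hands-on illustration of that point (a sketch only: true Kolmogorov complexity is uncomputable, and zlib merely gives an upper bound on description length), one can compare the compressed lengths of ordered, random and redundant-but-functional strings:

```python
import random
import zlib

# Crude stand-in for descriptive complexity: zlib's compressed size.
ordered = b"ab" * 500                                      # "abab..." x 500
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))  # random bytes
text = (b"the quick brown fox jumps over the lazy dog " * 23)[:1000]  # redundant English

for label, s in (("ordered", ordered), ("random", noise), ("text", text)):
    print(f"{label:8s} raw={len(s):4d}  compressed={len(zlib.compress(s, 9)):4d}")
# Typical outcome: the ordered string compresses drastically, the random one
# hardly at all, and functional text lands in between -- mirroring the
# ordered / random / functionally specific trichotomy discussed next.
```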

So, as Trevors and Abel were able to visualise a decade ago in one of the sleeping classic peer reviewed and published papers of design theory, we may distinguish random, ordered and functionally specific descriptive strings:

[Figure: ordered vs. random vs. functionally specific sequence complexity, per Trevors and Abel]

That is, we may see how islands of function emerge in an abstract space of possible sequences, in which compressibility trades off against order, and in which specific function in an algorithmic (or more broadly informational) context emerges. Where of course, functionality is readily observed in relevant cases: it works, or it fails, as any software debugger or hardware troubleshooter can tell you. Such islands may also be visualised in another way that allows us to see how this effect of sharp constraint on configurations in order to achieve interactive function enables us to detect the presence of design as best explanation of FSCO/I:

[Figure: defining complex specified information and the search threshold]

Obviously, as the just above infographic shows, beyond a certain level of complexity, the atomic and temporal resources of our solar system or the observed cosmos would be fruitlessly overwhelmed by the scope of the space of possibilities for descriptive strings, if search for islands of function was to be carried out on the approach of blind chance and/or mechanical necessity. We therefore now arrive at a practical process for operationally detecting design on its empirical signs — one that is independent of debates over visibility or otherwise of designers (but requires us to be willing to accept that we exemplify capabilities and characteristics of designers but do not exhaust the list of in principle possible designers):

[Figure: the design-inference explanatory filter]

Further, we may introduce relevant cases and a quantification:

[Figure: FSCO/I facts and cases]

That is, we may now introduce a metric model that summarises the above flowchart:

Chi_500 = I*S – 500, bits beyond the solar system search threshold . . . Eqn 1

What this tells us, is that if we recognise a case of FSCO/I beyond 500 bits (or if the observed cosmos is a more relevant scope, 1,000 bits) then the config space search challenge above becomes insurmountable for blind chance and mechanical necessity. The only actually empirically warranted adequate causal explanation for such cases is design — intelligently directed configuration. And, as shown, this extends to specific cases in the world of life, extending a 2007 listing of cases of FSCO/I by Durston et al in the literature.
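For concreteness, here is a minimal sketch of Eqn 1 in Python (the variable names are mine; S is the binary specificity dummy variable discussed in the comments below, and the worked numbers come from the 72-character ASCII example in the post):

```python
SOLAR_THRESHOLD  = 500    # bits, solar-system scale (Eqn 1)
COSMOS_THRESHOLD = 1000   # bits, observed-cosmos scale

def chi(i_bits, s, threshold=SOLAR_THRESHOLD):
    """Chi = I*S - threshold: bits beyond the search threshold.
    i_bits: functionally specific information, in bits (e.g. a description length).
    s: 1 if there is adequate empirical reason to accept functional
       specificity, else 0 (the default judgement)."""
    return i_bits * s - threshold

# Worked example: a 72-character ASCII string at 7 bits per character
# carries 72 * 7 = 504 bits. If judged functionally specific (s = 1):
print(chi(72 * 7, 1))   # 4: just beyond the 500-bit threshold
print(chi(72 * 7, 0))   # -500: no specificity, no design inference
```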

To see how this works, we may try the thought exercise of turning our observed solar system into a set of 10^57 atoms regarded as observers, assigning to each a tray of 500 coins. Flip every 10^-14 s or so, and observe, doing so for 10^17 s, a reasonable lifespan for the observed cosmos:

[Figure: the solar system as 10^57 atom-observers, each flipping a tray of 500 coins]

The resulting needle in haystack blind search challenge is comparable to a search that samples a one straw sized zone in a cubical haystack comparable in thickness to our galaxy. That is, we here apply a blind chance and mechanical necessity driven dynamic-stochastic search to a case of a general system model,

[Figure: general dynamic-stochastic system and process model]

. . . and find it to be practically insuperable.
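A quick back-of-envelope check of the numbers in that exercise (a sketch using the post's own round figures, nothing more):

```python
from math import log10

atoms       = 1e57   # atoms in the solar system, each treated as an observer
flips_per_s = 1e14   # one tray observation per ~10^-14 s
seconds     = 1e17   # duration of the exercise

observations = atoms * flips_per_s * seconds   # total samples ~ 1e88
config_space = 2.0 ** 500                      # states of 500 coins ~ 3.3e150

print(f"observations   ~ 10^{log10(observations):.0f}")
print(f"configurations ~ 10^{log10(config_space):.1f}")
print(f"fraction sampled ~ 10^{log10(observations / config_space):.1f}")
# fraction ~ 10^-62.5: the sample stands to the space of possibilities as far
# less than one straw to a galaxy-thick haystack.
```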

By contrast, intelligent designers routinely produce text strings of 72 ASCII characters in recognisable, context-responsive English and the like.

[U/D Feb 5th:] I forgot to add a note on the integration of a von Neumann self-replication facility, which requires a significant increment in FSCO/I and may be represented:

[Figure: von Neumann kinematic self-replicator]

Following von Neumann generally, such a machine uses . . .

(i) an underlying storable code to record the required information to create not only
(a) the primary functional machine [here, for a "clanking replicator" as illustrated, a Turing-type "universal computer"; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also
(b) the self-replicating facility; and, that
(c) can express step by step finite procedures for using the facility;

(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with

(iii) a tape reader [called "the constructor" by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:

(iv) position-arm implementing machines with "tool tips" controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by

(v) either:

(1) a pre-existing reservoir of required parts and energy sources, or

(2) associated "metabolic" machines carrying out activities that, as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
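As an aside, the logic of parts (i)-(iii), one stored description used in two distinct roles (interpreted to build, and copied uninterpreted to pass on), can be sketched in software. A toy illustration in Python, mine and purely illustrative:

```python
# Toy illustration of the von Neumann split: a single stored description
# ("tape") is used in two different ways by two different facilities.
blueprint = 'print("constructed: one functional output")'

def constructor(tape):
    # Role 1: interpret the tape to build/act (the "constructor" role).
    exec(tape)

def copier(tape):
    # Role 2: duplicate the tape verbatim, without interpreting it.
    return str(tape)

constructor(blueprint)                 # builds (here: acts on) the description
offspring_tape = copier(blueprint)     # passes an exact copy to the "daughter"
assert offspring_tape == blueprint     # replication preserved the information
```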
Here, Mignea's 2012 discussion [cf. slide show here and presentation here] of a minimal self-replicating cellular form will also be relevant, involving duplication and arrangement then separation into daughter automata. This requires stored algorithmic procedures, descriptions sufficient to construct components, means to execute instructions, materials handling, controlled energy flows, wastes disposal and more:
[Figure: Mignea's minimal self-replicating cell model]

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.

Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations. In short, outside such functionally specific — thus, isolated — information-rich hot (or, "target") zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.

And ever since Paley spoke of the thought exercise of a watch that replicated itself in the course of its movement, it has been pointed out that such a jump in FSCO/I points to yet higher, more perfect art as credible cause.

It bears noting, then, that the only actually observed source of FSCO/I is design.

That is, we see here the vera causa test in action, that when we set out to explain observed traces from the unobservable deep past of origins, we should apply in our explanations only such factors as we have observed to be causally adequate to such effects. The simple application of this principle to the FSCO/I in life forms immediately raises the question of design as causal explanation.

A good step to help us see why is to consult Leslie Orgel in a pivotal 1973 observation:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant "wiring diagram" for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q's to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes.

[The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course,

a –> that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one, that has led to mutual ruin documented by Shapiro and Orgel between metabolism first and genes first schools of thought, cf here.

b –> Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. Finally,

c –> Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W; for biological systems, functional islands. That puts up serious questions for the origin of dozens of body plans, reasonably requiring some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken's remarks a few years later, as already cited, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]

. . . and J S Wicken in a 1979 remark:

'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [[i.e. "simple" force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' [["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: "originally" is added to highlight that for self-replicating systems, the blueprint can be built-in.)]

. . . then also this from Sir Fred Hoyle:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare's plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true." [[Evolution from Space (The Omni Lecture [–> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

 Why then, the resistance to such an inference?

AR gives us a clue:

The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

That is, there is a perception that to make a design inference on the origin of life or of body plans, based on the observed cause of FSCO/I, is to abandon science for religious superstition. This, regardless of the strong insistence of design thinkers from the inception of the school of thought as a movement, that inference to design in the world of life is inference to ART as causal process (in contrast to blind chance and mechanical necessity), as opposed to inference to the supernatural. And underneath lurks the problem of a priori imposed Lewontinian evolutionary materialism, as was notoriously stated in a review of Sagan's The Demon-Haunted World:

[Figure: cover, The Demon-Haunted World]

. . . the problem is to get them [hoi polloi] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . .

[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. In case you imagine this is “quote-mined” I suggest you read the fuller annotated cite here.]

A priori Evolutionary Materialism has been dressed up in the lab coat and many have thus been led to imagine that to draw an inference that just might open the door a crack to that barbaric Bronze Age sky-god myth — as they have been indoctrinated to think about God (in gross error, start here) — is to abandon science for chaos.

Philip Johnson’s reply, rebuttal and rebuke was well merited:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
[Figure: Tree of Life model, per Smithsonian Museum; note the root, OOL]

And so, our answer to AR must first reflect BA’s: Craig Venter et al positively demonstrate that intelligent design and/or modification of cell based life forms is feasible, effective and an actual cause of observable information in life forms. To date, by contrast — after 150 years of trying — the observational base for bio-functional complex, specific information beyond 500 – 1,000 bits originating by blind chance and mechanical necessity is ZERO.

So, straight induction trumps ideological speculation, per the vera causa test.

That is, at minimum, design sits at the explanatory table regarding origin of life and origin of body plans, as of inductive right.

And, we may add that by highlighting the case for the origin of the living cell, this applies from the root on up and should shift our evaluation of the reasonableness of design as an alternative for major, information-rich features of life-forms, including our own. Particularly as regards our being equipped for language.

Going beyond, we note that we observe intelligence in action, but have no good reason to confine it to embodied forms. Not least, because blindly mechanical, GIGO-limited computation such as in a ball and disk integrator:

[Figure: Thomson ball-and-disk integrator]

. . . or a digital circuit based computer:

[Figure: microprocessor (MPU) system model]

. . . or even a neural network:

[Figure: a neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO principle]

. . . is signal processing on a dynamic-stochastic system; it simply is not equal to insightful, self-aware, responsibly free rational contemplation, reasoning, warranting, knowing and linked imaginative creativity. Indeed, it is the gap between these two things that is responsible for the intractability of the so-called Hard Problem of Consciousness, as can be seen from say Carter's formulation, which insists on the reduction:

The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. It is contrasted with the “easy problems” of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they “persist even when the performance of all the relevant functions is explained.”

Notice, the embedded a priori materialism.

2,350 years past, Plato spotlighted the fatal foundational flaw in The Laws, Bk X, drawing an inference to cosmological design:

Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

[ . . . . ]

Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound-how should we describe it?

Cle. You mean to ask whether we should call such a self-moving power life?

Ath. I do.

Cle. Certainly we should.

Ath. And when we see soul in anything, must we not do the same-must we not admit that this is life? [ . . . . ]

Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things?

Cle. Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things.

Ath. And is not that motion which is produced in another, by reason of another, but never has any self-moving power at all, being in truth the change of an inanimate body, to be reckoned second, or by any lower number which you may prefer?

Cle. Exactly.

Ath. Then we are right, and speak the most perfect and absolute truth, when we say that the soul is prior to the body, and that the body is second and comes afterwards, and is born to obey the soul, which is the ruler?

[ . . . . ]

Ath. If, my friend, we say that the whole path and movement of heaven, and of all that is therein, is by nature akin to the movement and revolution and calculation of mind, and proceeds by kindred laws, then, as is plain, we must say that the best soul takes care of the world and guides it along the good path. [[Plato here explicitly sets up an inference to design (by a good soul) from the intelligible order of the cosmos.]

In effect, the key problem is that in our time, many have become wedded to an ideology that attempts to get North by insistently heading due West.

Mission impossible.

Instead, let us let the chips lie where they fly as we carry out an inductive analysis.

Patently, FSCO/I is only known to come about by intelligently directed — thus purposeful — configuration. The islands of function in config spaces and needle in haystack search challenge easily explain why, on grounds remarkably similar to those that give the statistical underpinnings of the second law of thermodynamics.

Further, while we exemplify design and know that in our case intelligence is normally coupled to brain operation, we have no good reason to infer that it is merely a result of the blindly mechanical computation of the neural network substrates in our heads. Indeed, we have reason to believe that blind, GIGO-limited mechanisms driven by forces of chance and necessity are utterly categorically different from our familiar responsible freedom. (And it is noteworthy that those who champion the materialist view often seek to undermine responsible freedom to think, reason, warrant, decide and act.)

To all such, we must contrast the frank declaration of evolutionary theorist J B S Haldane:

"It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter." [["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p. 209. (Highlight and emphases added.)]

And so, when we come to something like the origin of a fine-tuned cosmos fitted for C-chemistry, aqueous-medium, code- and algorithm-using, cell-based life, we should at least be willing to seriously consider Sir Fred Hoyle's point:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.  Emphasis added.]

As he also noted:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [[“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

That is, we should at minimum be willing to ponder seriously the possibility of creative mind beyond the cosmos, beyond matter, as root cause of what we see. If, we are willing to allow FSCO/I to speak for itself as a reliable index of design. Even, through a multiverse speculation.

For, as John Leslie classically noted:

One striking thing about the fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once. Look at electromagnetism. Electromagnetism seems to require tuning for there to be any clear-cut distinction between matter and radiation; for stars to burn neither too fast nor too slowly for life’s requirements; for protons to be stable; for complex chemistry to be possible; for chemical changes not to be extremely sluggish; and for carbon synthesis inside stars (carbon being quite probably crucial to life). Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning?

. . . [.] . . . the need for such explanations does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is 'fine tuned for observers', we base our claim on how life's evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly. [Our Place in the Cosmos, 1998 (courtesy Wayback Machine). Emphases added.]

In short, our observed cosmos sits at a locally deeply isolated, functionally specific, complex configuration of underlying physics and cosmology that enables the sort of life forms we see. That needs to be explained adequately, even as for a lone fly on a patch of wall swatted by a bullet.

And, if we are willing to consider it, that strongly points to a marksman with the right equipment.

Even, if that may be a mind beyond the material, inherently contingent cosmos we observe.

Even, if . . . END

Comments
Me_Think: "If you need to create a 3D world, you have to be on higher dimension." I am not sure I understand: are you saying that 3D printers are on higher dimensions? Or, if you refer to creating the whole universe (and therefore to God), then the correct way to say it is, IMO: "If you need to create a whole universe based on dimensions, time and space, and whatever else, you have to be out of (beyond) dimensions, time and space, and whatever else". Which is exactly what God is for many religious views: have you ever heard the word "transcendent"? Where is the problem?gpuccio
February 5, 2015
February
02
Feb
5
05
2015
01:20 AM
1
01
20
AM
PDT
Ah, and praise is due to Aurelio Smith and sparc for avoiding the mock addiction and for trying to bring the discussion where it belongs, on the biological facts. I hope we can have a fruitful discussion later.
gpuccio, February 5, 2015 at 01:15 AM (PDT)

Cross: You are right: mock, definitely! :)
gpuccio, February 5, 2015 at 01:12 AM (PDT)

Cross @ 57

"If you insist 'First this: God created the Heavens and Earth'"

Well, you have got a problem right there. If you need to create a 3D world, you have to be on a higher dimension. Even a simple 4th (spatial) dimension is impossible to manage: you can't even tie a knot beyond the 3rd dimension; all knots in other dimensions will be unknots. A creator who creates in 4D or any higher dimension can't even create a single atom which conforms to the 3D world, as the atoms will have nP orbitals (n = the number of dimensions). In the 4th dimension, you have 4P orbitals; in the 5th, 5P orbitals; and so on. This will allow more than two electrons per orbital, which will change the element! Every element will have weird properties! Atoms will collapse easily. Atomic bonding will be shot. Molecules forming will be difficult and weird, not to mention minor problems like sound rippling back (at least in even-numbered dimensions), which will make communication with ID helpers difficult; then there is the problem of constructing structures with higher edges (10, 32, 24, 96, 720, 1200) . . . etc.
Me_Think, February 5, 2015 at 01:11 AM (PDT)

Me_think: "I think the ID agent pulled up a chair and worked out all parameters of nano machine on his Mac, converted it into stl file and printed it with a bio 3d printer. What do you think?" I think that is a good metaphor of what really happened. But probably, Windows and Linux were given a chance too. Seriously, you have described very correctly the steps that the designer must have implemented: a purpose, a plan, an implementation at software level, a final implementation at hardware level. You are, definitely, a good ID thinker. :) Ah, I suppose the chair is a bonus!gpuccio
February 5, 2015
February
02
Feb
5
05
2015
01:09 AM
1
01
09
AM
PDT
Me_Think:

"Heh. The non-physical conscious being (just a thought - may be he is made of dark matter with a dark energy brain) has to go over from place to place to fix processes - right? So 'from where' is within context, but of course you can't divulge the secrets."

Wrong. I can and will divulge all secrets pertinent and which I know! (I have never been good at keeping secrets :) ) This is pertinent and this I know: once we localize in time and space a design intervention, then the consciousness of the designer must have acted at that time and in that place to design the thing. This is pertinent. "From where" the designer came there is not pertinent, and I know nothing about that non-pertinent question. So, let's take an example. I have inferred design for ATP synthase. So, wherever ATP synthase first emerged, and at whatever time it happened, the designer acted there. Now, I am afraid that at present we have no clear spatial location for such an event: it probably happened somewhere in the seas. OK, the designer acted there. We have, however, some idea of when it happened: in LUCA, and in the window of time when LUCA existed. Not exactly a precise date, but certainly a definite range of possibilities. Anyway, the designer acted at that time. The design activity can certainly be localized well in time and space, because specific design events can be inferred by ID theory. Our knowledge about the specific temporal and spatial collocation of those design events depends simply on our growing understanding of the events themselves. On the contrary, the wanderings of the designer between his design activities are not, at present, in the range of ID theory, because ID theory is interested in the observed design events. If you are really curious, maybe you can try different disciplines for that.

"No. Claiming xyz detects design in a blog and posting few OPs on that same blog is not proof of the claim. There are thousands of websites and blogs claiming all kinds of things. Their posting articles in their own blog proving their claim is inconsequential."

In science nothing can be "proved". We discuss our different interpretations of data. Each thinking person has to decide what is the best explanation. That choice cannot be delegated to anyone. Then, there is the "consensus": what the majority believes. That is interesting, but in no way is it "proof". Many people think differently from the consensus. Who is right? We cannot say in general. The consensus can be biased, and often it is. On the other hand, single individuals can certainly be wrong, and often are. So, again, the final decision is of each thinking person. That's why we discuss here, even if our ideas are certainly those of a minority (at least in the scientific Academy). I believe that my ideas are true, and therefore I discuss them here, even if the consensus does not approve. This is worthwhile, for me, and I like it.

"His paper is about Functional Sequence Complexity which is measured in Fits. It is about calculating change in functional uncertainty from ground state."

It certainly is. And that is exactly the same thing as computing dFSI. As I have discussed many times. The only thing that Durston does not give in his paper is a threshold to categorize the dFSI value into a binary value (dFSCI yes/no). I have used Durston's numbers with a threshold proposed by me for biological objects (150 bits), and inferred design for most of his protein families. But there is absolutely no doubt that Durston's paper is about the measurement of digital functional complexity in protein families: he has explained that many times, himself. And there is no doubt that Durston is a researcher in the ID field.

"No, the truth is even ID journals shun away from CSI and design detection. They prefer the 'search landscape with steep hills which poor evolution can't climb' papers."

You should be able to understand that it's the same thing. The main objection (probably the only serious one) of neo-darwinists to the application of design detection to proteins is that the protein functional landscape, at present not completely known, could be such that the silly neo-darwinist explanation may be vaguely credible. That is not true, and the researchers you mention are trying to demonstrate exactly that. It's completely pertinent and important ID research.
gpuccio, February 5, 2015 at 01:03 AM (PDT)

F/N: One key point in all the above is to rivet the reality and direct observability of FSCO/I in mind. This is a real, intuitively recognisable, routinely observed and quantifiable phenomenon with one actually observed cause, intelligently directed configuration. AKA, design. Where, the islands of function effect and linked needle in haystack blind search challenge give a plausible reason for that. Where also the quantification approach at first level is as outlined by Orgel in 1973, structured string of Y/N q's that reduces a 3-d interactive structure to a linear textual description in ultimately binary digits, which can be assessed on tolerance for noise or variability. And of course an onward point since Paley then von Neumann is that the addition of an integrated self replicating facility reflects an enormous increment of FSCO/I. KF
kairosfocus, February 5, 2015 at 12:56 AM (PDT)

sparc: I will answer your comments later today, in the new thread. (Well, I really hope that I will have time to keep this promise! :) )
gpuccio, February 5, 2015 at 12:35 AM (PDT)

Jerad, S is a dummy variable of binary assignment, default 0 for non-specific. It is set to 1 on having adequate empirical reason to accept functional specificity, as has been discussed many times in and around UD. For the case of the main gear and the wider assembly above, that is intuitive but can be formally explored through assessment of tolerances. Let's just say that reels prior to the 1950's were not interchangeable, i.e. the precision was too low and parts had to be particularly matched to get a working reel. That BTW was true of the famous 0.303 Lee-Enfield family of military rifles in British and Commonwealth service from 1895 on [and in 7.62 NATO form still in Indian reserve and/or Police service it seems], with IIRC 17 million examples. That's why the detachable magazine was specific to the individual rifle and why loading was by charger clip. A similar pattern was discovered in the US on attempting to mass produce the Swedish Bofors 40 mm cannon in the 1940's. The idea of tolerance easily extends to noise tolerance of the resulting bit string on reduction to specifying description; indeed, tolerance is a part of specification, what with error budgets etc. S, functional specificity, then allows us to assess the presence or absence of the island of function effect resulting from particularity of organised interaction to achieve function; i.e. it is naturally present and non-arbitrary . . . why (as was noted above) shaking the reel parts together in a bait bucket will be utterly unlikely to be fruitful. KF
kairosfocus, February 5, 2015 at 12:34 AM (PDT)

Aurelio Smith: "There seems to be a wealth of literature so it seems a fruitful area of research. Have there been any developments produced using ID as a paradigm?"

We will see better in the new thread. But for the moment, a very simple comment. It is, certainly, a "fruitful area of research". IOWs, we are understanding many important things about how it works. But what has that to do with understanding how it came into existence? Those are two very different issues. ID and neo-darwinism are paradigms about the origin of functional information. Of course, all research about the nature and organization of function in biology is precious and relevant to compare those two paradigms. So, all research of that kind, whoever does it, is ultimately ID research (if ID is right) or neo-darwinist research (if neo-darwinism is right), but in itself it is neither; it is just research to understand how things work. The results of that research can be interpreted in an ID framework or a neo-darwinist framework, and any intelligent observer is free to decide which is better. That most of the good research about how things work is done by the official Academy which believes in the folly of neo-darwinism is really obvious: who do you think owns the resources of people, money, institutions and so on, today? However, good research is always good research, whatever the prejudices of those who do it. The methodology, sometimes, will be biased; the interpretations, sometimes, will be biased (cognitive bias can never be really suppressed in human activities); but the data and results, in most cases, will be precious and important. However, if you are aware of good research which helps explain how the system described by me came into existence, please link to it, and we will discuss it in the new thread.
gpuccio, February 5, 2015 at 12:33 AM (PDT)

Dionisio: I think we can proceed this way. We can go on discussing here the general issue raised so well by KF, and I will answer here the comments about that. For those interested in the discussion about affinity maturation and its possible explanations (see sparc's two posts) I will open (later today) a simple new thread where we can go deeper.
gpuccio, February 5, 2015 at 12:21 AM (PDT)

sparc, are you aware that GP is a pretty serious practising physician, as is also Dr Giem? KF
kairosfocus, February 5, 2015 at 12:18 AM (PDT)

DATCG, very interesting catch. Each of the cases you identify, beyond a modest threshold, will easily pass the FSCO/I threshold for inferring design. And yes, they are cases of that overall pattern. Algorithms and digitally coded info too. KF
kairosfocus, February 5, 2015 at 12:16 AM (PDT)

MT, Functional Sequence Complexity, in a context of specific function, is a case of functionally specific complex organisation and associated information. The term is descriptive, and one key result is the reduction of 3-d organised nodes-arcs, wireframes etc to linear sequences through structured strings of y/n q's. Where Durston et al went was to the further extent of assessing variability that retains function, applying the H-metric. KF
kairosfocus, February 5, 2015 at 12:12 AM (PDT)

In your equation 1, Chi_500 = I*S – 500, how is S determined?
Jerad, February 4, 2015 at 11:40 PM (PDT)

#60 addendum

Before we move further on this, we may want to know if a separate discussion thread will be started just for this. Thus the posts won't have to be cross-referenced between threads. Let's keep in mind that there's a substantial amount of detailed information to dissect further for this discussion. The enzymes involved in the processes could be reviewed, as well as their variations involved within different scenarios. But most importantly we should look carefully at the actual choreographies where the referred enzymes and their variations play any role. How did they get orchestrated? Special attention to timing, location, quantity? Other issues to consider too? A separate thread could make it easier to follow up the posted comments. Any suggestions on this?
Dionisio, February 4, 2015 at 10:10 PM (PDT)

#58 sparc

"You haven't looked up evolution of AID, did you?"

Are you referring to the activation-induced deaminase? As in this paper? http://journal.frontiersin.org/journal/10.3389/fmicb.2014.00534/full If that's what you meant, then that's a valid point, thank you for bringing it up here. We could discuss this in reference to gpuccio's post #6. Perhaps gpuccio will consider KF's suggestion to start a separate thread just for this particular discussion, as I can see it may extend quite a bit, after we dig into the details of the most recent related papers on this subject.
Dionisio, February 4, 2015 at 09:42 PM (PDT)

BTW, you left out the part of B-cell development that occurs without any antigen. Lots of mutations, rearrangements and selection. Where and how does ID interfere in these processes? Especially in cases of man-made synthetic artificial antigens that were not present 50 years ago?
sparc, February 4, 2015 at 09:09 PM (PDT)

[Quoting gpuccio, comment 6, February 4, 2015 at 8:50 am:]

"KF: Thank you for the very good summary. Among many other certainly interesting discussions, we may tend to forget sometimes that functionally specified complex information is the central point in ID theory. You are very good at reminding that to all here. I would like to suggest a very good example of multilevel functional complexity in biology, which is often overlooked. It is an old favourite of mine, the maturation of antibody affinity after the initial immunological response. Dionisio has recently linked an article about a very recent paper. The paper is not free, but I invite all those interested to look at the figures and legends, which can be viewed here: http://www.nature.com/nri/jour.....28_ft.html The interesting point is that the whole process has been defined as "darwinian", while it is the best known example of functional protein engineering embedded in a complex biological system. In brief, the specific B cells which respond to the hapten (antigen) at the beginning of the process undergo a sequence of targeted mutations and specific selection, so that new cells with more efficient antibody DNA sequences can be selected and become memory cells or plasma cells. The whole process takes place in the Germinative Center of lymph nodes, and involves (at least):

1) Specific B cells with a BCR (B cell receptor) which reacts to the external hapten.
2) Specific T helper cells.
3) Antigen presenting cells (Follicular dendritic cell) which retain the original hapten (the external information) during the whole process, for specific intelligent selection of the results.
4) Specific, controlled somatic hypermutation of the Variable region of the Ig genes, implemented by the following molecules (at least):
a) Activation-Induced (Cytidine) Deaminase (AID): a cytosine:guanine pair is directly mutated to a uracil:guanine mismatch.
b) DNA mismatch repair proteins: the uracil bases are removed by the repair enzyme, uracil-DNA glycosylase.
c) Error-prone DNA polymerases: they fill in the gap and create mutations.
5) The mutated clones are then "measured" by interaction with the hapten presented by the Follicular DC. The process is probably repeated in multiple steps, although it could also happen in one step.
6) New clones with reduced or lost affinity are directed to apoptosis.
7) New clones with higher affinity are selected and sustained by specific T helper cells.

In a few weeks, the process yields high affinity antibody producing B cells, in the form of plasma cells and memory cells. You have it all here: molecular complexity, high control, multiple cellular interactions, irreducible complexity in tons, spatial and temporal organization, extremely efficient engineering. The process is so delicate that errors in it are probably the cause of many human lymphomas. Now, that's absolute evidence for Intelligent Design, if ever I saw it."

You haven't looked up evolution of AID, did you?
sparc, February 4, 2015 at 08:50 PM (PDT)

Me_Think, if you insist:

"First this: God created the Heavens and Earth—all you see, all you don't see." Genesis 1:1 MSG

"Oh yes, you shaped me first inside, then out; you formed me in my mother's womb. I thank you, High God—you're breathtaking! Body and soul, I am marvelously made! I worship in adoration—what a creation! You know me inside and out, you know every bone in my body; You know exactly how I was made, bit by bit, how I was sculpted from nothing into something. Like an open book, you watched me grow from conception to birth; all the stages of my life were spread out before you, The days of my life all prepared before I'd even lived one day." Psalm 139:14 MSG

Now, where is your materialistic explanation?
Cross, February 4, 2015 at 08:02 PM (PDT)

Me_Think @ 55

Please put away your "Ned Flanders" view of Christians. You really have an axe to grind, don't you? I am sure Tim Cook is a nice guy; he is just another sinner in need of a Savior, like you or me. Could we get back to the OP? Do you have a real explanation for the nano machines or are you just intent on side tracks?
Cross, February 4, 2015 at 07:55 PM (PDT)

Cross @ 53

Apple wouldn't be pleased about it. Its CEO is gay, and apparently the 'objective morality' dictated by ID agents doesn't allow that orientation. Anyway, your answer is still pending.
Me_Think, February 4, 2015 at 07:39 PM (PDT)

From Frontiers in Bioengineering and Biotechnology... an interesting look at why a Design heuristic is profitable for discovery and research, and why applying Design methodology helps in research: "A systems engineering perspective on homeostasis and disease".

"With our increasing understanding of life's multi-scale trans-hierarchical architecture, it has been suggested that living systems share characteristics common to engineered systems and that there is much to be learned about biological complexity from engineered systems (Csete and Doyle, 2002; Doyle and Csete, 2011). This is not to say that biological systems are engineered systems: biological systems are clearly distinct and different by virtue of having resulted from evolution (obligatory denial of design) as opposed to design. However (now that we've bowed to Darwin, let's move on), there are some similarities between their consequent organization and that of engineered systems that can provide useful insights (D'Onofrio and An, 2010). For instance, engineered systems can be perceived as coupled networks of interacting sub-systems, whose dynamics are constrained to tight requirements of robustness (to maintain safe operation) on one hand, and maintaining a certain degree of flexibility to accommodate changeover on the other. The aim of analysis, synthesis, and design of complex supply chains is to identify the laws governing optimally integrated systems. Optimality of operations is not a uniquely defined property and usually expresses the decision maker's balance between alternative, often conflicting, objectives. Both biological and engineered complex constructs have evolved through multiple iterations, the former by natural processes (obligatory nod to random, unguided iterations) and the latter by design, to optimize function in a dynamically changing environment by maintaining systemic responses within acceptable ranges. Deviation from these limits leads to possibly irreversible damage. Stability and resiliency of these constructs results from dynamic interactions among constitutive elements. The precise definition and prediction of complex outcomes dependent on these traits is critical in the diagnosis and treatment of many disease processes, such as inflammatory diseases (Vodovotz and An, 2013)."

Dynamic, Integrated Systems, Feedback Loops, Responses, Messaging, Optimal Integration, Organization and Decision Making are hallmarks of Design, not unguided processes.
DATCG, February 4, 2015 at 07:30 PM (PDT)

Me_Think @ 50

I can't wait for the response when you publish your theory. Apple will be pleased: the computer that God uses!
Cross, February 4, 2015 at 07:22 PM (PDT)

Cross @ 51, I gave an answer. What is yours?
Me_Think, February 4, 2015 at 07:10 PM (PDT)

Point (2) Mock, also confirmed; still no answer.
Cross, February 4, 2015 at 07:09 PM (PDT)

Cross @ 49

I think the ID agent pulled up a chair and worked out all parameters of nano machine on his Mac, converted it into stl file and printed it with a bio 3d printer. What do you think?
Me_Think, February 4, 2015 at 06:58 PM (PDT)

Barry Arrington @ 5 [prediction]: "They will most certainly not explain how ultra sophisticated nanotech machines can self-assemble. Nor will they explain how sophisticated algorithms arise spontaneously from nothing."

Prediction confirmed.
Cross, February 4, 2015 at 06:44 PM (PDT)

KF @ 38

"This deserves to be framed:"

Thanks.

"Design detection is a routine matter"

True.

"and it uses intuitive or quantitative estimates that boil down to recognizing FSCO/I."

False. How many of your colleagues have claimed they calculate FSCO/I to detect design? The Durston et al paper (if you mean the paper that GP linked) is about Functional Sequence Complexity, which is measured in Fits. It is about calculating change in functional uncertainty from ground state, not design detection. The rest of the post shows you believe FSCO/I is nothing more than the intuition that something is designed. So why calculate FSCO/I at all?
Me_Think, February 4, 2015 at 06:01 PM (PDT)

gpuccio @ 34

"Indeed, it is much more likely that we are dealing with a non physical conscious being. So, asking 'where' is a little out of context here."

Heh. The non-physical conscious being (just a thought - may be he is made of dark matter with a dark energy brain) has to go over from place to place to fix processes - right? So 'from where' is within context, but of course you can't divulge the secrets.

gpuccio @ 35

"ID has shown many times that it can detect design."

No. Claiming xyz detects design in a blog and posting few OPs on that same blog is not proof of the claim. There are thousands of websites and blogs claiming all kinds of things. Their posting articles in their own blog proving their claim is inconsequential.

"..And the Durston paper is a peer reviewed paper,"

His paper is about Functional Sequence Complexity which is measured in Fits. It is about calculating change in functional uncertainty from ground state.

"Not much, I know, but what would you expect from an Academy which has already decided that any reference to ID must be banned from science?"

No, the truth is even ID journals shun away from CSI and design detection. They prefer the 'search landscape with steep hills which poor evolution can't climb' papers.
Me_Think, February 4, 2015 at 05:44 PM (PDT)
