
Functionally Specific, Complex Organisation and Associated Information (FSCO/I) is real and relevant


Over the past few months, I noticed objectors to design theory dismissing or studiously ignoring a simple — much simpler than a clock — macroscopic example of Functionally Specific, Complex Organisation and/or associated Information (FSCO/I) and its empirically observed source, the ABU-Garcia Ambassadeur 6500 C3 fishing reel:

abu_6500c3mag

Yes, FSCO/I is real, and has a known cause.

{Added, Feb 6} It seems a few other clearly paradigmatic cases will help rivet the point, such as the organisation of a petroleum refinery:

Petroleum refinery block diagram illustrating FSCO/I in a process-flow system

. . . or the wireframe view of a rifle ‘scope (which itself has many carefully arranged components):

wireframe_scope

. . . or a calculator circuit:

calc_ckt

. . . or the wireframe for a gear tooth (showing how complex and exactingly precise a gear is):

spiral_gear_tooth

And if you doubt its relevance to the world of cell-based life, I draw your attention to the code-based, ribosome-using protein synthesis process that is a commonplace of life forms:

Protein Synthesis (HT: Wiki Media)

Video:

[vimeo 31830891]

U/D Mar 11: let’s add, as a parallel to the oil refinery, an outline of the cellular metabolism network, a case of integrated complex chemical systems instantiated using molecular nanotech that leaves the refinery in the dust for elegance and sophistication . . . noting how protein synthesis as outlined above is just the tiny corner at top left below, showing DNA, mRNA and protein assembly using tRNA in the little ribosome dots:

cell_metabolism

Now, the peculiar thing is that this demonstration of the reality and relevance of FSCO/I was routinely, studiously ignored by objectors; there were even condescending or visibly annoyed dismissals of my having made repeated reference to a fishing reel as a demonstrative example.

But, in a current thread, Andre has brought the issue back into focus, as we can note from an exchange of comments:

Andre, #3: I have to ask our materialist friends…..

We have recently discovered a 3rd rotary motor [ –> after the Flagellum and the ATP Synthase Enzyme] that is used by cells for propulsion.

http://www.cell.com/current-bi…..%2901506-1

Please give me an honest answer: how on earth can you even believe or hang on to the hope that this system not only designed itself but built itself? This view is not in accordance with what we observe in the universe. I want to believe you that it can build and design itself but please show me how! I’m an engineer and I can promise you in my whole working life I have NEVER seen such a system come into existence on its own. If you have proof of this please share it with me so that I can also start believing in what you do!

Andre, 22: I see no attempt by anyone to answer my question…

How do molecular machines design and build themselves?

Anyone?

KF, 23: providing you mean the heavily endothermic, information-rich molecules and key-lock fitting components in the nanotech machines required for the living cell, they don’t, and especially not in our observation. Nor do codes (languages) and algorithms (step by step procedures) assemble themselves out of molecular noise in warm salty ponds etc. In general, the notion that functionally specific complex organisation and associated information comes about by blind chance and mechanical necessity is without empirical warrant. But institutionalised commitment to Lewontinian a priori evolutionary materialism has created the fixed notion in a great many minds that this “must” have happened and that to stop and question this is to abandon “Science.” So much the worse for the vera causa principle, that in explaining a remote, unobserved past of origins, we must first observe the actual causes seen to produce such effects and use them in explanation. If that were done, the debates and contentions would be over, as there is but one empirically grounded cause of FSCO/I: intelligently directed configuration, aka design.

Andre, 24: On the money.

Piotr is an expert on linguistics; I wonder if he can tell us how the system of speech transmission, encoding and decoding could have evolved in a stepwise fashion.

Here is a simple example…..

http://4.bp.blogspot.com/_1VPL…..+Model.gif

[I insert:]

Transactional_Model
[And, elaborating a bit on technical requisites:]

A communication system

I really want to know how, or am I just being unreasonable again?

We need to go back to the fishing reel, with its story:

[youtube bpzh3faJkXk]

The closest we got to a reasonable response on the design-indicating implications of FSCO/I in fishing reels as a positive demonstration (with implications for other cases) is this, from AR:

It requires no effort at all to accept that the Abu Ambassadeur reel was designed and built by Swedes. My father had several examples. He worked for a rival company and was tasked with reverse-engineering the design with a view to developing a similar product. His company gave up on it. And I would be the first to suggest there are limits to our knowledge. We cannot see beyond the past light-cone of the Earth.

I think a better word that would lead to less confusion would be “purposeful” rather than “intelligent”. It better describes people, tool-using primates, beavers, bees and termites. The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

Now, it should be readily apparent . . . let’s expand in step by step points of thought [u/d Feb 8] . . . that:

a –> intelligence is inherently purposeful, and

b –> the fishing reel is an example of how the purposeful, intelligent creativity involved in intelligently directed configuration — aka, design —

c –> leads to productive working together of multiple, correct parts properly arranged to achieve function through their effective interaction

d –> leaves behind it certain empirically evident and in principle quantifiable signs. In particular,

e –> the specific arrangement of particular parts or facets in the sort of nodes-arcs pattern in the exploded view diagram above is chock full of quantifiable, function-constrained information. That is,

f –> we may identify a structured framework and list of yes/no questions required to bring us to the cluster of effective configurations in the abstract space of possible configurations of relevant parts.

g –> This involves specifying the parts, specifying their orientation, their location relative to other parts, coupling, and possibly an assembly process. Where,

h –> such a string of structured questions and answers is a specification in a description language, and yields a value of functionally specific information in binary digits, bits.

If this sounds strange, reflect on how AutoCAD and similar drawing programs represent designs.
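To make the bit-counting idea concrete, here is a minimal sketch in Python, assuming a toy description language of my own devising; the catalogue size, orientation count, attachment points and parts count are all illustrative assumptions, not measurements of the actual reel:

```python
import math

# Toy description language (an illustrative assumption, not a real CAD
# format): each part in an assembly is pinned down by answering a fixed
# chain of structured yes/no questions, i.e. by a bit string.
def bits_for(choices: int) -> int:
    """Bits needed to pick one option from `choices` alternatives."""
    return math.ceil(math.log2(choices))

CATALOGUE_SIZE = 64   # assumed: 64 distinct part types to choose from
ORIENTATIONS   = 24   # assumed: 24 discrete orientations per part
ATTACH_POINTS  = 16   # assumed: up to 16 attachment points per part

bits_per_part = (bits_for(CATALOGUE_SIZE)
                 + bits_for(ORIENTATIONS)
                 + bits_for(ATTACH_POINTS))
n_parts = 60          # assumed: a reel-scale parts count

print(f"{bits_per_part} bits/part * {n_parts} parts = "
      f"{bits_per_part * n_parts} bits")
# -> 15 bits/part * 60 parts = 900 bits of function-constrained
#    specification, even in this crude toy encoding.
```

Even this deliberately coarse encoding lands in the region of the 500 to 1,000 bit thresholds discussed below.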

This is directly linked to a well known index of complexity, from Kolmogorov and Chaitin. As Wikipedia aptly summarises:

In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computational resources needed to specify the object . . . .  the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string’s size are not considered to be complex.

A useful way to picture this is to recognise from the above that the three-dimensional complexity and functionally specific organisation of something like the 6500 C3 reel may be reduced to a descriptive string. In the worst case (a random string), we can do no better than give some header context and then reproduce the string verbatim. In other cases we may spot a pattern and do much better: an orderly string such as abab . . . repeated n times compresses to a very short message describing the repetition. In intermediate cases, in all codes we practically observe, there is some redundancy that yields a degree of compressibility.
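A quick way to see the ordered/random/redundant distinction at a console is to use a general-purpose compressor as a rough stand-in for Kolmogorov complexity (which is uncomputable in general, so this is only a sketch):

```python
import os
import zlib

ordered = b"ab" * 500                  # orderly: abab... repeated
random_ = os.urandom(1000)             # random: typically incompressible
english = (b"the quick brown fox jumps over the lazy dog " * 23)[:1000]

for name, s in [("ordered", ordered), ("random", random_), ("english", english)]:
    c = len(zlib.compress(s, 9))
    print(f"{name:8s} raw={len(s):4d} compressed={c:4d} ratio={c / len(s):.2f}")
# Typically: the ordered string crushes down to a handful of bytes, the
# random bytes barely compress at all, and redundant English lands in
# between -- mirroring the ordered/random/functional-string distinction.
```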

So, as Trevors and Abel were able to visualise a decade ago in one of the sleeper classics among the peer-reviewed, published papers of design theory, we may distinguish random, ordered and functionally specific descriptive strings:

osc_rsc_fsc

That is, we may see how islands of function emerge in an abstract space of possible sequences, in which compressibility trades off against order, and specific function in an algorithmic (or more broadly informational) context picks out narrow zones. Of course, functionality is readily observed in relevant cases: it works, or it fails, as any software debugger or hardware troubleshooter can tell you. Such islands may also be visualised in another way, one that allows us to see how this sharp constraint on configurations required for interactive function enables us to detect the presence of design as best explanation of FSCO/I:

csi_defn

Obviously, as the infographic just above shows, beyond a certain level of complexity the atomic and temporal resources of our solar system or the observed cosmos would be fruitlessly overwhelmed by the scope of the space of possible descriptive strings, if a search for islands of function were to be carried out on the approach of blind chance and/or mechanical necessity. We therefore now arrive at a practical process for operationally detecting design on its empirical signs — one that is independent of debates over the visibility or otherwise of designers (but requires us to be willing to accept that we exemplify the capabilities and characteristics of designers without exhausting the list of in-principle possible designers):

explan_filter

Further, we may introduce relevant cases and a quantification:

fscoi_facts

That is, we may now introduce a metric model that summarises the above flowchart:

Chi_500 = I*S – 500, bits beyond the solar system search threshold . . . Eqn 1

What this tells us is that if we recognise a case of FSCO/I beyond 500 bits (or, if the observed cosmos is the more relevant scope, 1,000 bits), then the config-space search challenge above becomes insurmountable for blind chance and mechanical necessity. The only actually empirically warranted, adequate causal explanation for such cases is design — intelligently directed configuration. And, as shown, this extends to specific cases in the world of life, extending a 2007 listing of cases of FSCO/I by Durston et al in the literature.
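As a worked sketch of Eqn 1: S is a dummy variable, 1 where functional specificity is observed and 0 otherwise, and I is information in functionally specific bits (fits). The fits values below are the ones commonly quoted from Durston et al. 2007 — treat them as assumptions to be checked against their Table 1:

```python
def chi_500(I: float, S: int) -> float:
    """Eqn 1: Chi_500 = I*S - 500, bits beyond the solar-system
    search threshold. S = 1 if observed functional specificity,
    else 0; I = information in bits (fits)."""
    return I * S - 500

# Illustrative protein-family values, as commonly quoted from Durston
# et al. 2007 (an assumption -- verify against their Table 1):
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: {chi_500(fits, S=1):+.0f} bits beyond the threshold")
# RecA +332, SecY +188, Corona S2 +785: all positive on this metric,
# i.e. past the 500-bit threshold.
```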

To see how this works, we may try the thought exercise of turning our observed solar system into a set of 10^57 atoms regarded as observers, assigning to each a tray of 500 coins. Flip every 10^-14 s or so, and observe, doing so for 10^17 s, a reasonable lifespan for the observed cosmos:

sol_coin_flipr

The resulting needle-in-haystack blind search challenge is comparable to a search that samples a one-straw-sized zone in a cubical haystack comparable in thickness to our galaxy. That is, we here apply a blind chance and mechanical necessity driven dynamic-stochastic search to a case of a general system model,

gen_sys_proc_model

. . . and find it to be practically insuperable.
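The arithmetic behind that verdict can be checked in a few lines, using the figures given above (10^57 atoms, a flip every 10^-14 s, 10^17 s of observation):

```python
from math import log10

atoms       = 1e57   # atoms in the solar system, per the text above
flips_per_s = 1e14   # one flip/observation per ~10^-14 s
seconds     = 1e17   # duration, per the text above

samples      = atoms * flips_per_s * seconds   # ~1e88 observations
config_space = 2.0 ** 500                      # ~3.27e150 possible trays

print(f"samples  ~ 10^{log10(samples):.0f}")
print(f"space    ~ 10^{log10(config_space):.0f}")
print(f"fraction ~ 10^{log10(samples) - log10(config_space):.0f}")
# fraction ~ 10^-63: of the order of sampling one straw from a cubical
# haystack as thick as a galaxy, as the text's comparison has it.
```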

By contrast, intelligent designers routinely produce text strings of 72 ASCII characters (at 7 bits per character, just over 500 bits) in recognisable, context-responsive English and the like.

[U/D Feb 5th:] I forgot to add: the integration of a von Neumann self-replication facility requires a significant increment in FSCO/I, which may be represented:

jvn_self_replicator

Following von Neumann generally, such a machine uses . . .

(i) an underlying storable code to record the required information to create not only
(a) the primary functional machine [[here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also
(b) the self-replicating facility; and, that
(c) can express step by step finite procedures for using the facility; 
 
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with   
 
(iii) a tape reader [called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling:   
 
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by
 
(v) either:   
 
(1) a pre-existing reservoir of required parts and energy sources, or
   
(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
Here, Mignea’s 2012 discussion [cf. slide show here and presentation here] of a minimal self-replicating cellular form will also be relevant, involving duplication and arrangement, then separation into daughter automata. This requires stored algorithmic procedures, descriptions sufficient to construct components, means to execute instructions, materials handling, controlled energy flows, waste disposal and more:
self_replication_mignea

This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.
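As a software aside that may help fix ideas: the most stripped-down analogue of von Neumann’s dual-use blueprint is a quine, a program that uses one stored string both as template (instructions) and as data (content to copy), so that its output is its own source. A minimal Python sketch (the comments, of course, sit outside the replicated text):

```python
# von Neumann's dual use of the blueprint, in miniature: the string s is
# the "tape", consumed once as template and once as copied data.
s = 's = %r\nprint(s %% s)'
print(s % s)
# Output reproduces the two working lines above:
#   s = 's = %r\nprint(s %% s)'
#   print(s % s)
```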

Immediately, we are looking at islands of organised function for both the machinery and the information, in the wider sea of possible (but mostly non-functional) configurations. In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation. So, once the set of possible configurations is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.

And ever since Paley spoke of the thought exercise of a watch that replicated itself in the course of its movement, it has been pointed out that such a jump in FSCO/I points to yet higher and more perfect art as its credible cause.

It bears noting, then, that the only actually observed source of FSCO/I is design.

That is, we see here the vera causa test in action, that when we set out to explain observed traces from the unobservable deep past of origins, we should apply in our explanations only such factors as we have observed to be causally adequate to such effects. The simple application of this principle to the FSCO/I in life forms immediately raises the question of design as causal explanation.

A good step to help us see why is to consult Leslie Orgel in a pivotal 1973 observation:

. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).]  One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes.

[The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course,

a –> that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one that has led to the mutual ruin, documented by Shapiro and Orgel, of the metabolism-first and genes-first schools of thought, cf here.

b –> Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes, and Menuge would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. Finally,

c –> Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W; for biological systems, these are functional islands. That puts up serious questions for the origin of dozens of body plans, reasonably requiring some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken’s remarks a few years later, as cited just below, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]

. . . and J S Wicken in a 1979 remark:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’[[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

. . . then also this from Sir Fred Hoyle:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ –> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

 Why then, the resistance to such an inference?

AR gives us a clue:

The more important distinction should be made between material purposeful agents about which I cannot imagine we could disagree (aforesaid humans, other primates, etc) and immaterial agents for which we have no evidence or indicia (LOL) . . .

That is, there is a perception that to make a design inference on the origin of life or of body plans, based on the observed cause of FSCO/I, is to abandon science for religious superstition. Regardless of the strong insistence of design thinkers, from the inception of the school of thought as a movement, that inference to design on the world of life is inference to ART as causal process (in contrast to blind chance and mechanical necessity), as opposed to inference to the supernatural. And underneath lurks the problem of a priori imposed Lewontinian evolutionary materialism, as was notoriously stated in a review of Sagan’s A Demon Haunted World:

demon_haunted

. . . the problem is to get them [hoi polloi] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . .

[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. In case you imagine this is “quote-mined” I suggest you read the fuller annotated cite here.]


A priori Evolutionary Materialism has been dressed up in the lab coat and many have thus been led to imagine that to draw an inference that just might open the door a crack to that barbaric Bronze Age sky-god myth — as they have been indoctrinated to think about God (in gross error, start here) — is to abandon science for chaos.

Philip Johnson’s reply, rebuttal and rebuke was well merited:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
Darwin-ToL-full-size-copy
Tree of Life model, per Smithsonian Museum; note the root, OOL

And so, our answer to AR must first reflect BA’s: Craig Venter et al positively demonstrate that intelligent design and/or modification of cell based life forms is feasible, effective and an actual cause of observable information in life forms. To date, by contrast — after 150 years of trying — the observational base for bio-functional complex, specific information beyond 500 – 1,000 bits originating by blind chance and mechanical necessity is ZERO.

So, straight induction trumps ideological speculation, per the vera causa test.

That is, at minimum, design sits at the explanatory table regarding origin of life and origin of body plans, as of inductive right.

And, we may add that by highlighting the case for the origin of the living cell, this applies from the root on up and should shift our evaluation of the reasonableness of design as an alternative for major, information-rich features of life-forms, including our own. Particularly as regards our being equipped for language.

Going beyond, we note that we observe intelligence in action, but have no good reason to confine it to embodied forms. Not least, because blindly mechanical, GIGO-limited computation such as in a ball and disk integrator:

thomson_integrator

. . . or a digital circuit based computer:

mpu_model

. . . or even a neural network:

A neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO principle

. . . is signal processing based on a dynamic-stochastic system; it simply is not equal to insightful, self-aware, responsibly free rational contemplation, reasoning, warranting, knowing and linked imaginative creativity. Indeed, it is the gap between these two things that is responsible for the intractability of the so-called Hard Problem of Consciousness, as can be seen from, say, Carter’s formulation, which insists on the reduction:
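To see why a neural network is, as the caption above says, a weighted-sum gate array and nothing more, here is a single node in a few lines of Python (a sketch, not a claim about brains):

```python
import math

def neuron(inputs, weights, bias):
    """One node of a neural network: a weighted sum squashed by a
    logistic function. Purely mechanical signal processing -- garbage
    weights in, garbage out (GIGO)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Stack such nodes into layers and you have a network; nothing in the
# arithmetic changes character.
print(neuron([1.0, 0.5], [0.8, -0.3], bias=0.1))  # -> ~0.679
```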

The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. It is contrasted with the “easy problems” of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they “persist even when the performance of all the relevant functions is explained.”

Notice, the embedded a priori materialism.

Some 2,350 years past, Plato spotlighted the fatal foundational flaw in his The Laws, Bk X, drawing an inference to cosmological design:

Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change?

Cle. Impossible.

Ath. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

[[ . . . . ]

Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound-how should we describe it?

Cle. You mean to ask whether we should call such a self-moving power life?

Ath. I do.

Cle. Certainly we should. 

Ath. And when we see soul in anything, must we not do the same-must we not admit that this is life? [[ . . . . ]

Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things? 

Cle. Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things.

Ath. And is not that motion which is produced in another, by reason of another, but never has any self-moving power at all, being in truth the change of an inanimate body, to be reckoned second, or by any lower number which you may prefer?

Cle. Exactly.
Ath. Then we are right, and speak the most perfect and absolute truth, when we say that the soul is prior to the body, and that the body is second and comes afterwards, and is born to obey the soul, which is the ruler?
[ . . . . ]
Ath. If, my friend, we say that the whole path and movement of heaven, and of all that is therein, is by nature akin to the movement and revolution and calculation of mind, and proceeds by kindred laws, then, as is plain, we must say that the best soul takes care of the world and guides it along the good path. [[Plato here explicitly sets up an inference to design (by a good soul) from the intelligible order of the cosmos.]

In effect, the key problem is that in our time, many have become wedded to an ideology that attempts to get North by insistently heading due West.

Mission impossible.

Instead, let us let the chips lie where they fly as we carry out an inductive analysis.

Patently, FSCO/I is only known to come about by intelligently directed — thus purposeful — configuration. The islands of function in config spaces and needle in haystack search challenge easily explain why, on grounds remarkably similar to those that give the statistical underpinnings of the second law of thermodynamics.

Further, while we exemplify design and know that in our case intelligence is normally coupled to brain operation, we have no good reason to infer that it is merely a result of the blindly mechanical computation of the neural network substrates in our heads. Indeed, we have reason to believe that blind, GIGO-limited mechanisms driven by forces of chance and necessity are categorically different from our familiar responsible freedom. (And it is noteworthy that those who champion the materialist view often seek to undermine responsible freedom to think, reason, warrant, decide and act.)

To all such, we must contrast the frank declaration of evolutionary theorist J B S Haldane:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [[“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. (Highlight and emphases added.)]

 And so, when we come to something like the origin of a fine tuned cosmos fitted for C-Chemistry, aqueous medium, code and algorithm using, cell-based life, we should at least be willing to seriously consider Sir Fred Hoyle’s point:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12 C to the 7.12 MeV level in 16 O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.  Emphasis added.]

As he also noted:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [[“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

That is, we should at minimum be willing to ponder seriously the possibility of creative mind beyond the cosmos, beyond matter, as root cause of what we see. If, we are willing to allow FSCO/I to speak for itself as a reliable index of design. Even, through a multiverse speculation.

For, as John Leslie classically noted:

One striking thing about the fine tuning is that a force strength or a particle mass often appears to require accurate tuning for several reasons at once. Look at electromagnetism. Electromagnetism seems to require tuning for there to be any clear-cut distinction between matter and radiation; for stars to burn neither too fast nor too slowly for life’s requirements; for protons to be stable; for complex chemistry to be possible; for chemical changes not to be extremely sluggish; and for carbon synthesis inside stars (carbon being quite probably crucial to life). Universes all obeying the same fundamental laws could still differ in the strengths of their physical forces, as was explained earlier, and random variations in electromagnetism from universe to universe might then ensure that it took on any particular strength sooner or later. Yet how could they possibly account for the fact that the same one strength satisfied many potentially conflicting requirements, each of them a requirement for impressively accurate tuning?

. . .  [.]  . . . the need for such explanations does not depend on any estimate of how many universes would be observer-permitting, out of the entire field of possible universes. Claiming that our universe is ‘fine tuned for observers’, we base our claim on how life’s evolution would apparently have been rendered utterly impossible by comparatively minor alterations in physical force strengths, elementary particle masses and so forth. There is no need for us to ask whether very great alterations in these affairs would have rendered it fully possible once more, let alone whether physical worlds conforming to very different laws could have been observer-permitting without being in any way fine tuned. Here it can be useful to think of a fly on a wall, surrounded by an empty region. A bullet hits the fly. Two explanations suggest themselves. Perhaps many bullets are hitting the wall or perhaps a marksman fired the bullet. There is no need to ask whether distant areas of the wall, or other quite different walls, are covered with flies so that more or less any bullet striking there would have hit one. The important point is that the local area contains just the one fly. [Our Place in the Cosmos, 1998 (courtesy Wayback Machine) Emphases added.]

 In short, our observed cosmos sits at a locally deeply isolated, functionally specific, complex configuration of underlying physics and cosmology that enable the sort of life forms we see. That needs to be explained adequately, even as for a lone fly on a patch of wall swatted by a bullet.

And, if we are willing to consider it, that strongly points to a marksman with the right equipment.

Even, if that may be a mind beyond the material, inherently contingent cosmos we observe.

Even, if . . . END

165 Replies to “Functionally Specific, Complex Organisation and Associated Information (FSCO/I) is real and relevant”

  1. kairosfocus says:

    As long promised on FSCO/I, courtesy insomnia power . . .

  2. Andre says:

    KF

    Thank you for the OP. As an engineer, all I’m asking is: give me observable and testable evidence for this and I will absolutely give up my theistic worldview. Reason and logic however compel me to hold onto that view, precisely because I know as a matter of fact that such systems are not capable of designing or building themselves. It contradicts everything in the observable universe to assume that it can. So when the materialist says it can, without any evidence for such a claim, how do they accept it as truth? Are we not supposed to love truth?

  3. kairosfocus says:

    Andre,

    while I understand your sentiment, theism as a worldview does not stand or fall with the design inference, especially on the world of life. That is a point objectors — over-enamoured with scientism and empiricism — need to realise.

    Many people realise there is a God for the simple reason that they have met him in life-transforming power.

    Millions.

    Similarly, one realises that conscience shines clearly enough to show us a dimension of reality beyond the empirical that is as real as anything else; that means, if we have rights and duties, they have to have a foundation. Ought must stand on IS, and the IS has to be at world-foundation level, post Hume.

    There is precisely one serious candidate, the inherently good Creator God, a necessary and maximally great being.

    (And, if one argues instead — as many materialists do — that conscience is illusory, that would bring conscious mindedness under the influence of general delusion, which would undermine rationality and responsible freedom. Indeed, it ends in self referential incoherence.)

    Likewise, the sheer contingency of the cosmos cries out for a necessary being to ground it. One, again, sufficient to be a basis for morality.

    In that context, the beauty we see in the cosmos at large cries out for an Artist behind it.

    Not least, the life, death and resurrection of Jesus in light of centuries-old prophecies in the Hebraic scriptures, and the impact on the 500 witnesses at the core of the Christian movement . . . as well as its impact in the face of all odds, must be explained.

    Morison’s challenge still rings out:

    [N]ow the peculiar thing . . . is that not only did [belief in Jesus’ resurrection as in part testified to by the empty tomb] spread to every member of the Party of Jesus of whom we have any trace, but they brought it to Jerusalem and carried it with inconceivable audacity into the most keenly intellectual centre of Judaea . . . and in the face of every impediment which a brilliant and highly organised camarilla could devise. And they won. Within twenty years the claim of these Galilean peasants had disrupted the Jewish Church and impressed itself upon every town on the Eastern littoral of the Mediterranean from Caesarea to Troas. In less than fifty years it had begun to threaten the peace of the Roman Empire . . . .

    Why did it win? . . . .

    We have to account not only for the enthusiasm of its friends, but for the paralysis of its enemies and for the ever growing stream of new converts . . . When we remember what certain highly placed personages would almost certainly have given to have strangled this movement at its birth but could not – how one desperate expedient after another was adopted to silence the apostles, until that veritable bow of Ulysses, the Great Persecution, was tried and broke in pieces in their hands [the chief persecutor became the leading C1 Missionary/Apostle!] – we begin to realise that behind all these subterfuges and makeshifts there must have been a silent, unanswerable fact. [Who Moved the Stone, (Faber, 1971; nb. orig. pub. 1930), pp. 114 – 115.]

    All of this brings out one of the deepest ironies of the debates over design.

    While evolutionary materialists have staked all on eliminating the possibility of design in the origin of the cosmos and of life and its forms including us, also mind, theists are not anywhere so much at hazard.

    For instance, I stand up for the power of the design inference, not because I fear that absent such, the case for God collapses, but because as one trained in the empirical sciences and with a little knowledge of information, organisation and its sources as well as of inductive logic, and as one who seeks to learn the truth about the world through evidence and reasoning, the case is patently a good one.

    A good one that is attacked and driven out beyond all proportion because entrenched objectors think they have everything at stake on this matter.

    But, ironically, the whole evolutionary materialism project is fatally flawed from the foundations.

    For argument, spot them a quantum foam multiverse or whatever [and no, that is not a genuine nothing, non-being . . . ], and however many terrestrial planets they want. Spot them OOL and equally blind chance and necessity origin of body plans, including our own.

    Now, watch the self-referential incoherence emerge:

    . . . . a: Evolutionary materialism argues that the cosmos is the product of chance interactions of matter and energy, within the constraint of the laws of nature; from hydrogen to humans by undirected chance and necessity.

    b: Therefore, all phenomena in the universe, without residue, are determined by the working of purposeless laws of chance and/or mechanical necessity acting on material objects, under the direct or indirect control of happenstance initial circumstances.

    (This is physicalism. This view covers both the forms where (a) the mind and the brain are seen as one and the same thing, and those where (b) somehow mind emerges from and/or “supervenes” on brain, perhaps as a result of sophisticated and complex software looping. The key point, though, is as already noted: physical causal closure — the phenomena that play out across time, without residue, are in principle deducible or at least explainable up to various random statistical distributions and/or mechanical laws, from prior physical states. Such physical causal closure, clearly, implicitly discounts or even dismisses the causal effect of concept formation and reasoning then responsibly deciding, in favour of specifically physical interactions in the brain-body control loop; indeed, some mock the idea of — in their view — an “obviously” imaginary “ghost” in the meat-machine. [[There is also some evidence from simulation exercises, that accuracy of even sensory perceptions may lose out to utilitarian but inaccurate ones in an evolutionary competition. “It works” does not warrant the inference to “it is true.”] )

    c: But human thought, clearly a phenomenon in the universe, must now fit into this meat-machine picture. So, we rapidly arrive at Crick’s claim in his The Astonishing Hypothesis (1994): what we subjectively experience as “thoughts,” “reasoning” and “conclusions” can only be understood materialistically as the unintended by-products of the blind natural forces which cause and control the electro-chemical events going on in neural networks in our brains that (as the Smith Model illustrates) serve as cybernetic controllers for our bodies.

    d: These underlying driving forces are viewed as being ultimately physical, but are taken to be partly mediated through a complex pattern of genetic inheritance shaped by forces of selection [[“nature”] and psycho-social conditioning [[“nurture”], within the framework of human culture [[i.e. socio-cultural conditioning and resulting/associated relativism]. And, remember, the focal issue to such minds — notice, this is a conceptual analysis made and believed by the materialists! — is the physical causal chains in a control loop, not the internalised “mouth-noises” that may somehow sit on them and come along for the ride.

    (Save, insofar as such “mouth noises” somehow associate with or become embedded as physically instantiated signals or maybe codes in such a loop. [[How signals, languages and codes originate and function in systems in our observation of such origin — i.e by design — tends to be pushed to the back-burner and conveniently forgotten. So does the point that a signal or code takes its significance precisely from being an intelligently focused on, observed or chosen and significant alternative from a range of possibilities that then can guide decisive action.])

    e: For instance, Marxists commonly derided opponents for their “bourgeois class conditioning” — but what of the effect of their own class origins? Freudians frequently dismissed qualms about their loosening of moral restraints by alluding to the impact of strict potty training on their “up-tight” critics — but doesn’t this cut both ways? Should we not ask a Behaviourist whether s/he is little more than yet another operantly conditioned rat trapped in the cosmic maze? And — as we saw above — would the writings of a Crick be any more than the firing of neurons in networks in his own brain?

    f: For further instance, we may take the favourite whipping-boy of materialists: religion. Notoriously, they often hold that belief in God is not merely cognitive, conceptual error, but delusion. Borderline lunacy, in short. But, if such a patent “delusion” is so utterly widespread, even among the highly educated, then it “must” — by the principles of evolution — somehow be adaptive to survival, whether in nature or in society. And so, this would be a major illustration of the unreliability of our conceptual reasoning ability, on the assumption of evolutionary materialism.

    g: Turning the materialist dismissal of theism around, evolutionary materialism itself would be in the same leaky boat. For, the sauce for the goose is notoriously just as good a sauce for the gander, too.

    h: That is, on its own premises [[and following Dawkins in A Devil’s Chaplain, 2004, p. 46], the cause of the belief system of evolutionary materialism, “must” also be reducible to forces of blind chance and mechanical necessity that are sufficiently adaptive to spread this “meme” in populations of jumped-up apes from the savannahs of East Africa scrambling for survival in a Malthusian world of struggle for existence. Reppert brings the underlying point sharply home, in commenting on the “internalised mouth-noise signals riding on the physical cause-effect chain in a cybernetic loop” view:

    . . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions. [[Emphases added. Also cf. Reppert’s summary of Barefoot’s argument here.]

    i: The famous geneticist and evolutionary biologist (as well as Socialist) J. B. S. Haldane made much the same point in a well-known 1932 remark:

    “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [[“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. (Highlight and emphases added.)]

    j: Therefore, though materialists will often try to pointedly ignore or angrily brush aside the issue, we may freely argue: if such evolutionary materialism is true, then (i) our consciousness, (ii) the “thoughts” we have, (iii) the conceptualised beliefs we hold, (iv) the reasonings we attempt based on such and (v) the “conclusions” and “choices” (a.k.a. “decisions”) we reach — without residue — must be produced and controlled by blind forces of chance happenstance and mechanical necessity that are irrelevant to “mere” ill-defined abstractions such as: purpose or truth, or even logical validity.

    (NB: The conclusions of such “arguments” may still happen to be true, by astonishingly lucky coincidence — but we have no rational grounds for relying on the “reasoning” that has led us to feel that we have “proved” or “warranted” them. It seems that rationality itself has thus been undermined fatally on evolutionary materialistic premises. Including that of Crick et al. Through, self-reference leading to incoherence and utter inability to provide a cogent explanation of our commonplace, first-person experience of reasoning and rational warrant for beliefs, conclusions and chosen paths of action. Reduction to absurdity and explanatory failure in short.)

    k: And, if materialists then object: “But, we can always apply scientific tests, through observation, experiment and measurement,” then we must immediately note that — as the fate of Newtonian Dynamics between 1880 and 1930 shows — empirical support is not equivalent to establishing the truth of a scientific theory. For, at any time, one newly discovered countering fact can in principle overturn the hitherto most reliable of theories. (And as well, we must not lose sight of this: in science, one is relying on the legitimacy of the reasoning process to make the case that scientific evidence provides reasonable albeit provisional warrant for one’s beliefs etc. Scientific reasoning is not independent of reasoning.)

    l: Worse, in the case of origins science theories, we simply were not there to directly observe the facts of the remote past, so origins sciences are even more strongly controlled by assumptions and inferences than are operational scientific theories. Contrast the way that direct observations of falling apples and orbiting planets allow us to test our theories of gravity.

    m: Moreover, as Harvard biologist Richard Lewontin reminds us all in his infamous January 29, 1997 New York Review of Books article, “Billions and billions of demons,” it is now notorious that:

    . . . It is not that the methods and institutions of science somehow compel [[materialistic scientists] to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [[And if you have been led to imagine that the immediately following words justify the above, kindly cf. the more complete clip and notes here.]

    n: Such a priori assumptions of materialism are patently question-begging, mind-closing and fallacious.

    o: More important, to demonstrate that empirical tests provide empirical support to the materialists’ theories would require the use of the very process of reasoning and inference which they have discredited.

    p: Thus, evolutionary materialism arguably reduces reason itself to the status of illusion. But, as we have seen: immediately, that must include “Materialism.”

    q: In the end, it is thus quite hard to escape the conclusion that materialism is based on self-defeating, question-begging logic.

    r: So, while materialists — just like the rest of us — in practice routinely rely on the credibility of reasoning and despite all the confidence they may project, they at best struggle to warrant such a tacitly accepted credibility of mind and of concepts and reasoned out conclusions relative to the core claims of their worldview. (And, sadly: too often, they tend to pointedly ignore or rhetorically brush aside the issue.)

    Notwithstanding such sharp exchanges, through the Derek Smith model we have potentially fruitful frameworks of thought on which we can investigate the nature of mind and its interaction with the body and brain . . .

    Stumbled fatally coming out of the starting gate.

    KF

  4. Dionisio says:

    KF,

    Very insightful OP. Thank you.

  5. Barry Arrington says:

    Prediction: In response to this post our materialist friends will (1) be struck silent; (2) mock; (3) whine, especially about the number of words KF uses; or (4) scorn. They will most certainly not explain how ultra sophisticated nanotech machines can self-assemble. Nor will they explain how sophisticated algorithms arise spontaneously from nothing.

  6. gpuccio says:

    KF:

    Thank you for the very good summary. Among many other certainly interesting discussions, we may tend to forget sometimes that functionally specified complex information is the central point in ID theory. You are very good at reminding everyone here of that.

    I would like to suggest a very good example of multilevel functional complexity in biology, which is often overlooked. It is an old favourite of mine, the maturation of antibody affinity after the initial immunological response.

    Dionisio has recently linked an article about a very recent paper. The paper is not free, but I invite all those interested to look at the figures and legends, which can be viewed here:

    http://www.nature.com/nri/jour.....28_ft.html

    The interesting point is that the whole process has been defined as “darwinian”, while it is the best known example of functional protein engineering embedded in a complex biological system.

    In brief, the specific B cells which respond to the hapten (antigen) at the beginning of the process undergo a sequence of targeted mutations and specific selection, so that new cells with more efficient antibody DNA sequences can be selected and become memory cells or plasma cells.

    The whole process takes place in the Germinal Center of lymph nodes, and involves (at least):

    1) Specific B cells with a BCR (B cell receptor) which reacts to the external hapten.

    2) Specific T helper cells

    3) Antigen presenting cells (Follicular dendritic cell) which retain the original hapten (the external information) during the whole process, for specific intelligent selection of the results

    4) Specific, controlled somatic hypermutation of the Variable region of the Ig genes, implemented by the following molecules (at least):

    a) Activation-Induced (Cytidine) Deaminase (AID): a cytosine:guanine pair is directly mutated to a uracil:guanine mismatch.

    b) DNA mismatch repair proteins: the uracil bases are removed by the repair enzyme, uracil-DNA glycosylase.

    c) Error-prone DNA polymerases: they fill in the gap and create mutations.

    5) The mutated clones are then “measured” by interaction with the hapten presented by the Follicular DC. The process is probably repeated in multiple steps, although it could also happen in one step.

    6) New clones with reduced or lost affinity are directed to apoptosis.

    7) New clones with higher affinity are selected and sustained by specific T helper cells.

    In a few weeks, the process yields high affinity antibody producing B cells, in the form of plasma cells and memory cells.

    You have it all here: molecular complexity, high control, multiple cellular interactions, irreducible complexity in tons, spatial and temporal organization, extremely efficient engineering. The process is so delicate that errors in it are probably the cause of many human lymphomas.

    Now, that’s absolute evidence for Intelligent Design, if ever I saw it. 🙂

  7. 7
    kairosfocus says:

    BA: You missed the latest, they now cannot spot the points because of the “dayglow” multimedia elements. Including, I suppose, infographics that make key parts of the case in a nutshell. KF

    PS: The little movie is a nice extra, that tells the story of the actual designers.

  8. 8
    kairosfocus says:

    GP, quite a case. Why not use your posting powers and do a full, headlined post? KF

  9. 9
    kairosfocus says:

    D, Thanks. I also see GP is using something you posed. If you care to make a guest post, just drop me a line. KF

    PS: Objectors, that goes for you too, especially if you are stepping up to the plate to address the UD pro-darwinism essay challenge that still stands open after a couple of years . . . OOL and the tree of life i/l/o the same Smithsonian diagram above.

  10. 10
    gpuccio says:

    Barry:

    Easy prediction! 🙂

    I would hope for scorn at least, silence is so boring…

  11. 11
    Zachriel says:

    So what is the quantitative FSCO/I of the Ambassadeur 6500?

  12. 12
    kairosfocus says:

    GP, well there are over a dozen graphic elements and an embedded video. Dayglow! And, so many citations, too. KF

  13. 13
    gpuccio says:

    Zachriel:

    So, neither silence nor scorn! Just a question. 🙂

    OK, you propose some “natural” system which could generate the object by non design mechanisms and randomness, and then we can try to compute the FSCI according to the probability of the functional result. OK?

  14. 14
    kairosfocus says:

    Z, well past 1,000 bits (143 ASCII characters) just for the main gear [shaft top right, the hollowed out gear that holds the drag washer stack] . . . gears are astonishingly complex entities with seriously exacting specifications for alignments, orientations, tooth shapes and cuts etc. There are several dozen highly precise parts, and a highly precise pattern for their assembly. Where of course, protein strings are similarly highly specific and for a functional flagellum or an ATP synthase, quite a few parts have to be put together just right, or no go. Nodes and arcs y/n q-chains galore. KF

    PS: I’d love to see a good vid on self assembly if someone’s got one.
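    PPS: For concreteness, a quick back-of-envelope sketch in Python (the 7-bit ASCII figure and the 1,000-bit threshold are the ones used above):

        BITS_PER_ASCII_CHAR = 7    # standard 7-bit ASCII
        THRESHOLD_BITS = 1000      # observed-cosmos threshold used above

        # How many ASCII characters does 1,000 bits correspond to?
        chars = THRESHOLD_BITS / BITS_PER_ASCII_CHAR
        print(round(chars))        # -> 143, the figure quoted above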

  15. 15
    Zachriel says:

    kairosfocus: well past 1,000 bits (143 ASCII characters) just for the main gear

    So a quantitative value for FSCO/I of the Ambassadeur 6500 is not available? Have you tried the manufacturer specifications?

  16. 16
    gpuccio says:

    Zachriel:

    Strange tactics from an intelligent person like you.

    In front of a functional complexity so big that we cannot even conceive of a system which could reasonably have any chance of assembling the object, your only argument is that we cannot compute a precise value?

    Oh, I understand. Not silence, not scorn. Just mock.

    Barry was right, after all. Obviously… 🙂

  17. 17
    Zachriel says:

    gpuccio: In front of a functional complexity so big that we cannot even conceive of a system which could reasonably have any chance of assembling the object, your only argument is that we cannot compute a precise value?

    We didn’t make an argument. So is it not possible to provide a reasonably precise quantification of the FSCO/I of the Ambassadeur 6500?

  18. 18
    Joe says:

    So is it not possible to provide a reasonably precise quantification of the FSCO/I of the Ambassadeur 6500?

    It could be possible. However, the point of any CSI analysis is to show whether it is present or not; a precise number is not important.

  19. 19
    Me_Think says:

    gpuccio @ 16
    Seriously, when will there be a theory to show how ID agents work to solve high FSCO/I problems? You need not show the mechanism. At least let us know the form of the ID agent.
    It would be helpful to know how the ID agent knows there is a high FSCO/I problem to be solved in the first place. When are you all going to take the first step towards an ID theory?

  20. 20
    gpuccio says:

    Me_Think:

    I don’t understand your point. Conscious agents solve problems of high functional complexity all the time. How? Because conscious faculties of cognition (experience of meaning) and purpose can implement a process which organizes knowledge and information, and is the true source of design events.

    We see that at work all the time in human design.

    What is your problem?

  21. 21
    Joe says:

    Me Think:

    Seriously, when will there be a theory to show how ID agents work to solve high FSCO/I problems?

    When will there be a theory of evolution? Your mechanisms have proven to be impotent. At least let us know the form of mechanism that can do the job required.

    When are you all going to take the first step towards an evolutionary theory?

    Acartia Bogart/ William Spearshake has told me there isn’t a theory of evolution so you need to get started.

  22. 22
    Joe says:

    gpuccio- Our opponents’ problems stem from the fact that they have nothing so they have to throw all the poop they can at ID and hope some of it sticks.

  23. 23
    gpuccio says:

    Zachriel:

    Beyond 1000 bits is a quantification. It’s more than enough to infer design.

    In the case at hand, it is certainly much more than 1000 bits.

    And you certainly understand that a threshold categorization is exactly what we need here.

  24. 24
    gpuccio says:

    Well, the confusion brigade is hard at work, finally!

  25. 25
    Me_Think says:

    Joe @ 21
    Yawn. Please answer the question.

  26. 26
    Me_Think says:

    gpuccio @ 20

    I don’t understand your point….
    We see that at work all the time in human design.
    What is your problem?

    I am sure you understand the problem well. How does the ID agent know at which point to come into the process? From where does the ID agent fly in?

  27. 27
    Joe says:

    Me Think- your question proves your ignorance. Why does a theory of Intelligent Design need “to show how ID agents work to solve high FSCO/I problems”? Are you really that ignorant that you don’t understand that Intelligent Design is about the DESIGN and not the intelligent designer(s)? So why do you think that your ignorance is an argument?

  28. 28
    Me_Think says:

    Joe @ 27

    Why does a theory of Intelligent Design need “to show how ID agents work to solve high FSCO/I problems”? Are you really that ignorant that you don’t understand that Intelligent Design is about the DESIGN and not the intelligent designer(s)?

    ID has never shown that it can detect design. Neither FSCO/I nor CSI has been used in real life or even in a peer reviewed paper to detect design. What ID boils down to is just a list of complaints against ToE.

  29. 29
    Joe says:

    Me Think:

    ID has never shown that it can detect design.

    Yes, we have. And guess what? Other venues use basically the same techniques to detect design and they are very successful. OTOH your position still lacks a methodology.

    Neither FSCO/I nor CSI has been used in real life or even in a peer reviewed paper to detect design.

    Yes, they have. OTOH there isn’t anything in peer-review that supports the claims of your position.

    What ID boils down to is just a list of complaints against ToE.

    There isn’t any ToE. There aren’t any testable hypotheses. There isn’t any model.

    Again your ignorance betrays you.

  30. 30
    Dionisio says:

    gpuccio,

    Please, be nice to our friendly interlocutor. See how confused he appears to be in this other discussion:

    http://www.uncommondescent.com.....ent-546186

  31. 31
    Paul Giem says:

    Me_Think (#28),

    ID has never shown that it can detect design.

    You are forgetting this post. You can detect design yourself, at least under certain circumstances.

    PS. You posted there.

  32. 32
    gpuccio says:

    Dionisio:

    Well, as you know, when it comes to consciousness and AI, Me_Think and I have harmoniously decided that it’s best not to keep discussing them with each other. It’s beautiful to see such agreement between people.

    Regarding “simpler” questions, like design detection, we will see… 🙂

  33. 33
    gpuccio says:

    Me_Think:

    How does the ID agent know at which point to come into the process? From where does the ID agent fly in?

    Obviously, the ID agent has a plan (he is a conscious, purposeful agent, by definition, otherwise he would not be able to design). So, I would say that he comes into the process according to his plan, and probably to the constraints inherent in his plan and in his resources.

    From where? Strange question. We only know that he is a conscious intelligent purposeful being (or more than one). He interacts with matter, but there is no need for him to be a physical being, as I have said many times. Indeed, it is much more likely that we are dealing with a non physical conscious being. So, asking “where” is a little out of context here.

  34. 34
    gpuccio says:

    Me_Think:

    ID has never shown that it can detect design. Neither FSCO/I nor CSI has been used in real life or even in a peer reviewed paper to detect design. What ID boils down to is just a list of complaints against ToE.

    One thing at a time.

    ID has shown many times that it can detect design. Paul Giem has given an example. My post about the English language here:

    http://www.uncommondescent.com.....-language/

    is another example, with a detailed quantitative analysis.

    In biology, I have inferred design for about 30 of the protein families analyzed in the Durston paper:

    http://www.tbiomed.com/content/4/1/47

    according to a threshold of 150 bits for biological objects.

    On my own, I have inferred design here for ATP synthase and for histone H3, giving the details of my reasoning.

    And so on.

    This is real life, I hope. And the Durston paper is a peer reviewed paper, like the papers by Axe and Gauger and Behe and so on.

    Not much, I know, but what would you expect from an Academy which has already decided that any reference to ID must be banned from science? You get what you want.

    An integral part of ID is the criticism of the conventional explanation of functional complexity in biology, that is neo darwinism in all its forms. That is absolutely true. But the main principle of ID is the recognition that functional complexity is a reliable marker of design.

    So, ID theory “boils down” to both developing a theory of design inference from functional complexity and applying it to biological objects, and criticizing and falsifying the silly theory of neo darwinism (which is, I must say, rather easy). 🙂

  35. 35
    gpuccio says:

    To all:

    Any comments about some possible neo darwinian explanation of the antibody affinity maturation system?

    Come on, don’t be afraid. I know that neo darwinists’ imagination is second to none.

  36. 36
    kairosfocus says:

    Z, we can make some estimates, but the manufacturer has the engineering drawings [I assume by now digitised, that was a bit of an industry for a time], and I am not about to go do a major reverse engineering effort just to compute a number when the operative element is a threshold that we can easily meet, just on the main gear. To give you an idea, it is now a common move to replace original drag washers with Carbon Tex, but the risk is to strip the gears if you push the drag down too tight, as the gears were strength calibrated for the original drag washers. KF

  37. 37
    Joe says:

    gpuccio- Ready yourself for arguments railing against the use of “neo darwinian” and “neo darwinists”- or a barrage of obfuscation pertaining to unguided, intelligence, consciousness, goal-oriented, searches, etc., etc., etc.

    The neo darwinian explanation for anything is “We know that it happened and we just have to figure out how, but that is unimportant because we know that it happened”

    The visual system – “The original populations obviously didn’t have them, they exist now, and because of that some blind watchmaker process didit”

    “Humans didn’t exist at the beginning of the universe, they exist now, and therefore (because of that) some blind watchmaker process didit”

    “Now prove us wrong!”

  38. 38
    kairosfocus says:

    MT, 28:

    This deserves to be framed:

    ID has never shown that it can detect design. Neither FSCO/I nor CSI has been used in real life or even in a peer reviewed paper to detect design.

    False on all counts:

    1 –> Design detection is a routine matter and it uses intuitive or quantitative estimates that boil down to recognising FSCO/I. For instance, consider how you recognise that a ring ditch is “archaeology” not “natural.” Which is a hint at one routine real-world application.

    2 –> The very example before you is a case of the routine demonstration of the reliability of FSCO/I as an index of design.

    3 –> As for use in peer reviewed papers the ones that strike me just now are the ones in which the evolutionary algorithms were presented as showing specified complexity arising by blind chance and mechanical necessity, only to be exposed case by case as smuggling in active information.

    4 –> The Durston et al paper of 2007 cited above did not explicitly state the verdict, but simply to show reasonable I-values well beyond universal plausibility bounds was enough. As in, a word to the wise is sufficient.

    So, what we see here again is the asymmetry of commitment. I have no worldview level need for design to be detectable (and freely acknowledge that in many cases it isn’t, we are looking at those where it is). But because of a heavy commitment to back a horse . . . a priori evolutionary materialism or its fellow travellers . . . that stumbled fatally in the starting gates (it is patently self referentially incoherent), reasonable methods and cases — trillions of cases show the empirical reliability of FSCO/I and other indicia of design — must be dismissed.

    And BTW, being peer reviewed is not an adequate criterion of warrant, it is an appeal to authority of an ideologised collective magisterium. Antecedent to such is the point that no authority is better than his facts, assumptions and reasoning.

    For instance, the text of every post in this thread is an example of a coded string using ASCII code or the like, reasonably conforming to the specification of English language conventions. Once such text exceeds 72 characters, the design inference explanatory filter will rule design. There are many instances above.
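    (As a quick check of that 72-character figure, here is a minimal sketch assuming 7-bit ASCII and the 500-bit solar-system threshold used above:)

        # 72 seven-bit ASCII characters just clear the 500-bit threshold
        chars = 72
        bits = chars * 7
        print(bits, bits > 500)    # -> 504 True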

    Would you care to argue and show that any of them are produced by blind chance and mechanical necessity that somehow got loose on the Internet?

    Actually we know the answer.

    The objectors are intuitively, routinely inferring to design.

    On the manifestation of FSCO/I.

    Why then is there a refusal to see the code in D/RNA?

    Because, of an a priori worldview commitment that locks out the willingness to see the evidence that points to design.

    And as for our good old friend from Sweden (these days more likely China or maybe Taiwan I hear) this one is a patent example.

    KF

  39. 39
    kairosfocus says:

    Dr Giem, great to see you popping up. KF

  40. 40
    kairosfocus says:

    Joe,

    there is no need to prove us wrong on design detection: evolutionary materialism and fellow travellers stumbled fatally coming out of the starting gates through self-referential incoherence long, long ago.

    As to the notion that blind chance and mechanical necessity can and do account for FSCO/I, the matter is simple: vera causa.

    In discussing remote things we cannot directly observe, we ought only to use explanations that have been shown to achieve the relevant effects here and now.

    The routine fate of attempts to do such should serve notice.

    As should the needle in haystack search challenge FSCO/I poses to such.

    KF

  41. 41
    kairosfocus says:

    All, useful discussion. Busy just now. KF

    PS: For those looking for grand theories on designer methods etc, you may find it useful to examine TRIZ, e.g.:

    https://www.aitriz.org/triz

  42. 42
    bornagain77 says:

    per Dr. Giem at 31:

    Design Detection 1-24-2015 by Paul Giem
    https://www.youtube.com/watch?v=ZO_Cp00kJU8

    and Here is the latest video from the Biological Information series of lectures by Dr. Giem:

    Biological Information – Mendel’s Accountant and Avida 1-31-2015 by Paul Giem
    https://www.youtube.com/watch?v=cGd0pznOh0A&list=PLHDSWJBW3DNUUhiC9VwPnhl-ymuObyTWJ&index=14

  43. 43
    Zachriel says:

    kairosfocus: I am not about to go do a major reverse engineering effort just to compute a number when the operative element is a threshold that we can easily meet.

    It was your own example.

  44. 44
    kairosfocus says:

    Z,

    yes, the reel is an example that shows how correctly arranged components form a network that exhibits interactive function.

    It also — per the exploded view diagram above — shows how the nodes-arcs pattern involved can be reduced to a structured string of Y/N q’s giving an information value.

    Such, further, is in the context of a whole industrial sector dominated by AutoCAD, which routinely provides engineering drawing files measured in bytes, clusters of eight bits.

    That, is not in serious question.

    Typical sizes for such files greatly exceed 1 kbit, which is 125 bytes.

    Yet further, just the main gear in the reel has a set of specifications tied to function that will easily exceed 125 bytes. If you doubt me, try to make one out of a lump of brass and see how well you do . . . unless you are a qualified machinist.

    So, no — just like archaeologists examining a putative artifact (as opposed to “natural”) — I do not need to carry out a major reverse engineering exercise to identify that an Abu Garcia 6500 C3 reel exemplifies FSCO/I.

    Likewise we reasonably know just from the requisites of its main gear, that it will easily exceed a threshold that is such that the only empirically plausible explanation will be design.

    And such is more than enough for our purposes.

    For instance by reducing 3-d functionally specific organisation to analysis on digital strings, we show that the sort of analysis commonly used to assess information that naturally occurs in strings (such as text in English or D/RNA coded strings) is WLOG.

    And, it is easy to work with information expressed in strings.
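    To illustrate, here is a toy sketch of that reduction; the part names and connections are hypothetical, chosen only to show how a nodes-and-arcs diagram serialises to a definite string of Y/N answers:

        # Toy assembly: which parts are joined to which?
        parts = ["frame", "spool", "main_gear", "pinion", "drag_stack"]
        arcs = {("frame", "spool"), ("spool", "main_gear"),
                ("main_gear", "pinion"), ("main_gear", "drag_stack")}

        # One yes/no question per possible connection.
        bits = []
        for i, a in enumerate(parts):
            for b in parts[i + 1:]:
                bits.append(1 if (a, b) in arcs or (b, a) in arcs else 0)

        print(bits)       # the structured string of Y/N answers
        print(len(bits))  # 10 possible arcs -> a 10-bit description here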

    The AutoCAD dwg files may be compressible, etc but that won’t make a material difference to the result.

    So, we see that the sort of discussion on strings that is so often derided or dismissed by objectors, is in fact pivotal.

    Let me clip Thaxton et al in TMLO ch 8, echoing Orgel and Wicken:

    1. [Class 1:] An ordered (periodic) and therefore specified arrangement:

    THE END THE END THE END THE END

    Example: Nylon, or a crystal . . . .

    2. [Class 2:] A complex (aperiodic) unspecified arrangement:

    AGDCBFE GBCAFED ACEDFBG

    Example: Random polymers (polypeptides).

    3. [Class 3:] A complex (aperiodic) specified arrangement:

    THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

    Example: DNA, protein.

    In short we can readily note a difference between order, typical randomness and functionally specific complex organisation when expressed in strings. In the OP, the Trevors-Abel illustration of their relationships is informative.

    Where, finally, for 500 bits the atomic resources of our solar system would be hopelessly overwhelmed by a blind search challenge, and for 1,000 bits, those of the observed cosmos.
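    (The rough arithmetic behind that claim, as a sketch using commonly cited order-of-magnitude estimates:)

        config_space = 2 ** 500      # ~3.3e150 possible configurations

        atoms = 10 ** 57             # rough atom count, solar system
        seconds = 10 ** 17           # rough age of the cosmos in seconds
        events_per_sec = 10 ** 45    # generous fast-event rate per atom

        samples = atoms * seconds * events_per_sec   # ~1e119 states sampled
        print(f"fraction searched: {samples / config_space:.1e}")  # ~3.1e-32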

    So, all that is really necessary is to have good reason to understand that such circumstances obtain. And the main gear alone is sufficient for that.

    So, the dismissive attempt collapses.

    KF

  45. 45
    Cross says:

    kairosfocus @ 45

    Great response, I’m betting Z will not be satisfied.

    We haven’t even got to the Intelligent Agent needed to put the reel together or do we put the pieces in a bait bucket and shake them around for long enough for it to self-assemble?

  46. 46
    Me_Think says:

    gpuccio @ 34

    Indeed, it is much more likely that we are dealing with a non physical conscious being. So, asking “where” is a little out of context here.

    Heh. The non-physical conscious being (just a thought – maybe he is made of dark matter with a dark energy brain) has to go from place to place to fix processes – right? So ‘from where’ is within context, but of course you can’t divulge the secrets.
    gpuccio @ 35

    ID has shown many times that it can detect design.

    No. Claiming xyz detects design in a blog and posting a few OPs on that same blog is not proof of the claim. There are thousands of websites and blogs claiming all kinds of things. Posting articles on their own blogs to prove their claims is inconsequential.

    ..And the Durston paper is a peer reviewed paper,

    His paper is about Functional Sequence Complexity which is measured in Fits. It is about calculating change in functional uncertainty from ground state.

    Not much, I know, but what would you expect from an Academy which has already decided that any reference to ID must be banned from science?

    No, the truth is even ID journals shy away from CSI and design detection. They prefer the ‘search landscape with steep hills which poor evolution can’t climb’ papers.

  47. 47
    Me_Think says:

    KF @ 38

    This deserves to be framed:

    Thanks.

    Design detection is a routine matter

    True.

    and it uses intuitive or quantitative estimates that boil down to recognizing FSCO/I.

    False. How many of your colleagues have claimed they calculate FSCO/I to detect design?
    The Durston et al paper (if you mean the paper that GP linked) is about Functional Sequence Complexity, which is measured in Fits. It is about calculating change in functional uncertainty from ground state, not design detection.
    The rest of the post shows you believe FSCO/I is nothing more than the intuition that something is designed. So why calculate FSCO/I at all?

  48. 48
    Cross says:

    Barry Arrington @ 5

    [prediction]”They will most certainly not explain how ultra sophisticated nanotech machines can self-assemble. Nor will they explain how sophisticated algorithms arise spontaneously from nothing.”

    Prediction confirmed.

  49. 49
    Me_Think says:

    Cross @ 49
    I think the ID agent pulled up a chair and worked out all the parameters of the nano machine on his Mac, converted it into an STL file and printed it with a bio 3D printer. What do you think?

  50. 50
    Cross says:

    Point (2) Mock, also confirmed, still no answer.

  51. 51
    Me_Think says:

    Cross @ 51,
    I gave an answer. What is yours?

  52. 52
    Cross says:

    Me_Think @ 50

    I can’t wait for the response when you publish your theory. Apple will be pleased: the computer that God uses!

  53. 53
    DATCG says:

    From Frontiers in Bioengineering and Biotechnology… an interesting look at why a Design heuristic is profitable for discovery and research. And why applying Design methodology helps in research.
    A systems engineering perspective on homeostasis and disease

    With our increasing understanding of life’s multi-scale trans-hierarchical architecture, it has been suggested that living systems share characteristics common to engineered systems and that there is much to be learned about biological complexity from engineered systems (Csete and Doyle, 2002; Doyle and Csete, 2011). This is not to say that biological systems are engineered systems: biological systems are clearly distinct and different by virtue of having resulted from evolution (obligatory denial of design) as opposed to design.

    However (now that we’ve bowed to Darwin, let’s move on), there are some similarities between their consequent organization and that of engineered systems that can provide useful insights (D’Onofrio and An, 2010). For instance, engineered systems can be perceived as coupled networks of interacting sub-systems, whose dynamics are constrained to tight requirements of robustness (to maintain safe operation) on one hand, and maintaining a certain degree of flexibility to accommodate changeover on the other. The aim of analysis, synthesis, and design of complex supply chains is to identify the laws governing optimally integrated systems. Optimality of operations is not a uniquely defined property and usually expresses the decision maker’s balance between alternative, often conflicting, objectives. Both biological and engineered complex constructs have evolved through multiple iterations, the former by natural processes (obligatory nod to random, unguided iterations) and the latter by design, to optimize function in a dynamically changing environment by maintaining systemic responses within acceptable ranges.

    Deviation from these limits leads to possibly irreversible damage. Stability and resiliency of these constructs results from dynamic interactions among constitutive elements. The precise definition and prediction of complex outcomes dependent on these traits is critical in the diagnosis and treatment of many disease processes, such as inflammatory diseases (Vodovotz and An, 2013).

    Dynamic, Integrated Systems, Feedback Loops, Responses, Messaging, Optimal Integration, Organization and Decision Making are hallmarks of Design, not unguided processes.

  54. 54
    Me_Think says:

    Cross @ 53
    Apple wouldn’t be pleased about it. Its CEO is gay, and apparently the ‘objective morality’ dictated by ID agents doesn’t allow that orientation.
    Anyway, your answer is still pending.

  55. 55
    Cross says:

    Me_Think @ 55

    Please put away your “Ned Flanders” view of Christians. You really have an axe to grind, don’t you?

    I am sure Tim Cook is a nice guy, he is just another sinner in need of a Savior like you or me.

    Could we get back to the Op? Do you have a real explanation for the nano machines or are you just intent on side tracks?

  56. 56
    Cross says:

    Me_Think

    If you insist

    “First this: God created the Heavens and Earth—all you see, all you don’t see.” Genesis 1:1 MSG

    “Oh yes, you shaped me first inside, then out; you formed me in my mother’s womb. I thank you, High God—you’re breathtaking! Body and soul, I am marvelously made! I worship in adoration—what a creation! You know me inside and out, you know every bone in my body; You know exactly how I was made, bit by bit, how I was sculpted from nothing into something. Like an open book, you watched me grow from conception to birth; all the stages of my life were spread out before you, The days of my life all prepared before I’d even lived one day.” Psalm 139:14 MSG

    Now, where is your materialistic explanation?

  57. 57
    sparc says:

    [sparc quotes gpuccio @ 6 (February 4, 2015 at 8:50 am) in full: the antibody affinity maturation comment reproduced earlier in this thread.]

    You haven’t looked up the evolution of AID, have you?

  58. 58
    sparc says:

    BTW, you left out the part of B-cell development that occurs without any antigen. Lots of mutations, rearrangements and selection. Where and how does ID intervene in these processes? Especially in cases of man-made synthetic antigens that were not present 50 years ago?

  59. 59
    Dionisio says:

    #58 sparc

    You haven’t looked up the evolution of AID, have you?

    Are you referring to the activation-induced deaminase?

    As in this paper?

    http://journal.frontiersin.org.....00534/full

    If that’s what you meant, then, that’s a valid point, thank you for bringing it up here.

    We could discuss this in reference to gpuccio’s post #6. Perhaps gpuccio will consider KF’s suggestion to start a separate thread just for this particular discussion, as I can see it may extend quite a bit, after we dig in the details of the most recent related papers on this subject.

  60. 60
    Dionisio says:

    #60 addendum

    Before we move further on this, we may want to know if a separate discussion thread will be started just for this. Thus the posts won’t have to be cross-referenced between threads.

    Let’s keep in mind that there’s a substantial amount of detailed information to dissect further for this discussion.

    The enzymes involved in the processes could be reviewed, as well as their variations involved within different scenarios.

    But most importantly we should look carefully at the actual choreographies where the referred enzymes and their variations play any role. How did they get orchestrated? Special attention to timing, location, quantity? Other issues to consider too?

    A separate thread could make it easier to follow-up the posted comments.

    Any suggestions on this?

  61. 61
    Jerad says:

    In your equation 1:

    Chi_500 = I*S - 500

    how is S determined?

  62. 62
    kairosfocus says:

    MT, Functional Sequence Complexity, in a context of specific function, is a case of functionally specific complex organisation and associated information. The term is descriptive, and one key result is the reduction of 3-d organised nodes-arcs, wireframes etc. to linear sequences through structured strings of y/n q’s. Where Durston et al went further was in assessing the variability that retains function, applying the H-metric. KF
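    PS: A toy sketch of the Fits idea just described, where functional bits per aligned site = H(ground) - H(observed), summed over sites. The alignment data is made up purely for illustration:

        from math import log2
        from collections import Counter

        ground_H = log2(20)  # 20 amino acids, equiprobable ground state

        # Hypothetical aligned functional sequences
        alignment = ["MKVL", "MKIL", "MRVL", "MKVL"]

        fits = 0.0
        for site in zip(*alignment):  # walk the aligned columns
            n = len(site)
            site_H = -sum((c / n) * log2(c / n)
                          for c in Counter(site).values())
            fits += ground_H - site_H  # information gained at this site

        print(f"{fits:.2f} Fits")  # total Fits for this toy case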

  63. 63
    kairosfocus says:

    DATCG, very interesting catch. Each of the cases you identify will easily pass the FSCO/I threshold for inferring design. And yes, they are cases of that overall pattern. Algorithms and digitally coded info too. KF

  64. 64
    kairosfocus says:

    sparc, are you aware that GP is a pretty serious practising physician, as is also Dr Giem? KF

  65. 65
    gpuccio says:

    Dionisio:

    I think we can proceed this way. We can go on discussing here the general issue raised so well by KF, and I will answer here the comments about that.

    For those interested in the discussion about affinity maturation and its possible explanations (see sparc’s two posts) I will open (later today) a simple new thread where we can go deeper.

  66. 66
    gpuccio says:

    Aurelio Smith:

    “There seems to be a wealth of literature so it seems a fruitful area of research. Have there been any developments produced using ID as a paradigm?”

    We will see better in the new thread. But for the moment, a very simple comment.

    It is, certainly, a “fruitful area of research”. IOWs, we are understanding many important things about how it works. But what has that to do with understanding how it came into existence? Those are two very different issues.

    ID and neo darwinism are paradigms about the origin of functional information. Of course, all research about the nature and organization of function in biology is precious and relevant to compare those two paradigms. So, all research of that kind, whoever does it, is ultimately ID research (if ID is right) or neo-darwinist research (if neo-darwinism is right), but in itself it is neither, it is just research to understand how things work. The results of that research can be interpreted in an ID framework or a neo-darwinist framework, and any intelligent observer is free to decide which is better.

    That most of the good research about how things work is done by the official Academy which believes in the folly of neo darwinism is really obvious: who do you think owns the resources of people, money, institutions and so on, today?

    However, good research is always good research, whatever the prejudices of those who do it. The methodology, sometimes, will be biased, the interpretations, sometimes, will be biased (cognitive bias can never be really suppressed in human activities), but the data and results, in most cases, will be precious and important.

    However, if you are aware of good research which helps explain how the system described by me came into existence, please link to it, and we will discuss it in the new thread.

  67. 67
    kairosfocus says:

    Jerad, S is a dummy variable of binary assignment, default 0 for non-specific. It is set to 1 on having adequate empirical reason to accept functional specificity; as has been discussed many times in and around UD. For the case of the main gear and the wider assembly above, that is intuitive but can be formally explored through assessment of tolerances. Let’s just say that reels made prior to the 1950s did not have interchangeable parts, i.e. the precision was too low and parts had to be individually matched to get a working reel. That BTW was true of the famous .303 Lee-Enfield family of military rifles in British and Commonwealth service from 1895 on [and in 7.62 NATO form still in Indian reserve and/or Police service it seems], with IIRC 17 million examples. That’s why the detachable magazine was specific to the individual rifle and why loading was by charger clip. A similar pattern was discovered in the US on attempting to mass produce the Swedish Bofors 40 mm cannon in the 1940’s. The idea of tolerance easily extends to noise tolerance of the resulting bit string on reduction to a specifying description; indeed, tolerance is a part of specification, what with error budgets etc. S, functional specificity, then allows us to assess the presence or absence of the island of function effect resulting from the particularity of organised interaction required to achieve function; i.e. it is naturally present and non-arbitrary . . . which is why (as was noted above) shaking the reel parts together in a bait bucket will be utterly unlikely to be fruitful. KF
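    PS: In code, the Chi_500 metric Jerad asked about reduces to a one-liner; a minimal sketch (the numeric values below are illustrative only):

        def chi_500(info_bits, specific):
            """Chi_500 = I*S - 500; positive means past the design threshold."""
            S = 1 if specific else 0
            return info_bits * S - 500

        # Hypothetical 900-bit functionally specific description:
        print(chi_500(900, specific=True))   # 400 -> threshold passed
        print(chi_500(900, specific=False))  # -500 -> no inference drawn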

  68. 68
    gpuccio says:

    sparc:

    I will answer your comments later today, in the new thread (Well, I really hope that I will have time to keep this promise! 🙂 )

  69. 69
    kairosfocus says:

    F/N: One key point in all the above is to rivet the reality and direct observability of FSCO/I in mind. This is a real, intuitively recognisable, routinely observed and quantifiable phenomenon with one actually observed cause, intelligently directed configuration. AKA, design. Where, the islands of function effect and linked needle in haystack blind search challenge give a plausible reason for that. Where also the quantification approach at first level is as outlined by Orgel in 1973, structured string of Y/N q’s that reduces a 3-d interactive structure to a linear textual description in ultimately binary digits, which can be assessed on tolerance for noise or variability. And of course an onward point since Paley then von Neumann is that the addition of an integrated self replicating facility reflects an enormous increment of FSCO/I. KF

  70. 70
    gpuccio says:

    Me_think:

    Heh. The non-physical conscious being (just a thought – maybe he is made of dark matter with a dark energy brain) has to go from place to place to fix processes – right? So ‘from where’ is within context, but of course you can’t divulge the secrets.

    Wrong. I can and will divulge all pertinent secrets that I know! (I have never been good at keeping secrets 🙂 )

    This is pertinent and this I know: once we localize in time and space a design intervention, then the consciousness of the designer must have acted at that time and in that place to design the thing. This is pertinent.
    Where the designer came from is not pertinent, and I know nothing about that non-pertinent question.

    So, let’s take an example. I have inferred design for ATP synthase. So, wherever ATP synthase first emerged, and at whatever time it happened, the designer acted there.

    Now, I am afraid that at present we have no clear spatial location for such an event: it probably happened somewhere in the seas. OK, the designer acted there.

    We have, however, some idea of when it happened: in LUCA, and in the window of time when LUCA existed. Not exactly a precise date, but certainly a definite range of possibilities. Anyway, the designer acted at that time.

    The design activity can certainly be localized well in time and space, because specific design events can be inferred by ID theory. Our knowledge about the specific temporal and spatial location of those design events depends simply on our growing understanding of the events themselves.

    On the contrary, the wanderings of the designer between his design activities are not, at present, in the range of ID theory, because ID theory is interested in the observed design events. If you are really curious, maybe you can try different disciplines for that.

    No. Claiming xyz detects design in a blog and posting few OPs on that same blog is not proof of the claim. There are thousands of websites and blogs claiming all kinds of things. Their posting articles in their own blog proving their claim is inconsequential.

    In science nothing can be “proved”. We discuss our different interpretations of data. Each thinking person has to decide what is the best explanation. That choice cannot be delegated to anyone else.

    Then, there is the “consensus”: what the majority believes. That is interesting, but it is in no way “proof”. Many people think differently from the consensus. Who is right? We cannot say in general. The consensus can be biased, and often it is. On the other hand, single individuals can certainly be wrong, and often are.

    So, again, the final decision is of each thinking person. That’s why we discuss here, even if our ideas are certainly those of a minority (at least in the scientific Academy). I believe that my ideas are true, and therefore I discuss them here, even if the consensus does not approve. This is worthwhile, for me, and I like it.

    His paper is about Functional Sequence Complexity which is measured in Fits. It is about calculating change in functional uncertainty from ground state.

    It certainly is. And that is exactly the same thing as computing dFSI. As I have discussed many times.

    The only thing that Durston does not give in his paper is a threshold to categorize the dFSI value into a binary value (dFSCI yes/no). I have used Durston’s numbers with a threshold proposed by me for biological objects (150 bits), and inferred design for most of his protein families.

    But there is absolutely no doubt that Durston’s paper is about the measurement of digital functional complexity in protein families: he has explained that many times, himself. And there is no doubt that Durston is a researcher in the ID field.

    No, the truth is even ID journals shun away from CSI and design detection. They prefer the ‘search landscape with steep hills which poor evolution can’t climb’ papers.

    You should be able to understand that it’s the same thing. The main objection (probably the only serious one) of neo darwinists to the application of design detection to proteins is that the protein functional landscape, at present not completely known, could be such that the silly neo darwinist explanation may be vaguely credible. That is not true, and the researchers you mention are trying to demonstrate exactly that. It’s completely pertinent and important ID research.

  71. 71
    gpuccio says:

    Me_think:

    “I think the ID agent pulled up a chair and worked out all the parameters of the nano machine on his Mac, converted it into an STL file and printed it with a bio 3D printer. What do you think?”

    I think that is a good metaphor of what really happened. But probably, Windows and Linux were given a chance too.

    Seriously, you have described very correctly the steps that the designer must have implemented: a purpose, a plan, an implementation at software level, a final implementation at hardware level. You are, definitely, a good ID thinker. 🙂

    Ah, I suppose the chair is a bonus!

  72. 72
    Me_Think says:

    Cross @ 57

    If you insist
    “First this: God created the Heavens and Earth

    Well, you have got a problem right there. If you need to create a 3D world, you have to be in a higher dimension. Even a simple 4th (spatial) dimension is impossible to manage – you can’t even tie a knot beyond the 3rd dimension. All knots in higher dimensions will be unknots.
    A creator who creates in 4D or any higher dimension can’t even create a single atom which conforms to the 3D world, as the atoms will have nP orbitals (n = the number of dimensions). In the 4th dimension, you have 4P orbitals, in the 5th, 5P orbitals, and so on. This will allow more than two electrons per orbital, which will change the element! Every element will have weird properties! Atoms will collapse easily. Atomic bonding will be shot. Molecule formation will be difficult and weird, not to mention minor problems like sound rippling back (at least in even-numbered dimensions), which will make communication with ID helpers difficult; then there is the problem of constructing structures with higher edge counts (10, 32, 24, 96, 720, 1200), etc.

  73. 73
    gpuccio says:

    Cross:

    You are right: mock, definitely! 🙂

  74. 74
    gpuccio says:

    Ah, and praise is due to Aurelio Smith and sparc for avoiding the mock addiction and for trying to bring the discussion where it belongs, on the biological facts. I hope we can have a fruitful discussion later.

  75. 75
    gpuccio says:

    Me_Think:

    “If you need to create a 3D world, you have to be in a higher dimension.”

    I am not sure I understand: are you saying that 3D printers are in higher dimensions?

    Or, if you refer to creating the whole universe (and therefore to God), then the correct way to say it is, IMO:

    “If you need to create a whole universe based on dimensions, time and space, and whatever else, you have to be out of (beyond) dimensions, time and space, and whatever else”.

    Which is exactly what God is for many religious views: have you ever heard the word “transcendent”?

    Where is the problem?

  76. 76
    Jerad says:

    KF #68

    S is a dummy variable of binary assignment, default 0 for non-specific. It is set to 1 on having adequate empirical reason to accept functional specificity; as has been discussed many times in and around UD.

    What mathematical/analytic tools are used to determine this, and to prevent the determination from being anything more than ‘it looks designed’?

    For example: if I gave you a sequence of numbers how would you analyse it to determine the value of S?

  77. 77
    kairosfocus says:

    MT, There is a serious discussion as to whether our world embraces 11 dimensions, so that particles become strings with the higher dimensionality rolled up. And certain quantum effects sound a lot like reflection into a lower order space of something of higher dimensionality. So, I would keep an open mind on how many dimensions exist in and “around” our world. Bring on non-spatial dimensions and things open up wonderfully. Start with time and concepts tied to phase or configuration spaces, as well as things like Laplace and Z transforms where complex frequency domain phenomena definitely have time-space domain effects to the point where for several years I seemed to be living in that world more than this. KF.

  78. 78
    kairosfocus says:

    Jerad, we already noted tolerance for variation in config spaces in the setting of islands of function, with several very practical cases in point. My onward connexion is signals vs noise and breakdown of function. Practical test: inject some stochastic noise; I favour whitened Zener or Sky noise as sources that are non-algorithmic. H’mm, that sounds like a toothpaste advert. White noise is flat across the relevant spectrum in power per Hz. KF

    PS: It looks designed in a setting of a wide base of known design patterns and the vera causa test is itself fruitful, just watch archaeologists in action on archaeology vs natural. BBC’s Time Team 20 year series would make a very good reality check.
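    PPS: A toy version of that noise-injection test; the “function” tested here (whether a short arithmetic expression still evaluates) is purely illustrative:

        import random

        random.seed(1)  # reproducible run

        def still_functional(s):
            # Function here = the string still evaluates as arithmetic.
            try:
                eval(compile(s, "<test>", "eval"))
                return True
            except Exception:
                return False

        spec = "2+3*4-1"
        for noise in (0.0, 0.1, 0.3, 0.5):
            mutated = "".join(c if random.random() > noise
                              else random.choice("abc!?") for c in spec)
            print(noise, repr(mutated), still_functional(mutated))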

  79. 79
    kairosfocus says:

    F/N: I added a bit on self replication to the OP. Pardon a bit of typographic murder, there is a busy-busy mind focus issue with a budget process ongoing so I clipped myself from elsewhere. KF

    PS: Can I get a nicer WP editor for Valentine’s Day or whatever holiday is handy? [Imagine Blogger has a more functional editor!]

  80. 80
    Me_Think says:

    KF @ 78

    There is a serious discussion as to whether our world embraces 11 dimensions, so that particles become strings with the higher dimensionality rolled up.

    If you believe in compactified dimensions rolled up to the Planck scale, then good luck. You will need energy up to 10^18 GeV to probe the compact manifold; we have just reached 125 GeV with the Higgs boson. Wonder how big a tunnel you will need to produce 10^18 GeV, not to mention that the very suggestion of God huddled in a Planck-scale dimension will generate ire in the faithful.

  81. 81
    Me_Think says:

    gpuccio @ 76

    Which is exactly what God is for many religious views: have you ever heard the word “transcendent”?
    Where is the problem?

    The problem is we can’t conflate religion with physics. Parceling dimensional impossibilities into ‘transcendent’ and insisting ID is science is not gonna work.

  82. 82
    kairosfocus says:

    MT, I am simply outlining what has been on the table in recent decades and suggesting that we need to bear that in mind. Yup, testing is going to be a bear, but that is more common in and around physics than we commonly admit . . . try contentions on climate dynamics and linked dynamic-stochastic system models for size (and no let us not go off on a tangent, I am simply exemplifying and showing one reason why I brought up D-S system models above). KF

    PS: I rather doubt that the concept of a transcendent orderly Creator-Sustainer who is communicative reason himself, in whom we live, move and have our being who sustains all things by the word of his power [= natural laws in an integrated framework actively sustained everywhere and every-when . . . that’s Newton’s vision of Physics in his General Scholium BTW and that reflects thinking God’s thoughts after him per Boyle, Kepler, Faraday, Maxwell et al . . . ] is about God huddling in Planck scale dimensions. He would be present and active at all scales and dimensions. And as necessary and transcendent being the essential nature is spirit active through the lens of mind. (Recall, nothing, non-being, has no causal powers. If ever there were an utter nothing just such would forever obtain. A necessary being at the root of reality is required ontologically, and the issue is the nature, cf recent discussion here.)

  83. 83
    Me_Think says:

    gpuccio @ 71

    So, let’s take an example. I have inferred design for ATP synthase. So, wherever ATP synthase first emerged, and at whatever time it happened, the designer acted there.
    Now, I am afraid that at present we have no clear spatial location for such an event: it probably happened somewhere in the seas. OK, the designer acted there.

    Houston, we have a problem – not only is the ID agent creating impossible processes, he/she/it even has to serve the processes:
    You would need at least a million agents to serve the universe. Calculation from another thread:

    If you consider that just 30,000 processes (against the actual billions of processes (in millions of organisms) – like protein folding, 2-point mutations, searching for new functions, metabolism – pretty much everything which ID claims is improbable with unguided processes) need to be attended to in a given time frame, a binomial calculation shows:
    the minimum number of ID agents that can provide a 90% probability of getting service (attention to processes) for just 30,000 processes is 3,069. IOW, minimize the capacity required for a binomial distribution with n = 30,000, p = 0.1.
    For a 99.9% ‘service’ probability, a minimum of 3,162 agents will be required. Imagine how many will be required for a billion processes!
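    Those figures can be reproduced with a short script (a sketch; the service model, n processes each needing attention with p = 0.1, is the assumption stated above):

        # Requires scipy; binom.ppf returns the smallest capacity whose
        # cumulative service probability meets the target.
        from scipy.stats import binom

        n, p = 30_000, 0.1
        for target in (0.90, 0.999):
            print(target, int(binom.ppf(target, n, p)))
        # -> roughly 3,067 and 3,161, close to the 3,069 / 3,162 quoted above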

    Of course you can again claim the ID agent is transcendental, which of course is just another way of bringing religion into ID science.

  84. 84
    Me_Think says:

    KF @ 83

    He would be present and active at all scales and dimensions. And as necessary and transcendent being the essential nature is spirit active through the lens of mind. (Recall, nothing, non-being, has no causal powers. If ever there were an utter nothing just such would forever obtain.

    I am sure you will agree that if the ID agent is God, we are just conflating religion with science. ‘God is omnipotent so He can do anything’ is not an argument at all. We don’t need CSI, FSCO/I, etc. if all ID wants to say is ‘God can do it, no matter what’.

  85. 85
    kairosfocus says:

    MT: Nope, there is no one THE ID agent. The focal issue in design theory is design as process and its outputs, leading to the question: can we detect design as causal process from signs evident in aspects of objects, processes and phenomena? The answer on analysis of FSCO/I etc. is yes, insofar as science proceeds by inductive logic. This then . . . reality just called, forget the short pre-work nap I hoped for . . . allows us to apply design detection i/l/o vera causa to the remote past of origins. On the world of life, there is an asymmetry: evo mat etc. are committed to no design, while ID thinkers ab initio have accepted that the inference to design does not specify a particular designer. Something Creationists don’t like. A molecular nanotech lab would account for what we see. Where things do point to a cosmological designer on fine tuning, even through a multiverse speculation, is where ID opens a door that Philosophy may then go through. But at this level all are doing phil anyway, just many who do so dressed in a lab coat don’t realise it. That’s where getting the enormous energies in hand would make the difference of empirical test. Gotta run and go play with policy debates. Later. KF

  86. 86
    Jerad says:

    KF #79

    Jerad, we already noted tolerance for variation in config spaces in the setting of islands of function, with several very practical cases in point. My onward connexion is signals vs noise and breakdown of function. Practical test: inject some stochastic noise; I favour whitened Zener or Sky noise as sources that are non-algorithmic. H’mm, that sounds like a toothpaste advert. White noise is flat across the relevant spectrum in power per Hz.

    Forgive me for being stupid but . . .

    If I give you a sequence of numbers (or zeroes and ones) and ask you to determine S for it, you would introduce some noise? Some random numbers into the sequence? I don’t see how that helps determine S. I’m not talking about islands of function, I’m just talking about a sequence of values.

    It looks designed in a setting of a wide base of known design patterns and the vera causa test is itself fruitful, just watch archaeologists in action on archaeology vs natural. BBC’s Time Team 20 year series would make a very good reality check.

    I actually know Helen Geake who was on the show. Oh, it’s not BBC by the way. It’s ITV.

    But that’s a side issue, I’m still not clear how you would determine S for a sequence of numbers.

    Archaeologist do compare with known patterns. Would you do that with a number sequence?

  87. 87
    Jerad says:

    Ooops, my bad. Time Team is a Channel 4 programme. Not BBC or ITV.

    But I really do know Helen Geake. I’ve been to her house.

  88. 88
    Joe says:

    Me Think:

    His paper is about Functional Sequence Complexity which is measured in Fits.

    Biological CSI has functional sequence complexity. The two are one and the same. And if you cannot understand that, it is because you are ignorant of both FSC and CSI.

  89. 89
    Joe says:

    Jerad:

    But that’s a side issue, I’m still not clear how you would determine S for a sequence of numbers.

    If you saw a sequence of numbers, would you think that nature, operating freely, did it? Or would you think some intelligent agency, most likely a human, did it?

  90. 90
    Joe says:

    Me Think:

    I am sure you will agree that if the ID agent is God, we are just conflating religion with science.

    So you are also ignorant of science. Figures.

    Science only cares about reality which means science doesn’t care if God didit. Newton and others saw science as a way of understanding God’s handiwork.

  91. 91
    Joe says:

    It’s hilarious watching the anti-ID mob try to criticize ID’s concepts when all they really need to do is step up and at least TRY to support the claims of their position. They can’t produce testable hypotheses for it. They don’t have a model. They don’t have any testable predictions based on the posited mechanisms. They don’t even have a methodology.

    And that is why they are forced to flail away at ID with all the misrepresentations, lies and bloviations they can muster.

  92. 92
    Joe says:

    If ID says that something has FSCO/I and because of that is designed, all someone has to do is demonstrate that nature, operating freely, can produce it and the design inference is refuted.

  93. 93
    kairosfocus says:

    MT, first, a few moments between duties and phone calls. Omnipotence is not the ability to do anything we may imagine; it implies limits of logic [no square circles], character — goodness and coherent order come to mind — and more. Second, God would be intelligent and, presumably, as communicative reason himself, should be accessible to the principle that intelligence often leaves traces behind, e.g. codes, algorithms and algorithm-executing machines, all marked by FSCO/I. Note from the OP the remarks of Hoyle, an agnostic. I suggest the Psalmist remarked on how the heavens declare the glory of God, the expanse of the skies above shows his handiwork. That points to the very same cosmological signs pointing to design that were picked up in various ways by non-theists from Plato to Hoyle. KF

  94. 94
    Piotr says:

    So, no — just like archaeologists examining a putative artifact (as opposed to “natural”) — I do not need to carry out a major reverse engineering exercise to identify that an Abu Garcia 6500 C3 reel exemplifies FSCO/I.

    When archaeologists find e.g. fragments of a chariot wheel, a golden bull figure and some horse bones in a Scythian burial mound, they recognise the first two as man-made artifacts and the third as natural animal remains. Please let me know when an archaeologist detects “intelligent design” in a skeleton.

  95. 95
    kairosfocus says:

    Jerad, the implication is: hit with noise, build per revised specs and test. In fact there is a name for this, sensitivity analysis. Manufactured items need a balance of precision and resilience across the range of possible combinations that makes the system economically and technically feasible. This is a commonplace of electronics, where there is a huge difference between a one-off or job-shop type product and real automated mass production. The case of the Record reel vs the Ambassadeur should be instructive, as is the .303 and the Bofors. KF

  96. 96
    Joe says:

    Piotr:

    Please let me know when an archaeologists detects “intelligent design” in a skeleton.

    Please let us know when someone, anyone, comes up with a methodology for determining whether a skeleton arose via blind and unguided processes.

  97. 97
    kairosfocus says:

    Jerad, Pardon my lack of knowledge on who owns Channel 4. I thought BBC. TT in any case is accessible online. KF

  98. 98
    gpuccio says:

    Me_Think:

    The problem is we can’t conflate religion with physics. Parceling dimensional impossibilities into ‘transcendent’ and insisting ID is science is not gonna work.

    Don’t cheat.

    I paste here my post #76:

    Me_Think:

    “If you need to create a 3D world, you have to be on higher dimension.”

    I am not sure I understand: are you saying that 3D printers are on higher dimensions?

    Or, if you refer to creating the whole universe (and therefore to God), then the correct way to say it is, IMO:

    “If you need to create a whole universe based on dimensions, time and space, and whatever else, you have to be out of (beyond) dimensions, time and space, and whatever else”.

    Which is exactly what God is for many religious views: have you ever heard the word “transcendent”?

    As anyone can see, I am conflating absolutely nothing. I am keeping the two things strictly separate.

    So, either the designer needs not be “on higher dimension”, because he is simply designing things in our dimensions (see my question about 3D printers), and there is no problem, and we can do science,

    OR

    as you suggest, the designer is creating a 3D world, and then my statement that to do that he needs to be beyond dimensions is the correct one, philosophically.

    I am conflating nothing. You are conflating the idea of a creator with the idea of a designer in time and space and dimensions. Accept your responsibilities.

  99. 99
    gpuccio says:

    Me_Think at #84:

    As you seem to be aware of the computational resources and programming abilities of the ID agents, while I am not, please could you give me some details of the source of your knowledge?

    And by the way, in principle I have no objection to whatever number of designers is needed. So, if you can show me reasonably that 30000 designers are needed, I will be happy to believe that 30000 designers acted.

    Details, please.

  100. 100
    kairosfocus says:

    Piotr, side track. Archaeologists — per their dept — detect artifacts based on observed FSCO/I, from flint or bone tools to potsherds in timeline-correlated styles to walls and murals to buried crashed a/c from WW II these days; doubtless we will soon hear of Vietnam War archaeology, now that it is 50 years past. Whether the human body plan shows signs of design, incl. in the skeleton, would come under other departments: anthropology, medicine, molecular biology etc. And the FSCO/I involved in the living cell is already pointing to design from the root of the tree of life on up. Think about the FSCO/I involved in setting up the physical and neural requisites for speech. So, an archaeologist would draw a difference between a skeleton caught in an avalanche or landslide and one buried with ritual objects in a grave. All of which involves significant design inferences, as could skeletal signs on cause of death if murder is a material issue for the archaeology. KF

  101. 101
    kairosfocus says:

    Joe:

    If ID says that something has FSCO/I and because of that is designed, all someone has to do is demonstrate that nature, operating freely, can produce it and the design inference is refuted.

    Excellent point.

    Strikes me that the failure of dozens of attempts across the years is what likely led to the rhetorical tactics we tend to see today.

    Of course the fate of those dozens of attempts at and around UD is not brought up save occasionally by those of us who did the refuting.

    Mars canali and self-assembling clocks remain favourites. The latter is doubtless still on YouTube, its makers not realising what it takes to make properly interacting gear trains and clocks.

    KF

  102. 102
    kairosfocus says:

    Jerad, on S, take as a simple case a string of characters for a coded algorithm. Hit it with moderate noise: what happens, and why? Gotta run. KF

  103. 103
    kairosfocus says:

    F/N: That is, what happens when you try to compile and run the noise-hit code? Many things can happen. KF

  104. 104
    Jerad says:

    Joe #90

    If you saw a sequence of numbers would you think that nature, operating freely, did it? Or would you think some intelligent agency, most likely a human, did it?

    Let’s say the sequence of numbers consists of measurements of some phenomenon, and we’re trying to figure out whether it’s designed or not.

    How about this: there are four bases in DNA: A, T, C, G. If I gave you a sequence of As, Ts, Gs and Cs how would you go about determining S for that sequence?

  105. 105
    Jerad says:

    KF #96, #103

    the implication is: hit with noise, build per revised specs and test. In fact there is a name for this, sensitivity analysis. Manufactured items need a balance of precision and resilience across the range of possible combinations that makes the system economically and technically feasible. This is a commonplace of electronics, where there is a huge difference between a one-off or job-shop type product and real automated mass production. The case of the Record reel vs the Ambassadeur should be instructive, as is the .303 and the Bofors

    on S, take as a simple case a string of characters for a coded algorithm. Hit it with moderate noise: what happens, and why? Gotta run

    It’s just a sequence of numbers (my example and question), so what can happen? Are you ‘adding’ the noise? Term by term? What are you looking for after adding the noise that you couldn’t see before?

  106. 106
    gpuccio says:

    To all:

    OK, the post about affinity maturation is there. In some way I found the time! 🙂

  107. 107
    kairosfocus says:

    Jerad, the context i/l/o the OP is that we are dealing with structured strings giving system descriptions or instructions that guide construction or otherwise express functional coded info, with AutoCAD files as a paradigm practical case. Sensitivity to noise comes out when you put the perturbed system description to work, and will readily show the island-of-function effect. I for one would not want random disturbances to the wiring diagram of a complex digital ckt, or to a program I cared about — not even a hello world. If you are talking about arbitrary strings, then generally one string will most likely be as good or bad as another. Kindly cf. the OP. KF

    PS: there are many ways to inject noise into a string; a simple one would be to use a random value to step off a number of digits, then change the bit encountered. Repeat n times as relevant.
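
    To make that concrete, here is a minimal sketch in Python of the bit-flip perturbation just described (the function name and the bit-list representation are illustrative assumptions, not a fixed recipe):

        import random

        def inject_noise(bits, n_hits, seed=None):
            # Flip n_hits randomly chosen bits in a list of 0/1 values.
            rng = random.Random(seed)
            noisy = list(bits)
            for _ in range(n_hits):
                i = rng.randrange(len(noisy))  # step off to a random position
                noisy[i] ^= 1                  # change the bit encountered
            return noisy

    One would then feed the perturbed string back into its functional context (e.g. attempt to compile and run it) and observe whether function survives.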

  108. 108
    Joe says:

    Jerad:

    Let’s say the sequence of numbers consists of measurements of some phenomenon, and we’re trying to figure out whether it’s designed or not.

    Let’s say that you are desperate and grasping for something.

    How about this: there are four bases in DNA: A, T, C, G. If I gave you a sequence of As, Ts, Gs and Cs how would you go about determining S for that sequence?

    Biological specification pertains to function. Now, you could help your position by demonstrating that blind and unguided processes can produce DNA from scratch, but you won’t. Your side can’t even model such a premise. And your flailings at ID prove that your position is a fraud.

    Thank you

  109. 109
    Jerad says:

    KF #108

    the context i/l/o the OP is that we are dealing with structured strings giving system descriptions or instructions that guide construction or otherwise express functional coded info, with AutoCAD files as a paradigm practical case. Sensitivity to noise comes out when you put the perturbed system description to work, and will readily show the island-of-function effect. I for one would not want random disturbances to the wiring diagram of a complex digital ckt, or to a program I cared about — not even a hello world. If you are talking about arbitrary strings, then generally one string will most likely be as good or bad as another. Kindly cf. the OP.

    If I have a sequence of numbers or characters and I’m trying to decide if it’s got complex and specified information then I have to have a way to test when I’m not sure of the ‘source’ of the sequence. Yes? So, if I use your metric I have to be able to decide on S. I’m just trying to figure out how that might be done.

    PS: there are many ways to inject noise into a string; a simple one would be to use a random value to step off a number of digits, then change the bit encountered. Repeat n times as relevant.

    I still don’t see how doing that with a sequence of numbers that I’m not sure about is going to help determine S.

  110. 110
    Joe says:

    Jerad, Flailing away at ID is not going to help your position’s claims. And the way to scientifically refute ID is by actually supporting your position’s claims.

  111. 111
    Jerad says:

    Joe #109

    Let’s say the sequence of numbers consists of measurements of some phenomenon, and we’re trying to figure out whether it’s designed or not.

    Let’s say that you are desperate and grasping for something.

    You don’t have to be rude; it’s a legitimate question. In the 60s some astronomers heard signals which seemed too regular to be natural. They had to analyse the sequence of readings to try and figure out whether it was intelligently designed or something else. I’m just asking how KF’s metric would be applied in such a case.

    Biological specification pertains to function. Now, you could help your position by demonstrating that blind and unguided processes can produce DNA from scratch, but you won’t. Your side can’t even model such a premise. And your flailings at ID prove that your position is a fraud.

    IF I had a sequence of DNA base pairs and I was trying to decide if it was just random/junk or designed then I’d want a way to analyse it. If I chose to use KF’s metric then I’d have to know how to find S.

    But if you don’t like that example then let’s just stick with a sequence of numbers representing readings of signals/transmissions/detections from a location in space. How would you use KF’s metric to decide if it had complex and specified information?

  112. 112
    Jerad says:

    Joe #111

    Flailing away at ID is not going to help your position’s claims. And the way to scientifically refute ID is by actually supporting your position’s claims.

    I’m just asking questions I think are pertinent. You don’t have to answer them or pay any attention if you don’t wish to. If KF thinks my questions are silly or impertinent then I think he’s perfectly capable of saying so himself.

  113. 113
    Joe says:

    Jerad:

    IF I had a sequence of DNA base pairs and I was trying to decide if it was just random/junk or designed then I’d want a way to analyse it.

    So these base pairs just appeared out of nowhere? In science, context is important, Jerad.

    But if you don’t like that example then let’s just stick with a sequence of numbers representing readings of signals/transmissions/detections from a location in space

    No, Jerad, I would look at the actual readings — any scientist and investigator would.

    And your questions are silly because your position has all the power and yet is unable to do anything with it so you are forced to flail away at ID.

    We get it, Jerad.

  114. 114
    Jerad says:

    Joe #114

    So these base pairs just appeared out of nowhere? In science, context is important, Jerad.

    Hmmm . . . I found them in a smear of organic material in the depths of an asteroid.

    But if you don’t like that example then let’s just stick with a sequence of numbers representing readings of signals/transmissions/detections from a location in space

    No, Jerad, I would look at the actual readings — any scientist and investigator would.

    The numbers are relative frequency readings from a base frequency in the UHF band. Each number represents a +n×10 Hz change. It’s a frequency-modulated signal.

    And your questions are silly because your position has all the power and yet is unable to do anything with it so you are forced to flail away at ID.

    We get it, Jerad.

    As I said, you don’t have to interact with me at all if you don’t want to.

  115. 115
    Joe says:

    Jerad:

    Hmmm . . . I found them in a smear of organic material in the depths of an asteroid.

    Then you can be sure they came from a living organism as mother nature doesn’t have the ability to produce DNA.

    The numbers are relative frequency readings from a base frequency in the UHF band. Each number represents a +n×10 Hz change. It’s a frequency-modulated signal.

    I would need to see the readings, Jerad. The actual raw stuff.

    As I said, you don’t have to interact with me at all if you don’t want to.

    Geez I am just pointing out the obvious.

  116. 116
    kairosfocus says:

    Jerad,

    it will probably help for you to look at the context; and the first case given is a start point. We already have an observed function, dependent on properly organised and coupled parts, for which we then develop a nodes-arcs wiring diagram or wireframe description etc. as appropriate; this has an informational metric based in the first instance on the number of y/n questions that specifically describe the relevant configuration.

    We can then study how perturbation by noise or variability affects the already in hand performance. This characterises the island of function in the space of configs.

    With the 6500 reel, a simple case is that substituting Carbon Tex drag washers enhances smoothness at low drag settings, but if the drag is locked down the brass gear [brass is very good for machining] may have its teeth stripped. That’s a case showing how interactive and interdependent function is.

    It helps to start from a clear case, to prevent falling into unnecessary confusions.

    Also, to make sure we never lose sight of it.

    Now with an exemplar in hand, we may look to the associated info metric, which is a description of what Wicken called the wiring diagram. To assess effects of perturbation, one first needs to have observed and probably measured the function. Then, as config varies — here based on noise — we can see the effect on the already observed function.

    So, for instance a Hello World text string will compile and on running the object code will print hello world. Hit the source code with noise.

    Most likely, it won’t compile, or if you are lucky it will only affect the spelling or a comment.

    This already shows us that position value can be differently related to function, and that some zones will be very intolerant of perturbation.

    Now, we can consider our mRNA string to be fed to a ribosome to compile an in vivo AA chain.

    3 of the 64 codons are stops, so it is easy to get a premature stop, with several codons being just one letter off. Then, chains must fold and function as proteins in the cell, generally speaking.

    It is easy to get a fold failure, or a fit failure or a functional failure, even if by good luck we have not prematurely terminated.

    The same island of function effect.

    For novel folds and functions, we note that fold clusters are deeply isolated in AA sequence space, leading to severe challenges to blind search approaches.

    And of course, the root problem here is to get to not a protein but to first organised, FSCO/I rich cell based life.

    Blind chance and mechanical necessity simply have never been observed as originating such, no surprise given the search space challenge. But we do routinely know a source: intelligently directed configuration.

    KF

    PS: The Wow! signal rapidly showed itself to be an oscillation. If you look above, relevant cases are generally associated with aperiodic, organised patterns that are non-random and functional. A smooth gear surface or its toothed pattern is regular in a sense, but again is complex and specific, functional in a nodes-arcs wiring diagram context. The description of a gear will not be the equivalent of adadad . . . ad either: flat parallel sides, a disk with defined radius, a central hole set and keyed for a shaft, precisely shaped and cut teeth, a 3-d orientation in proper alignment, and coupling to other components all need specification in that description.

    PS: Glance at the discussion of gears with illustrations here:

    http://www.uncommondescent.com.....-are-made/

  117. 117
    Jerad says:

    Joe #116

    Then you can be sure they came from a living organism as mother nature doesn’t have the ability to produce DNA.

    I know you make that assumption but how can I use KF’s metric to show that?

    I would need to see the readings, Jerad. The actual raw stuff.

    If I gave you a sequence of 400 frequency readings, each lasting 5 seconds, in the order received and all within 10 kHz of a ‘base’ frequency, what would you do next?

  118. 118
    kairosfocus says:

    Jerad, there are many issues in the spontaneous origin of RNA and DNA, starting with getting to the monomers. The issue of coded algorithmic organisation tied to execution machinery in a metabolising automaton with smart encapsulation then kicks in. The critical issue is blind search for islands of function in config spaces: once we hit 500 – 1,000 bits to characterise states, we utterly overwhelm the search resources of the solar system or the observed cosmos. What the metric does is to summarise that and give an indication of when it has kicked in. If the information value is too low, it is irrelevant to a design inference. Even with a high information metric, if the entity is not functionally specific there is no needle-in-haystack challenge. 500 or 1,000 bits is a reasonable threshold on the scale of config space at which the limit of available search resources kicks in. KF

  119. 119
    kairosfocus says:

    Jerad, you are getting close to the notion that a design inference must possess a universal decoder. Post-Gödel, that’s not relevant; there is no good reason to imagine such general algorithms exist. Show function; show dependence on a configuration or code etc.; show specificity and complexity beyond a reasonable threshold. That FSCO/I is likely designed. Where there is doubt, defaults will point away from inferring design. KF

  120. 120
    Joe says:

    I know you make that assumption but how can I use KF’s metric to show that?

    The same type of assumption I use to know in which direction the Sun will first appear on the horizon.

    If I gave you a sequence of 400 frequency readings, each lasting 5 seconds, in the order received and all within 10 kHz of a ‘base’ frequency, what would you do next?

    Ask whether they were narrow-band or bled all over the channels. Nature’s signals bleed all over, whereas artificial signals are narrow-band.

  121. 121
    Jerad says:

    Sorry, sorry, I’m still not quite getting a clear criterion for determining S.

    Okay, let’s say I’m one of those SETI guys, and I detect a signal from a specific spot in the galaxy. The signal is a series of about 400 sequential bursts at varying frequencies in the UHF band. The bursts are each 5 to 10 seconds long and vary at about +/- n×1 kHz from the central frequency, n varying from -5 to +5 . . . about. There’s a bit of noise and distortion (as you would expect regardless of the source at a great distance) but I haven’t got an explanation for the source.

    So . . . how do I use KF’s metric to evaluate the signal? What helps me to determine S? Let’s just nail down this one situation.

  122. 122
    kairosfocus says:

    Jerad, at this point I simply do not believe you. S holds default 0, and only goes to 1 if there is positive, observable reason to infer that the entity or aspect that is the basis for an information metric is such that its configuration is functionally and specifically constrained. A random value will leave it at 0, and something that is a manifestation of mechanically necessary order will be of low contingency and thus is restrained from being an information store. In the case of a potential radio signal, absent positive indication of a modulated carrier with a message, it locks to 0, as a reject-incorrectly error is acceptable for the purpose; this is not a universal decoder test or algorithm, it is a criterion of highly confident positive warrant. That’s not too hard to figure out, and more than enough illustrative cases have been given. KF

  123. 123
    kairosfocus says:

    PS: It has not escaped my notice that you are busily discussing a hypothetical possible ET signal — which, unless something like CW or SSB AM or phase mod or FM or the like or a spread-spectrum pattern were detected, would have S stay at its default 0 — while studiously side-stepping a well-known case of coded, algorithmically functional and highly specific information in copious quantities in the cells of our own bodies. Note the OP.

  124. 124
    Piotr says:

    KF,

    Archaeologists — per their dept — detect artifacts based on observed FSCO/I

    Nope. They don’t “observe” the FSCO/I of their finds (and of course they don’t know what FSCO/I is). They detect artifacts by comparing them with known patterns of human activity, not by calculating the probability that the object they have found has self-assembled by chance. And they can make mistakes, or be unable to decide if they are dealing with an artifact. For example, the origin of the so-called “Neanderthal flute” of Divje Babe (Slovenia) has been hotly disputed for 20 years now without reaching consensus. Some argue that the punctures in the bone are carefully designed and “consistent with four notes of the diatonic scale”; others claim that they are fairly typical bite marks left by a carnivore. What a pity they haven’t asked you for help. Could you perhaps compute the FSCO/I of a cave bear femur with a few holes in it?

  125. 125
    fifthmonarchyman says:

    Hey Jerad.

    you said.

    Sorry, sorry, I’m still not quite getting a clear criterion for determining S.

    I say,

    I would like more elaboration on this as well. I think S is where the action is.

    kairosfocus says.

    S holds default 0, and only goes to 1 if there is positive, observable reason to infer that the entity or aspect that is the basis for an information metric is such that its configuration is functionally and specifically constrained

    I say,

    So a design inference cannot be made if we don’t know the function of a thing?

    peace

  126. 126
    fifthmonarchyman says:

    Jerad

    if I gave you a sequence of numbers how would you analyse it to determine the value of S?

    I say.

    I might try and see if I could recognize a non-algorithmic pattern in the string when compared to a randomized grouping of the same numbers. But for me S is not especially associated with function.

    At first glance I think function is not the best criterion to focus on. It seems to be too “practical”. Lots of designed things aren’t practical at all.

    Maybe I’m asking too much of the concept

    peace

  127. 127
    Jerad says:

    KF #124, 125

    Jerad, at this point I simply do not believe you. S holds default 0, and only goes to 1 if there is positive, observable reason to infer that the entity or aspect that is the basis for an information metric is such that its configuration is functionally and specifically constrained. A random value will leave it at 0, and something that is a manifestation of mechanically necessary order will be of low contingency and thus is restrained from being an information store. In the case of a potential radio signal, absent positive indication of a modulated carrier with a message, it locks to 0, as a reject-incorrectly error is acceptable for the purpose; this is not a universal decoder test or algorithm, it is a criterion of highly confident positive warrant. That’s not too hard to figure out, and more than enough illustrative cases have been given.

    I’d just like to know how you would analyse a very ambiguous situation to determine S. I think I understand what you’re looking for but I don’t quite get how you’d do it. What is a ‘positive, observable reason’?

    I’m asking because of the known arguments over other physical phenomena whose design is disputed. I’d like to be very, very clear as to what your S criteria are.

    PS: It has not escaped my notice that you are busily discussing a hypothetical possible ET signal — which, unless something like CW or SSB AM or phase mod or FM or the like or a spread-spectrum pattern were detected, would have S stay at its default 0 — while studiously side-stepping a well-known case of coded, algorithmically functional and highly specific information in copious quantities in the cells of our own bodies. Note the OP.

    It was just an example. I can think of others.

    I agree with what I think is your central sentiment: unless it’s very, very clear we cannot infer design.

    It does sound like S is your central design detection criterion. I was hoping there’d be more ‘math’ behind it.

  128. 128
    Joe says:

    Piotr:

    They detect artifacts by comparing them with known patterns of human activity…

    I.e., observed FSCO/I. However, that isn’t the only ingredient, as it must also not correlate to known patterns of nature’s activity.

  129. 129
    kairosfocus says:

    Piotr, archaeologists are investigating things based on functional specificity and complex functional organisation, from an arrowhead shaped by flintknapping to the remains of a wall in the ground. That is how they, and we for that matter, routinely recognise artifacts. And the term is first descriptive and secondarily quantified, starting with the chain of y/n q’s approach — cf. Orgel in the OP. Where, the underlying phenomenon is widely familiar from a world of artifacts. The point being that, reliably, a blind chance and necessity search of a sufficiently large config space will not plausibly find islands of function. KF

  130. 130
    Joe says:

    Consider pulsars – stellar objects that flash light and radio waves into space with impressive regularity. Pulsars were briefly tagged with the moniker LGM (Little Green Men) upon their discovery in 1967. Of course, these little men didn’t have much to say. Regular pulses don’t convey any information–no more than the ticking of a clock. But the real kicker is something else: inefficiency. Pulsars flash over the entire spectrum. No matter where you tune your radio telescope, the pulsar can be heard. That’s bad design, because if the pulses were intended to convey some sort of message, it would be enormously more efficient (in terms of energy costs) to confine the signal to a very narrow band. Even the most efficient natural radio emitters, interstellar clouds of gas known as masers, are profligate. Their steady signals splash over hundreds of times more radio band than the type of transmissions sought by SETI.- Seth Shostak, SETI Institute

    Have you ever dealt with 2-way radio communication, Jerad? Those of us who have know what to look for, i.e. a specific set of criteria, in order to even think about an intelligent transmission.
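
    As a rough illustration of that narrow-band screening idea, one could estimate how concentrated a received signal’s power is around its strongest frequency (a toy sketch using NumPy; the 1 kHz band and the reading of the result are assumptions for illustration):

        import numpy as np

        def peak_band_power_fraction(samples, rate_hz, band_hz=1000.0):
            # Fraction of total power lying within band_hz of the strongest frequency.
            spectrum = np.abs(np.fft.rfft(samples)) ** 2
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
            peak = freqs[np.argmax(spectrum)]
            in_band = spectrum[np.abs(freqs - peak) <= band_hz / 2].sum()
            return in_band / spectrum.sum()

    A value near 1.0 flags a narrow-band candidate worth closer scrutiny; a pulsar-like broadband splash scores low.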

  131. 131
    kairosfocus says:

    5th, in practical terms, you first observe functional specificity, in some form. This varies from an arrow head to coded info to a large precisely cuboidal monolith to the coded sequences in a protein that fold [meta-]stably to functional form. KF

  132. 132
    Joe says:

    Cause and effect relationships- That is how archaeologists do it. That is how forensic science does it. That is how SETI does it. And that is how Intelligent Design does it.

    FSCO/I is a strong indicator of design as the only known cause of FSCO/I is via intelligent agencies. If there is one and only one cause of something when it is observed you know what caused it. It is a one-to-one correspondence.

  133. 133
    Dionisio says:

    OT: Example of sloppy speculative ‘scientific’ methods assumed as ‘serious’ science until the bad practice was made public by a recent paper:

    http://www.uncommondescent.com.....ent-546525

  134. 134
    kairosfocus says:

    Jerad, it has already been noted — for years — that unless there is positive reason on good observational warrant regarding functional specificity, S retains its default, i.e. 0. A false-negative ruling is acceptable as the price for very high confidence in holding something functionally specific. To give a further instance: if one holds that the huge quantities of coded info in D/RNA etc. could be produced by blind chance and necessity, so that a design inference on FSCO/I is unwarranted, consistency would require that even a long apparent message from space in a language would have to be regarded as suspect. But you and I both know that such a signal would be headlined as proof of ETs. KF

  135. 135
    kairosfocus says:

    5th, the relevant and most objectively identifiable form of specification is functional; on long experience other forms will be debated to death in a cloud of obfuscation, but it is hard — and revealing — to deny something that is carrying out interactive function based on arrangement and coupling of parts. This is also the case most relevant to the world of life and cosmology. It is broad enough to enfold the polygons in a wireframe mesh for a sculpture, or the coded strings in computer memory, or R/DNA, or the arrangement of parts in a fishing reel. KF

  136. 136
    fifthmonarchyman says:

    Hey Piotr,

    Thanks for the link I had not read of this find before.

    I think the similarity to a flute would provide the S in KF’s schema. The controversy seems to be over the probability of a carnivore producing a similar pattern.

    Suppose we found a similar bone with 8 perfectly evenly spaced, in-line holes that was not consistent with the diatonic scale. I think I would still infer design, despite not knowing the artifact’s function.

    Doubling the holes would eliminate any question of carnivore gnawing, and the in-line, evenly spaced holes would provide a specification, in my view.

    Just my opinion

    peace

  137. 137
    kairosfocus says:

    F/N: Astute readers will observe that there has been no clear admission by objectors above that even so clear a case as a fishing reel exhibits functionally specific complex organisation and associated information. That is revealing on what is at stake for them. KF

  138. 138
    kairosfocus says:

    F/N: I find the non-engagement of central facts, issues and concerns in the OP, especially by objectors, inadvertently highly revealing. In particular, there is need to acknowledge the simple fact that FSCO/I is real and may readily be both demonstrated empirically and also reduced to a metric of information by use of the structured string of y/n descriptive questions and extensions, in one form or another. Likewise there is need to face the implications of config-based interactive function with high specificity (which is observable by noting effects of perturbation or variability of components and configs) — islands of function in config spaces. Thence, beyond a reasonable complexity threshold, comes the intractability of blind chance and necessity driven needle-in-haystack search. KF

  139. 139
    Jerad says:

    KF #135

    it has already been noted — for years — that unless there is positive reason on good observational warrant regarding functional specificity, S retains its default, i.e. 0. A false-negative ruling is acceptable as the price for very high confidence in holding something functionally specific.

    Yes, I understand that. What I don’t completely understand is: what are the criteria for deciding that S can be changed to 1? What are good, observationally warranted reasons? So I’m asking about some hard-to-decide situations. I’m not arguing about fishing reels.

    But I don’t think I’m going to get any better answer than you’ve already given so feel free to drop the topic.

  140. 140
    Joe says:

    Jerad, Read Dembski’s 2005 paper on Specification.

  141. 141
    kairosfocus says:

    Jerad,

    the fishing reel is a clear paradigm example of FSCO/I, and the exploded view diagram above shows how function arises from specific arrangement and coupling of parts. Similarly, a text string *-*-*- . . . -*, that expresses a description or algorithm in coded form depends on placement and interaction of components.

    High specificity of function is seen from sensitivity to variability of parts and/or arrangements, whether natural to the situation or via injected perturbations. Fishing reels lose interchangeability if precision of parts slips a bit, and are very sensitive to orientation, placement, coupling and presence/absence of key parts. Program code, apart from in places like comments, tends to be very sensitive to perturbation.

    Where of course informationally a 3-d nodes-arcs pattern is reducible to a structured descriptive string. So, discussion on strings is WLOG.

    Extend by reasonably close family resemblance.

    The question of borderline cases is very simple to address: if there is a reasonable doubt that the function under observation is configuration sensitive, the default is, regard it as not sensitive.

    In the expression, S = 0 is default and holds benefit of the doubt.

    An erroneous ruling of ‘not specific’ due to doubt is acceptable once there are responsible grounds.

    And, function is observable as is sensitivity to perturbation etc.

    So, not an issue.

    If, say, an SSB AM signal is detected from remote space and is from an unknown source, where we have the equivalent of over 1 kbit of information — a cosmic source — we may reasonably infer design.

    Which is exactly what would be headlined.

    But if we have fairly broadband bursts with no observed definitive signal or carrier, there is no basis to infer functional specificity.

    KF

  142. 142
    fifthmonarchyman says:

    KF said

    High specificity of function is seen from sensitivity to variability of parts and/or arrangements

    I say,

    I think sensitivity to variability is the key to all specification whether we are talking about function or not.

    I would say that specification is deeply related to lossless data compression and Irreducible Complexity.

    Returning to number strings for just a second

    look at this string

    3.1415….

    Pi would be the specification/lossless compression.

    If even one digit were to change, we would have no lossless way to compress the string and S would default to zero.

    Now look again at KF’s fishing reel. If only a few parts were to change it would no longer function, so it could not be losslessly compressed as a functioning “mechanism” and S would default to zero.

    A specification that captured more of the essence of the reel would be something like “Ambassadeur 6500”.

    This compression does not contradict the first one, “functioning mechanism”, but only moves up a step in descriptive knowledge of the artifact. That is the Y axis I sometimes talk about.

    As we know more about an object our specification becomes more stringent and sensitivity to variability increases, so that less complexity is required to infer design.

    That is the way I see it anyway.

    Peace

  143. 143
    kairosfocus says:

    F/N: Added a few exemplars of FSCO/I-rich systems, to help rivet the sheer empirical reality. KF

  144. 144
    Jerad says:

    Joe #141

    Jerad, Read Dembski’s 2005 paper on Specification.

    I have read it. Dr Dembski does not have S in his formulation. I’m asking KF about his metric.

    KF #142

    the fishing reel is a clear paradigm example of FSCO/I, and the exploded view diagram above shows how function arises from specific arrangement and coupling of parts. Similarly, a text string *-*-*- . . . -*, that expresses a description or algorithm in coded form depends on placement and interaction of components.

    Fine but I didn’t ask about a fishing reel.

    High specificity of function is seen from sensitivity to variability of parts and/or arrangements, whether natural to the situation or via injected perturbations. Fishing reels lose interchangeability if precision of parts slips a bit, and are very sensitive to orientation, placement, coupling and presence/absence of key parts. Program code, apart from in places like comments, tends to be very sensitive to perturbation.

    I asked how you would go about determining S for a sequence of numbers.

    Where of course informationally a 3-d nodes-arcs pattern is reducible to a structured descriptive string. So, discussion on strings is WLOG.

    Extend by reasonably close family resemblance.

    Is this pertinent to my question?

    The question of borderline cases is very simple to address: if there is a reasonable doubt that the function under observation is configuration sensitive, the default is, regard it as not sensitive.

    What is reasonable doubt? For example: biologists say there is quite reasonable doubt that DNA was designed, whereas you disagree. Which is why I’m asking for your criteria when determining S. I’d like to know what kinds of analysis you would bring to bear.

    In the expression, S = 0 is default and holds benefit of the doubt.

    An erroneous ruling of ‘not specific’ due to doubt is acceptable once there are responsible grounds.

    And, function is observable as is sensitivity to perturbation etc.

    So, not an issue.

    Except that I still don’t know what kind of analytic tools you would use in a given situation.

    If, say, an SSB AM signal is detected from remote space and is from an unknown source, where we have the equivalent of over 1 kbit of information — a cosmic source — we may reasonably infer design.

    Which is exactly what would be headlined.

    That’s it, you’d just set S = 1 in that case? Why would that kind of signal be so indicative?

    But if we have fairly broadband bursts with no observed definitive signal or carrier, there is no basis to infer functional specificity.

    So, you’re saying the narrowness of the frequencies observed is part of your criteria?

  145. 145
    Joe says:

    OK so Jerad couldn’t understand Dembski’s paper. Par for the course, that

  146. 146
    Jerad says:

    Joe #146

    OK so Jerad couldn’t understand Dembski’s paper. Par for the course, that

    Show me where in Dr Dembski’s paper he uses KF’s S.

  147. 147
    Joe says:

    So S isn’t Specification? Really?

    GEM of TKI — is your S specification?

  148. 148
    kairosfocus says:

    Joe, of course S is a dummy variable that takes value 1 for specification [with a particular emphasis on the relevant type, functional specificity of interactive configurations to achieve said observable function], as is shown in the OP just above the equation, for quick reference. KF

  149. 149
    kairosfocus says:

    Jerad, for a sequence of numbers in general, S defaults to 0. Where there is a context for the numbers that locks them to a cluster of functionally defined possible values, e.g. the numbers are a bit string giving a system and config description, then on good reason and evidence it would become 1. Then the string of structured numbers would provide an information metric, and if this exceeds the relevant limit of 500 – 1,000 bits, the conclusion will be that the best explanation of said thing, or at least the relevant aspect of it, is design. In the case of RF reception, if we have something like SSB AM or phase mod or the like, then we have a basis for inferring design; but absent the patterns that give us functional specificity, we default to 0. As one result, absent a relevant key to detect say a spread-spectrum signal, the default would be to treat it as what it appears to be: noise. First get your function, then see specificity based on particular configs of parts, then note whether such complexity is a reasonable result from blind chance and mechanical necessity [–> what the threshold part does]; if beyond the threshold, we have FSCO/I best explained on design. KF
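
    A toy rendering of that decision flow may help (a minimal sketch, assuming the simple product-minus-threshold form of the expression discussed in this thread; the names and messages are illustrative only):

        def design_inference(info_bits, S, threshold_bits=500):
            # info_bits: information metric, e.g. length of the chain of y/n questions.
            # S: dummy variable, 1 only on positive warrant of functional specificity.
            X = info_bits * S - threshold_bits
            if S == 0:
                return X, "S defaults to 0: no design inference"
            if X > 0:
                return X, "functionally specific and beyond threshold: design best explains"
            return X, "specific but below threshold: chance and necessity not ruled out"

    On this sketch, design_inference(1000, 1) comes out positive, while a random string of any length keeps S = 0 and never triggers the inference.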

  150. 150
    kairosfocus says:

    5th, if we had a black box capable of outputting a stream of bits which at first seem random, and suddenly we see the string of bits for successive binary digits of pi, keeping up beyond 500 – 1,000 digits in succession, we would have excellent reason to infer design and transmission of an intelligent signal. Of course, it is easy to miss that one is seeing digits of pi, as pi is transcendental and successive digits have no neat correlation to the value so that we get what looks pretty random. KF

    PS: If we are seeing the sort of resistance to a patent case of FSCO/I as is shown by a 6500 reel, that speaks volumes on what more abstract concepts of specification would face.
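
    The pi-stream scenario above is easy to sketch: compare the incoming digit stream against a stored reference expansion (here decimal rather than binary, purely for illustration; the reference would be extended to the length needed):

        # First 60 decimal digits of pi after the point, as a reference.
        PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

        def longest_pi_match(stream):
            # Count how many leading digits of the stream track pi's expansion.
            n = 0
            for got, want in zip(stream, PI_DIGITS):
                if got != want:
                    break
                n += 1
            return n

    A run of matches far beyond what chance allows (hundreds of digits and counting) would be the positive, observable warrant that flips S to 1.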

  151. 151
    kairosfocus says:

    Jerad, I have given concrete cases, stated that one first identifies function and that it is dependent on configs of parts, and then we look at perturbing the pattern and at the limits of clusters of functional configs. Case after case has been given to show that we are dealing with something based on empirical investigation, to make the matter directly relevant to the real-world key cases. You tossed out stuff on strings of numbers and I set them in context. You talked about radio signals and I put them in context. At this point it looks uncommonly like you do not see because you are imposing something that blocks addressing what is patently and plainly there, starting with the paradigm case that you repeatedly dismiss: a fishing reel. That reel, and its dependence on specific configs of correct parts to work, is a demonstration of the reality of FSCO/I. The specificity that you are making a mountain out of a molehill over is patent from what happens if you put it together badly or get sand in it etc.: it won’t work. The function is obvious. The informational complexity comes from the nodes-arcs and structured strings of y/n q’s approach, and just for the main gear we are well past 125 bytes. Try to understand the simple and obvious case, then let that guide you on more complicated ones. If you cannot figure this out from a diagram, go pay a tackle shop a visit. I suspect a lot more is riding on this sort of approach than you realise; but at minimum, you have been a significant objector presence in and around UD for years and years. At the least, try to understand a simple case of what we have been talking about. KF

  152. 152
    fifthmonarchyman says:

    KF says,

    Of course, it is easy to miss that one is seeing digits of pi, as pi is transcendental and successive digits have no neat correlation to the value so that we get what looks pretty random.

    I say,

    I agree . . . a little digression, if I may.

    There seem to me to be 3 kinds of number sequences:

    1) Rational numbers that can be ascribed to normal halting algorithms.

    2) Irrational numbers that appear random until you know the specification they represent.

    3) Schizophrenic numbers that split the difference between the other two. http://en.wikipedia.org/wiki/Schizophrenic_number

    I think that sequences representing schizophrenic numbers are the ones most likely to cause us to infer design. The presence of patterns with no algorithmic explanation seems to draw us to that conclusion.

    You say,

    If we are seeing the sort of resistance to a patent case of FSCO/I as is shown by a 6500 reel, that speaks volumes on what more abstract concepts of specification would face.

    I say,

    I have long since given up hope that the other side can be fair here. They just have too much at stake. I think we should explore this stuff without them if necessary.

    I am happy if they understand what I am saying. Agreeing with my conclusions is probably too much to ask.

    peace

    PS excellent thread Thank you

  153. 153
    kairosfocus says:

    5th,

    Thanks for thoughts.

    I feel a bit like someone trying to draw focussed attention to Newton’s three laws of motion in a world where somehow, an empirically based ABC level inductive generalisation has suddenly become politically utterly incorrect.

    If you want to look more broadly at specification, try Dembski’s paper here:

    http://designinference.com/doc.....cation.pdf

    If you look here, you will see how I used a log reduction and a heuristic in the context of an exchange with May’s MathGrrrl persona that had significant inputs from VJT and Dr Giem.

    I note that many objectors utterly refuse to acknowledge that a log probability is an information metric, so that in an empirically oriented context one may transfer to the dual form and then look at more direct empirical metrics. Where, the underlying point is that, had the world of life proceeded by blind chance and mechanical necessity, the patterns sampled across the history and diversity of life would constitute a valid sampling of what is empirically possible. Where, just from the clustering of groups of functional proteins in AA sequence space, it is evident that deeply isolated islands of function are very real.

    But however I got there (and earlier I used other metrics that do the job but do not tie in with Dembski’s work), the expression above stands on its own merits.

    And it is glorified common sense.

    Information may be measured in several ways, starting with the premise that to store significant info one needs high contingency in an entity, leading to a configuration space of possibilities. So, one basic metric is the chain length of y/n q’s needed to specify a particular state. And yes, this ties to Kolmogorov, but is even older than that, cropping up in Shannon’s famous 1948 paper.

    In a communicative context, that can lead to giving info values to essentially random states, i.e. noise can appear in a comms system; think snow and frying noise on an old-fashioned analogue TV. So there is a premise of distinguishing signal and noise on characteristics that are empirically observable enough to define a signal-to-noise ratio, a key quality metric. And as section A of my always-linked note from comments I make at UD (click on my handle) points out, yes, there is thus a design inference at the heart of communications theory.

    It is recognising that and thinking about linked thermodynamics issues — I am an applied physicist who worked with electronics and education before ending up in sustainability oriented policy matters . . . — that led me to see the value of the design inference in the first place.

    So, in a context where many configs of parts are possible but only a relative few will carry out a specific function depending on a particular configuration of parts per a wiring diagram, it makes sense to use an old modelling trick from economics to define being in/out of observable circumstances. (One use of the binary-state dummy variable is to tell whether you are in/out of a war in an economic series.)

    So, we take the info metric value I and multiply by a dummy variable S tied to observable functionality based on wiring diagram organisation. Hence the 6500 C3 reel as example . . . BTW, there is a whole family of 6500s out there, a case of tech evolution by design. And of course 3-d wiring diagrams reduce to descriptive strings. Discussion on strings is WLOG.

    Using the Trevors–Abel sequence categories:

    a: a random sequence may have a high I value but will have S at default, 0.

    b: An orderly sequence driven by say crystallisation or simple polymerisation or the equivalent will have high specificity but with little contingency its I value will be low.

    c: A sequence that is functionally specific will have both S = 1 per observation, and I potentially quite high.

    The question is: at what threshold is one unlikely to achieve state c by blind chance and mechanical necessity? The answer is to use the islands-of-function-in-a-large-config-space implication of wiring diagram interactive function to identify when needles will be too deeply isolated in large haystacks. 500 – 1,000 bits of complexity works for solar system to cosmos scale resources. And these are quite conservative.

    The first pivots on the idea that the atomic resources of the solar system, searching at a fast chemical reaction rate for a generous lifespan estimate, will only be able to do the equivalent of plucking one straw from a cubical haystack comparable in thickness to our galaxy. And for 1,000 bits the comparable haystack would swallow up the observable cosmos.
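
    The arithmetic behind that picture is easy to check (rough order-of-magnitude inputs; all three values are assumptions for illustration):

        from math import log10

        atoms_in_sol_system = 1e57   # rough atom count (assumption)
        fast_rxn_per_sec = 1e14      # fast chemical reaction rate per atom (assumption)
        seconds_available = 1e17     # generous multi-billion-year span (assumption)

        samples = atoms_in_sol_system * fast_rxn_per_sec * seconds_available  # ~1e88 tries
        space_500_bits = 2.0 ** 500                                           # ~3.3e150 configs

        print(f"fraction of space sampled: about 10^{log10(samples / space_500_bits):.0f}")

    Roughly 10^-63 of the 500-bit space gets sampled, which is the one-straw-from-a-galactic-haystack picture.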

    The 6500 C3 is a useful start point, giving a familiar relatively simple case that is accessible.

    It shows that FSCO/I is real, that wiring diagram organisation is real, and that the equivalent of trying to assemble a fishing reel by shaking up its parts in a bucket is not a reasonable approach.

    The question is whether such extends to the world of life.

    Orgel and Wicken answer yes, as the OP cites . . . notice not one objector above has tried to challenge that.

    A look at protein synthesis already gives several direct cases with emphasis on D/RNA and proteins.

    Where, as these are strings, we have direct applicability of sequence complexity and information metrics. RNA is a control tape for protein-assembling ribosomes (much more complex cases in point!), and proteins must fold stably and do some work in or for the cell.

    A typical 300 AA protein has a raw info capacity of 4.32 bits per AA, and the study of patterns of variability in the world of life, per Durston et al 2007, gives the sort of values reported in the OP. If you want a much cruder metric at OOL, try hydrophilic vs hydrophobic at 1 bit per AA and 100 proteins as a simplistic number. That’s 300 bits per protein × 100 proteins, or 30,000 bits.
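
    Those numbers are straightforward to reproduce (a quick check, using log2(20) for the 20 amino acids):

        from math import log2

        bits_per_aa = log2(20)               # ~4.32 bits raw capacity per amino acid
        typical_protein = 300 * bits_per_aa  # ~1,297 bits for a 300-AA protein

        crude_ool = 300 * 1 * 100            # 1 bit/AA (phil vs phob) x 100 proteins
        print(round(typical_protein), crude_ool)  # -> 1297 30000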

    The message is clear: FSCO/I, as a reasonable, empirically grounded, needle-in-haystack-search-backed criterion for inferring design, implies that the living cell is a strong candidate for design in nature.

    Similar analyses of the physics of a cosmos suited for C-chemistry, aqueous-medium, cell-based life strongly point to our cosmos sitting at a locally deeply isolated operating point, even through multiverse speculations — cf. discussion and links in the OP.

    That is, we see cell based life in a cosmos evidently fine tuned for it.

    That points strongly to design of a cosmos in order to facilitate such life.

    Design sits at the table of scientific discussion from origin of cosmos to origin of life and body plans to our own origin as of right, not sufferance.

    But, that is extremely politically incorrect in our day of a priori imposed evolutionary materialist scientism.

    No wonder there are ever so many attempts to expel design from that table.

    I stand confident that in the end common-sense rationality, inductive logic and the needle-in-haystack challenge will prevail.

    Design just makes sense.

    If you doubt me, go take a searching look at the Abu-Garcia 6500 C3 Ambassadeur reel.

    KF

  154. 154

    What’s really odd is that even while science has found at both ends of the spectrum (cosmic & microscopic/subatomic) exponentially increasing gaps in the materialist explanatory account, those materialists still insist that science has been on a trajectory of closing those gaps.

    Cosmology, biology and quantum physics have long since shredded the materialist explanation. Materialists are the true Victorian-age, anti-science cult today. One might as well be a flat-earther as to be a materialist in the light of evidence that’s been around for decades and is growing every day.

  155. 155
    Axel says:

    Below is a link to an interesting article from the online Catholic Exchange by Dr Donald DeMarco, entitled The Half-Truths of Materialist Design.

    I particularly like the neat formulation at the end of the piece, pointing to the fundamental truth of the harmony of religion and science:

    ‘The notion of intelligent design is the logical complement of scientific research.’

    Read more: http://www.ncregister.com/dail.....z3R4Bjvuks

  156. 156
    kairosfocus says:

    Folks, notice the distinct lack of interest by objectors across the board (with one or two exceptions as seen above) in addressing the core issue of functionally specific, complex organisation and associated information, FSCO/I? And, its implications per empirically reliable, tested induction and the needle in haystack search challenge? That may be telling us something on the actual reality and relevance of FSCO/I, and where it points. KF

    PS: Rest assured, if there were obvious, major holes in recognising the reality and relevance of FSCO/I, objectors would be piling on to pound away.

  157. 157
    kairosfocus says:

    AS,

    kindly see the above OP — which you need to actually address on the merits and specifics.

    Note, FSCO/I is a description (in effect, first put on the table by Orgel and Wicken in the 1970s to address defining characteristics observed in life forms that strongly parallel patterns familiar from the world of technology; cf. Wicken’s “wiring diagram” in the OP) based on our participation in a world of systems where function depends on the particular interaction of correct, correctly oriented and coupled parts that will work effectively in only a very few of the possible configs of said parts; with trillions of cases all around you.

    Indeed, to object, you just composed a string of glyphs that, to function as text in English, had to conform to many specific rules and conventions, with a very strict node-arc-node pattern: *-*-*- . . . -*.

    Your rhetorical dismissal, which actually exemplifies FSCO/I in the attempt to brush it aside, in the teeth of identification by concrete example, description of same, application to the world of life, and quantification i/l/o (say) Orgel’s approach laid out in 1973, is an inadvertent example of the point just underscored.

    KF

  158. 158
    kairosfocus says:

    F/N: Let me clip for convenient reference from the OP, citing Orgel, Wicken and Hoyle . . . the OP has highlights, onward links etc:

    A good step to help us see why is to consult Leslie Orgel in a pivotal 1973 observation:

    . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

    These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure. [–> this is of course equivalent to the string of yes/no questions required to specify the relevant “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002, also cf here, here and here (with here on self-moved agents as designing causes).] One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions. [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes.

    [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196. Of course,

    a –> that immediately highlights OOL, where the required self-replicating entity is part of what has to be explained (cf. Paley here), a notorious conundrum for advocates of evolutionary materialism; one that has led to the mutual ruin, documented by Shapiro and Orgel, of the metabolism-first and genes-first schools of thought, cf here.

    b –> Behe would go on to point out that irreducibly complex structures are not credibly formed by incremental evolutionary processes and Menuge et al would bring up serious issues for the suggested exaptation alternative, cf. his challenges C1 – 5 in the just linked. Finally,

    c –> Dembski highlights that CSI comes in deeply isolated islands T in much larger configuration spaces W; for biological systems, functional islands. That raises serious questions for the origin of dozens of body plans, which reasonably require some 10 – 100+ mn bases of fresh genetic information to account for cell types, tissues, organs and multiple coherently integrated systems. Wicken’s remarks a few years later, as already cited, now take on fuller force in light of the further points from Orgel at pp. 190 and 196 . . . ]

    . . . and J S Wicken in a 1979 remark:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353 of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)]

    . . . then also this from Sir Fred Hoyle:

    Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [Evolution from Space (The Omni Lecture [–> Jan 12th 1982]), Enslow Publishers, 1982, p. 28.]

    Why then, the resistance to such an inference? . . .

    That, of course, is the pivotal question.

    KF
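
    PS: Orgel’s “minimum number of instructions” can be crudely illustrated in code. A toy sketch (using off-the-shelf zlib compression as a rough stand-in for description length; an illustration only, not Orgel’s own procedure):

```python
import random
import string
import zlib

# Toy sketch: use zlib's compressed size as a crude proxy for
# Orgel's "minimum number of instructions needed to specify the
# structure". (An illustration only; Orgel did not use zlib.)
random.seed(1)  # fixed seed so the output is repeatable

N = 1000
ordered = "AB" * (N // 2)  # crystal-like: one simple rule, looped
random_str = "".join(random.choice(string.printable) for _ in range(N))

for name, s in (("ordered", ordered), ("random", random_str)):
    c = len(zlib.compress(s.encode()))
    print(f"{name:8s} raw = {len(s)} bytes, compressed ~ {c} bytes")
```

    The ordered string collapses to a few dozen bytes (“do once and repeat over and over in a loop”), while the random one barely compresses at all. What no compressor can read off is Orgel’s third category: whether a complex string is also functionally specified, i.e. whether it works as text, code or a wiring diagram, is an independent question about function.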

  159. 159
    kairosfocus says:

    F/N: Of course, as a matter of oft-overlooked science — think of the gap Newton bridged between an apple falling from a tree and the Moon swinging by in orbit* — spotlighting the significance of FSCO/I (and especially digitally coded functionally specific complex information, dFSCI, as GP stresses) builds a crucial bridge between the world of technology and the nanotech of cell-based life. That is, FSCO/I is the decisive scientific point in the whole controversy over inferring design on empirical signs. Which gives the above pattern of reactions by objectors quite a telling colour. KF

    *PS: That connexion between two everyday phenomena was the pivot on which Newton conceived his universal law of gravitation; cf. discussion here with context on doing science:

    http://iose-gen.blogspot.com/2.....earch.html

    PPS: Let’s add a vid of protein synthesis to the OP . . .

  160. 160
    kairosfocus says:

    5th, schizophrenic numbers are really unusual, oddball irrationals whose early digits act as if the number were rational, until the other side swamps the pattern out. Sounds familiar! KF
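
    PS: For the curious, a minimal sketch of how such a number is generated (following the usual definition: f(0) = 0, f(n) = 10*f(n-1) + n, with the square root of f(n) taken for odd n; the index 49 below is just a convenient choice):

```python
from decimal import Decimal, getcontext

# Sketch of a "schizophrenic number": with f(0) = 0 and
# f(n) = 10*f(n-1) + n, sqrt(f(n)) for odd n shows long runs of
# repeating digits interleaved with irregular-looking ones.
getcontext().prec = 80  # carry 80 significant digits

def f(n: int) -> int:
    """Recurrence f(n) = 10*f(n-1) + n, e.g. f(5) = 12345."""
    val = 0
    for k in range(1, n + 1):
        val = 10 * val + k
    return val

print(Decimal(f(49)).sqrt())
# The digits open with a long run of 1s, then an irregular burst,
# then a run of 5s, and so on: "rational-looking" stretches that
# eventually dissolve into irrational-looking noise.
```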

  161. 161
    Dionisio says:

    KF,

    FYI

    Posts #194 & #199 in this discussion thread that GP started as per your suggestion:

    http://www.uncommondescent.com.....ent-547368

    http://www.uncommondescent.com.....ent-547393

  162. 162
    kairosfocus says:

    D, simple vs complex. G

  163. 163
    Swamidass says:

    You will probably be interested in the scientific paper I published looking at Durston’s FI argument.

    http://www.biorxiv.org/content...../06/114132

    One important point to make is that Durston is a great guy. I appreciate his contributions, and this is not an attack on him personally.

    Feel free to find me on the BioLogos forums if you want to discuss more. Interesting stuff!

  164. 164
    Origenes says:

    Swami: Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins.

    Even if this were true, a new functional protein needs to fit perfectly in order to be truly functional — right amount, right location. Not any old function is ‘functional’ for the organism. There has to be a need for it; otherwise it is most likely to be detrimental.

  165. 165
    kairosfocus says:

    Origenes, precisely: correct folding and fitting are required, leading to tight constraints on acceptable proteins in AA sequence space. Indeed, we should not overlook the fold domains that have just one or a few members, which are deeply isolated in the space of possible AA sequences. Nor should we forget the implication of prions, namely that more stable mis-folds seem to be possible; i.e. some proteins are metastable structures. KF

    PS: I am puzzled why there was a comment in a two-year-old thread.
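
    PPS: For readers following the Durston exchange above, the underlying measure is the Szostak/Hazen functional information, FI = -log2(M/N), where M is the number of sequences that perform the function and N is the size of the whole sequence space; Durston et al. estimate this from sequence alignments, which is precisely what the paper linked at 163 critiques. A minimal sketch, with purely illustrative numbers:

```python
import math

# Minimal sketch of the Szostak/Hazen functional information measure:
# FI = -log2(M / N), with M functional sequences out of N possible.
# All numbers below are illustrative assumptions, not measured values
# for any real protein family.

def functional_information(m_functional: float, n_total: float) -> float:
    """Functional information in bits for a functional fraction M/N."""
    return -math.log2(m_functional / n_total)

L = 100           # assumed protein length in residues
N = 20.0 ** L     # total amino-acid sequence space, 20^L
for M in (1e20, 1e40, 1e60):  # assumed counts of functional sequences
    print(f"M = {M:.0e} -> FI = {functional_information(M, N):.1f} bits")
```

    The point in dispute is the size of M: alignment-based estimates count only observed sequences, so if many unobserved sequences would also function, M is undercounted and FI overestimated (the paper’s critique), while the reply above is that folding and fitting constraints keep M small in any case.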
