
Just what is the CSI/ FSCO/I concept trying to say to us?


When I was maybe five or six years old, my mother (a distinguished teacher) said to me about problem solving, more or less: if you can draw a picture of a problem-situation, you can understand it well enough to solve it.

Over the many years since, that has served me well.

Where, after so many months of debates over FSCO/I and/or CSI, I think many of us may well be losing sight of the fundamental point in the midst of the fog that is almost inevitably created by vexed and complex rhetorical exchanges.

So, here is my initial attempt at a picture — an info-graphic really — of what the Complex Specified Information [CSI] – Functionally Specific Complex Organisation and/or Information [FSCO/I] concept is saying, in light of the needle in haystack blind search/sample challenge; based on Dembski’s remarks in No Free Lunch, p. 144:

Fig: The CSI/FSCO/I definition infographic (csi_defn), based on Dembski’s remarks in No Free Lunch, p. 144.

Of course, Dembski was building on earlier remarks and suggestions, such as these by Orgel (1973) and Wicken (1979):

ORGEL, 1973:  . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in. Observe also, the idea roots of the summary terms specified complexity and/or complex specified information (CSI) and functionally specific complex organisation and/or associated information, FSCO/I.)]

Where, we may illustrate a nodes + arcs wiring diagram with an exploded view (forgive my indulging in a pic of a classic fishing reel):

Fig 6: An exploded view of a classic ABU Cardinal, showing how functionality arises from a highly specific, tightly constrained complex arrangement of matched parts according to a “wiring diagram.” Such diagrams are objective (the FSCO/I on display here is certainly not “question-begging,” as some — astonishingly — are trying to suggest!), and if one is to build or fix such a reel successfully, s/he had better pay close heed. Taking one of these apart and shaking it in a shoe-box is guaranteed not to work to get the reel back together again. (That is, even the assembly of such a complex entity is functionally specific and prescriptive information-rich.)

That is, the issue pivots on being able to specify an island of function T containing the observed case E and its neighbours, or the like, in a wider sea of possible but overwhelmingly non-functional configurations [OMEGA], then challenging the atomic and temporal resources of a relevant system — our solar system or the observed cosmos — to find it via blind, needle in haystack search.

The proverbial needle in the haystack

In the case of our solar system of some 10^57 atoms, which we may generously give 10^17 s of lifespan and assign actions at the fastest chemical reaction times, ~10^-14 s, we can see that if we were to give each atom a tray of 500 fair H/T coins, and toss and examine the 10^57 trays every 10^-14 s, we would blindly sample something like one straw to a cubical haystack 1,000 light years across, as a fraction of the 3.27 * 10^150 configurational possibilities for 500 bits.

Such a stack would be comparable in thickness to our galaxy at its central bulge.

Consequently, if we were to superpose our haystack on our galactic neighbourhood, and then were to take a blind sample, with all but absolute certainty, we would pick up a straw and nothing else. Far too much haystack vs the “needles” in it. And, the haystack for 1,000 bits would utterly swallow up our observed cosmos, relative to a straw-sized sample for a 10^80 atom, 10^17 s, once per 10^-14 s search. That is just to give a picture of the type of challenge we are facing.
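For those who want to check the back of envelope arithmetic, here is a minimal Python sketch; the atom count, lifespan and action rate are simply the generous round figures used above, not measured values:

atoms_solar_system = 1e57      # atoms acting as "searchers" (generous round figure from the post)
lifespan_s = 1e17              # seconds allotted to the search
actions_per_s = 1e14           # one observation per fastest chemical reaction time (~10^-14 s)

max_observations = atoms_solar_system * lifespan_s * actions_per_s   # 1e88 samples
config_space_500_bits = 2 ** 500                                     # ~3.27e150 configurations of 500 H/T coins

fraction_sampled = max_observations / config_space_500_bits
print(f"Observations possible:     {max_observations:.2e}")
print(f"Configurations (500 bits): {config_space_500_bits:.2e}")
print(f"Fraction of space sampled: {fraction_sampled:.2e}")          # ~3e-63, one "straw" to a cosmic haystack

That vanishingly small sampled fraction is the quantitative core of the one-straw-to-a-haystack picture.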

(Notice, I am here speaking to the challenge of blind sampling based on a small fraction of a space of possibilities, not a precise probability estimate. All we really need to see is that it is reasonable that such a search would reliably only capture the bulk of the distribution. To do so, we do not actually need odds of 1 in 10^150 for success of such a blind search; 1 in 10^60 or so, the back of envelope result for a 500-bit threshold, solar system scale search, is quite good enough. This is also closely related to the statistical mechanical basis for the second law of thermodynamics, in which the bulk cluster of microscopic distributions of matter and energy utterly dominates what we are likely to see, so the system tends to move strongly to and remain in that state-cluster unless otherwise constrained. And that is what gives teeth to Sewell’s point, which we may sum up: if something is extremely unlikely to spontaneously happen in an isolated system, it will remain extremely unlikely, save if something is happening . . . such as design . . . that makes it much more likely, when we open up the system.)

Or, as Wikipedia’s article on the Infinite Monkey theorem (which was referred to in an early article in the UD ID Foundations series) put much the same matter, echoing Emile Borel:

A monkey at the keyboard
A monkey at the keyboard

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare.

In this context, “almost surely” is a mathematical term with a precise meaning, and the “monkey” is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the “monkey metaphor” is that of French mathematician Émile Borel in 1913, but the earliest instance may be even earlier. The relevance of the theorem is questionable—the probability of a universe full of monkeys typing a complete work such as Shakespeare’s Hamlet is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero) . . . .

Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small they can barely be conceived in human terms. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.

Even if every atom in the observable universe were a monkey with a typewriter, typing from the Big Bang until the end of the universe, they would still need a ridiculously longer time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success.

The 130,000 letters of Hamlet can be directly compared to a genome, at 7 bits per ASCII character, i.e. 910 kbits, in the same sort of range as a genome for a “reasonable” first cell-based life form. That is, we here see how the FSCO/I issue puts a challenge before any account resting on blind chance and mechanical necessity. In effect, such is not a reasonable expectation: storage of information depends on high contingency [a necessary configuration will store little information], but blindly searching the relevant space of possibilities is then not practically feasible.
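A quick Python sketch of that comparison; the 130,000-letter count and 7 bits per ASCII character are the figures used above, and the monkey-at-a-keyboard odds reproduce the Wikipedia estimate:

from math import log10

hamlet_letters = 130_000
bits_per_ascii_char = 7
print(f"Hamlet as ASCII text: {hamlet_letters * bits_per_ascii_char:,} bits")    # 910,000 bits, i.e. ~910 kbits

# Odds of typing Hamlet's letters correctly at random (letters only, 26 keys):
exponent = hamlet_letters * log10(26)      # ~183,946.5
mantissa = 10 ** (exponent % 1)            # ~3.4
print(f"1 chance in about {mantissa:.1f} x 10^{int(exponent):,}")                # ~3.4 x 10^183,946

The point is simply that a genome-scale string sits in the same order-of-magnitude class as the Hamlet text, far beyond any feasible blind search.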

The same Wiki article goes on to acknowledge the futility of such searches once we face a sufficiently complex string-length:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[24]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

500 bits is about 72 ASCII characters, and since the configuration space doubles for each additional bit, the roughly 10^50 possibilities such simulations have managed to search successfully fall about a factor of 10^100 short of the CSI/ FSCO/I target thresholds.
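One way to reconstruct the figures in that paragraph (my reading, treating the 24 matched characters from the simulator as 7-bit ASCII) is the following Python sketch:

from math import log10

bits_per_ascii_char = 7
print(f"500 bits is roughly {500 / bits_per_ascii_char:.0f} ASCII characters")   # ~71-72

searched_exp  = 24 * bits_per_ascii_char * log10(2)   # best quoted result, 24 characters: ~10^50.6
threshold_exp = 500 * log10(2)                        # 500-bit threshold: ~10^150.5
print(f"Space searched: ~10^{searched_exp:.1f}; threshold space: ~10^{threshold_exp:.1f}")
print(f"Shortfall: a factor of about 10^{threshold_exp - searched_exp:.0f}")     # ~10^100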

Where also, of course, this is a case of an admission against notorious ideological interest on the part of Wikipedia.

But, what about Dawkins’ Weasel?

This was an intelligently targeted search that rewarded non-functional configurations for being an increment closer to the target phrase. That is, it inadvertently illustrated the power of intelligent design; though, it was — largely successfully — rhetorically presented as showing the opposite. (And, on closer inspection, Genetic Algorithm searches and the like turn out to be much the same, injecting a lot of active information that allows overwhelming the search challenge implied in the above. But the foresight that is implied is exactly what we cannot allow, and incremental hill-climbing is plainly WITHIN an island of function; it is not a good model of blindly searching for its shores.)

Another implicit claim is found in the Darwinist tree of life (here, I use a diagram that comes from the Smithsonian, under fair use):

The Smithsonian’s tree of life model, note the root in OOL

The tree reveals two things: an implicit claim that there is a smoothly incremental path from an original body plan to all others, and the missing root of the tree of life.

For the first, while that may be a requisite for Darwinist-type models to work, there is little or no good empirical evidence to back it up; and it is wise not to leave too many questions a-begging. In fact, it is easy to show that whereas maybe 100 – 1,000 kbits of genomic information may account for a first cell based life form, to get to basic body plans we are looking at 10 – 100+ mn bits each, dozens of times over.

Further to this, there is in fact only one actually observed cause of FSCO/I beyond that 500 – 1,000 bit threshold, design. Design by an intelligence. Which dovetails neatly with the implications of the needle in haystack blind search challenge. And, it meets the requisites of the vera causa test for causally explaining what we do not observe directly in light of causes uniquely known to be capable of causing the like effect.

So, perhaps, we need to listen again to the distinguished, Nobel-equivalent prize holding astrophysicist and lifelong agnostic — so much for “Creationists in cheap tuxedos” — Sir Fred Hoyle:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12C to the 7.12 MeV level in 16O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]

And again, in his famous Caltech talk:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. [–> ~ 10^80] This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect [–> this shows a clear and widely understood concept of intelligence] working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.

Noting also:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

No wonder, in that same period, the same distinguished scientist went on record on January 12th, 1982, in the Omni Lecture at the Royal Institution, London, entitled “Evolution from Space”:

The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [This appeared in a book of the same title, pp. 27-28. Emphases added.]

Perhaps, the time has come to think again. END

_________________

PS: Let me add an update June 28, by first highlighting the design inference explanatory filter, in the per aspect flowchart form I prefer to use:

Fig: The per-aspect design inference explanatory filter flowchart (explan_filter).

Here, we see that the design inference pivots on seeing a logical/contrastive relationship between three familiar classes of causal factors. For instance, natural regularities tracing to mechanical necessity (e.g. F = m*a, a form of Newton’s Second Law) give rise to low contingency outcomes. That is, reliably, a sufficiently similar initial state will lead to a closely similar outcome.

By contrast, there are circumstances where outcomes will vary significantly under quite similar initial conditions. For example, take a fair, common die and arrange to drop it repeatedly under very similar initial conditions. It will predictably not consistently land, tumble and settle with any particular face uppermost. Similarly, in a population of apparently similar radioactive atoms, there will be a stochastic pattern of decay that shows a chance based probability distribution tracing to a relevant decay constant. So, we speak of chance, randomness, sampling of populations of possible outcomes and even of probabilities.

But that is not the only form of high contingency outcome.

Design can also give rise to high contingency, e.g. in the production of text.

And, ever since Thaxton et al, 1984, in The Mystery of Life’s Origin, Ch 8, design thinkers have made text string contrasts that illustrate the three typical patterns:

1. [Class 1:] An ordered (periodic) and therefore specified arrangement:

THE END THE END THE END THE END

Example: Nylon, or a crystal . . . . 

2. [Class 2:] A complex (aperiodic) unspecified arrangement:

AGDCBFE GBCAFED ACEDFBG

Example: Random polymers (polypeptides).

3. [Class 3:] A complex (aperiodic) specified arrangement:

THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

Example: DNA, protein.

Of course, class 3 exhibits functionally specific, complex organisation and associated information, FSCO/I.

As the main post shows, this is an empirically reliable, analytically plausible sign of design. It is also one that in principle can quite easily be overthrown: show credible cases where FSCO/I beyond a reasonable threshold is observed to be produced by blind chance and/or mechanical necessity.

Absent that, we are epistemically entitled to note that per the vera causa test, it is reliably seen that design causes FSCO/I. So, it is a reliable sign of design, even as deer-tracks are reliable signs of deer:

A probable Mule Deer track, in mud, showing dew claws (HT: http://www.saguaro-juniper.com, deer page.)

Consequently, while it is in-principle possible for chance to toss up any outcome from a configuration space, we must reckon with available search resources and with whether feasible blind samples could reasonably be expected to catch needles in the haystack.

As a threshold, we can infer for solar system scale resources that, using:

Chi_500 = Ip*S – 500, bits beyond the solar system threshold,

we can be safely confident that if Chi_500 is at least 1, the FSCO/I observed is not a plausible product of blind chance and/or mechanical necessity. Where, Ip is a relevant information-content metric in bits, and S is a dummy variable that defaults to zero, save in cases of positive reason to accept that observed patterns are relevantly specific, coming from a zone T in the space of possibilities. If we have such reason, S switches to 1.

That is, the first default is that something is minimally informational, the result of mechanical necessity, which would show as a low Ip value. Next, it is default that chance accounts for high contingency, so that while there may be a high information-carrying capacity, the configurations observed do not come from T-zones.

Only when something is specific and highly informational (especially functionally specific) will Ip*S rise beyond the confident detection threshold that puts Chi_500 to at least 1.

And, if one wishes for a threshold relevant to the observed cosmos as scope of search resources, we can use 1,000 bits as threshold.
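For concreteness, here is a minimal Python sketch of the metric as just described; the thresholds and the S dummy variable are as defined above, and the example Ip values are hypothetical illustrations only:

def chi(ip_bits, s, threshold=500):
    # Chi = Ip*S - threshold, in bits beyond the threshold
    # ip_bits:   the information-content metric Ip, in bits
    # s:         dummy variable; 1 only on positive reason to see the pattern as specific (zone T), else 0
    # threshold: 500 bits (solar system scale) or 1,000 bits (observed cosmos scale)
    return ip_bits * s - threshold

print(chi(910_000, 1))                    # highly informational AND specific: >> 1, design inferred
print(chi(910_000, 0))                    # high carrying capacity, no specification: -500, no inference
print(chi(200, 1))                        # specific but not complex enough: -300, no inference
print(chi(910_000, 1, threshold=1000))    # the observed-cosmos threshold

Only the cases where S = 1 and Ip is large clear their thresholds, which is the point of the default settings described above.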

That is, the eqn summarises what the flowchart does.

And, the pivotal test is to find cases where the filter would vote designed, but we actually observe blind chance and mechanical necessity as credible cause. Actually observe . . . the remote past of origins or the like is not actually observed. We only observe traces which are often interpreted in certain ways.

But, the vera causa test does require that before using cause-factor X in explaining traces from the unobserved past, P, we should first verify in the present through observation that X can and does credibly produce materially similar effects, P’.

If this test is consistently applied, it will be evident that many features of the observed cosmos, especially the world of cell based life forms, exhibit FSCO/I in copious quantities and are best understood as designed.

IF . . .

Comments
Wind rustling through the coconut trees even as there's some slip-slidin awaaaaay on the pull a cosmos out of a non existent hat front. And de duppies leanin on de boneyard fence are looking at one another as one of them is about to say BOO! kairosfocus
Cockadoodle doo off in the distance, and it is now quite clear that the usual objectors have no interest in addressing pivotal matters. kairosfocus
Heavy equipment in the far background . . . kairosfocus
Weed whackers on an early morning run . . . kairosfocus
Wind swishing through coconut trees, with birds tweeting away. kairosfocus
Mung: An excellent discussion, here. Yes, the inference that one has received a message rather than noise is a design inference, as I argued long ago in my always linked note. If one doubts this, ponder the significance of signal to noise power ratio in a communication system. So, if one comes to UD and communicates on the understanding that one is seeing messages not lucky noise, one is in fact accepting the possibility of intelligent designers of messages and that what appears to be messages is so as noise is a vastly inferior and implausible explanation. This focuses the issue on origins: objectors typically disbelieve in the possibility of relevant designers or may be hostile to possible designers, and so are inclined to cling to the utterly implausible, driven by their worldview a prioris. Or, through scientism, they are influenced by such ideologues. That's where, in my view, the evidence strongly points. KF kairosfocus
Every Anti-ID who posts here, whether sincere or merely trolling, admits to the validity of the design inference. Shannon Communication Mung
EugeneS ( re no 7), it's good to see you around. KF kairosfocus
I have updated, to include a discussion of the design inference explanatory filter. kairosfocus
F/N: Why am I underscoring the studious silence of objectors to the design inference as manifested in the significance of FSCO/I? Precisely because it is the pivot of the real issue. And, to underscore that, I have just added a PS. Until and unless objectors to design inferences on FSCO/I can cogently show on observational evidence that FSCO/I is in fact credibly produced by blind chance and mechanical necessity, we are entitled to continue to go with the body of observational evidence that shows it to be a reliable sign of design, and the needle in haystack analysis that shows why that is so. KF PS: In former years, it was common to see such attempts, but after dozens of tries crashed in flames, there has now been instead a pattern of trying to argue that design assumes intelligence, that intelligence is meaningless, that CSI or FSCO/I are hopelessly ill-defined, that it is only humans who have been seen to design, that brains come before designing minds, etc etc etc. These may sound impressive and may well obfuscate the issue, but it seems to me the simplest answer is to lay out what FSCO/I is, how it is sufficiently clear and empirically founded, and why it is seen as a reliable sign of design. Which is what the OP does, or at least attempts. kairosfocus
Coconut trees swishing in the background as the trade winds pick up strength as the sun comes up . . . kairosfocus
F/N: Observe how the materials in this thread are highly relevant to debates elsewhere, e.g. here on. The pivotal design issue is that FSCO/I is a highly empirically reliable, analytically plausible sign of design as cause. On grounds as given. Unless that is faced and cogently refuted, it stands as in and of itself decisively diagnostic of causal process. Where, that tweredun is antecedent to issues of whodunit, how, where, when etc. KF kairosfocus
Chirp, chirpity, chirp . . . chirping frogs PS: Ours are brown and maybe a tad smaller. kairosfocus
Zenaida Doves are cooing now, but it's well past sunup now. Time to put the old nose to the grind. KF PS: Roosters, dogs and one or two late tree frogs are still going at it. kairosfocus
Right now, I am hearing tree frogs chirping even as cocks crow and dogs woof as light begins in the sky. kairosfocus
Ribbit, ribbit... Let's not leave the frogs out of this... :-) tgpeeler
kf:
Chirp, chirp...
Interesting, isn't it, that a noise (crickets chirping) has become a metaphor for silence? Mung
Chirp, chirp . . . kairosfocus
I don't understand why Dembski, Meyer, Luskin et al. rest on their laurels and never referred to your work. Thus, I would appreciate if the DI would offer you the chance to present your concepts during their next summer school or if they would invite you to the next ID Alaskan cruise. BM40
Great link, KF. Thanks. Random chance, I expect, that it was so euphonious to the human ear. And that someone should have discovered it. Axel
Chirp, chirp, chirp (check the link!) . . . kairosfocus
PS: Just think, a cosmos that sits at a very narrow operating point, with the first four elements being H, He, O and C; with N close. That's stars, galaxies and the gateway to the periodic table. O brings in water, the wonder molecule . . . and with other elements a lot of rocks. C opens up the connector block space of organic chemistry. With N we are at the Amine group -NH2, O having enabled the carboxylic acid group -COOH, and we are looking at proteins already. The stage is set. And if one imagines this is forced by some super-law, that only pushes fine tuning back one step. If instead you think, winning the lottery in a multiverse lottery, ponder the tightness of the local "island" and then consider what is needed to search it out on sampling resources. That's before we get a good answer on empirical evidence of such a multiverse. As to notions on getting a cosmos from nothing, the proper definition of nothing is non-being. Non-being simply cannot have causal powers. kairosfocus
TGP, 17: The very laws of physics themselves are part of what we need to ponder, per Sir Fred Hoyle. Indeed, in the context of design thought, that is where we must ponder mind ontologically prior to matter, and designing a cosmos that looks like such a suspicious put-up job. When PHYSICS begins to look like a case of FSCO/I, materialists should ponder Hoyle and others since very carefully, and begin to reckon with the idea of a designed cosmos. What does it take to design a cosmos? Is this "the heavens declare the glory of God, the firmament sheweth His handiwork" on steroids? KF kairosfocus
Axel, yes if "everybody" imagines something notoriously volatile and complex is all neatly under control, that is a sign of a bad psychology at work. That is part of the queasy feeling in the pit of my stomach when I see otherwise smart folks blithely telling us origins science is all figured out. Do you know what it means to think the human mind-brain and linguistic system, skeletal transformation from an ape etc can be packed down into 120 mbits and 6 - 10 mn years? The search space boggles the imagination, never mind the search for such an efficient search in the power set space. I won't even bother on the way the notion that deterministic and/or random forces shape and control the neural wiring of our brains, which drives and controls mindedness looks to utterly fail the challenge to generate the required computational basis much less account for self-aware, rational contemplation. It looks to me to crash of its own weight in self-referential incoherence. But I need to get back to that on other still active threads. This one is just to clear the air on a pivotal concept, for reference. KF kairosfocus
TGP: Let us see how they will respond to the attempted info-graphic summary on what CSI and FSCO/I are about. Where, in IT, we COMMONLY use FSCO/I in computer files. They are functional, specific, complex, organised and informational. I suggest DNA is much the same and protein fold domains too. We routinely see such FSCO/I produced by design, even this post is designed; we see the sort of search challenge implied in attempts to get to such by blind chance and/or mechanical necessity. We see that DNA expresses code and that a minimal viable genome is likely 100 - 1,000 kbits. We see how long it takes to fix increments in light of realistic pop sizes and generation times. We see that new body plans likely run 10 - 100+ mn bits of new genetic info. We see that at even 2% of 6 bn bits, we are looking at 2 x [6 * 10^7] = 120 mn bits difference Chimp-human, with maybe 12 - 15 y generation time, from a hypothesised common ancestor 6 - 10 MY ago. We see the search space gaps implied by isolation of protein folding domains in AA sequence space. We need to have a reasonable and vera causa plausible answer on the Tree of Life from root on up. Which, of course has been on the table for quite a while without a reasonable answer. KF kairosfocus
Yes. They have nothing. No arguments. No evidence. No truth. Sad, really, when it's as plain as the nose on one's face. Stakes are high here and as far as I can tell there is no partial credit for wrong answers. tgpeeler
Notice some chirping crickets? kairosfocus
Mung, what I have in the back of my mind is Dembski-Marks on search and successive search for search. Samples of a space can be seen as selections of subsets, i.e. from the power set . . . and I guess we can just allow duplicates to be represented by the first time they pop up. (Though with reasonably random samples of strings of relevant scale, I suspect duplications will be quite rare; logically possible, practically implausible. Think about the odds of duplication on two strings of 500 coins tossed at random. Of course if the coins are loaded, that's a different matter . . . ) KF PS: The power set of a set of cardinality N, is of cardinality 2^N, of course you can drop off the empty set {} if you want as no-search, but that hardly makes a difference with what we are looking at. N for relevant sets starts out at 3.27*10^150. That's calculator smoking territory. The sampling space for searches dwarfs the space for the original set, much as Dembski and Marks pointed out. PPS: Let's do a crude thing: log [ a^n] = n log a, so lg [2 ^(3.27*10^150)] = 3.27*10^150 *[0.3010] ~ 10^150. i.e. we are in the ballpark of 10^[10^150] subsets. The number could not be written out in normal decimal form. kairosfocus
Phil, Welcome to a fellow member of the brotherhood of the burnt thumb! KF PS: 704s or Squidders? Senators or Internationals . . . but at that end you would be hiring reel mechanics not tossing! kairosfocus
kf, I have to ask, what sort of sampling mechanism do you have in mind? Is it designed? How does it avoid sampling the same point over and over in the sampling space? Wouldn't it be the case that as the target or targets represent a smaller number of points in the search space that it would be ever more likely that the same non target points/spaces would be repeatedly sampled over and over? I think i intuitively understand the sampling problem, but I'd like to have a better grasp. cheers Mung
"forgive my indulging in a pic of a classic fishing reel" Ha! I think it's a great illustration demonstrating exactly what "Functionally Specific Complex Organisation and/or Information" actually is. "Such diagrams are objective (the FSCO/I on display here is certainly not “question-begging,” as some — astonishingly — are trying to suggest!), and if one is to build or fix such a reel successfully, s/he had better pay close heed" I've been avid fisherman almost all my life, and I've ended up throwing away more reels than I've repaired. Sometimes I don't even bother attempting to fix a reel (unless it's one of my vintage Penn reels) because the task is so tedious. "Taking one of these apart and shaking it in a shoe-box is guaranteed not to work to get the reel back together again." All things are possible through the power of Darwin. ^_^ Phil2232
Tom ... spot on, as usual. The translation of recorded information into a physical effect requires that the physicochemical discontinuity between the arrangement of the medium and its physical effect be preserved by the system (or the system simply cannot function). The necessary preservation of this discontinuity results in the sum reality that the product of translated information is not reducible to physical law. Done. Upright BiPed
oops. This is all the explanation anyone should need to see that the project of explaining information, whether biological or otherwise, BY MEANS OF THE LAWS OF PHYSICS is a fool’s errand. tgpeeler
If one is serious about a materialist ontology (all that exists is matter and energy, essentially) then one must also commit to the proposition that all explanations are reducible somehow to the laws of physics. Which laws explain, in principle, the behavior of said sub-atomic particles in energy fields. The problem immediately arises that information of any kind, whether it is functional or specified or complex or whatever is always ENCODED into a physical substrate (by means of language) but it is NOT the substrate. Physics can always explain the substrate but it cannot, in principle, ever explain the language used to encode the information. Those rules, or conventions, if you please, are arbitrary and agreed upon by the users. The laws of physics will NEVER be able to account for why "cat" means a generally obnoxious furry little mammal and der Hund (in German) means the dog. Or more to the point, that a 3 billion "letter" string of ATCGs means human being and other ones "mean" some other living thing. The information is ENCODED into the DNA, it is NOT THE DNA. This is all the explanation anyone should need to see that the project of explaining information, whether biological or otherwise, is a fool's errand. Literally. Don't these people ever get tired of being wrong and not having an argument upon which to stand? Apparently not. tgpeeler
I sometimes post on a techie blog, Next Big Future, and have the exact same experience. Most people do not wish to reasonably discuss the scientific evidence, precisely because they fear where it leads. anthropic
I they can't accept the manifold, often palpable evidence for theism, indeed, Christianity, afforded by modern science.... Axel
WJM, apparently, when old 'Blackjack' Kennedy was asked why he was selling all his stocks just before the Wall Street Crash, he replied, 'When my shoe-shine man tells me what shares to buy, I know there is something very wrong with the market.' You may be familiar with that anecdote. Anyway, it strikes me that I'm strongly prone to respond or not to these threads on much the same principle, although in reverse. When I see some of the most extraordinary and impressive 'brains' on here, such as KF and VJT (you appear to be substantially AWOL in these cases - evidently for the reason you cite), tearing their hair out, trying to point out to a materialist poster (sometimes really pleasant-sounding, young chaps such as RDF) the faults in their arguments, and eventually being reduced to reiterating, rephrasing, etc, their rebuttals, I know there's something really desperately wrong with the 'market'! Of course, by the very nature of this blog, it 'goes with the territory', to a large extent. Axel
WJM @7: Well said. Eric Anderson
aqeels @4: I think you were being sarcastic, but . . . "We also must never forget the other great silent hero, namely deep time…" Yes, materialists imagine they see in the long dark past ages of time a solution to the conundrum. However, that impression quickly evaporates in the light of the morning sun as soon as we realize that the billions of years since the beginning of the universe constitutes but a rounding error against the awful probabilities that beset the materialist creation myth. Eric Anderson
KF, Yes, I see what you mean. Thanks. Dionisio
F/N: Notice, also how this approach may easily be extended to take in the fine-tuning issue. KF kairosfocus
WJM: A powerful heartfelt plea for reasonableness. All I can say is in the end the higher the monkey climbs the coconut tree the more he exposes himself to the cross-bow shot. Thwack! Monkey stew for lunch. KF kairosfocus
D, indeed. And, it took a few days to get that main diagram right to my satisfaction. A big subtlety in it is the implications of search space to credible sample size in a context where islands of function are isolated like needles in a haystack. KF kairosfocus
This is why I stopped arguing with anti-IDists about ID; one cannot reasonably argue with those that deny the obvious. Every rational argument depends upon a reasonable, mutual assumption of certain framework values and terms that are agreed (usually without formal recognition) as obvious or necessary. The anti-ID position is entirely about denying, obfuscating, refusing and demanding absolute proof of framework values such as "intelligence", "design", "chance" and "natural". It is so bad now that they police themselves over this "problematic" lexicon and everyone else over fair use of quotes in order to protect themselves against their own inability to phrase their work or their views without such references. They are so desperate to deny the obvious that they must now employ thought and terminology police and raise a stink over that which they would never raise a stink about in any other situation. Who raised a stink about what "intelligence" means over SETI? Who raised a stink over what "design" means and insisted it cannot be properly inferred when any ancient artifacts were discovered and thought to be the remains of a civilization? Are string theorists and multiverse proponents excoriated for promoting untestable pseudoscience on the masses and driven from the scientific community? Obviously, the anti-ID agenda is ideologically driven, and one simply cannot argue reasonably with those who will question any term, deny any reference and obfuscate that which would never otherwise be resisted to stymie the blatant nature of what is right in front of one's eyes. The incredible part of this is that the onus has been put on ID advocates to "prove" that the equivalent of a fully functioning, computerized 747 is in fact the product of intelligent design (first, of course, proving that "intelligence" and "design" are valid scientific concepts), instead of the other way around. IMO, it is the naturalist who must prove that the 747 was in fact produced by non-intelligent forces before any reasonable person should consider their claim anything more than naturalist mysticism. William J Murray
if you can draw a picture of a problem-situation, you can understand it well enough to solve it.
Yes, that's a very clever suggestion, because a picture may, in most cases, tell much more than many words, assuming that the picture does correspond to the most accurate and valid description of the given problem, not a misinterpretation of it, though it doesn't have to be as detailed as the entire description. Actually, it could be a complementary -but very important- part of the whole description. Perhaps this is one of the reasons 3D animations are becoming so popular in biological presentations. Most of us like to watch colorful video animations of biological processes. However, most animations tend to be kind of reductionist, because it's difficult to describe the whole choreography of a particular biological process in accurate detail. But it would take many pages of 'boring' text to describe what the 3D animation shows in a few minutes. Being able to draw a descriptive picture does not necessarily imply that we understand the problem well enough to resolve it, but definitely better than if we don't know how to draw it. The visualization of a given problem most certainly takes us closer to its understanding, hence to the resolution. Many times the initial drawing undergoes several iterations of adjustments to account for additional data or to correct previous misunderstandings. Sometimes it's easier to adjust a drawing than a lengthy textual description. Mothers (and fathers too) should encourage their children to visualize their homework problems as much as they can. That's wise advice from a caring parent. Dionisio
Aq: Thanks for your thoughts, which reflect a common and very understandably widely held view. That is why, forgive me, I gave a part-answer in the above. This is one of the points that needs to be hammered out, as it seems that there is a gap in our understanding of just how isolated islands of function are in config spaces, and just how far from the stepping-stones across a stream view, the real world situation is. As you can see, I highlighted the ToL diagram with the rather obvious black box at OOL. (That same diagram is the pivot of my challenge to warrant the darwinist frame of thought from OOL up.) At origin, there was no irreducibly complex self replicating, code and algorithm using von Neumann self-replicating facility. That is a part of what needs to be explained in Darwin's warm little pond or the like. Next, 10^17 s takes in the roughly 13.7 BY held to have occurred since the origin of our observed cosmos, in a big bang event. 10^-14 s/search for each of 10^57 or 10^80 atoms takes in an over-generous search rate. We still find ourselves facing a needle in haystack challenge on steroids. Then, for origin of body plans, as the original post remarks, we are looking at, not 100 - 1,000 kbits as for OOL, but 10 - 100+ millions, as typical genome scales easily show, and that within 600 - 1,000 mn yrs on our planet. So, there is an obvious search for islands of function challenge. But, the tree of life and RV + NS --> DWIM pattern implicitly answers that we are looking at INCREMENTAL functional patterns across a CONTINENT of life forms. For which continent, there is no good evidence. For one instance, in amino acid sequence space, there are essentially 20 values per position relevant to life [yup, there are a few oddball exceptions], and typical proteins are ~ 300 AA long. With the requisite for function that they fold into precise, reasonably stable 3-d patterns, and fit in key-lock matching patterns, with relevant function clefts etc. It turns out, there are several thousand fold domains, and many of them hold only a very few members. These domains, moreover, are deeply isolated in the AA sequence space, i.e. a great many fold domains are structurally unrelated to others. In short, we have exactly the isolated island of function search challenge problem in accounting for the workhorse molecules that are the requisite, not only of first life, but the dozens of body plans. In fact, there are indications that it is common to find isolated domains popping up between seemingly neighbouring life forms. The tree of life, incrementalist model is in deep but typically unacknowledged trouble. I hope that helps in the rethink? KF kairosfocus
#4, that is not necessarily the case. At least, it is not *that* easy. Any non-trivial function (especially one as sophisticated as bio-function) must be programmed first. As the program's complexity increases, the probability of it coming about just by "blind forces" reduces catastrophically. These meaningful coded instructions must be computationally halting. The halting problem is undecidable so needs an oracle. At the same time, the environment cannot serve as a halting oracle because it cannot choose for future function. This is why ID maintains that "behind" a (complex enough) program, in practice there must always be intelligence (directly or indirectly). So you can't just explain everything away like this. This type of explanation works within a very limited area of viable oscillations around the already existing functional attraction basins. This is why, generally, you cannot search for future function far enough without guidance (i.e. forethought, for which you need intelligence). In order for your search to succeed at all, very stringent limitations must be observed (e.g. there must be a search space in the first place, which is not always the case in practice). EugeneS
A typo in my previous post. Obviously, I meant to say function could NOT have come about by “blind forces”. EugeneS
You forget of course that natural selection acting out on random variation is the great silent force that can tear down improbable odds no matter what they are! We also must never forget the other great silent hero, namely deep time... I often wonder what would be the naturalist paradigm in relation to origins today be, if the concept of "mutation" did not exist. Would they even have a paradigm, or would they concede design? aqeels
KF, Great post. Perhaps, the only thing I would add to it is that function could have come about by "blind forces" just because these blind forces are inert to it. They have no capability to choose for function other than impose trivial ordering. Just ordering (i.e. the drive towards equilibria by the blind forces alone) and bio-function (bona fide choice between inert states in order to maximize utility) are massively different. EugeneS
Also, let us bear in mind this, from Merriam-Webster:
CODE: . . . 3a : a system of signals or symbols for communication b : a system of symbols (as letters or numbers) used to represent assigned and often secret meanings 4: genetic code 5: a set of instructions for a computer
kairosfocus
And, is Sir Fred Hoyle the effective idea- father of modern design theory? kairosfocus
Just what is the CSI- FSCO/I - needle in haystack, million monkeys challenge trying to tell us about the design inference in light of the vera causa principle? kairosfocus
