
Just what is the CSI/ FSCO/I concept trying to say to us?


When I was maybe five or six years old, my mother (a distinguished teacher) said to me about problem solving, more or less: if you can draw a picture of a problem-situation, you can understand it well enough to solve it.

Over the many years since, that has served me well.

Where, after so many months of debates over FSCO/I and/or CSI, I think many of us may well be losing sight of the fundamental point amid the fog that vexed and complex rhetorical exchanges almost inevitably create.

So, here is my initial attempt at a picture — an info-graphic really — of what the Complex Specified Information [CSI] – Functionally Specific Complex Organisation and/or Information [FSCO/I] concept is saying, in light of the needle in haystack blind search/sample challenge; based on Dembski’s remarks in No Free Lunch, p. 144:

[Infographic: the CSI/FSCO/I definition in light of the needle-in-haystack blind search challenge, after Dembski, No Free Lunch, p. 144]

Of course, Dembski was building on earlier remarks and suggestions, such as these by Orgel (1973) and Wicken (1979):

ORGEL, 1973:  . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems.  Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in. Observe also, the idea roots of the summary terms specified complexity and/or complex specified information (CSI) and functionally specific complex organisation and/or associated information, FSCO/I.)]

Where, we may illustrate a nodes + arcs wiring diagram with an exploded view (forgive my indulging in a pic of a classic fishing reel):

Fig 6: An exploded view of a classic ABU Cardinal, showing how functionality arises from a highly specific, tightly constrained complex arrangement of matched parts according to a “wiring diagram.” Such diagrams are objective (the FSCO/I on display here is certainly not “question-begging,” as some — astonishingly — are trying to suggest!), and if one is to build or fix such a reel successfully, s/he had better pay close heed. Taking one of these apart and shaking it in a shoe-box is guaranteed not to work to get the reel back together again. (That is, even the assembly of such a complex entity is functionally specific and prescriptive information-rich.)

That is, the issue pivots on being able to specify an island of function T containing the observed case E and its neighbours, or the like, in a wider sea of possible but overwhelmingly non-functional configurations [OMEGA], then challenging the atomic and temporal resources of a relevant system — our solar system or the observed cosmos — to find it via blind, needle in haystack search.

The proverbial needle in the haystack

In the case of our solar system of some 10^57 atoms, which we may generously give 10^17 s of lifespan and assign actions at the fastest chemical reaction times, ~10^-14 s, we can see that if we were to give each atom a tray of 500 fair H/T coins, and toss and examine all 10^57 trays every 10^-14 s, the resulting blind sample would stand to the 3.27 × 10^150 configurational possibilities for 500 bits roughly as one straw does to a cubical haystack 1,000 light years on a side.

Such a stack would be comparable in thickness to our galaxy at its central bulge.

Consequently, if we were to superpose such a haystack on our galactic neighbourhood and then take a blind, straw-sized sample, with all but absolute certainty we would pick up straw and nothing else: far too much haystack versus the “needles” in it. And the haystack for 1,000 bits would utterly swallow up our observed cosmos, even with all 10^80 atoms searching once per 10^-14 s for 10^17 s. Just to give a picture of the type of challenge we are facing.

(Notice, I am here speaking to the challenge of blind sampling of a small fraction of a space of possibilities, not offering a precise probability estimate. All we really need to see is that such a search would, with practical reliability, only capture the bulk of the distribution. For that, we do not actually need odds of 1 in 10^150 against success of such a blind search; 1 in 10^60 or so, the back-of-envelope result for a 500-bit threshold at solar system scale, is quite good enough. This is also closely related to the statistical mechanical basis for the second law of thermodynamics, in which the bulk cluster of microscopic distributions of matter and energy utterly dominates what we are likely to see, so the system tends to move strongly to and remain in that state-cluster unless otherwise constrained. And that is what gives teeth to Sewell’s note, which we may sum up: if something is extremely unlikely to happen spontaneously in an isolated system, it will remain extremely unlikely when we open up the system, save if something is happening . . . such as design . . . that makes it much more likely.)
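
For those who want to check the back-of-envelope arithmetic, here is a minimal sketch (Python, my illustration rather than anything from the original post) that totals the generously estimated sampling budget and compares it with the 500-bit and 1,000-bit configuration spaces:

```python
from math import log10

# Generous search-resource estimates used in the post (restated as assumptions)
SOLAR_SYSTEM_ATOMS = 1e57   # ~10^57 atoms
COSMOS_ATOMS       = 1e80   # ~10^80 atoms
LIFESPAN_S         = 1e17   # ~10^17 s of available time
RATE_PER_S         = 1e14   # one observation per ~10^-14 s

def log10_samples(atoms):
    """log10 of the maximum number of blind observations available."""
    return log10(atoms) + log10(LIFESPAN_S) + log10(RATE_PER_S)

def log10_configs(bits):
    """log10 of the number of configurations of `bits` two-state elements."""
    return bits * log10(2)

for label, atoms, bits in [("solar system, 500 bits", SOLAR_SYSTEM_ATOMS, 500),
                           ("observed cosmos, 1,000 bits", COSMOS_ATOMS, 1000)]:
    s, c = log10_samples(atoms), log10_configs(bits)
    print(f"{label}: ~10^{s:.0f} samples vs ~10^{c:.1f} configs "
          f"(sampled fraction ~10^{s - c:.1f})")
```

Running it gives a sampled fraction on the order of 10^-62 for the solar system case and 10^-190 for the cosmos case, which is the needle-in-haystack point expressed in numbers.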

Or, as Wikipedia’s article on the Infinite Monkey theorem (which was referred to in an early article in the UD ID Foundations series) put much the same matter, echoing Emile Borel:

A monkey at the keyboard

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare.

In this context, “almost surely” is a mathematical term with a precise meaning, and the “monkey” is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the “monkey metaphor” is that of French mathematician Émile Borel in 1913, but the earliest instance may be even earlier. The relevance of the theorem is questionable—the probability of a universe full of monkeys typing a complete work such as Shakespeare’s Hamlet is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero) . . . .

Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small they can barely be conceived in human terms. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.

Even if every atom in the observable universe were a monkey with a typewriter, typing from the Big Bang until the end of the universe, they would still need a ridiculously longer time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success.

The 130,000 letters of Hamlet can be directly compared to a genome: at 7 bits per ASCII character, that is about 910 kbits, in the same sort of range as a genome for a “reasonable” first cell-based life form. That is, we here see how the FSCO/I issue puts a challenge before any blind-chance-plus-mechanical-necessity account. Such an account is not a reasonable expectation: storage of information depends on high contingency [a configuration fixed by necessity will store little information], but blindly searching the resulting space of possibilities is not practically feasible.
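
To double-check the exponents being quoted, here is a small sketch of mine, using the same 26-letter-alphabet and 7-bit-ASCII assumptions as above:

```python
from math import log10

HAMLET_LETTERS = 130_000   # approximate letter count quoted above
ALPHABET       = 26        # ignoring case, spacing and punctuation
BITS_PER_ASCII = 7         # 7-bit ASCII, as used in the post

# log10 of the number of equally likely letter sequences of that length
exponent = HAMLET_LETTERS * log10(ALPHABET)
print(f"26^130,000 = 10^{exponent:,.1f}")   # ~10^183,946.5, i.e. ~3.4 x 10^183,946

# Information-capacity comparison used in the post
print(f"{HAMLET_LETTERS:,} ASCII characters = "
      f"{HAMLET_LETTERS * BITS_PER_ASCII:,} bits")   # 910,000 bits ~ 910 kbits
```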

The same Wiki article goes on to acknowledge the futility of such searches once we face a sufficiently complex string-length:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[24]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

Now, 500 bits is about 72 ASCII characters, and the configuration space doubles for each additional bit. So such experiments demonstrate successful blind search of spaces of only about 10^50 possibilities, roughly a factor of 10^100 short of the CSI/FSCO/I target thresholds.
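
The gap can be put in numbers with another small sketch (again mine, using the post's 7-bit-ASCII convention and the roughly 24 matched characters reported above):

```python
from math import log10

BITS_PER_ASCII = 7
THRESHOLD_BITS = 500
MATCHED_CHARS  = 24    # best monkey-simulation run quoted above

print(f"500 bits ~ {THRESHOLD_BITS / BITS_PER_ASCII:.1f} ASCII characters")  # ~71-72

searched  = MATCHED_CHARS * BITS_PER_ASCII * log10(2)   # 168 bits ~ 10^50.6
threshold = THRESHOLD_BITS * log10(2)                   # 500 bits ~ 10^150.5
print(f"space actually searched: ~10^{searched:.1f}")
print(f"500-bit threshold space: ~10^{threshold:.1f}")
print(f"shortfall:               ~10^{threshold - searched:.1f}")
```

That shortfall, roughly 100 orders of magnitude, is the point of the comparison.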

Where also, of course, this is a case of an admission against notorious ideological interest on the part of Wikipedia.

But, what about Dawkins’ Weasel?

This was an intelligently targeted search that rewarded non-functional configurations for being an increment closer to the target phrase. That is, it inadvertently illustrated the power of intelligent design; though it was — largely successfully — rhetorically presented as showing the opposite. (And, on closer inspection, Genetic Algorithm searches and the like turn out to be much the same, injecting a lot of active information that allows overwhelming the search challenge implied in the above. But the foresight that is implied is exactly what we cannot allow in a blind search; and incremental hill-climbing is plainly WITHIN an island of function, so it is not a good model of blindly searching for its shores.)

Another implicit claim is found in the Darwinist tree of life (here, I use a diagram that comes from the Smithsonian, under fair use):

The Smithsonian’s tree of life model, note the root in OOL

The tree reveals two things: an implicit claim that there is a smoothly incremental path from an original body plan to all others, and the missing root of the tree of life.

For the first, while that may be a requisite for Darwinist-type models to work, there is little or no good empirical evidence to back it up; and it is wise not to leave too many questions a-begging. In fact, it is easy to show that whereas maybe 100 – 1,000 kbits of genomic information may account for a first cell-based life form, to get to basic body plans we are looking at 10 – 100+ million bits each, dozens of times over.

Further to this, there is in fact only one actually observed cause of FSCO/I beyond that 500 – 1,000 bit threshold: design. Design by an intelligence. Which dovetails neatly with the implications of the needle-in-haystack blind search challenge. And it meets the requisites of the vera causa test for causally explaining what we do not observe directly, in light of causes uniquely known to be capable of causing the like effect.

So, perhaps, we need to listen again to the distinguished, Nobel-equivalent prize holding astrophysicist and lifelong agnostic — so much for “Creationists in cheap tuxedos” — Sir Fred Hoyle:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]

And again, in his famous Caltech talk:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. [–> ~ 10^80] This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect [–> this shows a clear and widely understood concept of intelligence] working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.

Noting also:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]

No wonder, in that same period, the same distinguished scientist went on record on January 12th, 1982, in the Omni Lecture at the Royal Institution, London, entitled “Evolution from Space”:

The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [This appeared in a book of the same title, pp. 27-28. Emphases added.]

Perhaps, the time has come to think again. END

_________________

PS: Let me add an update June 28, by first highlighting the design inference explanatory filter, in the per aspect flowchart form I prefer to use:

[Flowchart: the per-aspect design inference explanatory filter]

Here, we see that the design inference pivots on seeing a logical/contrastive relationship between three familiar classes of causal factors. For instance, natural regularities tracing to mechanical necessity (e.g. F = m*a, a form of Newton’s Second Law) give rise to low contingency outcomes. That is, reliably, a sufficiently similar initial state will lead to a closely similar outcome.

By contrast, there are circumstances where outcomes will vary significantly under quite similar initial conditions. For example, take a fair, common die and arrange to drop it repeatedly under very similar initial conditions. It will predictably not consistently land, tumble and settle with any particular face uppermost. Similarly, in a population of apparently similar radioactive atoms, there will be a stochastic pattern of decay that shows a chance based probability distribution tracing to a relevant decay constant. So, we speak of chance, randomness, sampling of populations of possible outcomes and even of probabilities.
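
As a throwaway illustration of that kind of high contingency (my sketch, not part of the post), repeatedly “dropping” a fair die under nominally identical conditions scatters the outcomes across all six faces instead of settling on one:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed only so the sketch is reproducible

# 600 drops of a fair die under nominally identical initial conditions
drops = [random.randint(1, 6) for _ in range(600)]
print(Counter(drops))  # roughly 100 of each face: high contingency,
                       # in contrast to the low-contingency outcomes of F = m*a
```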

But that is not the only form of high contingency outcome.

Design can also give rise to high contingency, e.g. in the production of text.

And, ever since Thaxton et al, 1984, in The Mystery of Life’s Origin, Ch 8, design thinkers have made text string contrasts that illustrate the three typical patterns:

1. [Class 1:] An ordered (periodic) and therefore specified arrangement:

THE END THE END THE END THE END

Example: Nylon, or a crystal . . . . 

2. [Class 2:] A complex (aperiodic) unspecified arrangement:

AGDCBFE GBCAFED ACEDFBG

Example: Random polymers (polypeptides).

3. [Class 3:] A complex (aperiodic) specified arrangement:

THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!

Example: DNA, protein.

Of course, class 3 exhibits functionally specific, complex organisation and associated information, FSCO/I.
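
As a rough, purely illustrative cross-check (my sketch, not a metric used in the post), a general-purpose compressor separates the highly ordered Class 1 string from the aperiodic Class 2 and Class 3 strings; what it cannot do is tell Class 2 from Class 3, since specification is a judgment about function or meaning (the S flag discussed further below), not a statistical property:

```python
import zlib

# The three example strings from Thaxton et al., as quoted above
samples = {
    "Class 1 (ordered)":   "THE END THE END THE END THE END",
    "Class 2 (random)":    "AGDCBFE GBCAFED ACEDFBG",
    "Class 3 (specified)": "THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!",
}

# Compression ratio is only a crude order/complexity proxy, and on strings
# this short the compressor's fixed overhead inflates the numbers; still,
# the repetitive Class 1 string compresses noticeably, while the two
# aperiodic strings do not.
for label, text in samples.items():
    raw = text.encode("ascii")
    ratio = len(zlib.compress(raw, 9)) / len(raw)
    print(f"{label}: length {len(raw):>2}, compressed/raw ratio {ratio:.2f}")
```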

As the main post shows, this is an empirically reliable, analytically plausible sign of design. It is also one that in principle can quite easily be overthrown: show credible cases where FSCO/I beyond a reasonable threshold is observed to be produced by blind chance and/or mechanical necessity.

Absent that, we are epistemically entitled to note that per the vera causa test, it is reliably seen that design causes FSCO/I. So, it is a reliable sign of design, even as deer-tracks are reliable signs of deer:

A probable Mule Deer track, in mud, showing dew claws (HT: http://www.saguaro-juniper.com, deer page.)

Consequently, while it is in-principle possible for chance to toss up any outcome from a configuration space, we must reckon with the available search resources and with whether feasible blind samples could plausibly be expected to catch needles in the haystack.

As a threshold, we can infer for solar system scale resources that, using:

Chi_500 = Ip*S – 500, bits beyond the solar system threshold,

we can be safely confident that if Chi_500 is at least 1, the FSCO/I observed is not a plausible product of blind chance and/or mechanical necessity. Where, Ip is a relevant information-content metric in bits, and S is a dummy variable that defaults to zero, save in cases of positive reason to accept that observed patterns are relevantly specific, coming from a zone T in the space of possibilities. If we have such reason, S switches to 1.

That is, the first default is that something is minimally informational, the result of mechanical necessity, which would show up as a low Ip value. Next, the default is that chance accounts for high contingency, so that while there may be a high information-carrying capacity, the configurations observed do not come from T-zones.

Only when something is specific and highly informational (especially functionally specific) will Ip*S rise beyond the confident detection threshold that puts Chi_500 to at least 1.

And, if one wishes for a threshold relevant to the observed cosmos as scope of search resources, we can use 1,000 bits as threshold.

That is, the eqn summarises what the flowchart does.
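
To make that concrete, here is a minimal sketch of the equation and the flowchart logic as I read them from the post (the function names are mine, and treating a near-zero Ip as the low-contingency/necessity branch is my simplification of the per-aspect filter):

```python
def chi_metric(ip_bits: float, specified: bool, cosmos_scale: bool = False) -> float:
    """Chi = Ip*S - threshold, i.e. bits beyond the chosen resource threshold.

    ip_bits      : information-content estimate Ip, in bits
    specified    : the dummy variable S -- True only when there is positive,
                   independent reason to see the configuration as specific
                   (functional, meaningful), i.e. coming from a zone T
    cosmos_scale : use the 1,000-bit observed-cosmos threshold instead of
                   the 500-bit solar-system threshold
    """
    threshold = 1000 if cosmos_scale else 500
    return ip_bits * (1 if specified else 0) - threshold

def filter_verdict(ip_bits: float, specified: bool, cosmos_scale: bool = False) -> str:
    """Per-aspect verdict in the spirit of the explanatory filter."""
    if ip_bits < 1:                                    # low contingency: lawlike necessity
        return "necessity"
    if chi_metric(ip_bits, specified, cosmos_scale) >= 1:
        return "design"                                # specified AND beyond threshold
    return "chance"                                    # high contingency, but not both

# Illustrative values only, not measurements from the post:
print(filter_verdict(ip_bits=0.1, specified=False))                    # necessity
print(filter_verdict(ip_bits=900, specified=False))                    # chance
print(filter_verdict(ip_bits=900, specified=True))                     # design
print(filter_verdict(ip_bits=900, specified=True, cosmos_scale=True))  # chance
```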

And, the pivotal test is to find cases where the filter would vote designed, but we actually observe blind chance and mechanical necessity as credible cause. Actually observe . . . the remote past of origins or the like is not actually observed. We only observe traces which are often interpreted in certain ways.

But, the vera causa test does require that before using cause-factor X in explaining traces from the unobserved past, P, we should first verify in the present through observation that X can and does credibly produce materially similar effects, P’.

If this test is consistently applied, it will be evident that many features of the observed cosmos, especially the world of cell based life forms, exhibit FSCO/I in copious quantities and are best understood as designed.

IF . . .

Comments
"forgive my indulging in a pic of a classic fishing reel" Ha! I think it's a great illustration demonstrating exactly what "Functionally Specific Complex Organisation and/or Information" actually is. "Such diagrams are objective (the FSCO/I on display here is certainly not “question-begging,” as some — astonishingly — are trying to suggest!), and if one is to build or fix such a reel successfully, s/he had better pay close heed" I've been avid fisherman almost all my life, and I've ended up throwing away more reels than I've repaired. Sometimes I don't even bother attempting to fix a reel (unless it's one of my vintage Penn reels) because the task is so tedious. "Taking one of these apart and shaking it in a shoe-box is guaranteed not to work to get the reel back together again." All things are possible through the power of Darwin. ^_^Phil2232
June 24, 2014
June
06
Jun
24
24
2014
02:56 PM
2
02
56
PM
PDT
Tom ... spot on, as usual. The translation of recorded information into a physical effect requires that the physicochemical discontinuity between the arrangement of the medium and its physical effect be preserved by the system (or the system simply cannot function). The necessary preservation of this discontinuity results in the sum reality that the product of translated information is not reducible to physical law. Done.

-- Upright BiPed, June 24, 2014 at 02:21 PM PDT

oops. This is all the explanation anyone should need to see that the project of explaining information, whether biological or otherwise, BY MEANS OF THE LAWS OF PHYSICS is a fool’s errand.

-- tgpeeler, June 24, 2014 at 01:11 PM PDT

If one is serious about a materialist ontology (all that exists is matter and energy, essentially) then one must also commit to the proposition that all explanations are reducible somehow to the laws of physics. Which laws explain, in principle, the behavior of said sub-atomic particles in energy fields. The problem immediately arises that information of any kind, whether it is functional or specified or complex or whatever is always ENCODED into a physical substrate (by means of language) but it is NOT the substrate. Physics can always explain the substrate but it cannot, in principle, ever explain the language used to encode the information. Those rules, or conventions, if you please, are arbitrary and agreed upon by the users. The laws of physics will NEVER be able to account for why "cat" means a generally obnoxious furry little mammal and der Hund (in German) means the dog. Or more to the point, that a 3 billion "letter" string of ATCGs means human being and other ones "mean" some other living thing. The information is ENCODED into the DNA, it is NOT THE DNA. This is all the explanation anyone should need to see that the project of explaining information, whether biological or otherwise, is a fool's errand. Literally. Don't these people ever get tired of being wrong and not having an argument upon which to stand? Apparently not.

-- tgpeeler, June 24, 2014 at 01:09 PM PDT

I sometimes post on a techie blog, Next Big Future, and have the exact same experience. Most people do not wish to reasonably discuss the scientific evidence, precisely because they fear where it leads.

-- anthropic, June 24, 2014 at 12:15 PM PDT

If they can't accept the manifold, often palpable evidence for theism, indeed, Christianity, afforded by modern science....

-- Axel, June 24, 2014 at 10:28 AM PDT

WJM, apparently, when old 'Blackjack' Kennedy was asked why he was selling all his stocks just before the Wall Street Crash, he replied, 'When my shoe-shine man tells me what shares to buy, I know there is something very wrong with the market.' You may be familiar with that anecdote. Anyway, it strikes me that I'm strongly prone to respond or not to these threads on much the same principle, although in reverse. When I see some of the most extraordinary and impressive 'brains' on here, such as KF and VJT (you appear to be substantially AWOL in these cases - evidently for the reason you cite), tearing their hair out, trying to point out to a materialist poster (sometimes really pleasant-sounding, young chaps such as RDF) the faults in their arguments, and eventually being reduced to reiterating, rephrasing, etc, their rebuttals, I know there's something really desperately wrong with the 'market'! Of course, by the very nature of this blog, it 'goes with the territory', to a large extent.

-- Axel, June 24, 2014 at 10:25 AM PDT

WJM @7: Well said.

-- Eric Anderson, June 24, 2014 at 08:58 AM PDT

aqeels @4: I think you were being sarcastic, but . . . "We also must never forget the other great silent hero, namely deep time…" Yes, materialists imagine they see in the long dark past ages of time a solution to the conundrum. However, that impression quickly evaporates in the light of the morning sun as soon as we realize that the billions of years since the beginning of the universe constitutes but a rounding error against the awful probabilities that beset the materialist creation myth.

-- Eric Anderson, June 24, 2014 at 08:55 AM PDT

KF, Yes, I see what you mean. Thanks.

-- Dionisio, June 24, 2014 at 08:24 AM PDT

F/N: Notice, also how this approach may easily be extended to take in the fine-tuning issue. KF

-- kairosfocus, June 24, 2014 at 06:38 AM PDT

WJM: A powerful heartfelt plea for reasonableness. All I can say is in the end the higher the monkey climbs the coconut tree the more he exposes himself to the cross-bow shot. Thwack! Monkey stew for lunch. KF

-- kairosfocus, June 24, 2014 at 06:12 AM PDT

D, indeed. And, it took a few days to get that main diagram right to my satisfaction. A big subtlety in it is the implications of search space to credible sample size in a context where islands of function are isolated like needles in a haystack. KF

-- kairosfocus, June 24, 2014 at 06:10 AM PDT

This is why I stopped arguing with anti-IDists about ID; one cannot reasonably argue with those that deny the obvious. Every rational argument depends upon a reasonable, mutual assumption of certain framework values and terms that are agreed (usually without formal recognition) as obvious or necessary. The anti-ID position is entirely about denying, obfuscating, refusing and demanding absolute proof of framework values such as "intelligence", "design", "chance" and "natural". It is so bad now that they police themselves over this "problematic" lexicon and everyone else over fair use of quotes in order to protect themselves against their own inability to phrase their work or their views without such references. They are so desperate to deny the obvious that they must now employ thought and terminology police and raise a stink over that which they would never raise a stink about in any other situation. Who raised a stink about what "intelligence" means over SETI? Who raised a stink over what "design" means and insisted it cannot be properly inferred when any ancient artifacts were discovered and thought to be the remains of a civilization? Are string theorists and multiverse proponents excoriated for promoting untestable pseudoscience on the masses and driven from the scientific community? Obviously, the anti-ID agenda is ideologically driven, and one simply cannot argue reasonably with those who will question any term, deny any reference and obfuscate that which would never otherwise be resisted to stymie the blatant nature of what is right in front of one's eyes. The incredible part of this is that the onus has been put on ID advocates to "prove" that the equivalent of a fully functioning, computerized 747 is in fact the product of intelligent design (first, of course, proving that "intelligence" and "design" are valid scientific concepts), instead of the other way around. IMO, it is the naturalist who must prove that the 747 was in fact produced by non-intelligent forces before any reasonable person should consider their claim anything more than naturalist mysticism.

-- William J Murray, June 24, 2014 at 05:59 AM PDT

"if you can draw a picture of a problem-situation, you can understand it well enough to solve it."

Yes, that's a very clever suggestion, because a picture may, in most cases, tell much more than many words, assuming that the picture corresponds to the most accurate and valid description of the given problem, not a misinterpretation of it, though it doesn't have to be as detailed as the entire description. Actually, it could be a complementary -but very important- part of the whole description. Perhaps this is one of the reasons 3D animations are becoming so popular in biological presentations. Most of us like to watch colorful video animations of biological processes. However, most animations tend to be kind of reductionist, because it's difficult to describe the whole choreography of a particular biological process in accurate detail. But it would take many pages of 'boring' text to describe what a 3D animation shows in a few minutes. Being able to draw a descriptive picture does not necessarily imply that we understand the problem well enough to resolve it, but we are definitely better off than if we don't know how to draw it. The visualization of a given problem most certainly takes us closer to understanding it, and hence to resolving it. Many times the initial drawing undergoes several iterations of adjustment to account for additional data or to correct previous misunderstandings. Sometimes it's easier to adjust a drawing than a lengthy textual description. Mothers (and fathers too) should encourage their children to visualize their homework problems as much as they can. That's the wise advice of a caring parent.

-- Dionisio, June 24, 2014 at 05:57 AM PDT

Aq: Thanks for your thoughts, which reflect a common and very understandably widely held view. That is why, forgive me, I gave a part-answer in the above. This is one of the points that needs to be hammered out, as it seems that there is a gap in our understanding of just how isolated islands of function are in config spaces, and just how far from the stepping-stones across a stream view, the real world situation is. As you can see, I highlighted the ToL diagram with the rather obvious black box at OOL. (That same diagram is the pivot of my challenge to warrant the darwinist frame of thought from OOL up.) At origin, there was no irreducibly complex, code- and algorithm-using von Neumann self-replicating facility. That is a part of what needs to be explained in Darwin's warm little pond or the like. Next, 10^17 s takes in the about 13.7 BY held to have occurred since the origin of our observed cosmos, in a big bang event. 10^-14 s/search for each of 10^57 or 10^80 atoms takes in an over-generous search rate. We still find ourselves facing a needle in haystack challenge on steroids. Then, for origin of body plans, as the original post remarks, we are looking at, not 100 - 1,000 kbits as for OOL, but 10 - 100+ million bits, as typical genome scales easily show, and that within 600 - 1,000 mn yrs on our planet. So, there is an obvious search for islands of function challenge. But, the tree of life and RV + NS --> DWIM pattern implicitly answers that we are looking at INCREMENTAL functional patterns across a CONTINENT of life forms. For which continent, there is no good evidence. For one instance, in amino acid sequence space, there are essentially 20 values per position relevant to life [yup, there are a few oddball exceptions], and typical proteins are ~ 300 AA long. With the requisite for function that they fold into precise, reasonably stable 3-d patterns, and fit in key-lock matching patterns, with relevant function clefts etc. It turns out, there are several thousand fold domains, and many of them hold only a very few members. These domains, moreover, are deeply isolated in the AA sequence space, i.e. a great many fold domains are structurally unrelated to others. In short, we have exactly the isolated island of function search challenge problem in accounting for the workhorse molecules that are the requisite, not only of first life, but the dozens of body plans. In fact, there are indications that it is common to find isolated domains popping up between seemingly neighbouring life forms. The tree of life, incrementalist model is in deep but typically unacknowledged trouble. I hope that helps in the rethink? KF

-- kairosfocus, June 24, 2014 at 05:44 AM PDT

#4, that is not necessarily the case. At least, it is not *that* easy. Any non-trivial function (especially one as sophisticated as bio-function) must be programmed first. As the program's complexity increases, the probability of it coming about just by "blind forces" reduces catastrophically. These meaningful coded instructions must be computationally halting. The halting problem is undecidable so needs an oracle. At the same time, the environment cannot serve as a halting oracle because it cannot choose for future function. This is why ID maintains that "behind" a (complex enough) program, in practice there must always be intelligence (directly or indirectly). So you can't just explain everything away like this. This type of explanation works within a very limited area of viable oscillations around the already existing functional attraction basins. This is why, generally, you cannot search for future function far enough without guidance (i.e. forethought, for which you need intelligence). In order for your search to succeed at all, very stringent limitations must be observed (e.g. there must be a search space in the first place, which is not always the case in practice).

-- EugeneS, June 24, 2014 at 05:27 AM PDT

A typo in my previous post. Obviously, I meant to say function could NOT have come about by “blind forces”.

-- EugeneS, June 24, 2014 at 05:10 AM PDT

You forget of course that natural selection acting out on random variation is the great silent force that can tear down improbable odds no matter what they are! We also must never forget the other great silent hero, namely deep time... I often wonder what the naturalist paradigm in relation to origins would be today, if the concept of "mutation" did not exist. Would they even have a paradigm, or would they concede design?

-- aqeels, June 24, 2014 at 05:09 AM PDT

KF, Great post. Perhaps, the only thing I would add to it is that function could have come about by "blind forces" just because these blind forces are inert to it. They have no capability to choose for function other than impose trivial ordering. Just ordering (i.e. the drive towards equilibria by the blind forces alone) and bio-function (bona fide choice between inert states in order to maximize utility) are massively different.

-- EugeneS, June 24, 2014 at 05:08 AM PDT

Also, let us bear in mind this, from Merriam-Webster:
CODE: . . . 3a : a system of signals or symbols for communication b : a system of symbols (as letters or numbers) used to represent assigned and often secret meanings 4: genetic code 5: a set of instructions for a computer

-- kairosfocus, June 24, 2014 at 04:50 AM PDT

And, is Sir Fred Hoyle the effective idea-father of modern design theory?

-- kairosfocus, June 24, 2014 at 04:30 AM PDT

Just what is the CSI- FSCO/I - needle in haystack, million monkeys challenge trying to tell us about the design inference in light of the vera causa principle?

-- kairosfocus, June 24, 2014 at 04:07 AM PDT
