
A “simple” summing up of the basic case for scientifically inferring design (in light of the logic of scientific induction per best explanation of the unobserved past)


In answering yet another round of G’s talking points on design theory and those of us who advocate it, I have outlined a summary of design thinking and its onward links to debates on theology, one that I think is worth adapting, expanding and headlining here.

With your indulgence:

_______________

>> The epistemological warrant for origins science is no mystery, as Meyer and others have summarised. {Let me clip from an earlier post  in the same thread:

Let me give you an example of a genuine test (reported in Wiki’s article on the Infinite Monkeys theorem), on very easy terms, random document generation, as I have cited many times:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[24]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

Of course this is chance generating the highly contingent outcome.
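To get a feel for why such matches stall after a dozen or so characters, here is a minimal sketch of a random-typing run (my own illustration, not the cited programs; the alphabet and trial count are arbitrary choices):

    import random
    import string

    # Illustrative random-typing experiment: generate random lines and
    # track the longest prefix match against a target line.
    TARGET = "VALENTINE. Cease to"  # the 19-character match cited above
    ALPHABET = string.ascii_uppercase + string.ascii_lowercase + " .,;:'"

    def longest_prefix_match(trials):
        best = 0
        for _ in range(trials):
            attempt = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
            match = 0
            while match < len(TARGET) and attempt[match] == TARGET[match]:
                match += 1
            best = max(best, match)
        return best

    # Even 100,000 trials rarely match more than 3 or 4 characters; each
    # further matched character multiplies the difficulty ~58-fold here.
    print(longest_prefix_match(100_000))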

What about chance plus necessity, e.g. mutations and differential reproductive success of variants in environments? The answer is that the non-foresighted — thus chance — variation is the source of the high contingency. Differential reproductive success actually SUBTRACTS “inferior” varieties; it does not add. The source of variation is various chance processes, chance being understood in terms of processes creating variations uncorrelated with the functional outcomes of interest: i.e. non-foresighted.

If you have a case, make it . . . .

In making that case I suggest you start with OOL, and bear in mind Meyer’s remark on that subject in reply to hostile reviews:

The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form).

Notice the terminology he naturally uses and how close it is to the terms I and others have commonly used, functionally specific complex information. So much for that rhetorical gambit.

He continues:

Second, no undirected chemical process has demonstrated this power.

Got that?

Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question . . . . In order to [[scientifically refute this inductive conclusion] Falk would need to show that some undirected material cause has [[empirically] demonstrated the power to produce functional biological information apart from the guidance or activity of a designing mind. Neither Falk, nor anyone working in origin-of-life biology, has succeeded in doing this . . .}

In effect, on identifying traces from the remote past, and on examining and observing candidate causes in the present and their effects, one may identify characteristic signs of certain acting causes. These, on observation, can be shown to be reliable indicators or signs of particular causes in some cases.

From this, by inductive reasoning on inference to best explanation, we may apply the Newtonian uniformity principle of like causing like.

It so turns out that FSCO/I is such a sign, reliably produced by design, and design is the only empirically grounded adequate cause known to produce such. Things like codes [as systems of communication], complex organised mechanisms, complex algorithms expressed in codes, linguistic expressions beyond a reasonable threshold of complexity, algorithm-implementing arrangements of components in an information processing entity, and the like are cases in point.

It turns out that the world of the living cell is replete with such, and so we are inductively warranted in inferring design as best causal explanation. Not, on a priori imposition of teleology, or on begging metaphysical questions, or the like; but, on induction in light of tested, reliable signs of causal forces at work.

And in that context the Chi_500 expression,

Chi_500 = Ip*S - 500, bits beyond the solar system threshold

. . . is a metric that starts with our ability to measure explicit or implicit information content, directly [an on/off switch such as for the light in a room has two possible states and stores one bit; two such switches store two bits . . . ] or by considering the relevant analysis of observed patterns of configurations. It then uses our ability to observe functional specificity [does any configuration whatever suffice, or do we need well-matched, properly arranged parts with limited room for variation and alternative arrangement before function breaks?] to move beyond mere information-carrying capacity to functionally specific information.
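In code form (a minimal sketch of my own; the input figures below are placeholders, not measurements), the metric is simply:

    def chi_500(info_bits, functionally_specific):
        """Chi_500 = Ip*S - 500: information content Ip in bits, times the
        specificity dummy variable S (1 if the configuration is observed
        to be functionally specific, else 0), minus the 500-bit
        solar-system threshold. A positive result flags design."""
        s = 1 if functionally_specific else 0
        return info_bits * s - 500

    # Illustrative values:
    print(chi_500(175, True))    # -325: within reach of chance on available resources
    print(chi_500(1000, True))   #  500: beyond the solar-system threshold
    print(chi_500(1000, False))  # -500: complex but unspecific, no design flag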

This is actually commonly observed in a world of info technology.

I have tried the experiment of opening up the background file for an empty, basic Word document, then noticing the many seemingly meaningless repetitive elements. So, I pick one effectively at random, clip it out, and save the result. Then, I try opening the file from Word again. It reliably breaks. Seeming “junk digits” are plainly functionally required and specific.
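Anyone wishing to replicate something similar can automate it (a rough sketch under my own assumptions: a modern .docx is a ZIP container whose internal checksums fail once bytes are cut out; "blank.docx" is a placeholder filename):

    import random
    import zipfile

    def cut_and_test(path="blank.docx", cut=4):
        """Delete a few bytes at a random offset and test whether the
        mangled file still passes ZIP integrity checks."""
        data = bytearray(open(path, "rb").read())
        start = random.randrange(len(data) - cut)
        del data[start:start + cut]
        out = path.replace(".docx", "_cut.docx")
        open(out, "wb").write(bytes(data))
        try:
            with zipfile.ZipFile(out) as z:
                return z.testzip() is None  # True only if every checksum passes
        except zipfile.BadZipFile:
            return False

    # Nearly every random cut breaks the file: the "junk-looking" bytes
    # are functionally required and specific.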

But, as we saw from the infinite monkeys discussion, it is possible to hit on functionally specific patterns by chance, if they are short enough. Though, discovering when one has done so can be quite hard. The sum of the random document exercises is that spaces of about 10^50 configurations are searchable within available resources. At 25 ASCII characters and 7 bits per character, that is about 175 bits.

The proverbial needle in the haystack

Taking in the fact that for each additional bit used in a system the config space DOUBLES, the difference between 175 or so bits and the solar system threshold (adopted on the basis of exhausting the capacity of the solar system’s 10^57 atoms across 10^17 s or so) is highly significant. At the {500-bit} threshold, we are in effect only able to take a sample in the ratio of one straw’s size to a cubical haystack as thick as our galaxy, 1,000 light years. As CR’s screen image case shows, and as imagining such a haystack superposed on our galactic neighbourhood would show, by sampling theory we could only reasonably expect such a sample to be typical of the overwhelming bulk of the space: straw.
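The back-of-envelope arithmetic is easy to check. In the sketch below, the operations budget (10^57 atoms, each making 10^14 observations per second for 10^17 s) is the illustrative assumption behind the threshold:

    import math

    # Logarithms (base 10) keep the astronomically large numbers tractable.
    budget_log10 = 57 + 14 + 17                # ~10^88 possible observations
    space_175_log10 = 175 * math.log10(2)      # ~10^52.7 configurations
    space_500_log10 = 500 * math.log10(2)      # ~10^150.5 configurations

    print(f"175-bit space  ~ 10^{space_175_log10:.1f}")
    print(f"500-bit space  ~ 10^{space_500_log10:.1f}")
    # Fraction of the 500-bit space the whole budget could ever sample:
    print(f"sample fraction ~ 10^{budget_log10 - space_500_log10:.1f}")  # ~10^-62.5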

In short, we have a very reasonable practical threshold for cases where examples of functionally specific information and/or organisation are sufficiently complex that we can be comfortable that such cannot plausibly be accounted for on blind — undirected — chance and mechanical necessity.

{This allows us to apply the following flowchart of logical steps in a case . . . ladder of conditionals . . . structure, the per aspect design inference, resting on a QUANTITATIVE approach grounded in a reasonable threshold metric model:

The per aspect explanatory filter that shows how design may be inferred on empirically tested, reliable sign
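Reduced to code, the ladder of conditionals reads roughly as follows (a sketch only; the three predicates are stand-ins for the empirical tests and the threshold metric just discussed):

    from dataclasses import dataclass

    @dataclass
    class Aspect:
        highly_contingent: bool         # divergent outcomes under similar starting conditions?
        complex_beyond_threshold: bool  # e.g. Chi_500 > 0
        functionally_specific: bool     # narrow zone of functional configurations?

    def per_aspect_filter(a: Aspect) -> str:
        """The per-aspect explanatory filter as a ladder of conditionals.
        Design is NOT the default: it is inferred only after the two
        defaults (necessity, then chance) are defeated in turn."""
        if not a.highly_contingent:
            return "mechanical necessity (seek a law)"
        if not (a.complex_beyond_threshold and a.functionally_specific):
            return "chance (the default for high contingency)"
        return "design (best explanation on reliable sign)"

    # A fair die read: high contingency, but neither complex nor specific.
    print(per_aspect_filter(Aspect(True, False, False)))  # chance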

On the strength of that, we have every epistemic right to infer that cell based life shows signs pointing to design. {For instance, consider how ribosomes are used to create new proteins in the cell:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

And, in so doing, let us zoom in on the way that the Ribosome uses a control tape, mRNA, to step by step assemble a new amino acid chain, to make a protein:

Step by step protein synthesis in action, in the ribosome, based on the sequence of codes in the mRNA control tape (Courtesy, Wikipedia and LadyofHats)

This can be seen as an animation, courtesy Vuk Nikolic:

[vimeo 31830891]

Let us note the comparable utility of punched paper tape used in computers and numerically controlled industrial machines in a past generation:

Punched paper tape, as used in older computers and numerically controlled machine tools (Courtesy Wiki & Siemens)

Given some onward objections, on May 4th I added an infographic on DNA . . .

Fig I.0: DNA as a stored code exhibiting functionally specific complex digital information (HT: NIH)

And a similar one on the implied communication system’s general, irreducibly complex architecture:

A communication system. Notice the required arrangement of a set of well-matched, corresponding components that are each necessary and jointly sufficient to achieve function, e.g. coder and decoder, transmitter, channel and receiver, etc.
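As a toy illustration of that joint necessity (my own sketch, not a biological model), consider a minimal source-to-sink chain in which each stage is required for the message to arrive intact:

    def encode(msg): return [ord(c) for c in msg]          # coder: text -> symbols
    def transmit(sig): return list(sig)                    # channel: carries symbols
    def decode(sig): return "".join(chr(s) for s in sig)   # decoder: symbols -> text

    assert decode(transmit(encode("hello"))) == "hello"
    # Drop any stage, or mismatch the encode/decode conventions, and the
    # received signal is gibberish: the components are each necessary
    # and only jointly sufficient for communication.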

In turn, that brings up the following clip from the ID Foundation series article on Irreducible Complexity, on Menuge’s criteria C1 – 5 for getting to such a system (which he presented in the context of the flagellum):

But also, IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:

For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

(Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

In short, the co-ordinated and functional organisation of a complex system is itself a factor that needs credible explanation.

However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]

And yet, unless all five factors are properly addressed, the matter has plainly not been adequately explained. Worse, the classic attempted rebuttal, the Type Three Secretory System [T3SS], is not only based on a subset of the genes for the flagellum [as part of its self-assembly the flagellum must push components out of the cell], but functionally it works to help certain bacteria prey on eukaryote organisms. Thus, if anything, the T3SS is not only a component that would itself have to be integrated under C1 – 5, but is credibly derivative of the flagellum and an adaptation subsequent to the origin of eukaryotes. Also, it is just one of several components, and is arguably itself an IC system. (Cf Dembski here.)

Going beyond all of this, in the well-known Dover 2005 trial, and citing ENV, ID lab researcher Scott Minnich testified to a direct confirmation of the IC status of the flagellum:

Scott Minnich has properly tested for irreducible complexity through genetic knock-out experiments he performed in his own laboratory at the University of Idaho. He presented this evidence during the Dover trial, which showed that the bacterial flagellum is irreducibly complex with respect to its complement of thirty-five genes. As Minnich testified: “One mutation, one part knock out, it can’t swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We’ve done that with all 35 components of the flagellum, and we get the same effect.” [Dover Trial, Day 20 PM Testimony, pp. 107-108. Unfortunately, Judge Jones simply ignored this fact reported by the researcher who did the work, in the open court room.]

That is, using “knockout” techniques, the 35 relevant flagellar proteins in a target bacterium were knocked out then restored one by one.

The pattern for each DNA-sequence: OUT — no function, BACK IN — function restored.

Thus, the flagellum is credibly empirically confirmed as irreducibly complex. [Cf onward discussion on Knockout Studies, here.]
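The logic of that knockout protocol is compact enough to state as code (a purely illustrative sketch; the gene list and the motility predicate are toy stand-ins for the wet-lab assays):

    def irreducibly_complex(genome: set, genes: list, functions) -> bool:
        """Empirical IC test: every single-gene knockout must abolish the
        function, and every restoration must recover it."""
        for gene in genes:
            genome.discard(gene)       # knock the part out
            if functions(genome):
                return False           # function survived: part not indispensable
            genome.add(gene)           # put a good copy of the gene back in
            if not functions(genome):
                return False           # restoration must recover function
        return True

    # Toy model: motility requires all 35 "genes" (hypothetical labels).
    FLAGELLAR = [f"fli{i}" for i in range(35)]
    motile = lambda g: all(x in g for x in FLAGELLAR)
    print(irreducibly_complex(set(FLAGELLAR), FLAGELLAR, motile))  # True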

The kinematic von Neumann self-replicating machine [vNSR] concept is then readily applicable to the living cell:

The kinematic vNSR shows how stored coded information on a tape can be used to control a self-replicating automaton, relevant to both paper tape and the living cell

Mignea’s model of minimal requisites for a self-replicating cell [speech here] is then highly relevant as well:

Mignea’s schematic of the requisites of kinematic self-replication, showing duplication and arrangement then separation into daughter automata. This requires stored algorithmic procedures, descriptions sufficient to construct components, means to execute instructions, materials handling, controlled energy flows, wastes disposal and more. (Source: Mignea, 2012, slide show as linked; fair use.)

HT CR, here’s a typical representation of cell replication through Mitosis:

[youtube C6hn3sA0ip0]

And, we may then ponder Michael Denton’s reflection on the automated world of the cell, in his foundational book, Evolution: A Theory in Crisis (1986):

To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.
We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . .
Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell’s manufacturing capability is entirely self-regulated . . . .[[Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331. This work is a classic that is still well worth reading. Emphases added. (NB: The 2009 work by Stephen Meyer of Discovery Institute, Signature in the Cell, brings this classic argument up to date. The main thesis of the book is that: “The universe is comprised of matter, energy, and the information that gives order [[better: functional organisation]  to matter and energy, thereby bringing life into being. In the cell, information is carried by DNA, which functions like a software program. The signature in the cell is that of the master programmer of life.” Given the sharp response that has provoked, the onward e-book responses to attempted rebuttals, Signature of Controversy, would also be excellent, but sobering and sometimes saddening, reading.) ]}

An extension of this gives us reason to infer that body plans similarly show signs of design. And, related arguments give us reason to infer that a cosmos fine-tuned in many ways that converge on enabling such C-chemistry, aqueous medium, cell based life on habitable terrestrial planets or similarly hospitable environments also shows signs of design.

Not on a priori impositions, but on induction from evidence we observe and reliable signs that we establish inductively. That is, scientifically.

Added, May 11: Remember, this focus on the cell is in the end because it is the root of the Darwinist tree of life, and as such the origin of life is pivotal:

The Smithsonian’s tree of life model, note the root in OOL

Multiply that by the evidence that there is a definite, finitely remote beginning to the observed cosmos, some 13.7 BYA being a common estimate and 10 – 20 BYA a widely supported ballpark. That says the cosmos is contingent, has underlying enabling causal factors, and so is a contingent, caused being.

All of this to this point is scientific, with background logic and epistemology.

Not theology, revealed or natural.

It owes nothing to the teachings of any religious movement or institution.

However, it does provide surprising corroboration to the statements of two apostles who went out on a limb philosophically by committing the Christian faith in foundational documents to reason/communication being foundational to observed reality, our world. In short, the NT concepts of the Logos [John 1, cf Col 1, Heb 1, Ac 17] and of the evident, discernible reality of God as intelligent creator from signs in the observed cosmos [Rom 1 cf Heb 11:1 – 6, Ac 17 and Eph 4:17 – 24] are supported by key findings of science over the past 100 or so years.

There are debates over timelines and interpretations of Genesis, as well there would be.

They do not matter, in the end, given the grounds advanced on the different sides of the debate. We can live with Gen 1 – 11 being a sweeping, often poetic survey meant only to establish that the world is not a chaos, and that it is not a product of struggling with primordial chaos or wars of the gods or the like. The differences between the Masoretic genealogies and those in the ancient translation, the Septuagint, make me think we need to pause over attempts to precisely date creation on such evidence. Schaeffer probably had something right in his suggestion that one would be better advised to see this as describing the flow and outline of Biblical history rather than a precise, sequential chronology. And that comes up once we can see how consistently reliable the OT is as reflecting its times and places, patterns and events, even down to getting names right.

A Strawman

So, debating Genesis is to follow a red herring and go off to pummel a strawman smeared with stereotypes and set up for rhetorical conflagration. A fallacy of distraction, polarisation and personalisation. As is too often found as a habitual pattern of objectors to design theory.

What is substantial is the evidence on origins of our world and of the world of cell based life in the light of its challenge to us in our comfortable scientism.

And, in that regard, we have again — this is the umpteenth time, G; and you have long since worn out patience and turning the other cheek in the face of personalities, once it became evident that denigration was a main rhetorical device at work — had good reason to see that design theory is a legitimate scientific endeavour, regardless of rhetorical games being played to make it appear otherwise.>>

_______________

In short, it is possible to address the design inference and wider design theory without resort to ideologically loaded debates. And, as a first priority, we should. END

______________

PS: In support of my follow-up to EA at 153 below, at 157, it is worth adding (May 8th) the Trevors-Abel diagram from 2005 (SOURCE), contrasting the patterns of OSC, RSC and FSC:


Figure 4: Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents “what works best.” The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale. Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function.
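The compressibility axis of that figure can be probed crudely with a general-purpose compressor (a sketch; zlib ratios only approximate algorithmic compressibility, and the sample strings below are my own choices):

    import os
    import zlib

    def ratio(s: bytes) -> float:
        """Compressed size / original size: near 0 for order, near 1 for randomness."""
        return len(zlib.compress(s, 9)) / len(s)

    ordered = b"jj" * 500          # OSC: repetitive, compresses to almost nothing
    random_seq = os.urandom(1000)  # RSC: essentially incompressible
    functional = (b"To grasp the reality of life as it has been revealed by "
                  b"molecular biology, we must magnify a cell a thousand million "
                  b"times until it is twenty kilometers in diameter and resembles "
                  b"a giant airship large enough to cover a great city like "
                  b"London or New York.")  # FSC: meaningful English prose

    for name, s in (("OSC", ordered), ("RSC", random_seq), ("FSC", functional)):
        print(name, round(ratio(s), 2))
    # Typical output: OSC ~0.02, RSC ~1.0, FSC ~0.6-0.7 -- functional text
    # sits nearer the random end, with modest redundancy, as the figure shows.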


Comments
kairosfocus, A fool (or a liar) is as a fool (or a liar) does; do NOT blame the messenger. And maybe your "fiery hell" is the cause of global warming ;)
Joe
May 9, 2013 at 06:08 AM PDT
Lizzie is [SNIP -- cf here KF]:
This is really quite simple: 1. Is x designed? 2. Who designed x? I trust you can see that these are separate questions and that it is possible to answer the first without ever answering, or even asking for that matter, the second.
No, they are not separate questions, Eric. It is the fundamental error of ID to think that they are. This is why E-prime is so useful in rooting out such errors. Translating into E-prime: 1. Did somebody or something design x? 2. Who designed x? My first is logically identical to Eric’s first, but written in E-Prime we can see that the questions are not separate at all, but intimately related. To answer the first we need to consider the second, and to answer the second, we need to consider the first.
Total nonsense. We do NOT have to consider the second question in order to answer the first, and Lizzie cannot make her case that shows otherwise. The who comes AFTER we have determined design, Lizzie. Science cannot answer the who without first determining design, duh. And then she reposts keiths' total nonsense about unguided evolution being a better explanation than ID. Unfortunately keiths is an imbecile that couldn't understand science if his life depended on it. Earth to Lizzie: Do you really think that your bald assertions mean something?
Joe
May 9, 2013 at 04:06 AM PDT
But you only measure the way they 'appear'. If they don't appear you can't. Not realistically. 'Appear'! It's that word again. It's getting like 'counter-rational': a boogey-man.... as y'all say on your side of the Atlantic.
Axel
May 8, 2013 at 03:39 PM PDT
'To rule out a non-design mechanism simply because you have decided, a priori, that the appearance of design is evidence of design, is to assume your consequent'
No, Lizzie. It's kind of empirical stuff. You know, the way things appear. And you kind of measure their characteristics 'n' stuff, when it's appropriate.
Axel
May 8, 2013 at 03:35 PM PDT
Oh. Sorry. I meant, Lizzie's proofs of the occurrence of evolution.
Axel
May 8, 2013 at 03:31 PM PDT
It's the [SNIP -- too close], I think.
Axel
May 8, 2013 at 03:30 PM PDT
Joe (& EL): The basic issue is in the root of the Darwinist tree of life model of the history of life. Yes, I know the theory neatly omits this, though it often crops up in school textbooks. No roots, nothing further, just as -- as EA highlighted -- no main branches and innumerable successive finely graded fossils, no twigs and leaves. So, let us see a sound Darwinist/evo mat account of OOL, grounded on observations that break the general inductive conclusion that FSCO/I is a reliable sign of design as cause. The cell is chock full of FSCO/I and so far, no observationally anchored evo mat account and no Nobel Prize consequently. (Prigogine -- as he admitted -- doesn't count. Never mind what popular and news mags said 30 years ago.) Similarly, let us see an actual, observationally warranted Darwinist account of OO body plans that answers inter alia the Cambrian fossil revo challenge that stumped Darwin, and also the other major body plan gaps. For instance the actual origin of adaptations to make a whale with associated pop genetics solutions would help. So would an account of origin of our language using capacity. As in, what I am getting back to is that it will soon be eight months since the 6,000 word darwinist essay challenge with no serious takers. Sniping games at TSZ, etc., don't count. That tells me the REAL degree of observationally grounded confidence in the Darwinist system. (As opposed to the Lewontinian a prioris that lead true believers in circles of false triumphalism, in which gross extrapolations and dubious icons seem much stronger than they are.) LOW. KF
kairosfocus
May 8, 2013 at 11:30 AM PDT
TJguy, I read your post on no death before the fall, and I have to disagree; I have made an argument about that on my blog, responses are welcome. http://www.thetruthenquirer.blogspot.com/2013/04/perfect-creation-vs-perfect-plan.html
Andre
May 8, 2013 at 10:49 AM PDT
Lizzie continues:
So what you need to do is to make a differential prediction – something that predicts something that would not be predicted on the basis of Darwinian theory, but would be predicted on the basis of an Intelligent Designer.
Darwinian theory does not make any predictions that are exclusive to it. However, Darwin did say what would falsify his claims, and modern biologists have uncovered many such systems and subsystems that fit his falsification criteria. IOW, for all intents and purposes, Darwin has been falsified and darwinian mechanisms have been found wanting. However that will not stop you from saying otherwise. So please, carry on.
Joe
May 8, 2013 at 08:20 AM PDT
Joe:
Then Lizzie also claims that darwinian evolution produces testable hypotheses along with predictions yet she never sez what any of those are.
It does produce a testable prediction, which Darwin himself focused on: there should be "innumerable" intermediate fossils in the fossil record, demonstrating slight, successive changes from A to B to Z. This has been tested and is demonstrably false. It also predicts that reproduction is the be-all and end-all and that everything about an organism should converge on this all-important "goal." If we look at current biology, this is demonstrably false. As to concrete predictions about what will happen to a particular species or another, whether complexity will increase or decrease, whether a particular body plan, or organ will arise, etc., you are right, there are no concrete predictions. Because the central, key, most fundamental "explanation" in all of evolutionary theory is simply this: Stuff Happens.
Eric Anderson
May 8, 2013 at 08:09 AM PDT
Now Lizzie thinks that we think that chance and necessity are actual mechanisms. Earth to Lizzie: In a fair roll of the dice, what comes up is a combination of chance and necessity. Necessity because gravity and force get the dice going, as well as the friction that stops the movement. And chance because of the probability distribution. When getting dealt, via a fair deal, a hand of cards, the cards you get are the result of a combination of chance and necessity. Necessity because, well, you are playing cards so it is necessary to be dealt a hand, and chance due to the probability distribution. Nothing else. That said, if someone, playing 5 card stud, gets dealt a royal flush several times in one night, you would expect something else besides chance and necessity at play. IOW there wasn't an equal probability distribution. Then Lizzie also claims that darwinian evolution produces testable hypotheses along with predictions, yet she never sez what any of those are. So it appears that blatant misrepresentation and lies are the best the TSZ have. Hey Lizzie- the NCSE has an opening for you.
Joe
May 8, 2013 at 07:08 AM PDT
EA@153: Significant observations. The strawman distortion of design thought is an ever present danger. At one level, it can be innocent, an error. At the next level, one has been taken in by the distortions presented by critics [hence, inter alia the UD WACs]. At this level, already, one is in default of the duty of care to hear both sides carefully. At the next level, one is willfully propagating what one knows or SHOULD know -- per duties of care to truth and fairness -- is a strawman caricature. Then, at the final level, one is manufacturing distortions, in the teeth of these duties of care, often in the teeth of cogent correction. Let's go to the OP again. 1 --> Notice context, where specific considerations out of which modern design theory emerged, OOL [per Thaxton et al TMLO, 1984] are in view. 2 --> Notice, the focus on information, especially the coded algorithmic information in DNA used to synthesise proteins, and the note on how that is also a part of the vNSR that allows self replication of an encapsulated, intelligently gated, self assembling, self-maintaining metabolic automaton with vNSR. 3 --> Observe how FSCO/I arises naturally as a key feature of all this. First, directly in coded strings. Second, indirectly as the result of reducing functional organisation networks of nodes and arcs to structured string lists that describe it similar to how a 3-d object is represented in a drawing package such as AutoCAD [or, these days, Blender]. 4 --> Now, let us focus how FSCO/I can be quantified, by taking up models of specified complexity and deducing info content then adjusting the value per objective grounds for seeing specificity of function, and passing a threshold of sufficient complexity on the gamut of, say the solar system: Chi_500 = Ip*S - 500, bits beyond the solar system threshold 5 --> Observe the context of empirical support for same, that on billions of test cases, growing with every post in this or a similar blog thread, we routinely see in action the ONLY observed source of such, intelligence. 6 --> Where also intelligence, design etc can be reasonably defined, independent of this debate, and where for example the beavers and their dams that are adapted to specific circumstances allows us to see good reason not to confine intelligence and design to humans only. 7 --> So, there is direct warrant for FSCO/I as a sign of design. This, being backed up by the sampling analysis comparable to picking a single handful of beans from a truckload at random where there are some few gold beads scattered, which will predictably and reliably pick up the bulk not the exception. For excellent reasons. 8 --> All, in a context where the requisites of FSCO/I: high contingency, but expressed in a way that requires multiple parts to be specifically arranged and organised to achieve function, sharply constrains possible arrangements W, to zones T that are much narrower and by consequence, un-representative of the bulk. 9 --> Now, let us focus the flowchart that I think was first developed c. 2009, maybe 2008. A check says, last modded in Dec 2008 (for at least the version used in my always linked briefing note). 10 --> Here we have an exploration of some object, entity, process network, phenomenon, situation, etc. on an aspect by aspect basis. That term being used to denote that we abstract out features of interest, one by one, to assess their likely causal source. 
It is entirely possible for an entity to have parts or aspects or behaviours, etc diversely ascribable to chance, necessity and design. 11 --> Now, this is a case structure. Not presented in parallel, switch style, but in an explicit, step by step ladder of IF X THEN A, ELSE B steps. With a definite start point and flow of control. A guide to multi-fork decision, with certain options given priority, indeed default. 12 --> First option being mechanical necessity, i.e. deterministic dynamical law reflecting forces acting similar to those of Newtonian Dynamics as an ideal model. the criterion for this out being: regular, repeatable, predictable low contingency pattern. Similar to how a dropped heavy object near earth initially falls reliably at 9.8 N/kg. Such a model can also accommodate change processes and circumstances that modify behaviour by changing parameters. It fits with periodic oscillations or patterns etc. There are reams of physics and chemistry etc. that are covered by this. And the response is to explore those reams if that is seen. 13 --> This first default option is defeated by high contingency: under sufficiently similar initial circumstances, we see materially divergent outcomes not ascribable to mere noise or scatter or experimental error or perturbations from interfering neighbouring entities etc. 14 --> On the high contingency side, there is a second default. Chance. 15 --> That is, the built-in assumption -- default -- if high contingency obtains is that chance forces and factors are at work. That is, that either:
a: there is a clash of uncorrelated streams of actions leading to a scattered outcome [similar to how one can use the last four digits of phone numbers in a textbook as a random number table as -- usually -- there is little sustained correlation between names picked at random on pages picked at random and line codes assigned], and/or b: there is sensitive dependence on initial or intervening conditions that causes an unpredictable outcome in accordance with some random distribution model [e.g. thrown fair dice], and/or c: there is a cluster of underlying factors that each create a small effect but which vary in a significantly uncorrelated way leading to a distribution of outcomes clustered on some mean in some sort of bell-curve or the like, and/or d: there is some other similar stochastic pattern giving rise to a population distribution, and/or e: there is quantum-level randomness giving rise to a stochastically distributed macro-observable outcome, such as with radioactivity or the like, and/or f: there is some other comparable pattern or combination of influences or circumstances.
16 --> In sum, there is no good reason to associate the observed high contingency with a goal, or an organisation that is goal-directed and foresighted. 17 --> This is the high-contingency default. (And, this has been pointed out, over and over and over across the course of years, just such as been willfully ignored by major persons now associated particularly with TSZ.) 18 --> However, there are other circumstances that are not amenable to such an explanation by defaulting to in effect it could as well have been this as any other in a range of possibilities, maybe biased a bit by factors that move us away from a flat random underlying pattern. (E.g. if one has two dice and sums up faces, a flat random underlying pattern gives a peaked result. The same holds for a tossed coin, where the cumulative number of H's and T's will sharply peak, but have tails.) 19 --> A good example is the text of this post. There is a string of ASCII characters, which could in principle be generated by a random text generator, as the OP examines. 20 --> However, the sort of definite functional pattern in the strings that we see in accordance with meaning -- semiotics -- which is physico-dynamically arbitrary but fits with a conventional framework for human communication in English. 21 --> That functional pattern is taken from a population overwhelmingly dominated by gibberish: ti3utogugio[244 . . . But, it is not gibberish. 22 --> Nor is this an oscillating or spatially distributed repeating pattern similar to crystals or swinging pendulums:eseseseseseseses . . . 23 --> That is, as Trevors and Abel have long since pointed out, following Wicken and before him Orgel, random sequence complexity is diverse from orderly sequence complexity and is separately distinct from functional sequence complexity. 24 --> As I noted in the second background note to the ID Foundations series, over two years ago now:
In 2005, David L Abel and Jack T Trevors published a key article on order, randomness and functionality, that sets a further context for appreciating the warrant for the design inference. The publication data and title for the peer-reviewed article are as follows: Theor Biol Med Model. 2005; 2: 29. Published online 2005 August 11. doi: 10.1186/1742-4682-2-29. PMCID: PMC1208958 Copyright © 2005 Abel and Trevors; licensee BioMed Central Ltd. Three subsets of sequence complexity and their relevance to biopolymeric information A key figure (NB: in the public domain) in the article was their Fig. 4: [Figure, showing a 3-D pattern of sequence possibilities, to be added as an appendix to the OP] Figure 4: Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from order towards randomness. The Y2 (Z) axis plane shows where along the same complexity gradient (X-axis) that highly instructional sequences are generally found. The Functional Sequence Complexity (FSC) curve includes all algorithmic sequences that work at all (W). The peak of this curve (w*) represents “what works best.” The FSC curve is usually quite narrow and is located closer to the random end than to the ordered end of the complexity scale. Compression of an instructive sequence slides the FSC curve towards the right (away from order, towards maximum complexity, maximum Shannon uncertainty, and seeming randomness) with no loss of function. We may discuss this figure in steps: 1 –> The data structure T & A have in view is the string, where symbols are chained in a line like: c-h-a-i-n-e-d 2 –> Since any other data structure can be built up from a combination of strings, this is without loss of generality. 3 –> They then envision three types of sequences: (a) orderly ones that are repetitive: jjjjjjjjjjjjjjjjjjjj (b) random ones that are essentially incompressible: f3erug4huevb (c) functional ones, that are almost as incompressible, but are constrained by that functionality: this is a functional, non- orderly and non-random sequence 4 –> Fig 4 then shows how these three types of sequences can be represented in a 3-dimensional space that in principle can be a metric: for, order and randomness are on two ends of a continuum of compressibility and a similar continuum of complexity, both being low on algorithmic [or, by extension, linguistic-contextual] functionality. 5 –> The location of the FSC peak is particularly revealing: first, it is not quite as incompressible as a truly random sequence, because there is normally some redundancy in meaningful messages. So, the Shannon Information carrying capacity metric is not quite what is needed. 6 –> Compressibility metrics will show that FSC sequences will be slightly less resistant to compression than are truly random sequences — for the latter, to communicate them, you essentially have to quote them. 7 –> By contrast, an orderly sequence can be compressed by giving its unit cell then saying replicate n times. It is highly compressible. 8 –> But neither orderly nor random sequences are generally able to function, and so we see a sharp peak in the curve as we hit the FSC. 9 –> If we imagine the curve as sitting in a sea that floods the diagram, we can see how the image of islands of isolated function can emerge: FSC peaks up out of the sea of non-functional orderly or random sequences. 
And of course, functionality is always in a context: parts or components or elements combine to do the job in hand. 10 –> J S Wicken, in his key 1979 remarks, captures the next key point: we routinely and habitually observe that functional sequences are the product of design, and thus they are a longstanding puzzle for those who would account for living forms on natural selection:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and note added. Also, the idea-roots of a term commonly encountered at UD, functionally specific, complex information [FSCI], should be obvious. The onward restriction to digitally coded FSCI [dFSCI] as is seen in DNA — and as will feature below, should also be obvious.)]
[I then describe a metric that is similar to the Chi_500 metric but which I have retired from use in favour of the Chi_500 metric. ] 11 –>We can compose a simple metric that would capture the idea: Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI in functionally specific bits, as the simple product: FX = f*c*b, in functionally specific bits 12 –> Actually, we commonly see such a measure; e.g. when we see that a document is say 197 kbits long, that means it is functional as say an Open Office Writer document, is complex and uses 197 k bits storage space. [The second page then continues] 13 –> Durston et al, in 2007, extended this reasoning, by creating a more sophisticated metric that they used to measure the FSC value, in functional bits, or FITS, for 35 protein families [where a certain range of variants are functional, folding correctly and being biologically active]; which was again published as a peer-reviewed article. Excerpting the UD Weak Argument Correctives, no 27:
[an] empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper “Measuring the functional sequence complexity of proteins”, and is based on an application of Shannon’s H (that is “average” or “expected” information communicated per symbol: H(Xf(t)) = -[SUM]P(Xf(t)) logP(Xf(t)) ) to known protein sequences in different species . . .
25 --> So, the distinction is objective, observable and measurable. 26 --> That is, we have a way to OBJECTIVELY identify that something is functionally specific and simultaneously complex in a way that makes non-foresighted, chance sampling of the space of possibilities -- the configuration space -- by whatever chance mechanism, maximally implausible. 27 --> This has been backed up by formal and informal observation of cumulatively billions of cases, and it reliably indicates that FSCO/I is a highly reliable sign of intelligent, purposefully directed choice contingency -- contrivance, design or art -- as the best explanation to date of such cases. 28 --> To overturn such, the reasonable answer would be to provide counter-examples. But as Meyer, in the challenge noted in the OP, observes: that has not been done, and for the very same reasons why this sign is so reliable as an index of design. 29 --> It is quite evident, that had this been in a non-polarised context, none of the above would require constant belabouring over the course of years, it would be a no-brainer. But the problem is, this points strongly to signs of design in origin of life and of body plans, as well as by extension, in the origins of the cosmos. 30 --> The proper answer, however, remains the same: simply show substantial counter-examples and the inference to design on FSCO/I as reliable sign would collapse instantly and decisively. 31 --> The problem objectors have with that, of course, s that the threshold of complexity for inferring design has been set so high relative to the search resources of our solar system or of the observed cosmos as a whole, that such a counter-example is not likely to emerge. 32 --> And, it is no surprise to see that, repeatedly, counter examples suggested have turned out to depend crucially in implicitly (or even explicitly added) active information coming from an intelligent source. 33 --> As well, it is highly relevant to see how the objectors so often refuse to address the root of the darwinian tree of life, and how they often duck the cosmological fine tuning issue. (Look above and see how not one objector has taken this up.) 34 --> The significance of this is, that at OOL, the favourite out, the asserted or assumed powers of "natural selection" is off the table. This, because the origin of self replication per vNSR, is a major aspect of what needs to be accounted for. Once that beclouding assumption is out of the picture, it becomes instantly clear that FSCO/I is not credibly accounted for on blind chance and mechanical necessity, separately or in combination. 35 --> Similarly, the notion that we can posit an unobserved quasi-infinity of unobserved sub-cosmi with physics scattered at random, is not particularly plausible and reeks of being ad hoc. This is philosophising while wearing a lab coat, it is not empirically grounded science. 36 --> And that is before we observe that the operating point of our observed cosmos is LOCALLY isolated, so we need to in effect explain Leslie's isolated fly on a wall swatted by a bullet. Regardless of other sections being carpeted by flies, the one lone fly is hard to find and a reasonable target for a marksman. Fine tuning as an evident sign of design does not go away once multiverse speculations are admitted to the table. +++++++ So, here is my prediction, On track record, the above will have precisely zero significant impact on the circles of objectors we are dealing with. At most they will try to snip, distract, strawmannise and snipe. 
Hence my point in light of what I saw 20 - 25 years ago: we are dealing with bitter enders in significant part, who will cling to the ship until it goes down. Then, they will try to hop to a raft and act as though nothing happened. KF
kairosfocus
May 8, 2013 at 03:44 AM PDT
tjguy @134, well said. :)
Chance Ratcliff
May 7, 2013 at 11:43 PM PDT
Gregory: It is not ‘rude’ to tell the truth, except for the person who doesn’t want to hear it.
Here is the truth: You refuse to engage the material evidence of semiosis because it completely empties your twin claims that ID relies on an analogy between biological design and human design, and that there is no theory of human design. I have tried many times to get you to engage; below is the most recent:
Gregory: The best (read: authentic or legitimate) theories of ‘design’ and ‘intelligence’ are those that involve ‘designs’ and ‘designing’ by ‘intelligences’ that we can study here and now, or historically through evidences available of various kinds and types, e.g. typed or written documents, signed contracts, photographs, recordings, sketches, artwork, architecture, archaeology, etc. – all human-made things. --- UB: Gregory, each of your examples (documents, photographs, sketches, recordings, etc) all stem from living systems. As such, they all exemplify a singularly unique and readily identifiable material condition. This material condition is not demonstrated anywhere else in the physical record of the cosmos — except at the origin of life on Earth. They are all semiotic, i.e. they all have physicochemically arbitrary relationships instantiated in a material system. Therefore, the distinctions you require in order to rail against ID are decimated by the material evidence. This is why you will (as you must) continue to ignore that evidence. --- Gregory: (after 8 comments and 2300 words... no response) --- UB: Gregory says “[Big ID] regurgitates on analogies with human-made things. This is refuted by physical evidence – which you ignore for that very reason that you cannot refute it. Gregory says “There simply is no Big-ID theory of human-made things.” Humans are semiotic beings; sensory input and the exchange of information play a supreme role in virtually all human activities. This phenomenon entails specific material conditions. These material conditions tie the observation of human discourse to a larger set of semiotic observations, which include the origin of living systems. To the extent that design requires a “theory of human-made things”, your claim is simply and demonstrably false. If you’d like to attack this counterclaim, I will provide you my position in a single paragraph, and you can take the opportunity to show it to be false. --- Gregory: (after another 1020 words) “Humans are semiotic beings” – UB Bravo! And what does that have to do with OoL, OoBI or ‘human origins’? --- Upright BiPed: Gregory, I have previously commented on how you willfully ignore (and otherwise refuse to engage) material evidence that refutes your position. Then in comment #171, I stated that ‘humans are semiotic beings’, and immediately followed that statement with a straightforward material argument demonstrating precisely how that fact relates to OoL. In response, you ignored my argument in its totality and responded with: “Humans are semiotic beings” Bravo! And what does that have to do with OoL. It requires integrity to engage well-reasoned opposing arguments. Your response above is a sufficiently clear example of how you undermine that integrity in yourself. It also demonstrates the lengths you are willing to go intellectually in order to sell your anti-ID position, as well as why your efforts are destined to the failure you’ve experienced with them thus far. --- Gregory: (no response)
All your haughty posturing is for naught. You simply avoid material evidence. And since you’ve already demonstrated that both sides of your “id versus ID” screenplay concern themselves with material evidence anyway, perhaps the question for those watching your divisive, self-important politics is what does the evidence have to say, and why are you ignoring it? The evidence shows that the activity of living things, and the origin of life, both share a singularly unique material condition among physical systems. That condition is what is physically required for semiosis to occur. In other words, what humans do and what is required for life (and evolution) share a singular physical requirement which can be readily identified. Obviously, you would avoid that evidence if your professional hat is hung on the falsehood that ID requires an analogy between human design and biological design.
Upright BiPed
May 7, 2013 at 09:11 PM PDT
Eric, yes I can see where somebody could make that mistake. The question being: is it really the result of confusion, or an intentional unsophisticated smear, or is it really the measure of the quality of their refutations? It sounds like sloganeering, something that's fit for a bumper sticker or ball cap. Design is the default, pass it on.
Chance Ratcliff
May 7, 2013 at 07:30 PM PDT
Chance @144: I think someone could mistake the design filter as having design as the default if the design filter worked as follows: "Unless it can be affirmatively proven that x was brought about by either chance or necessity, then x was designed." Or slightly more softly: "Unless it can be affirmatively proven that x could have been brought about by either chance or necessity, then x was designed." Articulated as such, the filter would have design as its default. Unfortunately for Patrick and others who have misrepresented the filter thusly, that is not how the filter works. The differences lie in: (i) the fact that the filter does not demand definitive proof of chance or necessity, only a reasonable probability, and (ii) there is, in addition, some requirement of affirmative evidence for design.*
-----
* Because design and non-design are, by definition, mutually exclusive, some people may mistakenly think that affirmative evidence for design is just the oft-derided "negative evidence" against chance and necessity. Due to the mutual exclusivity it plays that role too, but the affirmative evidence for design needs to be acknowledged for its own positive evidentiary side.
Eric Anderson
May 7, 2013 at 05:50 PM PDT
Gregory to Kairosfocus:
Please link to the thread and post where you carefully, clearly and *theoretically* distinguish between what is more commonly known as ‘design theory’ (i.e. which many legitimate scholars and practitioners around the world use) and what you are advocating as ‘Intelligent Design Theory’ (IDT) with your FSCO/I, KF/GEM.
On many occasions, I have not only made the distinction between classical design arguments and those proposed by the Discovery Institute, I have dramatized it. Gregory knows this, so he knows that he is not telling the truth. Ironically, when I explain the differences between, say, design arguments from Aristotle/Aquinas/Paley vs design arguments from Dembski/Meyer/Behe, Gregory calls me a divisive "separatist," but when I explain the similarities, he contradicts himself and calls me a conflating "flip-flopper." Gregory's war on reasoned discourse has become legendary.
When I wrote in #4 that “Most ‘design theorists’ reject IDT,” that is not a ‘false accusation’. It is truth telling. Whether that is uncomfortable for you or not is not my concern.
This is more laughable nonsense coming from one of the most undisciplined minds UD readers have ever encountered. I challenge Gregory to prove his unsubstantiated claim that most design theorists reject IDT. Surely he doesn't expect anyone here to take his word for it. He can begin by defining a "design theorist." If he gets past that hurdle, which is highly unlikely, he can provide some semblance of evidential support for his claim. One would think that a sociologist would at least be able to recognize the empirical requirements for a sociological claim. Remarkable!

StephenB
May 7, 2013 at 5:49 PM PDT
I suppose you're saying that implication is appropriate if there is warrant for the relationship between the sign and the signified. For all x, if x is an apple then x is red. The above is appropriate as long as no green apple has ever been produced, from what I'm gathering.

Keith Devlin, in his book Introduction to Mathematical Thinking, uses the following example as an exercise in logical implication: If the Yuan rises, the Dollar will fall. However, there is no deductive certainty there either; it's only appropriate as some sort of economic postulate. Yet the phrase is modeled on implication.

I guess this is part of my confusion: these types of propositions seem commonly used as examples of implication, yet they aren't all deductively provable.

Chance Ratcliff
May 7, 2013 at 5:41 PM PDT
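To make the truth table Chance references concrete, here is a minimal sketch (Python purely for illustration; the reading of p and q in terms of FSCO/I and design is an interpretive gloss, not anything the commenters supplied):

```python
# Enumerate the truth table for material implication p -> q,
# which is false only in the "false positive" row (p true, q false).

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is equivalent to (not p) or q."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={str(p):5}  q={str(q):5}  p -> q = {implies(p, q)}")

# Reading p as "x exhibits FSCO/I" and q as "x is designed", the one row
# that would break the inference is p=True, q=False -- an observed case of
# FSCO/I arising without intelligence, i.e. exactly the counter-instance
# discussed in the next comment.
```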
CR: The logic of implication is subsequent to the challenge of warrant. The pivotal issue is the reliability of the sign, and Newton's stricture holds: a fundamentally inductive process cannot deliver deductive certainty. Inductive arguments can rise to moral certainty, but not to deductive proof. If you introduce a provisional implication, that would work. That is, per warrant, F -p-> I. This is subject to one or more empirical counter-cases, [F AND NOT_I]: i.e. we observe F where the observed cause is non-intelligent. Once such a case obtains, the inference breaks, as the warrant is broken. However, the point is that the warrant includes reasons why it should be all but certain that we will not see the breakdown case. KF

kairosfocus
May 7, 2013 at 4:52 PM PDT
KF, perhaps you could clarify something for me. I'm still left wondering if your remarks suggest that logical implication is an inappropriate model for the empirical relationship between the indicia of design and the act of design, or if it's instead warranted. For instance, I find that the statement, "FSCO implies intelligence," seems justified upon examination of the evidence, as does, "FSCI implies design." In both of these cases, it seems to me that we find reasonable, if not remarkable, correlation with the truth table for the conditional operator, with false positives being excluded by definition, and each of the other cases consistent.

We could extend this to an example with predicate logic:

F: x has dFSCI
I: x is the product of intelligence
S: the set of all strings

Premise: ∀(x ∈ S)[F(x) → I(x)]
Falsification: ∃(x ∈ S)[F(x) ∧ ¬I(x)]

Do you think that modeling this with propositional and/or predicate logic is inappropriate or unhelpful, or do you ultimately think it's justified based upon warrant and inductive reasoning? Thanks in advance for your answer.

Chance Ratcliff
May 7, 2013 at 4:34 PM PDT
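Chance's premise-and-falsification pair can be sketched over a finite sample of observed cases (a hypothetical stand-in for the set S of all strings; the predicate names and data below are illustrative placeholders, not real measurements):

```python
from typing import NamedTuple

class Case(NamedTuple):
    name: str
    has_dfsci: bool       # F(x): x exhibits dFSCI
    known_designed: bool  # I(x): x is known to be the product of intelligence

observed = [
    Case("software source file", True, True),
    Case("Shakespeare sonnet", True, True),
    Case("random character string", False, False),
    Case("crystal growth pattern", False, False),
]

# Falsification condition: there exists x in S with F(x) and not I(x).
counterexamples = [c.name for c in observed if c.has_dfsci and not c.known_designed]

if counterexamples:
    print("Premise falsified by:", counterexamples)
else:
    print("No counterexample in this sample; the premise stands, provisionally.")
```

Note that no finite sample can prove the universal premise; a check like this can only fail to falsify it, which is the inductive, provisional character of the inference that kairosfocus describes.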
Gregory, you have chosen to be insistently rude and to make false accusations. You full well know the opening words you used in 4 above, and what they mean. Those words are false, and rise above disagreement to false accusation. In the below [cf. SB here and UB here, also now EA here, T here and here (and PJ here)], you proceed further to strike up a false, accusatory contrast between "legitimate" scholarship and people who think and reason as I do. That's denigration, character assassination and Alinskyite "all the angels are on our side and only devils are on yours" demonisation. Evidently you cannot defend the words you have used and wish to substitute a different matter, on the pretence that such a change of subject is good enough. It is not. Any further attempts to proceed without making amends will be deleted. And, BTW, you are still distorting what the inference to design is about. Game over. Goodbye. GEM of TKI

PS: Let me make something crystal clear. After observing your insistence on something that sets up a strawman, I do not give five cents' worth of credence to your attempt to impose an alien, procrustean-bed taxonomy on design thought. In particular, on checking I found your categories to be seriously rhetorically loaded, and that they strawmannise what I and others have sought to do [in ways that set up pretty serious ad hominems, probably including your false accusation of deception on my part -- something that is utterly beyond the pale . . . ]. What we have sought to do is an inductive exercise pivoting on identifying empirically reliable signs of design and then using inference on signs, as for instance is being discussed with CR. The proper way to break such an inductive, inherently provisional inference is to show a clear counter-instance. Which you either full well know or should know. But you have willfully ignored such, and have played the polarising troll. Your false accusation of deceit is the point where I say: enough. Either you make amends now, or leave this and other threads I own, or your further posts will be deleted.

++++++++

Please link to the thread and post where you carefully, clearly and *theoretically* distinguish between what is more commonly known as 'design theory' (i.e. which many legitimate scholars and practitioners around the world use) and what you are advocating as 'Intelligent Design Theory' (IDT) with your FSCO/I, KF/GEM.

When I wrote in #4 that "Most 'design theorists' reject IDT," that is not a 'false accusation'. It is truth telling. Whether that is uncomfortable for you or not is not my concern.

"Universal designism" fails as a bandage explanation here. It's not about whether one is a Christian or not. I have not engaged in "false accusation" in this thread. It is not 'rude' to tell the truth, except for the person who doesn't want to hear it.

#145 was addressed to Eric, not to KF/GEM.

Gregory
May 7, 2013 at 4:11 PM PDT
Gregory, you now have a choice: return to civil behaviour by making amends, or please leave, as one who has been insistently and rudely disruptive and has made irresponsible false accusations, as can be seen in your opening words at 4 above. GEM of TKI

kairosfocus
May 7, 2013 at 2:57 PM PDT
CR: Thanks for the positive discussion. That, too, is important, as a demonstration of reasoned and reasonable interaction on important matters. I should add some logical framing on the inference to cause:

1 --> Consider a match aflame. It illustrates the nature of cause: absent any of heat, fuel and oxidiser, it will not light, or will go out.

2 --> These factors (I implicitly include the heat-generating chain reaction under "fuel," for simplicity) are jointly sufficient, and each is necessary. A fire, as a contingent entity, is dependent on one or more enabling, necessary factors like that.

3 --> We may generalise. Something that is contingent on such factors requires sufficient external conditions to begin or be sustained. Any sufficient set of factors will meet at least all necessary factors. And since such factors can be on/off, present/absent, that which begins to exist or may cease from existing is CAUSED.

4 --> Another way of looking at this is to see such a thing as existing in some possible worlds, and not existing in others.

5 --> Thus, we see things which are conceivable in our minds in vague terms but are actually impossible, like a square circle. What would be required for it to exist is such that all the proposed necessary factors cannot be met under the same circumstances (e.g., squareness AND circularity).

6 --> There is another possibility: a being that has no dependence on enabling, external factors and which is not impossible; a necessary being.

7 --> The truth in 2 + 3 = 5 is an example: it has no beginning, cannot fail to hold, and cannot cease from being so. (This leads down the road to understanding the nature of God.) A necessary being exists in all possible worlds. This leads to the point that a serious candidate will be either possible or impossible (something like the spaghetti monster or a pink unicorn is a composite entity, and is not a serious candidate). If such a candidate is possible, i.e. not impossible, then, being so in every possible world, it is actual.

8 --> Now, let us consider causal factors of interest to science. When certain circumstances are present and a regular, predictable pattern reliably appears, we see mechanical necessity. This is exemplified by how a dropped, heavy object near Earth's surface reliably falls, pulled by a gravitational field of 9.8 N/kg.

9 --> Ever since statistical mechanics arose, we have recognised that situations of high contingency arise: results vary significantly under closely similar initial conditions. If we drop a die, due to sensitive dependencies and amplifications, the outcome is stochastically distributed. We speak of chance contingency.

10 --> Likewise, we observe choice contingency, with intelligent behaviour, as in composing this post. One may compose one way -- or any way one chooses.

11 --> Obviously, we infer intelligent cause when we have positive reason to do so, and chance otherwise, where there is high contingency. Low contingency is the signature of mechanical necessity.

12 --> In short, we have two successive dichotomies of causal factors, per a world of experience: LO/HI contingency, then chance as second default save where there is positive reason, per reliable tested sign, to infer design.

========

That is why the refusal to accept design as a serious candidate when that is not convenient is so revealing. Especially when the posts that make such objections are themselves manifestations of how, reliably, design gives rise to FSCO/I. Ironically. KF

kairosfocus
May 7, 2013 at 2:45 PM PDT
G: You have an issue of a false accusation to resolve in order to return to the province of the civil. If you refuse to do so, kindly leave this thread, and do not return until you make amends. KF

______________

"design is the default." - Patrick

"Patrick has it exactly backwards." - Eric

So what? 'Default is the design'? Universal designism. THE 'design inference' (gulp, IDism) and THE 'explanatory filter' (have another gulp). No other options! Truth has been revealed to Eric in a 'scientific' theory called 'IDism'! Eric demonstrates why only "a little knowledge," which in this case is also dangerous, has led him to become vulnerable to IDism. Does Eric actually think he is not an ideologue for IDism? He seems to openly embrace being an IDist, based on his participation at UD. If Chance is a reflexive human being, his 'explanation' for it would likely be entertaining! ;)

Gregory
May 7, 2013 at 2:19 PM PDT
Eric @143, exactly. Design is not the default; chance-and-necessity is. I would like to see an actual explanation of "design is default." That would be entertaining. It might go something like this: "Anytime you guys see something that looks designed, you assume it was designed, period." Even that would be incorrect, since material causation takes precedence over design in ID reasoning.

Chance Ratcliff
May 7, 2013 at 2:06 PM PDT
Patrick via Joe @136:
That’s a good summary of Dembski’s Explanatory Filter, but it highlights its fatal flaw, namely that design is the default.
Patrick's comment demonstrates why a little knowledge (in this case, his limited understanding of the design inference) is a dangerous thing. The explanatory filter does not say that we have to conclusively prove necessity or chance in order to avoid design. Quite the contrary. The explanatory filter assumes chance or necessity, unless there is good reason to exclude them. Add to that the positive evidence for design that Joe alluded to, and then we have a reliable and reasonable design inference. Patrick has it exactly backwards.

Eric Anderson
May 7, 2013 at 2:00 PM PDT
KF @140, yes, I think you are correct. It's likely an intentional tactic. I do think it's possible for one to be deluded and to exhibit cognitive dissonance, but in many of these cases it seems like willful misrepresentation -- a political "contest" where truth is not at issue, only power. By the way, thanks for your response at #131, for your clarifying remarks regarding the warrant to infer causes from the signs of their effects, and for the included material on inductive reasoning. :)

Chance Ratcliff
May 7, 2013 at 1:46 PM PDT
Petrushka via Joe,
"If you are going to assert design by means other than evolution you need to demonstrate that it is possible."
In other words, default to evolution. We don't need to establish that evolution is capable, only that design is -- even though, from an empirical evidentiary point of view, the evidence favors design.

So we have a system, say a prokaryotic self-replicator. It exhibits FSCO/I in the form of specified and irreducible complexity. It contains embedded digital codes, which specify protein machinery requiring sequence arrangements well beyond the UPB by orders of magnitude. It has manufacturing and synthesis systems, error correction, transport and signalling infrastructure, and so on. Each of these things at least has analogs to systems that are known to be designed, such as computers and machinery, if they are not the same by definition. In no case can this organism or its subsystems be explained by material processes; but they do actually have design features, or the inarguable appearance of design.

However, we are supposed to accept that we must credit unguided evolution, which cannot produce any positive empirical evidence for a viable mechanism, instead of intelligence, which has shown proficiency in engineering such systems, because (a) we cannot prove that an intelligent designer could actually engineer life from scratch; and (b) we cannot prove that it's impossible for unguided evolution to accomplish such a feat.

It's hard to imagine a weaker position than insisting that we must prove a thing impossible or otherwise accept its virtually unlimited power. If the strength of these arguments is any indication of the confidence in the materialist position, these guys are not very secure in their beliefs. They just have nowhere else to go.

Chance Ratcliff
May 7, 2013 at 1:39 PM PDT
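(For readers new to the acronym: UPB is Dembski's universal probability bound, commonly stated as 10^80 x 10^45 x 10^25 = 10^150 -- the estimated number of elementary particles in the observable universe, times a Planck-time-scale bound of 10^45 state transitions per second, times a generous 10^25 seconds of cosmic history. On this view, events of probability below 1 in 10^150, roughly 500 bits of specified information, are beyond the reach of chance.)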
CR: Sadly, this is a well-known agit-prop tactic: the turn-speech, or twisted, turned-about accusation. By projecting fault onto the other side, the now-accused side finds itself defending under a cloud of suspicion (in a context where there is always a tendency to think or feel that an accusation may well be true). So, there is confusion and polarisation. In a context like this, where the error has been patiently, repeatedly corrected, we are looking in too many cases at speech in willful defiance of duties of care to truth and soundness. BTW, that flowchart has been in use since about 2009, so one would think that by now it would be understandable to those who want to understand. KF

kairosfocus
May 7, 2013 at 1:22 PM PDT
Joe @136, I really don't understand this "design is default" meme that gets repeated constantly as if it's some refutation of design arguments. Ironically, ID is getting saddled with items from the baggage of materialism.

In other words, chance and necessity are the defaults, not design. Design is only considered for objects of investigation which exhibit FSCO/I. Everything else is, by default, attributable to material causes. According to KF's explanatory filter diagram, if the item is contingent, and the item is complex and specified, infer design. Otherwise, default to material causation. In other words, only consider design in special cases. This is the opposite of a default.

In your formulation, material causes get first crack at the explanation. If they cannot account for the phenomenon, then we check to see if design features are present. If they are not, keep the question open. This too defaults to material causation.

This "design is default" mantra has things exactly backwards. Design isn't even the default for things which give the appearance of design. With regard to living systems, ID allows material explanations the first shot at explanation. If they can account for the phenomenon empirically, we don't infer design. This is the opposite of "default".

It's difficult to attribute sincerity to the people who advance such obvious misrepresentations. I'd prefer to think that this is born of confusion or lack of understanding; but I find my credulity to be strained here.

Chance Ratcliff
May 7, 2013 at 1:09 PM PDT
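The order of defaults Chance describes just above can be summarized in a minimal sketch (hypothetical predicate names; an illustration of the decision flow as described in the thread, not an implementation of any published algorithm):

```python
def explanatory_filter(contingent: bool, complex_: bool, specified: bool) -> str:
    """Return the inferred causal category per the flow described above."""
    if not contingent:
        return "necessity"   # first default: law-like regularity
    if not (complex_ and specified):
        return "chance"      # second default: high contingency, nothing more
    return "design"          # inferred only in the special case

# Design is reached only when both defaults fail to apply:
print(explanatory_filter(contingent=False, complex_=False, specified=False))  # necessity
print(explanatory_filter(contingent=True,  complex_=False, specified=False))  # chance
print(explanatory_filter(contingent=True,  complex_=True,  specified=True))   # design
```

The branch order is the point at issue in the exchange: necessity and chance are consulted first, so design is the last-resort, non-default inference.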